Accuracy of inference on the physics of binary evolution from gravitational-wave observations

Gravitational-wave astronomy lets us observe binary black holes. These systems, being made up of two black holes, are pretty difficult to study by any other means. It has long been argued that with this new information we can unravel the mysteries of stellar evolution. Just as a palaeontologist can discover how long-dead animals lived from their bones, we can discover how massive stars lived by studying their black hole remnants. In this paper, we quantify how much we can really learn from this black hole palaeontology—after 1000 detections, we should pin down some of the most uncertain parameters in binary evolution to a few percent precision.

Life as a binary

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics of which we are currently most uncertain:

  • Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
  • Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion that both stars live in a single fluffy envelope. Orbiting inside the envelope helps drag the two stars closer together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
  • Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low: there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

We use parameters describing potential variations in these properties as ingredients for the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question now is: which parameters affect our gravitational-wave measurements, and how accurately can we measure those which do?

Merger rate with redshift and chirp mass

Binary black hole merger rate at three different redshifts z as calculated by COMPAS. We show the rate in 30 different chirp mass bins for our default population parameters. The caption gives the total rate for all masses. Figure 2 of Barrett et al. (2018)

Gravitational-wave observations

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distributions of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries which have a long inspiral, but is less well measured for higher mass systems. In reality we’ll have much more information, so these results should be the minimum we can actually do.
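For reference, the chirp mass is a simple function of the two component masses. A minimal sketch (the example masses are illustrative, roughly GW150914-like, and not taken from the paper):

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1 + m2)**(1/5), same units as m1, m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Roughly GW150914-like component masses (in solar masses)
print(chirp_mass(36.0, 29.0))  # ≈ 28.1
```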

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct we are more likely to get results similar to its predictions than further away, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember that it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

\displaystyle F_{ij} = -\left\langle\frac{\partial^2\ln \mathcal{L}(\mathcal{D}|\left\{\lambda\right\})}{\partial \lambda_i \partial\lambda_j}\right\rangle,

where \mathcal{L}(\mathcal{D}|\left\{\lambda\right\}) is the likelihood for data \mathcal{D} (the number of observations and their chirp mass distribution in our case), \left\{\lambda\right\} are our parameters (natal kick, etc.), and the angular brackets indicate an average over possible realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance in a normal distribution) for the parameters \left\{\lambda\right\}. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].
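As a toy illustration of the definition (my own sketch, not from the paper), consider estimating the mean of a Gaussian with known width \sigma from N samples. Analytically the Fisher information is F = N/\sigma^2, and 1/\sqrt{F} = \sigma/\sqrt{N} is the familiar standard error of the mean:

```python
import numpy as np

# Toy Fisher information: the mean mu of a Gaussian with known width sigma.
# Analytically, d^2 lnL/dmu^2 = -N/sigma^2, so F = N/sigma^2.
sigma, N = 2.0, 1000
rng = np.random.default_rng(42)
x = rng.normal(0.0, sigma, size=N)

def lnL(mu):
    return -0.5 * np.sum((x - mu) ** 2) / sigma ** 2

# Numerical second derivative (for a quadratic lnL this is exact)
h = 1e-3
F = -(lnL(h) - 2 * lnL(0.0) + lnL(-h)) / h ** 2
print(F, N / sigma ** 2)                  # both ≈ 250
print((1 / F) ** 0.5, sigma / N ** 0.5)   # lower bound on the uncertainty, ≈ 0.063
```

Here the second derivative happens not to depend on the data, so the average in the definition is trivial; in general you would average over many simulated realisations.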

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these measurements.

Results

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Correlations between population parameters

Fisher information matrix estimates for fractional measurement precision of the four population parameters: the black hole natal kick \sigma_\mathrm{kick}, the common envelope efficiency \alpha_\mathrm{CE}, the Wolf–Rayet mass loss rate f_\mathrm{WR}, and the luminous blue variable mass loss rate f_\mathrm{LBV}. There is an anticorrelation between f_\mathrm{WR} and \alpha_\mathrm{CE}, and hints of a similar anticorrelation between f_\mathrm{LBV} and \alpha_\mathrm{CE}. We show 1500 different realisations of the binary population to give an idea of scatter. Figure 6 of Barrett et al. (2018)

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

Standard deviation of measurements of population parameters

Measurement precision for the four population parameters after 1000 detections. We quantify the precision with the standard deviation estimated from the Fisher information matrix. We show results from 1500 realisations of the population to give an idea of scatter. Figure 5 of Barrett et al. (2018)

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.
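The square-root scaling is easy to check (a quick sketch; the 2% starting point is the approximate figure quoted above):

```python
# Fractional precision improves as 1/sqrt(N_obs)
n0, sigma0 = 1000, 0.02  # ~2% after 1000 detections (approximate figure from above)
for n in (1000, 4000, 16000):
    print(n, sigma0 * (n0 / n) ** 0.5)  # 0.02, 0.01, 0.005
```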

arXiv: 1711.06287 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 477(4):4685–4695; 2018
Favourite dinosaur: Professor Science

Bonus notes

Channel selection

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

Kick distribution

We model the supernova kicks v_\mathrm{kick} as following a Maxwell–Boltzmann distribution,

\displaystyle p(v_\mathrm{kick}) = \sqrt{\frac{2}{\pi}}  \frac{v_\mathrm{kick}^2}{\sigma_\mathrm{kick}^3} \exp\left(\frac{-v_\mathrm{kick}^2}{2\sigma_\mathrm{kick}^2}\right),

where \sigma_\mathrm{kick} is the unknown population parameter. The natal kick received by the black hole v^*_\mathrm{kick} is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

v^*_\mathrm{kick} = (1-f_\mathrm{fb})v_\mathrm{kick},

where f_\mathrm{fb} is the fraction that falls back, taken from Fryer et al. (2012). The fraction is greater for larger black holes, so the biggest black holes get no kicks. This means that the largest black holes are unaffected by the value of \sigma_\mathrm{kick}.
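This kick prescription is quick to sample (my own sketch; the \sigma_\mathrm{kick} and fallback values below are illustrative choices, not the paper’s):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_kick = 265.0  # km/s; illustrative value, of the order used for neutron stars

# A Maxwell-Boltzmann speed is the magnitude of an isotropic 3D Gaussian velocity
v_kick = np.linalg.norm(rng.normal(0.0, sigma_kick, size=(100_000, 3)), axis=1)

# Reduce the kick by the fallback fraction; f_fb -> 1 for the biggest black
# holes, so they receive no kick at all
f_fb = 0.9  # illustrative value for a fairly massive black hole
v_star = (1.0 - f_fb) * v_kick

print(v_kick.mean())  # ≈ sigma_kick * sqrt(8/pi) ≈ 423 km/s
```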

The likelihood

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the merger rate \mu and the probability of falling in each chirp mass bin p_k (we factor measurement uncertainty into this). Our observations are the total number of detections N_\mathrm{obs} and the number in each chirp mass bin c_k (N_\mathrm{obs} = \sum_k c_k). The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

\mathcal{L} = \mathcal{L}_\mathrm{rate} \times \mathcal{L}_\mathrm{mass}.

For the rate likelihood, we need the probability of observing N_\mathrm{obs} given the predicted rate \mu. This is given by a Poisson distribution,

\displaystyle \mathcal{L}_\mathrm{rate} = \exp(-\mu t_\mathrm{obs}) \frac{(\mu t_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!},

where t_\mathrm{obs} is the total observing time. For the chirp mass likelihood, we need the probability of getting a number of detections in each bin, given the predicted fractions. This is given by a multinomial distribution,

\displaystyle \mathcal{L}_\mathrm{mass} = \frac{N_\mathrm{obs}!}{\prod_k c_k!} \prod_k p_k^{c_k}.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.
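In code, the log-likelihood is only a few lines; a minimal sketch (the bin probabilities and counts below are made up for illustration, only the 484 per year rate comes from the text above):

```python
from math import lgamma, log

def ln_likelihood(mu, p, c, t_obs):
    """ln of the Poisson (rate) times multinomial (chirp mass bin) likelihood."""
    N_obs = sum(c)
    ln_rate = -mu * t_obs + N_obs * log(mu * t_obs) - lgamma(N_obs + 1)
    ln_mass = (lgamma(N_obs + 1) - sum(lgamma(ck + 1) for ck in c)
               + sum(ck * log(pk) for ck, pk in zip(c, p) if ck > 0))
    return ln_rate + ln_mass

# Predicted rate of 484/yr (the default COMPAS model); two illustrative mass bins
print(ln_likelihood(mu=484.0, p=[0.7, 0.3], c=[700, 300], t_obs=1000 / 484))
```

Note that the two factorial terms cancel when the pieces are added, which is part of why the logarithm tidies things up.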

When we substitute our likelihood into the expression for the Fisher information matrix, we get

\displaystyle F_{ij} = \mu t_\mathrm{obs} \left[ \frac{1}{\mu^2} \frac{\partial \mu}{\partial \lambda_i} \frac{\partial \mu}{\partial \lambda_j}  + \sum_k\frac{1}{p_k} \frac{\partial p_k}{\partial \lambda_i} \frac{\partial p_k}{\partial \lambda_j} \right].

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix is defined in terms of second derivatives. The expected number of events is \langle N_\mathrm{obs} \rangle = \mu t_\mathrm{obs}. Therefore, we can see that the measurement uncertainty, defined by the inverse of the Fisher information matrix, scales on average as N_\mathrm{obs}^{-1/2}.
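This expression is straightforward to evaluate once the first derivatives of \mu and p_k with respect to the population parameters are in hand (e.g. from finite differencing population synthesis outputs). A sketch with made-up derivative values (these numbers are purely illustrative, not the paper’s):

```python
import numpy as np

def fisher_matrix(mu, dmu, p, dp, t_obs):
    """F_ij = mu*t_obs * [ dmu_i*dmu_j/mu**2 + sum_k dp_ik*dp_jk/p_k ].

    dmu: derivatives of the rate w.r.t. each parameter, shape (n_params,)
    dp:  derivatives of the bin probabilities, shape (n_params, n_bins)
    """
    rate_term = np.outer(dmu, dmu) / mu ** 2
    mass_term = (dp / p) @ dp.T
    return mu * t_obs * (rate_term + mass_term)

# Illustrative numbers: two parameters, three chirp mass bins
mu, t_obs = 484.0, 2.07
dmu = np.array([-100.0, 50.0])
p = np.array([0.5, 0.3, 0.2])
dp = np.array([[0.1, -0.05, -0.05],    # rows sum to zero since sum_k p_k = 1
               [-0.2, 0.1, 0.1]])

F = fisher_matrix(mu, dmu, p, dp, t_obs)
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))  # Cramer-Rao lower bounds
print(sigmas)
```

The off-diagonal terms of the inverse encode the correlations (like the \alpha_\mathrm{CE}–f_\mathrm{WR} anticorrelation discussed above).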

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, so that the shape of the posterior is dictated by the shape of the likelihood.

Interpretation of the Fisher information matrix

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn’t be the maximum). The maximum likelihood values of N_\mathrm{obs} = \mu t_\mathrm{obs} and c_k = N_\mathrm{obs} p_k are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large N_\mathrm{obs}. As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of N_\mathrm{obs} \rightarrow \infty, you’ll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our N_\mathrm{obs} = 1000 was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.

 

Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignments

Gravitational waves allow us to infer the properties of binary black holes (two black holes in orbit about each other), but can we use this information to figure out how the black holes and the binary form? In this paper, we show that measurements of the black holes’ spins can help us figure this out, but probably not until we have at least 100 detections.

Black hole spins

Black holes are described by their masses (how much they bend spacetime) and their spins (how much they drag spacetime to rotate about them). The orientation of the spins relative to the orbit of the binary could tell us something about the history of the binary [bonus note].

We considered four different populations of spin–orbit alignments to see if we could tell them apart with gravitational-wave observations:

  1. Aligned—matching the idealised example of isolated binary evolution. This stands in for the case where misalignments are small, which might be the case if material blown off during a supernova ends up falling back and being swallowed by the black hole.
  2. Isotropic—matching the expectations for dynamically formed binaries.
  3. Equal misalignments at birth—this would be the case if the spins and orbit were aligned before the second supernova, which then tilted the plane of the orbit. (As the binary inspirals, the spins wobble around, so the two misalignment angles won’t always be the same).
  4. Both spins misaligned by supernova kicks, assuming that the stars were aligned with the orbit before exploding. This gives a more general scatter of unequal misalignments, but typically the primary (bigger and first forming) black hole is more misaligned.

These give a selection of possible spin alignments. For each, we assumed that the spin magnitude was the same and had a value of 0.7. This seemed like a sensible idea when we started this study [bonus note], but is now towards the upper end of what we expect for binary black holes.

Hierarchical analysis

To measure the properties of the population we need to perform a hierarchical analysis: there are two layers of inference, one for the individual binaries, and one for the population.

From a gravitational wave signal, we infer the properties of the source using Bayes’ theorem. Given the data d_\alpha, we want to know the probability that the parameters \mathbf{\Theta}_\alpha have different values, which is written as p(\mathbf{\Theta}_\alpha|d_\alpha). This is calculated using

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha) p(\mathbf{\Theta}_\alpha)}{p(d_\alpha)},

where p(d_\alpha | \mathbf{\Theta}_\alpha) is the likelihood, which we can calculate from our knowledge of the noise in our gravitational wave detectors, p(\mathbf{\Theta}_\alpha) is the prior on the parameters (what we would have guessed before we had the data), and the normalisation constant p(d_\alpha) is called the evidence. We’ll use the evidence again in the next layer of inference.
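The structure of Bayes’ theorem can be seen in a two-hypothesis toy example (the numbers here are invented for illustration):

```python
# Discrete illustration of Bayes' theorem: posterior = likelihood * prior / evidence.
# Hypothetical two-value parameter (e.g. "spin aligned" vs "misaligned").
prior = {"aligned": 0.5, "misaligned": 0.5}
likelihood = {"aligned": 0.8, "misaligned": 0.2}  # made-up p(data | theta)

evidence = sum(likelihood[t] * prior[t] for t in prior)  # p(d), the normaliser
posterior = {t: likelihood[t] * prior[t] / evidence for t in prior}
print(posterior)  # {'aligned': 0.8, 'misaligned': 0.2}
```

With a flat prior the posterior simply tracks the likelihood; a different prior would shift it, which is the point made below about correcting for the choice of prior.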

Our prior on the parameters should actually depend upon what we believe about the astrophysical population. It is different if we believed that Model 1 were true (when we’d only consider aligned spins) than for Model 2. Therefore, we should really write

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha, \lambda) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha,\lambda) p(\mathbf{\Theta}_\alpha|\lambda)}{p(d_\alpha|\lambda)},

where \lambda denotes which model we are considering.

This is an important point to remember: if you are using our LIGO results to test your theory of binary formation, you need to remember to correct for our choice of prior. We try to pick non-informative priors—priors that don’t make strong assumptions about the physics of the source—but this doesn’t mean that they match what would be expected from your model.

We are interested in the probability distribution for the different models: how many binaries come from each. Given a set of different observations \{d_\alpha\}, we can work this out using another application of Bayes’ theorem (yay)

\displaystyle p(\mathbf{\lambda}|\{d_\alpha\}) = \frac{p(\{d_\alpha\} | \mathbf{\lambda}) p(\mathbf{\lambda})}{p(\{d_\alpha\})},

where p(\{d_\alpha\} | \mathbf{\lambda}) is just all the evidences for the individual events (given that model) multiplied together, p(\mathbf{\lambda}) is our prior for the different models, and p(\{d_\alpha\}) is another normalisation constant.
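In code, this population layer is just a product of per-event evidences, best handled in logs for numerical stability. A sketch with randomly generated (entirely hypothetical) evidences:

```python
import numpy as np

# Hypothetical per-event evidences p(d_alpha | model m): 50 events, 4 models
rng = np.random.default_rng(1)
Z = rng.random((50, 4))

# p({d} | model) is the product of per-event evidences; sum the logs instead
ln_joint = np.log(Z).sum(axis=0) + np.log(0.25)  # flat prior over the 4 models

# Normalise to get the posterior probability of each model
ln_joint -= ln_joint.max()  # subtract the max before exponentiating, for stability
posterior = np.exp(ln_joint) / np.exp(ln_joint).sum()
print(posterior)  # four probabilities summing to 1
```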

Now that we know how to go from a set of observations to the probability distribution for the different channels, let’s give it a go!

Results

To test our approach, we made a set of mock gravitational wave measurements. We generated signals from binaries for each of our four models, and analysed these as we would for real signals (using LALInference). This is rather computationally expensive, and we wanted a large set of events to analyse, so using these results as a guide, we created a larger catalogue of approximate distributions for the inferred source parameters p(\mathbf{\Theta}_\alpha|d_\alpha). We then fed these through our hierarchical analysis. The GIF below shows how measurements of the fraction of binaries from each population tighten up as we get more detections: the true fraction is marked in blue.

Fraction of binaries from each of the four models

Probability distribution for the fraction of binaries from each of our four spin misalignment populations for different numbers of observations. The blue dot marks the true fraction: an equal fraction from all four channels.

The plot shows that we do zoom in towards the true fraction of events from each model as the number of events increases, but there are significant degeneracies between the different models. Notably, it is difficult to tell apart Models 1 and 3, as both have strong support for both spins being nearly aligned. Similarly, there is a degeneracy between Models 2 and 4 as both allow for the two spins to have very different misalignments (and for the primary spin, which is the better measured one, to be quite significantly misaligned).

This means that we should be able to distinguish aligned from misaligned populations (we estimated that as few as 5 events would be needed to distinguish the case that all events came from either Model 1 or Model 2 if those were the only two allowed possibilities). However, it will be more difficult to distinguish different scenarios which only lead to small misalignments from each other, or disentangle whether there is significant misalignment due to big supernova kicks or because binaries are formed dynamically.

The uncertainty of the fraction of events from each model scales roughly with the square root of the number of observations, so it may be slow progress making these measurements. I’m not sure whether we’ll know the answer to how binary black hole form, or who will sit on the Iron Throne first.

arXiv: 1703.06873 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 471(3):2801–2811; 2017
Birmingham science summary: Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignment (by Simon)
If you like this you might like: Farr et al. (2017), Talbot & Thrane (2017), Vitale et al. (2017), Trifirò et al. (2016), Minogue (2000)

Bonus notes

Spin misalignments and formation histories

If you have two stars forming in a binary together, you’d expect them to be spinning in roughly the same direction, rotating the same way as they go round in their orbit (like our Solar System). This is because they all formed from the same cloud of swirling gas and dust. Furthermore, if two stars are to form a black hole binary that we can detect gravitational waves from, they need to be close together. This means that there can be tidal forces which gently tug the stars to align their rotation with the orbit. As they get older, stars puff up, meaning that if you have a close-by neighbour, you can share outer layers. This transfer of material will tend to align the rotation too. Adding this all together, if you have an isolated binary of stars, you might expect that when they collapse down to become black holes, their spins are aligned with each other and the orbit.

Unfortunately, real astrophysics is rarely so clean. Even if the stars were initially rotating the same way as each other, that doesn’t mean that their black hole remnants will do the same. This depends upon how the star collapses. Massive stars explode as supernovae, blasting off their outer layers while their cores collapse down to form black holes. Escaping material could carry away angular momentum, meaning that the black hole is spinning in a different direction to its parent star, or material could be blasted off asymmetrically, giving the new black hole a kick. This would change the plane of the binary’s orbit, misaligning the spins.

Alternatively, the binary could be formed dynamically. Instead of two stars living their lives together, we could have two stars (or black holes) come close enough together to form a binary. This is likely to happen in regions where there’s a high density of stars, such as a globular cluster. In this case, since the binary has been randomly assembled, there’s no reason for the spins to be aligned with each other or the orbit. For dynamically assembled binaries, all spin–orbit misalignments are equally probable.

Slow and steady

This project was led by Simon Stevenson. It was one of the first things we started working on at the beginning of his PhD. He has now graduated, and is off to start a new exciting life as a postdoc in Australia. We got a little distracted by other projects, most notably analysing the first detections of gravitational waves. Simon spent a lot of time developing the COMPAS population code, a code to simulate the evolution of binaries. Looking back, it’s impressive how far he’s come. This paper used a simple approximation to estimate the masses of our black holes: we called it the Post-it note model, as we wrote it down on a single Post-it. Now Simon’s writing papers including the complexities of common-envelope evolution in order to explain LIGO’s actual observations.

A black hole Pokémon

The world is currently going mad for Pokémon Go, so it seems like the perfect time to answer the most burning of scientific questions: what would a black hole Pokémon be like?

Black hole Pokémon

Type: Dark/Ghost

Black holes are, well, black. Their gravity is so strong that if you get close enough, nothing, not even light, can escape. I think that’s about as dark as you can get!

After picking Dark as a primary type, I thought Ghost was a good secondary type, since black holes could be thought of as the remains of dead stars. This also fit well with black holes not really being made of anything—they are just warped spacetime—and so are ethereal in nature. Of course, black holes’ properties are grounded in general relativity and not the supernatural.

In the games, having a secondary type has another advantage: Dark types are weak against Fighting types. In reality, punching or kicking a black hole is a Bad Idea™: it will not damage the black hole, but will certainly cause you some difficulties. However, Ghost types are unaffected by Fighting-type moves, so our black hole Pokémon doesn’t have to worry about them.

Height: 0’04″/0.1 m

Real astrophysical black holes are probably a bit too big for Pokémon games.  The smallest Pokémon are currently the electric bug Joltik and fairy Flabébé, so I’ve made our black hole Pokémon the same size as these. It should comfortably fit inside a Pokéball.

Measuring the size of a black hole is actually rather tricky, since they curve spacetime. When talking about the size of a black hole, we normally think in terms of the Schwarzschild radius. Named after Karl Schwarzschild, who first calculated the spacetime of a black hole (although he didn’t realise that at the time), the Schwarzschild radius corresponds to the event horizon (the point of no return) of a non-spinning black hole. It’s rather tricky to measure the distance to the centre of a black hole, so really the Schwarzschild radius gives an idea of the circumference (the distance around the edge) of the event horizon: this is 2π times the Schwarzschild radius. We’ll take the height to really mean twice the Schwarzschild radius (which would be the Schwarzschild diameter, if that were actually a thing).
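The corresponding mass is easy to reproduce from r_s = 2GM/c^2 (a sketch; constants are rounded):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # kg

def schwarzschild_radius(m_kg):
    """r_s = 2 G M / c**2, the event-horizon radius of a non-spinning black hole."""
    return 2 * G * m_kg / c ** 2

# A 0.1 m "height" (Schwarzschild diameter) means r_s = 0.05 m; invert for the mass
m = 0.05 * c ** 2 / (2 * G)
print(m, m / M_earth)  # ≈ 3.4e25 kg, roughly 6 Earth masses
```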

Weight: 7.5 × 10^25 lbs/3.4 × 10^25 kg

Although we made our black hole pocket-sized, it is monstrously heavy. The mass is for a black hole of the size we picked, and it is about 6 times that of the Earth. That’s still quite small for a black hole (it’s 3.6 million times less massive than the black hole that formed from GW150914’s coalescence). With this mass, our Pokémon would have a significant effect on the tides as it would quickly suck in the Earth’s oceans. Still, Pokémon doesn’t need to be too realistic.

Our black hole Pokémon would be by far the heaviest Pokémon, despite being one of the smallest. The heaviest Pokémon currently is the continent Pokémon Primal Groudon. This is 2,204.4 lbs/999.7 kg, so about 34,000,000,000,000,000,000,000 times lighter.

Within the games, having such a large weight would make our black hole Pokémon vulnerable to Grass Knot, a move which trips a Pokémon. The heavier the Pokémon, the more it is hurt by the falling over, so the more damage Grass Knot does. In the case of our Pokémon, when it trips it’s not so much that it hits the ground, but that the Earth hits it, so I think it’s fair that this hurts.

Gender: Unknown

Black holes are beautifully simple, they are described just by their mass, spin and electric charge. There’s no other information you can learn about them, so I don’t think there’s any way to give them a gender. I think this is rather fitting as the sun-like Solrock is also genderless, and it seems right that stars and black holes share this.

Ability: Sticky Hold
Hidden ability: Soundproof

Sticky Hold prevents a Pokémon’s item from being taken. (I’d expect wild black hole Pokémon to be sometimes found holding Stardust, from stars they have consumed). Due to their strong gravity, it is difficult to remove an object that is orbiting a black hole. A common misconception is that it is impossible to escape the pull of a black hole: this is only true once you cross the event horizon (if you replaced the Sun with a black hole of the same mass, the Earth would happily continue on its orbit as if nothing had happened).

Soundproof is an ability that protects Pokémon from sound-based moves. I picked it as a reference to sonic (or acoustic) black holes. These are black hole analogues—systems which mimic some of the properties of black holes. A sonic black hole can be made in a fluid which flows faster than its speed of sound. When this happens, sound can no longer escape this rapidly flowing region (it just gets swept away), just like light can’t escape from the event horizon of a regular black hole.

Sonic black holes are fun, because you can make them in the lab. You can then use them to study the properties of black holes—there is much excitement about possibly observing the equivalent of Hawking radiation. Predicted by Stephen Hawking (as you might guess), Hawking radiation is emitted by black holes, and could cause them to evaporate away (if they didn’t absorb more than they emit). Hawking radiation has never been observed from proper black holes, as it is very weak. However, finding the equivalent for sonic black holes might be enough to get Hawking his Nobel Prize…

Moves:

Start — Gravity
Start — Crunch

The starting two moves are straightforward. Gravity is the force which governs black holes; it is gravity which pulls material in and causes the collapse  of stars. I think Crunch neatly captures the idea of material being squeezed down by intense gravity.

Level 16 — Vacuum Wave

Vacuum Wave sounds like a good description of a gravitational wave: it is a ripple in spacetime. Black holes (at least when in a binary) are great sources of gravitational waves (as GW150914 and GW151226 have shown), so this seems like a sensible move for our Pokémon to learn—although I may be biased. Why at level 16? Because Einstein first predicted gravitational waves from his theory of general relativity in 1916.

Level 18 — Discharge

Black holes can have an electric charge, so our Pokémon should learn an Electric-type move. Charged black holes can have some weird properties. We don’t normally worry about charged black holes for two reasons. First, charged black holes are difficult to make: stuff is usually neutral overall, so you don’t get a lot of similarly charged material in one place that can collapse down, and even if you did, it would quickly attract the opposite charge to neutralise itself. Second, if you did manage to make a charged black hole, it would quickly lose its charge: the strong electric and magnetic fields about the black hole would lead to the creation of charged particles that would neutralise it. Discharge seems like a good move to describe this process.

Why level 18? The mathematical description of charged black holes was worked out by Hans Reissner and Gunnar Nordström; the second of their papers was published in 1918.

Level 19 — Light Screen

In general relativity, gravity bends spacetime. It is this warping that causes objects to move along curved paths (like the Earth orbiting the Sun). Light is affected in the same way and gets deflected by gravity, which is called gravitational lensing. This was the first experimental test of general relativity. In 1919, Arthur Eddington led an expedition to measure the deflection of light around the Sun during a solar eclipse.

Black holes, having strong gravity, can strongly lens light. The graphics from the movie Interstellar illustrate this beautifully. Below you can see how the image of the disc orbiting the black hole is distorted. The back of the disc is visible above and below the black hole! If you look closely, you can also see a bright circle inside the disc, close to the black hole’s event horizon. This is known as the light ring. It is where the path of light gets so bent, that it can orbit around and around the black hole many times. This sounds like a Light Screen to me.

Black hole and light bending

Light-bending around the black hole Gargantua in Interstellar. The graphics use proper simulations of black holes, but they did fudge a couple of details to make it look extra pretty. Credit: Warner Bros./Double Negative.

Level 29 — Dark Void
Level 36 — Hyperspace Hole
Level 62 — Shadow Ball

These are the three moves with the most black hole-like names. Dark Void might be “black hole” after a couple of goes through Google Translate. Hyperspace Hole might be a good name for one of the higher dimensional black holes theoreticians like to play around with. (I mean, they like to play with the equations, not the actual black holes, as you’d need more than a pair of safety mittens for that). Shadow Ball captures the idea that a black hole is a three-dimensional volume of space, not just a plug-hole for the Universe. Non-rotating black holes are spherical (rotating ones bulge out at the middle, as I guess many of us do), so “ball” fits well, but they aren’t actually the shadow of anything, so it falls apart there.

I’ve picked the levels to be the masses of the two black holes which inspiralled together to produce GW150914, measured in units of the Sun’s mass, and the mass of the black hole that resulted from their merger. There’s some uncertainty on these measurements, so it would be OK if the moves were learnt a few levels either way.

Level 63 — Whirlpool
Level 63 — Rapid Spin

When gas falls into a black hole, it often spirals around and forms an accretion disc. You can see an artistic representation of one in the image from Interstellar above. The gas swirls around like water going down the drain, making Whirlpool an apt move. As it orbits, the gas closer to the black hole moves quicker than that further away. Different layers rub against each other, and, just like when you rub your hands together on a cold morning, they heat up. One of the ways we look for black holes is by spotting the X-rays emitted by these hot discs.

As the material spirals into a black hole, it spins it up. If a black hole swallows enough things that were all orbiting the same way, it can end up rotating extremely quickly. Therefore, I thought our black hole Pokémon should learn Rapid Spin at the same time as Whirlpool.

I picked level 63, as the solution for a rotating black hole was worked out by Roy Kerr in 1963. Schwarzschild found the solution for a non-spinning black hole soon after Einstein worked out the details of general relativity in 1915, and the solution for a charged black hole came just after, but there was a long gap before Kerr’s breakthrough. It was some quite cunning maths! (The solution for a rotating charged black hole was quickly worked out after this, in 1965).

Level 77 — Hyper Beam

Another cool thing about discs is that they could power jets. As gas sloshes around towards a black hole, magnetic fields can get tangled up. This leads to some of the material being blasted outwards along the axis of the field. We’ve seen some immensely powerful jets of material, like the one below, and it’s difficult to imagine anything other than a black hole that could create such high energies! Important work on this was done by Roger Blandford and Roman Znajek in 1977, which is why I picked the level. Hyper Beam is no exaggeration in describing these jets.

Galaxy-scale radio jets

Jets from Centaurus A are bigger than the galaxy itself! This image is a composite of X-ray (blue), microwave (orange) and visible light. You can see the jets pushing out huge bubbles above and below the galaxy. We think the jets are powered by the galaxy’s central supermassive black hole. Credit: ESO/WFI/MPIfR/APEX/NASA/CXC/CfA/A.Weiss et al./R.Kraft et al.

After using Hyper Beam, a Pokémon must recharge for a turn. It’s an exhausting move. A similar thing may happen with black holes. If they accrete a lot of stuff, the radiation produced by the infalling material blasts away other gas and dust, cutting off the black hole’s supply of food. Black holes in the centres of galaxies may go through cycles of feeding, with discs forming, blowing away the surrounding material, and then a new disc forming once everything has settled down. This link between the black hole and its environment may explain why we see a trend between the size of supermassive black holes and the properties of their host galaxies.

Level 100 — Spacial Rend
Level 100 — Roar of Time

To finish off, since black holes are warped spacetime, a space move and a time move. Relativity says that space and time are two aspects of the same thing, so these need to be learnt together.

It’s rather tricky to imagine space and time being linked. Wibbly-wobbly, timey-wimey, spacey-wacey stuff quickly gets befuddling. If you imagine just two space dimensions (forwards/backwards and left/right), then you can see how to change one to the other by just rotating. If you turn to face a different way, you can mix what was left to become forwards, or to become a bit of right and a bit of forwards. Black holes sort of do the same thing with space and time. Normally, we’re used to the fact that we are definitely travelling forwards in time, but if you stray beyond the event horizon of a black hole, you’re definitely travelling towards the centre of the black hole in the same inescapable way. Black holes are the masters when it comes to manipulating space and time.

There we have it: we can now sleep easy knowing what a black hole Pokémon would be like. Well, almost—we still need to come up with a name. Something resembling a pun would be traditional. Suggestions are welcome. The next games in the series are Pokémon Sun and Pokémon Moon. Perhaps with this space theme Nintendo might consider a black hole Pokémon too?

The Boxing Day Event

Advanced LIGO’s first observing run (O1) got off to an auspicious start with the detection of GW150914 (The Event to its friends). O1 was originally planned to be three months long (September to December), but after the first discovery, there were discussions about extending the run. No major upgrades to the detectors were going to be done over the holidays anyway, so it was decided that we might as well leave them running until January.

By the time the Christmas holidays came around, I was looking forward to some time off. And, of course, lots of good food and the Doctor Who Christmas Special. The work on the first detection had been exhausting, and the Collaboration reached the collective decision that we should all take some time off [bonus note]. Not a creature was stirring, not even a mouse.

On Boxing Day, there was a sudden flurry of emails. This could only mean one thing. We had another detection! Merry GW151226 [bonus note]!

A Christmas gift

I assume someone left out milk and cookies at the observatories. A not too subtle hint from Nutsinee Kijbunchoo’s comic in the LIGO Magazine.

I will always be amazed at how lucky we were detecting GW150914. It could easily have been missed if we had started observing just a little later. If that had happened, we might not have considered extending O1, and would have missed GW151226 too!

GW151226 is another signal from a binary black hole coalescence. This wasn’t too surprising at the time, as we had estimated such signals should be pretty common. It did, however, cause a slight wrinkle in discussions of what to do in the papers about the discovery of GW150914. Should we mention that we had another potential candidate? Should we wait until we had analysed the whole of O1 fully? Should we pack it all in and have another slice of cake? In the end we decided that we shouldn’t delay the first announcement, and we definitely shouldn’t rush the analysis of the full data set. Therefore, we went ahead with the original plan of just writing about the first month of observations and giving slightly awkward answers, mumbling about still having data to analyse, when asked if we had seen anything else [bonus note]. I’m not sure how many people outside the Collaboration suspected.

The science

What have we learnt from analysing GW151226, and what have we learnt from the whole of O1? We’ve split our results into two papers.

0. The Boxing Day Discovery Paper

Title: GW151226: Observation of gravitational waves from a 22-solar-mass binary black hole
arXiv: 1606.04855 [gr-qc]
Journal: Physical Review Letters; 116(24):241103(14)
LIGO science summary: GW151226: Observation of gravitational waves from a 22 solar-mass binary black hole (by Hannah Middleton and Carl-Johan Haster)

This paper presents the discovery of GW151226 and some of the key information about it. GW151226 is not as loud as GW150914—you can’t spot it by eye in the data—but it still stands out in our search. This is a clear detection! It is another binary black hole system, but it is a lower mass system than GW150914 (hence the paper’s title—it’s a shame they couldn’t put in the error bars though).

This paper summarises the highlights of the discovery, so below, I’ll explain these without going into too much technical detail.

More details: The Boxing Day Discovery Paper summary

1. The O1 Binary Black Hole Paper

Title: Binary black hole mergers in the first Advanced LIGO observing run
arXiv: 1606.04856 [gr-qc]
Journal: Physical Review X; 6(4):041015(36)
Posterior samples: Release v1.0

This paper brings together (almost) everything we’ve learnt about binary black holes from O1. It discusses GW150914, LVT151012 and GW151226, and what we are starting to piece together about stellar-mass binary black holes from this small family of gravitational-wave events.

For the announcement of GW150914, we put together 12 companion papers to go out with the detection. This paper takes on that role. It is Robin, Dr Watson, Hermione and Samwise Gamgee combined. There’s a lot of delicious science packed into this paper (searches, parameter estimation, tests of general relativity, merger rate estimation, and astrophysical implications). In my summary below, I’ll delve into what we have done and what our results mean.

The results of this paper have now largely been updated in the O2 Catalogue Paper.

More details: The O1 Binary Black Hole Paper summary

If you are interested in our science results, you can find data releases accompanying the events at the LIGO Open Science Center. These pages also include some wonderful tutorials to play with.

The Boxing Day Discovery Paper

Synopsis: Boxing Day Discovery Paper
Read this if: You are excited about the discovery of GW151226
Favourite part: We’ve done it again!

The signal

GW151226 is not as loud as GW150914, and you can’t spot it by eye in the data. Therefore, this paper spends a little more time than GW150914’s Discovery Paper talking about the ingredients for our searches.

GW151226 was found by two pipelines which specifically look for compact binary coalescences: the inspiral and merger of neutron stars or black holes. We have templates for what we think these signals should look like, and we filter the data against a large bank of these to see what matches [bonus note].

For the search to work, we do need accurate templates. Figuring out what the waveforms for binary black hole coalescences should look like is a difficult job, and has taken almost as long as figuring out how to build the detectors!

The signal arrived at Earth at 03:38:53 GMT on 26 December 2015 and was first identified by a search pipeline within 70 seconds. We didn’t have a rapid templated search online at the time of GW150914, but decided it would be a good idea afterwards. This allowed us to send out an alert to our astronomer partners so they could look for any counterparts (I don’t think any have been found [bonus note]).

The unmodelled searches (those which don’t use templates, but just look for coherent signals in both detectors) which first found GW150914 didn’t find GW151226. This isn’t too surprising, as they are less sensitive. You can think of the templated searches as looking for Wally (or Waldo if you’re North American) using the knowledge that he’s wearing glasses and a red-and-white striped bobble hat, whereas the unmodelled searches are looking for him just knowing that he’s the person that’s on every page.

GW151226 is the second most significant event in the search for binary black holes after The Event. Its significance is not quite off the charts, but is great enough that we have a hard time calculating exactly how significant it is. Our two search pipelines give estimates of the p-value (the probability you’d see something at least this signal-like if you only had noise in your detectors) of < 10^{-7} and 3.5 \times 10^{-6}, which are pretty good!

The source

To figure out the properties of the source, we ran our parameter-estimation analysis.

GW151226 comes from a black hole binary with masses of 14.2^{+8.3}_{-3.7} M_\odot and 7.5^{+2.3}_{-2.3} M_\odot [bonus note], where M_\odot is the mass of our Sun (about 330,000 times the mass of the Earth). The error bars indicate our 90% probability ranges on the parameters. These black holes are less massive than the source of GW150914 (the more massive black hole is similar to the less massive black hole of LVT151012). However, the masses are still above what we believe is the maximum possible mass of a neutron star (around 3 M_\odot). The masses are similar to those observed for black holes in X-ray binaries, so perhaps these black holes are all part of the same extended family.

A plot showing the probability distributions for the masses is shown below. It makes me happy. Since GW151226 is lower mass than GW150914, we see more of the inspiral, the portion of the signal where the two black holes are spiralling towards each other. This means that we measure the chirp mass, a particular combination of the two masses, really well. It is this which gives the lovely banana shape to the distribution. Even though I don’t really like bananas, it’s satisfying to see this behaviour, as this is what we have been expecting to see!
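If you’re curious, the chirp mass has a simple closed form, \mathcal{M} = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}. A quick sketch (the function name is mine, and I’ve plugged in the median masses quoted above):

```python
# Chirp mass: the combination of the component masses that is best
# measured from the inspiral part of the signal.
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Median component masses for GW151226, in solar masses.
print(round(chirp_mass(14.2, 7.5), 1))  # 8.9 solar masses
```

The curve of constant chirp mass through these values is exactly the banana you see in the plot.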

Binary black hole masses

Estimated masses for the two black holes in the binary of the Boxing Day Event. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which again made me happy! The two-dimensional distribution follows a curve of constant chirp mass. The sharp cut-off at the top-left is because m_1^\mathrm{source} is defined to be bigger than m_2^\mathrm{source}. Figure 3 of The Boxing Day Discovery Paper.

The two black holes merge to form a final black hole of 20.8^{+6.1}_{-1.7} M_\odot [bonus note].

If you add up the initial binary masses and compare this to the final mass, you’ll notice that something is missing. Across the entire coalescence, gravitational waves carry away 1.0^{+0.1}_{-0.2} M_\odot c^2 \simeq 1.8^{+0.2}_{-0.4} \times 10^{47}~\mathrm{J} of energy (where c is the speed of light, which is used to convert masses to energies). This isn’t quite as impressive as the energy of GW150914, but it would take the Sun 1000 times the age of the Universe to output that much energy.
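As a back-of-the-envelope check on those numbers (a sketch with approximate values for the constants):

```python
# E = Δm c²: converting the radiated mass into joules.
M_SUN = 1.989e30   # solar mass in kg (approximate)
C = 2.998e8        # speed of light in m/s

delta_m = 1.0      # solar masses radiated by GW151226
energy = delta_m * M_SUN * C ** 2
print(f"{energy:.1e} J")  # ~1.8e+47 J, matching the value above
```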

The mass measurements from GW151226 are cool, but what’s really exciting is the spin measurements. Spin, as you might guess, is a measure of how much angular momentum a black hole has. We define it to go from zero (not spinning) to one (spinning as much as is possible). A black hole is fully described by its mass and spin. The black hole masses are most important in defining what a gravitational wave looks like, but the imprint of spin is more subtle. Therefore it’s more difficult to get a good measurement of the spins than the masses.

For GW150914 and LVT151012, we get a little bit of information on the spins. We can conclude that the spins are probably not large, or at least they are not large and aligned with the orbit of the binary. However, we can’t say for certain that we’ve seen any evidence that the black holes are spinning. For GW151226, at least one of the black holes (although we can’t say which) has to be spinning [bonus note].

The plot below shows the probability distribution for the two spins of the binary black holes. This shows both the magnitude of the spin and its direction (if the tilt is zero, the black hole and the binary’s orbit both go around the same way). You can see we can’t say much about the spin of the lower mass black hole, but we have a good idea about the spin of the more massive black hole (the more extreme the mass ratio, the less important the spin of the lower mass black hole is, making it more difficult to measure). Hopefully we’ll learn more about spins in future detections, as these could tell us something about how these black holes formed.

Orientation and magnitudes of the two spins

Estimated orientation and magnitude of the two component spins. Calculated with our precessing waveform model. The distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Part of Figure 4 of The Boxing Day Discovery Paper.

There’s still a lot to learn about binary black holes, and future detections will help with this. More information about what we can squeeze out of our current results are given in the O1 Binary Black Hole Paper.

The O1 Binary Black Hole Paper

Synopsis: O1 Binary Black Hole Paper
Read this if: You want to know everything we’ve learnt about binary black holes
Favourite part: The awesome table of parameters at the end

This paper contains too much science to tackle all at once, so I’ve split it up into more bite-sized pieces, roughly following the flow of the paper. First we discuss how we find signals. Then we discuss the parameters inferred from the signals. This is done assuming that general relativity is correct, so we check for any deviations from predictions in the next section. After that, we consider the rate of mergers and what we expect for the population of binary black holes from our detections. Finally, we discuss our results in the context of wider astrophysics.

Searches

Looking for signals hidden amongst the data is the first thing to do. This paper only talks about the template search for binary black holes: other search results (including those for binaries containing neutron stars) will be reported elsewhere.

The binary black hole search was previously described in the Compact Binary Coalescence Paper. We have two pipelines which look for binary black holes using templates: PyCBC and GstLAL. These look for signals which are found in both detectors (within 15 ms of each other) and which match waveforms in the template bank. A few specifics of these have been tweaked since the start of O1, but these don’t really change any of the results. An overview of the details for both pipelines is given in Appendix A of the paper.

The big difference from the Compact Binary Coalescence Paper is the data. We are now analysing the whole of O1, and we are using an improved version of the calibration (although this really doesn’t affect the search). Search results are given in Section II. We have one new detection: GW151226.

Search results and GW150914, GW151226 and LVT151012

Search results for PyCBC (left) and GstLAL (right). The histograms show the number of candidate events (orange squares) compared to the background. The further an orange square is to the right of the lines, the more significant it is. Different backgrounds are shown including and excluding GW150914 (top row) and GW151226 (bottom row). Figure 3 from the O1 Binary Black Hole Paper.

The plots above show the search results. Candidates are ranked by a detection statistic (a signal-to-noise ratio modified by a self-consistency check \hat{\rho}_c for PyCBC, and a ratio of likelihoods for the signal and noise hypotheses \ln \mathcal{L} for GstLAL). A larger detection statistic means something is more signal-like, and we assess the significance by comparing with the background of noise events. The further above the background curve an event is, the more significant it is. We have three events that stand out.
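The idea of ranking against a background can be sketched in a few lines (a toy illustration only, not the real pipelines, which build their backgrounds from time-shifted data; the numbers here are made up):

```python
# Toy significance estimate: the p-value is the fraction of background
# (noise-only) events with a detection statistic at least as large as
# the candidate's.
def empirical_p_value(candidate, background):
    louder = sum(1 for b in background if b >= candidate)
    return louder / len(background)

background = [5.1, 6.3, 5.8, 7.0, 6.1, 5.5, 6.7, 5.9]  # fake statistics
print(empirical_p_value(6.5, background))  # 0.25
```

A candidate louder than every background event gets an empirical p-value of zero, which is why the very loudest events have significances that are hard to pin down exactly.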

Number 1 is GW150914. Its significance has increased a little from the first analysis, as we can now compare it against more background data. If we accept that GW150914 is real, we should remove it from the estimation of the background: this gives us the purple background in the top row, and the black curve in the bottom row.

GW151226 is the second event. It clearly stands out when zooming in for the second row of plots. Identifying GW150914 as a signal greatly improves GW151226’s significance.

The final event is LVT151012. Its significance hasn’t changed much since the initial analysis, and is still below our threshold for detection. I’m rather fond of it, as I do love an underdog.

Parameter estimation

To figure out the properties of all three events, we do parameter estimation. This was previously described in the Parameter Estimation Paper. Our results for GW150914 and LVT151012 have been updated as we have rerun with the newer calibration of the data. The new calibration has less uncertainty, which improves the precision of our results, although this is really only significant for the sky localization. Technical details of the analysis are given in Appendix B and results are discussed in Section IV. You may recognise the writing style of these sections.

The probability distributions for the masses are shown below. There is quite a spectrum, from the low mass GW151226, which is consistent with measurements of black holes in X-ray binaries, up to GW150914, which contains the biggest stellar-mass black holes ever observed.

All binary black hole masses

Estimated masses for the two binary black holes for each of the events in O1. The contours mark the 50% and 90% credible regions. The grey area is excluded from our convention that m_1^\mathrm{source} \geq m_2^\mathrm{source}. Part of Figure 4 of the O1 Binary Black Hole Paper.

The distributions for the lower mass GW151226 and LVT151012 follow the curves of constant chirp mass. The uncertainty is greater for LVT151012 as it is a quieter (lower SNR) signal. GW150914 looks a little different, as the merger and ringdown portions of the waveform are more important. These place tighter constraints on the total mass, explaining the shape of the distribution.

Another difference between the lower mass inspiral-dominated signals and the higher mass GW150914 can be seen in the plot below. It shows the probability distributions for the mass ratio q = m_2^\mathrm{source}/m_1^\mathrm{source} and the effective spin parameter \chi_\mathrm{eff}, which is a mass-weighted combination of the spins aligned with the orbital angular momentum. Both play similar parts in determining the evolution of the inspiral, so there are stretching degeneracies for GW151226 and LVT151012, but this isn’t the case for GW150914.

All mass ratios and effective spins

Estimated mass ratios q and effective spins \chi_\mathrm{eff} for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 4 of the O1 Binary Black Hole Paper.

If you look carefully at the distribution of \chi_\mathrm{eff} for GW151226, you can see that it doesn’t extend down to zero. You cannot have a non-zero \chi_\mathrm{eff} unless at least one of the black holes is spinning, so this clearly shows the evidence for spin.
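The definition makes this easy to see (a minimal sketch; the function and variable names are mine):

```python
# Effective spin: the mass-weighted sum of the spin components aligned
# with the orbital angular momentum (cos θ = 1 means fully aligned).
def chi_eff(m1, a1, cos_t1, m2, a2, cos_t2):
    return (m1 * a1 * cos_t1 + m2 * a2 * cos_t2) / (m1 + m2)

# With both spin magnitudes zero, chi_eff is exactly zero...
print(chi_eff(14.2, 0.0, 1.0, 7.5, 0.0, 1.0))  # 0.0
# ...so a non-zero chi_eff needs at least one spinning black hole.
print(round(chi_eff(14.2, 0.3, 1.0, 7.5, 0.0, 1.0), 2))  # 0.2
```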

The final masses of the remnant black holes are shown below. Each is around 5% less than the total mass of the binary which merged to form it, with the rest radiated away as gravitational waves.

All final masses and spins

Estimated masses M_\mathrm{f}^\mathrm{source} and spins a_\mathrm{f} of the remnant black holes for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 4 of the O1 Binary Black Hole Paper.

The plot also shows the final spins. These are much better constrained than the component spins as they are largely determined by the angular momentum of the binary as it merged. This is why the spins are all quite similar. To calculate the final spin, we use an updated formula compared to the one in the Parameter Estimation Paper. This now includes the effect of the components’ spin which isn’t aligned with the angular momentum. This doesn’t make much difference for GW150914 or LVT151012, but the change is slightly more for GW151226, as it seems to have more significant component spins.

The luminosity distance for the sources is shown below. We have large uncertainties because the luminosity distance is degenerate with the inclination. For GW151226 and LVT151012 this does result in some beautiful butterfly-like distance–inclination plots. For GW150914, the butterfly only has the face-off inclination wing (probably as a consequence of the signal being louder and the location of the source on the sky). The luminosity distances for GW150914 and GW151226 are similar. This may seem odd, because GW151226 is a quieter signal, but that is because it is also lower mass (and so intrinsically quieter).

All luminosity distances

Probability distributions for the luminosity distance of the source of each of the three events in O1. Part of Figure 4 of the O1 Binary Black Hole Paper.

Sky localization is largely determined by the time delay between the two observatories. This is one of the reasons that having a third detector, like Virgo, is an awesome idea. The plot below shows the localization relative to the Earth. You can see that each event has a localization that is part of a ring which is set by the time delay. GW150914 and GW151226 were seen by Livingston first (apparently there is some gloating about this), and LVT151012 was seen by Hanford first.

Sky localization relative to Earth.

Estimated sky localization relative to the Earth for each of the events in O1. The contours mark the 50% and 90% credible regions. H+ and L+ mark the locations of the two observatories. Part of Figure 5 of the O1 Binary Black Hole Paper.

Both GW151226 and LVT151012 are nearly overhead. This isn’t too surprising, as this is where the detectors are most sensitive, and so where we expect to make the most detections.

The improvement in the calibration of the data is most evident in the sky localization. For GW150914, the reduction in calibration uncertainty improves the localization by a factor of ~2–3! For LVT151012 it doesn’t make much difference because of its location and because it is a much quieter signal.

The map below shows the localization on the sky (actually where in the Universe the signal came from). The maps have rearranged themselves because of the Earth’s rotation (each event was observed at a different sidereal time).

Sky localization in equatorial coordinates

Estimated sky localization (in right ascension and declination) for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 5 of the O1 Binary Black Hole Paper.

We’re nowhere near localising sources to single galaxies, so we may never know exactly where these signals originated from.

Tests of general relativity

The Testing General Relativity Paper reported several results which compared GW150914 with the predictions of general relativity. Either happily or sadly, depending upon your point of view, it passed them all. In Section V of the paper, we now add GW151226 into the mix. (We don’t add LVT151012 as it’s too quiet to be much use).

A couple of the tests for GW150914 looked at the post-inspiral part of the waveform, looking at the consistency of mass and spin estimates, and trying to match the ringdown frequency. Since GW151226 is lower mass, we can’t extract any meaningful information from the post-inspiral portion of the waveform, and so it’s not worth repeating these tests.

However, the fact that GW151226 has such a lovely inspiral means that we can place some constraints on post-Newtonian parameters. We have lots and lots of cycles, so we are sensitive to any small deviations that arise during inspiral.

The plot below shows constraints on deviations for a set of different waveform parameters. A deviation of zero indicates the value in general relativity. The first four boxes (for parameters referred to as \varphi_i in the Testing General Relativity Paper) are parameters that affect the inspiral. The final box on the right is for parameters which impact the merger and ringdown. The top row shows results for GW150914; these are updated results using the improved calibrated data. The second row shows results for GW151226, and the bottom row shows what happens when you combine the two.

O1 testing general relativity bounds

Probability distributions for waveform parameters. The top row shows bounds from just GW150914, the second from just GW151226, and the third from combining the two. A deviation of zero is consistent with general relativity. Figure 6 from the O1 Binary Black Hole Paper.

All the results are happily about zero. There were a few outliers for GW150914, but these are pulled back in by GW151226. We see that GW151226 dominates the constraints on the inspiral parameters, but GW150914 is more important for the merger–ringdown \alpha_i parameters.

Again, Einstein’s theory passes the test. There is no sign of inconsistency (yet). It’s clear that adding more results greatly improves our sensitivity to these parameters, so these tests will continue to put general relativity through tougher and tougher tests.

Rates

We have a small number of events, around 2.9 in total, so any estimates of how often binary black holes merge will be uncertain. Of course, just because something is tricky, it doesn’t mean we won’t give it a go! The Rates Paper discussed estimates after the first 16 days of coincident data, when we had just 1.9 events. Appendix C gives technical details and Section VI discusses results.

The whole of O1 is about 52 days’ worth of coincident data. It’s therefore about 3 times as long as the initial stretch. In that time we’ve observed about 3/2 times as many events. Therefore, you might expect that the event rate is about 1/2 of our original estimates. If you did, get yourself a cookie, as you are indeed about right!
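This back-of-the-envelope scaling is easy to check; a rough sketch, treating the rate estimate as simply proportional to the number of events divided by the observing time:

```python
# A rough sketch: treat the rate estimate as proportional to the
# number of events divided by the observing time.
initial_time = 16.0      # days of coincident data in the initial stretch
total_time = 52.0        # days of coincident data in the whole of O1
initial_events = 1.9     # effective number of events in the initial stretch
total_events = 2.9       # effective number of events in the whole of O1

rate_scaling = (total_events / initial_events) / (total_time / initial_time)
print(f"Rates scale by a factor of about {rate_scaling:.2f}")  # about 0.47
```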

To calculate the rates we need to assume something about the population of binary black holes. We use three fiducial distributions:

  1. We assume that binary black holes are either like GW150914, LVT151012 or GW151226. This event-based rate is different from the previous one as it now includes an extra class for GW151226.
  2. A flat-in-the-logarithm-of-masses distribution, which we expect gives a sensible lower bound on the rate.
  3. A power-law distribution for the larger black hole with a slope of -2.35, which we expect gives a sensible upper bound on the rate.

We find that the rates are 1. 54^{+111}_{-40}~\mathrm{Gpc^{-3}\,yr^{-1}}, 2. 30^{+46}_{-21}~\mathrm{Gpc^{-3}\,yr^{-1}}, and 3. 97^{+149}_{-68}~\mathrm{Gpc^{-3}\,yr^{-1}}. As expected, the first rate is nestled between the other two.

Despite the rates being lower, there’s still a good chance we could see 10 events by the end of O2 (although that will depend on the sensitivity of the detectors).

A new result included with the rates is a simple fit for the distribution of black hole masses [bonus note]. The method is described in Appendix D. It’s just a repeated application of Bayes’ theorem to go from the masses we measured for the detected sources to the distribution of masses of the entire population.

We assume that the mass of the larger black hole is distributed according to a power law with index \alpha, and that the less massive black hole has a mass uniformly distributed in mass ratio, down to a minimum black hole mass of 5 M_\odot. The cut-off is the edge of a speculated mass gap between neutron stars and black holes.

We find that \alpha = 2.5^{+1.5}_{-1.6}. This has significant uncertainty, so we can’t say too much yet. This is a slightly steeper slope than used for the power-law rate (although entirely consistent with it), which would nudge the rates a little lower. The slope does fit in with fits to the distribution of masses in X-ray binaries. I’m excited to see how O2 will change our understanding of the distribution.
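For illustration, here is a sketch (not the Collaboration’s code) of how you might draw binaries from this sort of fitted population; the upper mass cut-off of 100 M_\odot is an assumption made for this example:

```python
import random

# A sketch of drawing binaries from the fitted population model: the
# larger mass follows a power law p(m1) proportional to m1^(-ALPHA), and
# the mass ratio q = m2/m1 is uniform, with both masses above M_MIN.
# The upper cut-off M_MAX is an assumption for this example.
ALPHA = 2.5     # power-law index inferred from O1
M_MIN = 5.0     # minimum black hole mass (solar masses)
M_MAX = 100.0   # assumed maximum mass for this sketch

def draw_binary(rng=random):
    # Inverse-CDF sampling of the power law on [M_MIN, M_MAX]
    a = 1.0 - ALPHA
    u = rng.random()
    m1 = (M_MIN**a + u * (M_MAX**a - M_MIN**a))**(1.0 / a)
    # Mass ratio drawn uniformly, restricted so m2 stays above M_MIN
    q = rng.uniform(M_MIN / m1, 1.0)
    return m1, q * m1

m1, m2 = draw_binary()
print(f"m1 = {m1:.1f}, m2 = {m2:.1f} solar masses")
```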

Astrophysical implications

With the announcement of GW150914, the Astrophysics Paper reviewed predictions for binary black holes in light of the discovery. The high masses of GW150914 indicated a low metallicity environment, perhaps no more than half of solar metallicity. However, we couldn’t tell if GW150914 came from isolated binary evolution (two stars which have lived and died together) or a dynamical interaction (probably in a globular cluster).

Since then, various studies have been performed looking at both binary evolution (Eldridge & Stanway 2016; Belczynski et al. 2016; de Mink & Mandel 2016; Hartwig et al. 2016; Inayoshi et al. 2016; Lipunov et al. 2016) and dynamical interactions (O’Leary, Meiron & Kocsis 2016; Mapelli 2016; Rodriguez et al. 2016), even considering binaries around supermassive black holes (Bartos et al. 2016; Stone, Metzger & Haiman 2016). We don’t have enough information to tell the two pathways apart. GW151226 gives some new information. Everything is reviewed briefly in Section VII.

GW151226 and LVT151012 are lower mass systems, and so don’t need to come from as low a metallicity environment as GW150914 (although they still could). Both are also consistent with either binary evolution or dynamical interactions. However, the low masses of GW151226 mean that it probably does not come from one particular binary formation scenario, chemically homogeneous evolution, and it is less likely to come from dynamical interactions.

Building up a population of sources, and getting better measurements of spins and mass ratios will help tease formation mechanisms apart. That will take a while, but perhaps it will be helped if we can do multi-band gravitational-wave astronomy with eLISA.

This section also updates predictions from the Stochastic Paper for the gravitational-wave background from binary black holes. There’s a small change from an energy density of \Omega_\mathrm{GW} = 1.1^{+2.7}_{-0.9} \times 10^{-9} at a frequency of 25 Hz to \Omega_\mathrm{GW} = 1.2^{+1.9}_{-0.9} \times 10^{-9}. This might be measurable after a few years at design sensitivity.

Conclusion

We are living in the future. We may not have hoverboards, but the era of gravitational-wave astronomy is here. Not in 20 years, not in the next decade, not in five more years, now. LIGO has not just opened a new window, it’s smashed the window and jumped through it just before the explosion blasts the side off the building. It’s so exciting that I can’t even get my metaphors straight. The introductory paragraphs of papers on gravitational-wave astronomy will never be the same again.

Although we were lucky to discover GW150914, it wasn’t just a fluke. Binary black hole coalescences aren’t that rare and we should be detecting more. Lots more. You know that scene in a movie where the heroes have defeated a wave of enemies and then the camera pans back to show the approaching horde that stretches to the horizon? That’s where we are now. O2 is coming. The second observing run will start later this year, and we expect we’ll be adding many entries to our list of binary black holes.

We’re just getting started with LIGO and Virgo. There’ll be lots more science to come.

If you made it this far, you deserve a biscuit. A fancy one too, not just a digestive.

Or, if you’re hungry for more, here are some blogs from my LIGO colleagues

  • Daniel Williams (a PhD student at University of Glasgow)
  • Matt Pitkin (who is hunting for continuous gravitational waves)
  • Shane Larson (who is also investigating multi-band gravitational-wave astronomy)
  • Amber Stuver (who works at the Livingston Observatory)

My group at Birmingham also made some short reaction videos (I’m too embarrassed to watch mine).

Bonus notes

Christmas cease-fire

In the run-up to the holidays, there were lots of emails that contained phrases like “will have to wait until people get back from holidays” or “can’t reply as the group are travelling and have family commitments”. No-one ever said that they were taking a holiday, but just that it was happening in general, so we’d all have to wait for a couple of weeks. No-one ever argued with this, because, of course, while you were waiting for other people to do things, there was nothing you could do, and so you might as well take some time off. And you had been working really hard, so perhaps an evening off and an extra slice of cake was deserved…

Rather guiltily, I must confess to ignoring the first few emails on Boxing Day. (Although I saw them, I didn’t read them for reasons of plausible deniability). I thought it was important that my laptop could have Boxing Day off. Thankfully, others in the Collaboration were more energetic and got things going straight-away.

Naming

Gravitational-wave candidates (or at least the short ones from merging binary black holes which we have detected so far), start off life named by a number in our database. This event started life out as G211117. After checks and further analysis, to make sure we can’t identify any environmental effects which could have caused the detector to misbehave, candidates are renamed. Those which are significant enough to be claimed as a detection get the Gravitational Wave (GW) prefix. Those we are less certain of get the LIGO–Virgo Trigger (LVT) prefix. The rest of the name is the date in Coordinated Universal Time (UTC). The new detection is GW151226.

Informally though, it is the Boxing Day Event. I’m rather impressed that this stuck as the Collaboration is largely US based: it was still Christmas Day in the US when the detection was made, and Americans don’t celebrate Boxing Day anyway.

Other searches

We are now publishing the results of the O1 search for binary black holes with a template bank which goes up to total observed binary masses of 100 M_\odot. We still have to do the same for the searches for everything else. The results from searches for other compact binaries should appear soon (binary neutron star and neutron star–black hole upper limits; intermediate mass black hole binary upper limits). It may be a while before we have all the results looking for continuous waves.

Matched filtering

The compact binary coalescence search uses matched filtering to hunt for gravitational waves. This is a well established technique in signal processing. You have a template signal, and you see how this correlates with the data. We use the detectors’ sensitivity to filter the data, so that we give more weight to frequencies where the detectors are sensitive, and little weight to frequencies where they are not.
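Here is a toy version of the idea; real searches work in the frequency domain and weight by the measured noise spectrum, but white noise keeps this sketch simple:

```python
import math
import random

# A toy matched filter: hide a known chirp-like template in noise that is
# point-by-point louder than the signal, then recover it by correlating
# the template with the data at every possible time offset.
random.seed(0)

def chirp(n=200):
    # Frequency sweeps upwards, crudely mimicking an inspiral
    return [math.sin(2 * math.pi * (0.05 * i + 0.0005 * i * i))
            for i in range(n)]

template = chirp()
noise_sigma = 1.5    # noise bigger than the unit-amplitude signal
offset = 500         # where the signal is hidden
data = [random.gauss(0.0, noise_sigma) for _ in range(1000)]
for i, h in enumerate(template):
    data[offset + i] += h

# Slide the template along the data; the normalisation makes pure noise
# give values of about 1
norm = noise_sigma * math.sqrt(sum(h * h for h in template))
snr = [sum(h * data[lag + i] for i, h in enumerate(template)) / norm
       for lag in range(len(data) - len(template))]

best = max(range(len(snr)), key=lambda lag: snr[lag])
print(best)    # the peak lands at (or very close to) the true offset of 500
```

Even though the signal is invisible by eye in the noisy data, the correlation peaks sharply where the template lines up, which is exactly how the search digs GW151226 out of the detector noise.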

I imagine matched filtering as similar to how I identify a piece of music: I hear a pattern of notes and try to compare to things I know. Dum-dum-dum-daah? Beethoven’s Fifth.

Filtering against a large number of templates takes a lot of computational power, so we need to be cunning as to which templates we include. We don’t want to miss anything, so we need enough templates to cover all possibilities, but signals from similar systems can look almost identical, so we just need one representative template included in the bank. Think of trying to pick out Under Pressure, you could easily do this with a template for Ice Ice Baby, and you don’t need both Mr Brightside and Ode to Joy.

It doesn’t matter if the search doesn’t pick out a template that perfectly fits the properties of the source, as this is what parameter estimation is for.

The figure below shows how effective matched filtering can be.

  • The top row shows the data from the two interferometers. It’s been cleaned up a little bit for the plot (to keep the experimentalists happy), but you can see that the noise in the detectors is seemingly much bigger than the best match template (shown in black, the same for both detectors).
  • The second row shows the accumulation of signal-to-noise ratio (SNR). If you correlate the data with the template, you see that it matches the template, and keeps matching the template. This is the important part: although at any moment it looks like there are just random wibbles in the detector, when you compare with a template you find that there is actually a signal which evolves in a particular way. The SNR increases until the signal stops (because the black holes have merged). It is a little lower in the Livingston detector as this was slightly less sensitive around the time of the Boxing Day Event.
  • The third row shows how much total SNR you would get if you moved the best match template around in time. There’s a clear peak. This is trying to show that the way the signal changes is important, and you wouldn’t get a high SNR when the signal isn’t there (you would normally expect it to be about 1).
  • The final row shows the amount of energy at a particular frequency at a particular time. Compact binary coalescences have a characteristic chirp, so you would expect a sweep from lower frequencies up to higher frequencies. You can just about make it out in these plots, but it’s not as obvious as for GW150914. This again shows the value of matched filtering, but it also shows that there’s no other weird glitchy stuff going on in the detectors at the time.
The effectiveness of matched filtering for GW151226

Observation of The Boxing Day Event in LIGO Hanford and LIGO Livingston. The top row shows filtered data and best match template. The second row shows how this template accumulates signal-to-noise ratio. The third row shows signal-to-noise ratio of this template at different end times. The fourth row shows a spectrogram of the data. Figure 1 of the Boxing Day Discovery Paper.

Electromagnetic and neutrino follow-up

Reports by electromagnetic astronomers on their searches for counterparts so far are:

Reports by neutrino astronomers are:

  • ANTARES and IceCube—a search for high-energy neutrinos (above 100 GeV) coincident with LVT151012 or GW151226.
  • KamLAND—a search for neutrinos (1.8 MeV to 111 MeV) coincident with GW150914, LVT151012 or GW151226.
  • Pierre Auger Observatory—a search for ultra high-energy (above 100 PeV) neutrinos coincident with GW150914, LVT151012 or GW151226.
  • Super-Kamiokande—a search for neutrinos (of a wide range of energies, from 3.5 MeV to 100 PeV) coincident with GW150914 or GW151226.
  • Borexino—a search for low-energy (250 keV to 15 MeV) neutrinos coincident with GW150914, GW151226 and GW170104.
  • NOvA—a search for neutrinos and cosmic rays (of a wide range of energies, from 10 MeV to over a GeV) coincident with all events from O1 and O2, plus triggers from O3.

No counterparts have been claimed, which isn’t surprising for a binary black hole coalescence.

Rounding

In various places, the mass of the smaller black hole is given as 8 M_\odot. The median should really round to 7 M_\odot, since to three significant figures it is 7.48 M_\odot. This really confused everyone though, as with rounding you’d have a binary with components of masses 14 M_\odot and 7 M_\odot and total mass 22 M_\odot. Rounding is a pain! Fortunately, 8 M_\odot lies well within the uncertainty: the 90% range is 5.2\text{--}9.8 M_\odot.

Black holes are massive

I tried to find a way to convert the mass of the final black hole into every day scales. Unfortunately, the thing is so unbelievably massive, it just doesn’t work: it’s no use relating it to elephants or bowling balls. However, I did have some fun looking up numbers. Currently, it costs about £2 to buy a 180 gram bar of Cadbury’s Bournville. Therefore, to buy an equivalent amount of dark chocolate would require everyone on Earth to save up for about 600 million times the age of the Universe (assuming GDP stays constant). By this point, I’m sure the chocolate will be past its best, so it’s almost certainly a big waste of time.

Maximum minimum spin

One of the statistics people really seemed to latch on to for the Boxing Day Event was that at least one of the binary black holes had to have a spin of greater than 0.2 with 99% probability. It’s a nice number for showing that we have a preference for some spin, but it can be a bit tricky to interpret. If we knew absolutely nothing about the spins, then we would have a uniform distribution on both spins. There’d be a 10% chance that the spin of the more massive black hole is less than 0.1, and a 10% chance that the spin of the other black hole is less than 0.1. Hence, there’s a 99% probability that there is at least one black hole with spin greater than 0.1, even though we have no evidence that the black holes are spinning (or not). Really, you need to look at the full probability distributions for the spins, and not just the summary statistics, to get an idea of what’s going on.
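You can check the prior part of this argument with a quick Monte Carlo:

```python
import random

# Monte Carlo check: with a prior uniform on both spins (between 0 and 1),
# how often does at least one black hole have a spin above 0.1?
random.seed(1)
trials = 100_000
at_least_one = sum(1 for _ in range(trials)
                   if max(random.random(), random.random()) > 0.1)
fraction = at_least_one / trials
print(fraction)    # close to 1 - 0.1 * 0.1 = 0.99
```

So a 99% probability of at least one spin above 0.1 is what the prior gives you for free; the data’s job is to push that threshold up to 0.2.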

Just one more thing…

The fit for the black hole mass distribution was the last thing to go in the paper. It was a bit frantic to get everything reviewed in time. In the last week, there were a couple of loud exclamations from the office next to mine, occupied by John Veitch, who as one of the CBC chairs has to keep everything and everyone organised. (I’m not quite sure how John still has so much of his hair). It seems that we just can’t stop doing science. There is a more sophisticated calculation in the works, but the foot was put down that we’re not trying to cram any more into the current papers.

Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes

I love collecting things, there’s something extremely satisfying about completing a set. I suspect that this is one of the alluring features of Pokémon—you’ve gotta catch ’em all. The same is true of black hole hunting. Currently, we know of stellar-mass black holes which are a few times the mass of our Sun, up to a few tens of the mass of our Sun (the black holes of GW150914 are the biggest yet to be observed), and we know of supermassive black holes, which are ten thousand to ten billion times the mass of our Sun. However, we are missing intermediate-mass black holes which lie in the middle. We have Charmander and Charizard, but where is Charmeleon? The elusive ones are always the most satisfying to capture.

Knitted black hole

Adorable black hole (available for adoption). I’m sure this could be a Pokémon. It would be a Dark type. Not that I’ve given it that much thought…

Intermediate-mass black holes have evaded us so far. We’re not even sure that they exist, although that would raise questions about how you end up with the supermassive ones (you can’t just feed the stellar-mass ones lots of rare candy). Astronomers have suggested that you could spot intermediate-mass black holes in globular clusters by the impact of their gravity on the motion of other stars. However, this effect would be small, and near impossible to conclusively spot. Another way (which I’ve discussed before) would be to look at ultraluminous X-ray sources, which could be from a disc of material spiralling into the black hole. However, it’s difficult to be certain that we understand the source properly and that we’re not misclassifying it. There could be one sure-fire way of identifying intermediate-mass black holes: gravitational waves.

The frequency of gravitational waves depends upon the mass of the binary. More massive systems produce lower frequencies. LIGO is sensitive to the right range of frequencies for stellar-mass black holes. GW150914 chirped up to the pitch of a guitar’s open B string (just below middle C). Supermassive black holes produce gravitational waves at too low a frequency for LIGO (a space-based detector would be perfect for these). We might just be able to detect signals from intermediate-mass black holes with LIGO.
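As a rough guide to the scaling, the gravitational-wave frequency at which the inspiral ends (taking the standard Schwarzschild innermost-stable-circular-orbit formula, f_\mathrm{ISCO} = c^3 / (6^{3/2} \pi G M)) is inversely proportional to the total mass:

```python
import math

# Gravitational-wave frequency at the Schwarzschild innermost stable
# circular orbit: f_ISCO = c^3 / (6^(3/2) * pi * G * M), which scales
# inversely with the total mass M of the binary.
G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
M_SUN = 1.989e30    # kg

def f_isco(total_mass_msun):
    return C**3 / (6**1.5 * math.pi * G * total_mass_msun * M_SUN)

print(f"{f_isco(65):.0f} Hz")    # a GW150914-like binary: about 68 Hz
print(f"{f_isco(1e6):.4f} Hz")   # a supermassive binary: millihertz, space-based territory
```

A binary containing a few-hundred-solar-mass black hole merges at tens of hertz, right at the bottom of LIGO’s band, which is why the low-frequency sensitivity matters so much here.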

In a recent paper, a group of us from Birmingham looked at what we could learn from gravitational waves from the coalescence of an intermediate-mass black hole and a stellar-mass black hole [bonus note].  We considered how well you would be able to measure the masses of the black holes. After all, to confirm that you’ve found an intermediate-mass black hole, you need to be sure of its mass.

The signals are extremely short: we can only detect the last bit of the two black holes merging together and settling down as a final black hole. Therefore, you might think there’s not much information in the signal, and we won’t be able to measure the properties of the source. We found that this isn’t the case!

We considered a set of simulated signals, and analysed these with our parameter-estimation code [bonus note]. Below are a couple of plots showing the accuracy to which we can infer a couple of different mass parameters for binaries of different masses. We show the accuracy of measuring the chirp mass \mathcal{M} (a much beloved combination of the two component masses which we are usually able to pin down precisely) and the total mass M_\mathrm{total}.
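The chirp mass has a simple closed form, \mathcal{M} = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5}; for example (using masses like those of GW151226, and an illustrative intermediate mass-ratio pairing):

```python
# Chirp mass for a couple of example systems (masses in solar masses)
def chirp_mass(m1, m2):
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

print(round(chirp_mass(14.2, 7.5), 1))    # a GW151226-like binary: 8.9
print(round(chirp_mass(100.0, 10.0), 1))  # an intermediate mass-ratio binary: 24.6
```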

Measurement of chirp mass

Measured chirp mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. The mass ratio q is the mass of the stellar-mass black hole divided by the mass of the intermediate-mass black hole. Figure 1 of Haster et al. (2016).

Measurement of total mass

Measured total mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. Figure 2 of Haster et al. (2016).

For the lower mass systems, we can measure the chirp mass quite well. This is because we get a little information from the part of the gravitational wave from when the two components are inspiralling together. However, we see less and less of this as the mass increases, and we become more and more uncertain of the chirp mass.

The total mass isn’t as accurately measured as the chirp mass at low masses, but we see that the accuracy doesn’t degrade at higher masses. This is because we get some constraints on its value from the post-inspiral part of the waveform.

We found that the transition from having better fractional accuracy on the chirp mass to having better fractional accuracy on the total mass happened when the total mass was around 200–250 solar masses. This was assuming final design sensitivity for Advanced LIGO. We currently don’t have as good sensitivity at low frequencies, so the transition will happen at lower masses: GW150914 is actually in this transition regime (the chirp mass is measured a little better).

Given our uncertainty on the masses, when can we conclude that there is an intermediate-mass black hole? If we classify black holes with masses more than 100 solar masses as intermediate mass, then we’ll be able to claim a discovery with 95% probability if the source has a black hole of at least 130 solar masses. The plot below shows our inferred probability of there being an intermediate-mass black hole as we increase the black hole’s mass (there’s little chance of falsely identifying a lower mass black hole).

Intermediate-mass black hole probability

Probability that the larger black hole is over 100 solar masses (our cut-off mass for intermediate-mass black holes M_\mathrm{IMBH}). Figure 7 of Haster et al. (2016).

Gravitational-wave observations could lead to a concrete detection of intermediate mass black holes if they exist and merge with another black hole. However, LIGO’s low frequency sensitivity is important for detecting these signals. If detector commissioning goes to plan and we are lucky enough to detect such a signal, we’ll finally be able to complete our set of black holes.

arXiv: 1511.01431 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 457(4):4499–4506; 2016
Birmingham science summary: Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes (by Carl)
Other collectables: Breakthrough, Gruber, Shaw, Kavli

Bonus notes

Jargon

The coalescence of an intermediate-mass black hole and a stellar-mass object (black hole or neutron star) has typically been known as an intermediate mass-ratio inspiral (an IMRI). This is similar to the name for the coalescence of a supermassive black hole and a stellar-mass object: an extreme mass-ratio inspiral (an EMRI). However, my colleague Ilya has pointed out that with LIGO we don’t really see much of the intermediate-mass black hole and the stellar-mass black hole inspiralling together; instead we see the merger and ringdown of the final black hole. Therefore, he prefers the name intermediate mass-ratio coalescence (or IMRAC). It’s a better description of the signal we measure, but the acronym isn’t as good.

Parameter-estimation runs

The main parameter-estimation analysis for this paper was done by Zhilu, a summer student. This is notable for two reasons. First, it shows that useful research can come out of a summer project. Second, our parameter-estimation code installed and ran so smoothly that even an undergrad with no previous experience could get some useful results. This made us optimistic that everything would work perfectly in the upcoming observing run (O1). Unfortunately, a few improvements were made to the code before then, and we were back to the usual level of fun in time for The Event.

GW150914—The papers

In 2015 I made a resolution to write a blog post for each paper I had published. In 2016 I’ll have to break this because there are too many to keep up with. A suite of papers were prepared to accompany the announcement of the detection of GW150914 [bonus note], and in this post I’ll give an overview of these.

The papers

As well as the Discovery Paper published in Physical Review Letters [bonus note], there are 12 companion papers. All the papers are listed below in order of arXiv posting. My favourite is the Parameter Estimation Paper.

Subsequently, we have produced additional papers on GW150914, describing work that wasn’t finished in time for the announcement. The most up-to-date results are currently given in the O2 Catalogue Paper.

0. The Discovery Paper

Title: Observation of gravitational waves from a binary black hole merger
arXiv:
 1602.03837 [gr-qc]
Journal:
 Physical Review Letters; 116(6):061102(16); 2016
LIGO science summary:
 Observation of gravitational waves from a binary black hole merger

This is the central paper that announces the observation of gravitational waves. There are three discoveries described here: (i) the direct detection of gravitational waves, (ii) the existence of stellar-mass binary black holes, and (iii) that the black holes and gravitational waves are consistent with Einstein’s theory of general relativity. That’s not too shabby in under 11 pages (if you exclude the author list). Coming 100 years after Einstein first published his prediction of gravitational waves and Schwarzschild published his black hole solution, this is the perfect birthday present.

More details: The Discovery Paper summary

1. The Detector Paper

Title: GW150914: The Advanced LIGO detectors in the era of first discoveries
arXiv:
 1602.03838 [gr-qc]
Journal: Physical Review Letters; 116(13):131103(12); 2016
LIGO science summary: GW150914: The Advanced LIGO detectors in the era of the first discoveries

This paper gives a short summary of how the LIGO detectors work and their configuration in O1 (see the Advanced LIGO paper for the full design). Giant lasers and tiny measurements, the experimentalists do some cool things (even if their paper titles are a little cheesy and they seem to be allergic to error bars).

More details: The Detector Paper summary

2. The Compact Binary Coalescence Paper

Title: GW150914: First results from the search for binary black hole coalescence with Advanced LIGO
arXiv:
 1602.03839 [gr-qc]
Journal: Physical Review D; 93(12):122003(21); 2016
LIGO science summary: How we searched for merging black holes and found GW150914

Here we explain how we search for binary black holes and calculate the significance of potential candidates. This is the evidence to back up (i) in the Discovery Paper. We can potentially detect binary black holes in two ways: with searches that use templates, or with searches that look for coherent signals in both detectors without assuming a particular shape. The first type is also used for neutron star–black hole or binary neutron star coalescences, collectively known as compact binary coalescences. This type of search is described here, while the other type is described in the Burst Paper.

This paper describes the compact binary coalescence search pipelines and their results. As well as GW150914 there is also another interesting event, LVT151012. This isn’t significant enough to be claimed as a detection, but it is worth considering in more detail.

More details: The Compact Binary Coalescence Paper summary

3. The Parameter Estimation Paper

Title: Properties of the binary black hole merger GW150914
arXiv:
 1602.03840 [gr-qc]
Journal: Physical Review Letters; 116(24):241102(19); 2016
LIGO science summary: The first measurement of a black hole merger and what it means

If you’re interested in the properties of the binary black hole system, then this is the paper for you! Here we explain how we do parameter estimation and how it is possible to extract masses, spins, location, etc. from the signal. These are the results I’ve been most heavily involved with, so I hope lots of people will find them useful! This is the paper to cite if you’re using our best masses, spins, distance or sky maps. The masses we infer are so large we conclude that the system must contain black holes, which is discovery (ii) reported in the Discovery Paper.

More details: The Parameter Estimation Paper summary

4. The Testing General Relativity Paper

Title: Tests of general relativity with GW150914
arXiv:
 1602.03841 [gr-qc]
Journal: Physical Review Letters; 116(22):221101(19); 2016
LIGO science summary:
 Was Einstein right about strong gravity?

The observation of GW150914 provides a new insight into the behaviour of gravity. We have never before probed such strong gravitational fields or such highly dynamical spacetime. These are the sorts of places you might imagine that we could start to see deviations from the predictions of general relativity. Aside from checking that we understand gravity, we also need to check to see if there is any evidence that our estimated parameters for the system could be off. We find that everything is consistent with general relativity, which is good for Einstein and is also discovery (iii) in the Discovery Paper.

More details: The Testing General Relativity Paper summary

5. The Rates Paper

Title: The rate of binary black hole mergers inferred from Advanced LIGO observations surrounding GW150914
arXiv:
 1602.03842 [astro-ph.HE]; 1606.03939 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 833(1):L1(8); 2016; Astrophysical Journal Supplement Series; 227(2):14(11); 2016
LIGO science summary: The first measurement of a black hole merger and what it means

Given that we’ve spotted one binary black hole (plus maybe another with LVT151012), how many more are out there and how many more should we expect to find? We answer this here, although there’s a large uncertainty on the estimates since we don’t know (yet) the distribution of masses for binary black holes.

More details: The Rates Paper summary

6. The Burst Paper

Title: Observing gravitational-wave transient GW150914 with minimal assumptions
arXiv: 1602.03843 [gr-qc]
Journal: Physical Review D; 93(12):122004(20); 2016

What can you learn about GW150914 without having to make the assumptions that it corresponds to gravitational waves from a binary black hole merger (as predicted by general relativity)? This paper describes and presents the results of the burst searches. Since the pipeline which first found GW150914 was a burst pipeline, it seems a little unfair that this paper comes after the Compact Binary Coalescence Paper, but I guess the idea is to first present results assuming it is a binary (since these are tightest) and then see how things change if you relax the assumptions. The waveforms reconstructed by the burst models do match the templates for a binary black hole coalescence.

More details: The Burst Paper summary

7. The Detector Characterisation Paper

Title: Characterization of transient noise in Advanced LIGO relevant to gravitational wave signal GW150914
arXiv: 1602.03844 [gr-qc]
Journal: Classical & Quantum Gravity; 33(13):134001(34); 2016
LIGO science summary:
How do we know GW150914 was real? Vetting a Gravitational Wave Signal of Astrophysical Origin
CQG+ post: How do we know LIGO detected gravitational waves? [featuring awesome cartoons]

Could GW150914 be caused by something other than a gravitational wave: are there sources of noise that could mimic a signal, or ways that the detector could be disturbed to produce something that would be mistaken for a detection? This paper looks at these problems and details all the ways we monitor the detectors and the external environment. We can find nothing that can explain GW150914 (and LVT151012) other than either a gravitational wave or a really lucky random noise fluctuation. I think this paper is extremely important to our ability to claim a detection and I’m surprised it’s not number 2 in the list of companion papers. If you want to know how thorough the Collaboration is in monitoring the detectors, this is the paper for you.

More details: The Detector Characterisation Paper summary

8. The Calibration Paper

Title: Calibration of the Advanced LIGO detectors for the discovery of the binary black-hole merger GW150914
arXiv:
 1602.03845 [gr-qc]
Journal: Physical Review D; 95(6):062003(16); 2017
LIGO science summary: Calibration of the Advanced LIGO detectors for the discovery of the binary black-hole merger GW150914

Completing the triumvirate of instrumental papers with the Detector Paper and the Detector Characterisation Paper, this paper describes how the LIGO detectors are calibrated. There are some cunning control mechanisms involved in operating the interferometers, and we need to understand these to quantify how they affect what we measure. Building a better model for calibration uncertainties is high on the to-do list for improving parameter estimation, so this is an interesting area to watch for me.

More details: The Calibration Paper summary

9. The Astrophysics Paper

Title: Astrophysical implications of the binary black-hole merger GW150914
arXiv: 1602.03846 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 818(2):L22(15); 2016
LIGO science summary: The first measurement of a black hole merger and what it means

Having estimated source parameters and rate of mergers, what can we say about astrophysics? This paper reviews results related to binary black holes to put our findings in context and also makes statements about what we could hope to learn in the future.

More details: The Astrophysics Paper summary

10. The Stochastic Paper

Title: GW150914: Implications for the stochastic gravitational wave background from binary black holes
arXiv: 1602.03847 [gr-qc]
Journal: Physical Review Letters; 116(13):131102(12); 2016
LIGO science summary: Background of gravitational waves expected from binary black hole events like GW150914

For every loud signal we detect, we expect that there will be many more quiet ones. This paper considers how many quiet binary black hole signals could add up to form a stochastic background. We may be able to see this background as the detectors are upgraded, so we should start thinking about what to do to identify it and learn from it.

More details: The Stochastic Paper summary

11. The Neutrino Paper

Title: High-energy neutrino follow-up search of gravitational wave event GW150914 with ANTARES and IceCube
arXiv: 1602.05411 [astro-ph.HE]
Journal: Physical Review D; 93(12):122010(15); 2016
LIGO science summary: Search for neutrinos from merging black holes

We are interested to see if there’s any other signal that coincides with a gravitational wave signal. We wouldn’t expect something to accompany a black hole merger, but it’s good to check. This paper describes the search for high-energy neutrinos. We didn’t find anything, but perhaps we will in the future (perhaps for a binary neutron star merger).

More details: The Neutrino Paper summary

12. The Electromagnetic Follow-up Paper

Title: Localization and broadband follow-up of the gravitational-wave transient GW150914
arXiv: 1602.08492 [astro-ph.HE]; 1604.07864 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 826(1):L13(8); 2016; Astrophysical Journal Supplement Series; 225(1):8(15); 2016

As well as looking for coincident neutrinos, we are also interested in electromagnetic observations (gamma-ray, X-ray, optical, infra-red or radio). We had a large group of observers interested in following up on gravitational wave triggers, and 25 teams have reported observations. This companion describes the procedure for follow-up observations and discusses sky localisation.

This work is split into a main article and a supplement which goes into more technical details.

More details: The Electromagnetic Follow-up Paper summary

The Discovery Paper

Synopsis: Discovery Paper
Read this if: You want an overview of The Event
Favourite part: The entire conclusion:

The LIGO detectors have observed gravitational waves from the merger of two stellar-mass black holes. The detected waveform matches the predictions of general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.

The Discovery Paper gives the key science results and is remarkably well written. It seems a shame to summarise it: you should read it for yourself! (It’s free).

The Detector Paper

Synopsis: Detector Paper
Read this if: You want a brief description of the detector configuration for O1
Favourite part: It’s short!

The LIGO detectors contain lots of cool pieces of physics. This paper briefly outlines them all: the mirror suspensions, the vacuum (the LIGO arms are the largest vacuum envelopes in the world and some of the cleanest), the mirror coatings, the laser optics and the control systems. A full description is given in the Advanced LIGO paper, but the specs there are for design sensitivity (it is also heavy reading). The main difference between the current configuration and that for design sensitivity is the laser power. Currently the circulating power in the arms is 100~\mathrm{kW}; the plan is to go up to 750~\mathrm{kW}. This will reduce shot noise, but raises all sorts of control issues, such as how to avoid parametric instabilities.

Noise curves

The noise amplitude spectral density. The curves for the current observations are shown in red (dark for Hanford, light for Livingston). This is around a factor 3 better than in the final run of initial LIGO (green), but still a factor of 3 off design sensitivity (dark blue). The light blue curve shows the impact of potential future upgrades. The improvement at low frequencies is especially useful for high-mass systems like GW150914. Part of Fig. 1 of the Detector Paper.

The Compact Binary Coalescence Paper

Synopsis: Compact Binary Coalescence Paper
Read this if: You are interested in detection significance or in LVT151012
Favourite part: We might have found a second binary black hole merger

There are two compact binary coalescence searches that look for binary black holes: PyCBC and GstLAL. Both match templates to the data from the detectors to look for anything binary-like; they then calculate the probability that such a match would happen by chance due to a random noise fluctuation (the false alarm probability or p-value [unhappy bonus note]). The false alarm probability isn’t the probability that there is a gravitational wave, but gives a good indication of how surprised we should be to find this signal if there wasn’t one. Here we report the results of both pipelines on the first 38.6 days of data (about 17 days where both detectors were working at the same time).

Both searches use the same set of templates to look for binary black holes [bonus note]. They look for where the same template matches the data from both detectors within a time interval consistent with the travel time between the two. However, the two searches rank candidate events and calculate false alarm probabilities using different methods. Basically, both searches use a detection statistic (the quantity used to rank candidates: higher means less likely to be noise) that is based on the signal-to-noise ratio (how loud the signal is) and a goodness-of-fit statistic. They assess the significance of a particular value of this detection statistic by calculating how frequently this would be obtained if there was just random noise (this is done by comparing data from the two detectors when there is not a coincident trigger in both). Consistency between the two searches gives us greater confidence in the results.

PyCBC’s detection statistic is a reweighted signal-to-noise ratio \hat{\rho}_c which takes into account the consistency of the signal in different frequency bands. You can get a large signal-to-noise ratio from a loud glitch, but this doesn’t match the template across a range of frequencies, which is why this test is useful. The consistency is quantified by a reduced chi-squared statistic. This is used, depending on its value, to weight the signal-to-noise ratio. When it is large (indicating inconsistency across frequency bins), the reweighted signal-to-noise ratio becomes smaller.
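
As a sketch (the exact tuning lives in the PyCBC search papers, so treat the exponents here as indicative rather than definitive), the reweighting looks something like:

```python
def reweighted_snr(snr, chisq_r):
    """Down-weight the matched-filter SNR when the reduced chi-squared says
    the power is not spread across frequency bands the way a real chirp
    would spread it. A sketch of the form used by the PyCBC search."""
    if chisq_r <= 1.0:
        return snr  # consistent with the template: no penalty
    return snr * ((1.0 + chisq_r ** 3) / 2.0) ** (-1.0 / 6.0)

# A loud glitch with poor frequency-band consistency is suppressed,
# while a well-fitting signal keeps its full signal-to-noise ratio.
print(reweighted_snr(20.0, 0.9))   # clean signal: stays at 20
print(reweighted_snr(20.0, 10.0))  # glitch-like: drops to ~7
```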

To calculate the background, PyCBC uses time slides. Data from the two detectors are shifted in time so that any coincidences can’t be due to a real gravitational wave. Seeing how often you get something signal-like then tells you how often you’d expect this to happen due to random noise.
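
A toy version of the time-slide idea (the trigger lists below are made up, and the real analysis slides by multiples of a time larger than the light travel time with far more careful bookkeeping):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trigger times (seconds) in each detector over an hour of data.
h1_triggers = np.sort(rng.uniform(0.0, 3600.0, 200))
l1_triggers = np.sort(rng.uniform(0.0, 3600.0, 200))

def count_coincidences(t1, t2, window=0.015):
    """Count trigger pairs closer in time than the window (~10 ms light
    travel time between the sites, plus a little padding)."""
    idx = np.searchsorted(t2, t1)
    n = 0
    for i, t in zip(idx, t1):
        for j in (i - 1, i):
            if 0 <= j < len(t2) and abs(t - t2[j]) < window:
                n += 1
    return n

# Background estimate: slide one detector's triggers by offsets much larger
# than the window, so nothing real can line up, and see how often accidental
# coincidences occur anyway.
slides = [count_coincidences(h1_triggers,
                             np.sort((l1_triggers + k * 1.0) % 3600.0))
          for k in range(1, 101)]
print("zero-lag coincidences:", count_coincidences(h1_triggers, l1_triggers))
print("mean accidental coincidences per slide:", np.mean(slides))
```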

GstLAL calculates the signal-to-noise ratio and a residual after subtracting the template. As a detection statistic, it uses a likelihood ratio \mathcal{L}: the probability of finding the particular values of the signal-to-noise ratio and residual in both detectors for signals (assuming signal sources are uniformly distributed isotropically in space), divided by the probability of finding them for noise.

The background from GstLAL is worked out by looking at the likelihood ratio fro triggers that only appear in one detector. Since there’s no coincident signal in the other, these triggers can’t correspond to a real gravitational wave. Looking at their distribution tells you how frequently such things happen due to noise, and hence how probable it is for both detectors to see something signal-like at the same time.
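
In cartoon form (the distributions below are invented for illustration; GstLAL's real ranking is built from measured noise distributions and also folds in the residual and both detectors):

```python
import math

def likelihood_ratio(rho, noise_scale=1.0, rho_min=4.0):
    """Toy detection statistic in the spirit of GstLAL's likelihood ratio:
    the probability density of seeing SNR rho for signals, divided by that
    for noise. Sources uniform in volume give p_signal proportional to
    rho**-4 above threshold; Gaussian noise triggers fall off steeply
    (modelled here as exponential)."""
    p_signal = 3.0 * rho_min ** 3 / rho ** 4  # normalised rho^-4 tail
    p_noise = math.exp(-(rho - rho_min) / noise_scale) / noise_scale
    return p_signal / p_noise

# Quiet triggers look noise-like; loud ones are overwhelmingly signal-like.
for rho in (5.0, 10.0, 24.0):
    print(rho, likelihood_ratio(rho))
```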

The results of the searches are shown in the figure below.

Search results for GW150914

Search results for PyCBC (left) and GstLAL (right). The histograms show the number of candidate events (orange squares) compared to the background. The black line includes GW150914 in the background estimate, the purple removes it (assuming that it is a signal). The further an orange square is above the lines, the more significant it is. Particle physicists like to quote significance in terms of \sigma and for some reason we’ve copied them. The second most significant event (around 2\sigma) is LVT151012. Fig. 7 from the Compact Binary Coalescence Paper.

GW150914 is the most significant event in both searches (it is the most significant PyCBC event even considering just single-detector triggers). They both find GW150914 with the same template values. The significance is literally off the charts. PyCBC can only calculate an upper bound on the false alarm probability of < 2 \times 10^{-7}. GstLAL calculates a false alarm probability of 1.4 \times 10^{-11}, but this is reaching the level that we have to worry about the accuracy of assumptions that go into this (that the distribution of noise triggers is uniform across templates—if this is not the case, the false alarm probability could be about 10^3 times larger). Therefore, for our overall result, we stick to the upper bound, which is consistent with both searches. The false alarm probability is so tiny, I don’t think anyone doubts this signal is real.

There is a second event that pops up above the background. This is LVT151012. It is found by both searches. Its signal-to-noise ratio is 9.6, compared with GW150914’s 24, so it is quiet. The false alarm probability from PyCBC is 0.02, and from GstLAL is 0.05, consistent with what we would expect for such a signal. LVT151012 does not reach the standards we would like to claim a detection, but it is still interesting.

Running parameter estimation on LVT151012, as we did for GW150914, gives beautiful results. If it is astrophysical in origin, it is another binary black hole merger. The component masses are lower, m_1^\mathrm{source} = 23^{+18}_{-5} M_\odot and m_2^\mathrm{source} = 13^{+4}_{-5} M_\odot (the asymmetric uncertainties come from imposing m_1^\mathrm{source} \geq m_2^\mathrm{source}); the chirp mass is \mathcal{M} = 15^{+1}_{-1} M_\odot. The effective spin, as for GW150914, is close to zero \chi_\mathrm{eff} = 0.0^{+0.3}_{-0.2}. The luminosity distance is D_\mathrm{L} = 1100^{+500}_{-500}~\mathrm{Mpc}, meaning it is about twice as far away as GW150914’s source. I hope we’ll write more about this event in the future; there are some more details in the Rates Paper.

Trust LIGO

Is it random noise or is it a gravitational wave? LVT151012 remains a mystery. This candidate event is discussed in the Compact Binary Coalescence Paper (where it is found), the Rates Paper (which calculates the probability that it is extraterrestrial in origin), and the Detector Characterisation Paper (where known environmental sources fail to explain it). SPOILERS

The Parameter Estimation Paper

Synopsis: Parameter Estimation Paper
Read this if: You want to know the properties of GW150914’s source
Favourite part: We inferred the properties of black holes using measurements of spacetime itself!

The gravitational wave signal encodes all sorts of information about its source. Here, we explain how we extract this information to produce probability distributions for the source parameters. I wrote about the properties of GW150914 in my previous post, so here I’ll go into a few more technical details.

To measure parameters we match a template waveform to the data from the two instruments. The better the fit, the more likely it is that the source had the particular parameters which were used to generate that particular template. Changing different parameters has different effects on the waveform (for example, changing the distance changes the amplitude, while changing the relative arrival times changes the sky position), so we often talk about different pieces of the waveform containing different pieces of information, even though we fit the whole lot at once.

Waveform explained

The shape of the gravitational wave encodes the properties of the source. This information is what lets us infer parameters. The example signal is GW150914. I made this explainer with Ben Farr and Nutsinee Kijbunchoo for the LIGO Magazine.

The waveform for a binary black hole merger has three fuzzily defined parts: the inspiral (where the two black holes orbit each other), the merger (where the black holes plunge together and form a single black hole) and ringdown (where the final black hole relaxes to its final state). Having waveforms which include all of these stages is a fairly recent development, and we’re still working on efficient ways of including all the effects of the spin of the initial black holes.

We currently have two favourite binary black hole waveforms for parameter estimation:

  • The first we refer to as EOBNR, short for its proper name of SEOBNRv2_ROM_DoubleSpin. This is constructed by using some cunning analytic techniques to calculate the dynamics (known as effective-one-body or EOB) and tuning the results to match numerical relativity (NR) simulations. This waveform only includes the effects of spins aligned with the orbital angular momentum of the binary, so it doesn’t allow us to measure the effects of precession (wobbling around caused by the spins).
  • The second we refer to as IMRPhenom, short for IMRPhenomPv2. This is constructed by fitting to the frequency dependence of EOB and NR waveforms. The dominant effects of precession are included by twisting up the waveform.

We’re currently working on results using a waveform that includes the full effects of spin, but that is extremely slow (it’s about half done now), so those results won’t be out for a while.

The results from the two waveforms agree really well, even though they’ve been created by different teams using different pieces of physics. This was a huge relief when I was first making a comparison of results! (We had been worried about systematic errors from waveform modelling). The consistency of results is partly because our models have improved and partly because the properties of the source are such that the remaining differences aren’t important. We’re quite confident that most of the parameters are reliably measured!

The component masses are the most important factor for controlling the evolution of the waveform, but we don’t measure the two masses independently. The evolution of the inspiral is dominated by a combination called the chirp mass, and the merger and ringdown are dominated by the total mass. For lighter mass systems, where we get lots of inspiral, we measure the chirp mass really well, and for high mass systems, where the merger and ringdown are the loudest parts, we measure the total mass. GW150914 is somewhere in the middle. The probability distributions for the masses are shown below: we can compensate for one of the component masses being smaller if we make the other larger, as this keeps chirp mass and total mass about the same.
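
The chirp mass is \mathcal{M} = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}. A quick sketch of the degeneracy (the masses here are just illustrative numbers, not our measured values):

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)**(3/5) / (m1+m2)**(1/5), which sets the rate at
    which the inspiral sweeps up in frequency."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def companion_mass(m1, mc):
    """Solve chirp_mass(m1, m2) = mc for m2 by bisection (a quick sketch,
    valid when mc is attainable for this m1 with m2 <= m1)."""
    lo, hi = 0.1, m1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chirp_mass(m1, mid) < mc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Pairs like (36, 29) and (45, ~24) solar masses share nearly the same
# chirp mass, which is why the two-dimensional mass distribution is a
# narrow diagonal band rather than a blob.
mc = chirp_mass(36.0, 29.0)
print("chirp mass:", mc)
print("companion to a 45 solar mass hole with the same chirp mass:",
      companion_mass(45.0, mc))
```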

Binary black hole masses

Estimated masses for the two black holes in the binary. Results are shown for the EOBNR waveform and the IMRPhenom: both agree well. The Overall results come from averaging the two. The dotted lines mark the edge of our 90% probability intervals. The sharp diagonal line cut-off in the two-dimensional plot is a consequence of requiring m_1^\mathrm{source} \geq m_2^\mathrm{source}.  Fig. 1 from the Parameter Estimation Paper.

To work out these masses, we need to take into account the expansion of the Universe. As the Universe expands, it stretches the wavelength of the gravitational waves. The same happens to light: visible light becomes redder, so the phenomenon is known as redshifting (even for gravitational waves). If you don’t take this into account, the masses you measure are too large. To work out how much redshift there is you need to know the distance to the source. The probability distribution for the distance is shown below: we plot the distance together with the inclination, since both of these affect the amplitude of the waves (the source is quietest when we look at it edge-on from the side, and loudest when seen face-on/off from above/below).
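
The correction is simple: a source at redshift z has its measured, detector-frame masses stretched by the same factor as the waves,

```latex
m^{\mathrm{det}} = (1 + z)\, m^{\mathrm{source}}
```

For GW150914, z \approx 0.09, so ignoring the redshift would make the black holes appear about 9% heavier than they really are.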

Distance and inclination

Estimated luminosity distance and binary inclination angle. An inclination of \theta_{JN} = 90^\circ means we are looking at the binary (approximately) edge-on. Results are shown for the EOBNR waveform and the IMRPhenom: both agree well. The Overall results come from averaging the two. The dotted lines mark the edge of our 90% probability intervals.  Fig. 2 from the Parameter Estimation Paper.

After the masses, the most important properties for the evolution of the binary are the spins. We don’t measure these too well, but the probability distribution for their magnitudes and orientations from the precessing IMRPhenom model are shown below. Both waveform models agree that the effective spin \chi_\mathrm{eff} (a combination of both spins in the direction of the orbital angular momentum) is small. Therefore, either the spins are small or are larger but not aligned (or antialigned) with the orbital angular momentum. The spin of the more massive black hole is the better measured of the two.
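
The effective spin is the mass-weighted combination \chi_\mathrm{eff} = (m_1 a_1 \cos\theta_1 + m_2 a_2 \cos\theta_2)/(m_1 + m_2), where the a_i are the dimensionless spin magnitudes and the \theta_i are the tilts away from the orbital angular momentum. A quick numerical illustration of why a small \chi_\mathrm{eff} is ambiguous:

```python
import math

def chi_eff(m1, a1, tilt1, m2, a2, tilt2):
    """Effective spin: mass-weighted sum of the spin components along the
    orbital angular momentum. a1, a2 are dimensionless magnitudes (0 to 1);
    tilt1, tilt2 are angles from the orbital angular momentum in radians."""
    return (m1 * a1 * math.cos(tilt1) + m2 * a2 * math.cos(tilt2)) / (m1 + m2)

# Small aligned spins give a small chi_eff...
print(chi_eff(36.0, 0.05, 0.0, 29.0, 0.05, 0.0))
# ...but so do large spins lying close to the orbital plane, which is why
# chi_eff near zero doesn't pin down the spin magnitudes.
print(chi_eff(36.0, 0.9, math.pi / 2, 29.0, 0.9, math.pi / 2))
```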

Orientation and magnitudes of the two spins

Estimated orientation and magnitude of the two component spins from the precessing IMRPhenom model. The magnitude lies between 0 and 1, and the spin is perfectly aligned with the orbital angular momentum if the angle is 0. The distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Part of Fig. 5 from the Parameter Estimation Paper.

The Testing General Relativity Paper

Synopsis: Testing General Relativity Paper
Read this if: You want to know more about the nature of gravity.
Favourite part: Einstein was right! (Or more correctly, we can’t prove he was wrong… yet)

The Testing General Relativity Paper is one of my favourites as it packs a lot of science in. Our first direct detection of gravitational waves and of the merger of two black holes provides a new laboratory to test gravity, and this paper runs through the results of the first few experiments.

Before we start making any claims about general relativity being wrong, we first have to check if there’s any weird noise present. You don’t want to have to rewrite the textbooks just because of an instrumental artifact. After taking out a good guess for the waveform (as predicted by general relativity), we find that the residuals do match what we expect for instrumental noise, so we’re good to continue.

I’ve written about a couple of tests of general relativity in my previous post: the consistency of the inspiral and merger–ringdown parts of the waveform, and the bounds on the mass of the graviton (from evolution of the signal). I’ll cover the others now.

The final part of the signal, where the black hole settles down to its final state (the ringdown), is the place to look to check that the object is a black hole and not some other type of mysterious dark and dense object. It is tricky to measure this part of the signal, but we don’t see anything odd. We can’t yet confirm that the object has all the properties you’d want to pin down that it is exactly a black hole as predicted by general relativity; we’re going to have to wait for a louder signal for this. This test is especially poignant, as Steven Detweiler, who pioneered a lot of the work calculating the ringdown of black holes, died a week before the announcement.

We can allow terms in our waveform (here based on the IMRPhenom model) to vary and see which values best fit the signal. If there is evidence for differences compared with the predictions from general relativity, we would have evidence for needing an alternative. Results for this analysis are shown below for a set of different waveform parameters \hat{p}_i: the \varphi_i parameters determine the inspiral, the \alpha_i parameters determine the merger–ringdown and the \beta_i parameters cover the intermediate regime. If the deviation \delta \hat{p}_i is zero, the value coincides with the value from general relativity. The plot shows what would happen if you allow all the variables to vary at once (the multiple results) and if you try just that parameter on its own (the single results).

Testing general relativity bounds

Probability distributions for waveform parameters. The single analysis only varies one parameter, the multiple analysis varies all of them, and the J0737-3039 result is the existing bound from the double pulsar. A deviation of zero is consistent with general relativity. Fig. 7 from the Testing General Relativity Paper.

Overall the results look good. Some of the single results are centred away from zero, but we think that this is just a random fluctuation caused by noise (we’ve seen similar behaviour in tests, so don’t panic yet). It’s not surprising that \varphi_3, \varphi_4 and \varphi_{5l} all show this behaviour, as they are sensitive to similar noise features. These measurements are much tighter than from any test we’ve done before, except for the measurement of \varphi_0, which is better measured from the double pulsar (since we have observed lots and lots of orbits of that system).

The final test is to look for additional polarizations of gravitational waves. These are predicted in several alternative theories of gravity. Unfortunately, because we only have two detectors which are pretty much aligned we can’t say much, at least without knowing for certain the location of the source. Extra detectors will be useful here!

In conclusion, we have found no evidence to suggest we need to throw away general relativity, but future events will help us to perform new and stronger tests.

The Rates Paper

Synopsis: Rates Paper
Read this if: You want to know how often binary black holes merge (and how many we’ll detect)
Favourite part: There’s a good chance we’ll have ten detections by the end of our second observing run (O2)

Before September 14, we had never seen a binary stellar-mass black hole system. We were therefore rather uncertain about how many we would see. We had predictions based on simulations of the evolution of stars and their dynamical interactions. These said we shouldn’t be too surprised if we saw something in O1, but that we shouldn’t be surprised if we didn’t see anything for many years either. We weren’t really expecting to see a black hole system so soon (the smart money was on a binary neutron star). However, we did find a binary black hole, and this happened right at the start of our observations! What do we now believe about the rate of mergers?

To work out the rate, you first need to count the number of events you have detected and then work out how sensitive you are to the population of signals (how many could you see out of the total).

Counting detections sounds simple: we have GW150914 without a doubt. However, what about all the quieter signals? If you have 100 events each with a 1% probability of being real, then even though you can’t say with certainty that any one is an actual signal, you would expect one to be so. We want to work out how many events are real and how many are due to noise. Handily, trying to tell apart different populations of things when you’re not certain about individual members is a common problem in astrophysics (where it’s often difficult to go and check what something actually is), so there exists a probabilistic framework for doing this.

Using the expected number of real and noise events for a given detection statistic (as described in the Compact Binary Coalescence Paper), we count the number of detections and as a bonus, get a probability that each event is of astrophysical origin. There are two events with more than a 50% chance of being real: GW150914, where the probability is close to 100%, and LVT151012, where the probability is 84% based on GstLAL and 91% based on PyCBC.
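
Schematically, the probability of astrophysical origin comes from comparing the two expected populations at a candidate's detection statistic (the numbers below are invented for illustration, not the pipelines' actual densities):

```python
def p_astro(signal_rate_density, noise_rate_density):
    """Probability a candidate is astrophysical, given the expected rate
    densities of signal and noise events at its detection statistic.
    This is the mixture-of-two-populations counting idea in one line."""
    return signal_rate_density / (signal_rate_density + noise_rate_density)

# Hypothetical numbers: for a GW150914-like statistic the noise density is
# essentially zero; for an LVT151012-like one the populations are comparable.
print(p_astro(1.0, 1e-7))   # near-certain signal
print(p_astro(0.9, 0.16))   # ~0.85, in the spirit of LVT151012's 84-91%
```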

By injecting lots of fake signals into some data and running our detection pipelines, we can work out how sensitive they are (in effect, how far away they can find particular types of sources). For a given number of detections, the more sensitive we are, the lower the actual rate of mergers should be (for lower sensitivity we would miss more, while there’s no hiding for higher sensitivity).
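
A heavily simplified Monte Carlo version of this logic, with every number made up for illustration (the real analysis injects full waveforms through the actual pipelines):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical injection campaign: sources uniform in volume out to d_max,
# detected when the SNR (scaling as 1/distance for a fixed source) exceeds 8.
d_max = 2000.0                                             # Mpc
snr_at_100mpc = 30.0                                       # assumed loudness
d = d_max * rng.uniform(0.0, 1.0, 100_000) ** (1.0 / 3.0)  # uniform in volume
found = (snr_at_100mpc * 100.0 / d) > 8.0

v_total = 4.0 / 3.0 * np.pi * (d_max / 1000.0) ** 3        # Gpc^3
v_sensitive = v_total * found.mean()        # volume we can actually survey
observing_time = 17.0 / 365.25              # yr of coincident data

n_detections = 2                            # GW150914 + (maybe) LVT151012
rate = n_detections / (v_sensitive * observing_time)
print(f"sensitive volume ~ {v_sensitive:.2f} Gpc^3")
print(f"rate ~ {rate:.0f} mergers per Gpc^3 per yr")
```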

There is one final difficulty in working out the total number of binary black hole mergers: we need to know the distribution of masses, because our sensitivity depends on this. However, we don’t yet know this as we’ve only seen GW150914 and (maybe) LVT151012. Therefore, we try three possibilities to get an idea of what the merger rate could be.

  1. We assume that binary black holes are either like GW150914 or like LVT151012. Given that these are our only possible detections at the moment, this should give a reasonable estimate. A similar approach has been used for estimating the population of binary neutron stars from pulsar observations [bonus note].
  2. We assume that the distribution of masses is flat in the logarithm of the masses. This probably gives more heavy black holes than in reality (and so a lower merger rate).
  3. We assume that black holes follow a power law like the initial masses of stars. This probably gives too many low mass black holes (and so a higher merger rate).

The estimated merger rates (number of binary black hole mergers per volume per time) are then: 1. 83^{+168}_{-63}~\mathrm{Gpc^{-3}\,yr^{-1}}; 2. 61^{+124}_{-48}~\mathrm{Gpc^{-3}\,yr^{-1}}, and 3. 200^{+400}_{-160}~\mathrm{Gpc^{-3}\,yr^{-1}}. There is a huge scatter, but the flat and power-law rates hopefully bound the true value.

We’ll pin down the rate better after a few more detections. How many more should we expect to see? Using the projected sensitivity of the detectors over our coming observing runs, we can work out the probability of making N more detections. This is shown in the plot below. It looks like there’s about a 10% chance of not seeing anything else in O1, but we’re confident that we’ll have 10 more by the end of O2, and 35 more by the end of O3! I may need to lie down…
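
Since detections arrive (roughly) as a Poisson process, these forecasts boil down to sums like the following, where the expected counts are invented here purely for illustration:

```python
import math

def prob_at_least(n, expected):
    """Poisson probability of at least n detections given an expected count."""
    p_less = sum(expected ** k * math.exp(-expected) / math.factorial(k)
                 for k in range(n))
    return 1.0 - p_less

# Toy expectations: if the rest of O1 should yield ~2 events on average,
# the chance of seeing none is exp(-2) ~ 14%, in the same ballpark as the
# ~10% quoted above.
print(prob_at_least(1, 2.0))    # at least one more in O1
print(prob_at_least(10, 20.0))  # 10 or more, given an O2-like expectation
```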

Expected number of detections

The percentage chance of making 0, 10, 35 and 70 more detections of binary black holes as time goes on and detector sensitivity improves (based upon our data so far). This is a simplified version of part of Fig. 3 of the Rates Paper taken from the science summary.

The Burst Paper

Synopsis: Burst Paper
Read this if: You want to check what we can do without a waveform template
Favourite part: You don’t need a template to make a detection

When discussing what we can learn from gravitational wave astronomy, you can almost guarantee that someone will say something about discovering the unexpected. Whenever we’ve looked at the sky in a new band of the electromagnetic spectrum, we found something we weren’t looking for: pulsars for radio, gamma-ray bursts for gamma-rays, etc. Can we do the same in gravitational wave astronomy? There may well be signals we weren’t anticipating out there, but will we be able to detect them? The burst pipelines have our back here, at least for short signals.

The burst search pipelines, like their compact binary coalescence partners, assign candidate events a detection statistic and then work out a probability associated with being a false alarm caused by noise. The difference is that the burst pipelines try to find a wider range of signals.

There are three burst pipelines described: coherent WaveBurst (cWB), which famously first found GW150914; omicron–LALInferenceBurst (oLIB), and BayesWave, which follows up on cWB triggers.

As you might guess from the name, cWB looks for a coherent signal in both detectors. It looks for excess power (indicating a signal) in a time–frequency plot, and then classifies candidates based upon their structure. There’s one class for blip glitches and resonance lines (see the Detector Characterisation Paper), which are all thrown away as noise; one class for chirp-like signals that increase in frequency with time, which is where GW150914 was found; and one class for everything else. cWB’s detection statistic \eta_c is something like a signal-to-noise ratio constructed based upon the correlated power in the detectors. The value for GW150914 was \eta_c = 20, which is higher than for any other candidate. The false alarm probability (or p-value), folding in all three search classes, is 2\times 10^{-6}, which is pretty tiny, even if not as significant as for the tailored compact binary searches.
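
A minimal excess-power demo on toy data (a crude spectrogram on a single made-up stream; cWB's coherent statistic across detectors is far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy whitened data: Gaussian noise plus a short chirp-like burst at 2 s.
fs, dur = 1024, 4.0
t = np.arange(int(fs * dur)) / fs
data = rng.normal(size=t.size)
burst = (t > 2.0) & (t < 2.2)
tb = t[burst] - 2.0
data[burst] += 4.0 * np.sin(2.0 * np.pi * (60.0 * tb + 150.0 * tb ** 2))

# Crude time-frequency map: power in short Hann-windowed segments.
seg = 128
win = np.hanning(seg)
frames = data[: t.size // seg * seg].reshape(-1, seg)
tf_power = np.abs(np.fft.rfft(frames * win, axis=1)) ** 2

# Excess power: flag pixels well above the typical noise level; a real
# burst search then clusters these pixels and classifies the candidates.
hot = np.argwhere(tf_power > 10.0 * np.median(tf_power))
peak = np.unravel_index(tf_power.argmax(), tf_power.shape)
print("pixels above threshold:", hot.shape[0])
print("loudest pixel (time segment, frequency bin):", peak)
```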

The oLIB search has two stages. First it makes a time–frequency plot and looks for power coincident between the two detectors. Likely candidates are then followed up by matching a sine–Gaussian wavelet to the data, using a similar algorithm to the one used for parameter estimation. Its detection statistic is something like a likelihood ratio for signal versus noise. It calculates a false alarm probability of about 2\times 10^{-6} too.
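
For reference, a sine–Gaussian wavelet under one common convention (conventions for the envelope width differ between codes, so treat this as a sketch):

```python
import numpy as np

def sine_gaussian(t, f0, q, t0=0.0, amp=1.0, phi=0.0):
    """Sine-Gaussian wavelet: a sinusoid at frequency f0 under a Gaussian
    envelope whose width is set by the quality factor q (roughly the number
    of cycles). Wavelets like this are the building blocks that oLIB and
    BayesWave fit to the data."""
    tau = q / (2.0 * np.pi * f0)  # envelope width in seconds
    envelope = np.exp(-0.5 * ((t - t0) / tau) ** 2)
    return amp * envelope * np.sin(2.0 * np.pi * f0 * (t - t0) + phi)

t = np.linspace(-0.1, 0.1, 2048)
w = sine_gaussian(t, f0=150.0, q=8.0)
print("peak amplitude:", np.abs(w).max())
```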

BayesWave fits a variable number of sine–Gaussian wavelets to the data. This can model both a signal (when the wavelets are the same for both detectors) and glitches (when the wavelets are independent). This is really clever, but is too computationally expensive to be left running on all the data. Therefore, it follows up on things highlighted by cWB, potentially increasing their significance. Its detection statistic is the Bayes factor comparing the signal and glitch models. It estimates the false alarm probability to be about 7 \times 10^{-7} (which agrees with the cWB estimate if you only consider chirp-like triggers).

None of the searches find LVT151012. However, as this is a quiet, lower mass binary black hole, I think that this is not necessarily surprising.

cWB and BayesWave also output a reconstruction of the waveform. Reassuringly, this does look like binary black hole coalescence!

Estimated waveforms from different models

Gravitational waveforms from our analyses of GW150914. The wiggly grey lines are the data from Hanford (top) and Livingston (bottom); these are analysed coherently. The plots show waveforms whitened by the noise power spectral density. The dark band shows the waveform reconstructed by BayesWave without assuming that the signal is from a binary black hole (BBH). The light bands show the distribution of BBH template waveforms that were found to be most probable from our parameter-estimation analysis. The two techniques give consistent results: the match between the two models is 94^{+2}_{-3}\%. Fig. 6 of the Parameter Estimation Paper.

The paper concludes by performing some simple fits to the reconstructed waveforms. For this, you do have to assume that the signal came from a binary black hole. They find parameters roughly consistent with those from the full parameter-estimation analysis, which is a nice sanity check of our results.
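The match quoted in the figure caption above is essentially a normalised inner product between the two waveform estimates. A simplified sketch (the full calculation also weights by the noise spectrum and maximises over relative time and phase shifts):

```python
import numpy as np

def match(h1, h2):
    """Normalised overlap between two (already whitened) waveforms.
    Simplified: no noise weighting or time/phase maximisation."""
    inner = np.dot(h1, h2)
    norm = np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
    return inner / norm

# Two chirp-like toy signals that differ slightly in phase evolution
t = np.linspace(0, 0.2, 2048)
h_a = np.sin(2 * np.pi * (30 * t + 200 * t ** 2))
h_b = np.sin(2 * np.pi * (30 * t + 205 * t ** 2))
m = match(h_a, h_b)  # between 0 and 1; below 1 because the phases differ
```

A match of 1 means the two models agree perfectly; the 94% quoted for GW150914 indicates that the template-free and BBH reconstructions are highly consistent.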

The Detector Characterisation Paper

Synopsis: Detector Characterisation Paper
Read this if: You’re curious if something other than a gravitational wave could be responsible for GW150914 or LVT151012
Favourite part: Mega lightning bolts can cause correlated noise

The output from the detectors that we analyse for signals is simple. It is a single channel that records the strain. To monitor instrumental behaviour and environmental conditions the detector characterisation team record over 200,000 other channels. These measure everything from the alignment of the optics through ground motion to incidence of cosmic rays. Most of the data taken by LIGO is to monitor things which are not gravitational waves.

This paper examines all the potential sources of noise in the LIGO detectors, how we monitor them to ensure they are not confused for a signal, and the impact they could have on estimating the significance of events in our searches. It is amazingly thorough work.

There are lots of potential noise sources for LIGO. Uncorrelated noise sources happen independently at both sites, therefore they can only be mistaken for a gravitational wave if by chance two occur at the right time. Correlated noise sources affect both detectors, and so could be more confusing for our searches, although there’s no guarantee that they would cause a disturbance that looks anything like a binary black hole merger.

Sources of uncorrelated noise include:

  • Ground motion caused by earthquakes or ocean waves. These create wibbling which can affect the instruments, even though they are well isolated. This is usually at low frequencies (below 0.1~\mathrm{Hz} for earthquakes, although it can be higher if the epicentre is near), unless there is motion of nearby optics (which can couple to cause higher frequency noise). There is a network of seismometers to measure earthquakes at both sites. There were two magnitude 2.1 earthquakes within 20 minutes of GW150914 (one off the coast of Alaska, the other south-west of Seattle), but both produced ground motion that is ten times too small to impact the detectors. There was some low frequency noise in Livingston at the time of LVT151012 which is associated with a period of bad ocean waves. However, there is no evidence that this could be converted to the frequency range associated with the signal.
  • People moving around near the detectors can also cause vibrational or acoustic disturbances. People are kept away from the detectors while they are running and accelerometers, microphones and seismometers monitor the environment.
  • Modulation of the lasers at 9~\mathrm{MHz} and 45~\mathrm{MHz} is done to monitor and control several parts of the optics. There is a fault somewhere in the system which means that there is a coupling to the output channel and we get noise across 10~\mathrm{Hz} to 2~\mathrm{kHz}, which is where we look for compact binary coalescences. Rai Weiss suggested shutting down the instruments to fix the source of this and delaying the start of observations—it’s a good job we didn’t. Periods of data where this fault occurs are flagged and not included in the analysis.
  • Blip transients are short glitches that occur for unknown reasons. They’re quite mysterious. They are in the right frequency range (30~\mathrm{Hz} to 250~\mathrm{Hz}) to be confused with binary black holes, but don’t have the right frequency evolution. They contribute to the background of noise triggers in the compact binary coalescence searches, but are unlikely to be the cause of GW150914 or LVT151012 since they don’t have the characteristic chirp shape.

    Normalised spectrogram of a blip transient.

    A time–frequency plot of a blip glitch in LIGO-Livingston. Blip glitches are the right frequency range to be confused with binary coalescences, but don’t have the chirp-like structure. Blips are symmetric in time, whereas binary coalescences sweep up in frequency. Fig. 3 of the Detector Characterisation Paper.

Correlated noise can be caused by:

  • Electromagnetic signals which can come from lightning, solar weather or radio communications. This is measured by radio receivers and magnetometers, and it’s extremely difficult to produce a signal that is strong enough to have any impact on the detectors’ output. There was one strong (peak current of about 500~\mathrm{kA}) lightning strike in the same second as GW150914 over Burkina Faso. However, the magnetic disturbances were at least a thousand times too small to explain the amplitude of GW150914.
  • Cosmic ray showers can cause electromagnetic radiation and particle showers. The particle flux becomes negligible after a few kilometres, so it’s unlikely that both Livingston and Hanford would be affected, but just in case there is a cosmic ray detector at Hanford. It has seen nothing suspicious.

All the monitoring channels give us a lot of insight into the behaviour of the instruments. Times which can be identified as having especially bad noise properties (where the noise could influence the measured output), or where the detectors are not working properly, are flagged and not included in the search analyses. Applying these vetoes means that we can’t claim a detection when we know something else could mimic a gravitational wave signal, but it also helps us clean up our background of noise triggers. This has the effect of increasing the significance of the triggers which remain (since there are fewer false alarms they could be confused with). For example, if we leave the bad period in, the PyCBC false alarm probability for LVT151012 goes up from 0.02 to 0.14. The significance of GW150914 is so great that we don’t really need to worry about the effects of vetoes.
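The effect of vetoes on significance can be seen in a toy calculation (all the numbers below are made up for illustration): removing triggers from known-bad periods thins out the background that a candidate is compared against, lowering its false alarm probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy background: detection statistics of noise triggers (hypothetical values)
background = rng.exponential(scale=1.0, size=100_000)
# Pretend some triggers came from vetoable glitchy periods, and that those
# periods tend to produce the loudest noise events
glitchy = rng.exponential(scale=2.0, size=5_000)
all_triggers = np.concatenate([background, glitchy])

candidate = 9.0  # detection statistic of a marginal candidate (hypothetical)

def false_alarm_probability(stat, noise_triggers):
    """Fraction of noise triggers at least as loud as the candidate."""
    return np.mean(noise_triggers >= stat)

fap_with_glitches = false_alarm_probability(candidate, all_triggers)
fap_after_vetoes = false_alarm_probability(candidate, background)
# Removing the glitchy periods lowers the false alarm probability
```

This is the same logic behind LVT151012’s false alarm probability improving from 0.14 to 0.02 once the bad periods are vetoed.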

At the time of GW150914 the detectors were running well, the data around the event are clean, and nothing in any of the auxiliary channels records anything which could have caused the event. The only source of a correlated signal which has not been ruled out is a gravitational wave from a binary black hole merger. The time–frequency plots of the measured strains are shown below, and it’s easy to pick out the chirps.

Normalised spectrograms for GW150914

Time–frequency plots for GW150914 as measured by Hanford (left) and Livingston (right). These show the characteristic increase in frequency with time of the chirp of a binary merger. The signal is clearly visible above the noise. Fig. 10 of the Detector Characterisation Paper.

The data around LVT151012 are significantly less stationary than around GW150914. There was an elevated rate of noise transients around this time, probably due to extra ground motion caused by ocean waves. This low frequency noise is clearly visible in the Livingston time–frequency plot below. There is no evidence that it gets converted to higher frequencies though. None of the detector characterisation results suggest that LVT151012 was caused by a noise artifact.

Normalised spectrograms for LVT151012

Time–frequency plots for LVT151012 as measured by Hanford (left) and Livingston (right). You can see the characteristic increase in frequency with time of the chirp of a binary merger, but this is mixed in with noise. The scale is reduced compared with that for GW150914, which is why noise features appear more prominent. The band at low frequency in Livingston is due to ground motion; this is not present in Hanford. Fig. 13 of the Detector Characterisation Paper.

If you’re curious about the state of the LIGO sites and their array of sensors, you can see more about the physical environment monitors at pem.ligo.org.

The Calibration Paper

Synopsis: Calibration Paper
Read this if: You like control engineering or precision measurement
Favourite part: Not only are the LIGO detectors sensitive enough to feel the push from a beam of light, they are so sensitive that you have to worry about where on the mirrors you push

We want to measure the gravitational wave strain—the change in length across our detectors caused by a passing gravitational wave. What we actually record is the intensity of laser light at the output of our interferometer. (The output should be dark when the strain is zero, and the intensity increases when the interferometer is stretched or squashed). We need a way to convert intensity to strain, and this requires careful calibration of the instruments.

The calibration is complicated by the control systems. The LIGO instruments are incredibly sensitive, and maintaining them in a stable condition requires lots of feedback systems. These can impact how the strain is transduced into the signal read out by the interferometer. A schematic of how what would be the change in the length of the arms without control systems \Delta L_\mathrm{free} is changed into the measured strain h is shown below. The calibration pipeline builds models to correct for the effects of the control system to provide an accurate measurement of the true gravitational wave strain.
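At its simplest, the correction amounts to multiplying the raw readout by a frequency-dependent response function. The sketch below is only illustrative (the flat response and cut-off are invented numbers): the real calibration model combines separately measured sensing, actuation and digital-filter components.

```python
import numpy as np

def apply_response(readout, response, fs):
    """Convert a raw readout time series into calibrated strain by
    multiplying by a frequency-dependent response function.
    (Illustrative only: the real LIGO model is far more involved.)"""
    freqs = np.fft.rfftfreq(len(readout), d=1 / fs)
    spectrum = np.fft.rfft(readout)
    return np.fft.irfft(spectrum * response(freqs), n=len(readout))

# Hypothetical flat response of 1e-20 strain per count above 10 Hz
def toy_response(f):
    return np.where(f > 10.0, 1e-20, 0.0)

fs = 4096
readout = np.random.default_rng(1).normal(size=fs)  # one second of toy data
strain = apply_response(readout, toy_response, fs)
```

In practice the response is measured (for example with the photon calibrator described below) and its uncertainty propagates into all our astrophysical results.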

Calibration control system schematic

Model for how a differential arm length caused by a gravitational wave \Delta L_\mathrm{free} or a photon calibration signal x_\mathrm{T}^\mathrm{(PC)} is converted into the measured signal h. Fig. 2 from the Calibration Paper.

To measure the different responses of the system, the calibration team make several careful measurements. The primary means is using photon calibration: an auxiliary laser is used to push the mirrors and the response is measured. The spots where the lasers are pointed are carefully chosen to minimise distortion to the mirrors caused by pushing on them. A secondary means is to use actuators which are parts of the suspension system to excite the system.

As a cross-check, we can also use two auxiliary green lasers to measure changes in length using either a frequency modulation or their wavelength. These are similar approaches to those used in initial LIGO. These do give consistent results with the other methods, but they are not as accurate.

Overall, the uncertainty in the calibration of the amplitude of the strain is less than 10\% between 20~\mathrm{Hz} and 1~\mathrm{kHz}, and the uncertainty in phase calibration is less than 10^\circ. These are the values that we use in our parameter-estimation runs. However, the calibration uncertainty actually varies as a function of frequency, with some ranges having much less uncertainty. We’re currently working on implementing a better model for the uncertainty, which may improve our measurements. Fortunately, the masses aren’t too affected by the calibration uncertainty, but sky localization is, so we might get some gain here. We’ll hopefully produce results with updated calibration in the near future.

The Astrophysics Paper

Synopsis: Astrophysics Paper
Read this if: You are interested in how binary black holes form
Favourite part: We might be able to see similar mass binary black holes with eLISA before they merge in the LIGO band [bonus note]

This paper puts our observations of GW150914 in context with regards to existing observations of stellar-mass black holes and theoretical models for binary black hole mergers. Although it doesn’t explicitly mention LVT151012, most of the conclusions would be just as applicable to its source, if it is real. I expect there will be rapid development of the field now, but if you want to catch up on some background reading, this paper is the place to start.

The paper contains lots of references to good papers to delve into. It also highlights the main conclusions we can draw in italics, so it’s easy to skim through if you want a summary. I discussed the main astrophysical conclusions in my previous post. We will know more about binary black holes and their formation when we get more observations, so I think it is a good time to get interested in this area.

The Stochastic Paper

Synopsis: Stochastic Paper
Read this if: You like stochastic backgrounds
Favourite part: We might detect a background in the next decade

A stochastic gravitational wave background could be created by an incoherent superposition of many signals. In pulsar timing, they are looking for a background from many merging supermassive black holes. Could we have a similar thing from stellar-mass black holes? The loudest signals, like GW150914, are resolvable: they stand out from the background. However, for every loud signal, there will be many quiet signals, and the ones below our detection threshold could form a background. Since we’ve found that binary black hole mergers are probably plentiful, the background may be at the high end of previous predictions.

The background from stellar-mass black holes is different from the one from supermassive black holes because the signals are short. While the supermassive black holes produce an almost constant hum throughout your observations, stellar-mass black hole mergers produce short chirps. Instead of having lots of signals that overlap in time, we have a popcorn background, with one arriving on average every 15 minutes. This might allow us to do some different things when it comes to detection, but for now, we just use the standard approach.
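The popcorn nature of the background follows from a quick Poisson estimate (the one-second in-band duration is my rough assumption for a stellar-mass chirp):

```python
import math

rate = 1 / (15 * 60)  # mergers per second: one every 15 minutes, on average
duration = 1.0        # rough in-band duration of a stellar-mass chirp (assumed)

# Probability that at least one other merger arrives during a given signal,
# treating arrivals as a Poisson process
expected_overlaps = rate * duration
p_overlap = 1 - math.exp(-expected_overlaps)
```

The chance of two chirps overlapping is around a tenth of a percent, so the signals arrive as distinct pops rather than a continuous hum.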

This paper calculates the energy density of gravitational waves from binary black holes, excluding the contribution from signals loud enough to be detected. This is done for several different models. The standard (fiducial) model assumes parameters broadly consistent with those of GW150914’s source, plus a particular model for the formation of merging binaries. There are then variations on the model for formation, considering different time delays between formation and merger, and adding in lower mass systems consistent with LVT151012. All these models are rather crude, but give an idea of potential variations in the background. Hopefully more realistic distributions will be considered in the future. There is some change between models, but this is within the (considerable) statistical uncertainty, so predictions seem robust.

Models for a binary black hole stochastic background

Different models for the stochastic background of binary black holes. This is plotted in terms of energy density. The red band indicates the uncertainty on the fiducial model. The dashed line indicates the sensitivity of the LIGO and Virgo detectors after several years at design sensitivity. Fig. 2 of the Stochastic Paper.

After a couple of years at design sensitivity we may be able to make a confident detection of the stochastic background. The background from binary black holes is more significant than we expected.

If you’re wondering about if we could see other types of backgrounds, such as one of cosmological origin, then the background due to binary black holes could make detection more difficult. In effect, it acts as another source of noise, masking the other background. However, we may be able to distinguish the different backgrounds by measuring their frequency dependencies (we expect them to have different slopes), if they are loud enough.
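As a sketch of how the slopes help: the inspiral background is expected to rise as \Omega_\mathrm{GW} \propto f^{2/3} (a standard result for binary inspirals), so fitting the spectral index in log–log space could separate it from, say, a flat cosmological background. The flat spectrum and the amplitudes below are purely illustrative.

```python
import numpy as np

# Toy spectra: the binary background rises as f^(2/3), while a hypothetical
# cosmological background is taken here to be flat (illustrative amplitudes)
f = np.logspace(1, 2, 50)  # 10-100 Hz
omega_bbh = 1e-9 * (f / 25.0) ** (2 / 3)
omega_cosmo = np.full_like(f, 1e-9)

def spectral_index(f, omega):
    """Least-squares slope of log(omega) versus log(f)."""
    slope, _ = np.polyfit(np.log(f), np.log(omega), 1)
    return slope

bbh_slope = spectral_index(f, omega_bbh)      # recovers 2/3
cosmo_slope = spectral_index(f, omega_cosmo)  # recovers 0
```

With real data the fit would of course have to contend with noise, but the principle is the same: different slopes, separable components.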

The Neutrino Paper

Synopsis: Neutrino Paper
Read this if: You really like high energy neutrinos
Favourite part: We’re doing astronomy with neutrinos and gravitational waves—this is multimessenger astronomy without any form of electromagnetic radiation

There are multiple detectors that can look for high energy neutrinos. Currently, LIGO–Virgo observations are being followed up by searches from ANTARES and IceCube. Both of these are Cherenkov detectors: they look for flashes of light created by fast moving particles, not the neutrinos themselves, but things they’ve interacted with. ANTARES searches the waters of the Mediterranean while IceCube uses the ice of Antarctica.

Within 500 seconds either side of the time of GW150914, ANTARES found no neutrinos and IceCube found three. These results are consistent with background levels (over that window, you would expect on average fewer than one neutrino from ANTARES and 4.4 from IceCube). Additionally, none of the IceCube neutrinos are consistent with the sky localization of GW150914 (even though the sky area is pretty big). There is no sign of a neutrino counterpart, which is what we were expecting.
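A quick Poisson check shows why three events are unremarkable given an expected background of 4.4 (if anything, slightly fewer arrived than expected):

```python
import math

def poisson_p_at_least(k, mu):
    """Probability of observing at least k events given Poisson mean mu."""
    p_less = sum(math.exp(-mu) * mu ** n / math.factorial(n) for n in range(k))
    return 1.0 - p_less

# IceCube: 3 neutrinos observed against an expected background of 4.4
p = poisson_p_at_least(3, 4.4)  # roughly 0.8: entirely consistent with noise
```

Seeing three or more events happens about 80% of the time from background alone, so there is nothing to explain.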

Subsequent non-detections have been reported by KamLAND, the Pierre Auger Observatory, Super-Kamiokande, Borexino and NOvA.

The Electromagnetic Follow-up Paper

Synopsis: Electromagnetic Follow-up Paper
Read this if: You are interested in the search for electromagnetic counterparts
Favourite part: So many people were involved in this work that not only do we have to abbreviate the list of authors (Abbott, B.P. et al.), but we should probably abbreviate the list of collaborations too (LIGO Scientific & Virgo Collaboration et al.)

This is the last of the set of companion papers to be released—it took a huge amount of coordinating because of all the teams involved. The paper describes how we released information about GW150914. This should not be typical of how we will do things going forward (i) because we didn’t have all the infrastructure in place on September 14 and (ii) because it was the first time we had something we thought was real.

The first announcement was sent out on September 16, and this contained sky maps from the Burst codes cWB and LIB. In the future, we should be able to send out automated alerts with a few minutes latency.

For the first alert, we didn’t have any results which assumed that the source was a binary, as the searches which issue triggers at low latency were only looking for lower mass systems which would contain a neutron star. I suspect we’ll be reprioritising things going forward. The first information we shared about the potential masses for the source was sent out on October 3. Since this was the first detection, everyone was cautious about triple-checking results, which caused the delay. Revised false alarm rates including results from GstLAL and PyCBC were sent out October 20.

The final sky maps were shared January 13. This is when we’d about finished our own reviews and knew that we would be submitting the papers soon [bonus note]. Our best sky map is the one from the Parameter Estimation Paper. You might expect it to be more constraining than the results from the burst pipelines since it uses a proper model for the gravitational waves from a binary black hole. This is the case if we ignore calibration uncertainty (which is not yet included in the burst codes): then the 50% area is 48~\mathrm{deg}^2 and the 90% area is 150~\mathrm{deg^2}. However, including calibration uncertainty, the sky areas are 150~\mathrm{deg^2} and 590~\mathrm{deg^2} at 50% and 90% probability respectively. Calibration uncertainty has the largest effect on the sky area. All the sky maps agree that the source lies in some region of the annulus set by the time delay between the two detectors.

Sky map

The different sky maps for GW150914 in an orthographic projection. The contours show the 90% region for each algorithm. The faint circles show lines of constant time delay \Delta t_\mathrm{HL} between the two detectors. BAYESTAR rapidly computes sky maps for binary coalescences, but it needs the output of one of the detection pipelines to run, and so was not available at low latency. The LALInference map is our best result. All the sky maps are available as part of the data release. Fig. 2 of the Electromagnetic Follow-up Paper.

A timeline of events is shown below. There were follow-up observations across the electromagnetic spectrum from gamma-rays and X-rays through the optical and near infra-red to radio.

EM follow-up timeline

Timeline for observations of GW150914. The top (grey) band shows information about gravitational waves. The second (blue) band shows high-energy (gamma- and X-ray) observations. The third and fourth (green) bands show optical and near infra-red observations respectively. The bottom (red) band shows radio observations. Fig. 1 from the Electromagnetic Follow-up Paper.

Observations have been reported (via GCN notices) by

Together they cover an impressive amount of the sky as shown below. Many targeted the Large Magellanic Cloud before we knew the source was a binary black hole.

Follow-up observations

Footprints of observations compared with the 50% and 90% areas of the initially distributed (cWB: thick lines; LIB: thin lines) sky maps, also in orthographic projection. The all-sky observations are not shown. The grey background is the Galactic plane. Fig. 3 of the Electromagnetic Follow-up Paper.

Additional observations have been made using archival data by XMM-Newton and AGILE.

We don’t expect any electromagnetic counterpart to a binary black hole. No-one found anything, with the exception of Fermi GBM. They found a weak signal which may be coincident. More work is required to figure out if this is genuine (the statistical analysis looks OK, but sometimes you do have a false alarm). It would be a surprise if it is, so most people are sceptical. However, I think this will make people more interested in following up on our next binary black hole signal!

Bonus notes

Naming The Event

GW150914 is the name we have given to the signal detected by the two LIGO instruments. The “GW” is short for gravitational wave (not galactic worm), and the numbers give the date the wave reached the detectors (2015 September 14). It was originally known as G184098, its ID in our database of candidate events (most circulars sent to and from our observer partners use this ID). That was universally agreed to be terrible to remember. We tried to think of a good nickname for the event, but failed to, so rather by default, it has informally become known as The Event within the Collaboration. I think this is fitting given its significance.

LVT151012 is the name of the most significant candidate after GW150914; it doesn’t reach our criteria to claim detection (a false alarm rate of less than once per century), which is why it’s not GW151012. The “LVT” is short for LIGO–Virgo trigger. It took a long time to settle on this and up until the final week before the announcement it was still going by G197392. Informally, it was known as The Second Monday Event, as it too was found on a Monday. You’ll have to wait for us to finish looking at the rest of the O1 data to see if the Monday trend continues. If it does, it could have serious repercussions for our understanding of Garfield.

Following the publication of the O2 Catalogue Paper, LVT151012 was upgraded to GW151012, and we decided to get rid of the LVT class as it was rather confusing.

Publishing in Physical Review Letters

Several people have asked me if the Discovery Paper was submitted to Science or Nature. It was not. The decision that any detection would be submitted to Physical Review was made ahead of the run. As far as I am aware, there was never much debate about this. Physical Review had been good about publishing all our non-detections and upper limits, so it only seemed fair that they got the discovery too. You don’t abandon your friends when you strike it rich. I am glad that we submitted to them.

Gaby González, the LIGO Spokesperson, contacted the editors of Physical Review Letters ahead of submission to let them know of the anticipated results. They then started to line up some referees to give confidential and prompt reviews.

The initial plan was to submit on January 19, and we held a Collaboration-wide tele-conference to discuss the science. There were a few more things still to do, so the paper was submitted on January 21, following another presentation (and a long discussion of whether a number should be a six or a two) and a vote. The vote was overwhelmingly in favour of submission.

We got the referee reports back on January 27, although they were circulated to the Collaboration the following day. This was a rapid turnaround! From their comments, I suspect that Referee A may be a particle physicist who has dealt with similar claims of first detection—they were most concerned about statistical significance; Referee B seemed like a relativist—they made comments about the effect of spin on measurements, knew about waveforms and even historical papers on gravitational waves, and I would guess that Referee C was an astronomer involved with pulsars—they mentioned observations of binary pulsars potentially claiming the title of first detection and were also curious about sky localization. While I can’t be certain who the referees were, I am certain that I have never had such positive reviews before! Referee A wrote

The paper is extremely well written and clear. These results are obviously going to make history.

Referee B wrote

This paper is a major breakthrough and a milestone in gravitational science. The results are overall very well presented and its suitability for publication in Physical Review Letters is beyond question.

and Referee C wrote

It is an honor to have the opportunity to review this paper. It would not be an exaggeration to say that it is the most enjoyable paper I’ve ever read. […] I unreservedly recommend the paper for publication in Physical Review Letters. I expect that it will be among the most cited PRL papers ever.

I suspect I will never have such emphatic reviews again [happy bonus note][unhappy bonus note].

Publishing in Physical Review Letters seems to have been a huge success. So much so that their servers collapsed under the demand, despite them adding two more in anticipation. In the end they had to quintuple their number of servers to keep up with demand. There were 229,000 downloads from their website in the first 24 hours. Many people remarked that it was good that the paper was freely available. However, we always make our papers public on the arXiv or via LIGO’s Document Control Center [bonus bonus note], so there should never be a case where you miss out on reading a LIGO paper!

Publishing the Parameter Estimation Paper

The reviews for the Parameter Estimation Paper were also extremely positive. Referee A, who had some careful comments on clarifying notation, wrote

This is a beautiful paper on a spectacular result.

Referee B, who commendably did some back-of-the-envelope checks, wrote

The paper is also very well written, and includes enough background that I think a decent fraction of it will be accessible to non-experts. This, together with the profound nature of the results (first direct detection of gravitational waves, first direct evidence that Kerr black holes exist, first direct evidence that binary black holes can form and merge in a Hubble time, first data on the dynamical strong-field regime of general relativity, observation of stellar mass black holes more massive than any observed to date in our galaxy), makes me recommend this paper for publication in PRL without hesitation.

Referee C, who made some suggestions to help a non-specialist reader, wrote

This is a generally excellent paper describing the properties of LIGO’s first detection.

Physical Review Letters were also kind enough to publish this paper open access without charge!

Publishing the Rates Paper

It wasn’t all plain sailing getting the companion papers published. Referees did give the papers the thorough checking that they deserved. The most difficult review was of the Rates Paper. There were two referees, one an astrophysicist, one a statistician. The astrophysics referee was happy with the results and made a few suggestions to clarify or further justify the text. The statistics referee had more serious complaints…

There are five main things which I think made the statistics referee angry. First, the referee objected to our terminology

While overall I’ve been impressed with the statistics in LIGO papers, in one respect there is truly egregious malpractice, but fortunately easy to remedy. It concerns incorrectly using the term “false alarm probability” (FAP) to refer to what statisticians call a p-value, a deliberately vague term (“false alarm rate” is similarly misused). […] There is nothing subtle or controversial about the LIGO usage being erroneous, and the practice has to stop, not just within this paper, but throughout the LIGO collaboration (and as a matter of ApJ policy).

I agree with this. What we call the false alarm probability is not the probability that the detection is a false alarm. It is not the probability that the given signal is noise rather than astrophysical; instead, it is the probability that, if we only had noise, we would get a detection statistic as significant or more so. It might take a minute to realise why those are different. The former (the one we should call p-value) is what the search pipelines give us, but is less useful than the latter for actually working out if the signal is real. The probabilities calculated in the Rates Paper that the signal is astrophysical are really what you want.
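The distinction is easiest to see in a toy simulation: the p-value is the fraction of noise-only experiments producing a statistic at least as loud as the candidate, which is not the same as the probability that the candidate is noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noise-only experiments: the loudest detection statistic in each
# (a toy model with Gaussian statistics, not a real search pipeline)
noise_only_maxima = rng.normal(size=(100_000, 50)).max(axis=1)

observed = 4.0  # detection statistic of our candidate (hypothetical)

# p-value: probability of a statistic this loud or louder *if there were
# only noise*. It is NOT the probability that the candidate is noise --
# that would additionally need a model of how often real signals occur.
p_value = np.mean(noise_only_maxima >= observed)
```

Turning the p-value into the probability that the candidate is astrophysical requires folding in the rate of real signals, which is exactly what the Rates Paper does.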

p-values are often misinterpreted, but most scientists are aware of this, and so are cautious when they come across them.

As a consequence of this complaint, the Collaboration is purging “false alarm probability” from our papers. It is used in most of the companion papers, as they were published before we got this report (and managed to convince everyone that it is important).

Second, we were lacking in references to existing literature

Regarding scholarship, the paper is quite poor. I take it the authors have written this paper with the expectation, or at least the hope, that it would be read […] If I sound frustrated, it’s because I am.

This is fair enough. The referee pointed us to good work done on inferring the rate of gamma-ray bursts by Loredo & Wasserman (Part I, Part II, Part III), as well as by Petit, Kavelaars, Gladman & Loredo on trans-Neptunian objects, and we made sure to add as much of this work as possible in revisions. There’s no excuse for not properly citing useful work!

Third, the referee didn’t understand how we could be certain of the distribution of signal-to-noise ratio \rho without also worrying about the distribution of parameters like the black hole masses. The signal-to-noise ratio is inversely proportional to distance, and we expect sources to be uniformly distributed in volume. Putting these together (and ignoring corrections from cosmology) gives a distribution for the signal-to-noise ratio of p(\rho) \propto \rho^{-4} (Schulz 2011). This is sufficiently well known within the gravitational-wave community that we forgot that those outside wouldn’t appreciate it without some discussion. Therefore, it was useful that the referee did point this out.
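A quick Monte Carlo check of this result (the normalisations below are arbitrary): placing sources uniformly in volume, the survival function is P({>}\rho) \propto \rho^{-3}, so doubling the threshold should cut the number of sources above it by roughly a factor of eight.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sources uniform in volume inside a sphere: p(d) ∝ d^2
d_max = 1000.0
d = d_max * rng.random(2_000_000) ** (1 / 3)

# Signal-to-noise ratio falls off inversely with distance
# (normalisation is arbitrary; only the shape of the distribution matters)
rho = 8.0 * (100.0 / d)

# For p(ρ) ∝ ρ^-4 the survival function is P(>ρ) ∝ ρ^-3, so doubling
# the threshold should reduce the count above it by a factor of ~8
rho_0 = 8.0
ratio = np.sum(rho > rho_0) / np.sum(rho > 2 * rho_0)
```

The recovered ratio is close to eight, independent of the masses: they only set the overall scale of \rho, not the shape of its distribution.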

Fourth, the referee thought we had made an error in our approach. They provided an alternative derivation which

if useful, should not be used directly without some kind of attribution

Unfortunately, they were missing some terms in their expressions. When these were added in, their approach reproduced our own (I had a go at checking this myself). Given that we had annoyed the referee on so many other points, it was tricky trying to convince them of this. Most of the time spent responding to the referees was actually working on the referee response and not on the paper.

Finally, the referee was unhappy that we didn’t make all our data public so that they could check things themselves. I think it would be great, and it will happen; it was just too early at the time.

LIGO Document Control Center

Papers in the LIGO Document Control Center are assigned a number starting with P (for “paper”) and then several digits. The Discovery Paper’s reference is P150914. I only realised why this was the case on the day of submission.

The überbank

The set of templates used in the searches is designed to be able to catch binary neutron stars, neutron star–black hole binaries and binary black holes. It covers component masses from 1 to 99 solar masses, with total masses less than 100 solar masses. The upper cut-off is chosen for computational convenience, rather than physical reasons: we do look for higher mass systems in a similar way, but they are easier to confuse with glitches and so we have to be more careful tuning the search. Since the bank of templates is so comprehensive, it is known as the überbank. Although it could find binary neutron stars or neutron star–black hole binaries, we only discuss binary black holes here.

The template bank doesn’t cover the full parameter space; in particular, it assumes that the spins of the two components are aligned. This shouldn’t significantly affect its efficiency at finding signals, but gives another reason (together with the coarse placement of templates) why we need to do proper parameter estimation to measure properties of the source.

Alphabet soup

In the calculation of rates, the probabilistic means for counting sources is known as the FGMC method after its authors (who include two Birmingham colleagues and my former supervisor). The means of calculating rates assuming that the population is divided into one class to match each observation is also named for the initials of its authors as the KKL approach. The combined FGMCKKL method for estimating merger rates goes by the name alphabet soup, as that is much easier to swallow.

Multi-band gravitational wave astronomy

The prospect of detecting a binary black hole with a space-based detector and then seeing the same binary merger with ground-based detectors is especially exciting. My officemate Alberto Sesana (who’s not in LIGO) has just written a paper on the promise of multi-band gravitational wave astronomy. Black hole binaries like GW150914 could be spotted by eLISA (if you assume one of the better sensitivities for a detector with three arms). Then a few years to weeks later they merge, and spend their last moments emitting in LIGO’s band. The evolution of some binary black holes is sketched in the plot below.

Binary black hole mergers across the eLISA and LIGO frequency bands

The evolution of binary black hole mergers (shown in blue). The eLISA and Advanced LIGO sensitivity curves are shown in purple and orange respectively. As the black holes inspiral, they emit gravitational waves at higher frequency, shifting from the eLISA band to the LIGO band (where they merge). The scale at the top gives the approximate time until merger. Fig. 1 of Sesana (2016).

Seeing the signal in two bands can help in several ways. First, it can increase our confidence in detection, potentially picking out signals that we wouldn’t otherwise find. Second, it gives us a way to verify the calibration of our instruments. Third, it lets us improve our parameter-estimation precision: eLISA would see thousands of cycles, which lets it pin down the masses to high accuracy, and these results can be combined with LIGO’s measurements of the strong-field dynamics during merger to give a fantastic overall picture of the system. Finally, since eLISA can measure the signal for a considerable time, it can localise the source well, perhaps to just a square degree; since we’ll also be able to predict when the merger will happen, you can point telescopes at the right place ahead of time to look for any electromagnetic counterparts which may exist. Opening up the gravitational-wave spectrum is awesome!

The LALInference sky map

One of my jobs as part of the Parameter Estimation group was to produce the sky maps from our parameter-estimation runs. This is a relatively simple job of just running our sky area code. I had done it many times while we were collecting our results, so I knew that the final versions were perfectly consistent with everything else we had seen. While I was comfortable with running the code and checking the results, I was rather nervous uploading the results to our database to be shared with our observational partners. I somehow managed to upload three copies by accident. D’oh! Perhaps future historians will someday look back at the records for G184098/GW150914 and wonder what was this idiot Christopher Berry doing? Probably no-one would ever notice, but I know the records are there…

Advanced LIGO detects gravitational waves!

The first observing run (O1) of Advanced LIGO was scheduled to start 9 am GMT (10 am BST), 14 September 2015. Both gravitational-wave detectors were running fine, but there were a few extra things the calibration team wanted to do and not all the automated analysis had been set up, so it was decided to postpone the start of the run until 18 September. No-one told the Universe. At 9:50 am, 14 September there was an event. To those of us in the Collaboration, it is known as The Event.

Measured strain

The Event’s signal as measured by LIGO Hanford and LIGO Livingston. The shown signal has been filtered to make it more presentable. The Hanford signal is inverted because of the relative orientations of the two interferometers. You can clearly see that both observatories see the same signal, and even without fancy analysis, that there are definitely some wibbles there! Part of Fig. 1 from the Discovery Paper.

Detection

The detectors were taking data, and the coherent WaveBurst (cWB) detection pipeline was set up to analyse them. It finds triggers in near real time, and so about 3 minutes after the gravitational wave reached Earth, cWB found it. I remember seeing the first few emails… and ignoring them—I was busy trying to finalise details for our default parameter-estimation runs for the start of O1. However, the emails kept on coming. And coming. Something exciting was happening. The detector scientists at the sites swung into action and made sure that the instruments would run undisturbed so we could get lots of data about their behaviour; meanwhile, the remaining data-analysis codes were set running with ruthless efficiency.

The cWB algorithm doesn’t search for a particular type of signal; instead it looks for the same thing in both detectors—it’s what we call a burst search. Burst searches could find supernova explosions, black hole mergers, or something unexpected (so long as the signal is short). Looking at the data, we saw that the frequency increased with time: the characteristic chirp of a binary black hole merger! This meant that the searches that specifically look for the coalescence of binaries (black holes or neutron stars) should find it too, if the signal was from a binary black hole. It also meant that we could analyse the data to measure the parameters.

Time–frequency plot of The Event

A time–frequency plot that shows The Event’s signal power in the detectors. You can see the signal increase in frequency as time goes on: the characteristic chirp of a binary merger! The fact that you can spot the signal by eye shows how loud it is. Part of Fig. 1 from the Discovery Paper.

The signal was quite short, so it was quick for us to run parameter estimation on it—this makes a welcome change as runs on long, binary neutron-star signals can take months. We actually had the first runs done before all the detection pipelines had finished running. We kept the results secret: the detection people didn’t want to know the results before they looked at their own results (it reminded me of the episode of Whatever Happened to the Likely Lads where they try to avoid hearing the results of the football until they can watch the match). The results from each of the detection pipelines came in [bonus note]. There were the other burst searches: LALInferenceBurst found strong evidence for a signal, and BayesWave classified it clearly as a signal, not noise or a glitch; then the binary searches: both GstLAL and PyCBC found the signal (the same signal) at high significance. The parameter-estimation results were beautiful—we had seen the merger of two black holes!

At first, we couldn’t quite believe that we had actually made the detection. The signal seemed too perfect. Famously, LIGO conducts blind injections: fake signals are secretly put into the data to check that we do things properly. This happened during the run of initial LIGO (an event known as the Big Dog), and many people still remembered the disappointment. We weren’t set up for injections at the time (that was part of getting ready for O1), and the heads of the Collaboration said that there were no plans for blind injections, but people wanted to be sure. Only three or four people in the Collaboration can perform a blind injection; however, it’s a little-publicised fact that you can tell if there was an injection. The data from the instruments is recorded at many stages, so there’s a channel which records the injected signal. During a blind-injection run, we’re not allowed to look at this, but this wasn’t a blind-injection run, so this was checked and rechecked. There was nothing. People considered other ways of injecting the signal that wouldn’t be recorded (perhaps splitting the signal up and putting small bits in lots of different systems), but no-one actually understands all the control systems well enough to get this to work. There were basically two ways you could fake the signal. The first is to hack into the servers at both sites and Caltech simultaneously and modify the data before it got distributed. You would need to replace all the back-ups and make sure you didn’t leave any traces of tampering. You would also need to understand the control system well enough that all the auxiliary channels (the signal as recorded at over 30 different stages throughout the detectors’ systems) had the right data. The second is to place a device inside the interferometers that would inject the signal. As long as you had a detailed understanding of the instruments, this would be simple: you’d just need to break into both interferometers without being noticed.
Since the interferometers are two of the most sensitive machines ever made, this is like that scene from Mission: Impossible, except on the actually impossible difficulty setting. You would need to break into the vacuum tube (by installing an airlock in the concrete tubes without disturbing the seismometers), not disturb the instrument while working on it, and not scatter any of the (invisible) infra-red laser light. You’d need to do this at both sites, and then break in again to remove the devices so they’re not found now that O1 is finished. The devices would also need to be perfectly synchronised. I would love to see a movie where they try to fake the signal, but I am convinced, absolutely, that the easiest way to inject the signal is to collide two black holes a billion years ago. (Also a good plot for a film?)

There is no doubt. We have detected gravitational waves. (I cannot articulate how happy I was to hit the button to update that page! [bonus note])

I still remember the exact moment this hit me. I was giving a public talk on black holes. It was a talk similar to ones I have given many times before. I start with introducing general relativity and the curving of spacetime, then I talk about the idea of a black hole. Next I move on to evidence for astrophysical black holes, and I show the video zooming into the centre of the Milky Way, ending with the stars orbiting around Sagittarius A*, the massive black hole in the centre of our galaxy (shown below). As I said that the motion of the stars was our best evidence for the existence of black holes, I realised that this was no longer the case. Now, we have a whole new insight into the properties of black holes.

Gravitational-wave astronomy

Having caught a gravitational wave, what do you do with it? It turns out that there’s rather a lot of science you can do. The last few months have been exhausting. I think we’ve done a good job as a Collaboration of assembling all the results we wanted to go with the detection—especially since lots of things were being done for the first time! I’m sure we’ll update our analysis with better techniques and find new ways of using the data, but for now I hope everyone can enjoy what we have discovered so far.

I will write up a more technical post on the results; here we’ll run through some of the highlights. For more details of anything, check out the data release.

The source

The results of our parameter-estimation runs tell us about the nature of the source. We have a binary with objects of masses 36^{+5}_{-4} M_\odot and 29^{+4}_{-4} M_\odot, where M_\odot indicates the mass of our Sun (about 2 \times 10^{30} kilograms). If you’re curious what’s going on with these numbers and the pluses and minuses, check out this bonus note.

Binary black hole masses

Estimated masses for the two black holes in the binary. m_1^\mathrm{source} is the mass of the heavier black hole and m_2^\mathrm{source} is the mass of the lighter black hole. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which made me incredibly happy! Fig. 1 from the Parameter Estimation Paper.

We know that we’re dealing with compact objects (regular stars could never get close enough together to orbit fast enough to emit gravitational waves at the right frequency), and the only compact objects that can be as massive as these objects are black holes. This means we’ve discovered the first stellar-mass black hole binary! We’ve also never seen stellar-mass black holes (as opposed to the supermassive flavour that live in the centres of galaxies) this heavy, but don’t get too attached to that record.

Black holes have at most three properties. This makes them much simpler than a Starbucks Coffee (they also stay black regardless of how much milk you add). Black holes are described by their mass, their spin (how much they rotate), and their electric charge. We don’t expect black holes out in the Universe to have much electric charge because (i) it’s very hard to separate lots of positive and negative charge in the first place, and (ii) even if you succeed at (i), it’s difficult to keep positive and negative charge apart. This is kind of like separating small children and sticky things that are likely to stain. Since the electric charge can be ignored, we just need mass and spin. We’ve measured masses, can we measure spins?

Black hole spins are defined to be between 0 (no spin) and 1 (the maximum amount you can have). Our best estimates are that the bigger black hole has spin 0.3_{-0.3}^{+0.5}, and the smaller one has spin 0.5_{-0.4}^{+0.5} (these numbers have been rounded). These aren’t great measurements. For the smaller black hole, the posterior is nearly flat: its spin is almost equally probable to take any allowed value (not quite, but we haven’t learnt much about its size). For the bigger black hole, we do slightly better, and it seems that the spin is on the smaller side. This is interesting, as measurements of spins for black holes in X-ray binaries tend to be on the higher side: perhaps there are different types of black holes?

We can’t measure the spins precisely for a few reasons. The signal is short, so we don’t see lots of wibbling while the binaries are orbiting each other (the tell-tale sign of spin). Results for the orientation of the binary also suggest that we’re looking at it either face on or face off, which makes any wobbles in the orbit that are there less visible. However, there is one particular combination of the spins, which we call the effective spin, that we can measure. The effective spin controls how the black holes spiral together. It has a value of 1 if both black holes have max spin values, and are rotating the same way as the binary is orbiting. It has a value of −1 if the black holes have max spin values and are both rotating exactly the opposite way to the binary’s orbit. We find that the effective spin is small, -0.06_{-0.18}^{+0.17}. This could mean that both black holes have small spins, or that they have larger spins that aren’t aligned with the orbit (or each other). We have learnt something about the spins, it’s just not too easy to tease that apart to give values for each of the black holes.
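For the curious, the effective spin is the mass-weighted combination of the two spin components along the orbital angular momentum; this is the standard definition used in the literature, and the masses and spins below are purely illustrative.

```python
import numpy as np

def effective_spin(m1, m2, a1, a2, tilt1, tilt2):
    """Mass-weighted aligned-spin combination; tilts are the angles
    between each spin and the orbital angular momentum, in radians."""
    return (m1 * a1 * np.cos(tilt1) + m2 * a2 * np.cos(tilt2)) / (m1 + m2)

# Both spins maximal and aligned with the orbit gives +1...
print(effective_spin(36, 29, 1.0, 1.0, 0.0, 0.0))
# ...and maximal spins exactly opposed to the orbit give -1.
print(effective_spin(36, 29, 1.0, 1.0, np.pi, np.pi))
```

Any in-plane spin components drop out of this combination, which is why a small effective spin can hide sizeable misaligned spins.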

As the two black holes orbit each other, they (obviously, given what we’ve seen) emit gravitational waves. These carry away energy and angular momentum, so the orbit shrinks and the black holes inspiral together. Eventually they merge and settle down into a single bigger black hole. All this happens while we’re watching (we have great seats). A simulation of this happening is below. You can see that the frequency of the gravitational waves is twice that of the orbit, and the video freezes around the merger so you can see two become one.

What are the properties of the final black hole? The mass of the remnant black hole is 62^{+4}_{-4} M_\odot. It is the new record holder for the largest observed stellar-mass black hole!

If you do some quick sums, you’ll notice that the final black hole is lighter than the sum of the two initial black holes. This is because of that energy that was carried away by the gravitational waves. Over the entire evolution of the system, 3.0^{+0.5}_{-0.4} M_\odot c^2 \simeq 5.3_{-0.8}^{+0.9} \times 10^{47}~\mathrm{J} of energy was radiated away as gravitational waves (where c is the speed of light as in Einstein’s famous equation). This is a colossal amount of energy. You’d need to eat over eight billion times the mass of the Sun in butter to get the equivalent amount of calories. (Do not attempt the wafer-thin mint afterwards). The majority of that energy is radiated within the final second. For a brief moment, this one black hole merger outshines the whole visible Universe, if you compare its gravitational-wave luminosity to everything else’s visible-light luminosity!
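You can check the energy bookkeeping with a few lines; the energy density of butter (roughly 3 \times 10^7 joules per kilogram, about 7 kcal per gram) is my own approximation.

```python
M_SUN = 1.989e30      # solar mass in kg
C = 2.998e8           # speed of light in m/s
BUTTER = 3.0e7        # approximate energy density of butter in J/kg

# E = m c^2 for the 3 solar masses radiated as gravitational waves.
energy = 3.0 * M_SUN * C**2
print(f"{energy:.2e} J")            # ~5.4e47 J

# Equivalent mass of butter, expressed in solar masses.
butter_mass = energy / BUTTER
print(butter_mass / M_SUN)          # ~9 billion
```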

We’ve measured mass, what about spin? The final black hole’s spin is 0.67^{+0.05}_{-0.07}, which is in the middling-to-high range. You’ll notice that we can deduce this to much higher precision than the spins of the two initial black holes. This is because it is largely fixed by the orbital angular momentum of the binary, and so its value is set by orbital dynamics and gravitational physics. I think it’s incredibly satisfying that we can make such a clean measurement of the spin.

We have measured both of the properties of the final black hole, and we have done this using spacetime itself. This is astounding!

Final black hole mass and spin

Estimated mass M_\mathrm{f}^\mathrm{source} and spin a_\mathrm{f}^\mathrm{source} for the final black hole. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which still makes me incredibly happy! Fig. 3 from the Parameter Estimation Paper.

How big is the final black hole? My colleague Nathan Johnson-McDaniel has done some calculations and finds that the total distance around the equator of the black hole’s event horizon is about 1100~\mathrm{km} (about six times the length of the M25). Since the black hole is spinning, its event horizon is not a perfect sphere, but it bulges out around the equator. The circumference going over the black hole’s poles is about 1000~\mathrm{km} (about five and a half M25s, so maybe this would be the better route for your morning commute). The total area of the event horizon is about 367000~\mathrm{km}^2. If you flattened this out, it would cover an area about the size of Montana. Neil Cornish (of Montana State University) said that he’s not sure which we know more accurately: the area of the event horizon or the area of Montana!
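These numbers follow from the geometry of a Kerr (spinning) black hole. Here is a sketch of the calculation using the median mass and spin; the expressions for the horizon radius, equatorial circumference and area are the standard Kerr formulas.

```python
import numpy as np

G = 6.674e-11         # gravitational constant in m^3 kg^-1 s^-2
C = 2.998e8           # speed of light in m/s
M_SUN = 1.989e30      # solar mass in kg

M = 62 * M_SUN        # final black hole mass
chi = 0.67            # final black hole spin

m_geom = G * M / C**2                         # gravitational radius in metres
r_plus = m_geom * (1 + np.sqrt(1 - chi**2))   # Kerr horizon radius
a = chi * m_geom                              # spin parameter (length units)

# Equatorial circumference and horizon area of a Kerr black hole.
circumference_eq = 2 * np.pi * (r_plus**2 + a**2) / r_plus
area = 4 * np.pi * (r_plus**2 + a**2)

print(circumference_eq / 1e3)   # ~1100 km around the equator
print(area / 1e6)               # ~3.7e5 km^2
```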

OK, we’ve covered the properties of the black holes, perhaps it’s time for a celebratory biscuit and a sit down? But we’re not finished yet, where is the source?

We infer that the source is at a luminosity distance of 410^{+160}_{-180}~\mathrm{Mpc}; a megaparsec is a unit of length (regardless of what Han Solo thinks) equal to about 3.3 million light-years. The luminosity distance isn’t quite the same as the distance you would record using a tape measure, because it takes into account the effects of the expansion of the Universe. But it’s pretty close. Using our 90% probability range, the merger would have happened sometime between 700 million years and 1.6 billion years ago. This coincides with the Proterozoic Eon on Earth, the time when the first oxygen-dependent animals appeared. Gasp!

With only the two LIGO detectors in operation, it is difficult to localise where on the sky the source came from. To have a 90% chance of finding the source, you’d need to cover 600~\mathrm{deg^2} of the sky. For comparison, the full Moon is about 0.2~\mathrm{deg^2}. This is a large area to cover with a telescope, and we don’t expect there to be anything to see for a black hole merger, but that hasn’t stopped our intrepid partners from trying. For a lovely visualisation of where we think the source could be, marvel at the Gravoscope.

Astrophysics

The detection of this black hole merger tells us:

  • Black holes 30 times the mass of our Sun do form. These must be the remains of really massive stars. Stars lose mass throughout their lifetime through stellar winds. How much they lose depends on what they are made from. Astronomers have a simple periodic table: hydrogen, helium and metals. (Everything that is not hydrogen or helium is a metal, regardless of what it actually is.) More metals means more mass loss, so to end up with our black holes, we expect that they must have started out as stars with less than half the fraction of metals found in our Sun. This may mean the parent stars were some of the first stars to be born in the Universe.
  • Binary black holes exist. There are two ways to make a black hole binary. You can start with two stars in a binary (stars love company, so most have at least one companion), and have them live their entire lives together, leaving behind the two black holes. Alternatively, you could have somewhere where there are lots of stars and black holes, like a globular cluster, and the two black holes could wander close enough together to form the binary. People have suggested that either (or both) could happen. You might be able to tell the two apart using spin measurements. The spins of the black holes are more likely to be aligned (with each other and the way that the binary orbits) if they came from stars formed in a binary. The spins would be randomly orientated if two black holes came together to form a binary by chance. We can’t tell the two apart now, but perhaps when we have more observations!
  • Binary black holes merge. Since we’ve seen a signal from two black holes inspiralling together and merging, we know that this happens. We can also estimate how often this happens, given how many signals we’ve seen in our observations. Somewhere in the observable Universe, a similar binary could be merging about every 15 minutes. For LIGO, this should mean that we’ll be seeing more. As the detectors’ sensitivity improves (especially at lower frequencies), we’ll be able to detect more and more systems [bonus note]. We’re still uncertain in our predictions of exactly how many we’ll see. We’ll understand things better after observing for longer: were we just lucky, or were we unlucky not to have seen more? Given these early results, we estimate that by the end of the third observing run (O3), we could have over 30 detections. It looks like I will be kept busy over the next few years…

Gravitational physics

Black holes are the parts of the Universe with the strongest possible gravity. They are the ideal place to test Einstein’s theory of general relativity. The gravitational waves from a black hole merger let us probe right down to the event horizon, using ripples in spacetime itself. This makes gravitational waves a perfect way of testing our understanding of gravity.

We have run some tests on the signal to see how well it matches our expectations. We find no reason to doubt that Einstein was right.

For the first check, we try to reconstruct the signal without putting in information about what gravitational waves from a binary merger look like; we find something that agrees wonderfully with our predictions. We can reverse engineer what the gravitational waves from a black hole merger look like from the data!

Estimated waveforms from different models

Recovered gravitational waveforms from our analysis of The Event. The dark band shows our estimate for the waveform without assuming a particular source (it is built from wavelets, which sound adorable to me). The light bands show results if we assume it is a binary black hole (BBH) as predicted by general relativity. They match really well! Fig. 6 from the Parameter Estimation Paper.

As a consistency test, we checked what would happen if you split the signal in two, and analysed each half independently with our parameter-estimation codes. If there’s something weird, we would expect to get different results. We cut the data into a high frequency piece and a low frequency piece at roughly where we think the merger starts. The lower frequency (mostly) inspiral part is more similar to the physics we’ve tested before, while the higher frequency (mostly) merger and ringdown is new and hence more uncertain. Looking at estimates for the mass and spin of the final black hole, we find that the two pieces are consistent as expected.

In general relativity, gravitational waves travel at the speed of light. (The speed of light is misnamed, it’s really a property of spacetime, rather than of light). If gravitons, the theoretical particle that carries the gravitational force, have a mass, then gravitational waves can’t travel at the speed of light, but would travel slightly slower. Because our signals match general relativity so well, we can put a limit on the maximum allowed mass. The mass of the graviton is less than 1.2 \times 10^{-22}~\mathrm{eV\,c^{-2}} (in units that the particle physicists like). This is tiny! It is about as many times lighter than an electron as an electron is lighter than a teaspoon of water (well, 4~\mathrm{g}, which is just under a full teaspoon), or as many times lighter than the almost teaspoon of water is than three Earths.
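If you prefer the bound expressed as a Compton wavelength (as in the plot below), the conversion is \lambda_g = hc/(m_g c^2), which for the mass bound above gives roughly 10^{13} kilometres.

```python
H = 4.1357e-15   # Planck constant in eV s
C = 2.998e8      # speed of light in m/s

m_g_ev = 1.2e-22             # upper bound on graviton mass-energy in eV
lambda_g = H * C / m_g_ev    # corresponding Compton wavelength in metres
print(lambda_g / 1e3)        # ~1e13 km
```

A smaller mass bound would correspond to an even longer wavelength, so this is a lower limit on \lambda_g.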

Limits on the Compton wavelength of the graviton

Bounds on the Compton wavelength \lambda_g of the graviton from The Event (GW150914). The Compton wavelength is a length defined by the mass of a particle: smaller masses mean large wavelengths. We place much better limits than existing tests from the Solar System or the double pulsar. There are some cosmological tests which are stronger still (but they make assumptions about dark matter). Fig. 8 from the Testing General Relativity Paper.

Overall, things look good for general relativity: it has passed a tough new test. However, it will be extremely exciting to get more observations. Then we can combine all our results to get the best insights into gravity ever. Perhaps we’ll find a hint of something new, or perhaps we’ll discover that general relativity is perfect? We’ll have to wait and see.

Conclusion

100 years after Einstein predicted gravitational waves and Schwarzschild found the equations describing a black hole, LIGO has detected gravitational waves from two black holes orbiting each other. This is the culmination of over forty years of effort. The black holes inspiral together and merge to form a bigger black hole. This is the signal I would have wished for. From the signal we can infer the properties of the source (some better than others), which makes me exceedingly happy. We’re starting to learn about the properties of black holes, and to test Einstein’s theory. As we continue to look for gravitational waves (with Advanced Virgo hopefully joining next year), we’ll learn more and perhaps make other detections too. The era of gravitational-wave astronomy has begun!

After all that, I am in need of a good nap! (I was too excited to sleep last night, it was like a cross between Christmas Eve and the night before final exams). For more on the story from scientists inside the LIGO–Virgo Collaboration, check out posts by:

  • Matt Pitkin (the tireless reviewer of our parameter-estimation work)
  • Brynley Pearlstone (who’s just arrived at the LIGO Hanford site)
  • Amber Stuver (who blogged through LIGO’s initial runs too)
  • Rebecca Douglas (a good person to ask about what to build a detector out of)
  • Daniel Williams (someone fresh to the Collaboration)
  • Sean Leavey (a PhD student working on interferometry)
  • Andrew Williamson (who likes to look for gravitational waves that coincide with gamma-ray bursts)
  • Shane Larson (another fan of space-based gravitational-wave detectors)
  • Roy Williams (who helps to make all the wonderful open data releases for LIGO)
  • Chris North (creator of the Gravoscope amongst other things)

There’s also this video from the heads of my group in Birmingham on their reactions to the discovery (the credits at the end show how large an effort the detection is).

Discovery paper: Observation of Gravitational Waves from a Binary Black Hole Merger
Data release: LIGO Open Science Center

Bonus notes

Search pipelines

At the Large Hadron Collider, there are separate experiments that independently analyse data, and this is an excellent cross-check of any big discoveries (like the Higgs). We’re not in a position to do this for gravitational waves. However, the different search pipelines are mostly independent of each other. They use different criteria to rank potential candidates, and the burst and binary searches even look for different types of signals. Therefore, the different searches act as a check of each other. The teams can get competitive at times, so they do check each other’s results thoroughly.

The announcement

Updating Have we detected gravitational waves yet? was doubly exciting as I had to successfully connect to the University’s wi-fi. I managed this with about a minute to spare. Then I hovered with my finger on the button until David Reitze said “We. Have detected. Gravitational waves!” The exact moment is captured in the video below, I’m just off to the left.

University of Birmingham detection announcement

The moment of the announcement of the first observation of gravitational waves at the University of Birmingham. Credit: Kat Grover

Parameters and uncertainty

We don’t get a single definite number from our analysis; we have some uncertainty too. Therefore, our results are usually written as the median value (which means we think that the true value is equally probable to be above or below this number), plus the range needed to safely enclose 90% of the probability (so there’s a 10% chance the true value is outside this range). For the mass of the bigger black hole, the median estimate is 36 M_\odot; we think there’s a 5% chance that the mass is below 32 M_\odot = (36 - 4) M_\odot, and a 5% chance it’s above 41 M_\odot = (36 + 5) M_\odot, so we write our result as 36^{+5}_{-4} M_\odot.
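As a sketch of how such a summary comes out of posterior samples (the samples here are randomly generated, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in posterior samples for the bigger black hole's mass.
samples = rng.normal(36, 2.7, size=100_000)

# Median plus a symmetric 90% credible interval (5th to 95th percentile).
lo, mid, hi = np.percentile(samples, [5, 50, 95])
print(f"{mid:.0f}_-{mid - lo:.0f}^+{hi - mid:.0f}")   # something like 36_-4^+4
```

The real intervals are not symmetric because the posteriors themselves are skewed, which is why the plus and minus numbers in the paper differ.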

Sensitivity and ranges

Gravitational-wave detectors measure the amplitude of the wave (the amount of stretch and squash). The measured amplitude is smaller for sources that are further away: if you double the luminosity distance of a source, you halve its amplitude. Therefore, if you improve your detectors’ sensitivity by a factor of two, you can see things twice as far away. This means that we observe a volume of space (2 × 2 × 2) = 8 times as big. (This isn’t exactly the case because of pesky factors from the expansion of the Universe, but is approximately right). Even a small improvement in sensitivity can have a considerable impact on the number of signals detected!
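This cube rule is handy to keep as a mental (or literal) helper function:

```python
def rate_scaling(sensitivity_gain):
    """Factor by which the expected detection rate grows for a given
    gain in amplitude sensitivity: range scales linearly with
    sensitivity, and the surveyed volume as its cube."""
    return sensitivity_gain ** 3

print(rate_scaling(2.0))    # doubling the range: 8x the detections
print(rate_scaling(1.25))   # even a 25% improvement gives roughly 2x
```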

Interstellar—science and fiction

Interstellar black hole

Planet and accretion disc orbiting Gargantua, the black hole in Interstellar. Visual effects produced by the cunning people of Double Negative.

Interstellar is the latest film from Christopher Nolan. After completing his work with the Dark Knight, it seems he has moved onto even darker material: black holes. I have been looking forward to the film for some time, but not because of Nolan’s involvement (even though I have enjoyed his previous work). The film is based upon the ideas of Kip Thorne, an eminent theoretical physicist. Kip literally wrote the book on general relativity. He was a pioneer of gravitational-wave science, and earlier versions of the script included the detection of gravitational waves (I’m sad that this has been removed). Here, I’ll briefly discuss the film, before going on to look at its science (there are some minor spoilers).

General relativity textbooks

My copies of Gravitation by Misner, Thorne & Wheeler, and General Theory of Relativity by Dirac. The difference in length might tell you something about the authors. MTW (as Gravitation is often called) is a useful textbook. It is so heavy that you can actually use it for experiments testing gravity.

Last week, my research group organised a meeting for our LIGO collaborators. We all got together in Birmingham to work on how we analyse gravitational-wave data. It was actually rather productive. We decided to celebrate the end of our meeting with a trip to see Interstellar. The consensus was that it was good. We were rather pleased by the amount of science in the film, undoubtedly due to Kip’s involvement (even if he doesn’t approve of everything)—we also liked how there was a robot called KIPP.

My favourite characters were, by far, the robots. They had more personality than any of the other characters: I was more concerned for their survival than for anyone else. (No-one was wearing red, but I thought it was quite obvious who was expendable). Michael Caine’s character is apparently based upon Kip—they do have similar beards.

The film is beautiful. Its visualisations have been much hyped (we’ll discuss these later). It shows an obvious debt to Kubrick’s 2001: A Space Odyssey. This is both for better and worse: mimicking the wonderful cinematography and the slow pacing. However, the conclusion lacks the mystery of 2001 or even the intelligence of Nolan’s earlier work Memento or Inception (both of which I would highly recommend).

I don’t want to say too much about the plot. I (unsurprisingly) approve of its pro-science perspective. There were some plot points that irked me. In particular, why on Earth (or off Earth) would a mission with the aim of continuing the human race only take one woman? Had no-one heard about putting all your eggs in one basket? Also, using Morse code to transmit complicated scientific data seems like a bad idea™. What if there were a typo? However, I did enjoy the action sequences and the few tense moments.

Why so scientific?

I expect that if you were after a proper film critique you’d be reading something else, so let’s discuss science. There is a lot of science in Interstellar, and I can’t go through it all, so I want to highlight a couple of pieces that I think are really cool.

Time is relative

An interesting story device is the idea that time is relative, and its passing depends upon where you are in a gravitational field. This is entirely correct (and although time might flow at different apparent speeds, it never goes backwards). Imagine that you are tied to a length of extremely long and strong string, and lowered towards a black hole. (I wonder if that would make a good movie?) Let’s start off by considering a non-rotating black hole. The passage of time for you, relative to your friend holding the other end of the string infinitely far away from the black hole, depends upon how close to the black hole you are. Times are related by

\displaystyle \Delta T_\mathrm{infinity} = \left(1 - \frac{2 G M}{c^2 r}\right)^{-1/2} \Delta T_\mathrm{string},

where M is the black hole’s mass, G is Newton’s gravitational constant, c is the speed of light, and r measures how far you are from (the centre of) the black hole (more on this in a moment). If you were to flash a light every \Delta T_\mathrm{string}, your friend at infinity would see them separated by time \Delta T_\mathrm{infinity}; it would be as if you were doing things in slow motion.

You might recognise r = 2GM/c^2 as the location of the event horizon: the point of no return from a black hole. At the event horizon, we would be dividing by zero in the equation above: time would appear to run infinitely slowly for you. This is rather curious: time continues to run just fine for you, but to your friend watching from infinity you would fade to a complete stand-still.
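To get a feel for the formula, here is a short Python sketch of the dilation factor at a few radii outside a (hypothetical) 10-solar-mass non-rotating black hole:

```python
# Gravitational time dilation on the string, relative to infinity,
# for a non-rotating black hole (the formula above).
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def dilation_factor(r, M):
    """Return Delta T_infinity / Delta T_string at radius r."""
    r_s = 2 * G * M / c**2  # Schwarzschild radius (the event horizon)
    return (1 - r_s / r) ** -0.5

M = 10 * M_sun  # a 10-solar-mass black hole, chosen for illustration
r_s = 2 * G * M / c**2
for r in [10 * r_s, 2 * r_s, 1.01 * r_s]:
    print(f"r = {r / r_s:5.2f} r_s: factor = {dilation_factor(r, M):.2f}")
```

Far from the hole the factor is close to 1 (clocks agree); hugging the horizon it blows up, just as the formula promises.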

Actually, you would also fade from view too. The frequency of light gets shifted by gravity. Light is a wave: its frequency is set by how fast it completes one cycle, and the period of the wave gets stretched using the formula above. As you get closer to a black hole, light from you becomes more red (then infra-red, radio, etc.), and also becomes dimmer (as less energy arrives at your friend at infinity in a given time). You appear to fade to black as you approach the event horizon. This stretching of light to lower frequencies is known as red-shifting (as red light has the lowest frequencies of the visible spectrum). I didn’t see much sign of it in Interstellar (we’ll see the effect it should have had below), although it has appeared in an episode of Stargate: SG-1 as a plot device.

The event horizon is also the point where the force on the string would become infinite. Your friend at infinity would only be able to pull you back up if they ate an infinite amount of spinach, and sadly there is not enough balsamic dressing to go around.

A technicality that is often brushed over is what the distance r actually measures. I said it tells you how far you are from the centre of the black hole, but it’s not as simple as dropping in a tape measure to see where the singularity is. In fact, we measure the distance differently. We instead measure the distance around the circumference of a circle, and divide this by 2\pi to calculate r. The further away we are, the bigger the circle, and so the larger r. If space were flat, this distance would be exactly the same as the distance to the middle, but when considering a black hole, we do not have flat space!

This time stretching due to gravity is a consequence of Einstein’s theory of general relativity. There is another similar effect in his theory of special relativity. If something travels past you with a speed v, then time is slowed according to

\displaystyle \Delta T_\mathrm{you} = \left(1 - \frac{v^2}{c^2}\right)^{-1/2} \Delta T_\mathrm{whizzing\:thing}.

If it were to travel closer and closer to the speed of light, the passage of time for it would slow to closer and closer to a standstill. This is just like crossing the event horizon.

Imagine that while you were sitting on the end of your string, a planet orbiting the black hole whizzed by. Someone on the planet flashes a torch every second (as they measure time), and when you see this, you flash your torch to your friend at infinity. The passage of time on the planet appears slowed to you because of the planet’s speed (using the special relativity formula above), and the passage of time for you appears slowed because of gravity to your friend at infinity. We can combine the two effects to work out the total difference in the apparent passage of time on the planet and at infinity. We need to know how fast the planet moves, but it’s not too difficult for a circular orbit, and after some algebra

\displaystyle \Delta T_\mathrm{infinity} = \left(1 - \frac{3 G M}{c^2 r}\right)^{-1/2} \Delta T_\mathrm{planet}.

In Interstellar, there is a planet where each hour corresponds to seven years at a distance. That is a difference of about 61000. We can get this with our formula if r \approx 3GM/c^2. Sadly, you can’t have a stable orbit inside r = 6GM/c^2, so there wouldn’t be a planet there. However, the film does say that the black hole is spinning. This does change things (you can orbit closer in), so it should work out. I’ve not done the calculations, but I might give it a go in the future.
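We can check these numbers with a few lines of Python, using the non-rotating circular-orbit formula above (which, as noted, wouldn’t actually allow a stable orbit this close in):

```python
# One hour on the planet corresponds to seven years far away.
hours_per_year = 365.25 * 24
factor = 7 * hours_per_year  # Delta T_infinity / Delta T_planet
print(f"factor = {factor:.0f}")  # about 61000

# Invert the circular-orbit formula, factor = (1 - 3GM/(c^2 r))^(-1/2),
# to get r = (3GM/c^2) / (1 - 1/factor^2): essentially r = 3GM/c^2.
r_over_rg = 3 / (1 - factor ** -2)  # radius in units of GM/c^2
print(f"r = {r_over_rg:.9f} GM/c^2")
```

Because the factor is so large, the orbit must sit a hair’s breadth outside 3GM/c^2, which is why a rapidly spinning black hole (where stable orbits can creep closer in) is needed to make the scenario work.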

Black holes

Interstellar does an excellent job of representing a black hole. Black holes are difficult to visualise, but the film correctly depicts them as three-dimensional: they are not a two-dimensional hole.

As nothing escapes from a black hole (and they don’t have a surface), they are dark, a shadow on the sky. However, we can see their effects. The image at the top shows a disc about the black hole. Material falling into a black hole often has some angular momentum: it doesn’t fall straight in, but goes off to the side and swirls about, exactly as water whirls around the plug-hole before falling in. This material swirling around is known as an accretion disc. In the disc, things closer to the black hole are orbiting faster (just as planets closer to the Sun orbit faster than those further away). Hence different parts of the disc rub against each other. This slows the inner layers (making them lose angular momentum so that they move inwards), and also heats the disc. Try rubbing your hands together for a few seconds: they soon warm up. In an accretion disc about a black hole, things can become so hot (millions of degrees) that they emit X-rays. You wouldn’t want to get close because of this radiation! Looking for these X-rays is one way of spotting black holes.

The video below shows a simulation from NASA of an accretion disc about a black hole. It’s not quite as fancy as the Interstellar one, but it’s pretty cool. You can see the X-rays being red-shifted and blue-shifted (the opposite of red-shifted, when radiation gets squashed to higher frequencies) as a consequence of their orbital motion (the Doppler effect), but I’m not sure if it shows gravitational red-shifting.

Black holes bend spacetime, so light gets bent as it travels close to them. The video above shows this. You can see the light ring towards the centre, from light that has wrapped around the black hole. You can also see this in Interstellar. I especially like how the ring is offset to one side. This is exactly what you should expect for a rotating black hole: you can get closer in when you’re moving with the rotation of the black hole, getting swept around like a plastic duck around a whirlpool. You can also see how the disc appears bent as light from the back of the disc (which has to travel around the black hole) gets curved.

Light-bending and redshifting of an accretion disc around a black hole.

Light-bending around a black hole. This is figure 15 from James, von Tunzelmann, Franklin & Thorne (2015). The top image shows an accretion disc as seen in Interstellar, but without the lens flare. The middle image also includes (Doppler and gravitational) red-shifting that changes the colour of the light. To make the colour changes clear, the brightness has been artificially kept constant. The bottom image also includes the changes in brightness that would come with red-shifting. The left side of the disc is moving towards us, so it is brighter and blue-shifted, the right side is moving away so it is red-shifted. You can see (or rather can’t) how red-shifting causes things to fade from view. This is what the black hole and accretion disc would actually look like, but it was thought too confusing for the actual film.

It’s not only light from the disc that gets distorted, but light from stars (and galaxies) behind the black hole. This is known as gravitational lensing. This is one way of spotting black holes without accretion discs: you watch a field of stars, and if a black hole passes in front of one, its gravitational lensing will magnify the star. Spotting that change tells you something has passed between you and the star; working out its mass and size can tell you if it’s a black hole.

Looking at the shadow of a black hole (the region from which there is no light, which is surrounded by the innermost light ring) can tell you about the structure of spacetime close to the black hole. This could give you an idea of its mass and spin, or maybe even test if it matches the predictions of general relativity. We are hoping to do this for the massive black hole at the centre of our Galaxy and the massive black hole of the galaxy Messier 87 (M87). This will be done using the Event Horizon Telescope, an exciting project to use several telescopes together to make extremely accurate images.

Simulated Event Horizon Telescope image

False-colour image of what the Event Horizon Telescope could see when looking at Sagittarius A* (Dexter et al. 2010). Red-shifting makes some parts of the disc appear brighter and other parts dimmer.

Interstellar is science fiction; it contains many elements of fantasy. However, it does much better than most on getting the details of the physics correct. I hope that it will inspire many to investigate the fact behind the fiction (there’s now a paper out in Classical & Quantum Gravity about the visualisation of the black hole; it comes with some interesting videos). If you’ve not seen the film yet, it’s worth a watch. I wonder if they could put the gravitational waves back in for an extended DVD version?

Score out of 5 solar masses: enough for a neutron star, possibly not enough for a black hole.

Update: The Event Horizon Telescope Team did it! They have an image of M87’s black hole. It compares nicely to predictions. I’m impressed (definitely cake-worthy). Science has taken another bite out of science fiction.

The Event Horizon Telescope's image of M87*

The shadow of a black hole reconstructed from the radio observations of the Event Horizon Telescope. The black hole lies at the centre of M87, and is about 6.5 billion solar masses. Credit: Event Horizon Team

How big is a black hole?

Physicists love things that are simple. This may be one of the reasons that I think black holes are cool.

Black holes form when you have something so dense that nothing can resist its own gravity: it collapses down becoming smaller and smaller. Whatever formerly made up your object (usually, the remains of what made up a star), is crushed out of existence. It becomes infinitely compact, squeezed into an infinitely small space, such that you can say that the whatever was there no longer exists. Black holes aren’t made of anything: they are just empty spacetime!

A spherical cow

Daisy, a spherical cow, or “moo-on”. Spherical cows are highly prized as pets amongst physicists because of their high degree of symmetry and ability to survive in a vacuum. They also produce delicious milkshakes.

Black holes are very simple because they are just vacuum. They are much simpler than tables, or mugs of coffee, or even spherical cows, which are all made up of things: molecules and atoms and other particles all wibbling about and interacting with each other. If you’re a fan of Game of Thrones, then you know the plot is rather complicated because there are a lot of characters. However, in a single glass of water there may be 10^{25} molecules: imagine how involved things can be with that many things bouncing around, occasionally evaporating, or plotting to take over the Iron Throne and rust it to pieces! Even George R. R. Martin would struggle to kill off 10^{25} characters. Black holes have no internal parts, they have no microstructure, they are just… nothing…

(In case you’re the type of person to worry about such things, this might not quite be true in a quantum theory, but I’m just treating them classically here.)

Since black holes aren’t made of anything, they don’t have a surface. There is no boundary, no crispy sugar shell, no transition from space to something else. This makes it difficult to really talk about the size of black holes: it is a question I often get asked when giving public talks. Black holes are really infinitely small if we just consider the point that everything collapsed to, but that’s not too useful. When we want to consider a size for a black hole, we normally use its event horizon.

Point of no return sign

The event horizon is not actually sign-posted. It’s not possible to fix a sign-post in empty space, and it would be sucked into the black hole. The sign would disappear faster than a Ramsay Street sign during a tour of the Neighbours set.

The event horizon is the point of no return. Once passed, the black hole’s gravity is inescapable; there’s no way out, even if you were able to travel at the speed of light (this is what makes them black holes). The event horizon separates the parts of the Universe where you can happily wander around from those where you’re trapped plunging towards the centre of the black hole. It is, therefore, a sensible measure of the extent of a black hole: it marks the region where the black hole’s gravity has absolute dominion (which is better than possessing the Iron Throne, and possibly even dragons).

The size of the event horizon depends upon the mass of the black hole. More massive black holes have stronger gravity, so their event horizon extends further. You need to stay further away from bigger black holes!

If we were to consider the simplest type of black hole, it’s relatively (pun intended) easy to work out where the event horizon is. The event horizon is a spherical surface, with radius

\displaystyle r_\mathrm{S} = \frac{2GM}{c^2}.

This is known as the Schwarzschild radius, as this type of black hole was first theorised by Karl Schwarzschild (who was a real hard-core physicist). In this formula, M is the black hole’s mass (as it increases, so does the size of the event horizon); G is Newton’s gravitational constant (it sets the strength of gravity), and c is the speed of light (the same as in the infamous E = mc^2). You can plug in some numbers to this formula (if you’re anything like me, two or three times before getting the correct answer), to find out how big a black hole is (or equivalently, how much you need to squeeze something before it will collapse to a black hole).

What I find shocking is that black holes are tiny! I mean it, they’re really small. The Earth has a Schwarzschild radius of 9 mm, which means you could easily lose it down the back of the sofa. Until it promptly swallowed your sofa, of course. Stellar-mass black holes are just a few kilometres across. For comparison, the Sun has a radius of about 700,000 km. For the massive black hole at the centre of our Galaxy, it is 10^{10} m, which does sound a lot until you realise that it’s less than 10% of Earth’s orbital radius, and it’s about four million solar masses squeezed into that space.
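You can reproduce those numbers with a few lines of Python (the masses below are rough textbook values):

```python
# Schwarzschild radii for the examples in the text.
G = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Event horizon radius r_S = 2GM/c^2 for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

M_earth = 5.972e24       # kg
M_sun = 1.989e30         # kg
M_sgr_a = 4.0e6 * M_sun  # the Milky Way's central black hole

print(f"Earth:  {schwarzschild_radius(M_earth) * 1000:.1f} mm")
print(f"Sun:    {schwarzschild_radius(M_sun) / 1000:.1f} km")
print(f"Sgr A*: {schwarzschild_radius(M_sgr_a):.2e} m")
```

Running this gives roughly 9 mm for the Earth, 3 km for the Sun, and about 10^{10} m for the Galaxy’s central black hole, matching the figures above.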

The event horizon changes shape if the black hole has angular momentum (if it is spinning). In this case, you can get closer in, but the position of the horizon doesn’t change much. In the most extreme case, the event horizon is at a radius of

\displaystyle r_\mathrm{g} = \frac{GM}{c^2}.

Relativists like this formula, since it’s even simpler than for the Schwarzschild radius (we don’t have to remember the value of two), and it’s often called the gravitational radius. It sets the scale in relativity problems, so computer simulations often use it as a unit instead of metres or light-years or parsecs or any of the other units astronomy students despair over learning.

We’ve now figured out a sensible means of defining the size of a black hole: we can use the event horizon (which separates the part of the Universe where you can escape from the black hole, from that where there is no escape), and the size of this is around the gravitational radius r_\mathrm{g}. An interesting consequence of this (well, something I think is interesting), is to consider the effective density of a black hole. Density is how much mass you can fit into a given space. In our case, we’ll consider the mass of the black hole and the volume of its event horizon. This would be something like

\displaystyle \rho = \frac{3 M}{4 \pi r_\mathrm{g}^3} = \frac{3 c^6}{4 \pi G^3 M^2},

where I’ve used \rho for density and you shouldn’t worry about the factors of \pi or G or c, I’ve just put them in case you were curious. The interesting result is that the density decreases as the mass increases. More massive black holes are less dense! In fact, the most massive black holes, about a billion times the mass of our Sun, are less dense than water. They would float if you could find a big enough bath tub, and could somehow fill it without the water collapsing down to a black hole under its own weight…
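A quick Python sketch of the formula confirms this (using rough values for the constants):

```python
import math

# Effective density of a black hole, using the gravitational radius
# r_g = GM/c^2 as its size (as in the formula above).
G = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

def effective_density(M):
    """rho = 3M / (4 pi r_g^3) = 3 c^6 / (4 pi G^3 M^2)."""
    return 3 * c**6 / (4 * math.pi * G**3 * M**2)

rho_water = 1000.0  # kg/m^3
for solar_masses in [10, 1e6, 1e9]:
    rho = effective_density(solar_masses * M_sun)
    print(f"{solar_masses:>9} M_sun: {rho:.3e} kg/m^3")
```

The density falls off as 1/M^2, so while a stellar-mass black hole is absurdly dense, a billion-solar-mass one comes out at only around a hundred kilograms per cubic metre, less dense than water.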

In general, it probably makes a lot more sense (and doesn’t break the laws of physics), if you stick with a rubber duck, rather than a black hole, as a bath-time toy.

In conclusion, black holes might be smaller (and less dense) than you’d expect. However, this doesn’t mean that they’re not very dangerous. As Tyrion Lannister has shown, it doesn’t pay to judge someone by their size alone.

The missing link for black holes

There has been some recent excitement about the claimed identification of a 400-solar-mass black hole. A team of scientists have recently published a letter in the journal Nature where they show how X-ray measurements of a source in the nearby galaxy M82 can be interpreted as originating from a black hole with mass of around 400 times the mass of the Sun—from now on I’ll use M_\odot as shorthand for the mass of the Sun (one solar mass). This particular X-ray source is peculiarly bright and has long been suspected to potentially be a black hole with a mass around 100 M_\odot to 1000 M_\odot. If the result is confirmed, then it is the first definite detection of an intermediate-mass black hole, or IMBH for short, but why is this exciting?

Mass of black holes

In principle, a black hole can have any mass. To form a black hole you just need to squeeze mass down into a small enough space. For something the mass of the Earth, you need to squeeze down to a radius of about 9 mm, and for something about the mass of the Sun, you need to squeeze to a radius of about 3 km. Black holes are pretty small! Most of the time, things don’t collapse to form black holes because the materials they are made of are more than strong enough to counterbalance their own gravity.

Marshmallows

These innocent-looking marshmallows could collapse down to form black holes if they were squeezed down to a size of about 10^{-29} m. The only thing stopping this is the incredible strength of marshmallow when compared to gravity.

Stellar-mass black holes

Only very massive things, where gravitational forces are immense, collapse down to black holes. This happens when the most massive stars reach the end of their lifetimes. Stars are kept puffy because they are hot. They are made of plasma where all their constituent particles are happily whizzing around and bouncing into each other. This can continue to happen while the star is undergoing nuclear fusion which provides the energy to keep things hot. At some point this fuel runs out, and then the core of the star collapses. What happens next depends on the mass of the core. The least massive stars (like our own Sun) will collapse down to become white dwarfs. In white dwarfs, the force of gravity is balanced by electrons. Electrons are rather anti-social and dislike sharing the same space with each other (a concept known as the Pauli exclusion principle, which is a consequence of their exchange symmetry), hence they put up a bit of a fight when squeezed together. The electrons can balance the gravitational force for masses up to about 1.4 M_\odot, known as the Chandrasekhar mass. After that they get squeezed together with protons and we are left with a neutron star. Neutron stars are much like giant atomic nuclei. The force of gravity is now balanced by the neutrons who, like electrons, don’t like to share space, but are less easy to bully than the electrons. The maximum mass of a neutron star is not exactly known, but we think it’s somewhere between 2 M_\odot and 3 M_\odot. After this, nothing can resist gravity and you end up with a black hole of a few times the mass of the Sun.

Collapsing stars produce the imaginatively named stellar-mass black holes, as they are about the same mass as stars. Stars lose a lot of mass during their lifetime, so the mass of a newly born black hole is less than the original mass of the star that formed it. The maximum mass of stellar-mass black holes is determined by the maximum size of stars. We have good evidence for stellar-mass black holes, for example from looking at X-ray binaries, where we see a hot disc of material swirling around the black hole.

Massive black holes

We also have evidence for another class of black holes: massive black holes, MBHs to their friends, or, if trying to sound extra cool, supermassive black holes. These may be 10^5 M_\odot to 10^9 M_\odot. The strongest evidence comes from our own galaxy, where we can see stars in the centre of the galaxy orbiting something so small and heavy it can only be a black hole.

We think that there is an MBH at the centre of pretty much every galaxy, like there’s a hazelnut at the centre of a Ferrero Rocher (in this analogy, I guess the Nutella could be delicious dark matter). From the masses we’ve measured, the properties of these black holes are correlated with the properties of their surrounding galaxies: bigger galaxies have bigger MBHs. The most famous of these correlations is the M–sigma relation, between the mass of the black hole (M) and the velocity dispersion, the range of orbital speeds, of stars surrounding it (the Greek letter sigma, \sigma). These correlations tell us that the evolution of the galaxy and its central black hole are linked somehow; this could be just because of their shared history or through some extra feedback too.

MBHs can grow by accreting matter (swallowing up clouds of gas or stars that stray too close) or by merging with other MBHs (we know galaxies merge). The rather embarrassing problem, however, is that we don’t know what the MBHs have grown from. There are really huge MBHs already present in the early Universe (they power quasars), so MBHs must be able to grow quickly. Did they grow from regular stellar-mass black holes or some form of super black hole that formed from a giant star that doesn’t exist today? Did lots of stellar-mass black holes collide to form a seed or did material just accrete quickly? Did the initial black holes come from somewhere else other than stars, perhaps they are leftovers from the Big Bang? We don’t have the data to tell where MBHs came from yet (gravitational waves could be useful for this).

Intermediate-mass black holes

However MBHs grew, it is generally agreed that we should be able to find some intermediate-mass black holes: black holes which haven’t grown enough to become MBHs. These might be found in dwarf galaxies, or maybe in globular clusters (giant collections of stars that formed together), perhaps even in the centre of galaxies orbiting an MBH. Finding some IMBHs will hopefully tell us about how MBHs formed (and so, possibly about how galaxies formed too).

IMBHs have proved elusive. They are difficult to spot compared to their bigger brothers and sisters. Not finding any might mean we’d need to rethink our ideas of how MBHs formed, and try to find a way for them to either be born about a million times the mass of the Sun, or be guaranteed to grow that big. The finding of the first IMBH tells us that things are more like common sense would dictate: black holes can come in the expected range of masses (phew!). We now need to identify some more to learn about their properties as a population.

In conclusion, black holes can come in a range of masses. We know about the smaller stellar-mass ones and the bigger massive black holes. We suspect that the bigger ones grow from smaller ones, and we now have some evidence for the existence of the hypothesised intermediate-mass black holes. Whatever their size though, black holes are awesome, and they shouldn’t worry about their weight.