# The O2 Catalogue—It goes up to 11

The full results of our second advanced-detector observing run (O2) have now been released—we’re pleased to announce four new gravitational wave signals: GW170729, GW170809, GW170818 and GW170823 [bonus note]. These latest observations are all of binary black hole systems. Together, they bring our total to 10 observations of binary black holes, and 1 of a binary neutron star. With more frequent detections on the horizon with our third observing run due to start in early 2019, the era of gravitational wave astronomy is truly here.

The population of black holes and neutron stars observed with gravitational waves and with electromagnetic astronomy. You can play with an interactive version of this plot online.

The new detections are largely consistent with our previous findings. GW170809, GW170818 and GW170823 are all similar to our first detection GW150914. Their black holes have masses around 20 to 40 times the mass of our Sun. I would lump GW170104 and GW170814 into this class too. Although there were models that predicted black holes of these masses, we weren’t sure they existed until our gravitational wave observations. The family of black holes continues out of this range. GW151012, GW151226 and GW170608 fall on the lower mass side. These overlap with the population of black holes previously observed in X-ray binaries. Lower mass systems can’t be detected as far away, so we find fewer of these. On the higher end we have GW170729 [bonus note]. Its source is made up of black holes with masses $50.2^{+16.2}_{-10.2} M_\odot$ and $34.0^{+9.1}_{-10.1} M_\odot$ (where $M_\odot$ is the mass of our Sun). The larger black hole is a contender for the most massive black hole we’ve found in a binary (the other probable contender is GW170823’s source, which has a $39.5^{+11.2}_{-6.7} M_\odot$ black hole). We have a big happy family of black holes!

Of the new detections, GW170729, GW170809 and GW170818 were all observed by the Virgo detector as well as the two LIGO detectors. Virgo joined O2 for an exciting August [bonus note], and we decided that the data at the time of GW170729 were good enough to use too. Unfortunately, Virgo wasn’t observing at the time of GW170823. GW170729 and GW170809 are very quiet in Virgo: you can’t confidently say there is a signal there [bonus note]. However, GW170818 is a clear detection like GW170814. Well done Virgo!

Using the collection of results, we can start to understand the physics of these binary systems. We will be summarising our findings in a series of papers. A huge amount of work went into these.

### The papers

#### The O2 Catalogue Paper

Title: GWTC-1: A gravitational-wave transient catalog of compact binary mergers observed by LIGO and Virgo during the first and second observing runs
arXiv:
1811.12907 [astro-ph.HE]
Data: Catalogue; Parameter estimation results
Journal: Physical Review X; 9(3):031040(49); 2019
LIGO science summary: GWTC-1: A new catalog of gravitational-wave detections

The paper summarises all our observations of binaries to date. It covers our first and second observing runs (O1 and O2). This is the paper to start with if you want any information. It contains estimates of parameters for all our sources, including updates for previous events. It also contains merger rate estimates for binary neutron stars and binary black holes, and an upper limit for neutron star–black hole binaries. We’re still missing a neutron star–black hole detection to complete the set.

More details: The O2 Catalogue Paper

#### The O2 Populations Paper

Title: Binary black hole population properties inferred from the first and second observing runs of Advanced LIGO and Advanced Virgo
arXiv:
1811.12940 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 882(2):L24(30); 2019
Data: Population inference results
LIGO science summary: Binary black hole properties inferred from O1 and O2

Using our set of ten binary black holes, we can start to make some statistical statements about the population: the distribution of masses, the distribution of spins, the distribution of mergers over cosmic time. With only ten observations, we still have a lot of uncertainty, and can’t make too many definite statements. However, if you were wondering why we don’t see black holes more massive than GW170729’s, even though we can detect such systems out to significant distances, so were we. We infer that almost all stellar-mass black holes have masses less than $45 M_\odot$.

More details: The O2 Populations Paper

### The O2 Catalogue Paper

Synopsis: O2 Catalogue Paper
Read this if: You want the most up-to-date gravitational results
Favourite part: It’s out! We can tell everyone about our FOUR new detections

This is a BIG paper. It covers our first two observing runs and our main searches for coalescing stellar mass binaries. There will be separate papers going into more detail on searches for other gravitational wave signals.

#### The instruments

Gravitational wave detectors are complicated machines. You don’t just take them out of the box and press go. We’ll be slowly improving the sensitivity of our detectors as we commission them over the next few years. O2 marks the best sensitivity achieved to date. The paper gives a brief overview of the detector configurations in O2 for both LIGO detectors, which did differ, and Virgo.

During O2, we realised that one source of noise was beam jitter, disturbances in the shape of the laser beam. This was particularly notable in Hanford, where there was a spot on one of the optics. Fortunately, we are able to measure the effects of this, and hence subtract out this noise. This has now been done for the whole of O2. It makes a big difference! Derek Davis and TJ Massinger won the first LIGO Laboratory Award for Excellence in Detector Characterization and Calibration™ for implementing this noise subtraction scheme (the award citation almost spilled the beans on our new detections). I’m happy that GW170104 now has an increased signal-to-noise ratio, which means smaller uncertainties on its parameters.

#### The searches

We use three search algorithms in this paper. We have two matched-filter searches (GstLAL and PyCBC). These compare a bank of templates to the data to look for matches. We also use coherent WaveBurst (cWB), which is a search for generic short signals, but here has been tuned to find the characteristic chirp of a binary. Since cWB is more flexible in the signals it can find, it’s slightly less sensitive than the matched-filter searches, but it gives us confidence that we’re not missing things.
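
As a toy illustration of the matched-filter idea (not GstLAL’s or PyCBC’s actual frequency-domain, noise-weighted implementations), you can slide a template along the data and look for a peak in the normalised correlation:

```python
# Toy matched filter: correlate the data against a template at every
# offset; a peak in the normalised correlation marks where the template
# best matches. Real pipelines work in the frequency domain and weight
# by the detector noise spectrum.
import math

def matched_filter_snr(data, template):
    """Peak of |correlation| / ||template|| over all offsets
    (a crude signal-to-noise proxy)."""
    n, m = len(data), len(template)
    norm = math.sqrt(sum(t * t for t in template))
    best = 0.0
    for i in range(n - m + 1):
        corr = sum(data[i + j] * template[j] for j in range(m))
        best = max(best, abs(corr) / norm)
    return best

# A chirp-like template hidden in otherwise empty data at offset 100:
template = [math.sin(0.3 * t * t) for t in range(50)]
data = [0.0] * 200
for j, t in enumerate(template):
    data[100 + j] += t
peak = matched_filter_snr(data, template)
```

By the Cauchy–Schwarz inequality the peak is largest exactly where the data contain the template, which is why a bank of templates lets you both find a signal and read off roughly which system produced it.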

The two matched-filter searches both identify all 11 signals with the exception of GW170818, which is only found by GstLAL. This is because PyCBC only flags signals above a threshold in each detector. We’re confident it’s real though, as it is seen in all three detectors, albeit below PyCBC’s threshold in Hanford and Virgo. (PyCBC only looked at signals found in coincidence between Livingston and Hanford in O2; I suspect they would have found it if they had been looking at all three detectors, as that would have let them lower their threshold.)

The search pipelines try to distinguish between signal-like features in the data and noise fluctuations. Having multiple detectors is a big help here, although we still need to be careful in checking for correlated noise sources. The background of noise falls off quickly, so there’s a rapid transition between almost-certainly noise to almost-certainly signal. Most of the signals are off the charts in terms of significance, with GW170818, GW151012 and GW170729 being the least significant. GW170729 is found with the best significance by cWB, which reports a false alarm rate of $1/(50~\mathrm{yr})$.

Cumulative histogram of results from GstLAL (top left), PyCBC (top right) and cWB (bottom). The expected background is shown as the dashed line and the shaded regions give Poisson uncertainties. The search results are shown as the solid red line and named gravitational-wave detections are shown as blue dots. More significant results are further to the right of the plot. Fig. 2 and Fig. 3 of the O2 Catalogue Paper.

The false alarm rate indicates how often you would expect to find something at least as signal-like if you were to analyse a stretch of data with the same statistical properties as the data considered, assuming that there is only noise in the data. The false alarm rate does not fold in the probability that there are real gravitational waves occurring at some average rate. Therefore, we need to do an extra layer of inference to work out the probability that something flagged by a search pipeline is a real signal versus noise.
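
The core of that extra layer can be sketched in one line: given the expected number of astrophysical (foreground) and noise (background) triggers at least as loud as a candidate, the probability of being real is the foreground’s share. The real calculation fits these rates from the data; the numbers below are made up purely for illustration:

```python
# Toy probability-of-astrophysical-origin: the foreground's share of
# the expected triggers at a given loudness. Rates here are invented
# for illustration, not taken from the search results.
def p_astro(foreground_rate, background_rate):
    """Probability of astrophysical origin from expected trigger rates."""
    return foreground_rate / (foreground_rate + background_rate)

loud = p_astro(foreground_rate=10.0, background_rate=0.01)   # near certain
marginal = p_astro(foreground_rate=0.5, background_rate=0.5)  # a coin flip
```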

The results of this calculation are given in Table IV. GW170729 has a 94% probability of being real using the cWB results, 98% using the GstLAL results, but only 52% according to PyCBC. Therefore, if you’re feeling bold, you might, say, only wager the entire economy of the UK on it being real.

We also list the most marginal triggers. These all have probabilities well below 50% of being real: if you were to add them all up you wouldn’t get a total of 1 real event. (In my professional opinion, they are garbage). However, if you want to check for what we might have missed, these may be a place to start. Some of these can be explained away as instrumental noise, say scattered light. Others show no obvious signs of disturbance, so are probably just some noise fluctuation.

#### The source properties

We give updated parameter estimates for all 11 sources. These use updated estimates of the calibration uncertainty (which doesn’t make too much difference), an improved estimate of the noise spectrum (which makes some difference to the less well measured parameters like the mass ratio), the cleaned data (which helps for GW170104), and our currently most complete waveform models [bonus note].

This plot shows the masses of the two binary components (you can just make out GW170817 down in the corner). We use the convention that the more massive of the two is $m_1$ and the lighter is $m_2$. We are now really filling in the mass plot! Implications for the population of black holes are discussed in the Populations Paper.

Estimated masses for the two binary objects for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817 (solid), GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. The grey area is excluded from our convention on masses. Part of Fig. 4 of the O2 Catalogue Paper. The mass ratio is $q = m_2/m_1$.

As well as mass, black holes have a spin. For the final black hole formed in the merger, these spins are always around 0.7, with a little more or less depending upon which way the spins of the two initial black holes were pointing. As well as probably being the most massive, GW170729’s remnant could have the highest final spin! It is a record breaker. It radiated a colossal $4.8^{+1.7}_{-1.7} M_\odot$ worth of energy in gravitational waves [bonus note].

Estimated final masses and spins for each of the binary black hole events in O1 and O2. From lowest chirp mass (left; red–orange) to highest (right; purple): GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. Part of Fig. 4 of the O2 Catalogue Paper.

There is considerable uncertainty on the spins as they are hard to measure. The best combination to pin down is the effective inspiral spin parameter $\chi_\mathrm{eff}$. This is a mass-weighted combination of the spins which has the most impact on the signal we observe. It could be zero if the spins are misaligned with each other, point in the orbital plane, or are zero. If it is non-zero, then it means that at least one black hole definitely has some spin. GW151226 and GW170729 have $\chi_\mathrm{eff} > 0$ with more than 99% probability. The rest are consistent with zero. The spin distribution for GW170104 has tightened up as its signal-to-noise ratio has increased, and there’s less support for negative $\chi_\mathrm{eff}$, but there’s been no move towards larger positive $\chi_\mathrm{eff}$.
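
The mass weighting is simple enough to write down directly. This is the standard definition of $\chi_\mathrm{eff}$ (the example masses and spins are just illustrative):

```python
def chi_eff(m1, m2, chi1z, chi2z):
    """Effective inspiral spin: the mass-weighted combination of the
    spin components aligned with the orbital angular momentum.
    chi1z and chi2z lie between -1 and 1."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

# If both spins have the same aligned component, chi_eff equals it,
# whatever the masses:
example = chi_eff(35.0, 30.0, 0.3, 0.3)
```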

Estimated effective inspiral spin parameters for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817, GW170608, GW151226, GW151012, GW170104, GW170814, GW170809, GW170818, GW150914, GW170823, GW170729. Part of Fig. 5 of the O2 Catalogue Paper.

For our analysis, we use two different waveform models to check for potential sources of systematic error. They agree pretty well. The spins are where they show most difference (which makes sense, as this is where they differ in terms of formulation). For GW151226, the effective precession waveform IMRPhenomPv2 gives $0.20^{+0.18}_{-0.08}$ and the full precession model gives $0.15^{+0.25}_{-0.11}$ and extends to negative $\chi_\mathrm{eff}$. I panicked a little bit when I first saw this, as GW151226 having a non-zero spin was one of our headline results when first announced. Fortunately, when I worked out the numbers, all our conclusions were safe. The probability of $\chi_\mathrm{eff} < 0$ is less than 1%. In fact, we can now say that at least one spin is greater than $0.28$ at 99% probability compared with $0.2$ previously, because the full precession model likes spins in the orbital plane a bit more. Who says data analysis can’t be thrilling?

Our measurement of $\chi_\mathrm{eff}$ tells us about the part of the spins aligned with the orbital angular momentum, but not the part in the orbital plane. In general, the in-plane components of the spin are only weakly constrained. We basically only get back the information we put in. The leading-order effects of in-plane spins are summarised by the effective precession spin parameter $\chi_\mathrm{p}$. The plot below shows the inferred distributions for $\chi_\mathrm{p}$. The left half for each event shows our results; the right shows our prior after imposing the constraints on spin we get from $\chi_\mathrm{eff}$. We get the most information for GW151226 and GW170814, but even then it’s not much, and we generally cover the entire allowed range of values.
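
For reference, $\chi_\mathrm{p}$ also has a compact form. This follows the standard (Schmidt et al.) definition in terms of the in-plane spin magnitudes and the mass ratio:

```python
def chi_p(m1, m2, chi1_perp, chi2_perp):
    """Effective precession spin: the leading-order measure of in-plane
    spin. chi1_perp and chi2_perp are the in-plane spin magnitudes of
    the heavier and lighter black hole; m1 >= m2 by convention."""
    q = m2 / m1  # mass ratio, <= 1
    return max(chi1_perp, chi2_perp * q * (4 * q + 3) / (4 + 3 * q))
```

For most binaries the heavier black hole’s in-plane spin dominates, which is why the lighter one’s contribution is suppressed by the mass-ratio factor.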

Estimated effective precession spin parameters for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817, GW170608, GW151226, GW151012, GW170104, GW170814, GW170809, GW170818, GW150914, GW170823, GW170729. The left (coloured) part of the plot shows the posterior distribution; the right (white) shows the prior conditioned by the effective inspiral spin parameter constraints. Part of Fig. 5 of the O2 Catalogue Paper.

One final measurement which we can make (albeit with considerable uncertainty) is the distance to the source. The distance influences how loud the signal is (the further away, the quieter it is). This also depends upon the inclination of the source (a binary edge-on is quieter than a binary face-on/off). Therefore, the distance is correlated with the inclination and we end up with some butterfly-like plots. GW170729 is again a record setter. It comes from a luminosity distance of $2.84^{+1.40}_{-1.36}~\mathrm{Gpc}$ away. That means it has travelled across the Universe for $3.2$–$6.2$ billion years—it potentially started its journey before the Earth formed!
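
As a rough cross-check of that travel-time figure, here is a sketch that numerically integrates the lookback time for a flat ΛCDM cosmology. The Planck-like cosmological parameters and the redshift of about 0.48 for GW170729 are my own assumptions for this illustration, not values quoted above:

```python
# Lookback time for flat Lambda-CDM by integrating dt = dz/((1+z)H(z)).
# H0 and omega_m are assumed Planck-like values; z = 0.48 is an assumed
# ballpark redshift for GW170729.
import math

H0 = 67.9                      # Hubble constant, km/s/Mpc
omega_m = 0.3065               # matter density; flat, so lambda = 1 - omega_m
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr (977.8 converts from km/s/Mpc)

def lookback_time_gyr(z, steps=10000):
    """Lookback time in Gyr via trapezoidal integration."""
    def integrand(zp):
        e = math.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
        return 1.0 / ((1 + zp) * e)
    h = z / steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, steps):
        total += integrand(i * h)
    return HUBBLE_TIME_GYR * total * h

travel_time = lookback_time_gyr(0.48)  # roughly 5 Gyr, inside the quoted range
```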

Estimated luminosity distances and orbital inclinations for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817 (solid), GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. An inclination of zero means that we’re looking face-on along the direction of the total angular momentum, and an inclination of $\pi/2$ means we’re looking edge-on perpendicular to the angular momentum. Part of Fig. 7 of the O2 Catalogue Paper.

#### Waveform reconstructions

To check our results, we reconstruct the waveforms from the data to see that they match our expectations for binary black hole waveforms (and there’s not anything extra there). To do this, we use unmodelled analyses which assume that there is a coherent signal in the detectors: we use both cWB and BayesWave. The results agree pretty well. The reconstructions beautifully match our templates when the signal is loud, but, as you might expect, can’t resolve the quieter details. You’ll also notice that the reconstructions sometimes pick up a bit of background noise away from the signal. This gives you an idea of potential fluctuations.

Time–frequency maps and reconstructed signal waveforms for the binary black holes. For each event we show the results from the detector where the signal was loudest. The left panel for each shows the time–frequency spectrogram with the upward-sweeping chirp. The right panels show waveforms: blue are the modelled waveforms used to infer parameters (LALInference; top panel); red are the wavelet reconstructions (BayesWave; top panel); black is the maximum-likelihood cWB reconstruction (bottom panel), and green (bottom panel) shows reconstructions for simulated similar signals. I think the agreement is pretty good! All the data have been whitened as this is how we perform the statistical analysis of our data. Fig. 10 of the O2 Catalogue Paper.

I still think GW170814 looks like a slug. Some people think they look like crocodiles.

We’ll be doing more tests of the consistency of our signals with general relativity in a future paper.

#### Merger rates

Given all our observations now, we can set better limits on the merger rates. Going from the number of detections seen to the number of mergers out in the Universe depends upon what you assume about the mass distribution of the sources. Therefore, we make a few different assumptions.

For binary black holes, we use (i) a power-law model for the more massive black hole similar to the initial mass function of stars, with a uniform distribution on the mass ratio, and (ii) a uniform-in-logarithm distribution for both masses. These were designed to bracket the two extremes of potential distributions. With our observations, we’re starting to see that the true distribution is more like the power-law, so I expect we’ll be abandoning these soon. Taking the range of possible values from our calculations, the rate is in the range of $9.7$–$101~\mathrm{Gpc^{-3}\,yr^{-1}}$ for black holes between $5 M_\odot$ and $50 M_\odot$ [bonus note].

For binary neutron stars, which are perhaps more interesting to astronomers, we use a uniform distribution of masses between $0.8 M_\odot$ and $2.3 M_\odot$, and a Gaussian distribution to match electromagnetic observations. We find that these bracket the range $97$–$4440~\mathrm{Gpc^{-3}\,yr^{-1}}$. This is larger than our previous range, as we hadn’t considered the Gaussian distribution previously.

90% upper limits for neutron star–black hole binaries. Three black hole masses were tried and two spin distributions. Results are shown for the two matched-filter search algorithms. Fig. 14 of the O2 Catalogue Paper.

Finally, what about neutron star–black holes? Since we don’t have any detections, we can only place an upper limit. This is a maximum of $610~\mathrm{Gpc^{-3}\,yr^{-1}}$. This is about a factor of 2 better than our O1 results, and is starting to get interesting!

We are sure to discover lots more in O3… [bonus note].

### The O2 Populations Paper

Synopsis: O2 Populations Paper
Read this if: You want the best family portrait of binary black holes
Favourite part: A maximum black hole mass?

Each detection is exciting. However, we can squeeze even more science out of our observations by looking at the entire population. Using all 10 of our binary black hole observations, we start to trace out the population of binary black holes. Since we still only have 10, we can’t yet be too definite in our conclusions. Our results give us some things to ponder, while we are waiting for the results of O3. I think now is a good time to start making some predictions.

We look at the distribution of black hole masses, black hole spins, and the redshift (cosmological time) of the mergers. The black hole masses tell us something about how you go from a massive star to a black hole. The spins tell us something about how the binaries form. The redshift tells us something about how these processes change as the Universe evolves. Ideally, we would look at these all together allowing for mixtures of binary black holes formed through different means. Given that we only have a few observations, we stick to a few simple models.

To work out the properties of the population, we perform a hierarchical analysis of our 10 binary black holes. We infer the properties of the individual systems, assuming that they come from a given population, and then see how well that population fits our data compared with a different distribution.

In doing this inference, we account for selection effects. Our detectors are not equally sensitive to all sources. For example, nearby sources produce louder signals and we can’t detect signals that are too far away, so if you didn’t account for this you’d conclude that binary black holes only merged in the nearby Universe. Perhaps less obvious is that we are not equally sensitive to all source masses. More massive binaries produce louder signals, so we can detect these further away than lighter binaries (up to the point where these binaries are so high mass that the signals are too low frequency for us to easily spot). This is why we detect more binary black holes than binary neutron stars, even though there are more binary neutron stars out there in the Universe.
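
A back-of-the-envelope version of the mass selection effect: for inspiral-dominated signals the horizon distance grows roughly as the chirp mass to the 5/6 power, so the surveyed volume grows as the chirp mass to the 5/2 power. This scaling is a standard approximation (and breaks down for the heaviest systems, as noted above); the reference chirp mass is my assumption:

```python
# Toy mass selection effect: sensitive volume scales roughly as
# (chirp mass)**2.5 for inspiral-dominated signals. Breaks down for
# very heavy systems, which merge at too low a frequency to see well.
def relative_sensitive_volume(chirp_mass, reference_chirp_mass=1.2):
    """Sensitive volume relative to a reference system (here, roughly
    a binary neutron star with chirp mass ~1.2 solar masses)."""
    return (chirp_mass / reference_chirp_mass) ** 2.5

# A binary black hole with ~30 solar mass chirp mass is visible over
# thousands of times the volume of a binary neutron star:
advantage = relative_sensitive_volume(30.0)
```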

#### Masses

When looking at masses, we try three models of increasing complexity:

• Model A is a simple power law for the mass of the more massive black hole $m_1$. There’s no real reason to expect the masses to follow a power law, but the masses of stars when they form do, and astronomers generally like power laws as they’re friendly, so it’s a sensible thing to try. We fit for the power-law index. The power law goes from a lower limit of $5 M_\odot$ to an upper limit which we also fit for. The mass of the lighter black hole $m_2$ is assumed to be uniformly distributed between $5 M_\odot$ and the mass of the other black hole.
• Model B is the same power law, but we also allow the lower mass limit to vary from $5 M_\odot$. We don’t have much sensitivity to low masses, so this lower bound is restricted to be above $5 M_\odot$. I’d be interested in exploring lower masses in the future. Additionally, we allow the mass ratio $q = m_2/m_1$ of the black holes to vary, trying $q^{\beta_q}$ instead of Model A’s $q^0$.
• Model C has the same power law, but now with some smoothing at the low-mass end, rather than a sharp turn-on. Additionally, it includes a Gaussian component towards higher masses. This was inspired by the possibility of pulsational pair-instability supernova causing a build up of black holes at certain masses: stars which undergo this lose extra mass, so you’d end up with lower mass black holes than if the stars hadn’t undergone the pulsations. The Gaussian could fit other effects too, for example if there was a secondary formation channel, or just reflect that the pure power law is a bad fit.
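
To make Model A concrete, here is a sketch of drawing binaries from a distribution of that shape: $m_1$ from a power law $p(m_1) \propto m_1^{-\alpha}$ between $m_\mathrm{min}$ and $m_\mathrm{max}$ (via the inverse CDF), and $m_2$ uniform between $m_\mathrm{min}$ and $m_1$. The index and mass limits below are placeholders, not the paper’s fitted values:

```python
# Sketch of sampling from a Model-A-like population: power-law m1,
# uniform m2. alpha, m_min and m_max are illustrative placeholders.
import random

def sample_binary(alpha=2.3, m_min=5.0, m_max=45.0, rng=random):
    """Draw (m1, m2) with p(m1) ∝ m1**(-alpha) via the inverse CDF,
    and m2 uniform in [m_min, m1]."""
    u = rng.random()
    k = 1.0 - alpha
    m1 = (m_min ** k + u * (m_max ** k - m_min ** k)) ** (1.0 / k)
    m2 = rng.uniform(m_min, m1)
    return m1, m2

random.seed(0)
samples = [sample_binary() for _ in range(1000)]
```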

In allowing the mass distributions to vary, we find overall rates which match pretty well those we obtain with the main power-law rates calculation included in the O2 Catalogue Paper, and which are higher than with the main uniform-in-log distribution.

The fitted mass distributions are shown in the plot below. The error bars are pretty broad, but I think the models agree on some broad features: there are more light black holes than heavy black holes; the minimum black hole mass is below about $9 M_\odot$, but we can’t place a lower bound on it; the maximum black hole mass is above about $35 M_\odot$ and below about $50 M_\odot$, and we prefer black holes to have more similar masses than different ones. The upper bound on the black hole minimum mass, and the lower bound on the black hole upper mass are set by the smallest and biggest black holes we’ve detected, respectively.

Binary black hole merger rate as a function of the primary mass ($m_1$; top) and mass ratio ($q$; bottom). The solid lines and bands show the medians and 90% intervals. The dashed line shows the posterior predictive distribution: our expectation for future observations averaging over our uncertainties. Fig. 2 of the O2 Populations Paper.

That there does seem to be a drop-off at higher masses is interesting. There could be something which stops stars forming black holes in this range. It has been proposed that there is a mass gap due to pair instability supernovae. These explosions completely disrupt their progenitor stars, leaving nothing behind. (I’m not sure if they are accompanied by a flash of green light). You’d expect this to kick in for black holes of about $50$–$60 M_\odot$. We infer that 99% of merging black holes have masses below $44.0 M_\odot$ with Model A, $41.8 M_\odot$ with Model B, and $41.8 M_\odot$ with Model C. Therefore, our results are not inconsistent with a mass gap. However, we don’t really have enough evidence to be sure.

We can compare how well each of our three models fits the data by looking at their Bayes factors. These naturally incorporate the complexity of the models: models with more parameters (which can be more easily tweaked to match the data) are penalised so that you don’t need to worry about overfitting. We have a preference for Model C. It’s not strong, but I think good evidence that we can’t use a simple power law.

#### Spins

To model the spins:

• For the magnitude, we assume a beta distribution. There’s no reason for this, but these are convenient distributions for things between 0 and 1, which are the limits on black hole spin (0 is nonspinning, 1 is as fast as you can spin). We assume that both spins are drawn from the same distribution.
• For the spin orientations, we use a mix of an isotropic distribution and a Gaussian centred on being aligned with the orbital angular momentum. You’d expect an isotropic distribution if binaries were assembled dynamically, and perhaps something with spins generally aligned with each other if the binary evolved in isolation.
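
Put together, the spin model can be sketched like this. The shape parameters, mixture fraction and tilt width below are made-up placeholders, not the inferred values:

```python
# Sketch of the spin population model: magnitudes from a beta
# distribution, orientations from a mixture of isotropic (uniform in
# cos tilt) and nearly aligned (Gaussian in tilt about zero).
# All parameter values here are illustrative placeholders.
import math
import random

def sample_spin(alpha=1.5, beta=3.0, f_iso=0.5, sigma_tilt=0.3, rng=random):
    magnitude = rng.betavariate(alpha, beta)  # spin magnitude in [0, 1]
    if rng.random() < f_iso:
        cos_tilt = rng.uniform(-1.0, 1.0)     # isotropic orientation
    else:
        cos_tilt = math.cos(abs(rng.gauss(0.0, sigma_tilt)))  # near aligned
    return magnitude, cos_tilt

random.seed(1)
spins = [sample_spin() for _ in range(1000)]
```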

We don’t get any useful information on the mixture fraction. Looking at the spin magnitudes, we have a preference towards smaller spins, but still have support for large spins. The more misaligned spins are, the larger the spin magnitudes can be: for the isotropic distribution, we have support all the way up to maximal values.

Inferred spin magnitude distributions. The left shows results for the parametric distribution, assuming a mixture of almost aligned and isotropic spin, with the median (solid), 50% and 90% intervals shaded, and the posterior predictive distribution as the dashed line. Results are included both for beta distributions which can be singular at 0 and 1, and with these excluded. Model V is a very low spin model shown for comparison. The right shows a binned reconstruction of the distribution for aligned and isotropic distributions, showing the median and 90% intervals. Fig. 8 of the O2 Populations Paper.

Since spins are harder to measure than masses, it is not surprising that we can’t make strong statements yet. If we were to find something with definitely negative $\chi_\mathrm{eff}$, we would be able to deduce that spins can be seriously misaligned.

#### Redshift evolution

As a simple model of evolution over cosmological time, we allow the merger rate to evolve as $(1+z)^\lambda$. That’s right, another power law! Since we’re only sensitive to relatively small redshifts for the masses we detect ($z < 1$), this gives a good approximation to a range of different evolution schemes.
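
The model really is that simple. In code, with example values of $\lambda$ chosen by me for illustration:

```python
# Merger-rate evolution model: rate density relative to the local
# (z = 0) rate, scaling as (1 + z)**lam. The lam values are examples.
def rate_scale(z, lam):
    """Merger rate at redshift z relative to the local rate."""
    return (1.0 + z) ** lam

no_evolution = rate_scale(1.0, 0.0)          # constant rate
star_formation_like = rate_scale(1.0, 3.0)   # strong increase with z
```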

Evolution of the binary black hole merger rate (blue), showing median, 50% and 90% intervals. For comparison, a non-evolving rate calculated using Model B is shown too. Fig. 6 of the O2 Populations Paper.

We find that we prefer evolutions that increase with redshift. There’s an 88% probability that $\lambda > 0$, but we’re still consistent with no evolution. We might expect the rate to increase as star formation was higher back towards $z = 2$. If we can measure the time delay between forming stars and black holes merging, we could figure out what happens to these systems in the meantime.

The local merger rate is broadly consistent with what we infer with our non-evolving distributions, but is a little on the lower side.

### Bonus notes

#### Naming

Gravitational waves are named as GW-year-month-day, so our first observation from 14 September 2015 is GW150914. We realise that this convention suffers from a Y2K-style bug, but by the time we hit 2100, we’ll have so many detections we’ll need a new scheme anyway.
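
The convention is simple enough to express as a one-line date format:

```python
# The naming convention as code: "GW" + two-digit year, month, day.
from datetime import date

def gw_name(event_date):
    """Name a gravitational wave event from its (UTC) detection date."""
    return event_date.strftime("GW%y%m%d")

first = gw_name(date(2015, 9, 14))  # "GW150914"
```

The Y2K-style bug is visible in the `%y`: come 2115, `GW151012` would be ambiguous.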

Previously, we had a second designation for less significant potential detections. They were LIGO–Virgo Triggers (LVT), the one example being LVT151012. No-one was really happy with this designation, but it stems from us being cautious with our first announcement, and not wishing to appear over-bold with claiming we’d seen two gravitational waves when the second wasn’t that certain. Now we’re a bit more confident, and we’ve decided to simplify naming by labelling everything a GW on the understanding that this now includes more uncertain events. Under the old scheme, GW170729 would have been LVT170729. The idea is that the broader community can decide which events they want to consider as real for their own studies. The current condition for being called a GW is that the probability of it being a real astrophysical signal is at least 50%. Our 11 GWs are safely above that limit.

The naming change has hidden the fact that, now that we use our improved search pipelines, the significance of GW151012 has increased. It would now be a GW even under the old scheme. Congratulations LVT151012, I always believed in you!

Is it of extraterrestrial origin, or is it just a blurry figure? GW151012: the truth is out there!.

#### Burning bright

We are lacking nicknames for our new events. They came in so fast that we kind of lost track. Ilya Mandel has suggested that GW170729 should be the Tiger, as it happened on the International Tiger Day. Since tigers are the biggest of the big cats, this seems apt.

Carl-Johan Haster argues that LIGO+tiger = Liger. Since ligers are even bigger than tigers, this seems like an excellent case to me! I’d vote for calling the bigger of the two progenitor black holes GW170729-tiger, the smaller GW170729-lion, and the final black hole GW170729-liger.

Suggestions for other nicknames are welcome, leave your ideas in the comments.

#### August 2017—Something fishy or just Poisson statistics?

The final few weeks of O2 were exhausting. I was trying to write job applications at the time, and each time I sat down to work on my research proposal, my phone went off with another alert. You may be wondering what was special about August. Some have hypothesised that it is because Aaron Zimmerman, my partner for the analysis of GW170104, was on the Parameter Estimation rota to analyse the last few weeks of O2. The legend goes that Aaron is especially lucky as he was bitten by a radioactive Leprechaun. I can neither confirm nor deny this. However, I make a point of playing any lottery numbers suggested by him.

A slightly more mundane explanation is that August was when the detectors were running nice and stably. They were observing for a large fraction of the time. LIGO Livingston reached its best sensitivity at this time, although it was less happy for Hanford. We often quantify the sensitivity of our detectors using their binary neutron star range, the average distance they could see a binary neutron star system with a signal-to-noise ratio of 8. If this increases by a factor of 2, you can see twice as far, which means you survey 8 times the volume. This cubed factor means even small improvements can have a big impact. The LIGO Livingston range peaked at a little over $100~\mathrm{Mpc}$. We’re targeting at least $120~\mathrm{Mpc}$ for O3, so August 2017 gives an indication of what you can expect.
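
The cubed scaling in numbers (the 100 and 120 Mpc figures are the ones quoted above):

```python
# Surveyed volume scales as the cube of the range, so a modest
# distance improvement is a much bigger detection-rate improvement.
def volume_gain(new_range, old_range):
    """Factor by which the surveyed volume grows with the range."""
    return (new_range / old_range) ** 3

doubling = volume_gain(200.0, 100.0)   # twice the range: 8x the volume
o3_target = volume_gain(120.0, 100.0)  # the O3 target: ~1.7x the volume
```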

Binary neutron star range for the instruments across O2. The break around week 3 was for the holidays (We did work Christmas 2015). The break at week 23 was to tune up the instruments, and clean the mirrors. At week 31 there was an earthquake in Montana, and the Hanford sensitivity didn’t recover by the end of the run. Part of Fig. 1 of the O2 Catalogue Paper.

Of course, in the case of GW170817, we just got lucky.

#### Sign errors

GW170809 was the first event we identified with Virgo after it joined observing. The signal in Virgo is very quiet. We actually got better results when we flipped the sign of the Virgo data. We were just starting to get paranoid when GW170814 came along and showed us that everything was set up right at Virgo. When I get some time, I’d like to investigate how often this type of confusion happens for quiet signals.

#### SEOBNRv3

One of the waveform models we use in our analysis, which includes the most complete prescription of the precession of the spins of the black holes, goes by the technical name of SEOBNRv3. It is extremely computationally expensive. Work has been done to improve that, but this hasn’t been implemented in our reviewed codes yet. We managed to complete an analysis for the GW170104 Discovery Paper, which was a huge effort. I said then not to expect it for all future events. Nevertheless, we did it for all the black holes, even for the lowest mass sources, which have the longest signals. I was responsible for the GW151226 runs (as well as GW170104), and I started these back at the start of the summer. Eve Chase put in a heroic effort to get the GW170608 results; we pulled out all the stops for that.

#### Thanksgiving

I have recently enjoyed my first Thanksgiving in the US. I was lucky enough to be hosted for dinner by Shane Larson and his family (and cats). I ate so much I thought I might collapse to a black hole. Apparently, a Thanksgiving dinner can be 3000–4500 calories. That sounds like a lot, but the merger of GW170729 would have emitted about $5 \times 10^{40}$ times more energy. In conclusion, I don’t need to go on a diet.
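You can check that $5 \times 10^{40}$ figure on the back of an envelope. The sketch below assumes a radiated energy for GW170729 of roughly $4.8 M_\odot c^2$ (approximately the catalogue value, but treat it as my assumed input here):

```python
# Back-of-the-envelope check: energy radiated by GW170729 versus a
# generous Thanksgiving dinner. The 4.8 solar masses of radiated energy
# is my assumed input, not quoted from this post.
M_SUN = 1.989e30   # kg
C = 2.998e8        # m/s
KCAL = 4184.0      # J per kilocalorie

e_gw = 4.8 * M_SUN * C**2   # energy radiated as gravitational waves (J)
e_dinner = 4500 * KCAL      # an upper-end Thanksgiving dinner (J)

print(e_gw / e_dinner)      # ~5e40
```

Reassuringly, the ratio comes out at around $5 \times 10^{40}$, as claimed.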

#### Confession

We cheated a little bit in calculating the rates. Roughly speaking, the merger rate is given by

$\displaystyle R = \frac{N}{\langle VT\rangle}$,

where $N$ is the number of detections and $\langle VT\rangle$ is the amount of volume and time we’ve searched. You expect to detect more events if you increase the sensitivity of the detectors (and hence $V$), or observe for longer (and hence increase $T$). In our calculation, we included GW170608 in $N$, even though it was found outside of standard observing time. Really, we should increase $\langle VT\rangle$ to factor in the extra time outside of standard observing time when we could have made a detection. This is messy to calculate though, as there’s not really a good way to check this. However, it’s only a small fraction of the time (so the extra $T$ should be small), and for much of that time the sensitivity of the detectors will be poor (so $V$ will be small too). Therefore, we estimated that any bias from neglecting this is smaller than our uncertainty from the calibration of the detectors, and not worth worrying about.
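For concreteness, here is the rate formula as a few lines of code. The $\langle VT\rangle$ value is entirely made up for illustration (the real calculation marginalises over search sensitivity, mass distributions and more):

```python
# Toy version of the merger rate estimate R = N / <VT>. The <VT> value
# below is invented for illustration; only the formula matches the text.
n_detections = 10        # binary black holes in O1 + O2
vt = 0.05                # assumed sensitive volume-time in Gpc^3 yr (made up)

rate = n_detections / vt # mergers per Gpc^3 per year
print(rate)              # -> 200.0

# The confession above: counting GW170608 in n_detections without also
# enlarging vt biases the rate slightly high, by roughly the fraction of
# extra (unaccounted) volume-time.
```

The bias from the extra unaccounted time enters through the denominator, which is why a small extra $\langle VT\rangle$ only nudges the answer slightly.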

#### New sources

We saw our first binary black hole shortly after turning on the Advanced LIGO detectors. We saw our first binary neutron star shortly after turning on the Advanced Virgo detector. My money is therefore on our first neutron star–black hole binary shortly after we turn on the KAGRA detector. Because science…

# Top 2016 gravitational wave papers

2016 was a busy year for gravitational-wave astronomy. I wrote many blog posts about the papers I have been involved with (I still have a back log). Therefore, as a change, I thought I’d start 2017 looking at my favourite papers written by other people published in 2016. Here are my top three.

### Prospects for multiband gravitational-wave astronomy after GW150914

Author: Sesana, A.
arXiv:
1602.06951 [gr-qc]
Journal:
Physical Review Letters; 116(23):231102(6); 2016

I wrote about this paper previously when discussing the papers released to coincide with the announcement of the observation of GW150914. It suggests that, with a space-borne detector like LISA, we will be able to observe binary black holes months to years before they’re detectable with ground-based detectors. With this multi-band gravitational-wave astronomy, we should be able to learn even more about black holes.

The concept of multi-band gravitational-wave astronomy is not actually new. I believe it was first suggested for LIGO and LISA detecting intermediate-mass black hole binaries (binaries with black holes about 100 times the mass of our Sun); it has also been suggested for combining LISA and pulsar timing measurements to look at supermassive black hole binaries (tens of millions to billions of times the mass of our Sun). However, this paper was the first to look at what we could really learn from these observations. We should be able to get a good sky localization (less than a square degree) ahead of the merger, meaning we can point telescopes ahead of time to try to catch any flash that might accompany it; we’ll also know when the merger should happen, so that we don’t need to worry about misidentifying any explosions we might spot. LISA would be able to provide good constraints on the black hole masses, measuring the chirp mass to an accuracy of better than 0.01%!

This paper created some real enthusiasm for multi-band gravitational-wave astronomy. Vitale (2016) considered how the combined measurements could help us test general relativity. Breivik et al. (2016) and Nishizawa et al. (2016) looked at how LISA could measure the eccentricity of these binaries (which is practically impossible by the time they are observable with ground-based detectors) to figure out how they form. I think these will be fruitful avenues of research in the future.

The excitement surrounding LISA is well timed. A mission proposal has just been submitted to ESA for their upcoming Gravitational Universe science theme. NASA has also stated interest in rejoining the mission.

### Astrophysical constraints on massive black hole binary evolution from pulsar timing arrays

Authors: Middleton, H.; Del Pozzo, W.; Farr, W.M.; Sesana, A. & Vecchio, A.
arXiv:
1507.00992 [astro-ph.CO]
Journal:
Monthly Notices of the Royal Astronomical Society Letters; 455(1):L72–L76; 2016

This is a really neat paper studying what we could learn from pulsar timing arrays. Pulsar timing arrays are sensitive to very low frequency gravitational waves, those from supermassive black hole binaries (millions to billions of times the mass of our Sun). Lots of work has been invested in trying to detect a signal. There are three consortia currently working towards this, collaborating together as part of the International Pulsar Timing Array, but I suspect secretly hoping that they can get there first. This paper looks at what we’ll actually be able to infer about the supermassive black holes when we do make a detection.

They find, unsurprisingly, that using our current upper limits on the background of gravitational waves, we can place some constraints on the number of mergers, but not say much else. If the upper limit were to improve by an order of magnitude, we’d start to learn something about the mass distribution, but not much about its shape. When we do make a detection, we get more information, but still not a lot. We would know that some binaries are merging, but not which ones: there are degeneracies between the merger rate and the mass distribution. This means that even with a detection, pulsar timing will not be able to pin down the distribution of supermassive black holes; we’ll have to fold in other observations too!

Gravitational waves might be cool, but they can’t tell us everything.

### Theoretical physics implications of the binary black-hole mergers GW150914 and GW151226

Authors: Yunes, N.; Yagi,  K. & Pretorius, F.
arXiv:
1603.08955 [gr-qc]
Journal:
Physical Review D; 94(8):084002(42); 2016

After a LISA paper and a pulsar-timing array paper, we’ll round off the trio with a LIGO paper. This paper takes an exhaustive view of all the ways that the observations of gravitational-wave events so far constrain theories of gravity. It’s an impressive work, made even more so considering that they revised the paper following the announcement of GW151226. I would have been tempted to write a second paper on that. At 42 pages, this is heavy reading (it’s the least fun of my top 3), so it is perhaps best just to dip in to find out about your favourite alternative theories of gravity.

This paper highlights how the first observations of gravitational waves change the game when it comes to testing gravity. We now have a wealth of information on gravitational-wave generation, gravitational-wave propagation and the structure of black holes. This is great for cutting down the range of possible theories. However, as the authors point out, to really test other theories of gravity, we need predictions for their behaviour in the extreme and dynamic conditions of a binary black hole coalescence. There is still a huge amount of work to do.

I especially like this paper as it is an example of how results from LIGO and Virgo can be taken forward and put to good use by those outside of the Collaboration. I hope there will be more of this in the future.

# GW150914—The papers II

GW150914, The Event to its friends, was our first direct observation of gravitational waves. To accompany the detection announcement, the LIGO Scientific & Virgo Collaboration put together a suite of companion papers, each looking at a different aspect of the detection and its implications. Some of the work we wanted to do was not finished at the time of the announcement; in this post I’ll go through the papers we have produced since the announcement.

### The papers

I’ve listed the papers below in an order that makes sense to me when considering them together. Each started off as an investigation to check that we really understood the signal and were confident that the inferences made about the source were correct. We had preliminary results for each at the time of the announcement. Since then, the papers have evolved to fill different niches [bonus note].

#### 13. The Basic Physics Paper

Title: The basic physics of the binary black hole merger GW150914
arXiv:
1608.01940 [gr-qc]
Journal:
Annalen der Physik; 529(1–2):1600209(17); 2017

The Event was loud enough to spot by eye after some simple filtering (provided that you knew where to look). You can therefore figure out some things about the source with back-of-the-envelope calculations. In particular, you can convince yourself that the source must be two black holes. This paper explains these calculations at a level suitable for a keen high-school or undergraduate physics student.

More details: The Basic Physics Paper summary

#### 14. The Precession Paper

Title: Improved analysis of GW150914 using a fully spin-precessing waveform model
arXiv:
1606.01210 [gr-qc]
Journal:
Physical Review X; 6(4):041014(19); 2016

To properly measure the properties of GW150914’s source, you need to compare the data to predicted gravitational-wave signals. In the Parameter Estimation Paper, we did this using two different waveform models. These models include lots of features of binary black hole mergers, but not quite everything. In particular, they don’t include all the effects of precession (the wobbling of the orbit because of the black holes’ spins). In this paper, we analyse the signal using a model that includes all the precession effects. We find results which are consistent with our initial ones.

More details: The Precession Paper summary

#### 15. The Systematics Paper

Title: Effects of waveform model systematics on the interpretation of GW150914
arXiv:
1611.07531 [gr-qc]
Journal:
Classical & Quantum Gravity; 34(10):104002(48); 2017
LIGO science summary: Checking the accuracy of models of gravitational waves for the first measurement of a black hole merger

To check how well our waveform models can measure the properties of the source, we repeat the parameter-estimation analysis on some synthetic signals. These fake signals are calculated using numerical relativity, and so should include all the relevant pieces of physics (even those missing from our models). This paper checks to see if there are any systematic errors in results for a signal like GW150914. It looks like we’re OK, but this won’t always be the case.

More details: The Systematics Paper summary

#### 16. The Numerical Relativity Comparison Paper

Title: Directly comparing GW150914 with numerical solutions of Einstein’s equations for binary black hole coalescence
arXiv:
1606.01262 [gr-qc]
Journal:
Physical Review D; 94(6):064035(30); 2016
LIGO science summary: Directly comparing the first observed gravitational waves to supercomputer solutions of Einstein’s theory

Since GW150914 was so short, we can actually compare the data directly to waveforms calculated using numerical relativity. We only have a handful of numerical relativity simulations, but these are enough to give an estimate of the properties of the source. This paper reports the results of this investigation. Unsurprisingly, given all the other checks we’ve done, we find that the results are consistent with our earlier analysis.

If you’re interested in numerical relativity, this paper also gives a nice brief introduction to the field.

More details: The Numerical Relativity Comparison Paper summary

### The Basic Physics Paper

Synopsis: Basic Physics Paper
Read this if: You are teaching a class on gravitational waves
Favourite part: This is published in Annalen der Physik, the same journal that Einstein published some of his monumental work on both special and general relativity

It’s fun to play with LIGO data. The Gravitational Wave Open Science Center (GWOSC) has put together a selection of tutorials to show you some of the basics of analysing signals; we also have papers which introduce gravitational wave data analysis. I wouldn’t blame you if you went off to try them now, instead of reading the rest of this post. Even though it would mean that no-one read this sentence. Purple monkey dishwasher.

The GWOSC tutorials show you how to make your own version of some of the famous plots from the detection announcement. This paper explains how to go from these, using the minimum of theory, to some inferences about the signal’s source: most significantly that it must be the merger of two black holes.

GW150914 is a chirp. It sweeps up from low frequency to high. This is what you would expect of a binary system emitting gravitational waves. The gravitational waves carry away energy and angular momentum, causing the binary’s orbit to shrink. This means that the orbital period gets shorter, and the orbital frequency higher. The gravitational wave frequency is twice the orbital frequency (for circular orbits), so this goes up too.

The rate of change of the frequency depends upon the system’s mass. To first approximation, it is determined by the chirp mass,

$\displaystyle \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}$,

where $m_1$ and $m_2$ are the masses of the two components of the binary. By looking at the signal (go on, try the GWOSC tutorials), we can estimate the gravitational wave frequency $f_\mathrm{GW}$ at different times, and so track how it changes. You can rewrite the equation for the rate of change of the gravitational wave frequency $\dot{f}_\mathrm{GW}$, to give an expression for the chirp mass

$\displaystyle \mathcal{M} = \frac{c^3}{G}\left(\frac{5}{96} \pi^{-8/3} f_\mathrm{GW}^{-11/3} \dot{f}_\mathrm{GW}\right)^{3/5}$.

Here $c$ and $G$ are the speed of light and the gravitational constant, which usually pop up in general relativity equations. If you use this formula (perhaps fitting the trend in $f_\mathrm{GW}$) you can get an estimate for the chirp mass. By fiddling with your fit, you’ll see there is some uncertainty, but you should end up with a value around $30 M_\odot$ [bonus note].
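You can code up the chirp mass formula directly. In this sketch the $(f_\mathrm{GW}, \dot{f}_\mathrm{GW})$ pair is an illustrative value of the kind you might read off a time-frequency plot from the GWOSC tutorials, not an official measurement:

```python
# Chirp mass from the formula above. The (f, fdot) values are
# illustrative inputs, roughly what a GW150914-like chirp looks like
# near 100 Hz, not numbers taken from the paper.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def chirp_mass(f_gw, f_gw_dot):
    """Chirp mass (kg) from the GW frequency and its rate of change."""
    return (C**3 / G) * ((5 / 96) * math.pi**(-8 / 3)
                         * f_gw**(-11 / 3) * f_gw_dot) ** (3 / 5)

# A chirp sweeping through 100 Hz at ~3600 Hz/s gives roughly 30 solar masses:
print(chirp_mass(100.0, 3600.0) / M_SUN)
```

Varying the $(f_\mathrm{GW}, \dot{f}_\mathrm{GW})$ inputs is a quick way to get a feel for the uncertainty mentioned above.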

Next, let’s look at the peak gravitational wave frequency (where the signal is loudest). This should be when the binary finally merges. The peak is at about $150~\mathrm{Hz}$. The orbital frequency is half this, so $f_\mathrm{orb} \approx 75~\mathrm{Hz}$. The orbital separation $R$ is related to the frequency by

$\displaystyle R = \left[\frac{GM}{(2\pi f_\mathrm{orb})^2}\right]^{1/3}$,

where $M = m_1 + m_2$ is the binary’s total mass. This formula is only strictly true in Newtonian gravity, and not in full general relativity, but it’s still a reasonable approximation. We can estimate a value for the total mass from our chirp mass; if we assume the two components are about the same mass, then $M = 2^{6/5} \mathcal{M} \approx 70 M_\odot$. We now want to compare the binary’s separation to the size of a black hole with the same mass. A typical size for a black hole is given by the Schwarzschild radius

$\displaystyle R_\mathrm{S} = \frac{2GM}{c^2}$.

If we divide the binary separation by the Schwarzschild radius we get the compactness $\mathcal{R} = R/R_\mathrm{S} \approx 1.7$. A compactness of $\sim 1$ could only happen for black holes. We could maybe get a binary made of two neutron stars to have a compactness of $\sim2$, but the system is too heavy to contain two neutron stars (which have a maximum mass of about $3 M_\odot$). The system is so compact, it must contain black holes!
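The compactness argument above is short enough to work through numerically. This sketch just plugs the post’s numbers ($M \approx 70 M_\odot$, $f_\mathrm{orb} \approx 75~\mathrm{Hz}$) into the two formulas:

```python
# The compactness argument worked through explicitly, using the
# approximate numbers from the text.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

M = 70 * M_SUN     # total mass, from the chirp mass estimate
f_orb = 75.0       # orbital frequency at peak (half the 150 Hz GW frequency)

# Newtonian orbital separation (Kepler's third law):
R = (G * M / (2 * math.pi * f_orb) ** 2) ** (1 / 3)
# Schwarzschild radius for the total mass:
R_S = 2 * G * M / C**2

print(R / 1e3, R_S / 1e3, R / R_S)   # ~347 km, ~207 km, compactness ~1.7
```

A separation of a few hundred kilometres for a $70 M_\odot$ system, only $\sim 1.7$ Schwarzschild radii, is the punchline: nothing but black holes fits.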

What I especially like about the compactness is that it is unaffected by cosmological redshifting. The expansion of the Universe will stretch the gravitational wave, such that the frequency gets lower. This impacts our estimates for the true orbital frequency and the masses, but these cancel out in the compactness. There’s no arguing that we have a highly relativistic system.

You might now be wondering what if we don’t assume the binary is equal mass (you’ll find it becomes even more compact), or if we factor in black hole spin, or orbital eccentricity, or that the binary will lose mass as the gravitational waves carry away energy? The paper looks at these and shows that there is some wiggle room, but the signal really constrains you to have black holes. This conclusion is almost as inescapable as a black hole itself.

There are a few things which annoy me about this paper—I think it could have been more polished; “Virgo” is improperly capitalised on the author line, and some of the figures are needlessly shabby. However, I think it is a fantastic idea to put together an introductory paper like this which can be used to show students how you can deduce some properties of GW150914’s source with some simple data analysis. I’m happy to be part of a Collaboration that values communicating our science to all levels of expertise, not just writing papers for specialists!

During my undergraduate degree, there was only a single lecture on gravitational waves [bonus note]. I expect the topic will become more popular now. If you’re putting together such a course and are looking for some simple exercises, this paper might come in handy! Or if you’re a student looking for some project work this might be a good starting reference—bonus points if you put together some better looking graphs for your write-up.

If this paper has whetted your appetite for understanding how different properties of the source system leave an imprint in the gravitational wave signal, I’d recommend looking at the Parameter Estimation Paper for more.

### The Precession Paper

Synopsis: Precession Paper
Read this if: You want our most detailed analysis of the spins of GW150914’s black holes
Favourite part: We might have previously over-estimated our systematic error

The Basic Physics Paper explained how you could work out some properties of GW150914’s source with simple calculations. These calculations are rather rough, and lead to estimates with large uncertainties. To do things properly, you need templates for the gravitational wave signal. This is what we did in the Parameter Estimation Paper.

In our original analysis, we used two different waveforms:

• The first we referred to as EOBNR, short for the lengthy technical name SEOBNRv2_ROM_DoubleSpin. In short: This includes the spins of the two black holes, but assumes they are aligned such that there’s no precession. In detail: The waveform is calculated by using effective-one-body dynamics (EOB), an approximation for the binary’s motion calculated by transforming the relevant equations into those for a single object. The S at the start stands for spin: the waveform includes the effects of both black holes having spins which are aligned (or antialigned) with the orbital angular momentum. Since the spins are aligned, there’s no precession. The EOB waveforms are tweaked (or calibrated, if you prefer) by comparing them to numerical relativity (NR) waveforms, in particular to get the merger and ringdown portions of the waveform right. While it is easier to solve the EOB equations than full NR simulations, they still take a while. To speed things up, we use a reduced-order model (ROM), a surrogate model constructed to match the waveforms, so we can go straight from system parameters to the waveform, skipping calculating the dynamics of the binary.
• The second we refer to as IMRPhenom, short for the technical IMRPhenomPv2. In short: This waveform includes the effects of precession using a simple approximation that captures the most important effects. In detail: The IMR stands for inspiral–merger–ringdown, the three phases of the waveform (which are included in the EOBNR model too). Phenom is short for phenomenological: the waveform model is constructed by tuning some (arbitrary, but cunningly chosen) functions to match waveforms calculated using a mix of EOB, NR and post-Newtonian theory. This is done for black holes with (anti)aligned spins to first produce the IMRPhenomD model. This is then twisted up to include the dominant effects of precession to make IMRPhenomPv2. This bit is done by combining the two spins together to create a single parameter, which we call $\chi_\mathrm{p}$, which determines the amount of precession. Since we are combining the two spins into one number, we lose a bit of the richness of the full dynamics, but we get the main part.

The EOBNR and IMRPhenom models are created by different groups using different methods, so they are useful checks of each other. If there is an error in our waveforms, it would lead to systematic errors in our estimated parameters.

In this paper, we use another waveform model, a precessing EOBNR waveform, technically known as SEOBNRv3. This model includes all the effects of precession, not just the simplified treatment used in the IMRPhenom model. However, it is also computationally expensive, meaning that the analysis takes a long time (we don’t have a ROM to speed things up, as we do for the other EOBNR waveform)—each waveform takes over 20 times as long to calculate as the IMRPhenom model [bonus note].

Our results show that all three waveforms give similar results. The precessing EOBNR results are generally more like the IMRPhenom results than the non-precessing EOBNR results are. The plot below compares results from the different waveforms [bonus note].

Comparison of parameter estimates for GW150914 using different waveform models. The bars show the 90% credible intervals, the dark bars show the uncertainty on the 5%, 50% and 95% quantiles from the finite number of posterior samples. The top bar is for the non-precessing EOBNR model, the middle is for the precessing IMRPhenom model, and the bottom is for the fully precessing EOBNR model. Figure 1 of the Precession Paper; see Figure 9 for a comparison of averaged EOBNR and IMRPhenom results, which we have used for our overall results.

We had used the difference between the EOBNR and IMRPhenom results to estimate potential systematic error from waveform modelling. Since the two precessing models are generally in better agreement, we may have been too pessimistic here.

The main difference in results is that our new refined analysis gives tighter constraints on the spins. From the plot above, you can see that the uncertainties for the spin magnitudes of the heavier black hole $a_1$, the lighter black hole $a_2$ and the final black hole (resulting from the coalescence) $a_\mathrm{f}$ are slightly narrower. This makes sense, as including the extra imprint from the full effects of precession gives us a bit more information about the spins. The plots below show the constraints on the spins from the two precessing waveforms: the distributions are more condensed with the new results.

Comparison of orientations and magnitudes of the two component spins. The spin is perfectly aligned with the orbital angular momentum if the angle is 0. The left disk shows results using the precessing IMRPhenom model, the right using the precessing EOBNR model. In each, the distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Adapted from Figure 5 of the Parameter Estimation Paper and Figure 4 of the Precession Paper.

In conclusion, this analysis has shown that including the full effects of precession does give slightly better estimates of the black hole spins. However, it is safe to trust the IMRPhenom results.

If you are looking for the best parameter estimates for GW150914, these results are better than the original results in the Parameter Estimation Paper. However, the O2 Catalogue Paper includes results using improved calibration and noise power spectral density estimation, as well as using precessing waveforms!

### The Systematics Paper

Synopsis: Systematics Paper
Read this if: You want to know how parameter estimation could fare for future detections
Favourite part: There’s no need to panic yet

The Precession Paper highlighted how important it is to have good waveform templates. If there is an error in our templates, either because of modelling or because we are missing some physics, then our estimated parameters could be wrong—we would have a source of systematic error.

We know our waveform models aren’t perfect, so there must be some systematic error, the question is how much? From our analysis so far (such as the good agreement between different waveforms in the Precession Paper), we think that systematic error is less significant than the statistical uncertainty which is a consequence of noise in the detectors. In this paper, we try to quantify systematic error for GW150914-like systems.

To assess systematic errors, we inject waveforms calculated by numerical relativity simulations into data around the time of GW150914, and then analyse them. Numerical relativity exactly solves Einstein’s field equations (which govern general relativity), so the results of these simulations give the most accurate predictions for the form of gravitational waves. As we know the true parameters for the injected waveforms, we can compare these to the results of our parameter estimation analysis to check for biases.

We use waveforms computed by two different codes: the Spectral Einstein Code (SpEC) and the Bifunctional Adaptive Mesh (BAM) code. (Don’t the names make them sound like such fun?) Most waveforms are injected into noise-free data, so that we know that any offset in estimated parameters is due to the waveforms and not detector noise; however, we also tried a few injections into real data from around the time of GW150914. The signals are analysed using our standard set-up as used in the Parameter Estimation Paper (a couple of injections are also included in the Precession Paper, where they are analysed with the fully precessing EOBNR waveform to illustrate its accuracy).

The results show that in most cases, systematic errors from our waveform models are small. However, systematic errors can be significant for some orientations of precessing binaries. If we are looking at the orbital plane edge on, then there can be errors in the distance, the mass ratio and the spins, as illustrated below [bonus note]. Thankfully, edge-on binaries are quieter than face-on binaries, and so should make up only a small fraction of detected sources (GW150914 is most probably face off). Furthermore, biases are only significant for some polarization angles (an angle which describes the orientation of the detectors relative to the stretch/squash of the gravitational wave polarizations). Factoring this in, a rough estimate is that about 0.3% of detected signals would fall into the unlucky region where waveform biases are important.

Parameter estimation results for two different GW150914-like numerical relativity waveforms for different inclinations and polarization angles. An inclination of $0^\circ$ means the binary is face on, $180^\circ$ means it face off, and an inclination around $90^\circ$ is edge on. The bands show the recovered 90% credible interval; the dark lines the median values, and the dotted lines show the true values. The (grey) polarization angle $\psi = 82^\circ$ was chosen so that the detectors are approximately insensitive to the $h_+$ polarization. Figure 4 of the Systematics Paper.

While it seems that we don’t have to worry about waveform error for GW150914, this doesn’t mean we can relax. Other systems may show up different aspects of waveform models. For example, our approximants only include the dominant modes (spherical harmonic decompositions of the gravitational waves). Higher-order modes have more of an impact in systems where the two black holes are unequal masses, or where the binary has a higher total mass, so that the merger and ringdown parts of the waveform are more important. We need to continue work on developing improved waveform models (or at least, including our uncertainty about them in our analysis), and remember to check for biases in our results!

### The Numerical Relativity Comparison Paper

Synopsis: Numerical Relativity Comparison Paper
Read this if: You are really suspicious of our waveform models, or really like long tables of numerical data
Favourite part: We might one day have enough numerical relativity waveforms to do full parameter estimation with them

In the Precession Paper we discussed how important it was to have accurate waveforms; in the Systematics Paper we analysed numerical relativity waveforms to check the accuracy of our results. Since we do have numerical relativity waveforms, you might be wondering why we don’t just use these in our analysis? In this paper, we give it a go.

Our standard parameter-estimation code (LALInference) randomly hops around parameter space; for each set of parameters, we generate a new waveform and see how well it matches the data. This is an efficient way of exploring the parameter space. Numerical relativity waveforms are too computationally expensive to generate one each time we hop. We need a different approach.
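To give a flavour of the hop-and-compare idea, here is a toy one-dimensional version. This is nothing like the real LALInference machinery: the Gaussian `log_likelihood` stands in for generating a waveform and matching it against data, and all the numbers are invented:

```python
# Toy Metropolis sampler illustrating "hop around parameter space,
# compare each point against the data". The Gaussian log_likelihood is
# a stand-in for the expensive waveform-matching step.
import math
import random

random.seed(4)

def log_likelihood(m_chirp):
    # Pretend the data prefer a chirp mass of 30 with width 2.
    return -0.5 * ((m_chirp - 30.0) / 2.0) ** 2

samples = []
m = 25.0                      # starting point
logl = log_likelihood(m)
for _ in range(20000):
    m_new = m + random.gauss(0.0, 1.0)          # random hop
    logl_new = log_likelihood(m_new)
    # Accept the hop with the Metropolis probability.
    if math.log(random.random()) < logl_new - logl:
        m, logl = m_new, logl_new
    samples.append(m)

# Discard burn-in, then summarise the posterior samples.
mean = sum(samples[5000:]) / len(samples[5000:])
print(mean)                   # lands near 30
```

The expensive step in the real analysis is generating the waveform inside each hop, which is exactly why numerical relativity waveforms can’t be used this way.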

The alternative is to use existing waveforms, and see how well each of them matches. Each simulation gives the gravitational waves for a particular mass ratio and combination of spins; we can scale the waves to examine different total masses, and it is easy to consider what the waves would look like if measured at a different position (distance, inclination or sky location). Therefore, we can actually cover a fair range of possible parameters with a given set of simulations.

To keep things quick, the code averages over positions; this means we don’t currently get an estimate of the redshift, and so all the masses are given as measured in the detector frame rather than as the intrinsic masses of the source.

The number of numerical relativity simulations is still quite sparse, so to get nice credible regions, a simple Gaussian fit is used for the likelihood. I’m not convinced that this captures all the detail of the true likelihood, but it should suffice for a broad estimate of the width of the distributions.

The results of this analysis generally agree with those from our standard analysis. This is a relief, but not surprising given all the other checks that we have done! It hints that we might be able to get slightly better measurements of the spins and mass ratios if we used more accurate waveforms in our standard analysis, but the overall conclusions are sound.

I’ve been asked whether, since these results use numerical relativity waveforms, they are the best ones to use. My answer is no. As well as potential error from the sparse sampling of simulations, there are several small things to be wary of.

• We only have short numerical relativity waveforms. This means that the analysis only goes down to a frequency of $30~\mathrm{Hz}$ and ignores earlier cycles. The standard analysis includes data down to $20~\mathrm{Hz}$, and this extra data does give you a little information about precession. (The limit of the simulation length also means you shouldn’t expect this type of analysis for the longer LVT151012 or GW151226 any time soon).
• This analysis doesn’t include the effects of calibration uncertainty. There is some uncertainty in how to convert from the measured signal at the detectors’ output to the physical strain of the gravitational wave. Our standard analysis folds this in, but that isn’t done here. The estimates of the spin can be affected by miscalibration. (This paper also uses the earlier calibration, rather than the improved calibration of the O1 Binary Black Hole Paper).
• Although numerical relativity simulations produce waveforms which include all the higher-order modes, not all of these are actually used in the analysis. More are included than in the standard analysis, though, so this should make a negligible difference.

Finally, I wanted to mention one more detail, as I think it is not widely appreciated. The gravitational wave likelihood is given by an inner product

$\displaystyle L \propto \exp \left[- \int_{-\infty}^{\infty} \mathrm{d}f \frac{|s(f) - h(f)|^2}{S_n(f)} \right]$,

where $s(f)$ is the signal, $h(f)$ is our waveform template and $S_n(f)$ is the noise power spectral density (PSD). These are the three things we need to know to get the right answer. This paper, together with the Precession Paper and the Systematics Paper, has been looking at error from our waveform models $h(f)$. Uncertainty from the calibration of $s(f)$ is included in the standard analysis, so we know how to factor this in (and people are currently working on more sophisticated models for calibration error). This leaves the noise PSD $S_n(f)$.
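
In practice the integral becomes a sum over discrete frequency bins. A minimal sketch of the discrete version (normalisation constants and convention-dependent factors are dropped; the arrays are made-up numbers for illustration):

```python
def log_likelihood(signal, template, psd, df):
    """Discrete version of the likelihood integrand: sum the whitened
    residual power |s(f) - h(f)|^2 / S_n(f) over frequency bins of
    width df. Normalisation constants are dropped."""
    residual = sum(abs(s - h) ** 2 / sn
                   for s, h, sn in zip(signal, template, psd))
    return -residual * df

# A template identical to the signal maximises the (log) likelihood
s = [1.0 + 2.0j, 0.5 - 1.0j, -0.3 + 0.0j]  # toy frequency-domain data
psd = [1.0, 2.0, 4.0]                       # toy noise PSD per bin
perfect = log_likelihood(s, s, psd, df=0.25)
mismatch = log_likelihood(s, [0.0j] * 3, psd, df=0.25)
```

Dividing by $S_n(f)$ is what downweights frequency bins where the detector is noisy, so all three ingredients directly shape the answer.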

The noise PSD varies all the time, so it needs to be estimated from the data. If you use a different stretch of data, you’ll get a different estimate, and this will impact your results. Ideally, you would want to estimate it from the time span that includes the signal itself, but that’s tricky as there’s a signal in the way. The analysis in this paper calculates the noise power spectral density using a different time span and a different method from our standard analysis; therefore, we expect some small difference in the estimated parameters. This might be comparable to (or even bigger than) the difference from switching waveforms! We see from the similarity of results that this cannot be a big effect, but it means that you shouldn’t obsess over small differences, thinking that they could be due to waveform differences, when they could just come from estimation of the noise PSD.
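
A common approach (a Welch-style estimate) averages periodograms over several stretches of data. This pure-Python sketch omits the windowing and overlapping segments a real analysis would use, but shows why different data stretches give different PSDs:

```python
import cmath

def periodogram(segment):
    """Power spectrum of one data segment via a naive discrete Fourier
    transform (fine for a demonstration; real codes use an FFT)."""
    n = len(segment)
    powers = []
    for k in range(n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m, x in enumerate(segment))
        powers.append(abs(coeff) ** 2 / n)
    return powers

def estimate_psd(data, seg_len):
    """Average periodograms over non-overlapping segments of the data.
    Pick a different stretch of data and you get a different estimate."""
    segments = [data[i:i + seg_len]
                for i in range(0, len(data) - seg_len + 1, seg_len)]
    specs = [periodogram(seg) for seg in segments]
    return [sum(spec[k] for spec in specs) / len(specs)
            for k in range(len(specs[0]))]

# A constant "signal" has all its power in the zero-frequency bin
psd = estimate_psd([1.0] * 8, seg_len=4)
```

Averaging over more segments reduces the scatter of the estimate, but the detector noise is not stationary, so there is a trade-off between averaging down the noise of the estimate and averaging over genuine changes in the PSD.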

Lots of work is currently going into making sure that the numerator term $|s(f) - h(f)|^2$ is accurate. I think that the denominator $S_n(f)$ needs attention too. Since we have been kept rather busy, including uncertainty in PSD estimation will have to wait for a future set of papers.

### Bonus notes

#### Finches

100 bonus points to anyone who folds up the papers to make beaks suitable for eating different foods.

Our current best estimate for the chirp mass (from the O2 Catalogue Paper) would be $31.2_{-1.5}^{+1.7} M_\odot$. You need proper templates for the gravitational wave signal to calculate this. If you factor in that the gravitational wave gets redshifted (shifted to lower frequency by the expansion of the Universe), then the true chirp mass of the source system is $28.6_{-1.5}^{+1.6} M_\odot$.

#### Formative experiences

My one undergraduate lecture on gravitational waves was the penultimate lecture of the fourth-year general relativity course. I missed this lecture, as I had a PhD interview (at the University of Birmingham). Perhaps if I had sat through it, my research career would have been different?

#### Good things come…

The computational expense of a waveform is important, as when we are doing parameter estimation, we calculate lots (tens of millions) of waveforms for different parameters to see how they match the data. Before O1, the task of using SEOBNRv3 for parameter estimation seemed quixotic. The first detection, however, was enticing enough to give it a try. It was a truly heroic effort by Vivien Raymond and team that produced these results—I am slightly suspicious that Vivien might actually be a wizard.

GW150914 is a short signal, meaning it is relatively quick to analyse. Still, it required us to use all the tricks at our disposal to get results in a reasonable time. When it came time to submit final results for the Discovery Paper, we had just about 1,000 samples from the posterior probability distribution for the precessing EOBNR waveform. For comparison, we had over 45,000 samples for the non-precessing EOBNR waveform. 1,000 samples isn’t enough to accurately map out the probability distributions, so we decided to wait and collect more samples. The preliminary results showed that things looked similar, so there wouldn’t be a big difference in the science we could do. For the Precession Paper, we finally collected 2,700 samples. This is still a relatively small number, so we carefully checked the uncertainty in our results due to the finite number of samples.

The Precession Paper has shown that it is possible to use the precessing EOBNR for parameter estimation, but don’t expect it to become the norm, at least until we have a faster implementation of it. Vivien is only human, and I’m sure his family would like to see him occasionally.

#### Parameter key

In case you are wondering what all the symbols in the results plots stand for, here are their usual definitions. First up, the various masses

• $m_1$—the mass of the heavier black hole, sometimes called the primary black hole;
• $m_2$—the mass of the lighter black hole, sometimes called the secondary black hole;
• $M$—the total mass of the binary, $M = m_1 + m_2$;
• $M_\mathrm{f}$—the mass of the final black hole (after merger);
• $\mathcal{M}$—the chirp mass, the combination of the two component masses which sets how the binary inspirals together;
• $q$—the mass ratio, $q = m_2/m_1 \leq 1$. Confusingly, numerical relativists often use the opposite convention $q = m_1/m_2 \geq 1$ (which is why the Numerical Relativity Comparison Paper discusses results in terms of $1/q$: we can keep the standard definition, but all the numbers are numerical relativist friendly).
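
Putting the mass definitions above into code (a quick sketch, with GW150914-like component masses for illustration):

```python
def mass_parameters(m1, m2):
    """Derived mass parameters from the component masses (with m1 >= m2)."""
    total = m1 + m2                           # total mass M
    chirp = (m1 * m2) ** 0.6 / total ** 0.2   # chirp mass (m1 m2)^(3/5) / M^(1/5)
    q = m2 / m1                               # mass ratio, q <= 1
    return total, chirp, q

# GW150914-like component masses (in solar masses)
M, Mc, q = mass_parameters(36.0, 29.0)
```

Note how the chirp mass comes out close to $28 M_\odot$ even though the components are $36 M_\odot$ and $29 M_\odot$: the chirp mass is always smaller than either component, and it is the combination the inspiral measures best.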

A superscript “source” is sometimes used to distinguish the actual physical masses of the source from those measured by the detector which have been affected by cosmological redshift. The measured detector-frame mass is $m = (1 + z) m^\mathrm{source}$, where $m^\mathrm{source}$ is the true, redshift-corrected source-frame mass and $z$ is the redshift. The mass ratio $q$ is independent of the redshift. On the topic of redshift, we have

• $z$—the cosmological redshift ($z = 0$ would be now);
• $D_\mathrm{L}$—the luminosity distance.
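
Undoing the redshift to recover the source-frame mass is a one-liner (a trivial sketch of the relation above):

```python
def source_frame_mass(detector_frame_mass, z):
    """Undo the cosmological redshift: m_detector = (1 + z) * m_source,
    so divide the measured detector-frame mass by (1 + z)."""
    return detector_frame_mass / (1.0 + z)
```

The uncertainty on the redshift (which comes from the measured luminosity distance and an assumed cosmology) therefore feeds directly into the uncertainty on the source-frame masses.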

The luminosity distance sets the amplitude of the signal, as does the orientation which we often describe using

• $\iota$—the inclination, the angle between the line of sight and the orbital angular momentum ($\boldsymbol{L}$). This is zero for a face-on binary.
• $\theta_{JN}$—the angle between the line of sight ($\boldsymbol{N}$) and the total angular momentum of the binary ($\boldsymbol{J}$); this is approximately equal to the inclination, but is easier to use for precessing binaries.

As well as masses, black holes have spins

• $a_1$—the (dimensionless) spin magnitude of the heavier black hole, which is between $0$ (no spin) and $1$ (maximum spin);
• $a_2$—the (dimensionless) spin magnitude of the lighter black hole;
• $a_\mathrm{f}$—the (dimensionless) spin magnitude of the final black hole;
• $\chi_\mathrm{eff}$—the effective inspiral spin parameter, a combination of the two component spins which has the largest impact on the rate of inspiral (think of it as the spin equivalent of the chirp mass);
• $\chi_\mathrm{p}$—the effective precession spin parameter, a combination of spins which indicates the dominant effects of precession; it’s $0$ for no precession and $1$ for maximal precession;
• $\theta_{LS_1}$—the primary tilt angle, the angle between the orbital angular momentum and the heavier black hole’s spin ($\boldsymbol{S_1}$). This is zero for aligned spin.
• $\theta_{LS_2}$—the secondary tilt angle, the angle between the orbital angular momentum and the lighter black hole’s spin ($\boldsymbol{S_2}$).
• $\phi_{12}$—the angle between the projections of the two spins on the orbital plane.

The orientation angles change in precessing binaries (when the spins are not perfectly aligned or antialigned with the orbital angular momentum), so we quote values at a reference time corresponding to when the gravitational wave frequency is $20~\mathrm{Hz}$. Finally (for the plots shown here)

• $\psi$—the polarization angle, this is zero when the detector arms are parallel to the $h_+$ polarization’s stretch/squash axis.

For more detailed definitions, check out the Parameter Estimation Paper or the LALInference Paper.

# Testing general relativity using golden black-hole binaries

Binary black hole mergers are the ultimate laboratory for testing gravity. The gravitational fields are strong, and things are moving at close to the speed of light. These extreme conditions are exactly where we expect our theories could break down, which is why we were so excited by detecting gravitational waves from black hole coalescences. To accompany the first detection of gravitational waves, we performed several tests of Einstein’s theory of general relativity (it passed). This paper outlines the details of one of the tests, one that can be extended to include future detections to put Einstein’s theory to the toughest scrutiny.

One of the difficulties of testing general relativity is what do you compare it to? There are many alternative theories of gravity, but only a few of these have been studied thoroughly enough to give a concrete idea of what a binary black hole merger should look like. Even if general relativity comes out on top when compared to one alternative model, it doesn’t mean that another (perhaps one we’ve not thought of yet) can be ruled out. We need ways of looking for something odd, something which hints that general relativity is wrong, but doesn’t rely on any particular alternative theory of gravity.

The test suggested here is a consistency test. We split the gravitational-wave signal into two pieces, a low frequency part and a high frequency part, and then try to measure the properties of the source from the two parts. If general relativity is correct, we should get answers that agree; if it’s not, and there’s some deviation in the exact shape of the signal at different frequencies, we can get different answers. One way of thinking about this test is imagining that we have two experiments, one where we measure lower frequency gravitational waves and one where we measure higher frequencies, and we are checking to see if their results agree.

To split the waveform, we use a frequency around that of the last stable circular orbit: about the point where the black holes stop orbiting each other and plunge together to merge [bonus note]. For GW150914, we used 132 Hz, which is about the same as the C an octave below middle C (a little before time zero in the simulation below). This cut roughly splits the waveform into the low frequency inspiral (where the two black holes are orbiting each other), and the higher frequency merger (where the two black holes become one) and ringdown (where the final black hole settles down).

We are fairly confident that we understand what goes on during the inspiral. This is similar physics to where we’ve been testing gravity before, for example by studying the orbits of the planets in the Solar System. The merger and ringdown are more uncertain, as we’ve never before probed these strong and rapidly changing gravitational fields. It therefore seems like a good idea to check the two independently [bonus note].

We use our parameter estimation codes on the two pieces to infer the properties of the source, and we compare the values for the mass $M_f$ and spin $\chi_f$ of the final black hole. We could use other sets of parameters, but this pair compactly sums up the properties of the final black hole and is easy to explain. We look at the difference between the estimated values for the mass and spin, $\Delta M_f$ and $\Delta \chi_f$. If general relativity is a good match to the observations, then we expect everything to match up, and $\Delta M_f$ and $\Delta \chi_f$ to be consistent with zero. They won’t be exactly zero because we have noise in the detector, but hopefully zero will be within the uncertainty region [bonus note]. An illustration of the test is shown below, including one of the tests we did to show that it does spot when general relativity is not correct.
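
Schematically, given posterior samples of the final mass and spin from the two analyses, the test boils down to computing fractional differences and checking whether they are consistent with zero (a sketch of the idea, not the collaboration's actual implementation):

```python
def fractional_differences(inspiral_samples, merger_samples):
    """Pair up posterior samples of (M_f, chi_f) from the low frequency
    (inspiral) and high frequency (merger-ringdown) analyses, and compute
    the fractional differences, normalised by the mean of the two
    estimates."""
    out = []
    for (m_lo, chi_lo), (m_hi, chi_hi) in zip(inspiral_samples, merger_samples):
        m_bar = 0.5 * (m_lo + m_hi)
        chi_bar = 0.5 * (chi_lo + chi_hi)
        out.append(((m_hi - m_lo) / m_bar, (chi_hi - chi_lo) / chi_bar))
    return out

# If the two analyses agree perfectly, the differences are zero
diffs = fractional_differences([(62.0, 0.67)], [(62.0, 0.67)])
```

In the real test one then asks at what credible level the resulting two-dimensional distribution contains $(0,0)$; a point far out in the tails would be a warning sign.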

Results from the consistency test. The top panels show the outlines of the 50% and 90% credible levels for the low frequency (inspiral) part of the waveform, the high frequency (merger–ringdown) part, and the entire (inspiral–merger–ringdown, IMR) waveform. The bottom panel shows the fractional difference between the high and low frequency results. If general relativity is correct, we expect the distribution to be consistent with $(0,0)$, indicated by the cross (+). The left panels show a general relativity simulation, and the right panels show a waveform from a modified theory of gravity. Figure 1 of Ghosh et al. (2016).

A convenient feature of using $\Delta M_f$ and $\Delta \chi_f$ to test agreement with relativity, is that you can combine results from multiple observations. By averaging over lots of signals, you can reduce the uncertainty from noise. This allows you to pin down whether or not things really are consistent, and spot smaller deviations (we could get precision of a few percent after about 100 suitable detections). I look forward to seeing how this test performs in the future!
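
To see where the few-percent figure comes from, treat each event as an independent Gaussian measurement of the same deviation parameter: precisions (inverse variances) add, so the combined uncertainty shrinks roughly as $1/\sqrt{N}$ with $N$ events. A sketch with made-up numbers:

```python
import math

def combine_gaussian_posteriors(means, sigmas):
    """Combine independent Gaussian measurements of the same deviation
    parameter: precisions 1/sigma^2 add, so the combined uncertainty
    shrinks roughly as 1/sqrt(N) for N events of similar quality."""
    precisions = [1.0 / s ** 2 for s in sigmas]
    total_precision = sum(precisions)
    combined_mean = sum(m * p for m, p in zip(means, precisions)) / total_precision
    return combined_mean, 1.0 / math.sqrt(total_precision)

# 100 identical events, each measuring a deviation of 0 with 10% uncertainty
mean, sigma = combine_gaussian_posteriors([0.0] * 100, [0.1] * 100)
```

With 100 events of 10% individual uncertainty this gives a combined uncertainty of 1%, in line with the few-percent precision quoted for around 100 suitable detections.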

arXiv: 1602.02453 [gr-qc]
Journal: Physical Review D; 94(2):021101(6); 2016
Favourite golden thing: Golden syrup sponge pudding

### Bonus notes

#### Review

I became involved in this work as a reviewer. The LIGO Scientific Collaboration is a bit of a stickler when it comes to checking its science. We had to check that the test was coded up correctly, that the results made sense, and that calculations done and written up for GW150914 were all correct. Since most of the team are based in India [bonus note], this involved some early morning telecons, but it all went smoothly.

One of our checks was that the test wasn’t sensitive to the exact frequency used to split the signal. If you change the frequency cut, the results from the two sections do change. If you lower the frequency, then there’s less of the low frequency signal, and the measurement uncertainties from this piece get bigger. Conversely, there’ll be more signal in the high frequency part, and so we’ll make a more precise measurement of the parameters from this piece. However, the overall results where you combine the two pieces stay about the same. You get the best results when there’s a roughly equal balance between the two pieces, but you don’t have to worry about getting the cut exactly on the innermost stable orbit.

#### Golden binaries

In order for the test to work, we need the two pieces of the waveform to both be loud enough to allow us to measure parameters using them. Such signals are referred to as golden. Earlier work on tests of general relativity using golden binaries was done by Hughes & Menou (2015), and Nakano, Tanaka & Nakamura (2015). GW150914 was a golden binary, but GW151226 and LVT151012 were not, which is why we didn’t repeat this test for them.

#### GW150914 results

For The Event, we ran this test, and the results are consistent with general relativity being correct. The plots below show the estimates for the final mass and spin (here denoted $a_f$ rather than $\chi_f$), and the fractional difference between the two measurements. The point $(0,0)$ is at the 28% credible level. This means that if general relativity is correct, we’d expect a deviation at least this large to occur around 72% of the time due to noise fluctuations. It wouldn’t take a particularly rare realisation of noise to cause the assumed true value of $(0,0)$ to be found at this probability level, so we’re not too suspicious that something is amiss with general relativity.

Results from the consistency test for The Event. The top panels show the final mass and spin measurements from the low frequency (inspiral) part of the waveform, the high frequency (post-inspiral) part, and the entire (IMR) waveform. The bottom panel shows the fractional difference between the high and low frequency results. If general relativity is correct, we expect the distribution to be consistent with $(0,0)$, indicated by the cross. Figure 3 of the Testing General Relativity Paper.

### The authors

Abhirup Ghosh and Archisman Ghosh were two of the leads of this study. They are both A. Ghosh at the same institution, which caused some confusion when compiling the LIGO Scientific Collaboration author list. I think at one point one of them (they can argue which) was removed as someone thought there was a mistaken duplication. To avoid confusion, they now have their full names used. This is a rare distinction on the Discovery Paper (I’ve spotted just two others). The academic tradition of using first initials plus second name is poorly adapted to names which don’t fit the typical western template, so we should be more flexible.

# A black hole Pokémon

The world is currently going mad for Pokémon Go, so it seems like the perfect time to answer the most burning of scientific questions: what would a black hole Pokémon be like?

Type: Dark/Ghost

Black holes are, well, black. Their gravity is so strong that if you get close enough, nothing, not even light, can escape. I think that’s about as dark as you can get!

After picking Dark as a primary type, I thought Ghost was a good secondary type, since black holes could be thought of as the remains of dead stars. This also fit well with black holes not really being made of anything—they are just warped spacetime—and so are ethereal in nature. Of course, black holes’ properties are grounded in general relativity and not the supernatural.

In the games, having a secondary type has another advantage: Dark types are weak against Fighting types. In reality, punching or kicking a black hole is a Bad Idea™: it will not damage the black hole, but will certainly cause you some difficulties. However, Ghost types are unaffected by Fighting-type moves, so our black hole Pokémon doesn’t have to worry about them.

Height: 0’04″/0.1 m

Real astrophysical black holes are probably a bit too big for Pokémon games. The smallest Pokémon are currently the electric bug Joltik and fairy Flabébé, so I’ve made our black hole Pokémon the same size as these. It should comfortably fit inside a Pokéball.

Measuring the size of a black hole is actually rather tricky, since they curve spacetime. When talking about the size of a black hole, we normally think in terms of the Schwarzschild radius. Named after Karl Schwarzschild, who first calculated the spacetime of a black hole (although he didn’t realise that at the time), the Schwarzschild radius corresponds to the event horizon (the point of no return) of a non-spinning black hole. It’s rather tricky to measure the distance to the centre of a black hole, so really the Schwarzschild radius gives an idea of the circumference (the distance around the edge) of the event horizon: this is 2π times the Schwarzschild radius. We’ll take the height to really mean twice the Schwarzschild radius (which would be the Schwarzschild diameter, if that were actually a thing).

Weight: $7.5 \times 10^{25}$ lbs/$3.4 \times 10^{25}$ kg

Although we made our black hole pocket-sized, it is monstrously heavy. The mass is for a black hole of the size we picked, and it is about 6 times that of the Earth. That’s still quite small for a black hole (it’s 3.6 million times less massive than the black hole that formed from GW150914’s coalescence). With this mass, our Pokémon would have a significant effect on the tides as it would quickly suck in the Earth’s oceans. Still, Pokémon doesn’t need to be too realistic.
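
You can check that the height and weight are consistent using the Schwarzschild radius formula $r_s = 2GM/c^2$ (a quick back-of-the-envelope calculation):

```python
G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_EARTH = 5.972e24   # mass of the Earth, kg

def schwarzschild_radius(mass):
    """Schwarzschild radius r_s = 2 G M / c^2 for a mass in kilograms."""
    return 2.0 * G * mass / C ** 2

mass = 3.4e25                               # our Pokemon's mass, kg
height = 2.0 * schwarzschild_radius(mass)   # "height" = Schwarzschild diameter
earth_masses = mass / M_EARTH               # about 6 Earths
```

Plugging in the numbers, a $3.4 \times 10^{25}$ kg black hole has a Schwarzschild diameter of almost exactly 0.1 m, matching the height above.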

Our black hole Pokémon would be by far the heaviest Pokémon, despite being one of the smallest. The heaviest Pokémon currently is the continent Pokémon Primal Groudon. This is 2,204.4 lbs/999.7 kg, so about 34,000,000,000,000,000,000,000 times lighter.

Within the games, having such a large weight would make our black hole Pokémon vulnerable to Grass Knot, a move which trips a Pokémon. The heavier the Pokémon, the more it is hurt by the falling over, so the more damage Grass Knot does. In the case of our Pokémon, when it trips it’s not so much that it hits the ground, but that the Earth hits it, so I think it’s fair that this hurts.

Gender: Unknown

Black holes are beautifully simple, they are described just by their mass, spin and electric charge. There’s no other information you can learn about them, so I don’t think there’s any way to give them a gender. I think this is rather fitting as the sun-like Solrock is also genderless, and it seems right that stars and black holes share this.

Ability: Sticky Hold
Hidden ability: Soundproof

Sticky Hold prevents a Pokémon’s item from being taken. (I’d expect wild black hole Pokémon to be sometimes found holding Stardust, from stars they have consumed). Due to their strong gravity, it is difficult to remove an object that is orbiting a black hole. A common misconception is that it is impossible to escape the pull of a black hole; this is only true if you cross the event horizon (if you replaced the Sun with a black hole of the same mass, the Earth would happily continue on its orbit as if nothing had happened).

Soundproof is an ability that protects Pokémon from sound-based moves. I picked it as a reference to sonic (or acoustic) black holes. These are black hole analogues—systems which mimic some of the properties of black holes. A sonic black hole can be made in a fluid which flows faster than its speed of sound. When this happens, sound can no longer escape this rapidly flowing region (it just gets swept away), just like light can’t escape from the event horizon of a regular black hole.

Sonic black holes are fun, because you can make them in the lab. You can then use them to study the properties of black holes—there is much excitement about possibly observing the equivalent of Hawking radiation. Predicted by Stephen Hawking (as you might guess), Hawking radiation is emitted by black holes, and could cause them to evaporate away (if they didn’t absorb more than they emit). Hawking radiation has never been observed from proper black holes, as it is very weak. However, finding the equivalent for sonic black holes might be enough to get Hawking his Nobel Prize…

Moves:

Start — Gravity
Start — Crunch

The starting two moves are straightforward. Gravity is the force which governs black holes; it is gravity which pulls material in and causes the collapse of stars. I think Crunch neatly captures the idea of material being squeezed down by intense gravity.

Level 16 — Vacuum Wave

Vacuum Wave sounds like a good description of a gravitational wave: it is a ripple in spacetime. Black holes (at least when in a binary) are great sources of gravitational waves (as GW150914 and GW151226 have shown), so this seems like a sensible move for our Pokémon to learn—although I may be biased. Why at level 16? Because Einstein first predicted gravitational waves from his theory of general relativity in 1916.

Level 18 — Discharge

Black holes can have an electric charge, so our Pokémon should learn an Electric-type move. Charged black holes can have some weird properties. We don’t normally worry about charged black holes for two reasons. First, charged black holes are difficult to make: stuff is usually neutral overall, you don’t get a lot of similarly charged material in one place that can collapse down, and even if you did, it would quickly attract the opposite charge to neutralise itself. Second, if you did manage to make a charged black hole, it would quickly lose its charge: the strong electric and magnetic fields about the black hole would lead to the creation of charged particles that would neutralise the black hole. Discharge seems like a good move to describe this process.

Why level 18? The mathematical description of charged black holes was worked out by Hans Reissner and Gunnar Nordström, the second paper was published in 1918.

Level 19 — Light Screen

In general relativity, gravity bends spacetime. It is this warping that causes objects to move along curved paths (like the Earth orbiting the Sun). Light is affected in the same way and gets deflected by gravity, which is called gravitational lensing. This was the first experimental test of general relativity. In 1919, Arthur Eddington led an expedition to measure the deflection of light around the Sun during a solar eclipse.

Black holes, having strong gravity, can strongly lens light. The graphics from the movie Interstellar illustrate this beautifully. Below you can see how the image of the disc orbiting the black hole is distorted. The back of the disc is visible above and below the black hole! If you look closely, you can also see a bright circle inside the disc, close to the black hole’s event horizon. This is known as the light ring. It is where the path of light gets so bent, that it can orbit around and around the black hole many times. This sounds like a Light Screen to me.

Light-bending around the black hole Gargantua in Interstellar. The graphics use proper simulations of black holes, but they did fudge a couple of details to make it look extra pretty. Credit: Warner Bros./Double Negative.

Level 29 — Dark Void
Level 36 — Hyperspace Hole
Level 62 — Shadow Ball

These are the three moves with the most black hole-like names. Dark Void might be “black hole” after a couple of goes through Google Translate. Hyperspace Hole might be a good name for one of the higher dimensional black holes theoreticians like to play around with. (I mean, they like to play with the equations, not actually the black holes, as you’d need more than a pair of safety mittens for that). Shadow Ball captures the idea that a black hole is a three-dimensional volume of space, not just a plug-hole for the Universe. Non-rotating black holes are spherical (rotating ones bulge out at the middle, as I guess many of us do), so “ball” fits well, but they aren’t actually the shadow of anything, so it falls apart there.

I’ve picked the levels to be the masses of the two black holes which inspiralled together to produce GW150914, measured in units of the Sun’s mass, and the mass of the black hole that resulted from their merger. There’s some uncertainty on these measurements, so it would be OK if the moves were learnt a few levels either way.

Level 63 — Whirlpool
Level 63 — Rapid Spin

When gas falls into a black hole, it often spirals around and forms into an accretion disc. You can see an artistic representation of one in the image from Interstellar above. The gas swirls around like water going down the drain, making Whirlpool an apt move. As it orbits, the gas closer to the black hole is moving quicker than that further away. Different layers rub against each other, and, just like when you rub your hands together on a cold morning, they heat up. One of the ways we look for black holes is by spotting the X-rays emitted by these hot discs.

As the material spirals into a black hole, it spins it up. If a black hole swallows enough things that were all orbiting the same way, it can end up rotating extremely quickly. Therefore, I thought our black hole Pokémon should learn Rapid Spin at the same time as Whirlpool.

I picked level 63, as the solution for a rotating black hole was worked out by Roy Kerr in 1963. While Schwarzschild found the solution for a non-spinning black hole soon after Einstein worked out the details of general relativity in 1915, and the solution for a charged black hole came just after these, there’s a long gap before Kerr’s breakthrough. It was some quite cunning maths! (The solution for a rotating charged black hole was quickly worked out after this, in 1965).

Level 77 — Hyper Beam

Another cool thing about discs is that they could power jets. As gas sloshes around towards a black hole, magnetic fields can get tangled up. This causes some of the material to be blasted outwards along the axis of the field. We’ve seen some immensely powerful jets of material, like the one below, and it’s difficult to imagine anything other than a black hole that could create such high energies! Important work on this was done by Roger Blandford and Roman Znajek in 1977, which is why I picked the level. Hyper Beam is no exaggeration in describing these jets.

Jets from Centaurus A are bigger than the galaxy itself! This image is a composite of X-ray (blue), microwave (orange) and visible light. You can see the jets pushing out huge bubbles above and below the galaxy. We think the jets are powered by the galaxy’s central supermassive black hole. Credit: ESO/WFI/MPIfR/APEX/NASA/CXC/CfA/A.Weiss et al./R.Kraft et al.

After using Hyper Beam, a Pokémon must recharge for a turn. It’s an exhausting move. A similar thing may happen with black holes. If they accrete a lot of stuff, the radiation produced by the infalling material blasts away other gas and dust, cutting off the black hole’s supply of food. Black holes in the centres of galaxies may go through cycles of feeding, with discs forming, blowing away the surrounding material, and then a new disc forming once everything has settled down. This link between the black hole and its environment may explain why we see a trend between the size of supermassive black holes and the properties of their host galaxies.

Level 100 — Spacial Rend
Level 100 — Roar of Time

To finish off, since black holes are warped spacetime, a space move and a time move. Relativity says that space and time are two aspects of the same thing, so these need to be learnt together.

It’s rather tricky to imagine space and time being linked. Wibbly-wobbly, timey-wimey, spacey-wacey stuff quickly gets befuddling. If you imagine just two space dimensions (forwards/backwards and left/right), then you can see how to change one into the other by just rotating. If you turn to face a different way, you can mix what was left to become forwards, or to become a bit of right and a bit of forwards. Black holes sort of do the same thing with space and time. Normally, we’re used to the fact that we are definitely travelling forwards in time, but if you stray beyond the event horizon of a black hole, you’re definitely travelling towards the centre of the black hole in the same inescapable way. Black holes are the masters when it comes to manipulating space and time.

There we have it: we can now sleep easy knowing what a black hole Pokémon would be like. Well, almost: we still need to come up with a name. Something resembling a pun would be traditional. Suggestions are welcome. The next games in the series are Pokémon Sun and Pokémon Moon. Perhaps with this space theme Nintendo might consider a black hole Pokémon too?

# The Boxing Day Event

Advanced LIGO’s first observing run (O1) got off to an auspicious start with the detection of GW150914 (The Event to its friends). O1 was originally planned to be three months long (September to December), but after the first discovery, there were discussions about extending the run. No major upgrades to the detectors were going to be done over the holidays anyway, so it was decided that we might as well leave them running until January.

By the time the Christmas holidays came around, I was looking forward to some time off. And, of course, lots of good food and the Doctor Who Christmas Special. The work on the first detection had been exhausting, and the Collaboration reached the collective decision that we should all take some time off [bonus note]. Not a creature was stirring, not even a mouse.

On Boxing Day, there was a sudden flurry of emails. This could only mean one thing. We had another detection! Merry GW151226 [bonus note]!

I assume someone left out milk and cookies at the observatories. A not too subtle hint from Nutsinee Kijbunchoo’s comic in the LIGO Magazine.

I will always be amazed how lucky we were detecting GW150914. It could easily have been missed if we had started observing just a little later. If that had happened, we might not have considered extending O1, and would have missed GW151226 too!

GW151226 is another signal from a binary black hole coalescence. This wasn’t too surprising at the time, as we had estimated such signals should be pretty common. It did, however, cause a slight wrinkle in discussions of what to do in the papers about the discovery of GW150914. Should we mention that we had another potential candidate? Should we wait until we had analysed the whole of O1 fully? Should we pack it all in and have another slice of cake? In the end we decided that we shouldn’t delay the first announcement, and we definitely shouldn’t rush the analysis of the full data set. Therefore, we went ahead with the original plan of just writing about the first month of observations and giving slightly awkward answers, mumbling about still having data to analyse, when asked if we had seen anything else [bonus note]. I’m not sure how many people outside the Collaboration suspected.

### The science

What have we learnt from analysing GW151226, and what have we learnt from the whole of O1? We’ve split our results into two papers.

#### 0. The Boxing Day Discovery Paper

Title: GW151226: Observation of gravitational waves from a 22-solar-mass binary black hole
arXiv: 1606.04855 [gr-qc]
Journal: Physical Review Letters; 116(24):241103(14)
LIGO science summary: GW151226: Observation of gravitational waves from a 22 solar-mass binary black hole (by Hannah Middleton and Carl-Johan Haster)

This paper presents the discovery of GW151226 and some of the key information about it. GW151226 is not as loud as GW150914: you can’t spot it by eye in the data, but it still stands out in our search. This is a clear detection! It is another binary black hole system, but it is a lower mass system than GW150914 (hence the paper’s title; it’s a shame they couldn’t put in the error bars though).

This paper summarises the highlights of the discovery, so below, I’ll explain these without going into too much technical detail.

More details: The Boxing Day Discovery Paper summary

#### 1. The O1 Binary Black Hole Paper

Title: Binary black hole mergers in the first Advanced LIGO observing run
arXiv: 1606.04856 [gr-qc]
Journal: Physical Review X; 6(4):041015(36)
Posterior samples: Release v1.0

This paper brings together (almost) everything we’ve learnt about binary black holes from O1. It discusses GW150914, LVT151012 and GW151226, and what we are starting to piece together about stellar-mass binary black holes from this small family of gravitational-wave events.

For the announcement of GW150914, we put together 12 companion papers to go out with the detection announcement. This paper takes on that role. It is Robin, Dr Watson, Hermione and Samwise Gamgee combined. There’s a lot of delicious science packed into this paper (searches, parameter estimation, tests of general relativity, merger rate estimation, and astrophysical implications). In my summary below, I’ll delve into what we have done and what our results mean.

The results of this paper have now largely been updated in the O2 Catalogue Paper.

More details: The O1 Binary Black Hole Paper summary

If you are interested in our science results, you can find data releases accompanying the events at the LIGO Open Science Center. These pages also include some wonderful tutorials to play with.

### The Boxing Day Discovery Paper

Synopsis: Boxing Day Discovery Paper
Favourite part: We’ve done it again!

#### The signal

GW151226 is not as loud as GW150914: you can’t spot it by eye in the data. Therefore, this paper spends a little more time than GW150914’s Discovery Paper talking about the ingredients for our searches.

GW151226 was found by two pipelines which specifically look for compact binary coalescences: the inspiral and merger of neutron stars or black holes. We have templates for what we think these signals should look like, and we filter the data against a large bank of these to see what matches [bonus note].

For the search to work, we do need accurate templates. Figuring out what the waveforms for binary black hole coalescences should look like is a difficult job, and has taken almost as long as figuring out how to build the detectors!

The signal arrived at Earth at 03:38:53 GMT on 26 December 2015 and was first identified by a search pipeline within 70 seconds. We didn’t have a rapid templated search online at the time of GW150914, but decided it would be a good idea afterwards. This allowed us to send out an alert to our astronomer partners so they could look for any counterparts (I don’t think any have been found [bonus note]).

The unmodelled searches (those which don’t use templates, but just look for coherent signals in both detectors) which first found GW150914 didn’t find GW151226. This isn’t too surprising, as they are less sensitive. You can think of the templated searches as looking for Wally (or Waldo if you’re North American), using the knowledge that he’s wearing glasses, and a red and white striped bobble hat, but the unmodelled searches are looking for him just knowing that he’s the person that’s on every page.

GW151226 is the second most significant event in the search for binary black holes after The Event. Its significance is not quite off the charts, but is great enough that we have a hard time calculating exactly how significant it is. Our two search pipelines give estimates of the p-value (the probability you’d see something at least this signal-like if you only had noise in your detectors) of $< 10^{-7}$ and $3.5 \times 10^{-6}$, which are pretty good!
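If you want to translate those p-values into the sigma levels that often get quoted, you can use the one-sided Gaussian convention. This little sketch (my own illustration using Python’s standard library, not the pipelines’ actual significance code) does the conversion:

```python
from statistics import NormalDist

def p_to_sigma(p):
    """Equivalent one-sided Gaussian significance for a p-value."""
    return NormalDist().inv_cdf(1.0 - p)

sigma_gstlal = p_to_sigma(3.5e-6)  # about 4.5 sigma
sigma_pycbc = p_to_sigma(1e-7)     # a bound: better than about 5.2 sigma
```

The smaller the p-value, the higher the sigma, which is why a p-value below $10^{-7}$ corresponds to a greater-than-5-sigma detection.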

#### The source

To figure out the properties of the source, we ran our parameter-estimation analysis.

GW151226 comes from a black hole binary with masses of $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$ [bonus note], where $M_\odot$ is the mass of our Sun (about 330,000 times the mass of the Earth). The error bars indicate our 90% probability ranges on the parameters. These black holes are less massive than the source of GW150914 (the more massive black hole is similar to the less massive black hole of LVT151012). However, the masses are still above what we believe is the maximum possible mass of a neutron star (around $3 M_\odot$). The masses are similar to those observed for black holes in X-ray binaries, so perhaps these black holes are all part of the same extended family.

A plot showing the probability distributions for the masses is shown below. It makes me happy. Since GW151226 is lower mass than GW150914, we see more of the inspiral, the portion of the signal where the two black holes are spiralling towards each other. This means that we measure the chirp mass, a particular combination of the two masses, really well. It is this which gives the lovely banana shape to the distribution. Even though I don’t really like bananas, it’s satisfying to see this behaviour, as this is what we were expecting to see!
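The chirp mass itself is simple to calculate from the component masses. Here’s a quick check using the best-estimate masses quoted above (the formula is the standard one; the result is just arithmetic):

```python
def chirp_mass(m1, m2):
    """Chirp mass: the mass combination best measured from the inspiral."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

mc = chirp_mass(14.2, 7.5)  # about 8.9 solar masses for GW151226
```

Many different pairs of masses share the same chirp mass, which is exactly why the probability distribution stretches out along a banana of constant chirp mass.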

Estimated masses for the two black holes in the binary of the Boxing Day Event. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which again made me happy! The two-dimensional distribution follows a curve of constant chirp mass. The sharp cut-off at the top-left is because $m_1^\mathrm{source}$ is defined to be bigger than $m_2^\mathrm{source}$. Figure 3 of The Boxing Day Discovery Paper.

The two black holes merge to form a final black hole of $20.8^{+6.1}_{-1.7} M_\odot$ [bonus note].

If you add up the initial binary masses and compare this to the final mass, you’ll notice that something is missing. Across the entire coalescence, gravitational waves carry away $1.0^{+0.1}_{-0.2} M_\odot c^2 \simeq 1.8^{+0.2}_{-0.4} \times 10^{47}~\mathrm{J}$ of energy (where $c$ is the speed of light, which is used to convert masses to energies). This isn’t quite as impressive as the energy of GW150914, but it would take the Sun 1000 times the age of the Universe to output that much energy.
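You can check that comparison with the Sun using back-of-the-envelope numbers (standard values for the constants; this is just my own sanity check):

```python
M_SUN = 1.989e30                  # mass of the Sun in kg
C = 2.998e8                       # speed of light in m/s
L_SUN = 3.828e26                  # luminosity of the Sun in W
AGE_UNIVERSE = 13.8e9 * 3.156e7   # age of the Universe in seconds

E_radiated = 1.0 * M_SUN * C ** 2   # about 1.8e47 J
sun_output = L_SUN * AGE_UNIVERSE   # the Sun's output over one Universe age
ratio = E_radiated / sun_output     # about 1000
```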

The mass measurements from GW151226 are cool, but what’s really exciting are the spin measurements. Spin, as you might guess, is a measure of how much angular momentum a black hole has. We define it to go from zero (not spinning) to one (spinning as much as is possible). A black hole is fully described by its mass and spin. The black hole masses are most important in defining what a gravitational wave looks like, but the imprint of spin is more subtle. Therefore it’s more difficult to get a good measurement of the spins than the masses.

For GW150914 and LVT151012, we get a little bit of information on the spins. We can conclude that the spins are probably not large, or at least they are not large and aligned with the orbit of the binary. However, we can’t say for certain that we’ve seen any evidence that the black holes are spinning. For GW151226, at least one of the black holes (although we can’t say which) has to be spinning [bonus note].

The plot below shows the probability distribution for the two spins of the binary black holes. This shows both the magnitude of the spin and its direction (if the tilt is zero, the black hole and the binary’s orbit both go around the same way). You can see we can’t say much about the spin of the lower mass black hole, but we have a good idea about the spin of the more massive black hole (the more extreme the mass ratio, the less important the spin of the lower mass black hole is, making it more difficult to measure). Hopefully we’ll learn more about spins in future detections, as these could tell us something about how these black holes formed.

Estimated orientation and magnitude of the two component spins. Calculated with our precessing waveform model. The distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Part of Figure 4 of The Boxing Day Discovery Paper.

There’s still a lot to learn about binary black holes, and future detections will help with this. More information about what we can squeeze out of our current results are given in the O1 Binary Black Hole Paper.

### The O1 Binary Black Hole Paper

Synopsis: O1 Binary Black Hole Paper
Read this if: You want to know everything we’ve learnt about binary black holes
Favourite part: The awesome table of parameters at the end

This paper contains too much science to tackle all at once, so I’ve split it up into more bite-sized pieces, roughly following the flow of the paper. First we discuss how we find signals. Then we discuss the parameters inferred from the signals. This is done assuming that general relativity is correct, so we check for any deviations from predictions in the next section. After that, we consider the rate of mergers and what we expect for the population of binary black holes from our detections. Finally, we discuss our results in the context of wider astrophysics.

#### Searches

Looking for signals hidden amongst the data is the first thing to do. This paper only talks about the template search for binary black holes: other search results (including the results for binaries including neutron stars) will be reported elsewhere.

The binary black hole search was previously described in the Compact Binary Coalescence Paper. We have two pipelines which look for binary black holes using templates: PyCBC and GstLAL. These look for signals which are found in both detectors (within 15 ms of each other) which match waveforms in the template bank. A few specifics of these have been tweaked since the start of O1, but these don’t really change any of the results. An overview of the details for both pipelines is given in Appendix A of the paper.

The big difference from Compact Binary Coalescence Paper is the data. We are now analysing the whole of O1, and we are using an improved version of the calibration (although this really doesn’t affect the search). Search results are given in Section II. We have one new detection: GW151226.

Search results for PyCBC (left) and GstLAL (right). The histograms show the number of candidate events (orange squares) compared to the background. The further an orange square is to the right of the lines, the more significant it is. Different backgrounds are shown including and excluding GW150914 (top row) and GW151226 (bottom row). Figure 3 from the O1 Binary Black Hole Paper.

The plots above show the search results. Candidates are ranked by a detection statistic (a signal-to-noise ratio modified by a self-consistency check $\hat{\rho}_c$ for PyCBC, and a ratio of likelihood for the signal and noise hypotheses $\ln \mathcal{L}$ for GstLAL). A larger detection statistic means something is more signal-like and we assess the significance by comparing with the background of noise events. The further above the background curve an event is, the more significant it is. We have three events that stand out.

Number 1 is GW150914. Its significance has increased a little from the first analysis, as we can now compare it against more background data. If we accept that GW150914 is real, we should remove it from the estimation of the background: this gives us the purple background in the top row, and the black curve in the bottom row.

GW151226 is the second event. It clearly stands out when zooming in for the second row of plots. Identifying GW150914 as a signal greatly improves GW151226’s significance.

The final event is LVT151012. Its significance hasn’t changed much since the initial analysis, and is still below our threshold for detection. I’m rather fond of it, as I do love an underdog.

#### Parameter estimation

To figure out the properties of all three events, we do parameter estimation. This was previously described in the Parameter Estimation Paper. Our results for GW150914 and LVT151012 have been updated as we have reran with the newer calibration of the data. The new calibration has less uncertainty, which improves the precision of our results, although this is really only significant for the sky localization. Technical details of the analysis are given in Appendix B and results are discussed in Section IV. You may recognise the writing style of these sections.

The probability distributions for the masses are shown below. There is quite a spectrum, from the low mass GW151226, which is consistent with measurements of black holes in X-ray binaries, up to GW150914, which contains the biggest stellar-mass black holes ever observed.

Estimated masses for the two binary black holes for each of the events in O1. The contours mark the 50% and 90% credible regions. The grey area is excluded from our convention that $m_1^\mathrm{source} \geq m_2^\mathrm{source}$. Part of Figure 4 of the O1 Binary Black Hole Paper.

The distributions for the lower mass GW151226 and LVT151012 follow the curves of constant chirp mass. The uncertainty is greater for LVT151012 as it is a quieter (lower SNR) signal. GW150914 looks a little different, as the merger and ringdown portions of the waveform are more important. These place tighter constraints on the total mass, explaining the shape of the distribution.

Another difference between the lower mass inspiral-dominated signals and the higher mass GW150914 can be seen in the plot below. This shows the probability distributions for the mass ratio $q = m_2^\mathrm{source}/m_1^\mathrm{source}$ and the effective spin parameter $\chi_\mathrm{eff}$, which is a mass-weighted combination of the spins aligned with the orbital angular momentum. Both play similar parts in determining the evolution of the inspiral, so there are stretching degeneracies for GW151226 and LVT151012, but this isn’t the case for GW150914.
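The definition of $\chi_\mathrm{eff}$ is simple enough to write down as a one-liner. The numbers in this sketch are purely hypothetical, just to illustrate how the weighting works:

```python
def chi_eff(m1, m2, a1z, a2z):
    """Mass-weighted combination of the spin components (a1z, a2z, each
    between -1 and 1) aligned with the orbital angular momentum."""
    return (m1 * a1z + m2 * a2z) / (m1 + m2)

# Hypothetical: a 14.2 + 7.5 solar-mass binary where only the heavier
# black hole spins, at 0.3 of the maximum, aligned with the orbit.
example = chi_eff(14.2, 7.5, 0.3, 0.0)  # about 0.2
```

Since the bigger black hole gets the larger weight, its spin has more influence on $\chi_\mathrm{eff}$.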

Estimated mass ratios $q$ and effective spins $\chi_\mathrm{eff}$ for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 4 of the O1 Binary Black Hole Paper.

If you look carefully at the distribution of $\chi_\mathrm{eff}$ for GW151226, you can see that it doesn’t extend down to zero. You cannot have a non-zero $\chi_\mathrm{eff}$ unless at least one of the black holes is spinning, so this clearly shows the evidence for spin.

The final masses of the remnant black holes are shown below. Each is around 5% less than the total mass of the binary which merged to form it, with the rest radiated away as gravitational waves.

Estimated masses $M_\mathrm{f}^\mathrm{source}$ and spins $a_\mathrm{f}$ of the remnant black holes for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 4 of the O1 Binary Black Hole Paper.

The plot also shows the final spins. These are much better constrained than the component spins as they are largely determined by the angular momentum of the binary as it merged. This is why the spins are all quite similar. To calculate the final spin, we use an updated formula compared to the one in the Parameter Estimation Paper. This now includes the effect of the component spins which aren’t aligned with the angular momentum. This doesn’t make much difference for GW150914 or LVT151012, but the change is slightly larger for GW151226, as it seems to have more significant component spins.

The luminosity distance for the sources is shown below. We have large uncertainties because the luminosity distance is degenerate with the inclination. For GW151226 and LVT151012 this does result in some beautiful butterfly-like distance–inclination plots. For GW150914, the butterfly only has the face-off inclination wing (probably as a consequence of the signal being louder and the location of the source on the sky). The luminosity distances for GW150914 and GW151226 are similar. This may seem odd, because GW151226 is a quieter signal, but that is because it is also lower mass (and so intrinsically quieter).

Probability distributions for the luminosity distance of the source of each of the three events in O1. Part of Figure 4 of the O1 Binary Black Hole Paper.

Sky localization is largely determined by the time delay between the two observatories. This is one of the reasons that having a third detector, like Virgo, is an awesome idea. The plot below shows the localization relative to the Earth. You can see that each event has a localization that is part of a ring which is set by the time delay. GW150914 and GW151226 were seen by Livingston first (apparently there is some gloating about this), and LVT151012 was seen by Hanford first.

Estimated sky localization relative to the Earth for each of the events in O1. The contours mark the 50% and 90% credible regions. H+ and L+ mark the locations of the two observatories. Part of Figure 5 of the O1 Binary Black Hole Paper.

Both GW151226 and LVT151012 are nearly overhead. This isn’t too surprising, as this is where the detectors are most sensitive, and so where we expect to make the most detections.

The improvement in the calibration of the data is most evident in the sky localization. For GW150914, the reduction in calibration uncertainty improves the localization by a factor of ~2–3! For LVT151012 it doesn’t make much difference because of its location and because it is a much quieter signal.

The map below shows the localization on the sky (actually, where in the Universe the signal came from). The maps have rearranged themselves because of the Earth’s rotation (each event was observed at a different sidereal time).

Estimated sky localization (in right ascension and declination) for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 5 of the O1 Binary Black Hole Paper.

We’re nowhere near localising sources to single galaxies, so we may never know exactly where these signals originated from.

#### Tests of general relativity

The Testing General Relativity Paper reported several results which compared GW150914 with the predictions of general relativity. Either happily or sadly, depending upon your point of view, it passed them all. In Section V of the paper, we now add GW151226 into the mix. (We don’t add LVT151012 as it’s too quiet to be much use).

A couple of the tests for GW150914 looked at the post-inspiral part of the waveform, looking at the consistency of mass and spin estimates, and trying to match the ringdown frequency. Since GW151226 is lower mass, we can’t extract any meaningful information from the post-inspiral portion of the waveform, and so it’s not worth repeating these tests.

However, the fact that GW151226 has such a lovely inspiral means that we can place some constraints on post-Newtonian parameters. We have lots and lots of cycles, so we are sensitive to any small deviations that arise during inspiral.

The plot below shows constraints on deviations for a set of different waveform parameters. A deviation of zero indicates the value in general relativity. The first four boxes (for parameters referred to as $\varphi_i$ in the Testing General Relativity Paper) are parameters that affect the inspiral. The final box on the right is for parameters which impact the merger and ringdown. The top row shows results for GW150914; these are updated results using the improved calibrated data. The second row shows results for GW151226, and the bottom row shows what happens when you combine the two.

Probability distributions for waveform parameters. The top row shows bounds from just GW150914, the second from just GW151226, and the third from combining the two. A deviation of zero is consistent with general relativity. Figure 6 from the O1 Binary Black hole Paper.

All the results are happily about zero. There were a few outliers for GW150914, but these are pulled back in by GW151226. We see that GW151226 dominates the constraints on the inspiral parameters, but GW150914 is more important for the merger–ringdown $\alpha_i$ parameters.

Again, Einstein’s theory passes the test. There is no sign of inconsistency (yet). It’s clear that adding more results greatly improves our sensitivity to these parameters, so these tests will continue to put general relativity through tougher and tougher trials.

#### Rates

We have a small number of events, around 2.9 in total, so any estimates of how often binary black holes merge will be uncertain. Of course, just because something is tricky, it doesn’t mean we won’t give it a go! The Rates Paper discussed estimates after the first 16 days of coincident data, when we had just 1.9 events. Appendix C gives technical details and Section VI discusses results.

The whole of O1 is about 52 days’ worth of coincident data, roughly 3 times as long as the initial stretch. In that time we’ve observed about 3/2 times as many events. Therefore, you might expect that the event rate is about 1/2 of our original estimates. If you did, get yourself a cookie, as you are indeed about right!
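The back-of-the-envelope version of that scaling (my own sketch, using the approximate numbers above):

```python
# Roughly 1.9 expected events in the first 16 days of coincident data
# (LVT151012 counts fractionally), and about 2.9 in the full ~52 days.
rate_initial = 1.9 / 16.0
rate_full = 2.9 / 52.0
ratio = rate_full / rate_initial  # about 1/2
```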

To calculate the rates we need to assume something about the population of binary black holes. We use three fiducial distributions:

1. We assume that binary black holes are either like GW150914, LVT151012 or GW151226. This event-based rate is different from the previous one as it now includes an extra class for GW151226.
2. A flat-in-the-logarithm-of-masses distribution, which we expect gives a sensible lower bound on the rate.
3. A power law slope for the larger black hole of $-2.35$, which we expect gives a sensible upper bound on the rate.

We find that the rates are 1. $54^{+111}_{-40}~\mathrm{Gpc^{-3}\,yr^{-1}}$, 2. $30^{+46}_{-21}~\mathrm{Gpc^{-3}\,yr^{-1}}$, and 3. $97^{+149}_{-68}~\mathrm{Gpc^{-3}\,yr^{-1}}$. As expected, the first rate is nestled between the other two.

Despite the rates being lower, there’s still a good chance we could see 10 events by the end of O2 (although that will depend on the sensitivity of the detectors).

A new result included with the rates is a simple fit for the distribution of black hole masses [bonus note]. The method is described in Appendix D. It’s just a repeated application of Bayes’ theorem to go from the masses we measured for the detected sources, to the distribution of masses of the entire population.

We assume that the mass of the larger black hole is distributed according to a power law with index $\alpha$, and that the less massive black hole has a mass uniformly distributed in mass ratio, down to a minimum black hole mass of $5 M_\odot$. The cut-off is the edge of a speculated mass gap between neutron stars and black holes.

We find that $\alpha = 2.5^{+1.5}_{-1.6}$. This has significant uncertainty, so we can’t say too much yet. This is a slightly steeper slope than used for the power-law rate (although entirely consistent with it), which would nudge the rates a little lower. The slope does agree with fits to the distribution of masses in X-ray binaries. I’m excited to see how O2 will change our understanding of the distribution.
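To get a feel for what a power law with $\alpha = 2.5$ and a $5 M_\odot$ cut-off looks like, here’s a little sketch which draws masses using inverse-CDF sampling (my own illustration, not the paper’s actual fitting code):

```python
import random

random.seed(1)
ALPHA, M_MIN = 2.5, 5.0  # power-law index and minimum mass (solar masses)

def draw_mass():
    """Draw m1 from p(m) proportional to m**-ALPHA for m >= M_MIN."""
    u = 1.0 - random.random()  # uniform on (0, 1]
    return M_MIN * u ** (-1.0 / (ALPHA - 1.0))

samples = sorted(draw_mass() for _ in range(100_000))
median_mass = samples[len(samples) // 2]  # analytically 5 * 2**(2/3), about 7.9
```

Half the drawn masses lie below about $8 M_\odot$: a steep slope means low-mass black holes heavily outnumber high-mass ones (even before folding in that we can detect the massive ones further away).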

#### Astrophysical implications

With the announcement of GW150914, the Astrophysics Paper reviewed predictions for binary black holes in light of the discovery. The high masses of GW150914 indicated a low metallicity environment, perhaps no more than half of solar metallicity. However, we couldn’t tell if GW150914 came from isolated binary evolution (two stars which have lived and died together) or a dynamical interaction (probably in a globular cluster).

Since then, various studies have been performed looking at both binary evolution (Eldridge & Stanway 2016; Belczynski et al. 2016; de Mink & Mandel 2016; Hartwig et al. 2016; Inayoshi et al. 2016; Lipunov et al. 2016) and dynamical interactions (O’Leary, Meiron & Kocsis 2016; Mapelli 2016; Rodriguez et al. 2016), even considering binaries around supermassive black holes (Bartos et al. 2016; Stone, Metzger & Haiman 2016). We don’t have enough information to tell the two pathways apart. GW151226 gives some new information. Everything is reviewed briefly in Section VII.

GW151226 and LVT151012 are lower mass systems, and so don’t need to come from as low a metallicity environment as GW150914 (although they still could). Both are also consistent with either binary evolution or dynamical interactions. However, the low masses of GW151226 mean that it probably does not come from one particular binary formation scenario, chemically homogeneous evolution, and it is less likely to come from dynamical interactions.

Building up a population of sources, and getting better measurements of spins and mass ratios will help tease formation mechanisms apart. That will take a while, but perhaps it will be helped if we can do multi-band gravitational-wave astronomy with eLISA.

This section also updates predictions from the Stochastic Paper for the gravitational-wave background from binary black holes. There’s a small change from an energy density of $\Omega_\mathrm{GW} = 1.1^{+2.7}_{-0.9} \times 10^{-9}$ at a frequency of 25 Hz to $\Omega_\mathrm{GW} = 1.2^{+1.9}_{-0.9} \times 10^{-9}$. This might be measurable after a few years at design sensitivity.

#### Conclusion

We are living in the future. We may not have hoverboards, but the era of gravitational-wave astronomy is here. Not in 20 years, not in the next decade, not in five more years, now. LIGO has not just opened a new window, it’s smashed the window and jumped through it just before the explosion blasts the side off the building. It’s so exciting that I can’t even get my metaphors straight. The introductory paragraphs of papers on gravitational-wave astronomy will never be the same again.

Although we were lucky to discover GW150914, it wasn’t just a fluke. Binary black hole coalescences aren’t that rare and we should be detecting more. Lots more. You know that scene in a movie where the heroes have defeated a wave of enemies and then the camera pans back to show the approaching horde that stretches to the horizon? That’s where we are now. O2 is coming. The second observing run will start later this year, and we expect we’ll be adding many entries to our list of binary black holes.

We’re just getting started with LIGO and Virgo. There’ll be lots more science to come.

If you made it this far, you deserve a biscuit. A fancy one too, not just a digestive.

Or, if you’re hungry for more, here are some blogs from my LIGO colleagues

• Daniel Williams (a PhD student at University of Glasgow)
• Matt Pitkin (who is hunting for continuous gravitational waves)
• Shane Larson (who is also investigating multi-band gravitational-wave astronomy)
• Amber Stuver (who works at the Livingston Observatory)

My group at Birmingham also made some short reaction videos (I’m too embarrassed to watch mine).

### Bonus notes

#### Christmas cease-fire

In the run-up to the holidays, there were lots of emails that contained phrases like “will have to wait until people get back from holidays” or “can’t reply as the group are travelling and have family commitments”. No-one ever said that they were taking a holiday, but just that it was happening in general, so we’d all have to wait for a couple of weeks. No-one ever argued with this, because, of course, while you were waiting for other people to do things, there was nothing you could do, and so you might as well take some time off. And you had been working really hard, so perhaps an evening off and an extra slice of cake was deserved…

Rather guiltily, I must confess to ignoring the first few emails on Boxing Day. (Although I saw them, I didn’t read them for reasons of plausible deniability). I thought it was important that my laptop could have Boxing Day off. Thankfully, others in the Collaboration were more energetic and got things going straight-away.

#### Naming

Gravitational-wave candidates (or at least the short ones from merging binary black holes which we have detected so far), start off life named by a number in our database. This event started life out as G211117. After checks and further analysis, to make sure we can’t identify any environmental effects which could have caused the detector to misbehave, candidates are renamed. Those which are significant enough to be claimed as a detection get the Gravitational Wave (GW) prefix. Those we are less certain of get the LIGO–Virgo Trigger (LVT) prefix. The rest of the name is the date in Coordinated Universal Time (UTC). The new detection is GW151226.

Informally though, it is the Boxing Day Event. I’m rather impressed that this stuck as the Collaboration is largely US based: it was still Christmas Day in the US when the detection was made, and Americans don’t celebrate Boxing Day anyway.

#### Other searches

We are now publishing the results of the O1 search for binary black holes with a template bank which goes up to total observed binary masses of $100 M_\odot$. We still have to do the same for searches for anything else. The results from searches for other compact binaries should appear soon (binary neutron star and neutron star–black hole upper limits; intermediate mass black hole binary upper limits). It may be a while before we have all the results from searches for continuous waves.

#### Matched filtering

The compact binary coalescence search uses matched filtering to hunt for gravitational waves. This is a well established technique in signal processing. You have a template signal, and you see how this correlates with the data. We use the detectors’ sensitivity to filter the data, so that we give more weight to bits which match where we are sensitive, and little weight to matches where we have little sensitivity.

I imagine matched filtering as similar to how I identify a piece of music: I hear a pattern of notes and try to compare to things I know. Dum-dum-dum-daah? Beethoven’s Fifth.

Filtering against a large number of templates takes a lot of computational power, so we need to be cunning as to which templates we include. We don’t want to miss anything, so we need enough templates to cover all possibilities, but signals from similar systems can look almost identical, so we just need one representative template included in the bank. Think of trying to pick out Under Pressure: you could easily do this with a template for Ice Ice Baby, and you don’t need both Mr Brightside and Ode to Joy.

It doesn’t matter if the search doesn’t pick out a template that perfectly fits the properties of the source, as this is what parameter estimation is for.
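As a toy illustration (a minimal sketch using white noise and a made-up chirp, nothing like the real search pipeline), matched filtering boils down to cross-correlating the data against a normalised template and looking for a peak. With white noise the sensitivity-weighting step drops out, so plain correlation suffices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy chirp template: a sinusoid sweeping from 50 Hz up to 130 Hz.
fs = 1024                                  # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
template = np.sin(2 * np.pi * (50 * t + 40 * t**2))

# Hide the template in white noise, starting 2 s into 4 s of data.
data = rng.normal(0, 1, 4 * fs)
start = 2 * fs
data[start:start + template.size] += 0.5 * template

# Matched filter: cross-correlate the data with the normalised template.
# (For real detector noise you would first weight by the noise power
# spectral density, so the most sensitive frequencies count for more.)
template_norm = template / np.sqrt(np.sum(template**2))
snr = np.correlate(data, template_norm, mode="valid")

# The peak of the SNR time series picks out where the signal starts,
# even though the signal's amplitude is well below the noise level.
peak = int(np.argmax(np.abs(snr)))
```

Even with the signal's amplitude at half the noise's standard deviation, the correlation peak stands out clearly at the injection time, which is exactly the point made by the accumulating-SNR plots.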

The figure below shows how effective matched filtering can be.

• The top row shows the data from the two interferometers. It’s been cleaned up a little bit for the plot (to keep the experimentalists happy), but you can see that the noise in the detectors is seemingly much bigger than the best match template (shown in black, the same for both detectors).
• The second row shows the accumulation of signal-to-noise ratio (SNR). If you correlate the data with the template, you see that it matches the template, and keeps matching the template. This is the important part: although at any moment it looks like there are just random wibbles in the detector, when you compare with a template you find that there is actually a signal which evolves in a particular way. The SNR increases until the signal stops (because the black holes have merged). It is a little lower in the Livingston detector as this was slightly less sensitive around the time of the Boxing Day Event.
• The third row shows how much total SNR you would get if you moved the best match template around in time. There’s a clear peak. This is trying to show that the way the signal changes is important, and you wouldn’t get a high SNR when the signal isn’t there (you would normally expect it to be about 1).
• The final row shows the amount of energy at a particular frequency at a particular time. Compact binary coalescences have a characteristic chirp, so you would expect a sweep from lower frequencies up to higher frequencies. You can just about make it out in these plots, but it’s not as obvious as for GW150914. This again shows the value of matched filtering, but it also shows that there’s no other weird glitchy stuff going on in the detectors at the time.

Observation of The Boxing Day Event in LIGO Hanford and LIGO Livingston. The top row shows filtered data and best match template. The second row shows how this template accumulates signal-to-noise ratio. The third row shows signal-to-noise ratio of this template at different end times. The fourth row shows a spectrogram of the data. Figure 1 of the Boxing Day Discovery Paper.

#### Electromagnetic and neutrino follow-up

Reports by electromagnetic astronomers on their searches for counterparts so far are:

Reports by neutrino astronomers are:

• ANTARES and IceCube—a search for high-energy neutrinos (above 100 GeV) coincident with LVT151012 or GW151226.
• KamLAND—a search for neutrinos (1.8 MeV to 111 MeV) coincident with GW150914, LVT151012 or GW151226.
• Pierre Auger Observatory—a search for ultra high-energy (above 100 PeV) neutrinos coincident with GW150914, LVT151012 or GW151226.
• Super-Kamiokande—a search for neutrinos (of a wide range of energies, from 3.5 MeV to 100 PeV) coincident with GW150914 or GW151226.
• Borexino—a search for low-energy (250 keV to 15 MeV) neutrinos coincident with GW150914, GW151226 and GW170104.
• NOvA—a search for neutrinos and cosmic rays (of a wide range of energies, from 10 MeV to over a GeV) coincident with all events from O1 and O2, plus triggers from O3.

No counterparts have been claimed, which isn’t surprising for a binary black hole coalescence.

#### Rounding

In various places, the mass of the smaller black hole is given as $8 M_\odot$. The median should really round to $7 M_\odot$ as to three significant figures it is $7.48 M_\odot$. This really confused everyone though, as with rounding you’d have a binary with components of masses $14 M_\odot$ and $7 M_\odot$ and total mass $22 M_\odot$. Rounding is a pain! Fortunately, $8 M_\odot$ lies well within the uncertainty: the 90% range is $5.2\text{--}9.8 M_\odot$.
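You can check the annoyance directly (taking the larger mass as $14.2 M_\odot$, an illustrative value only):

```python
# Median component masses; 14.2 is an illustrative value for the
# larger black hole, 7.48 is the median quoted for the smaller one.
m1, m2 = 14.2, 7.48

# Rounding each component to the nearest solar mass gives 14 and 7,
# which sum to 21...
components = round(m1) + round(m2)

# ...but rounding the total mass gives 22. Rounding is a pain!
total = round(m1 + m2)
```

This is why quoting $8 M_\odot$ for the smaller mass, though not the strictly rounded median, keeps the numbers self-consistent.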

#### Black holes are massive

I tried to find a way to convert the mass of the final black hole into everyday scales. Unfortunately, the thing is so unbelievably massive, it just doesn’t work: it’s no use relating it to elephants or bowling balls. However, I did have some fun looking up numbers. Currently, it costs about £2 to buy a 180 gram bar of Cadbury’s Bournville. Therefore, to buy an equivalent amount of dark chocolate would require everyone on Earth to save up for about 600 million times the age of the Universe (assuming GDP stays constant). By this point, I’m sure the chocolate will be past its best, so it’s almost certainly a big waste of time.

#### Maximum minimum spin

One of the statistics people really seemed to latch on to for the Boxing Day Event was that at least one of the binary black holes had to have a spin of greater than 0.2 with 99% probability. It’s a nice number for showing that we have a preference for some spin, but it can be a bit tricky to interpret. If we knew absolutely nothing about the spins, then we would have a uniform distribution on both spins. There’d be a 10% chance that the spin of the more massive black hole is less than 0.1, and a 10% chance that the spin of the other black hole is less than 0.1. Hence, there’s a 99% probability that there is at least one black hole with spin greater than 0.1, even though we have no evidence that the black holes are spinning (or not). Really, you need to look at the full probability distributions for the spins, and not just the summary statistics, to get an idea of what’s going on.
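That prior probability is easy to verify with a quick Monte Carlo (a sketch assuming uniform priors on both spin magnitudes, as described above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Uninformative prior: each spin magnitude uniform on [0, 1].
spin1 = rng.uniform(0, 1, n)
spin2 = rng.uniform(0, 1, n)

# Probability that at least one spin exceeds 0.1. Analytically this
# is 1 - 0.1 * 0.1 = 0.99, before looking at any data at all.
p_at_least_one = np.mean((spin1 > 0.1) | (spin2 > 0.1))
```

So a 99% probability of at least one spin above 0.1 comes for free from the prior; the measured statement about spins above 0.2 is what carries the actual information.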

#### Just one more thing…

The fit for the black hole mass distribution was the last thing to go in the paper. It was a bit frantic to get everything reviewed in time. In the last week, there were a couple of loud exclamations from the office next to mine, occupied by John Veitch, who as one of the CBC chairs has to keep everything and everyone organised. (I’m not quite sure how John still has so much of his hair). It seems that we just can’t stop doing science. There is a more sophisticated calculation in the works, but the foot was put down that we’re not trying to cram any more into the current papers.

# Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes

I love collecting things: there’s something extremely satisfying about completing a set. I suspect that this is one of the alluring features of Pokémon—you’ve gotta catch ’em all. The same is true of black hole hunting. Currently, we know of stellar-mass black holes which are a few times the mass of our Sun, up to a few tens of the mass of our Sun (the black holes of GW150914 are the biggest yet to be observed), and we know of supermassive black holes, which are ten thousand to ten billion times the mass of our Sun. However, we are missing intermediate-mass black holes which lie in the middle. We have Charmander and Charizard, but where is Charmeleon? The elusive ones are always the most satisfying to capture.

Adorable black hole (available for adoption). I’m sure this could be a Pokémon. It would be a Dark type. Not that I’ve given it that much thought…

Intermediate-mass black holes have evaded us so far. We’re not even sure that they exist, although that would raise questions about how you end up with the supermassive ones (you can’t just feed the stellar-mass ones lots of rare candy). Astronomers have suggested that you could spot intermediate-mass black holes in globular clusters by the impact of their gravity on the motion of other stars. However, this effect would be small, and near impossible to conclusively spot. Another way (which I’ve discussed before) would be to look at ultraluminous X-ray sources, which could be from a disc of material spiralling into the black hole. However, it’s difficult to be certain that we understand the source properly and that we’re not misclassifying it. There could be one sure-fire way of identifying intermediate-mass black holes: gravitational waves.

The frequency of gravitational waves depends upon the mass of the binary. More massive systems produce lower frequencies. LIGO is sensitive to the right range of frequencies for stellar-mass black holes. GW150914 chirped up to the pitch of a guitar’s open B string (just below middle C). Supermassive black holes produce gravitational waves at too low a frequency for LIGO (a space-based detector would be perfect for these). We might just be able to detect signals from intermediate-mass black holes with LIGO.
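As a rough rule of thumb, you can estimate where the inspiral ends from the innermost stable circular orbit (ISCO) of the final black hole. This is a back-of-the-envelope sketch, not the waveform modelling used in the actual analyses:

```python
import math

G = 6.674e-11      # Newton's constant (m^3 kg^-1 s^-2)
c = 2.998e8        # speed of light (m/s)
M_SUN = 1.989e30   # solar mass (kg)

def f_isco(total_mass_suns):
    """Gravitational-wave frequency at the innermost stable circular
    orbit of a non-spinning black hole of the given total mass:
    f = c^3 / (6^(3/2) * pi * G * M), roughly where the inspiral ends.
    """
    return c**3 / (6**1.5 * math.pi * G * total_mass_suns * M_SUN)

# A GW150914-like binary (~65 solar masses) ends its inspiral around
# 70 Hz, comfortably inside LIGO's band (the merger sweeps higher
# still); a 1000 solar-mass binary ends at only a few hertz, right at
# the bottom edge of LIGO's sensitivity.
```

The scaling is the point: frequency goes as one over the mass, which is why intermediate-mass systems sit at the edge of what LIGO can reach.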

In a recent paper, a group of us from Birmingham looked at what we could learn from gravitational waves from the coalescence of an intermediate-mass black hole and a stellar-mass black hole [bonus note].  We considered how well you would be able to measure the masses of the black holes. After all, to confirm that you’ve found an intermediate-mass black hole, you need to be sure of its mass.

The signals are extremely short: we only can detect the last bit of the two black holes merging together and settling down as a final black hole. Therefore, you might think there’s not much information in the signal, and we won’t be able to measure the properties of the source. We found that this isn’t the case!

We considered a set of simulated signals, and analysed these with our parameter-estimation code [bonus note]. Below are a couple of plots showing the accuracy to which we can infer a couple of different mass parameters for binaries of different masses. We show the accuracy of measuring the chirp mass $\mathcal{M}$ (a much beloved combination of the two component masses which we are usually able to pin down precisely) and the total mass $M_\mathrm{total}$.

Measured chirp mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. The mass ratio $q$ is the mass of the stellar-mass black hole divided by the mass of the intermediate-mass black hole. Figure 1 of Haster et al. (2016).

Measured total mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. Figure 2 of Haster et al. (2016).

For the lower mass systems, we can measure the chirp mass quite well. This is because we get a little information from the part of the gravitational wave from when the two components are inspiralling together. However, we see less and less of this as the mass increases, and we become more and more uncertain of the chirp mass.

The total mass isn’t as accurately measured as the chirp mass at low masses, but we see that the accuracy doesn’t degrade at higher masses. This is because we get some constraints on its value from the post-inspiral part of the waveform.

We found that the transition from having better fractional accuracy on the chirp mass to having better fractional accuracy on the total mass happened when the total mass was around 200–250 solar masses. This was assuming final design sensitivity for Advanced LIGO. We currently don’t have as good sensitivity at low frequencies, so the transition will happen at lower masses: GW150914 is actually in this transition regime (the chirp mass is measured a little better).
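The chirp mass itself is just a standard combination of the two component masses; here is a quick sketch (the numerical example is mine, not from the paper):

```python
def chirp_mass(m1, m2):
    """Chirp mass: (m1 * m2)**(3/5) / (m1 + m2)**(1/5).
    This combination sets the rate at which the inspiral frequency
    sweeps up, which is why it is pinned down so precisely whenever
    we observe a good stretch of inspiral."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

# For an equal-mass binary the chirp mass is m / 2**0.2, about 0.87 m:
# two 30 solar-mass black holes give a chirp mass of roughly 26.
```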

Given our uncertainty on the masses, when can we conclude that there is an intermediate-mass black hole? If we classify black holes with masses more than 100 solar masses as intermediate mass, then we’ll be able to claim a discovery with 95% probability if the source has a black hole of at least 130 solar masses. The plot below shows our inferred probability of there being an intermediate-mass black hole as we increase the black hole’s mass (there’s little chance of falsely identifying a lower mass black hole).

Probability that the larger black hole is over 100 solar masses (our cut-off mass for intermediate-mass black holes $M_\mathrm{IMBH}$). Figure 7 of Haster et al. (2016).
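The probability in the figure is just the fraction of posterior samples for the larger black hole’s mass that lie above the cut-off. A toy version (the Gaussian posterior with an 18 solar-mass spread is purely illustrative; the real posteriors are asymmetric):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples for the larger black hole's mass:
# Gaussian centred on 130 solar masses with an 18 solar-mass spread
# (illustrative numbers only, not the paper's actual posterior).
samples = rng.normal(130, 18, 100_000)

# The probability that the source hosts an intermediate-mass black
# hole is the fraction of samples above the 100 solar-mass cut-off.
p_imbh = np.mean(samples > 100)
```

With these made-up numbers the probability comes out around 95%, in line with the 130 solar-mass threshold quoted above.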

Gravitational-wave observations could lead to a concrete detection of intermediate-mass black holes if they exist and merge with another black hole. However, LIGO’s low frequency sensitivity is important for detecting these signals. If detector commissioning goes to plan and we are lucky enough to detect such a signal, we’ll finally be able to complete our set of black holes.

arXiv: 1511.01431 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 457(4):4499–4506; 2016
Birmingham science summary: Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes (by Carl)
Other collectables: Breakthrough, Gruber, Shaw, Kavli

### Bonus notes

#### Jargon

The coalescence of an intermediate-mass black hole and a stellar-mass object (black hole or neutron star) has typically been known as an intermediate mass-ratio inspiral (an IMRI). This is similar to the name for the coalescence of a supermassive black hole and a stellar-mass object: an extreme mass-ratio inspiral (an EMRI). However, my colleague Ilya has pointed out that with LIGO we don’t really see much of the intermediate-mass black hole and the stellar-mass black hole inspiralling together, instead we see the merger and ringdown of the final black hole. Therefore, he prefers the name intermediate mass-ratio coalescence (or IMRAC). It’s a better description of the signal we measure, but the acronym isn’t as good.

#### Parameter-estimation runs

The main parameter-estimation analysis for this paper was done by Zhilu, a summer student. This is notable for two reasons. First, it shows that useful research can come out of a summer project. Second, our parameter-estimation code installed and ran so smoothly that even an undergrad with no previous experience could get some useful results. This made us optimistic that everything would work perfectly in the upcoming observing run (O1). Unfortunately, a few improvements were made to the code before then, and we were back to the usual level of fun in time for The Event.

# Prospects for observing and localizing gravitational-wave transients with Advanced LIGO and Advanced Virgo

The week beginning February 8th was a big one for the LIGO and Virgo Collaborations. You might remember something about a few papers on the merger of a couple of black holes; however, those weren’t the only papers we published that week. In fact, they are not even (currently) the most cited!

Prospects for Observing and Localizing Gravitational-Wave Transients with Advanced LIGO and Advanced Virgo is known within the Collaboration as the Observing Scenarios Document. It has a couple of interesting aspects:

• Its content is a mix of a schedule for detector commissioning and an explanation of data analysis. It is a rare paper that spans both the instrumental and data-analysis sides of the Collaboration.
• It is a living review: it is intended to be periodically updated as we get new information.

There is also one further point of interest for me: I was heavily involved in producing this latest version.

In this post I’m going to give an outline of the paper’s content, but delve a little deeper into the story of how this paper made it to print.

## The Observing Scenarios

The paper is divided up into four sections.

1. It opens, as is traditional, with the introduction. This has no mentions of windows, which is a good start.
2. Section 2 is the instrumental bit. Here we give a possible timeline for the commissioning of the LIGO and Virgo detectors and a plausible schedule for our observing runs.
3. Next we talk about data analysis for transient (short) gravitational waves. We discuss detection and then sky localization.
4. Finally, we bring everything together to give an estimate of how well we expect to be able to locate the sources of gravitational-wave signals as time goes on.

Packaged up, the paper is useful if you want to know when LIGO and Virgo might be observing or if you want to know how we locate the source of a signal on the sky. The aim was to provide a guide for those interested in multimessenger astronomy—astronomy where you rely on multiple types of signals like electromagnetic radiation (light, radio, X-rays, etc.), gravitational waves, neutrinos or cosmic rays.

The development of the detectors’ sensitivity is shown below. It takes many years of tweaking and optimising to reach design sensitivity, but we don’t wait until then to do some science. It’s just as important to practise running the instruments and analysing the data as it is to improve the sensitivity. Therefore, we have a series of observing runs at progressively higher sensitivity. Our first observing run (O1), featured just the two LIGO detectors, which were towards the better end of the expected sensitivity.

Plausible evolution of the Advanced LIGO and Advanced Virgo detectors with time. The lower the sensitivity curve, the further away we can detect sources. The distances quoted are ranges we could observe binary neutrons stars (BNSs) to. The BNS-optimized curve is a proposal to tweak the detectors for finding BNSs. Fig. 1 of the Observing Scenarios Document.

It’s difficult to predict exactly how the detectors will progress (we’re doing many things for the first time ever), but the plot above shows our current best plan.

I’ll not go into any more details about the science in the paper as I’ve already used up my best ideas writing the LIGO science summary.

If you’re particularly interested in sky localization, you might like to check out the data releases for studies using (simulated) binary neutron star and burst signals. The binary neutron star analysis is similar to that we do for any compact binary coalescence (the merger of a binary containing neutron stars or black holes), and the burst analysis works more generally as it doesn’t require a template for the expected signal.

## The path to publication

Now, this is the story of how a Collaboration paper got published. I’d like to take a minute to tell you how I became responsible for updating the Observing Scenarios…

### In the beginning

The Observing Scenarios has its origins long before I joined the Collaboration. The first version of the document I can find is from July 2012. Amongst the labyrinth of internal wiki pages we have, the earliest reference I’ve uncovered was from August 2012 (the plan was to have a mature draft by September). The aim was to give a road map for the advanced-detector era, so the wider astronomical community would know what to expect.

I imagine it took a huge effort to bring together all the necessary experts from across the Collaboration to sit down and write the document.

Any document detailing our plans would need to be updated regularly as we get a better understanding of our progress on commissioning the detectors (and perhaps understanding what signals we will see). Fortunately, there is a journal that can cope with just that: Living Reviews in Relativity. Living Reviews is designed so that authors can update their articles, meaning they never become (too) out-of-date.

A version was submitted to Living Reviews early in 2013, around the same time as a version was posted to the arXiv. We had referee reports (from two referees), and were preparing to resubmit. Unfortunately, Living Reviews suspended operations before we could. However, work continued.

### Updating sky localization

I joined the LIGO Scientific Collaboration when I started at the University of Birmingham in October 2013. I soon became involved in a variety of activities of the Parameter Estimation group (my boss, Alberto Vecchio, is the chair of the group).

Sky localization was a particularly active area as we prepared for the first runs of Advanced LIGO. The original version of the Observing Scenarios Document used a simple approximate means of estimating sky localization, using just timing triangulation (it didn’t even give numbers for when we only had two detectors running). We knew we could do better.

We had all the code developed, but we needed numbers for a realistic population of signals. I was one of the people who helped run the analyses to get these. We had the results by the summer of 2014; we now needed someone to write up the results. I have a distinct recollection of there being silence on our weekly teleconference. Then Alberto asked me if I would do it. I said yes: it would probably only take me a week or two to write a short technical note.

Saying yes is a slippery slope.

That note became Parameter estimation for binary neutron-star coalescences with realistic noise during the Advanced LIGO era, a 24-page paper (it considers more than just sky localization).

Numbers in hand, it was time to update the Observing Scenarios. Even if things were currently on hold with Living Reviews, we could still update the arXiv version. I thought it would be easiest if I put them in, with a little explanation, myself. I compiled a draft and circulated it in the Parameter Estimation group. Then it was time to present to the Data Analysis Council.

The Data Analysis Council either sounds like a shadowy organisation orchestrating things from behind the scenes, or a place where people bicker over trivial technical issues. In reality it is a little of both. This is the body that should coordinate all the various bits of analysis done by the Collaboration, and they have responsibility for the Observing Scenarios Document. I presented my update on the last call before Christmas 2014. They were generally happy, but said that the sky localization on the burst side needed updating too! There was once again a silence on the call when it came to the question of who would finish off the document. The Observing Scenarios became my responsibility.

(I had thought that if I helped out with this Collaboration paper, I could take the next 900 off. This hasn’t worked out.)

### The review

With some help from the Burst group (in particular Reed Essick, who had led their sky localization study), I soon had a new version with fully up-to-date sky localization. This was ready for our March Collaboration meeting. I didn’t go (I was saving my travel budget for the summer), so Alberto presented on my behalf. It was now agreed that the document should go through internal review.

It’s this which I really want to write about. Peer review is central to modern science. New results are always discussed by experts in the community, to try to understand the value of the work; however, peer review is formalised in the refereeing of journal articles, when one or more (usually anonymous) experts examine work before it can be published. There are many ups and downs with this… For Collaboration papers, we want to be sure that things are right before we share them publicly. We go through internal peer review. In my opinion this is much more thorough than journal review, which shows how seriously the Collaboration takes its science.

Unfortunately, setting up the review was also where we hit a hurdle—it took until July. I’m not entirely sure why there was a delay: I suspect it was partly because everyone was busy assembling things ahead of O1 and partly because there were various discussions amongst the high-level management about what exactly we should be aiming for. Working as part of a large collaboration can mean that you get to be involved in wonderful science, but it can also mean lots of bureaucracy and politics. However, in the intervening time, Living Reviews was back in operation.

The review team consisted of five senior people, each of whom had easily five times as much experience as I do, with expertise in each of the areas covered in the document. The chair of the review was Alan Weinstein, head of the Caltech LIGO Laboratory Astrophysics Group, who has an excellent eye for detail. Our aim was to produce the update for the start of O1 in September. (Spoiler: we didn’t make it.)

The review team discussed things amongst themselves and I got the first comments at the end of August. The consensus was that we should not just update the sky localization, but update everything too (including the structure of the document). This precipitated a flurry of conversations with the people who organise the schedules for the detectors, those who liaise with our partner astronomers on electromagnetic follow-up, and everyone who does sky localization. I was initially depressed that we wouldn’t make our start of O1 deadline; however, then something happened that altered my perspective.

On September 14, four days before the official start of O1, we made a detection. GW150914 would change everything.

First, we could no longer claim that binary neutron stars were expected to be our most common source—instead they became the source we expect would most commonly have an electromagnetic counterpart.

Second, we needed to be careful how we described engineering runs. GW150914 occurred in our final engineering run (ER8). Practically, there was little difference between the state of the detector then and in O1. The point of the final engineering run was to get everything running smoothly so all we needed to do at the official start of O1 was open the champagne. However, we couldn’t make any claims about being able to make detections during engineering runs without being crass and letting the cat out of the bag. I’m rather pleased with the sentence

Engineering runs in the commissioning phase allow us to understand our detectors and analyses in an observational mode; these are not intended to produce astrophysical results, but that does not preclude the possibility of this happening.

I don’t know if anyone noticed the implication. (Checking my notes, this was in the September 18 draft, which shows how quickly we realised the possible significance of The Event).

Finally, since the start of observations proved to be interesting, and because the detectors were running so smoothly, it was decided to extend O1 from three months to four so that it would finish in January. No commissioning was going to be done over the holidays, so it wouldn’t affect the schedule. I’m not sure how happy the people who run the detectors were about working over this period, but they agreed to the plan. (No-one asked if we would be happy to run parameter estimation over the holidays).

After half-a-dozen drafts, the review team were finally happy with the document. It was now October 20, and time to proceed to the next step of review: circulation to the Collaboration.

Collaboration papers go through a sequence of stages. First they are circulated to everyone for comments. This can be pointing out typos, suggesting references or asking questions about the analysis. This lasts two weeks. During this time, the results must also be presented on a Collaboration-wide teleconference. After comments are addressed, the paper is sent for examination by the Executive Committees of the LIGO and Virgo Collaborations. After approval from them (and the review team check any changes), the paper is circulated to the Collaboration again for any last comments and checking of the author list. At the same time it is sent to the Gravitational Wave International Committee, a group of all the collaborations interested in gravitational waves. This final stage is a week. Then you can submit the paper.

Peer review for the journal doesn’t seem too arduous in comparison, does it?

Since things were rather busy with all the analysis of GW150914, the Observing Scenarios took a little longer than usual to clear all these hoops. I presented to the Collaboration on Friday 13 November. (This was rather unlucky as I was at a workshop in Italy and I had to miss the tour of the underground Laboratori Nazionali del Gran Sasso). After addressing comments from everyone (the Executive Committees do read things carefully), I got the final sign-off to submit December 21. At least we made it before the end of O1.

### Good things come…

This may sound like a tale of frustration and delay. However, I hope that it is more than that, and it shows how careful the Collaboration is. The Observing Scenarios is really a review: it doesn’t contain new science. The updated sky localization results are from studies which have appeared in peer-reviewed journals, and are based upon codes that have been separately reviewed. Despite this, every statement was examined and every number checked and rechecked, and every member of the Collaboration had opportunity to examine the results and comment on the document.

I guess this attention to detail isn’t surprising given that our work is based on measuring a change in length of one part in 1,000,000,000,000,000,000,000.

Since this is how we treat review articles, can you imagine how much scrutiny the Discovery Paper had? Everything had at least one extra layer of review, every number had to be signed-off individually by the appropriate review team, and there were so many comments on the paper that the editors had to switch to using a ticketing system we normally use for tracking bugs in our software. This level of oversight helped me to sleep a little more easily: there are six numbers in the abstract alone I could have potentially messed up.

Of course, all this doesn’t mean we can’t make mistakes…

### Looking forward

The Living Reviews version was accepted January 22, just after the end of O1. We had to make a couple of tweaks to correct tenses. The final version appeared February 8, in time to be the last paper of the pre-discovery era.

It is now time to be thinking about the next update! There are certainly a few things on the to-do list (perhaps even some news on LIGO-India). We are having a Collaboration meeting in a couple of weeks’ time, so hopefully I can start talking to people about it then. Perhaps it’ll be done by the start of O2? [update]

arXiv: 1304.0670 [gr-qc]
Journal: Living Reviews In Relativity; 19:1(39); 2016
Science summary: Planning for a Bright Tomorrow: Prospects for Gravitational-wave Astronomy with Advanced LIGO and Advanced Virgo
Bonus fact:
This is the only paper whose arXiv ID I know by heart [update].

#### arXiv IDs

Papers whose arXiv numbers I know by heart are: 1304.0670, 1602.03840 (I count to the other GW150914 companion papers from here), 1606.04856 and 1706.01812. These might tell you something about my reading habits.

#### The next version

Despite aiming for the start of O2, the next version wasn’t ready for submission until just after the end of O2, in September 2017. It was finally published (after an exceptionally long time in type-setting) in April 2018.

# GW150914—The papers

In 2015 I made a resolution to write a blog post for each paper I had published. In 2016 I’ll have to break this because there are too many to keep up with. A suite of papers were prepared to accompany the announcement of the detection of GW150914 [bonus note], and in this post I’ll give an overview of these.

### The papers

As well as the Discovery Paper published in Physical Review Letters [bonus note], there are 12 companion papers. All the papers are listed below in order of arXiv posting. My favourite is the Parameter Estimation Paper.

Subsequently, we have produced additional papers on GW150914, describing work that wasn’t finished in time for the announcement. The most up-to-date results are currently given in the O2 Catalogue Paper.

#### 0. The Discovery Paper

Title: Observation of gravitational waves from a binary black hole merger
arXiv:
1602.03837 [gr-qc]
Journal:
Physical Review Letters; 116(6):061102(16); 2016
LIGO science summary:
Observation of gravitational waves from a binary black hole merger

This is the central paper that announces the observation of gravitational waves. There are three discoveries which are described here: (i) the direct detection of gravitational waves, (ii) the existence of stellar-mass binary black holes, and (iii) that the black holes and gravitational waves are consistent with Einstein’s theory of general relativity. That’s not too shabby in under 11 pages (if you exclude the author list). Coming 100 years after Einstein first published his prediction of gravitational waves and Schwarzschild published his black hole solution, this is the perfect birthday present.

More details: The Discovery Paper summary

#### 1. The Detector Paper

Title: GW150914: The Advanced LIGO detectors in the era of first discoveries
arXiv:
1602.03838 [gr-qc]
Journal: Physical Review Letters; 116(13):131103(12); 2016
LIGO science summary: GW150914: The Advanced LIGO detectors in the era of the first discoveries

This paper gives a short summary of how the LIGO detectors work and their configuration in O1 (see the Advanced LIGO paper for the full design). Giant lasers and tiny measurements, the experimentalists do some cool things (even if their paper titles are a little cheesy and they seem to be allergic to error bars).

More details: The Detector Paper summary

#### 2. The Compact Binary Coalescence Paper

Title: GW150914: First results from the search for binary black hole coalescence with Advanced LIGO
arXiv:
1602.03839 [gr-qc]
Journal: Physical Review D; 93(12):122003(21); 2016
LIGO science summary: How we searched for merging black holes and found GW150914

Here we explain how we search for binary black holes and calculate the significance of potential candidates. This is the evidence to back up (i) in the Discovery Paper. We can potentially detect binary black holes in two ways: with searches that use templates, or with searches that look for coherent signals in both detectors without assuming a particular shape. The first type is also used for neutron star–black hole or binary neutron star coalescences, collectively known as compact binary coalescences. This type of search is described here, while the other type is described in the Burst Paper.

This paper describes the compact binary coalescence search pipelines and their results. As well as GW150914 there is also another interesting event, LVT151012. This isn’t significant enough to be claimed as a detection, but it is worth considering in more detail.

More details: The Compact Binary Coalescence Paper summary

#### 3. The Parameter Estimation Paper

Title: Properties of the binary black hole merger GW150914
arXiv:
1602.03840 [gr-qc]
Journal: Physical Review Letters; 116(24):241102(19); 2016
LIGO science summary: The first measurement of a black hole merger and what it means

If you’re interested in the properties of the binary black hole system, then this is the paper for you! Here we explain how we do parameter estimation and how it is possible to extract masses, spins, location, etc. from the signal. These are the results I’ve been most heavily involved with, so I hope lots of people will find them useful! This is the paper to cite if you’re using our best masses, spins, distance or sky maps. The masses we infer are so large we conclude that the system must contain black holes, which is discovery (ii) reported in the Discovery Paper.

More details: The Parameter Estimation Paper summary

#### 4. The Testing General Relativity Paper

Title: Tests of general relativity with GW150914
arXiv:
1602.03841 [gr-qc]
Journal: Physical Review Letters; 116(22):221101(19); 2016
LIGO science summary:
Was Einstein right about strong gravity?

The observation of GW150914 provides a new insight into the behaviour of gravity. We have never before probed such strong gravitational fields or such highly dynamical spacetime. These are the sorts of places you might imagine that we could start to see deviations from the predictions of general relativity. Aside from checking that we understand gravity, we also need to check to see if there is any evidence that our estimated parameters for the system could be off. We find that everything is consistent with general relativity, which is good for Einstein and is also discovery (iii) in the Discovery Paper.

More details: The Testing General Relativity Paper summary

#### 5. The Rates Paper

Title: The rate of binary black hole mergers inferred from Advanced LIGO observations surrounding GW150914
arXiv:
1602.03842 [astro-ph.HE]; 1606.03939 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 833(1):L1(8); 2016; Astrophysical Journal Supplement Series; 227(2):14(11); 2016
LIGO science summary: The first measurement of a black hole merger and what it means

Given that we’ve spotted one binary black hole (plus maybe another with LVT151012), how many more are out there and how many more should we expect to find? We answer this here, although there’s a large uncertainty on the estimates since we don’t know (yet) the distribution of masses for binary black holes.

More details: The Rates Paper summary

#### 6. The Burst Paper

Title: Observing gravitational-wave transient GW150914 with minimal assumptions
arXiv: 1602.03843 [gr-qc]
Journal: Physical Review D; 93(12):122004(20); 2016

What can you learn about GW150914 without having to make the assumptions that it corresponds to gravitational waves from a binary black hole merger (as predicted by general relativity)? This paper describes and presents the results of the burst searches. Since the pipeline which first found GW150914 was a burst pipeline, it seems a little unfair that this paper comes after the Compact Binary Coalescence Paper, but I guess the idea is to first present results assuming it is a binary (since these are tightest) and then see how things change if you relax the assumptions. The waveforms reconstructed by the burst models do match the templates for a binary black hole coalescence.

More details: The Burst Paper summary

#### 7. The Detector Characterisation Paper

Title: Characterization of transient noise in Advanced LIGO relevant to gravitational wave signal GW150914
arXiv: 1602.03844 [gr-qc]
Journal: Classical & Quantum Gravity; 33(13):134001(34); 2016
LIGO science summary:
How do we know GW150914 was real? Vetting a Gravitational Wave Signal of Astrophysical Origin
CQG+ post: How do we know LIGO detected gravitational waves? [featuring awesome cartoons]

Could GW150914 be caused by something other than a gravitational wave: are there sources of noise that could mimic a signal, or ways that the detector could be disturbed to produce something that would be mistaken for a detection? This paper looks at these problems and details all the ways we monitor the detectors and the external environment. We can find nothing that can explain GW150914 (and LVT151012) other than either a gravitational wave or a really lucky random noise fluctuation. I think this paper is extremely important to our ability to claim a detection and I’m surprised it’s not number 2 in the list of companion papers. If you want to know how thorough the Collaboration is in monitoring the detectors, this is the paper for you.

More details: The Detector Characterisation Paper summary

#### 8. The Calibration Paper

Title: Calibration of the Advanced LIGO detectors for the discovery of the binary black-hole merger GW150914
arXiv:
1602.03845 [gr-qc]
Journal: Physical Review D; 95(6):062003(16); 2017
LIGO science summary:
Calibration of the Advanced LIGO detectors for the discovery of the binary black-hole merger GW150914

Completing the triumvirate of instrumental papers with the Detector Paper and the Detector Characterisation Paper, this paper describes how the LIGO detectors are calibrated. There are some cunning control mechanisms involved in operating the interferometers, and we need to understand these to quantify how they affect what we measure. Building a better model for calibration uncertainties is high on the to-do list for improving parameter estimation, so this is an interesting area to watch for me.

More details: The Calibration Paper summary

#### 9. The Astrophysics Paper

Title: Astrophysical implications of the binary black-hole merger GW150914
arXiv:
1602.03846 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 818(2):L22(15); 2016
LIGO science summary:
The first measurement of a black hole merger and what it means

Having estimated source parameters and rate of mergers, what can we say about astrophysics? This paper reviews results related to binary black holes to put our findings in context and also makes statements about what we could hope to learn in the future.

More details: The Astrophysics Paper summary

#### 10. The Stochastic Paper

Title: GW150914: Implications for the stochastic gravitational wave background from binary black holes
arXiv:
1602.03847 [gr-qc]
Journal: Physical Review Letters; 116(13):131102(12); 2016
LIGO science summary: Background of gravitational waves expected from binary black hole events like GW150914

For every loud signal we detect, we expect that there will be many more quiet ones. This paper considers how many quiet binary black hole signals could add up to form a stochastic background. We may be able to see this background as the detectors are upgraded, so we should start thinking about what to do to identify it and learn from it.

More details: The Stochastic Paper summary

#### 11. The Neutrino Paper

Title: High-energy neutrino follow-up search of gravitational wave event GW150914 with ANTARES and IceCube
arXiv:
1602.05411 [astro-ph.HE]
Journal: Physical Review D; 93(12):122010(15); 2016
LIGO science summary: Search for neutrinos from merging black holes

We are interested to see if there’s any other signal that coincides with a gravitational wave signal. We wouldn’t expect something to accompany a black hole merger, but it’s good to check. This paper describes the search for high-energy neutrinos. We didn’t find anything, but perhaps we will in the future (perhaps for a binary neutron star merger).

More details: The Neutrino Paper summary

#### 12. The Electromagnetic Follow-up Paper

Title: Localization and broadband follow-up of the gravitational-wave transient GW150914
arXiv: 1602.08492 [astro-ph.HE]; 1604.07864 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 826(1):L13(8); 2016; Astrophysical Journal Supplement Series; 225(1):8(15); 2016

As well as looking for coincident neutrinos, we are also interested in electromagnetic observations (gamma-ray, X-ray, optical, infra-red or radio). We had a large group of observers interested in following up on gravitational wave triggers, and 25 teams have reported observations. This companion describes the procedure for follow-up observations and discusses sky localisation.

This work split into a main article and a supplement which goes into more technical details.

More details: The Electromagnetic Follow-up Paper summary

### The Discovery Paper

Synopsis: Discovery Paper
Read this if: You want an overview of The Event
Favourite part: The entire conclusion:

The LIGO detectors have observed gravitational waves from the merger of two stellar-mass black holes. The detected waveform matches the predictions of general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.

The Discovery Paper gives the key science results and is remarkably well written. It seems a shame to summarise it: you should read it for yourself! (It’s free).

### The Detector Paper

Synopsis: Detector Paper
Read this if: You want a brief description of the detector configuration for O1
Favourite part: It’s short!

The LIGO detectors contain lots of cool pieces of physics. This paper briefly outlines them all: the mirror suspensions, the vacuum (the LIGO arms are the largest vacuum envelopes in the world and some of the cleanest), the mirror coatings, the laser optics and the control systems. A full description is given in the Advanced LIGO paper, but the specs there are for design sensitivity (it is also heavy reading). The main difference between the current configuration and that for design sensitivity is the laser power. Currently the circulating power in the arms is $100~\mathrm{kW}$; the plan is to go up to $750~\mathrm{kW}$. This will reduce shot noise, but raises all sorts of control issues, such as how to avoid parametric instabilities.
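
As a back-of-the-envelope check (assuming the usual $1/\sqrt{P}$ scaling of photon shot noise with circulating laser power):

```python
import math

# Photon shot noise scales as 1 / sqrt(circulating power), so going from
# 100 kW to 750 kW should improve that particular noise source by a
# factor of sqrt(7.5), roughly 2.7.
shot_noise_improvement = math.sqrt(750.0 / 100.0)
```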

The noise amplitude spectral density. The curves for the current observations are shown in red (dark for Hanford, light for Livingston). This is around a factor 3 better than in the final run of initial LIGO (green), but still a factor of 3 off design sensitivity (dark blue). The light blue curve shows the impact of potential future upgrades. The improvement at low frequencies is especially useful for high-mass systems like GW150914. Part of Fig. 1 of the Detector Paper.

### The Compact Binary Coalescence Paper

Synopsis: Compact Binary Coalescence Paper
Read this if: You are interested in detection significance or in LVT151012
Favourite part: We might have found a second binary black hole merger

There are two compact binary coalescence searches that look for binary black holes: PyCBC and GstLAL. Both match templates to the data from the detectors to look for anything binary-like; they then calculate the probability that such a match would happen by chance due to a random noise fluctuation (the false alarm probability or p-value [unhappy bonus note]). The false alarm probability isn’t the probability that there is a gravitational wave, but gives a good indication of how surprised we should be to find this signal if there wasn’t one. Here we report the results of both pipelines on the first 38.6 days of data (about 17 days where both detectors were working at the same time).

Both searches use the same set of templates to look for binary black holes [bonus note]. They look for where the same template matches the data from both detectors within a time interval consistent with the travel time between the two. However, the two searches rank candidate events and calculate false alarm probabilities using different methods. Basically, both searches use a detection statistic (the quantity used to rank candidates: higher means less likely to be noise) that is based on the signal-to-noise ratio (how loud the signal is) and a goodness-of-fit statistic. They assess the significance of a particular value of this detection statistic by calculating how frequently this would be obtained if there was just random noise (this is done by comparing data from the two detectors when there is not a coincident trigger in both). Consistency between the two searches gives us greater confidence in the results.

PyCBC’s detection statistic is a reweighted signal-to-noise ratio $\hat{\rho}_c$ which takes into account the consistency of the signal in different frequency bands. You can get a large signal-to-noise ratio from a loud glitch, but this doesn’t match the template across a range of frequencies, which is why this test is useful. The consistency is quantified by a reduced chi-squared statistic. This is used, depending on its value, to weight the signal-to-noise ratio. When it is large (indicating inconsistency across frequency bins), the reweighted signal-to-noise ratio becomes smaller.
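
A rough sketch of how such a reweighting works (the functional form below is an assumption for illustration; see the search papers for the exact statistic used):

```python
def reweighted_snr(snr, chisq_r):
    """Penalise loud triggers that fit the template badly across
    frequency bands (chisq_r is the reduced chi-squared)."""
    if chisq_r <= 1.0:
        return snr  # consistent with the template: no penalty
    # larger chi-squared (worse consistency) => smaller reweighted SNR
    return snr * ((1.0 + chisq_r**3) / 2.0) ** (-1.0 / 6.0)
```

A glitch with signal-to-noise ratio 10 but a reduced chi-squared of 3 would be demoted to a reweighted value of about 6.4, while a well-fitting trigger keeps its full signal-to-noise ratio.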

To calculate the background, PyCBC uses time slides. Data from the two detectors are shifted in time so that any coincidences can’t be due to a real gravitational wave. Seeing how often you get something signal-like then tells you how often you’d expect this to happen due to random noise.
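
A toy version of the idea (illustrative only; the real analysis works with ranked triggers, not raw times):

```python
import numpy as np

def background_count(t_h, t_l, n_slides=50, step=5.0, window=0.015):
    """Toy time-slide background estimate: shift one detector's trigger
    times by offsets much longer than the ~10 ms light travel time
    between sites, so any surviving 'coincidences' must be accidental."""
    t_span = max(t_l.max(), t_h.max())
    counts = []
    for k in range(1, n_slides + 1):
        shifted = (t_l + k * step) % t_span  # wrap shifts around the data
        # count Hanford triggers with a shifted Livingston trigger nearby
        counts.append(sum(np.any(np.abs(shifted - t) < window) for t in t_h))
    return np.mean(counts)  # mean accidental coincidences per slide
```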

GstLAL calculates the signal-to-noise ratio and a residual after subtracting the template. As a detection statistic, it uses a likelihood ratio $\mathcal{L}$: the probability of finding the particular values of the signal-to-noise ratio and residual in both detectors for signals (assuming signal sources are uniformly distributed isotropically in space), divided by the probability of finding them for noise.
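
In outline, the ranking looks like this (with made-up one-dimensional distributions standing in for GstLAL’s actual multi-detector signal and noise models):

```python
import math

def likelihood_ratio(rho, p_signal, p_noise):
    """Rank a trigger by how much more probable its statistic value is
    under the signal hypothesis than under the noise hypothesis."""
    return p_signal(rho) / p_noise(rho)

# Illustrative densities: sources uniform in volume give an SNR density
# falling as rho^-4, while noise triggers fall off much faster (here,
# exponentially). Both are assumptions for this sketch.
p_sig = lambda rho: 3.0 * 8.0**3 * rho**-4
p_noise = lambda rho: math.exp(-(rho - 6.0))
```

With these toy densities a loud trigger like GW150914 ($\rho \approx 24$) gets an enormous likelihood ratio, while a quiet one near threshold barely stands out from noise.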

The background from GstLAL is worked out by looking at the likelihood ratio for triggers that only appear in one detector. Since there’s no coincident signal in the other, these triggers can’t correspond to a real gravitational wave. Looking at their distribution tells you how frequently such things happen due to noise, and hence how probable it is for both detectors to see something signal-like at the same time.

The results of the searches are shown in the figure below.

Search results for PyCBC (left) and GstLAL (right). The histograms show the number of candidate events (orange squares) compared to the background. The black line includes GW150914 in the background estimate, while the purple removes it (assuming that it is a signal). The further an orange square is above the lines, the more significant it is. Particle physicists like to quote significance in terms of $\sigma$ and for some reason we’ve copied them. The second most significant event (around $2\sigma$) is LVT151012. Fig. 7 from the Compact Binary Coalescence Paper.

GW150914 is the most significant event in both searches (it is the most significant PyCBC event even considering just single-detector triggers). They both find GW150914 with the same template values. The significance is literally off the charts. PyCBC can only calculate an upper bound on the false alarm probability of $< 2 \times 10^{-7}$. GstLAL calculates a false alarm probability of $1.4 \times 10^{-11}$, but this is reaching the level where we have to worry about the accuracy of the assumptions that go into the calculation (that the distribution of noise triggers is uniform across templates—if this is not the case, the false alarm probability could be about $10^3$ times larger). Therefore, for our overall result, we stick to the upper bound, which is consistent with both searches. The false alarm probability is so tiny, I don't think anyone doubts this signal is real.

There is a second event that pops up above the background. This is LVT151012. It is found by both searches. Its signal-to-noise ratio is $9.6$, compared with GW150914’s $24$, so it is quiet. The false alarm probability from PyCBC is $0.02$, and from GstLAL is $0.05$, consistent with what we would expect for such a signal. LVT151012 does not reach the standard we require to claim a detection, but it is still interesting.

Running parameter estimation on LVT151012, as we did for GW150914, gives beautiful results. If it is astrophysical in origin, it is another binary black hole merger. The component masses are lower, $m_1^\mathrm{source} = 23^{+18}_{-5} M_\odot$ and $m_2^\mathrm{source} = 13^{+4}_{-5} M_\odot$ (the asymmetric uncertainties come from imposing $m_1^\mathrm{source} \geq m_2^\mathrm{source}$); the chirp mass is $\mathcal{M} = 15^{+1}_{-1} M_\odot$. The effective spin, as for GW150914, is close to zero, $\chi_\mathrm{eff} = 0.0^{+0.3}_{-0.2}$. The luminosity distance is $D_\mathrm{L} = 1100^{+500}_{-500}~\mathrm{Mpc}$, meaning it is about twice as far away as GW150914’s source. I hope we’ll write more about this event in the future; there are some more details in the Rates Paper.

Is it random noise or is it a gravitational wave? LVT151012 remains a mystery. This candidate event is discussed in the Compact Binary Coalescence Paper (where it is found), the Rates Paper (which calculates the probability that it is extraterrestrial in origin), and the Detector Characterisation Paper (where known environmental sources fail to explain it). SPOILERS

### The Parameter Estimation Paper

Synopsis: Parameter Estimation Paper
Read this if: You want to know the properties of GW150914’s source
Favourite part: We inferred the properties of black holes using measurements of spacetime itself!

The gravitational wave signal encodes all sorts of information about its source. Here, we explain how we extract this information to produce probability distributions for the source parameters. I wrote about the properties of GW150914 in my previous post, so here I’ll go into a few more technical details.

To measure parameters we match a template waveform to the data from the two instruments. The better the fit, the more likely it is that the source had the particular parameters which were used to generate that particular template. Changing different parameters has different effects on the waveform (for example, changing the distance changes the amplitude, while changing the relative arrival times changes the sky position), so we often talk about different pieces of the waveform containing different pieces of information, even though we fit the whole lot at once.
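
The “better fit is more probable” logic can be sketched in one line: for whitened Gaussian noise, the log-likelihood of the data given a template is (up to a constant) minus half the squared residual. This toy version ignores the frequency-domain noise weighting used in practice:

```python
import numpy as np

def log_likelihood(data, template):
    """Gaussian log-likelihood for whitened data (constant dropped):
    the closer the template tracks the data, the higher the value."""
    return -0.5 * np.sum((data - template) ** 2)
```

Parameter estimation explores the space of template parameters, mapping this likelihood (together with priors) into posterior probability distributions.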

The shape of the gravitational wave encodes the properties of the source. This information is what lets us infer parameters. The example signal is GW150914. I made this explainer with Ban Farr and Nutsinee Kijbunchoo for the LIGO Magazine.

The waveform for a binary black hole merger has three fuzzily defined parts: the inspiral (where the two black holes orbit each other), the merger (where the black holes plunge together and form a single black hole) and ringdown (where the final black hole relaxes to its final state). Having waveforms which include all of these stages is a fairly recent development, and we’re still working on efficient ways of including all the effects of the spin of the initial black holes.

We currently have two favourite binary black hole waveforms for parameter estimation:

• The first we refer to as EOBNR, short for its proper name of SEOBNRv2_ROM_DoubleSpin. This is constructed by using some cunning analytic techniques to calculate the dynamics (known as effective-one-body or EOB) and tuning the results to match numerical relativity (NR) simulations. This waveform only includes the effects of spins aligned with the orbital angular momentum of the binary, so it doesn’t allow us to measure the effects of precession (wobbling around caused by the spins).
• The second we refer to as IMRPhenom, short for IMRPhenomPv2. This is constructed by fitting to the frequency dependence of EOB and NR waveforms. The dominant effects of precession are included by twisting up the waveform.

We’re currently working on results using a waveform that includes the full effects of spin, but that is extremely slow (it’s about half done now), so those results won’t be out for a while.

The results from the two waveforms agree really well, even though they’ve been created by different teams using different pieces of physics. This was a huge relief when I was first making a comparison of results! (We had been worried about systematic errors from waveform modelling.) The consistency of results is partly because our models have improved and partly because the properties of the source are such that the remaining differences aren’t important. We’re quite confident that most of the parameters are reliably measured!

The component masses are the most important factor for controlling the evolution of the waveform, but we don’t measure the two masses independently. The evolution of the inspiral is dominated by a combination called the chirp mass, and the merger and ringdown are dominated by the total mass. For lighter mass systems, where we get lots of inspiral, we measure the chirp mass really well, and for high mass systems, where the merger and ringdown are the loudest parts, we measure the total mass. GW150914 is somewhere in the middle. The probability distribution for the masses is shown below: we can compensate for one of the component masses being smaller if we make the other larger, as this keeps the chirp mass and total mass about the same.
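
The two mass combinations are simple functions of the component masses (standard definitions):

```python
def chirp_mass(m1, m2):
    """Dominates the phasing of the inspiral."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def total_mass(m1, m2):
    """Dominates the merger and ringdown."""
    return m1 + m2
```

For example, LVT151012’s median component masses of 23 and 13 solar masses give a chirp mass of about 15 solar masses, matching the value quoted earlier.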

Estimated masses for the two black holes in the binary. Results are shown for the EOBNR waveform and the IMRPhenom: both agree well. The Overall results come from averaging the two. The dotted lines mark the edge of our 90% probability intervals. The sharp diagonal line cut-off in the two-dimensional plot is a consequence of requiring $m_1^\mathrm{source} \geq m_2^\mathrm{source}$.  Fig. 1 from the Parameter Estimation Paper.

To work out these masses, we need to take into account the expansion of the Universe. As the Universe expands, it stretches the wavelength of the gravitational waves. The same happens to light: visible light becomes redder, so the phenomenon is known as redshifting (even for gravitational waves). If you don’t take this into account, the masses you measure are too large. To work out how much redshift there is you need to know the distance to the source. The probability distribution for the distance is shown below; we plot the distance together with the inclination, since both of these affect the amplitude of the waves (the source is quietest when we look at it edge-on from the side, and loudest when seen face-on/off from above/below).
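
As a rough illustration of the correction, using a simple Hubble-law redshift $z \approx H_0 D / c$ with an assumed value of $H_0$ (the paper uses a full cosmological model, so treat this only as an approximation valid at small redshift):

```python
C_KM_S = 299792.458  # speed of light in km/s
H0 = 70.0            # assumed Hubble constant in km/s/Mpc

def source_frame_mass(m_detector, d_mpc):
    """Undo the redshifting of a measured (detector-frame) mass,
    m_detector = (1 + z) * m_source, using a crude low-z redshift."""
    z = H0 * d_mpc / C_KM_S
    return m_detector / (1.0 + z)
```

At GW150914’s distance of roughly 410 Mpc this gives $z \approx 0.1$, so the detector-frame masses are inflated by about 10% relative to the source-frame values.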

Estimated luminosity distance and binary inclination angle. An inclination of $\theta_{JN} = 90^\circ$ means we are looking at the binary (approximately) edge-on. Results are shown for the EOBNR waveform and the IMRPhenom: both agree well. The Overall results come from averaging the two. The dotted lines mark the edge of our 90% probability intervals.  Fig. 2 from the Parameter Estimation Paper.

After the masses, the most important properties for the evolution of the binary are the spins. We don’t measure these too well, but the probability distributions for their magnitudes and orientations from the precessing IMRPhenom model are shown below. Both waveform models agree that the effective spin $\chi_\mathrm{eff}$ (which is a combination of both spins in the direction of the orbital angular momentum) is small. Therefore, either the spins are small or they are larger but not aligned (or antialigned) with the orbital angular momentum. The spin of the more massive black hole is the better measured of the two.

Estimated orientation and magnitude of the two component spins from the precessing IMRPhenom model. The magnitude is between 0 and 1 and is perfectly aligned with the orbital angular momentum if the angle is 0. The distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Part of Fig. 5 from the Parameter Estimation Paper.

### The Testing General Relativity Paper

Synopsis: Testing General Relativity Paper
Read this if: You want to know more about the nature of gravity.
Favourite part: Einstein was right! (Or more correctly, we can’t prove he was wrong… yet)

The Testing General Relativity Paper is one of my favourites as it packs a lot of science in. Our first direct detection of gravitational waves and of the merger of two black holes provides a new laboratory to test gravity, and this paper runs through the results of the first few experiments.

Before we start making any claims about general relativity being wrong, we first have to check if there’s any weird noise present. You don’t want to have to rewrite the textbooks just because of an instrumental artifact. After taking out a good guess for the waveform (as predicted by general relativity), we find that the residuals do match what we expect for instrumental noise, so we’re good to continue.

I’ve written about a couple of tests of general relativity in my previous post: the consistency of the inspiral and merger–ringdown parts of the waveform, and the bounds on the mass of the graviton (from evolution of the signal). I’ll cover the others now.

The final part of the signal, where the black hole settles down to its final state (the ringdown), is the place to look to check that the object is a black hole and not some other type of mysterious dark and dense object. It is tricky to measure this part of the signal, but we don’t see anything odd. We can’t yet confirm that the object has all the properties you’d want to pin down that it is exactly a black hole as predicted by general relativity; we’re going to have to wait for a louder signal for this. This test is especially poignant, as Steven Detweiler, who pioneered a lot of the work calculating the ringdown of black holes, died a week before the announcement.

We can allow terms in our waveform (here based on the IMRPhenom model) to vary and see which values best fit the signal. If there is evidence for differences compared with the predictions of general relativity, we would have evidence for needing an alternative. Results for this analysis are shown below for a set of different waveform parameters $\hat{p}_i$: the $\varphi_i$ parameters determine the inspiral, the $\alpha_i$ parameters determine the merger–ringdown and the $\beta_i$ parameters cover the intermediate regime. If the deviation $\delta \hat{p}_i$ is zero, the value coincides with the value from general relativity. The plot shows what would happen if you allow all the variables to vary at once (the multiple results) and if you tried just that parameter on its own (the single results).

Probability distributions for waveform parameters. The single analysis only varies one parameter, the multiple analysis varies all of them, and the J0737-3039 result is the existing bound from the double pulsar. A deviation of zero is consistent with general relativity. Fig. 7 from the Testing General Relativity Paper.

Overall the results look good. Some of the single results are centred away from zero, but we think that this is just a random fluctuation caused by noise (we’ve seen similar behaviour in tests, so don’t panic yet). It’s not surprising that $\varphi_3$, $\varphi_4$ and $\varphi_{5l}$ all show this behaviour, as they are sensitive to similar noise features. These measurements are much tighter than from any test we’ve done before, except for the measurement of $\varphi_0$, which is better measured from the double pulsar (since we have measured lots and lots of its orbits).

The final test is to look for additional polarizations of gravitational waves. These are predicted in several alternative theories of gravity. Unfortunately, because we only have two detectors which are pretty much aligned we can’t say much, at least without knowing for certain the location of the source. Extra detectors will be useful here!

In conclusion, we have found no evidence to suggest we need to throw away general relativity, but future events will help us to perform new and stronger tests.

### The Rates Paper

Synopsis: Rates Paper
Read this if: You want to know how often binary black holes merge (and how many we’ll detect)
Favourite part: There’s a good chance we’ll have ten detections by the end of our second observing run (O2)

Before September 14, we had never seen a binary stellar-mass black hole system. We were therefore rather uncertain about how many we would see. We had predictions based on simulations of the evolution of stars and their dynamical interactions. These said we shouldn’t be too surprised if we saw something in O1, but that we shouldn’t be surprised if we didn’t see anything for many years either. We weren’t really expecting to see a black hole system so soon (the smart money was on a binary neutron star). However, we did find a binary black hole, and this happened right at the start of our observations! What do we now believe about the rate of mergers?

To work out the rate, you first need to count the number of events you have detected and then work out how sensitive you are to the population of signals (how many could you see out of the total).

Counting detections sounds simple: we have GW150914 without a doubt. However, what about all the quieter signals? If you have 100 events, each with a 1% probability of being real, then even though you can’t say with certainty that any one of them is an actual signal, you would expect one of them to be. We want to work out how many events are real and how many are due to noise. Handily, trying to tell apart different populations of things when you’re not certain about individual members is a common problem in astrophysics (where it’s often difficult to go and check what something actually is), so there exists a probabilistic framework for doing this.
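The 100-events example can be sketched directly: given a probability that each marginal candidate is real, the expected number of real signals is just the sum of the probabilities, even if no single candidate is certain. The numbers below are made up for illustration; the actual analysis fits the foreground and background populations jointly.

```python
# Toy version of counting uncertain events: 100 candidates, each with
# a 1% probability of being astrophysical (illustrative numbers only).
p_astro = [0.01] * 100

# The expected number of real signals is the sum of the probabilities...
expected_real = sum(p_astro)

# ...and even though any particular candidate is probably noise,
# the chance that at least one is real is high:
p_none = 1.0
for p in p_astro:
    p_none *= (1.0 - p)
p_at_least_one = 1.0 - p_none

print(f"expected real signals: {expected_real:.1f}")   # ≈ 1.0
print(f"P(at least one real): {p_at_least_one:.2f}")   # ≈ 0.63
```

This is why quiet candidates like LVT151012 still matter for the rate: each contributes its probability of being real to the expected count.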

Using the expected number of real and noise events for a given detection statistic (as described in the Compact Binary Coalescence Paper), we count the number of detections and, as a bonus, get a probability that each event is of astrophysical origin. There are two events with more than a 50% chance of being real: GW150914, where the probability is close to 100%, and LVT151012, where the probability is 84% based on GstLAL and 91% based on PyCBC.

By injecting lots of fake signals into some data and running our detection pipelines, we can work out how sensitive they are (in effect, how far away they can find particular types of sources). For a given number of detections, the more sensitive we are, the lower the actual rate of mergers should be (with lower sensitivity we would miss more, while there’s no hiding from higher sensitivity).
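The injection idea can be illustrated with a toy Monte Carlo: spray fake sources uniformly in volume, count the recovered fraction, turn that into a sensitive volume–time $\langle VT \rangle$, and divide the number of detections by it. Every number here (the horizon distance, SNR scaling, observing time) is made up for illustration; the real campaign injects full waveforms into the data and runs the actual pipelines.

```python
import math
import random

random.seed(0)

# Toy injection campaign: sources uniform in a sphere of radius 2 Gpc.
# A source is "found" if its signal-to-noise ratio, falling off as
# 1/distance, beats a threshold. All values are illustrative.
R_MAX = 2.0         # Gpc, radius of the injected volume
SNR_AT_1GPC = 2.4   # assumed SNR for a source at 1 Gpc (made up)
THRESHOLD = 8.0     # detection threshold

n_inj = 100_000
found = 0
for _ in range(n_inj):
    # Uniform in volume: p(r) ∝ r², so draw r = R_MAX * u^(1/3)
    r = R_MAX * random.random() ** (1.0 / 3.0)
    if SNR_AT_1GPC / r > THRESHOLD:
        found += 1

volume = (4.0 / 3.0) * math.pi * R_MAX**3   # Gpc³
time_yr = 0.13                               # roughly the O1 data analysed, in years
vt = (found / n_inj) * volume * time_yr      # sensitive <VT> in Gpc³ yr
rate = 2 / vt                                # two candidate events
print(f"sensitive <VT> ≈ {vt:.3f} Gpc³ yr, rate ≈ {rate:.0f} Gpc⁻³ yr⁻¹")
```

With these made-up numbers the toy lands in the same ballpark as the published rates, but only the logic (rate = count divided by sensitive volume–time) carries over to the real analysis.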

There is one final difficulty in working out the total number of binary black hole mergers: we need to know the distribution of masses, because our sensitivity depends on this. However, we don’t yet know this as we’ve only seen GW150914 and (maybe) LVT151012. Therefore, we try three possibilities to get an idea of what the merger rate could be.

1. We assume that binary black holes are either like GW150914 or like LVT151012. Given that these are our only possible detections at the moment, this should give a reasonable estimate. A similar approach has been used for estimating the population of binary neutron stars from pulsar observations [bonus note].
2. We assume that the distribution of masses is flat in the logarithm of the masses. This probably gives more heavy black holes than in reality (and so a lower merger rate).
3. We assume that black holes follow a power law like the initial masses of stars. This probably gives too many low mass black holes (and so a higher merger rate).

The estimated merger rates (number of binary black hole mergers per volume per time) are then: 1. $83^{+168}_{-63}~\mathrm{Gpc^{-3}\,yr^{-1}}$; 2. $61^{+124}_{-48}~\mathrm{Gpc^{-3}\,yr^{-1}}$, and 3. $200^{+400}_{-160}~\mathrm{Gpc^{-3}\,yr^{-1}}$. There is a huge scatter, but the flat and power-law rates hopefully bound the true value.

We’ll pin down the rate better after a few more detections. How many more should we expect to see? Using the projected sensitivity of the detectors over our coming observing runs, we can work out the probability of making $N$ more detections. This is shown in the plot below. It looks like there’s about a 10% chance of not seeing anything else in O1, but we’re confident that we’ll have 10 more by the end of O2, and 35 more by the end of O3! I may need to lie down…
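The forecast works because detections arrive as a Poisson process: once you know the expected number of events for a stretch of observing, the probability of any particular count follows. A minimal sketch, with an illustrative expected count chosen to match the quoted ~10% chance of nothing more in O1 (not the paper’s actual value):

```python
from math import exp, factorial

def poisson_at_least(k, lam):
    """P(N >= k) for a Poisson-distributed count with mean lam."""
    return 1.0 - sum(lam**n * exp(-lam) / factorial(n) for n in range(k))

# If the rest of O1 were expected to yield ~2.3 events on average
# (an illustrative number), the chance of seeing nothing else is
# exp(-2.3) ≈ 10%, matching the quoted figure.
lam_o1 = 2.3
print(f"P(no more detections in O1) = {exp(-lam_o1):.2f}")
print(f"P(at least 3 more in O1)    = {poisson_at_least(3, lam_o1):.2f}")
```

Scaling the expected count up with improved sensitivity and longer runs gives the curves for 10, 35 and 70 more detections in the figure below.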

The percentage chance of making 0, 10, 35 and 70 more detections of binary black holes as time goes on and detector sensitivity improves (based upon our data so far). This is a simplified version of part of Fig. 3 of the Rates Paper taken from the science summary.

### The Burst Paper

Synopsis: Burst Paper
Read this if: You want to check what we can do without a waveform template
Favourite part: You don’t need a template to make a detection

When discussing what we can learn from gravitational wave astronomy, you can almost guarantee that someone will say something about discovering the unexpected. Whenever we’ve looked at the sky in a new band of the electromagnetic spectrum, we found something we weren’t looking for: pulsars for radio, gamma-ray bursts for gamma-rays, etc. Can we do the same in gravitational wave astronomy? There may well be signals we weren’t anticipating out there, but will we be able to detect them? The burst pipelines have our back here, at least for short signals.

The burst search pipelines, like their compact binary coalescence partners, assign candidate events a detection statistic and then work out a probability associated with being a false alarm caused by noise. The difference is that the burst pipelines try to find a wider range of signals.

There are three burst pipelines described: coherent WaveBurst (cWB), which famously first found GW150914; omicron–LALInferenceBurst (oLIB), and BayesWave, which follows up on cWB triggers.

As you might guess from the name, cWB looks for a coherent signal in both detectors. It looks for excess power (indicating a signal) in a time–frequency plot, and then classifies candidates based upon their structure. There’s one class for blip glitches and resonance lines (see the Detector Characterisation Paper), which are all thrown away as noise; one class for chirp-like signals that increase in frequency with time, which is where GW150914 was found; and one class for everything else. cWB’s detection statistic $\eta_c$ is something like a signal-to-noise ratio constructed from the correlated power in the detectors. The value for GW150914 was $\eta_c = 20$, which is higher than for any other candidate. The false alarm probability (or p-value), folding in all three search classes, is $2\times 10^{-6}$, which is pretty tiny, even if not as significant as for the tailored compact binary searches.

The oLIB search has two stages. First it makes a time–frequency plot and looks for power coincident between the two detectors. Likely candidates are then followed up by matching a sine–Gaussian wavelet to the data, using a similar algorithm to the one used for parameter estimation. Its detection statistic is something like a likelihood ratio for signal versus noise. It calculates a false alarm probability of about $2\times 10^{-6}$ too.

BayesWave fits a variable number of sine–Gaussian wavelets to the data. This can model both a signal (when the wavelets are the same for both detectors) and glitches (when the wavelets are independent). This is really clever, but is too computationally expensive to be left running on all the data. Therefore, it follows up on things highlighted by cWB, potentially increasing their significance. Its detection statistic is the Bayes factor comparing the signal and glitch models. It estimates the false alarm probability to be about $7 \times 10^{-7}$ (which agrees with the cWB estimate if you only consider chirp-like triggers).

None of the searches find LVT151012. However, as this is a quiet, lower mass binary black hole, I think that this is not necessarily surprising.

cWB and BayesWave also output a reconstruction of the waveform. Reassuringly, this does look like binary black hole coalescence!

Gravitational waveforms from our analyses of GW150914. The wiggly grey lines are the data from Hanford (top) and Livingston (bottom); these are analysed coherently. The plots show waveforms whitened by the noise power spectral density. The dark band shows the waveform reconstructed by BayesWave without assuming that the signal is from a binary black hole (BBH). The light bands show the distribution of BBH template waveforms that were found to be most probable from our parameter-estimation analysis. The two techniques give consistent results: the match between the two models is $94^{+2}_{-3}\%$. Fig. 6 of the Parameter Estimation Paper.

The paper concludes by performing some simple fits to the reconstructed waveforms. For this, you do have to assume that the signal came from a binary black hole. They find parameters roughly consistent with those from the full parameter-estimation analysis, which is a nice sanity check of our results.

### The Detector Characterisation Paper

Synopsis: Detector Characterisation Paper
Read this if: You’re curious if something other than a gravitational wave could be responsible for GW150914 or LVT151012
Favourite part: Mega lightning bolts can cause correlated noise

The output from the detectors that we analyse for signals is simple. It is a single channel that records the strain. To monitor instrumental behaviour and environmental conditions the detector characterisation team record over 200,000 other channels. These measure everything from the alignment of the optics through ground motion to incidence of cosmic rays. Most of the data taken by LIGO is to monitor things which are not gravitational waves.

This paper examines all the potential sources of noise in the LIGO detectors, how we monitor them to ensure they are not confused for a signal, and the impact they could have on estimating the significance of events in our searches. It is amazingly thorough work.

There are lots of potential noise sources for LIGO. Uncorrelated noise sources happen independently at both sites, therefore they can only be mistaken for a gravitational wave if by chance two occur at the right time. Correlated noise sources affect both detectors, and so could be more confusing for our searches, although there’s no guarantee that they would cause a disturbance that looks anything like a binary black hole merger.

Sources of uncorrelated noise include:

• Ground motion caused by earthquakes or ocean waves. These create wibbling which can affect the instruments, even though they are well isolated. This is usually at low frequencies (below $0.1~\mathrm{Hz}$ for earthquakes, although it can be higher if the epicentre is near), unless there is motion in the optics around (which can couple to cause higher frequency noise). There is a network of seismometers to measure earthquakes at both sites. There were two magnitude 2.1 earthquakes within 20 minutes of GW150914 (one off the coast of Alaska, the other south-west of Seattle), but both produced ground motion that is ten times too small to impact the detectors. There was some low frequency noise in Livingston at the time of LVT151012 which is associated with a period of bad ocean waves. However, there is no evidence that this could be converted to the frequency range associated with the signal.
• People moving around near the detectors can also cause vibrational or acoustic disturbances. People are kept away from the detectors while they are running and accelerometers, microphones and seismometers monitor the environment.
• Modulation of the lasers at $9~\mathrm{MHz}$ and $45~\mathrm{MHz}$ is done to monitor and control several parts of the optics. There is a fault somewhere in the system which means that there is a coupling to the output channel and we get noise across $10~\mathrm{Hz}$ to $2~\mathrm{kHz}$, which is where we look for compact binary coalescences. Rai Weiss suggested shutting down the instruments to fix the source of this and delaying the start of observations—it’s a good job we didn’t. Periods of data where this fault occurs are flagged and not included in the analysis.
• Blip transients are a short glitch that occurs for unknown reasons. They’re quite mysterious. They are at the right frequency range ($30~\mathrm{Hz}$ to $250~\mathrm{Hz}$) to be confused with binary black holes, but don’t have the right frequency evolution. They contribute to the background of noise triggers in the compact binary coalescence searches, but are unlikely to be the cause of GW150914 or LVT151012 since they don’t have the characteristic chirp shape.

A time–frequency plot of a blip glitch in LIGO-Livingston. Blip glitches are the right frequency range to be confused with binary coalescences, but don’t have the chirp-like structure. Blips are symmetric in time, whereas binary coalescences sweep up in frequency. Fig. 3 of the Detector Characterisation Paper.

Correlated noise can be caused by:

• Electromagnetic signals which can come from lightning, solar weather or radio communications. This is measured by radio receivers and magnetometers, and it’s extremely difficult to produce a signal that is strong enough to have any impact on the detectors’ output. There was one strong (peak current of about $500~\mathrm{kA}$) lightning strike in the same second as GW150914 over Burkina Faso. However, the magnetic disturbances were at least a thousand times too small to explain the amplitude of GW150914.
• Cosmic ray showers can cause electromagnetic radiation and particle showers. The particle flux becomes negligible after a few kilometres, so it’s unlikely that both Livingston and Hanford would be affected, but just in case there is a cosmic ray detector at Hanford. It has seen nothing suspicious.

All the monitoring channels give us a lot of insight into the behaviour of the instruments. Times which can be identified as having especially bad noise properties (where the noise could influence the measured output), or where the detectors are not working properly, are flagged and not included in the search analyses. Applying these vetoes means that we can’t claim a detection when we know something else could mimic a gravitational wave signal, but it also helps us clean up our background of noise triggers. This has the impact of increasing the significance of the triggers which remain (since there are fewer false alarms they could be confused with). For example, if we leave the bad period in, the PyCBC false alarm probability for LVT151012 goes up from $0.02$ to $0.14$. The significance of GW150914 is so great that we don’t really need to worry about the effects of vetoes.
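Why cleaning the background boosts the remaining triggers can be seen from the simplest way a false alarm probability gets estimated: counting background triggers at least as loud as the candidate. The trigger counts below are made up to reproduce the quoted $0.14 \to 0.02$ change; the real estimate uses time-shifted data, as described in the Compact Binary Coalescence Paper.

```python
# Sketch: empirical false alarm probability from background counts.
# A candidate's FAP is roughly the fraction of background triggers
# at least as loud as it (counts below are illustrative only).
def false_alarm_probability(n_louder, n_background):
    return (n_louder + 1) / (n_background + 1)

# With glitchy periods left in, loud noise triggers outrank the candidate:
print(false_alarm_probability(n_louder=13, n_background=99))  # → 0.14
# After vetoes remove the glitches, far fewer background triggers beat it:
print(false_alarm_probability(n_louder=1, n_background=99))   # → 0.02
```

The candidate itself hasn’t changed; it just has fewer false alarms left to be confused with.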

At the time of GW150914 the detectors were running well, the data around the event are clean, and there is nothing in any of the auxiliary channels that record anything which could have caused the event. The only source of a correlated signal which has not been ruled out is a gravitational wave from a binary black hole merger. The time–frequency plots of the measured strains are shown below, and it’s easy to pick out the chirps.

Time–frequency plots for GW150914 as measured by Hanford (left) and Livingston (right). These show the characteristic increase in frequency with time of the chirp of a binary merger. The signal is clearly visible above the noise. Fig. 10 of the Detector Characterisation Paper.

The data around LVT151012 are significantly less stationary than around GW150914. There was an elevated noise transient rate around this time. This is probably due to extra ground motion caused by ocean waves. This low frequency noise is clearly visible in the Livingston time–frequency plot below. There is no evidence that this gets converted to higher frequencies though. None of the detector characterisation results suggest that LVT151012 was caused by a noise artifact.

Time–frequency plots for LVT151012 as measured by Hanford (left) and Livingston (right). You can see the characteristic increase in frequency with time of the chirp of a binary merger, but this is mixed in with noise. The scale is reduced compared with for GW150914, which is why noise features appear more prominent. The band at low frequency in Livingston is due to ground motion; this is not present in Hanford. Fig. 13 of the Detector Characterisation Paper.

If you’re curious about the state of the LIGO sites and their array of sensors, you can see more about the physical environment monitors at pem.ligo.org.

### The Calibration Paper

Synopsis: Calibration Paper
Read this if: You like control engineering or precision measurement
Favourite part: Not only are the LIGO detectors sensitive enough to feel the push from a beam of light, they are so sensitive that you have to worry about where on the mirrors you push

We want to measure the gravitational wave strain—the change in length across our detectors caused by a passing gravitational wave. What we actually record is the intensity of laser light at the output of our interferometer. (The output should be dark when the strain is zero, and the intensity increases when the interferometer is stretched or squashed). We need a way to convert intensity to strain, and this requires careful calibration of the instruments.

The calibration is complicated by the control systems. The LIGO instruments are incredibly sensitive, and maintaining them in a stable condition requires lots of feedback systems. These can impact how the strain is transduced into the signal read out by the interferometer. A schematic of how what would be the change in the length of the arms without control systems $\Delta L_\mathrm{free}$ is changed into the measured strain $h$ is shown below. The calibration pipeline builds models to correct for the effects of the control system to provide an accurate model of the true gravitational wave strain.
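A schematic single-frequency sketch of the loop correction (my toy, not the pipeline’s actual frequency-dependent filters): with a sensing function $C$ and open-loop gain $G = CAD$ (actuation $A$, digital filters $D$), the feedback suppresses what reaches the readout, and calibration inverts that suppression. All the gain values below are made-up complex numbers.

```python
# Schematic control-loop correction at a single frequency.
# The loop suppresses the error signal:
#     d_err = C / (1 + G) * dL_free
# so calibration inverts it to recover the free arm-length change:
#     dL_free = (1 + G) / C * d_err
C = 2.0 + 0.5j    # sensing gain (illustrative)
A = 0.8 - 0.1j    # actuation (illustrative)
D = 1.5 + 0.0j    # digital filters (illustrative)
G = C * A * D     # open-loop gain

dL_free = 1e-18                    # made-up arm-length change (metres)
d_err = C / (1 + G) * dL_free      # what the loop lets through
recovered = (1 + G) / C * d_err    # inverting the loop recovers the input
print(abs(recovered - dL_free) < 1e-30)  # → True
```

In the real instrument each of these gains varies with frequency and drifts with time, which is why the photon calibrator measurements described below are needed to pin them down.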

Model for how a differential arm length caused by a gravitational wave $\Delta L_\mathrm{free}$ or a photon calibration signal $x_\mathrm{T}^\mathrm{(PC)}$ is converted into the measured signal $h$. Fig. 2 from the Calibration Paper.

To measure the different responses of the system, the calibration team make several careful measurements. The primary means is using photon calibration: an auxiliary laser is used to push the mirrors and the response is measured. The spots where the lasers are pointed are carefully chosen to minimise distortion to the mirrors caused by pushing on them. A secondary means is to use actuators which are parts of the suspension system to excite the system.

As a cross-check, we can also use two auxiliary green lasers to measure changes in length using either a frequency modulation or their wavelength. These are similar approaches to those used in initial LIGO. These do give results consistent with the other methods, but they are not as accurate.

Overall, the uncertainty in the calibration of the amplitude of the strain is less than $10\%$ between $20~\mathrm{Hz}$ and $1~\mathrm{kHz}$, and the uncertainty in phase calibration is less than $10^\circ$. These are the values that we use in our parameter-estimation runs. However, the calibration uncertainty actually varies as a function of frequency, with some ranges having much less uncertainty. We’re currently working on implementing a better model for the uncertainty, which may improve our measurements. Fortunately, the masses aren’t too affected by the calibration uncertainty, but sky localization is, so we might get some gain here. We’ll hopefully produce results with updated calibration in the near future.

### The Astrophysics Paper

Synopsis: Astrophysics Paper
Read this if: You are interested in how binary black holes form
Favourite part: We might be able to see similar mass binary black holes with eLISA before they merge in the LIGO band [bonus note]

This paper puts our observations of GW150914 in context with regards to existing observations of stellar-mass black holes and theoretical models for binary black hole mergers. Although it doesn’t explicitly mention LVT151012, most of the conclusions would be just as applicable to its source, if it is real. I expect there will be rapid development of the field now, but if you want to catch up on some background reading, this paper is the place to start.

The paper contains lots of references to good papers to delve into. It also highlights the main conclusions we can draw in italics, so it’s easy to skim through if you want a summary. I discussed the main astrophysical conclusions in my previous post. We will know more about binary black holes and their formation when we get more observations, so I think it is a good time to get interested in this area.

### The Stochastic Paper

Synopsis: Stochastic Paper
Read this if: You like stochastic backgrounds
Favourite part: We might detect a background in the next decade

A stochastic gravitational wave background could be created by an incoherent superposition of many signals. In pulsar timing, they are looking for a background from many merging supermassive black holes. Could we have a similar thing from stellar-mass black holes? The loudest signals, like GW150914, are resolvable: they stand out from the background. However, for every loud signal, there will be many quiet signals, and the ones below our detection threshold could form a background. Since we’ve found that binary black hole mergers are probably plentiful, the background may be at the high end of previous predictions.

The background from stellar-mass black holes is different from the one from supermassive black holes because the signals are short. While the supermassive black holes produce an almost constant hum throughout your observations, stellar-mass black hole mergers produce short chirps. Instead of having lots of signals that overlap in time, we have a popcorn background, with one arriving on average every 15 minutes. This might allow us to do some different things when it comes to detection, but for now, we just use the standard approach.
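A quick back-of-the-envelope shows why this background is popcorn rather than hum: with one merger every ~15 minutes, and each chirp spending only of order a second in the sensitive band (a rough, illustrative duration), a signal is present only a tiny fraction of the time.

```python
# Sketch: duty cycle of the binary black hole "popcorn" background.
# Both numbers are rough, illustrative values.
mean_interval = 15 * 60   # seconds between mergers, on average
duration = 1.0            # seconds a chirp spends in band (rough)

duty_cycle = duration / mean_interval
print(f"fraction of time with a signal in band: {duty_cycle:.2%}")  # ≈ 0.11%
```

Contrast this with supermassive binaries in the pulsar-timing band, whose signals last far longer than the observation, so everything overlaps into a continuous hum.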

This paper calculates the energy density of gravitational waves from binary black holes, excluding the contribution from signals loud enough to be detected. This is done for several different models. The standard (fiducial) model assumes parameters broadly consistent with those of GW150914’s source, plus a particular model for the formation of merging binaries. There are then variations on the model for formation, considering different time delays between formation and merger, and adding in lower mass systems consistent with LVT151012. All these models are rather crude, but give an idea of potential variations in the background. Hopefully more realistic distributions will be considered in the future. There is some change between models, but this is within the (considerable) statistical uncertainty, so the predictions seem robust.

Different models for the stochastic background of binary black holes. This is plotted in terms of energy density. The red band indicates the uncertainty on the fiducial model. The dashed line indicates the sensitivity of the LIGO and Virgo detectors after several years at design sensitivity. Fig. 2 of the Stochastic Paper.

After a couple of years at design sensitivity we may be able to make a confident detection of the stochastic background. The background from binary black holes is more significant than we expected.

If you’re wondering about if we could see other types of backgrounds, such as one of cosmological origin, then the background due to binary black holes could make detection more difficult. In effect, it acts as another source of noise, masking the other background. However, we may be able to distinguish the different backgrounds by measuring their frequency dependencies (we expect them to have different slopes), if they are loud enough.
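The slope-based separation can be sketched with a toy fit: a compact-binary background rises as $\Omega(f) \propto f^{2/3}$ at low frequencies (a standard result for inspiral-dominated backgrounds), while many cosmological models predict flatter spectra, so a log–log slope fit can tell them apart if the signal is loud enough. The frequencies and the noiseless spectrum below are illustrative.

```python
import math

# Toy log-log least-squares fit to a binary-like background spectrum,
# Omega(f) ∝ f^(2/3) (noiseless, illustrative frequencies).
freqs = [20.0, 30.0, 50.0, 80.0, 100.0]
omega = [f ** (2.0 / 3.0) for f in freqs]

x = [math.log(f) for f in freqs]
y = [math.log(o) for o in omega]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
print(f"fitted spectral slope: {slope:.3f}")  # recovers 2/3
```

With real data the fit would have to contend with detector noise and the overlap of the two backgrounds, which is why both need to be reasonably loud before their slopes can be distinguished.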

### The Neutrino Paper

Synopsis: Neutrino Paper
Read this if: You really like high energy neutrinos
Favourite part: We’re doing astronomy with neutrinos and gravitational waves—this is multimessenger astronomy without any form of electromagnetic radiation

There are multiple detectors that can look for high energy neutrinos. Currently, LIGO–Virgo observations are being followed up by searches from ANTARES and IceCube. Both of these are Cherenkov detectors: they look for flashes of light created by fast moving particles, not the neutrinos themselves, but things they’ve interacted with. ANTARES searches the waters of the Mediterranean while IceCube uses the ice of Antarctica.

Within 500 seconds either side of the time of GW150914, ANTARES found no neutrinos and IceCube found three. These results are consistent with background levels (over that window you would expect, on average, fewer than one neutrino from ANTARES and 4.4 from IceCube). Additionally, none of the IceCube neutrinos are consistent with the sky localization of GW150914 (even though the sky area is pretty big). There is no sign of a neutrino counterpart, which is what we were expecting.
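How consistent is three with background? A Poisson check makes the point: if you expect 4.4 background neutrinos, seeing three or more is entirely unremarkable.

```python
from math import exp, factorial

def p_at_least(k, lam):
    """Poisson probability of observing k or more events given mean lam."""
    return 1.0 - sum(lam**n * exp(-lam) / factorial(n) for n in range(k))

# IceCube: 3 neutrinos observed, ~4.4 expected from background alone.
print(f"P(>= 3 | background) = {p_at_least(3, 4.4):.2f}")  # ≈ 0.81
```

Background alone produces three or more neutrinos in such a window about 81% of the time, so there is nothing here demanding an astrophysical explanation.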

Subsequent non-detections have been reported by KamLAND, the Pierre Auger Observatory, Super-Kamiokande, Borexino and NOvA.

### The Electromagnetic Follow-up Paper

Synopsis: Electromagnetic Follow-up Paper
Read this if: You are interested in the search for electromagnetic counterparts
Favourite part: So many people were involved in this work that not only do we have to abbreviate the list of authors (Abbott, B.P. et al.), but we should probably abbreviate the list of collaborations too (LIGO Scientific & Virgo Collaboration et al.)

This is the last of the set of companion papers to be released—it took a huge amount of coordinating because of all the teams involved. The paper describes how we released information about GW150914. This should not be typical of how we will do things going forward (i) because we didn’t have all the infrastructure in place on September 14 and (ii) because it was the first time we had something we thought was real.

The first announcement was sent out on September 16, and this contained sky maps from the Burst codes cWB and LIB. In the future, we should be able to send out automated alerts with a few minutes latency.

For the first alert, we didn’t have any results which assumed that the source was a binary, as the searches which issue triggers at low latency were only looking for lower mass systems which would contain a neutron star. I suspect we’ll be reprioritising things going forward. The first information we shared about the potential masses for the source was shared on October 3. Since this was the first detection, everyone was cautious about triple-checking results, which caused the delay. Revised false alarm rates including results from GstLAL and PyCBC were sent out October 20.

The final sky maps were shared January 13. This is when we’d about finished our own reviews and knew that we would be submitting the papers soon [bonus note]. Our best sky map is the one from the Parameter Estimation Paper. You might expect it to be more constraining than the results from the burst pipelines, since it uses a proper model for the gravitational waves from a binary black hole. This is the case if we ignore calibration uncertainty (which is not yet included in the burst codes): then the 50% area is $48~\mathrm{deg}^2$ and the 90% area is $150~\mathrm{deg^2}$. However, including calibration uncertainty, the sky areas are $150~\mathrm{deg^2}$ and $590~\mathrm{deg^2}$ at 50% and 90% probability respectively. Calibration uncertainty has the largest effect on the sky area. All the sky maps agree that the source is in some region of the annulus set by the time delay between the two detectors.
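Those 50% and 90% areas come from a simple recipe applied to a probability map over the sky: rank the pixels from most to least probable and accumulate until the target probability is enclosed; the area is then the pixel count times the per-pixel area. A minimal sketch on a made-up toy map (not GW150914’s real posterior):

```python
# Sketch: credible sky area from a pixelised probability map.
def credible_area(pixel_probs, pixel_area_deg2, level):
    total = 0.0
    n_pixels = 0
    for p in sorted(pixel_probs, reverse=True):
        total += p
        n_pixels += 1
        if total >= level:
            break
    return n_pixels * pixel_area_deg2

# A toy normalised map: a few hot pixels plus a diffuse tail.
probs = [0.3, 0.2, 0.15, 0.1] + [0.25 / 50] * 50
print(credible_area(probs, pixel_area_deg2=10.0, level=0.5))  # → 20.0
print(credible_area(probs, pixel_area_deg2=10.0, level=0.9))
```

The broadening from calibration uncertainty works by smearing probability out of the hot pixels into the tail, so many more pixels are needed to enclose the same total probability.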

The different sky maps for GW150914 in an orthographic projection. The contours show the 90% region for each algorithm. The faint circles show lines of constant time delay $\Delta t_\mathrm{HL}$ between the two detectors. BAYESTAR rapidly computes sky maps for binary coalescences, but it needs the output of one of the detection pipelines to run, and so was not available at low latency. The LALInference map is our best result. All the sky maps are available as part of the data release. Fig. 2 of the Electromagnetic Follow-up Paper.

A timeline of events is shown below. There were follow-up observations across the electromagnetic spectrum from gamma-rays and X-rays through the optical and near infra-red to radio.

Timeline for observations of GW150914. The top (grey) band shows information about gravitational waves. The second (blue) band shows high-energy (gamma- and X-ray) observations. The third and fourth (green) bands show optical and near infra-red observations respectively. The bottom (red) band shows radio observations. Fig. 1 from the Electromagnetic Follow-up Paper.

Observations have been reported (via GCN notices) by

Together they cover an impressive amount of the sky, as shown below. Many targeted the Large Magellanic Cloud before we knew the source was a binary black hole.

Footprints of observations compared with the 50% and 90% areas of the initially distributed (cWB: thick lines; LIB: thin lines) sky maps, also in orthographic projection. The all-sky observations are not shown. The grey background is the Galactic plane. Fig. 3 of the Electromagnetic Follow-up Paper.

Additional observations have been done using archival data by XMM-Newton and AGILE.

We don’t expect any electromagnetic counterpart to a binary black hole. No-one found anything, with the exception of Fermi GBM, which found a weak signal that may be coincident. More work is required to figure out if this is genuine (the statistical analysis looks OK, but sometimes you do get a false alarm). It would be a surprise if it is, so most people are sceptical. However, I think this will make people more interested in following up on our next binary black hole signal!

### Bonus notes

#### Naming The Event

GW150914 is the name we have given to the signal detected by the two LIGO instruments. The “GW” is short for gravitational wave (not galactic worm), and the numbers give the date the wave reached the detectors (2015 September 14). It was originally known as G184098, its ID in our database of candidate events (most circulars sent to and from our observer partners use this ID). That was universally agreed to be terrible to remember. We tried to think of a good nickname for the event, but failed to, so rather by default, it has informally become known as The Event within the Collaboration. I think this is fitting given its significance.

LVT151012 is the name of the most significant candidate after GW150914; it doesn’t reach our criteria to claim detection (a false alarm rate of less than once per century), which is why it’s not GW151012. The “LVT” is short for LIGO–Virgo trigger. It took a long time to settle on this and up until the final week before the announcement it was still going by G197392. Informally, it was known as The Second Monday Event, as it too was found on a Monday. You’ll have to wait for us to finish looking at the rest of the O1 data to see if the Monday trend continues. If it does, it could have serious repercussions for our understanding of Garfield.

Following the publication of the O2 Catalogue Paper, LVT151012 was upgraded to GW151012, and we decided to get rid of the LVT class as it was rather confusing.

#### Publishing in Physical Review Letters

Several people have asked me if the Discovery Paper was submitted to Science or Nature. It was not. The decision that any detection would be submitted to Physical Review was made ahead of the run. As far as I am aware, there was never much debate about this. Physical Review had been good about publishing all our non-detections and upper limits, so it only seemed fair that they got the discovery too. You don’t abandon your friends when you strike it rich. I am glad that we submitted to them.

Gaby González, the LIGO Spokesperson, contacted the editors of Physical Review Letters ahead of submission to let them know of the anticipated results. They then started to line up some referees to give confidential and prompt reviews.

The initial plan was to submit on January 19, and we held a Collaboration-wide tele-conference to discuss the science. There were a few more things still to do, so the paper was submitted on January 21, following another presentation (and a long discussion of whether a number should be a six or a two) and a vote. The vote was overwhelmingly in favour of submission.

We got the referee reports back on January 27, although they were circulated to the Collaboration the following day. This was a rapid turnaround! From their comments, I suspect that Referee A may be a particle physicist who has dealt with similar claims of first detection—they were most concerned about statistical significance; Referee B seemed like a relativist—they made comments about the effect of spin on measurements, knew about waveforms and even historical papers on gravitational waves, and I would guess that Referee C was an astronomer involved with pulsars—they mentioned observations of binary pulsars potentially claiming the title of first detection and were also curious about sky localization. While I can’t be certain who the referees were, I am certain that I have never had such positive reviews before! Referee A wrote

The paper is extremely well written and clear. These results are obviously going to make history.

Referee B wrote

This paper is a major breakthrough and a milestone in gravitational science. The results are overall very well presented and its suitability for publication in Physical Review Letters is beyond question.

and Referee C wrote

It is an honor to have the opportunity to review this paper. It would not be an exaggeration to say that it is the most enjoyable paper I’ve ever read. […] I unreservedly recommend the paper for publication in Physical Review Letters. I expect that it will be among the most cited PRL papers ever.

I suspect I will never have such emphatic reviews again [happy bonus note][unhappy bonus note].

Publishing in Physical Review Letters seems to have been a huge success. So much so that their servers collapsed under the demand, despite them adding two more in anticipation. In the end they had to quintuple their number of servers to keep up with demand. There were 229,000 downloads from their website in the first 24 hours. Many people remarked that it was good that the paper was freely available. However, we always make our papers public on the arXiv or via LIGO’s Document Control Center [bonus bonus note], so there should never be a case where you miss out on reading a LIGO paper!

#### Publishing the Parameter Estimation Paper

The reviews for the Parameter Estimation Paper were also extremely positive. Referee A, who had some careful comments on clarifying notation, wrote

This is a beautiful paper on a spectacular result.

Referee B, who commendably did some back-of-the-envelope checks, wrote

The paper is also very well written, and includes enough background that I think a decent fraction of it will be accessible to non-experts. This, together with the profound nature of the results (first direct detection of gravitational waves, first direct evidence that Kerr black holes exist, first direct evidence that binary black holes can form and merge in a Hubble time, first data on the dynamical strong-field regime of general relativity, observation of stellar mass black holes more massive than any observed to date in our galaxy), makes me recommend this paper for publication in PRL without hesitation.

Referee C, who made some suggestions to help a non-specialist reader, wrote

This is a generally excellent paper describing the properties of LIGO’s first detection.

Physical Review Letters were also kind enough to publish this paper open access without charge!

#### Publishing the Rates Paper

It wasn’t all plain sailing getting the companion papers published. Referees did give papers the thorough checking that they deserved. The most difficult review was of the Rates Paper. There were two referees: one an astrophysicist, one a statistician. The astrophysics referee was happy with the results and made a few suggestions to clarify or further justify the text. The statistics referee had more serious complaints…

There are five main things which I think made the statistics referee angry. First, the referee objected to our terminology

While overall I’ve been impressed with the statistics in LIGO papers, in one respect there is truly egregious malpractice, but fortunately easy to remedy. It concerns incorrectly using the term “false alarm probability” (FAP) to refer to what statisticians call a p-value, a deliberately vague term (“false alarm rate” is similarly misused). […] There is nothing subtle or controversial about the LIGO usage being erroneous, and the practice has to stop, not just within this paper, but throughout the LIGO collaboration (and as a matter of ApJ policy).

I agree with this. What we call the false alarm probability is not the probability that the detection is a false alarm. It is not the probability that the given signal is noise rather than astrophysical; instead, it is the probability that, if we only had noise, we would get a detection statistic as significant or more so. It might take a minute to realise why those are different. The latter (the one we should call the p-value) is what the search pipelines give us, but is less useful than the former for actually working out if the signal is real. The probabilities calculated in the Rates Paper that the signal is astrophysical are really what you want.

p-values are often misinterpreted, but most scientists are aware of this, and so are cautious when they come across them.
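
To make the distinction concrete, here is a toy Monte Carlo (purely illustrative, not anything from a LIGO pipeline): the p-value is the fraction of noise-only trials that yield a detection statistic at least as significant as the one observed, which is not the same as the probability that the observed event is noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy background: the detection statistic in noise-only trials
# (an exponential distribution is an assumption for illustration).
background = rng.exponential(scale=1.0, size=1_000_000)

observed = 10.0  # detection statistic of the candidate event

# p-value: chance of noise alone producing a statistic >= observed.
p_value = np.mean(background >= observed)
print(f"p-value = {p_value:.1e}")

# Turning this into the probability that the event is astrophysical
# requires priors on the signal and noise rates (as in the Rates
# Paper); the p-value alone does not give it.
```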

As a consequence of this complaint, the Collaboration is purging “false alarm probability” from our papers. It is used in most of the companion papers, as they were published before we got this report (and managed to convince everyone that it is important).

Second, we were lacking in references to existing literature

Regarding scholarship, the paper is quite poor. I take it the authors have written this paper with the expectation, or at least the hope, that it would be read […] If I sound frustrated, it’s because I am.

This is fair enough. The referee made some good suggestions, pointing to work done on inferring the rate of gamma-ray bursts by Loredo & Wasserman (Part I, Part II, Part III), as well as work by Petit, Kavelaars, Gladman & Loredo on trans-Neptunian objects, and we made sure to add as much of this as possible in revisions. There’s no excuse for not properly citing useful work!

Third, the referee didn’t understand how we could be certain of the distribution of signal-to-noise ratio $\rho$ without also worrying about the distribution of parameters like the black hole masses. The signal-to-noise ratio is inversely proportional to distance, and we expect sources to be uniformly distributed in volume. Putting these together (and ignoring corrections from cosmology) gives a distribution for signal-to-noise ratio of $p(\rho) \propto \rho^{-4}$ (Schulz 2011).  This is sufficiently well known within the gravitational-wave community that we forgot that those outside wouldn’t appreciate it without some discussion. Therefore, it was useful that the referee did point this out.
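
A quick Monte Carlo check of this scaling (a sketch, with an arbitrary normalisation for the signal-to-noise ratio): for sources uniform in volume with $\rho \propto 1/d$, the survival function is $P(\rho > x) \propto x^{-3}$, which is the cumulative version of $p(\rho) \propto \rho^{-4}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform in volume out to d_max = 1: p(d) ∝ d^2, so d = u**(1/3)
# for uniform u.
d = rng.uniform(size=4_000_000) ** (1.0 / 3.0)

# Signal-to-noise ratio inversely proportional to distance
# (the normalisation is arbitrary here).
rho = 1.0 / d

# If p(rho) ∝ rho**-4 then P(rho > x) ∝ x**-3, so doubling the
# threshold should cut the number of louder events by a factor of 8.
x = 8.0
ratio = np.mean(rho > 2 * x) / np.mean(rho > x)
print(ratio)  # should be close to 1/8
```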

Fourth, the referee thought we had made an error in our approach. They provided an alternative derivation which

if useful, should not be used directly without some kind of attribution

Unfortunately, they were missing some terms in their expressions. When these were added in, their approach reproduced our own (I had a go at checking this myself). Given that we had annoyed the referee on so many other points, it was tricky trying to convince them of this. Most of the time spent responding to the referees was actually working on the referee response and not on the paper.

Finally, the referee was unhappy that we didn’t make all our data public so that they could check things themselves. I think this would be great, and it will happen; it was just too early at the time.

#### LIGO Document Control Center

Papers in the LIGO Document Control Center are assigned a number starting with P (for “paper”) and then several digits. The Discovery Paper’s reference is P150914. I only realised why this was the case on the day of submission.

#### The überbank

The set of templates used in the searches is designed to be able to catch binary black holes, binary neutron stars and neutron star–black hole binaries. It covers component masses from 1 to 99 solar masses, with total masses less than 100 solar masses. The upper cut-off is chosen for computational convenience, rather than physical reasons: we do look for higher mass systems in a similar way, but they are easier to confuse with glitches and so we have to be more careful tuning the search. Since this bank of templates is so comprehensive, it is known as the überbank. Although it could find binary neutron stars or neutron star–black hole binaries, we only discuss binary black holes here.

The template bank doesn’t cover the full parameter space; in particular, it assumes that the spins of the two components are aligned. This shouldn’t significantly affect its efficiency at finding signals, but gives another reason (together with the coarse placement of templates) why we need to do proper parameter estimation to measure the properties of the source.

#### Alphabet soup

In the calculation of rates, the probabilistic means for counting sources is known as the FGMC method after its authors (who include two Birmingham colleagues and my former supervisor). The means of calculating rates assuming that the population is divided into one class to match each observation is also named for the initial of its authors as the KKL approach. The combined FGMCKKL method for estimating merger rates goes by the name alphabet soup, as that is much easier to swallow.

#### Multi-band gravitational wave astronomy

The prospect of detecting a binary black hole with a space-based detector and then seeing the same binary merger with ground-based detectors is especially exciting. My officemate Alberto Sesana (who’s not in LIGO) has just written a paper on the promise of multi-band gravitational wave astronomy. Black hole binaries like GW150914 could be spotted by eLISA (if you assume one of the better sensitivities for a detector with three arms). Then a few years to weeks later they merge, and spend their last moments emitting in LIGO’s band. The evolution of some binary black holes is sketched in the plot below.

The evolution of binary black hole mergers (shown in blue). The eLISA and Advanced LIGO sensitivity curves are shown in purple and orange respectively. As the black holes inspiral, they emit gravitational waves at higher frequency, shifting from the eLISA band to the LIGO band (where they merge). The scale at the top gives the approximate time until merger. Fig. 1 of Sesana (2016).

Seeing the signal in two bands can help in several ways. First, it can increase our confidence in detection, potentially picking out signals that we wouldn’t otherwise find. Second, it gives us a way to verify the calibration of our instruments. Third, it lets us improve our parameter-estimation precision—eLISA would see thousands of cycles, which lets it pin down the masses to high accuracy; these results can be combined with LIGO’s measurements of the strong-field dynamics during merger to give a fantastic overall picture of the system. Finally, since eLISA can measure the signal for a considerable time, it can localise the source well, perhaps to just a square degree; since we’ll also be able to predict when the merger will happen, you can point telescopes at the right place ahead of time to look for any electromagnetic counterparts which may exist. Opening up the gravitational wave spectrum is awesome!

#### The LALInference sky map

One of my jobs as part of the Parameter Estimation group was to produce the sky maps from our parameter-estimation runs. This is a relatively simple job of just running our sky area code. I had done it many times while we were collecting our results, so I knew that the final versions were perfectly consistent with everything else we had seen. While I was comfortable with running the code and checking the results, I was rather nervous uploading the results to our database to be shared with our observational partners. I somehow managed to upload three copies by accident. D’oh! Perhaps future historians will someday look back at the records for G184098/GW150914 and wonder what this idiot Christopher Berry was doing? Probably no-one will ever notice, but I know the records are there…

# Advanced LIGO detects gravitational waves!

The first observing run (O1) of Advanced LIGO was scheduled to start 9 am GMT (10 am BST), 14 September 2015. Both gravitational-wave detectors were running fine, but there were a few extra things the calibration team wanted to do and not all the automated analysis had been set up, so it was decided to postpone the start of the run until 18 September. No-one told the Universe. At 9:50 am, 14 September there was an event. To those of us in the Collaboration, it is known as The Event.

The Event’s signal as measured by LIGO Hanford and LIGO Livingston. The shown signal has been filtered to make it more presentable. The Hanford signal is inverted because of the relative orientations of the two interferometers. You can clearly see that both observatories see the same signal, and even without fancy analysis, that there are definitely some wibbles there! Part of Fig. 1 from the Discovery Paper.

## Detection

The detectors were taking data and the coherent WaveBurst (cWB) detection pipeline was set up to analyse them. It finds triggers in near real time, and so about 3 minutes after the gravitational wave reached Earth, cWB found it. I remember seeing the first few emails… and ignoring them—I was busy trying to finalise details for our default parameter-estimation runs for the start of O1. However, the emails kept on coming. And coming. Something exciting was happening. The detector scientists at the sites swung into action and made sure that the instruments would run undisturbed so we could get lots of data about their behaviour; meanwhile, the remaining data analysis codes were set running with ruthless efficiency.

The cWB algorithm doesn’t search for a particular type of signal; instead it looks for the same thing in both detectors—it’s what we call a burst search. Burst searches could find supernova explosions, black hole mergers, or something unexpected (so long as the signal is short). Looking at the data, we saw that the frequency increased with time: there was the characteristic chirp of a binary black hole merger! This meant that the searches that specifically look for the coalescence of binaries (black holes or neutron stars) should find it too, if the signal was from a binary black hole. It also meant that we could analyse the data to measure the parameters.

A time–frequency plot that shows The Event’s signal power in the detectors. You can see the signal increase in frequency as time goes on: the characteristic chirp of a binary merger! The fact that you can spot the signal by eye shows how loud it is. Part of Fig. 1 from the Discovery Paper.

The signal was quite short, so it was quick for us to run parameter estimation on it—this makes a welcome change as runs on long, binary neutron-star signals can take months. We actually had the first runs done before all the detection pipelines had finished running. We kept the results secret: the detection people didn’t want to know the results before they looked at their own results (it reminded me of the episode of Whatever Happened to the Likely Lads where they try to avoid hearing the results of the football until they can watch the match). The results from each of the detection pipelines came in [bonus note]. There were the other burst searches: LALInferenceBurst found strong evidence for a signal, and BayesWave classified it clearly as a signal, not noise or a glitch; then the binary searches: both GstLAL and PyCBC found the signal (the same signal) at high significance. The parameter-estimation results were beautiful—we had seen the merger of two black holes!

At first, we couldn’t quite believe that we had actually made the detection. The signal seemed too perfect. Famously, LIGO conducts blind injections: fake signals are secretly put into the data to check that we do things properly. This happened during the run of initial LIGO (an event known as the Big Dog), and many people still remembered the disappointment. We weren’t set up for injections at the time (that was part of getting ready for O1), and the heads of the Collaboration said that there were no plans for blind injections, but people wanted to be sure. Only three or four people in the Collaboration can perform a blind injection; however, it’s a little-publicised fact that you can tell if there was an injection. The data from the instruments are recorded at many stages, so there’s a channel which records the injected signal. During a blind-injection run, we’re not allowed to look at this, but this wasn’t a blind-injection run, so it was checked and rechecked. There was nothing. People considered other ways of injecting the signal that wouldn’t be recorded (perhaps splitting the signal up and putting small bits in lots of different systems), but no-one actually understands all the control systems well enough to get this to work. There were basically two ways you could fake the signal. The first is to hack into the servers at both sites and Caltech simultaneously and modify the data before they got distributed. You would need to replace all the back-ups and make sure you didn’t leave any traces of tampering. You would also need to understand the control system well enough that all the auxiliary channels (the signal as recorded at over 30 different stages throughout the detectors’ systems) had the right data. The second is to place a device inside the interferometers that would inject the signal. As long as you had a detailed understanding of the instruments, this would be simple: you’d just need to break into both interferometers without being noticed.
Since the interferometers are two of the most sensitive machines ever made, this is like that scene from Mission: Impossible, except on the actually impossible difficulty setting. You would need to break into the vacuum tube (by installing an airlock in the concrete tubes without disturbing the seismometers), not disturb the instrument while working on it, and not scatter any of the (invisible) infra-red laser light. You’d need to do this at both sites, and then break in again to remove the devices so they’re not found now that O1 is finished. The devices would also need to be perfectly synchronised. I would love to see a movie where they try to fake the signal, but I am convinced, absolutely, that the easiest way to inject the signal is to collide two black holes a billion years ago. (Also a good plot for a film?)

There is no doubt. We have detected gravitational waves. (I cannot articulate how happy I was to hit the button to update that page! [bonus note])

I still remember the exact moment this hit me. I was giving a public talk on black holes, similar to ones I have given many times before. I start by introducing general relativity and the curving of spacetime, then I talk about the idea of a black hole. Next I move on to evidence for astrophysical black holes, showing the video zooming into the centre of the Milky Way, ending with the stars orbiting around Sagittarius A*, the massive black hole in the centre of our galaxy (shown below). As I said that the motion of the stars was our best evidence for the existence of black holes, I realised that this was no longer the case. Now, we have a whole new insight into the properties of black holes.

## Gravitational-wave astronomy

Having caught a gravitational wave, what do you do with it? It turns out that there’s rather a lot of science you can do. The last few months have been exhausting. I think we’ve done a good job as a Collaboration of assembling all the results we wanted to go with the detection—especially since lots of things were being done for the first time! I’m sure we’ll update our analysis with better techniques and find new ways of using the data, but for now I hope everyone can enjoy what we have discovered so far.

I will write up a more technical post on the results; here we’ll run through some of the highlights. For more details of anything, check out the data release.

### The source

The results of our parameter-estimation runs tell us about the nature of the source. We have a binary with objects of masses $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, where $M_\odot$ indicates the mass of our Sun (about $2 \times 10^{30}$ kilograms). If you’re curious what’s going with these numbers and the pluses and minuses, check out this bonus note.
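
In short (the full conventions are in that bonus note): the headline value is the posterior median, and the plus and minus give a symmetric 90% credible interval. A sketch with mock samples, assuming a Gaussian stand-in for the real (non-Gaussian) posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock posterior samples for the heavier mass (a Gaussian stand-in;
# the real posteriors from parameter estimation are not Gaussian).
samples = rng.normal(loc=36.0, scale=2.7, size=200_000)

median = np.median(samples)
lower, upper = np.percentile(samples, [5.0, 95.0])  # 90% interval

print(f"{median:.0f} (-{median - lower:.0f}, +{upper - median:.0f}) Msun")
```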

Estimated masses for the two black holes in the binary. $m_1^\mathrm{source}$ is the mass of the heavier black hole and $m_2^\mathrm{source}$ is the mass of the lighter black hole. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which made me incredibly happy! Fig. 1 from the Parameter Estimation Paper.

We know that we’re dealing with compact objects (regular stars could never get close enough together to orbit fast enough to emit gravitational waves at the right frequency), and the only compact objects that can be as massive as these objects are black holes. This means we’ve discovered the first stellar-mass black hole binary! We’ve also never seen stellar-mass black holes (as opposed to the supermassive flavour that live in the centres of galaxies) this heavy, but don’t get too attached to that record.

Black holes have at most three properties. This makes them much simpler than a Starbucks Coffee (they also stay black regardless of how much milk you add). Black holes are described by their mass, their spin (how much they rotate), and their electric charge. We don’t expect black holes out in the Universe to have much electric charge because (i) it’s very hard to separate lots of positive and negative charge in the first place, and (ii) even if you succeed at (i), it’s difficult to keep positive and negative charge apart. This is kind of like separating small children and sticky things that are likely to stain. Since the electric charge can be ignored, we just need mass and spin. We’ve measured masses, can we measure spins?

Black hole spins are defined to be between 0 (no spin) and 1 (the maximum amount you can have). Our best estimates are that the bigger black hole has spin $0.3_{-0.3}^{+0.5}$, and the small one has spin $0.5_{-0.4}^{+0.5}$ (these numbers have been rounded). These aren’t great measurements. For the smaller black hole, its spin is almost equally probable to take any allowed value; this isn’t quite the case, but we haven’t learnt much about its size. For the bigger black hole, we do slightly better, and it seems that the spin is on the smaller side. This is interesting, as measurements of spins for black holes in X-ray binaries tend to be on the higher side: perhaps there are different types of black holes?

We can’t measure the spins precisely for a few reasons. The signal is short, so we don’t see lots of wibbling while the binaries are orbiting each other (the tell-tale sign of spin). Results for the orientation of the binary also suggest that we’re looking at it either face on or face off, which makes any wobbles in the orbit less visible. However, there is one particular combination of the spins, which we call the effective spin, that we can measure. The effective spin controls how the black holes spiral together. It has a value of 1 if both black holes have max spin values, and are rotating the same way as the binary is orbiting. It has a value of −1 if the black holes have max spin values and are both rotating exactly the opposite way to the binary’s orbit. We find that the effective spin is small, $-0.06_{-0.18}^{+0.17}$. This could mean that both black holes have small spins, or that they have larger spins that aren’t aligned with the orbit (or each other). We have learnt something about the spins; it’s just not easy to tease that apart to give values for each of the black holes.
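
For the curious, the effective spin is the mass-weighted combination of the spin components along the orbital angular momentum, $\chi_\mathrm{eff} = (m_1 \chi_{1,z} + m_2 \chi_{2,z})/(m_1 + m_2)$ (this is the standard definition in the literature; the specific numbers below are just illustrative):

```python
def effective_spin(m1, m2, chi1z, chi2z):
    """Mass-weighted aligned-spin combination chi_eff.

    chi1z and chi2z are the spin components along the orbital
    angular momentum, each between -1 and 1.
    """
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

# Both black holes maximally spinning with the orbit: chi_eff = 1.
print(effective_spin(36.0, 29.0, 1.0, 1.0))

# Sizeable spins can still combine to a small chi_eff, which is why
# a small measured value is ambiguous.
print(effective_spin(36.0, 29.0, -0.5, 0.62))
```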

As the two black holes orbit each other, they (obviously, given what we’ve seen) emit gravitational waves. These carry away energy and angular momentum, so the orbit shrinks and the black holes inspiral together. Eventually they merge and settle down into a single bigger black hole. All this happens while we’re watching (we have great seats). A simulation of this happening is below. You can see that the frequency of the gravitational waves is twice that of the orbit, and the video freezes around the merger so you can see two become one.

What are the properties of the final black hole? The mass of the remnant black hole is $62^{+4}_{-4} M_\odot$. It is the new record holder for the largest observed stellar-mass black hole!

If you do some quick sums, you’ll notice that the final black hole is lighter than the sum of the two initial black holes. This is because of the energy that was carried away by the gravitational waves. Over the entire evolution of the system, $3.0^{+0.5}_{-0.4} M_\odot c^2 \simeq 5.3_{-0.8}^{+0.9} \times 10^{47}~\mathrm{J}$ of energy was radiated away as gravitational waves (where $c$ is the speed of light as in Einstein’s famous equation). This is a colossal amount of energy. You’d need to eat over eight billion times the mass of the Sun in butter to get the equivalent amount of calories. (Do not attempt the wafer-thin mint afterwards.) The majority of that energy is radiated within the final second. For a brief moment, this one black hole merger outshines the whole visible Universe, if you compare its gravitational-wave luminosity to everything else’s visible-light luminosity!
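
The butter sum checks out; here is a back-of-the-envelope version (the energy density of butter, roughly 7.2 kcal per gram, is my assumption):

```python
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s

# Energy radiated as gravitational waves: about 3 solar masses.
E = 3.0 * M_SUN * C**2
print(f"E ~ {E:.1e} J")   # ~5.4e47 J, matching the quoted value

# Butter equivalent, assuming ~7.2 kcal/g (about 3.0e7 J/kg).
BUTTER_J_PER_KG = 7.2e3 * 4184
butter_solar_masses = E / BUTTER_J_PER_KG / M_SUN
print(f"butter needed ~ {butter_solar_masses / 1e9:.1f} billion solar masses")
```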

We’ve measured mass, but what about spin? The final black hole’s spin is $0.67^{+0.05}_{-0.07}$, which is in the middling-to-high range. You’ll notice that we can deduce this to much higher precision than the spins of the two initial black holes. This is because it is largely fixed by the orbital angular momentum of the binary, and so its value is set by orbital dynamics and gravitational physics. I think it’s incredibly satisfying that we can make such a clean measurement of the spin.

We have measured both of the properties of the final black hole, and we have done this using spacetime itself. This is astounding!

Estimated mass $M_\mathrm{f}^\mathrm{source}$ and spin $a_\mathrm{f}^\mathrm{source}$ for the final black hole. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which still makes me incredibly happy! Fig. 3 from the Parameter Estimation Paper.

How big is the final black hole? My colleague Nathan Johnson-McDaniel has done some calculations and finds that the total distance around the equator of the black hole’s event horizon is about $1100~\mathrm{km}$ (about six times the length of the M25). Since the black hole is spinning, its event horizon is not a perfect sphere, but it bulges out around the equator. The circumference going over the black hole’s poles is about $1000~\mathrm{km}$ (about five and a half M25s, so maybe this would be the better route for your morning commute). The total area of the event horizon is about $370{,}000~\mathrm{km}^2$. If you flattened this out, it would cover an area about the size of Montana. Neil Cornish (of Montana State University) said that he’s not sure which we know more accurately: the area of the event horizon or the area of Montana!
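
These numbers follow from the geometry of the horizon of a Kerr (spinning) black hole; a sketch of the equatorial circumference and area (the polar circumference needs an elliptic integral, so it is omitted here):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def kerr_horizon(m_solar, chi):
    """Equatorial circumference (km) and area (km^2) of a Kerr horizon.

    m_solar: mass in solar masses; chi: dimensionless spin (0 to 1).
    """
    m = G * m_solar * M_SUN / C**2        # mass as a length, metres
    a = chi * m                           # spin parameter, metres
    r_plus = m + math.sqrt(m**2 - a**2)   # horizon coordinate radius
    circumference = 2 * math.pi * (r_plus**2 + a**2) / r_plus
    area = 4 * math.pi * (r_plus**2 + a**2)
    return circumference / 1e3, area / 1e6

circ, area = kerr_horizon(62.0, 0.67)
print(f"equator ~ {circ:.0f} km, area ~ {area:.0f} km^2")
```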

OK, we’ve covered the properties of the black holes, perhaps it’s time for a celebratory biscuit and a sit down? But we’re not finished yet, where is the source?

We infer that the source is at a luminosity distance of $410^{+160}_{-180}~\mathrm{Mpc}$; a megaparsec is a unit of length (regardless of what Han Solo thinks) equal to about 3.3 million light-years. The luminosity distance isn’t quite the same as the distance you would record using a tape measure because it takes into account the effects of the expansion of the Universe. But it’s pretty close. Using our 90% probability range, the merger would have happened sometime between 700 million years and 1.6 billion years ago. This coincides with the Proterozoic Eon on Earth, the time when the first oxygen-dependent animals appeared. Gasp!

With only the two LIGO detectors in operation, it is difficult to localise where on the sky the source came from. To have a 90% chance of finding the source, you’d need to cover $600~\mathrm{deg^2}$ of the sky. For comparison, the full Moon is about $0.2~\mathrm{deg^2}$. This is a large area to cover with a telescope, and we don’t expect there to be anything to see for a black hole merger, but that hasn’t stopped our intrepid partners from trying. For a lovely visualisation of where we think the source could be, marvel at the Gravoscope.

### Astrophysics

The detection of this black hole merger tells us:

• Black holes 30 times the mass of our Sun do form. These must be the remains of really massive stars. Stars lose mass throughout their lifetime through stellar winds. How much they lose depends on what they are made from. Astronomers have a simple periodic table: hydrogen, helium and metals. (Everything that is not hydrogen or helium is a metal regardless of what it actually is). More metals means more mass loss, so to end up with our black holes, we expect that they must have started out as stars with less than half the fraction of metals found in our Sun. This may mean the parent stars were some of the first stars to be born in the Universe.
• Binary black holes exist. There are two ways to make a black hole binary. You can start with two stars in a binary (stars love company, so most have at least one companion), and have them live their entire lives together, leaving behind the two black holes. Alternatively, you could have somewhere where there are lots of stars and black holes, like a globular cluster, and the two black holes could wander close enough together to form the binary. People have suggested that either (or both) could happen. You might be able to tell the two apart using spin measurements. The spins of the black holes are more likely to be aligned (with each other and the way that the binary orbits) if they came from stars formed in a binary. The spins would be randomly orientated if two black holes came together to form a binary by chance. We can’t tell the two apart now, but perhaps when we have more observations!
• Binary black holes merge. Since we’ve seen a signal from two black holes inspiralling together and merging, we know that this happens. We can also estimate how often this happens, given how many signals we’ve seen in our observations. Somewhere in the observable Universe, a similar binary could be merging about every 15 minutes. For LIGO, this should mean that we’ll be seeing more. As the detectors’ sensitivity improves (especially at lower frequencies), we’ll be able to detect more and more systems [bonus note]. We’re still uncertain in our predictions of exactly how many we’ll see. We’ll understand things better after observing for longer: were we just lucky, or were we unlucky not to have seen more? Given these early results, we estimate that by the end of the third observing run (O3), we could have over 30. It looks like I will be kept busy over the next few years…

### Gravitational physics

Black holes are the parts of the Universe with the strongest possible gravity. They are the ideal place to test Einstein’s theory of general relativity. The gravitational waves from a black hole merger let us probe right down to the event horizon, using ripples in spacetime itself. This makes gravitational waves a perfect way of testing our understanding of gravity.

We have run some tests on the signal to see how well it matches our expectations. We find no reason to doubt that Einstein was right.

The first check is that if we try to reconstruct the signal, without putting in information about what gravitational waves from a binary merger look like, we find something that agrees wonderfully with our predictions. We can reverse engineer what the gravitational waves from a black hole merger look like from the data!

Recovered gravitational waveforms from our analysis of The Event. The dark band shows our estimate for the waveform without assuming a particular source (it is built from wavelets, which sound adorable to me). The light bands show results if we assume it is a binary black hole (BBH) as predicted by general relativity. They match really well! Fig. 6 from the Parameter Estimation Paper.

As a consistency test, we checked what would happen if you split the signal in two, and analysed each half independently with our parameter-estimation codes. If there’s something weird, we would expect to get different results. We cut the data into a high frequency piece and a low frequency piece at roughly where we think the merger starts. The lower frequency (mostly) inspiral part is more similar to the physics we’ve tested before, while the higher frequency (mostly) merger and ringdown is new and hence more uncertain. Looking at estimates for the mass and spin of the final black hole, we find that the two pieces are consistent as expected.

In general relativity, gravitational waves travel at the speed of light. (The speed of light is misnamed; it’s really a property of spacetime, rather than of light). If the graviton, the theoretical particle that carries the gravitational force, has a mass, then gravitational waves can’t travel at the speed of light, but would travel slightly slower. Because our signals match general relativity so well, we can put a limit on the maximum allowed mass. The mass of the graviton is less than $1.2 \times 10^{-22}~\mathrm{eV\,c^{-2}}$ (in units that the particle physicists like). This is tiny! It is about as many times lighter than an electron as an electron is lighter than a teaspoon of water (well, $4~\mathrm{g}$, which is just under a full teaspoon), or as the almost-teaspoon of water is lighter than three Earths.
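You can check the chain of mass ratios yourself. Converting everything into the particle physicists’ units (eV/c², using 1 kg ≈ 5.61 × 10³⁵ eV/c²), each step in the analogy is a factor of a few times 10²⁷:

```python
# Check the mass-ratio analogy for the graviton bound (all masses in eV/c^2).
KG_IN_EV = 5.61e35                  # 1 kg expressed in eV/c^2
m_graviton = 1.2e-22                # upper bound from the signal
m_electron = 5.11e5                 # electron rest mass (511 keV)
m_water = 4e-3 * KG_IN_EV           # 4 g of water (almost a teaspoon)
m_earths = 3 * 5.97e24 * KG_IN_EV   # three Earth masses

print(m_electron / m_graviton)  # ~4e27
print(m_water / m_electron)     # ~4e27
print(m_earths / m_water)       # ~4e27
```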

Bounds on the Compton wavelength $\lambda_g$ of the graviton from The Event (GW150914). The Compton wavelength is a length defined by the mass of a particle: smaller masses mean larger wavelengths. We place much better limits than existing tests from the Solar System or the double pulsar. There are some cosmological tests which are stronger still (but they make assumptions about dark matter). Fig. 8 from the Testing General Relativity Paper.
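The conversion between the mass bound and the Compton wavelength bound is a one-liner: $\lambda_g = hc/(m_g c^2)$. Plugging in the mass limit above gives a wavelength of order $10^{13}~\mathrm{km}$, roughly a light-year:

```python
# Convert the graviton mass bound into a lower bound on its Compton wavelength.
# lambda = h c / (m c^2), with h c ~ 1.240e-6 eV m.
HC_EV_M = 1.23984e-6    # Planck constant times the speed of light, in eV m
m_graviton = 1.2e-22    # eV/c^2, upper bound on the graviton mass

lam = HC_EV_M / m_graviton
print(lam)  # ~1e16 m, i.e. about 1e13 km
```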

Overall, things look good for general relativity: it has passed a tough new test. However, it will be extremely exciting to get more observations. Then we can combine all our results to get the best insights into gravity ever. Perhaps we’ll find a hint of something new, or perhaps we’ll discover that general relativity is perfect? We’ll have to wait and see.

## Conclusion

100 years after Einstein predicted gravitational waves and Schwarzschild found the equations describing a black hole, LIGO has detected gravitational waves from two black holes orbiting each other. This is the culmination of over forty years of effort. The black holes inspiral together and merge to form a bigger black hole. This is the signal I would have wished for. From the signal we can infer the properties of the source (some better than others), which makes me exceedingly happy. We’re starting to learn about the properties of black holes, and to test Einstein’s theory. As we continue to look for gravitational waves (with Advanced Virgo hopefully joining next year), we’ll learn more and perhaps make other detections too. The era of gravitational-wave astronomy has begun!

After all that, I am in need of a good nap! (I was too excited to sleep last night, it was like a cross between Christmas Eve and the night before final exams). For more on the story from scientists inside the LIGO–Virgo Collaboration, check out posts by:

• Matt Pitkin (the tireless reviewer of our parameter-estimation work)
• Brynley Pearlstone (who’s just arrived at the LIGO Hanford site)
• Amber Stuver (who blogged through LIGO’s initial runs too)
• Rebecca Douglas (a good person to ask about what to build a detector out of)
• Daniel Williams (someone fresh to the Collaboration)
• Sean Leavey (a PhD student working on interferometry)
• Andrew Williamson (who likes to look for gravitational waves that coincide with gamma-ray bursts)
• Shane Larson (another fan of space-based gravitational-wave detectors)
• Roy Williams (who helps to make all the wonderful open data releases for LIGO)
• Chris North (creator of the Gravoscope amongst other things)

There’s also this video from the heads of my group in Birmingham on their reactions to the discovery (the credits at the end show how large an effort the detection was).

Discovery paper: Observation of Gravitational Waves from a Binary Black Hole Merger
Data release: LIGO Open Science Center

### Bonus notes

#### Search pipelines

At the Large Hadron Collider, there are separate experiments that independently analyse data, and this is an excellent cross-check of any big discoveries (like the Higgs). We’re not in a position to do this for gravitational waves. However, the different search pipelines are mostly independent of each other. They use different criteria to rank potential candidates, and the burst and binary searches even look for different types of signals. Therefore, the different searches act as a check of each other. The teams can get competitive at times, so they do check each other’s results thoroughly.

#### The announcement

Updating Have we detected gravitational waves yet? was doubly exciting as I had to successfully connect to the University’s wi-fi. I managed this with about a minute to spare. Then I hovered with my finger on the button until David Reitze said “We. Have detected. Gravitational waves!” The exact moment is captured in the video below, I’m just off to the left.

#### Parameters and uncertainty

We don’t get a single definite number from our analysis; we have some uncertainty too. Therefore, our results are usually written as the median value (which means we think that the true value is equally probable to be above or below this number), plus the range needed to safely enclose 90% of the probability (so there’s a 10% chance the true value is outside this range). For the mass of the bigger black hole, the median estimate is $36 M_\odot$; we think there’s a 5% chance that the mass is below $32 M_\odot = (36 - 4) M_\odot$, and a 5% chance it’s above $41 M_\odot = (36 + 5) M_\odot$, so we write our result as $36^{+5}_{-4} M_\odot$.
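In practice, these numbers are read off from posterior samples as percentiles. Here is a sketch using a hypothetical Gaussian stand-in for the real (skewed) posterior, just to show the convention:

```python
import numpy as np

# Toy posterior samples for the bigger black hole's mass. The real posterior
# is not Gaussian; these numbers are purely illustrative.
rng = np.random.default_rng(1)
samples = rng.normal(36.0, 2.7, size=100_000)

# Median plus the symmetric 90% credible interval: 5th, 50th, 95th percentiles.
lower, median, upper = np.percentile(samples, [5, 50, 95])
print(f"{median:.0f} (+{upper - median:.0f} / -{median - lower:.0f}) solar masses")
```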

#### Sensitivity and ranges

Gravitational-wave detectors measure the amplitude of the wave (the amount of stretch and squash). The measured amplitude is smaller for sources that are further away: if you double the luminosity distance of a source, you halve its amplitude. Therefore, if you improve your detectors’ sensitivity by a factor of two, you can see things twice as far away. This means that we observe a volume of space (2 × 2 × 2) = 8 times as big. (This isn’t exactly the case because of pesky factors from the expansion of the Universe, but is approximately right). Even a small improvement in sensitivity can have a considerable impact on the number of signals detected!
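The cube law above is worth playing with, because it is the reason modest detector upgrades pay off so handsomely:

```python
# Doubling sensitivity doubles the range, so the surveyed volume grows as the
# cube of the sensitivity gain (ignoring cosmological corrections).
def volume_factor(sensitivity_gain):
    """Approximate increase in surveyed volume for a given sensitivity gain."""
    return sensitivity_gain ** 3

print(volume_factor(2))     # 8: twice the sensitivity, eight times the volume
print(volume_factor(1.25))  # ~1.95: a 25% improvement nearly doubles the volume
```

Since the number of expected detections scales with the surveyed volume, even incremental commissioning work between observing runs translates into many more signals.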