The O2 Catalogue—It goes up to 11

The full results of our second advanced-detector observing run (O2) have now been released—we’re pleased to announce four new gravitational wave signals: GW170729, GW170809, GW170818 and GW170823 [bonus note]. These latest observations are all of binary black hole systems. Together, they bring our total to 10 observations of binary black holes, and 1 of a binary neutron star. With more frequent detections on the horizon with our third observing run due to start in early 2019, the era of gravitational wave astronomy is truly here.

Black hole and neutron star masses

The population of black holes and neutron stars observed with gravitational waves and with electromagnetic astronomy. You can play with an interactive version of this plot online.

The new detections are largely consistent with our previous findings. GW170809, GW170818 and GW170823 are all similar to our first detection GW150914. Their black holes have masses around 20 to 40 times the mass of our Sun. I would lump GW170104 and GW170814 into this class too. Although there were models that predicted black holes of these masses, we weren’t sure they existed until our gravitational wave observations. The family of black holes extends beyond this range. GW151012, GW151226 and GW170608 fall on the lower mass side. These overlap with the population of black holes previously observed in X-ray binaries. Lower mass systems can’t be detected as far away, so we find fewer of these. On the higher end we have GW170729 [bonus note]. Its source is made up of black holes with masses 50.2^{+16.2}_{-10.2} M_\odot and 34.0^{+9.1}_{-10.1} M_\odot (where M_\odot is the mass of our Sun). The larger black hole is a contender for the most massive black hole we’ve found in a binary (the other probable contender is GW170823’s source, which has a 39.5^{+11.2}_{-6.7} M_\odot black hole). We have a big happy family of black holes!

Of the new detections, GW170729, GW170809 and GW170818 were all observed by the Virgo detector as well as the two LIGO detectors. Virgo joined O2 for an exciting August [bonus note], and we decided that the data at the time of GW170729 were good enough to use too. Unfortunately, Virgo wasn’t observing at the time of GW170823. GW170729 and GW170809 are very quiet in Virgo; you can’t confidently say there is a signal there [bonus note]. However, GW170818 is a clear detection like GW170814. Well done Virgo!

Using the collection of results, we can start to understand the physics of these binary systems. We will be summarising our findings in a series of papers. A huge amount of work went into these.

The papers

The O2 Catalogue Paper

Title: GWTC-1: A gravitational-wave transient catalog of compact binary mergers observed by LIGO and Virgo during the first and second observing runs
arXiv:
 1811.12907 [astro-ph.HE]
Data: Catalogue; Parameter estimation results
Journal: Physical Review X; 9(3):031040(49); 2019
LIGO science summary: GWTC-1: A new catalog of gravitational-wave detections

The paper summarises all our observations of binaries to date. It covers our first and second observing runs (O1 and O2). This is the paper to start with if you want any information. It contains estimates of parameters for all our sources, including updates for previous events. It also contains merger rate estimates for binary neutron stars and binary black holes, and an upper limit for neutron star–black hole binaries. We’re still missing a neutron star–black hole detection to complete the set.

More details: The O2 Catalogue Paper

The O2 Populations Paper

Title: Binary black hole population properties inferred from the first and second observing runs of Advanced LIGO and Advanced Virgo
arXiv:
 1811.12940 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 882(2):L24(30); 2019
Data: Population inference results
LIGO science summary: Binary black hole properties inferred from O1 and O2

Using our set of ten binary black holes, we can start to make some statistical statements about the population: the distribution of masses, the distribution of spins, the distribution of mergers over cosmic time. With only ten observations, we still have a lot of uncertainty, and can’t make too many definite statements. However, if you were wondering why we don’t see any black holes more massive than GW170729’s, even though we can see such systems out to significant distances, so are we. We infer that almost all stellar-mass black holes have masses less than 45 M_\odot.

More details: The O2 Populations Paper

The O2 Catalogue Paper

Synopsis: O2 Catalogue Paper
Read this if: You want the most up-to-date gravitational-wave results
Favourite part: It’s out! We can tell everyone about our FOUR new detections

This is a BIG paper. It covers our first two observing runs and our main searches for coalescing stellar mass binaries. There will be separate papers going into more detail on searches for other gravitational wave signals.

The instruments

Gravitational wave detectors are complicated machines. You don’t just take them out of the box and press go. We’ll be slowly improving the sensitivity of our detectors as we commission them over the next few years. O2 marks the best sensitivity achieved to date. The paper gives a brief overview of the detector configurations in O2 for both LIGO detectors, which did differ, and Virgo.

During O2, we realised that one source of noise was beam jitter, disturbances in the shape of the laser beam. This was particularly notable in Hanford, where there was a spot on one of the optics. Fortunately, we are able to measure the effects of this, and hence subtract out this noise. This has now been done for the whole of O2. It makes a big difference! Derek Davis and TJ Massinger won the first LIGO Laboratory Award for Excellence in Detector Characterization and Calibration™ for implementing this noise subtraction scheme (the award citation almost spilled the beans on our new detections). I’m happy that GW170104 now has an increased signal-to-noise ratio, which means smaller uncertainties on its parameters.

The searches

We use three search algorithms in this paper. We have two matched-filter searches (GstLAL and PyCBC). These compare a bank of templates to the data to look for matches. We also use coherent WaveBurst (cWB), which is a search for generic short signals, but here has been tuned to find the characteristic chirp of a binary. Since cWB is more flexible in the signals it can find, it’s slightly less sensitive than the matched-filter searches, but it gives us confidence that we’re not missing things.
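
To give a flavour of the idea behind matched filtering (this is only a toy sketch; the real GstLAL and PyCBC pipelines add template banks, signal-consistency tests and coincidence between detectors, among much else), here is the basic frequency-domain signal-to-noise ratio calculation:

```python
import numpy as np

def matched_filter_snr(data, template, psd, dt):
    """Toy matched filter: correlate the data against the template,
    down-weighting frequencies where the detector is noisy.

    data, template: real time series of the same length
    psd: one-sided noise power spectral density at the rfft frequencies
    dt: sample spacing (s)
    Returns the signal-to-noise ratio as a function of time offset.
    """
    n = len(data)
    df = 1.0 / (n * dt)
    d = np.fft.rfft(data) * dt       # go to the frequency domain
    h = np.fft.rfft(template) * dt
    # Noise-weighted correlation of data and template at each time shift
    z = 4.0 * n * df * np.fft.irfft(d * h.conj() / psd, n=n)
    # Template normalisation <h|h>
    sigma2 = 4.0 * df * np.real(np.sum(h * h.conj() / psd))
    return np.abs(z) / np.sqrt(sigma2)
```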

The two matched-filter searches identify all 11 signals, with the exception of GW170818, which is only found by GstLAL. This is because PyCBC only flags signals above a threshold in each detector. We’re confident it’s real though, as it is seen in all three detectors, albeit below PyCBC’s threshold in Hanford and Virgo. (PyCBC only looked for signals found in coincidence between Livingston and Hanford in O2; I suspect they would have found it had they been looking at all three detectors, as that would have let them lower their threshold.)

The search pipelines try to distinguish between signal-like features in the data and noise fluctuations. Having multiple detectors is a big help here, although we still need to be careful in checking for correlated noise sources. The background of noise falls off quickly, so there’s a rapid transition between almost-certainly noise to almost-certainly signal. Most of the signals are off the charts in terms of significance, with GW170818, GW151012 and GW170729 being the least significant. GW170729 is found with the best significance by cWB, which reports a false alarm rate of 1/(50~\mathrm{yr}).

Inverse false alarm rates

Cumulative histogram of results from GstLAL (top left), PyCBC (top right) and cWB (bottom). The expected background is shown as the dashed line and the shaded regions give Poisson uncertainties. The search results are shown as the solid red line and named gravitational-wave detections are shown as blue dots. More significant results are further to the right of the plot. Fig. 2 and Fig. 3 of the O2 Catalogue Paper.

The false alarm rate indicates how often you would expect to find something at least as signal-like if you were to analyse a stretch of data with the same statistical properties as the data considered, assuming that there is only noise in the data. The false alarm rate does not fold in the probability that there are real gravitational waves occurring at some average rate. Therefore, we need to do an extra layer of inference to work out the probability that something flagged by a search pipeline is a real signal rather than noise.

The results of this calculation are given in Table IV. GW170729 has a 94% probability of being real using the cWB results, 98% using the GstLAL results, but only 52% according to PyCBC. Therefore, if you’re feeling bold, you might, say, only wager the entire economy of the UK on it being real.
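
Schematically (this is only a sketch of the general idea, not the collaboration’s actual calculation), this extra layer of inference weighs how often the astrophysical population produces events this significant against how often the noise background does:

```python
def p_astro(signal_rate, noise_rate):
    """Toy probability that a candidate is astrophysical: the expected
    rate of real signals at this significance against the total rate of
    signals plus noise events (the false alarm rate)."""
    return signal_rate / (signal_rate + noise_rate)

# Invented numbers: if noise produces events this significant as often
# as the astrophysical population does, the candidate is a coin flip
print(p_astro(signal_rate=1.0, noise_rate=1.0))   # 0.5
print(p_astro(signal_rate=2.0, noise_rate=0.04))  # ~0.98
```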

We also list the most marginal triggers. These all have probabilities well below 50% of being real: if you were to add them all up, you wouldn’t get a total of 1 real event. (In my professional opinion, they are garbage). However, if you want to check for what we might have missed, these may be a place to start. Some of these can be explained away as instrumental noise, say scattered light. Others show no obvious signs of disturbance, so are probably just some noise fluctuation.

The source properties

We give updated parameter estimates for all 11 sources. These use updated estimates of calibration uncertainty (which doesn’t make too much difference), an improved estimate of the noise spectrum (which makes some difference to the less well measured parameters like the mass ratio), the cleaned data (which helps for GW170104), and our currently most complete waveform models [bonus note].

This plot shows the masses of the two binary components (you can just make out GW170817 down in the corner). We use the convention that the more massive of the two is m_1 and the lighter is m_2. We are now really filling in the mass plot! Implications for the population of black holes are discussed in the Populations Paper.

All binary masses

Estimated masses for the two binary objects for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817 (solid), GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. The grey area is excluded from our convention on masses. Part of Fig. 4 of the O2 Catalogue Paper. The mass ratio is q = m_2/m_1.

As well as mass, black holes have a spin. For the final black hole formed in the merger, these spins are always around 0.7, with a little more or less depending upon which way the spins of the two initial black holes were pointing. As well as probably being the most massive, GW170729’s final black hole could have the highest spin! It is a record breaker. It radiated a colossal 4.8^{+1.7}_{-1.7} M_\odot worth of energy in gravitational waves [bonus note].

All final black hole masses and spins

Estimated final masses and spins for each of the binary black hole events in O1 and O2. From lowest chirp mass (left; red–orange) to highest (right; purple): GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. Part of Fig. 4 of the O2 Catalogue Paper.

There is considerable uncertainty on the spins as they are hard to measure. The best combination to pin down is the effective inspiral spin parameter \chi_\mathrm{eff}. This is a mass weighted combination of the spins which has the most impact on the signal we observe. It could be zero if the spins are misaligned with each other, point in the orbital plane, or are zero. If it is non-zero, then it means that at least one black hole definitely has some spin. GW151226 and GW170729 have \chi_\mathrm{eff} > 0 with more than 99% probability. The rest are consistent with zero. The spin distribution for GW170104 has tightened up as its signal-to-noise ratio has increased, and there’s less support for negative \chi_\mathrm{eff}, but there’s been no move towards larger positive \chi_\mathrm{eff}.
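
Since \chi_\mathrm{eff} comes up so often, here is its definition as a little helper function (the formula is the standard one; the example numbers below are invented):

```python
import numpy as np

def chi_eff(m1, m2, a1, a2, tilt1, tilt2):
    """Effective inspiral spin: mass-weighted aligned spin components.

    m1, m2: component masses (any consistent units)
    a1, a2: dimensionless spin magnitudes (0 to 1)
    tilt1, tilt2: angles (rad) between each spin and the orbital
        angular momentum
    """
    return (m1 * a1 * np.cos(tilt1) + m2 * a2 * np.cos(tilt2)) / (m1 + m2)

# Example: equal masses, both spins 0.7, one aligned, one in the plane
print(chi_eff(30.0, 30.0, 0.7, 0.7, 0.0, np.pi / 2))  # 0.35
```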

All effective inspiral spin parameters

Estimated effective inspiral spin parameters for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817, GW170608, GW151226, GW151012, GW170104, GW170814, GW170809, GW170818, GW150914, GW170823, GW170729. Part of Fig. 5 of the O2 Catalogue Paper.

For our analysis, we use two different waveform models to check for potential sources of systematic error. They agree pretty well. The spins are where they show most difference (which makes sense, as this is where they differ in terms of formulation). For GW151226, the effective precession waveform IMRPhenomPv2 gives 0.20^{+0.18}_{-0.08} and the full precession model gives 0.15^{+0.25}_{-0.11} and extends to negative \chi_\mathrm{eff}. I panicked a little bit when I first saw this, as GW151226 having a non-zero spin was one of our headline results when first announced. Fortunately, when I worked out the numbers, all our conclusions were safe. The probability of \chi_\mathrm{eff} < 0 is less than 1%. In fact, we can now say that at least one spin is greater than 0.28 at 99% probability compared with 0.2 previously, because the full precession model likes spins in the orbital plane a bit more. Who says data analysis can’t be thrilling?

Our measurement of \chi_\mathrm{eff} tells us about the part of the spins aligned with the orbital angular momentum, but not in the orbital plane. In general, the in-plane components of the spin are only weakly constrained. We basically only get back the information we put in. The leading-order effect of in-plane spins is summarised by the effective precession spin parameter \chi_\mathrm{p}. The plot below shows the inferred distributions for \chi_\mathrm{p}. The left half for each event shows our results, the right shows our prior after imposing the constraints on spin we get from \chi_\mathrm{eff}. We get the most information for GW151226 and GW170814, but even then it’s not much, and we generally cover the entire allowed range of values.
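
For completeness, here is the standard definition of \chi_\mathrm{p} in the same style (again, the example numbers are invented):

```python
import numpy as np

def chi_p(m1, m2, a1, a2, tilt1, tilt2):
    """Effective precession spin: leading-order in-plane spin effect.

    Convention: m1 >= m2, so the mass ratio q = m2/m1 is at most 1.
    """
    q = m2 / m1
    s1_perp = a1 * np.sin(tilt1)   # primary's in-plane spin
    s2_perp = a2 * np.sin(tilt2)   # secondary's in-plane spin
    return max(s1_perp, q * (4 * q + 3) / (4 + 3 * q) * s2_perp)

print(chi_p(40.0, 20.0, 0.7, 0.7, np.pi / 3, np.pi / 2))  # primary dominates
```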

All effective precession spin parameters

Estimated effective precession spin parameters for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817, GW170608, GW151226, GW151012, GW170104, GW170814, GW170809, GW170818, GW150914, GW170823, GW170729. The left (coloured) part of the plot shows the posterior distribution; the right (white) shows the prior conditioned by the effective inspiral spin parameter constraints. Part of Fig. 5 of the O2 Catalogue Paper.

One final measurement which we can make (albeit with considerable uncertainty) is the distance to the source. The distance influences how loud the signal is (the further away, the quieter it is). This also depends upon the inclination of the source (a binary edge-on is quieter than a binary face-on/off). Therefore, the distance is correlated with the inclination and we end up with some butterfly-like plots. GW170729 is again a record setter. It comes from a luminosity distance of 2.84^{+1.40}_{-1.36}~\mathrm{Gpc} away. That means it has travelled across the Universe for 3.2–6.2 billion years—it potentially started its journey before the Earth formed!
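
If you want to check numbers like this yourself, astropy makes the conversion from a luminosity distance to a light travel time easy (a sketch using the Planck 2015 cosmology, which may differ slightly from the cosmology used in the paper):

```python
import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

# Invert the distance-redshift relation for GW170729's median distance...
z = z_at_value(Planck15.luminosity_distance, 2.84 * u.Gpc)
# ...then the lookback time is how long the signal has been travelling
print(z, Planck15.lookback_time(z))  # z ~ 0.5, roughly 5 billion years
```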

All distances and inclinations

Estimated luminosity distances and orbital inclinations for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817 (solid), GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. An inclination of zero means that we’re looking face-on along the direction of the total angular momentum, and an inclination of \pi/2 means we’re looking edge-on perpendicular to the angular momentum. Part of Fig. 7 of the O2 Catalogue Paper.

Waveform reconstructions

To check our results, we reconstruct the waveforms from the data to see that they match our expectations for binary black hole waveforms (and there’s not anything extra there). To do this, we use unmodelled analyses which assume that there is a coherent signal in the detectors: we use both cWB and BayesWave. The results agree pretty well. The reconstructions beautifully match our templates when the signal is loud, but, as you might expect, can’t resolve the quieter details. You’ll also notice the reconstructions sometimes pick up a bit of background noise away from the signal. This gives you an idea of potential fluctuations.

Spectrograms and waveforms

Time–frequency maps and reconstructed signal waveforms for the binary black holes. For each event we show the results from the detector where the signal was loudest. The left panel for each shows the time–frequency spectrogram with the upward-sweeping chirp. The right panels show waveforms: blue for the modelled waveforms used to infer parameters (LALInference; top panel); red for the wavelet reconstructions (BayesWave; top panel); black for the maximum-likelihood cWB reconstruction (bottom panel); and green (bottom panel) for reconstructions of simulated similar signals. I think the agreement is pretty good! All the data have been whitened as this is how we perform the statistical analysis of our data. Fig. 10 of the O2 Catalogue Paper.

I still think GW170814 looks like a slug. Some people think they look like crocodiles.

We’ll be doing more tests of the consistency of our signals with general relativity in a future paper.

Merger rates

Given all our observations now, we can set better limits on the merger rates. Going from the number of detections seen to the number of mergers out in the Universe depends upon what you assume about the mass distribution of the sources. Therefore, we make a few different assumptions.

For binary black holes, we use (i) a power-law model for the more massive black hole similar to the initial mass function of stars, with a uniform distribution on the mass ratio, and (ii) a uniform-in-logarithm distribution for both masses. These were designed to bracket the two extremes of potential distributions. With our observations, we’re starting to see that the true distribution is more like the power law, so I expect we’ll be abandoning these soon. Taking the range of possible values from our calculations, the rate is in the range of 9.7–101~\mathrm{Gpc^{-3}\,yr^{-1}} for black holes between 5 M_\odot and 50 M_\odot [bonus note].

For binary neutron stars, which are perhaps more interesting to astronomers, we use a uniform distribution of masses between 0.8 M_\odot and 2.3 M_\odot, and a Gaussian distribution to match electromagnetic observations. We find that these bracket the range 97–4440~\mathrm{Gpc^{-3}\,yr^{-1}}. This is larger than our previous range, as we hadn’t considered the Gaussian distribution previously.

NSBH rate upper limits

90% upper limits for neutron star–black hole binaries. Three black hole masses were tried and two spin distributions. Results are shown for the two matched-filter search algorithms. Fig. 14 of the O2 Catalogue Paper.

Finally, what about neutron star–black holes? Since we don’t have any detections, we can only place an upper limit. This is a maximum of 610~\mathrm{Gpc^{-3}\,yr^{-1}}. This is about a factor of 2 better than our O1 results, and is starting to get interesting!

We are sure to discover lots more in O3… [bonus note].

The O2 Populations Paper

Synopsis: O2 Populations Paper
Read this if: You want the best family portrait of binary black holes
Favourite part: A maximum black hole mass?

Each detection is exciting. However, we can squeeze even more science out of our observations by looking at the entire population. Using all 10 of our binary black hole observations, we start to trace out the population of binary black holes. Since we still only have 10, we can’t yet be too definite in our conclusions. Our results give us some things to ponder, while we are waiting for the results of O3. I think now is a good time to start making some predictions.

We look at the distribution of black hole masses, black hole spins, and the redshift (cosmological time) of the mergers. The black hole masses tell us something about how you go from a massive star to a black hole. The spins tell us something about how the binaries form. The redshift tells us something about how these processes change as the Universe evolves. Ideally, we would look at these all together allowing for mixtures of binary black holes formed through different means. Given that we only have a few observations, we stick to a few simple models.

To work out the properties of the population, we perform a hierarchical analysis of our 10 binary black holes. We infer the properties of the individual systems, assuming that they come from a given population, and then see how well that population fits our data compared with a different distribution.

In doing this inference, we account for selection effects. Our detectors are not equally sensitive to all sources. For example, nearby sources produce louder signals and we can’t detect signals that are too far away, so if you didn’t account for this you’d conclude that binary black holes only merged in the nearby Universe. Perhaps less obvious is that we are not equally sensitive to all source masses. More massive binaries produce louder signals, so we can detect these further away than lighter binaries (up to the point where these binaries are so high mass that the signals are too low frequency for us to easily spot). This is why we detect more binary black holes than binary neutron stars, even though there are more binary neutron stars out there in the Universe.
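
In outline (this is the generic recipe rather than the paper’s exact implementation), the hierarchical likelihood reweights each event’s posterior samples by the proposed population, and the selection effects enter as a normalisation by the detectable fraction:

```python
import numpy as np

def log_population_likelihood(event_samples, pop_pdf, prior_pdf, det_frac):
    """Sketch of a hierarchical population likelihood with selection effects.

    event_samples: list of arrays of per-event posterior samples (e.g. m1)
    pop_pdf: function giving the proposed population density at a sample
    prior_pdf: function giving the sampling prior used per event
    det_frac: fraction of the proposed population that is detectable
    """
    log_like = 0.0
    for samples in event_samples:
        # Monte Carlo estimate of the per-event evidence under the
        # population: reweight posterior samples from the old prior
        log_like += np.log(np.mean(pop_pdf(samples) / prior_pdf(samples)))
    # Selection effect: normalise each event by the detectable fraction
    return log_like - len(event_samples) * np.log(det_frac)
```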

Masses

When looking at masses, we try three models of increasing complexity:

  • Model A is a simple power law for the mass of the more massive black hole m_1. There’s no real reason to expect the masses to follow a power law, but the masses of stars when they form do, and astronomers generally like power laws as they’re friendly, so it’s a sensible thing to try (a toy version is sketched after this list). We fit for the power-law index. The power law goes from a lower limit of 5 M_\odot to an upper limit which we also fit for. The mass of the lighter black hole m_2 is assumed to be uniformly distributed between 5 M_\odot and the mass of the other black hole.
  • Model B is the same power law, but we also allow the lower mass limit to vary from 5 M_\odot. We don’t have much sensitivity to low masses, so this lower bound is restricted to be above 5 M_\odot. I’d be interested in exploring lower masses in the future. Additionally, we allow the mass ratio q = m_2/m_1 of the black holes to vary, trying q^{\beta_q} instead of Model A’s q^0.
  • Model C has the same power law, but now with some smoothing at the low-mass end, rather than a sharp turn-on. Additionally, it includes a Gaussian component towards higher masses. This was inspired by the possibility of pulsational pair-instability supernovae causing a build-up of black holes at certain masses: stars which undergo this lose extra mass, so you’d end up with lower mass black holes than if the stars hadn’t undergone the pulsations. The Gaussian could fit other effects too, for example if there was a secondary formation channel, or just reflect that the pure power law is a bad fit.
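
As promised above, here’s a toy version of Model A’s power law (the index and cut-offs below are placeholders, not our fitted values):

```python
import numpy as np

def model_a_pdf(m1, alpha=2.0, m_min=5.0, m_max=45.0):
    """Toy Model A: p(m1) proportional to m1^-alpha between the cut-offs.
    The default alpha, m_min and m_max are placeholders, not fitted values.
    (This normalisation needs alpha != 1.)"""
    norm = (m_max**(1.0 - alpha) - m_min**(1.0 - alpha)) / (1.0 - alpha)
    return np.where((m1 >= m_min) & (m1 <= m_max), m1**(-alpha) / norm, 0.0)

masses = np.linspace(2.0, 60.0, 5)
print(model_a_pdf(masses))  # zero outside [m_min, m_max]
```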

In allowing the mass distributions to vary, we find overall rates which match pretty well those obtained with the main power-law rates calculation included in the O2 Catalogue Paper, and which are higher than with the main uniform-in-log distribution.

The fitted mass distributions are shown in the plot below. The error bars are pretty broad, but I think the models agree on some broad features: there are more light black holes than heavy black holes; the minimum black hole mass is below about 9 M_\odot, but we can’t place a lower bound on it; the maximum black hole mass is above about 35 M_\odot and below about 50 M_\odot, and we prefer black holes to have more similar masses than different ones. The upper bound on the black hole minimum mass, and the lower bound on the black hole upper mass are set by the smallest and biggest black holes we’ve detected, respectively.

Population vs black hole mass

Binary black hole merger rate as a function of the primary mass (m_1; top) and mass ratio (q; bottom). The solid lines and bands show the medians and 90% intervals. The dashed line shows the posterior predictive distribution: our expectation for future observations averaging over our uncertainties. Fig. 2 of the O2 Populations Paper.

That there does seem to be a drop off at higher masses is interesting. There could be something which stops stars forming black holes in this range. It has been proposed that there is a mass gap due to pair instability supernovae. These explosions completely disrupt their progenitor stars, leaving nothing behind. (I’m not sure if they are accompanied by a flash of green light). You’d expect this to kick in for black holes of about 50–60 M_\odot. We infer that 99% of merging black holes have masses below 44.0 M_\odot with Model A, 41.8 M_\odot with Model B, and 41.8 M_\odot with Model C. Therefore, our results are not inconsistent with a mass gap. However, we don’t really have enough evidence to be sure.

We can compare how well each of our three models fits the data by looking at their Bayes factors. These naturally incorporate the complexity of the models: models with more parameters (which can be more easily tweaked to match the data) are penalised so that you don’t need to worry about overfitting. We have a preference for Model C. It’s not strong, but I think good evidence that we can’t use a simple power law.

Spins

To model the spins:

  • For the magnitude, we assume a beta distribution. There’s no reason for this, but these are convenient distributions for things between 0 and 1, which are the limits on black hole spin (0 is nonspinning, 1 is as fast as you can spin). We assume that both spins are drawn from the same distribution.
  • For the spin orientations, we use a mix of an isotropic distribution and a Gaussian centred on being aligned with the orbital angular momentum. You’d expect an isotropic distribution if binaries were assembled dynamically, and perhaps something with spins generally aligned with each other if the binary evolved in isolation (a toy sampler is sketched after this list).
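
And here is the promised toy sampler for this two-part spin model (all the hyperparameter values are placeholders, and the nearly aligned component is simplified here to a half-Gaussian in the cosine of the tilt):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_spins(n, a, b, f_iso, sigma):
    """Toy spin model: magnitudes from a common beta distribution,
    tilts from a mixture of isotropic and nearly aligned components.
    All parameter values passed in are placeholders, not fitted results.
    """
    magnitude = rng.beta(a, b, size=n)
    iso = rng.random(n) < f_iso
    # Isotropic spins are uniform in cos(tilt); aligned ones cluster
    # near cos(tilt) = 1
    cos_tilt = np.where(
        iso,
        rng.uniform(-1.0, 1.0, size=n),
        np.clip(1.0 - np.abs(rng.normal(0.0, sigma, size=n)), -1.0, 1.0),
    )
    return magnitude, cos_tilt

mags, cos_tilts = sample_spins(1000, a=2.0, b=5.0, f_iso=0.5, sigma=0.3)
```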

We don’t get any useful information on the mixture fraction. Looking at the spin magnitudes, we have a preference towards smaller spins, but still have support for large spins. The more misaligned spins are, the larger the spin magnitudes can be: for the isotropic distribution, we have support all the way up to maximal values.

Parametric and binned spin magnitude distributions

Inferred spin magnitude distributions. The left shows results for the parametric distribution, assuming a mixture of almost aligned and isotropic spin, with the median (solid), 50% and 90% intervals shaded, and the posterior predictive distribution as the dashed line. Results are included both for beta distributions which can be singular at 0 and 1, and with these excluded. Model V is a very low spin model shown for comparison. The right shows a binned reconstruction of the distribution for aligned and isotropic distributions, showing the median and 90% intervals. Fig. 8 of the O2 Populations Paper.

Since spins are harder to measure than masses, it is not surprising that we can’t make strong statements yet. If we were to find something with definitely negative \chi_\mathrm{eff}, we would be able to deduce that spins can be seriously misaligned.

Redshift evolution

As a simple model of evolution over cosmological time, we allow the merger rate to evolve as (1+z)^\lambda. That’s right, another power law! Since we’re only sensitive to relatively small redshifts for the masses we detect (z < 1), this gives a good approximation to a range of different evolution schemes.
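
In code this model is a one-liner (the values below are placeholders, not our fitted results):

```python
def merger_rate_density(z, r0=50.0, lam=3.0):
    """Toy redshift evolution: local rate r0 (Gpc^-3 yr^-1) scaled by
    (1 + z)^lam. Both r0 and lam here are placeholder values."""
    return r0 * (1.0 + z) ** lam

# lam > 0 means the merger rate was higher in the past
print(merger_rate_density(0.0), merger_rate_density(1.0))  # 50.0 400.0
```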

Rate versus redshift

Evolution of the binary black hole merger rate (blue), showing median, 50% and 90% intervals. For comparison, a non-evolving rate calculated using Model B is shown too. Fig. 6 of the O2 Populations Paper.

We find that we prefer evolutions that increase with redshift. There’s an 88% probability that \lambda > 0, but we’re still consistent with no evolution. We might expect the rate to increase, as star formation was higher back towards z = 2. If we can measure the time delay between forming stars and black holes merging, we could figure out what happens to these systems in the meantime.

The local merger rate is broadly consistent with what we infer with our non-evolving distributions, but is a little on the lower side.

Bonus notes

Naming

Gravitational waves are named as GW-year-month-day, so our first observation from 14 September 2015 is GW150914. We realise that this convention suffers from a Y2K-style bug, but by the time we hit 2100, we’ll have so many detections we’ll need a new scheme anyway.

Previously, we had a second designation for less significant potential detections. They were LIGO–Virgo Triggers (LVT), the one example being LVT151012. No-one was really happy with this designation, but it stems from us being cautious with our first announcement, and not wishing to appear overly bold in claiming we’d seen two gravitational waves when the second wasn’t that certain. Now we’re a bit more confident, and we’ve decided to simplify naming by labelling everything a GW on the understanding that this now includes more uncertain events. Under the old scheme, GW170729 would have been LVT170729. The idea is that the broader community can decide which events they want to consider as real for their own studies. The current condition for being called a GW is that the probability of it being a real astrophysical signal is at least 50%. Our 11 GWs are safely above that limit.

The naming change has hidden the fact that, now that we have used our improved search pipelines, the significance of GW151012 has increased. It would now be a GW even under the old scheme. Congratulations LVT151012, I always believed in you!

Trust LIGO

Is it of extraterrestrial origin, or is it just a blurry figure? GW151012: the truth is out there!

Burning bright

We are lacking nicknames for our new events. They came in so fast that we kind of lost track. Ilya Mandel has suggested that GW170729 should be the Tiger, as it happened on the International Tiger Day. Since tigers are the biggest of the big cats, this seems apt.

Carl-Johan Haster argues that LIGO+tiger = Liger. Since ligers are even bigger than tigers, this seems like an excellent case to me! I’d vote for calling the bigger of the two progenitor black holes GW170729-tiger, the smaller GW170729-lion, and the final black hole GW170729-liger.

Suggestions for other nicknames are welcome, leave your ideas in the comments.

August 2017—Something fishy or just Poisson statistics?

The final few weeks of O2 were exhausting. I was trying to write job applications at the time, and each time I sat down to work on my research proposal, my phone went off with another alert. You may be wondering what was special about August. Some have hypothesised that it is because Aaron Zimmerman, my partner for the analysis of GW170104, was on the Parameter Estimation rota to analyse the last few weeks of O2. The legend goes that Aaron is especially lucky as he was bitten by a radioactive Leprechaun. I can neither confirm nor deny this. However, I make a point of playing any lottery numbers suggested by him.

A slightly more mundane explanation is that August was when the detectors were running nice and stably. They were observing for a large fraction of the time. LIGO Livingston reached its best sensitivity at this time, although it was less happy for Hanford. We often quantify the sensitivity of our detectors using their binary neutron star range, the average distance they could see a binary neutron star system with a signal-to-noise ratio of 8. If this increases by a factor of 2, you can see twice as far, which means you survey 8 times the volume. This cubed factor means even small improvements can have a big impact. The LIGO Livingston range peaked at a little over 100~\mathrm{Mpc}. We’re targeting at least 120~\mathrm{Mpc} for O3, so August 2017 gives an indication of what you can expect.

Detector sensitivity across O2

Binary neutron star range for the instruments across O2. The break around week 3 was for the holidays (We did work Christmas 2015). The break at week 23 was to tune up the instruments, and clean the mirrors. At week 31 there was an earthquake in Montana, and the Hanford sensitivity didn’t recover by the end of the run. Part of Fig. 1 of the O2 Catalogue Paper.

Of course, in the case of GW170817, we just got lucky.

Sign errors

GW170809 was the first event we identified with Virgo after it joined observing. The signal in Virgo is very quiet. We actually got better results when we flipped the sign of the Virgo data. We were just starting to get paranoid when GW170814 came along and showed us that everything was set up right at Virgo. When I get some time, I’d like to investigate how often this type of confusion happens for quiet signals.

SEOBNRv3

One of the waveform models we use in our analysis, which includes the most complete prescription of the precession of the black holes’ spins, goes by the technical name of SEOBNRv3. It is extremely computationally expensive. Work has been done to improve that, but this hasn’t been implemented in our reviewed codes yet. We managed to complete an analysis for the GW170104 Discovery Paper, which was a huge effort. I said then to not expect it for all future events. We did it for all the black holes, even for the lowest mass sources which have the longest signals. I was responsible for the GW151226 runs (as well as GW170104), and I started these back at the start of the summer. Eve Chase put in a heroic effort to get GW170608 results; we pulled out all the stops for that.

Thanksgiving

I have recently enjoyed my first Thanksgiving in the US. I was lucky enough to be hosted for dinner by Shane Larson and his family (and cats). I ate so much I thought I might collapse to a black hole. Apparently, a Thanksgiving dinner can be 3000–4500 calories. That sounds like a lot, but the merger of GW170729 would have emitted about 5 \times 10^{40} times more energy. In conclusion, I don’t need to go on a diet.

Confession

We cheated a little bit in calculating the rates. Roughly speaking, the merger rate is given by

\displaystyle R = \frac{N}{\langle VT\rangle},

where N is the number of detections and \langle VT\rangle is the amount of volume and time we’ve searched. You expect to detect more events if you increase the sensitivity of the detectors (and hence V), or observe for longer (and hence increase T). In our calculation, we included GW170608 in N, even though it was found outside of standard observing time. Really, we should increase \langle VT\rangle to factor in the extra time outside of standard observing time when we could have made a detection. This is messy to calculate though, as there’s not really a good way to check this. However, it’s only a small fraction of the time (so the extra T should be small), and for much of it the sensitivity of the detectors will be poor (so V will be small too). Therefore, we estimated that any bias from neglecting this is smaller than our uncertainty from the calibration of the detectors, and not worth worrying about.
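
For reference, here is a minimal version of the rate calculation (Poisson counting with a Jeffreys prior; the \langle VT\rangle value below is invented, and the real analysis also folds in calibration and population uncertainties):

```python
import numpy as np

def rate_posterior(rate_grid, n_events, vt):
    """Posterior on the merger rate R for N ~ Poisson(R * VT),
    using a Jeffreys prior p(R) proportional to R^(-1/2)."""
    log_post = (n_events - 0.5) * np.log(rate_grid) - rate_grid * vt
    post = np.exp(log_post - log_post.max())
    return post / np.trapz(post, rate_grid)

rates = np.linspace(1e-3, 300.0, 10000)  # Gpc^-3 yr^-1
posterior = rate_posterior(rates, n_events=10, vt=0.2)  # VT made up
print(rates[np.argmax(posterior)])  # peaks near N / VT
```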

New sources

We saw our first binary black hole shortly after turning on the Advanced LIGO detectors. We saw our first binary neutron star shortly after turning on the Advanced Virgo detector. My money is therefore on our first neutron star–black hole binary shortly after we turn on the KAGRA detector. Because science…

Advertisement

Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

Where do gravitational waves like GW170817 come from? Using our network of detectors, we cannot pinpoint a source, but we can make a good estimate—the amplitude of the signal tells us about the distance; the time delay between the signal arriving at different detectors, and relative amplitudes of the signal in different detectors tells us about the sky position (see the excellent video by Leo Singer below).

In this paper we look at full three-dimensional localization of gravitational-wave sources; we import a (rather cunning) technique from computer vision to construct a probability distribution for the source’s location, and then explore how well we could localise a set of simulated binary neutron stars. Knowing the source location enables lots of cool science. First, it aids direct follow-up observations with non-gravitational-wave observatories, searching for electromagnetic or neutrino counterparts. It’s especially helpful if you can cross-reference with galaxy catalogues, to find the most probable source locations (this technique was used to find the kilonova associated with GW170817). Even without finding a counterpart, knowing the most probable host galaxy helps us figure out how the source formed (have lots of stars been born recently, or are all the stars old?), and allows us to measure the expansion of the Universe. Having a reliable technique to reconstruct source locations is useful!

This was a fun paper to write [bonus note]. I’m sure it will be valuable, both for showing how to perform this type of reconstruction of a multi-dimensional probability density, and for its implications for source localization and follow-up of gravitational-wave signals. I go into details of both below, first discussing our statistical model (this is a bit technical), then looking at our results for a set of binary neutron stars (which have implications for hunting for counterparts).

Dirichlet process Gaussian mixture model

When we analyse gravitational-wave data to infer the source properties (location, masses, etc.), we map out parameter space with a set of samples: a list of points in the parameter space, with there being more around more probable locations and fewer in less probable locations. These samples encode everything about the probability distribution for the different parameters; we just need to extract it…

For our application, we want a nice smooth probability density. How do we convert a bunch of discrete samples to a smooth distribution? The simplest thing is to bin the samples. However, picking the right bin size is difficult, and becomes much harder in higher dimensions. Another popular option is to use kernel density estimation. This is better at ensuring smooth results, but you now have to worry about the size of your kernels.

Our approach is in essence to use a kernel density estimate, but to learn the size and position of the kernels (as well as the number) from the data as an extra layer of inference. The “Gaussian mixture model” part of the name refers to the kernels—we use several different Gaussians. The “Dirichlet process” part refers to how we assign their properties (their means and standard deviations). What I really like about this technique, as opposed to the usual rule-of-thumb approaches used for kernel density estimation,  is that it is well justified from a theoretical point of view.

I hadn’t come across a Dirichlet process before. Section 2 of the paper is a walkthrough of how I built up an understanding of this mathematical object, and it contains lots of helpful references if you’d like to dig deeper.

In our application, you can think of the Dirichlet process as being a probability distribution for probability distributions. We want a probability distribution describing the source location. Given our samples, we infer what this looks like. We could put all the probability into one big Gaussian, or we could put it into lots of little Gaussians. The Gaussians could be wide or narrow or a mix. The Dirichlet distribution allows us to assign probabilities to each configuration of Gaussians; for example, if our samples are all in the northern hemisphere, we probably want Gaussians centred around there, rather than in the southern hemisphere.

With the resulting probability distribution for the source location, we can quickly evaluate it at a single point. This means we can rapidly produce a list of most probable source galaxies—extremely handy if you need to know where to point a telescope before a kilonova fades away (or someone else finds it).
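
If you want to play with this kind of model, scikit-learn’s BayesianGaussianMixture provides a variational approximation to a Dirichlet process Gaussian mixture; this isn’t the code used in the paper, just a readily available stand-in (the sample positions below are fake):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Fake stand-in for posterior samples of a source's 3D position (Mpc)
rng = np.random.default_rng(0)
samples = rng.normal(loc=[100.0, -50.0, 30.0], scale=15.0, size=(5000, 3))

# Dirichlet-process mixture: the number of active Gaussians is inferred,
# up to the n_components cap
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
).fit(samples)

# Rapidly evaluate the log probability density at candidate galaxies
galaxies = np.array([[95.0, -48.0, 28.0], [300.0, 200.0, -100.0]])
print(dpgmm.score_samples(galaxies))  # the first is far more probable
```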

Gravitational-wave localization

To verify our technique works, and develop an intuition for three-dimensional localizations, we studied a set of simulated binary neutron star signals created for the First 2 Years trilogy of papers. This data set is well studied now; it illustrates performance in what we anticipated to be the first two observing runs of the advanced detectors, which turned out to be not too far from the truth. We have previously looked at three-dimensional localizations for these signals using a super rapid approximation.

The plots below show how well we could localise the sources of our binary neutron star signals. Specifically, the plots show the size of the volume which has a 90% probability of containing the source versus the signal-to-noise ratio (the loudness) of the signal. Typically, volumes are 10^4–10^5~\mathrm{Mpc}^3, which is about 10^{68}–10^{69} Olympic swimming pools. Such a volume would contain something like 100–1000 galaxies.

Volume verses signal-to-noise ratio

Localization volume as a function of signal-to-noise ratio. The top panel shows results for two-detector observations: the LIGO-Hanford and LIGO-Livingston (HL) network similar to in the first observing run, and the LIGO and Virgo (HLV) network similar to the second observing run. The bottom panel shows all observations for the HLV network including those with all three detectors which are colour coded by the fraction of the total signal-to-noise ratio from Virgo. In both panels, there are fiducial lines scaling inversely with the sixth power of the signal-to-noise ratio. Adapted from Fig. 4 of Del Pozzo et al. (2018).

Looking at the results in detail, we can learn a number of things:

  1. The localization volume is roughly inversely proportional to the sixth power of the signal-to-noise ratio [bonus note]. Loud signals are localized much better than quieter ones!
  2. The localization dramatically improves when we have three-detector observations. The extra detector improves the sky localization, which reduces the localization volume.
  3. To get the benefit of the extra detector, the source needs to be close enough that all the detectors could get a decent amount of the signal-to-noise ratio. In our case, Virgo is the least sensitive, and we see that the best localizations are when it has a fair share of the signal-to-noise ratio.
  4. Considering the cases where we only have two detectors, localization volumes get bigger at a given signal-to-noise ratio as the detectors get more sensitive. This is because we can detect sources at greater distances.

Putting all these bits together, I think in the future, when we have lots of detections, it would make most sense to prioritise following up the loudest signals. These are the best localised, and will also be the brightest since they are the closest, meaning there’s the greatest potential for actually finding a counterpart. As the sensitivity of the detectors improves, it’s only going to get more difficult to find a counterpart to a typical gravitational-wave signal, as sources will be further away and less well localized. However, having more sensitive detectors also means that we are more likely to have a really loud signal, which should be really well localized.

Banana vs cucumber

Left: Localization (yellow) with a network of two low-sensitivity detectors. The sky location is uncertain, but we know the source must be nearby. Right: Localization (green) with a network of three high-sensitivity detectors. We have good constraints on the source location, but it could now be at a much greater range of distances. Not to scale.

Using our localization volumes as a guide, you would only need to search one galaxy to find the true source in about 7% of cases with a three-detector network similar to at the end of our second observing run. Similarly, only ten would need to be searched in 23% of cases. It might be possible to get even better performance by considering which galaxies are most probable because they are the biggest or the most likely to produce merging binary neutron stars. This is definitely a good approach to follow.

Three-dimensional localization with galaxy catalogue

Galaxies within the 90% credible volume of an example simulated source, colour coded by probability. The galaxies are from the GLADE Catalog; incompleteness in the plane of the Milky Way causes the missing wedge of galaxies. The true source location is marked by a cross [bonus note]. Part of Figure 5 of Del Pozzo et al. (2018).

arXiv: 1801.08009 [astro-ph.IM]
Journal: Monthly Notices of the Royal Astronomical Society; 479(1):601–614; 2018
Code: 3d_volume
Buzzword bingo: Interdisciplinary (we worked with computer scientist Tom Haines); machine learning (the inference involving our Dirichlet process Gaussian mixture model); multimessenger astronomy (as our results are useful for following up gravitational-wave signals in the search for counterparts)

Bonus notes

Writing

We started writing this paper back before the first observing run of Advanced LIGO. We had a pretty complete draft on Friday 11 September 2015. We just needed to gather together a few extra numbers and polish up the figures and we’d be done! At 10:50 am on Monday 14 September 2015, we made our first detection of gravitational waves. The paper was put on hold. The pace of discoveries over the coming years meant we never quite found enough time to get it together—I’ve rewritten the introduction a dozen times. This is a shame, as it meant that this study came out much later than our other three-dimensional localization study. It’s extremely satisfying to finally have it done. The delay has the advantage of justifying one of my favourite acknowledgement sections.

Sixth power

We find that the localization volume \Delta V is inversely proportional to the sixth power of the signal-to-noise ratio \varrho. This is what you would expect. The localization volume depends upon the angular uncertainty on the sky \Delta \Omega, the distance to the source D, and the distance uncertainty \Delta D,

\Delta V \sim D^2 \Delta \Omega \Delta D.

Typically, the uncertainty on a parameter (like the masses) scales inversely with the signal-to-noise ratio. This is the case for the logarithm of the distance, which means

\displaystyle \frac{\Delta D}{D} \propto \varrho^{-1}.

The uncertainty in the sky location (being two dimensional) scales inversely with the square of the signal-to-noise ratio,

\Delta \Omega \propto \varrho^{-2}.

The signal-to-noise ratio itself is inversely proportional to the distance to the source (sources further away are quieter). Therefore, putting everything together gives

\Delta V \propto \varrho^{-6}.

Treasure

We all know that treasure is marked by a cross. In the case of a binary neutron star merger, dense material ejected from the neutron stars will decay to heavy elements like gold and platinum, so there is definitely a lot of treasure at the source location.

Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignments

Gravitational waves allow us to infer the properties of binary black holes (two black holes in orbit about each other), but can we use this information to figure out how the black holes and the binary form? In this paper, we show that measurements of the black holes’ spins can help us figure this out, but probably not until we have at least 100 detections.

Black hole spins

Black holes are described by their masses (how much they bend spacetime) and their spins (how much they drag spacetime to rotate about them). The orientation of the spins relative to the orbit of the binary could tell us something about the history of the binary [bonus note].

We considered four different populations of spin–orbit alignments to see if we could tell them apart with gravitational-wave observations:

  1. Aligned—matching the idealised example of isolated binary evolution. This stands in for the case where misalignments are small, which might be the case if material blown off during a supernova ends up falling back and being swallowed by the black hole.
  2. Isotropic—matching the expectations for dynamically formed binaries.
  3. Equal misalignments at birth—this would be the case if the spins and orbit were aligned before the second supernova, which then tilted the plane of the orbit. (As the binary inspirals, the spins wobble around, so the two misalignment angles won’t always be the same).
  4. Both spins misaligned by supernova kicks, assuming that the stars were aligned with the orbit before exploding. This gives a more general scatter of unequal misalignments, but typically the primary (bigger and first forming) black hole is more misaligned.

These give a selection of possible spin alignments. For each, we assumed that the spin magnitude was the same and had a value of 0.7. This seemed like a sensible idea when we started this study [bonus note], but is now towards the upper end of what we expect for binary black holes.

Hierarchical analysis

To measure the properties of the population, we need to perform a hierarchical analysis: there are two layers of inference, one for the individual binaries, and one for the population.

From a gravitational wave signal, we infer the properties of the source using Bayes’ theorem. Given the data d_\alpha, we want to know the probability that the parameters \mathbf{\Theta}_\alpha have different values, which is written as p(\mathbf{\Theta}_\alpha|d_\alpha). This is calculated using

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha) p(\mathbf{\Theta}_\alpha)}{p(d_\alpha)},

where p(d_\alpha | \mathbf{\Theta}_\alpha) is the likelihood, which we can calculate from our knowledge of the noise in our gravitational wave detectors, p(\mathbf{\Theta}_\alpha) is the prior on the parameters (what we would have guessed before we had the data), and the normalisation constant p(d_\alpha) is called the evidence. We’ll use the evidence again in the next layer of inference.

Our prior on the parameters should actually depend upon what we believe about the astrophysical population. It is different if we believed that Model 1 were true (when we’d only consider aligned spins) than for Model 2. Therefore, we should really write

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha, \lambda) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha,\lambda) p(\mathbf{\Theta}_\alpha|\lambda)}{p(d_\alpha|\lambda)},

where \lambda denotes which model we are considering.

This is an important point to remember: if you are using our LIGO results to test your theory of binary formation, you need to remember to correct for our choice of prior. We try to pick non-informative priors—priors that don’t make strong assumptions about the physics of the source—but this doesn’t mean that they match what would be expected from your model.

We are interested in the probability distribution for the different models: how many binaries come from each. Given a set of different observations \{d_\alpha\}, we can work this out using another application of Bayes’ theorem (yay)

\displaystyle p(\mathbf{\lambda}|\{d_\alpha\}) = \frac{p(\{d_\alpha\} | \mathbf{\lambda}) p(\mathbf{\lambda})}{p(\{d_\alpha\})},

where p(\{d_\alpha\} | \mathbf{\lambda}) is just all the evidences for the individual events (given that model) multiplied together, p(\mathbf{\lambda}) is our prior for the different models, and p(\{d_\alpha\}) is another normalisation constant.

Now knowing how to go from a set of observations to the probability distribution on the different channels, let’s give it a go!
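
Before looking at the real results, here is a toy numerical version of this second layer of inference (the per-event evidences below are invented): sample candidate fractions from a flat Dirichlet prior, then weight each by the product over events of its mixture likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented per-event evidences p(d_alpha | model k): rows are events,
# columns are the four spin-misalignment models
evidences = rng.gamma(2.0, size=(10, 4))

# Prior samples of the mixture fractions from a flat Dirichlet prior
fractions = rng.dirichlet(np.ones(4), size=100000)

# Likelihood of each candidate set of fractions: product over events of
# sum_k fraction_k * evidence_{event, k}
log_like = np.sum(np.log(fractions @ evidences.T), axis=1)
weights = np.exp(log_like - log_like.max())

# Posterior mean of the fraction of binaries from each model
print(np.average(fractions, axis=0, weights=weights))
```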

Results

To test our approach, we made a set of mock gravitational wave measurements. We generated signals from binaries for each of our four models, and analysed these as we would real signals (using LALInference). This is rather computationally expensive, and we wanted a large set of events to analyse, so using these results as a guide, we created a larger catalogue of approximate distributions for the inferred source parameters p(\mathbf{\Theta}_\alpha|d_\alpha). We then fed these through our hierarchical analysis. The GIF below shows how measurements of the fraction of binaries from each population tighten up as we get more detections: the true fraction is marked in blue.

Fraction of binaries from each of the four models

Probability distribution for the fraction of binaries from each of our four spin misalignment populations for different numbers of observations. The blue dot marks the true fraction: an equal fraction from all four channels.

The plot shows that we do zoom in towards the true fraction of events from each model as the number of events increases, but there are significant degeneracies between the different models. Notably, it is difficult to tell apart Models 1 and 3, as both have strong support for both spins being nearly aligned. Similarly, there is a degeneracy between Models 2 and 4 as both allow for the two spins to have very different misalignments (and for the primary spin, which is the better measured one, to be quite significantly misaligned).

This means that we should be able to distinguish aligned from misaligned populations (we estimated that as few as 5 events would be needed to distinguish the case that all events came from either Model 1  or Model 2 if those were the only two allowed possibilities). However, it will be more difficult to distinguish different scenarios which only lead to small misalignments from each other, or disentangle whether there is significant misalignment due to big supernova kicks or because binaries are formed dynamically.

The uncertainty on the fraction of events from each model scales roughly inversely with the square root of the number of observations, so it may be slow progress making these measurements. I’m not sure whether we’ll know the answer to how binary black holes form, or who will sit on the Iron Throne first.

arXiv: 1703.06873 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 471(3):2801–2811; 2017
Birmingham science summary: Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignment (by Simon)
If you like this you might like: Farr et al. (2017), Talbot & Thrane (2017), Vitale et al. (2017), Trifirò et al. (2016), Minogue (2000)

Bonus notes

Spin misalignments and formation histories

If you have two stars forming in a binary together, you’d expect them to be spinning in roughly the same direction, rotating the same way as they go round in their orbit (like our Solar System). This is because they all formed from the same cloud of swirling gas and dust. Furthermore, if two stars are to form a black hole binary that we can detect gravitational waves from, they need to be close together. This means that there can be tidal forces which gently tug the stars to align their rotation with the orbit. As they get older, stars puff up, meaning that if you have a close-by neighbour, you can share outer layers. This transfer of material will tend to align the rotation too. Adding this all together, if you have an isolated binary of stars, you might expect that when they collapse down to become black holes, their spins are aligned with each other and the orbit.

Unfortunately, real astrophysics is rarely so clean. Even if the stars were initially rotating the same way as each other, that doesn’t mean that their black hole remnants will do the same. This depends upon how the star collapses. Massive stars explode as supernovae, blasting off their outer layers while their cores collapse down to form black holes. Escaping material could carry away angular momentum, meaning that the black hole is spinning in a different direction to its parent star, or material could be blasted off asymmetrically, giving the new black hole a kick. This would change the plane of the binary’s orbit, misaligning the spins.

Alternatively, the binary could be formed dynamically. Instead of two stars living their lives together, we could have two stars (or black holes) come close enough together to form a binary. This is likely to happen in regions where there’s a high density of stars, such as a globular cluster. In this case, since the binary has been randomly assembled, there’s no reason for the spins to be aligned with each other or the orbit. For dynamically assembled binaries, all spin–orbit misalignments are equally probable.

Slow and steady

This project was led by Simon Stevenson. It was one of the first things we started working on at the beginning of his PhD. He has now graduated, and is off to start a new exciting life as a postdoc in Australia. We got a little distracted by other projects, most notably analysing the first detections of gravitational waves. Simon spent a lot of time developing the COMPAS population code, a code to simulate the evolution of binaries. Looking back, it’s impressive how far he’s come. This paper used a simple approximation to estimate the masses of our black holes: we called it the Post-it note model, as we wrote it down on a single Post-it. Now Simon’s writing papers including the complexities of common-envelope evolution in order to explain LIGO’s actual observations.