Deep and rapid observations of strong-lensing galaxy clusters within the sky localisation of GW170814

Gravitational waves and gravitational lensing are two predictions of general relativity. Gravitational waves are produced whenever masses accelerate. Gravitational lensing is produced by anything with mass. Gravitational lensing can magnify images, making it easier to spot far away things. In theory, gravitational waves can be lensed too. In this paper, we looked for evidence that GW170814 might have been lensed. (We didn’t find any, but this was my first foray into traditional astronomy).

The lensing of gravitational waves

Strong gravitational lensing magnifies a signal. A gravitational wave which has been lensed would therefore have a larger amplitude than if it had not been lensed. We infer the distance to the source of a gravitational wave from the amplitude. If we didn’t know a signal was lensed, we’d therefore think the source is much closer than it really is.

Waveform explained

The shape of the gravitational wave encodes the properties of the source. This information is what lets us infer parameters. The example signal is GW150914 (which is fairly similar to GW170814). I made this explainer with Ben Farr and Nutsinee Kijbunchoo for the LIGO Magazine.

Mismeasuring the distance to a gravitational wave has important consequences for understanding their sources. As the gravitational wave travels across the expanding Universe, it gets stretched (redshifted), so by the time it arrives at our detectors it has a longer wavelength (and lower frequency). If we assume that a signal came from a closer source, we’ll underestimate the amount of stretching the signal has undergone, and won’t fully correct for it. This means we’ll overestimate the masses when we infer them from the signal.
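
To make this concrete, here is a rough sketch of the bias (a minimal example, assuming a Planck15 cosmology and that lensing boosts the strain amplitude by the square root of the magnification; all the numbers are purely illustrative):

    # Rough sketch of how a magnified signal biases the inferred source mass.
    # Assumes lensing rescales the amplitude by sqrt(magnification); the masses,
    # distance and magnification below are illustrative placeholders.
    import astropy.units as u
    from astropy.cosmology import Planck15, z_at_value

    m_detector = 40.0      # detector-frame mass we measure (solar masses)
    d_true = 6.0 * u.Gpc   # actual distance to the source
    magnification = 4.0    # strong-lensing magnification

    # A magnified signal looks louder, so we infer a distance that is too small.
    d_inferred = d_true / magnification**0.5

    z_true = z_at_value(Planck15.luminosity_distance, d_true)
    z_inferred = z_at_value(Planck15.luminosity_distance, d_inferred)

    # Source-frame mass is the detector-frame mass divided by (1 + redshift).
    print(m_detector / (1 + z_inferred))  # what we would quote, unaware of the lens
    print(m_detector / (1 + z_true))      # the true source-frame mass (smaller)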

This possibility got a few people thinking when we announced our first detection, as GW150914 was heavier than previously observed black holes. Could we be seeing lensed gravitational waves?

Such strongly lensed gravitational waves should be multiply imaged. We should be able to see multiple copies of the same signal, which take different paths from the source and are bent by the gravity of the lens, reaching us at different times. The delay time between images depends on the mass of the lens, with bigger lenses having longer delays. For galaxy clusters, it can be years.
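
Roughly speaking (sweeping all the details of the image configuration, which matter a great deal, into a geometric factor), the characteristic delay scales linearly with the lens mass,

\displaystyle \Delta t \sim \frac{G M_\mathrm{lens}}{c^3} \times (\mathrm{geometric~factors}),

so stepping up from a galaxy-scale lens to a cluster-scale lens takes typical delays from days or months up to years.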

The idea

Some of my former Birmingham colleagues who study gravitational lensing were thinking about the possibility of having multiply imaged gravitational waves. I pointed out how difficult these would be to identify. They would come from the same part of the sky, and would have the same source parameters. However, since our uncertainties are so large for gravitational wave observations, I thought it would be tough to convince yourself that you’d seen the same signal twice [bonus note]. Lensing is expected to be rare [bonus note], so would you put your money on two signals (possibly years apart) being the same, or there just happening to be two similar systems somewhere in this huge patch of the sky?

However, if there were an optical counterpart to the merger, it would be much easier to tell that it was lensed. Since we know the location of galaxy clusters which could strongly lens a signal, we can target searches for counterparts at these clusters. The odds of finding anything are slim, but since it doesn’t take too much telescope time to look, it’s still a gamble worth taking, as the potential pay-off would be huge.

Somehow [bonus note], I got involved in observing proposals to look for strongly lensed counterparts. We got everything in place for the last month of O2. It was just one month, so I wasn’t anticipating there being that much to do. I was very wrong.

GW170814

For GW170814 there were a couple of galaxy clusters which could serve as strong gravitational lenses. Abell 3084 started off as the more probable, but as the sky localization for GW170814 was refined, SMACS J0304.3−4401 looked like the better bet.

Sky maps for GW170814 (left: initial Bayestar localization; right: refined LALInference localizations) and two potential gravitational lensing galaxy clusters

Sky localization for GW170814 and the galaxy clusters Abell 3084 (filled circle), and SMACS J0304.3−4401 (open). The left plot shows the low-latency Bayestar localization (LIGO only dotted, LIGO and Virgo solid), and the right shows the refined LALInference sky maps (solid from GCN 21493, which we used for our observations, and dotted from GWTC-1). The dashed line shows the Galactic plane. Figure 1 of Smith et al. (2019).

We observed both galaxy clusters using the Gemini Multi-Object Spectrographs (GMOS) on Gemini South and the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope, both in Chile. You’ll never guess what we found…

That’s right, absolutely nothing! [bonus note] That’s not actually too surprising. GW170814’s source was identified as a binary black hole—assuming no lensing, the binary had masses around 25 and 30 solar masses. We don’t expect significant electromagnetic emission from a binary black hole merger (it would be a big discovery if we found some, but that is a long shot). If the source were lensed, we would have overestimated the source masses, but to get the source into the neutron star mass range would take a ridiculous amount of lensing. However, the important point is that we have demonstrated that such a search for strongly lensed images is possible!

The future

In O3 [bonus note], the team has been targeting lower mass systems, where a neutron star may get mislabelled as a black hole due to a moderate amount of lensing. A false identification here could confuse our understanding of the minimum mass of a black hole, and also mean that we miss all sorts of lovely multimessenger observations, so this seems like a good plan to me.

arXiv: 1805.07370 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 485(4):5180–5191; 2019
Conference proceedings: 1803.07851 [astro-ph.HE] (from when work was still in-progress)
Future research: Are Double Stuf Oreos just gravitationally lensed regular Oreos?

Bonus notes

Statistical analysis

It is possible to do a statistical analysis to calculate the probability of two signals being lensed images of each other. The best attempt I’ve seen at this is Hannuksela et al. (2019). They do a nice study considering lensing by galaxies (and find nothing conclusive).

Biasing merger rates

If we included lensed events in our calculations of the merger rate density (the rate of mergers per unit volume of space), without correcting for them being lensed, we would overestimate the merger rate density. We’d assume that all our mergers came from a smaller volume of space than they actually did, as we wouldn’t know that the lensed events are being seen from further away. As long as the fraction of lensed events is small, this shouldn’t be a big problem, so we’re probably safe not to worry about it.

Slippery slope

What actually happened was my then boss, Alberto Vecchio, asked me to do some calculations based upon the sky maps for our detections in O1 as they’d only take me 5 minutes. Obviously, there were then more calculations, advice about gravitational wave alerts, feedback on observing proposals… and eventually I thought that if I’d put in this much time I might as well get a paper to show for it.

It was interesting to see how electromagnetic observing works, but I’m not sure I’d do it again.

Upper limits

Following tradition, when we don’t make a detection, we can set an upper limit on what could be there. In this case, we conclude that there is nothing to see down to an i-band magnitude of 25. This is pretty faint, about 40 million times fainter than anything you could see with the naked eye (translating to visible light). We can set such a good upper limit (compared to other follow-up efforts) as we only needed to point the telescopes at a small patch of sky around the galaxy clusters, and so we could leave them staring for a relatively long time.
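
As a quick sanity check of the 40 million figure (assuming a naked-eye limit of roughly magnitude 6, and using the definition that 5 magnitudes correspond to a factor of 100 in flux):

    # Magnitudes are logarithmic: each 5 magnitudes is a factor of 100 in flux.
    # The naked-eye limit of 6 is an assumption for this back-of-the-envelope check.
    limit_naked_eye = 6.0
    limit_observations = 25.0
    flux_ratio = 10 ** ((limit_observations - limit_naked_eye) / 2.5)
    print(f"{flux_ratio:.2g}")  # ~4e7, i.e. about 40 million times fainter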

O3 lensing hype

In O3, two gravitational wave candidates (S190828j and S190828l) were found just 21 minutes apart—this, for reasons I don’t entirely understand, led to much speculation that they were multiple images of a gravitationally lensed source. For a comprehensive debunking, follow this Twitter thread.

Second star to the right and straight on ’til morning—Astrophysics white papers

What will be the next big thing in astronomy? One of the hard things about research is that you often don’t know what you will discover before you embark on an investigation. An idea might work out, or it might not, or along the way you might discover something unexpected which is far more interesting. As you might imagine, this can make laying definite plans difficult…

However, it is important to have plans for research. While you might not be sure of the outcome, it is necessary to weigh the risks and rewards associated with the probable results before you invest your time and taxpayers’ money!

To help with planning and prioritising, researchers in astrophysics often pull together white papers [bonus note]. These are sketches of ideas for future research, arguing why you think they might be interesting. These can then be discussed within the community to help shape the direction of the field. If other scientists find the paper convincing, you can build support which helps push for funding. If there are gaps in the logic, others can point these out to save you from heading the wrong way. This type of consensus building is especially important for large experiments or missions—you don’t want to spend a billion dollars on something unless you’re really sure it is a good idea and lots of people agree.

I have been involved with a few white papers recently. Here are some key ideas for where research should go.

Ground-based gravitational-wave detectors: The next generation

We’ve done some awesome things with Advanced LIGO and Advanced Virgo. In just a couple of years we have revolutionized our understanding of binary black holes. That’s not bad. However, our current gravitational-wave observatories are limited in what they can detect. What amazing things could we achieve with a new generation of detectors?

It can take decades to develop new instruments, therefore it’s important to start thinking about them early. Obviously, what we would most like is an observatory which can detect everything, but that’s not feasible. In this white paper, we pick the questions we most want answered, and see what the requirements for a new detector would be. A design which satisfies these specifications would therefore be a solid choice for future investment.

Binary black holes are the perfect source for ground-based detectors. What do we most want to know about them?

  1. How many mergers are there, and how does the merger rate change over the history of the Universe? We want to know how binary black holes are made. The merger rate encodes lots of information about how to make binaries, and comparing how it evolves with the rate at which the Universe forms stars will give us a deeper understanding of how black holes are made.
  2. What are the properties (masses and spins) of black holes? The merger rate tells us some things about how black holes form, but other properties like the masses, spins and orbital eccentricity complete the picture. We want to make precise measurements for individual systems, and also understand the population.
  3. Where do supermassive black holes come from? We know that stars can collapse to produce stellar-mass black holes. We also know that the centres of galaxies contain massive black holes. Where do these massive black holes come from? Do they grow from our smaller black holes, or do they form in a different way? Looking for intermediate-mass black holes in the gap in-between will tell us whether there is a missing link in the evolution of black holes.
Detection horizon as a function of binary mass for Advanced LIGO, A+, Cosmic Explorer and the Einstein Telescope

The detection horizon (the distance to which sources can be detected) for Advanced LIGO (aLIGO), its upgrade A+, and the proposed Cosmic Explorer (CE) and Einstein Telescope (ET). The horizon is plotted for binaries with equal-mass, nonspinning components. Adapted from Hall & Evans (2019).

What can we do to answer these questions?

  1. Increase sensitivity! Advanced LIGO and Advanced Virgo can detect a 30 M_\odot + 30 M_\odot binary out to a redshift of about z \approx 1. The planned detector upgrade A+ will see them out to redshift z \approx 2. That’s pretty impressive: it means we’re covering 10 billion years of history. However, the peak in the Universe’s star formation happens at around z \approx 2, so we’d really like to see beyond this in order to measure how the merger rate evolves. Ideally we would see all the way back to cosmic dawn at z \approx 20, when the Universe was only 200 million years old and the first stars lit up (see the quick numerical check after this list).
  2. Increase our frequency range! Our current detectors are limited in the range of frequencies they can detect. Pushing to lower frequencies helps us to detect heavier systems. If we want to detect intermediate-mass black holes of 100 M_\odot we need this low frequency sensitivity. At the moment, Advanced LIGO can only get down to about 10~\mathrm{Hz}. The plot below shows the signal from a 100 M_\odot + 100 M_\odot binary at z = 10. The signal lies almost entirely below 10~\mathrm{Hz}, so it is completely undetectable with this cut-off.

    Gravitational wave signal from a binary of two 100 solar mass black holes at a redshift of 10

    The gravitational wave signal from the final stages of inspiral, merger and ringdown of two 100 solar mass black holes at a redshift of 10. The signal chirps up in frequency. The colour coding shows parts of the signal above different frequencies. Part of Figure 2 of the Binary Black Holes White Paper.

  3. Increase sensitivity and frequency range! Increasing sensitivity means that we will have higher signal-to-noise ratio detections. For these loudest sources, we will be able to make more precise measurements of the source properties. We will also have more detections overall, as we can survey a larger volume of the Universe. Increasing the frequency range means we can observe a longer stretch of the signal (for the systems we currently see). This means it is easier to measure spin precession and orbital eccentricity. We also get to measure a wider range of masses. Putting the improved sensitivity and frequency range together means that we’ll get better measurements of individual systems and a more complete picture of the population.
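
To put rough numbers on point 1, here is a quick check using astropy's Planck15 cosmology (the white paper may assume slightly different cosmological parameters, so treat these as ballpark figures):

    # Quick check of the redshift-to-time claims in point 1 above.
    import astropy.units as u
    from astropy.cosmology import Planck15

    print(Planck15.lookback_time(2).to(u.Gyr))  # ~10.5 Gyr of history within A+'s reach
    print(Planck15.age(20).to(u.Myr))           # ~180 Myr: the age of the Universe at cosmic dawn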

How much do we need to improve our observatories to achieve our goals? To quantify this, let’s consider the boost in sensitivity relative to A+, which I’ll call \beta_\mathrm{A+}. If the questions can be answered with \beta_\mathrm{A+} = 1, then we don’t need anything beyond the currently planned A+. If we need a slightly larger \beta_\mathrm{A+}, we should start investigating extra ways to improve the A+ design. If we need much larger \beta_\mathrm{A+}, we need to think about new facilities.

The plot below shows the boost necessary to detect a binary (with equal-mass nonspinning components) out to a given redshift. With a boost of \beta_\mathrm{A+} = 10 (blue line) we can survey black holes around 10 M_\odot–30 M_\odot across cosmic time.

Boost to detect a binary of a given mass at a given redshift

The boost factor (relative to A+) \beta_\mathrm{A+} needed to detect a binary with a total mass M out to redshift z. The binaries are assumed to have equal-mass, nonspinning components. The colour scale saturates at \log_{10} \beta_\mathrm{A+} = 4.5. The blue curve highlights the reach at a boost factor of \beta_\mathrm{A+} = 10. The solid and dashed white lines indicate the maximum reach of Cosmic Explorer and the Einstein Telescope, respectively. Part of Figure 1 of the Binary Black Holes White Paper.

The plot above shows that to see intermediate-mass black holes, we do need to completely overhaul the low-frequency sensitivity. What do we need to detect a 100 M_\odot + 100 M_\odot binary at z = 10? If we parameterize the noise spectrum (power spectral density) of our detector as S_n(f) = S_{10}(f/10~\mathrm{Hz})^\alpha with a lower cut-off frequency of f_\mathrm{min}, we can investigate the various possibilities. The plot below shows the possible combinations of parameters which meet our requirements.
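
As a concrete (if simplistic) version of that parameterisation, here is the noise model written as a function; the values of S_{10}, \alpha and f_\mathrm{min} used below are placeholders, not numbers from the white paper:

    # Parameterised low-frequency noise model: a power law anchored at 10 Hz with a
    # hard cut-off below f_min. All parameter values here are illustrative placeholders.
    import numpy as np

    def power_law_psd(f, S_10, alpha, f_min):
        """Noise power spectral density S_n(f) = S_10 (f / 10 Hz)^alpha for f >= f_min."""
        f = np.asarray(f, dtype=float)
        psd = S_10 * (f / 10.0) ** alpha
        return np.where(f >= f_min, psd, np.inf)  # infinite noise below the cut-off

    frequencies = np.linspace(1.0, 20.0, 5)
    print(power_law_psd(frequencies, S_10=1e-48, alpha=-8.0, f_min=5.0))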

Noise curve requirements for intermediate-mass black hole detection

Requirements on the low-frequency noise power spectrum necessary to detect an optimally oriented intermediate-mass binary black hole system with two 100 solar mass components at a redshift of 10. Part of Figure 2 of the Binary Black Holes White Paper.

To build up information about the population of black holes, we need lots of detections. Uncertainties scale inversely with the square root of the number of detections, so you would expect a few percent uncertainty after 1000 detections. If we want to see how the population evolves, we need this many per redshift bin! The plot below shows the number of detections per year of observing time for different boost factors. The rate starts to saturate once we detect all the binaries in the redshift range. That is as good as it’s ever going to get.
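
For the counting argument, the rule of thumb is just Poisson statistics: the fractional uncertainty is roughly one over the square root of the number of detections,

\displaystyle \frac{\Delta R_\mathrm{det}}{R_\mathrm{det}} \sim \frac{1}{\sqrt{N}} = \frac{1}{\sqrt{1000}} \approx 3\%.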

Detections per redshift bin as a function of boost factor

Expected rate of binary black hole detections R_\mathrm{det} per redshift bin as a function of A+ boost factor \beta_\mathrm{A+} for three redshift bins. The merging binaries are assumed to be uniformly distributed with a constant merger rate roughly consistent with current observations: the solid line is about the current median, while the dashed and dotted lines are roughly the 90% bounds. Figure 3 of the Binary Black Holes White Paper.

Looking at the plots above, it is clear that A+ is not going to satisfy our requirements. We need something with a boost factor of \beta_\mathrm{A+} = 10: a next-generation observatory. Both the Cosmic Explorer and Einstein Telescope designs do satisfy our goals.

Yes!

Data is pleased. Credit: Paramount

Title: Deeper, wider, sharper: Next-generation ground-based gravitational-wave observations of binary black holes
arXiv:
1903.09220 [astro-ph.HE]
Contribution level: ☆☆☆☆☆ Leading author
Theme music: Daft Punk

Extreme mass ratio inspirals are awesome

We have seen gravitational waves from a stellar-mass black hole merging with another stellar-mass black hole; can we observe a stellar-mass black hole merging with a massive black hole? Yes: these are a perfect source for a space-based gravitational wave observatory. We call these systems extreme mass-ratio inspirals (or EMRIs, pronounced em-rees, for short) [bonus note].

Having such an extreme mass ratio, with one black hole much bigger than the other, gives EMRIs interesting properties. The number of orbits over the course of an inspiral scales with the mass ratio: the more extreme the mass ratio, the more orbits there are. Each of these gives us something to measure in the gravitational wave signal.
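
As a rough indication of scale (this is just the headline scaling from the paragraph above, not a relativistic calculation):

    # Rough scaling only: the number of orbits tracked during the inspiral is of order
    # the inverse mass ratio. The real number comes from relativistic perturbation
    # theory; this just gives the order of magnitude for illustrative masses.
    M_massive = 1.0e6   # massive black hole (solar masses)
    m_compact = 10.0    # stellar-mass black hole (solar masses)
    print(M_massive / m_compact)  # ~1e5 orbits, each encoding information about the source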

The intricate structure of an EMRI orbit

A short section of an orbit around a spinning black hole. While inspirals last for years, this would represent only a few hours around a black hole of mass M = 10^6 M_\odot. The position is measured in terms of the gravitational radius r_\mathrm{g} = GM/c^2. The innermost stable orbit for this black hole would be at about 2.3 r_\mathrm{g}. Part of Figure 1 of the EMRI White Paper.

As EMRIs are so intricate, we can make exquisite measurements of the source properties, which opens up a lot of science.

Event rates for EMRIs are currently uncertain: there could be just one per year or thousands. From the rate we can figure out the details of what is going on in the nuclei of galaxies, and what types of objects you find there.

With EMRIs you can unravel mysteries in astrophysics, fundamental physics and cosmology.

Have we sold you that EMRIs are awesome? Well then, what do we need to do to observe them? There is only one currently planned mission which can enable us to study EMRIs: LISA. To maximise the science from EMRIs, we have to support LISA.

Lisa Simpson dancing

As an aspiring scientist, Lisa Simpson is a strong supporter of the LISA mission. Credit: Fox

Title: The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy
arXiv:
1903.03686 [astro-ph.HE]
Contribution level: ☆☆☆☆☆ Leading author
Theme music: Muse

Bonus notes

White paper vs journal article

Since white papers are proposals for future research, they aren’t as rigorous as usual academic papers. They are really attempts to figure out a good question to ask, rather than being answers. White papers are not usually peer reviewed before publication—the point is that you want everybody to comment on them, rather than just one or two anonymous referees.

Whilst white papers aren’t quite the same class as journal articles, they do still contain some interesting ideas, so I thought they merit a blog post.

Recycling

I have blogged about EMRIs before, so I won’t go into too much detail here. It was one of my former blog posts which inspired the LISA Science Team to get in touch to ask me to write the white paper.

The O2 Catalogue—It goes up to 11

The full results of our second advanced-detector observing run (O2) have now been released—we’re pleased to announce four new gravitational wave signals: GW170729, GW170809, GW170818 and GW170823 [bonus note]. These latest observations are all of binary black hole systems. Together, they bring our total to 10 observations of binary black holes, and 1 of a binary neutron star. With more frequent detections on the horizon with our third observing run due to start early 2019, the era of gravitational wave astronomy is truly here.

Black hole and neutron star masses

The population of black holes and neutron stars observed with gravitational waves and with electromagnetic astronomy. You can play with an interactive version of this plot online.

The new detections are largely consistent with our previous findings. GW170809, GW170818 and GW170823 are all similar to our first detection GW150914. Their black holes have masses around 20 to 40 times the mass of our Sun. I would lump GW170104 and GW170814 into this class too. Although there were models that predicted black holes of these masses, we weren’t sure they existed until our gravitational wave observations. The family of black holes continues out of this range. GW151012, GW151226 and GW170608 fall on the lower mass side. These overlap with the population of black holes previously observed in X-ray binaries. Lower mass systems can’t be detected as far away, so we find fewer of these. On the higher end we have GW170729 [bonus note]. Its source is made up of black holes with masses 50.2^{+16.2}_{-10.2} M_\odot and 34.0^{+9.1}_{-10.1} M_\odot (where M_\odot is the mass of our Sun). The larger black hole is a contender for the most massive black hole we’ve found in a binary (the other probable contender is GW170823’s source, which has a 39.5^{+11.2}_{-6.7} M_\odot black hole). We have a big happy family of black holes!

Of the new detections, GW170729, GW170809 and GW170818 were all observed by the Virgo detector as well as the two LIGO detectors. Virgo joined O2 for an exciting August [bonus note], and we decided that the data at the time of GW170729 were good enough to use too. Unfortunately, Virgo wasn’t observing at the time of GW170823. GW170729 and GW170809 are very quiet in Virgo; you can’t confidently say there is a signal there [bonus note]. However, GW170818 is a clear detection like GW170814. Well done Virgo!

Using the collection of results, we can start to understand the physics of these binary systems. We will be summarising our findings in a series of papers. A huge amount of work went into these.

The papers

The O2 Catalogue Paper

Title: GWTC-1: A gravitational-wave transient catalog of compact binary mergers observed by LIGO and Virgo during the first and second observing runs
arXiv:
 1811.12907 [astro-ph.HE]
Data: Catalogue; Parameter estimation results
Journal: Physical Review X; 9(3):031040(49); 2019
LIGO science summary: GWTC-1: A new catalog of gravitational-wave detections

The paper summarises all our observations of binaries to date. It covers our first and second observing runs (O1 and O2). This is the paper to start with if you want any information. It contains estimates of parameters for all our sources, including updates for previous events. It also contains merger rate estimates for binary neutron stars and binary black holes, and an upper limit for neutron star–black hole binaries. We’re still missing a neutron star–black hole detection to complete the set.

More details: The O2 Catalogue Paper

The O2 Populations Paper

Title: Binary black hole population properties inferred from the first and second observing runs of Advanced LIGO and Advanced Virgo
arXiv:
 1811.12940 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 882(2):L24(30); 2019
Data: Population inference results
LIGO science summary: Binary black hole properties inferred from O1 and O2

Using our set of ten binary black holes, we can start to make some statistical statements about the population: the distribution of masses, the distribution of spins, the distribution of mergers over cosmic time. With only ten observations, we still have a lot of uncertainty, and can’t make too many definite statements. However, if you were wondering why we don’t see any more black holes more massive than GW170729, even though we can see these out to significant distances, so are we. We infer that almost all stellar-mass black holes have masses less than 45 M_\odot.

More details: The O2 Populations Paper

The O2 Catalogue Paper

Synopsis: O2 Catalogue Paper
Read this if: You want the most up-to-date gravitational results
Favourite part: It’s out! We can tell everyone about our FOUR new detections

This is a BIG paper. It covers our first two observing runs and our main searches for coalescing stellar mass binaries. There will be separate papers going into more detail on searches for other gravitational wave signals.

The instruments

Gravitational wave detectors are complicated machines. You don’t just take them out of the box and press go. We’ll be slowly improving the sensitivity of our detectors as we commission them over the next few years. O2 marks the best sensitivity achieved to date. The paper gives a brief overview of the detector configurations in O2 for both LIGO detectors, which did differ, and Virgo.

During O2, we realised that one source of noise was beam jitter, disturbances in the shape of the laser beam. This was particularly notable in Hanford, where there was a spot on one of the optics. Fortunately, we are able to measure the effects of this, and hence subtract out this noise. This has now been done for the whole of O2. It makes a big difference! Derek Davis and TJ Massinger won the first LIGO Laboratory Award for Excellence in Detector Characterization and Calibration™ for implementing this noise subtraction scheme (the award citation almost spilled the beans on our new detections). I’m happy that GW170104 now has an increased signal-to-noise ratio, which means smaller uncertainties on its parameters.

The searches

We use three search algorithms in this paper. We have two matched-filter searches (GstLAL and PyCBC). These compare a bank of templates to the data to look for matches. We also use coherent WaveBurst (cWB), which is a search for generic short signals, but here has been tuned to find the characteristic chirp of a binary. Since cWB is more flexible in the signals it can find, it’s slightly less sensitive than the matched-filter searches, but it gives us confidence that we’re not missing things.

The two matched-filter searches both identify all 11 signals with the exception of GW170818, which is only found by GstLAL. This is because PyCBC only flags signals above a threshold in each detector. We’re confident it’s real though, as it is seen in all three detectors, albeit below PyCBC’s threshold in Hanford and Virgo. (In O2, PyCBC only looked for signals found in coincidence between Livingston and Hanford; I suspect they would have found it if they had been looking at all three detectors, as that would have let them lower their threshold.)

The search pipelines try to distinguish between signal-like features in the data and noise fluctuations. Having multiple detectors is a big help here, although we still need to be careful in checking for correlated noise sources. The background of noise falls off quickly, so there’s a rapid transition between almost-certainly noise and almost-certainly signal. Most of the signals are off the charts in terms of significance, with GW170818, GW151012 and GW170729 being the least significant. GW170729 is found with best significance by cWB, which reports a false alarm rate of 1/(50~\mathrm{yr}).

Inverse false alarm rates

Cumulative histogram of results from GstLAL (top left), PyCBC (top right) and cWB (bottom). The expected background is shown as the dashed line and the shaded regions give Poisson uncertainties. The search results are shown as the solid red line and named gravitational-wave detections are shown as blue dots. More significant results are further to the right of the plot. Fig. 2 and Fig. 3 of the O2 Catalogue Paper.

The false alarm rate indicates how often you would expect to find something at least as signal-like if you were to analyse a stretch of data with the same statistical properties as the data considered, assuming that there is only noise in the data. The false alarm rate does not fold in the probability that there are real gravitational waves occurring at some average rate. Therefore, we need to do an extra layer of inference to work out the probability that something flagged by a search pipeline is a real signal rather than noise.

The results of this calculation are given in Table IV. GW170729 has a 94% probability of being real using the cWB results, 98% using the GstLAL results, but only 52% according to PyCBC. Therefore, if you’re feeling bold, you might, say, only wager the entire economy of the UK on it being real.
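
Schematically, that extra layer of inference weighs the expected rate of real signals against the expected rate of noise events at a candidate's ranking statistic. A minimal sketch (the rates below are made-up placeholders, not the pipelines' actual outputs):

    # Schematic probability of astrophysical origin: compare the expected density of
    # real signals to the expected density of noise triggers at the ranking statistic
    # of a candidate. The numbers passed in below are made-up placeholders.
    def p_astro(signal_rate_density, noise_rate_density):
        """Probability that a candidate is a real signal rather than noise."""
        return signal_rate_density / (signal_rate_density + noise_rate_density)

    print(p_astro(signal_rate_density=1.0, noise_rate_density=0.02))  # a confident detection
    print(p_astro(signal_rate_density=1.0, noise_rate_density=0.92))  # a marginal ~52% case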

We also list the most marginal triggers. These all have probabilities well below 50% of being real: if you were to add them all up you wouldn’t get a total of 1 real event. (In my professional opinion, they are garbage.) However, if you want to check for what we might have missed, these may be a place to start. Some of these can be explained away as instrumental noise, say scattered light. Others show no obvious signs of disturbance, so are probably just some noise fluctuation.

The source properties

We give updated parameter estimates for all 11 sources. These use updated estimates of the calibration uncertainty (which doesn’t make too much difference), an improved estimate of the noise spectrum (which makes some difference to the less well measured parameters like the mass ratio), the cleaned data (which helps for GW170104), and our current most complete waveform models [bonus note].

This plot shows the masses of the two binary components (you can just make out GW170817 down in the corner). We use the convention that the more massive of the two is m_1 and the lighter is m_2. We are now really filling in the mass plot! Implications for the population of black holes are discussed in the Populations Paper.

All binary masses

Estimated masses for the two binary objects for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817 (solid), GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. The grey area is excluded from our convention on masses. Part of Fig. 4 of the O2 Catalogue Paper. The mass ratio is q = m_2/m_1.

As well as mass, black holes have a spin. For the final black hole formed in the merger, these spins are always around 0.7, with a little more or less depending upon which way the spins of the two initial black holes were pointing. As well as probably being the most massive, GW170729’s final black hole could also have the highest spin! It is a record breaker. It radiated a colossal 4.8^{+1.7}_{-1.7} M_\odot worth of energy in gravitational waves [bonus note].

All final black hole masses and spins

Estimated final masses and spins for each of the binary black hole events in O1 and O2. From lowest chirp mass (left; red–orange) to highest (right; purple): GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. Part of Fig. 4 of the O2 Catalogue Paper.

There is considerable uncertainty on the spins as they are hard to measure. The best combination to pin down is the effective inspiral spin parameter \chi_\mathrm{eff}. This is a mass-weighted combination of the spins which has the most impact on the signal we observe. It could be zero if the spins are misaligned with each other, point in the orbital plane, or are zero. If it is non-zero, then it means that at least one black hole definitely has some spin. GW151226 and GW170729 have \chi_\mathrm{eff} > 0 with more than 99% probability. The rest are consistent with zero. The spin distribution for GW170104 has tightened up as its signal-to-noise ratio has increased, and there’s less support for negative \chi_\mathrm{eff}, but there’s been no move towards larger positive \chi_\mathrm{eff}.
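
For reference, the effective inspiral spin is the mass-weighted sum of the spin components along the orbital angular momentum,

\displaystyle \chi_\mathrm{eff} = \frac{m_1 \chi_1 \cos\theta_1 + m_2 \chi_2 \cos\theta_2}{m_1 + m_2},

where \chi_1 and \chi_2 are the dimensionless spin magnitudes and \theta_1 and \theta_2 are the angles between each spin and the orbital angular momentum.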

All effective inspiral spin parameters

Estimated effective inspiral spin parameters for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817, GW170608, GW151226, GW151012, GW170104, GW170814, GW170809, GW170818, GW150914, GW170823, GW170729. Part of Fig. 5 of the O2 Catalogue Paper.

For our analysis, we use two different waveform models to check for potential sources of systematic error. They agree pretty well. The spins are where they show most difference (which makes sense, as this is where they differ in terms of formulation). For GW151226, the effective precession waveform IMRPhenomPv2 gives \chi_\mathrm{eff} = 0.20^{+0.18}_{-0.08}, while the full precession model gives 0.15^{+0.25}_{-0.11} and extends to negative \chi_\mathrm{eff}. I panicked a little bit when I first saw this, as GW151226 having a non-zero spin was one of our headline results when first announced. Fortunately, when I worked out the numbers, all our conclusions were safe. The probability of \chi_\mathrm{eff} < 0 is less than 1%. In fact, we can now say that at least one spin is greater than 0.28 at 99% probability, compared with 0.2 previously, because the full precession model likes spins in the orbital plane a bit more. Who says data analysis can’t be thrilling?

Our measurement of \chi_\mathrm{eff} tells us about the part of the spins aligned with the orbital angular momentum, but not about the part in the orbital plane. In general, the in-plane components of the spin are only weakly constrained: we basically only get back the information we put in. The leading-order effect of in-plane spins is summarised by the effective precession spin parameter \chi_\mathrm{p}. The plot below shows the inferred distributions for \chi_\mathrm{p}. The left half for each event shows our results; the right shows our prior after imposing the constraints on spin we get from \chi_\mathrm{eff}. We get the most information for GW151226 and GW170814, but even then it’s not much, and we generally cover the entire allowed range of values.

All effective precession spin parameters

Estimated effective precession spin parameters for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817, GW170608, GW151226, GW151012, GW170104, GW170814, GW170809, GW170818, GW150914, GW170823, GW170729. The left (coloured) part of the plot shows the posterior distribution; the right (white) shows the prior conditioned on the effective inspiral spin parameter constraints. Part of Fig. 5 of the O2 Catalogue Paper.

One final measurement which we can make (albeit with considerable uncertainty) is the distance to the source. The distance influences how loud the signal is (the further away, the quieter it is). This also depends upon the inclination of the source (a binary edge-on is quieter than a binary face-on/off). Therefore, the distance is correlated with the inclination and we end up with some butterfly-like plots. GW170729 is again a record setter. It comes from a luminosity distance of 2.84^{+1.40}_{-1.36}~\mathrm{Gpc} away. That means it has travelled across the Universe for 3.2–6.2 billion years—it potentially started its journey before the Earth formed!

All distances and inclinations

Estimated luminosity distances and orbital inclinations for each of the events in O1 and O2. From lowest chirp mass (left; red) to highest (right; purple): GW170817 (solid), GW170608 (dashed), GW151226 (solid), GW151012 (dashed), GW170104 (solid), GW170814 (dashed), GW170809 (dashed), GW170818 (dashed), GW150914 (solid), GW170823 (dashed), GW170729 (solid). The contours mark the 90% credible regions. An inclination of zero means that we’re looking face-on along the direction of the total angular momentum, and an inclination of \pi/2 means we’re looking edge-on, perpendicular to the angular momentum. Part of Fig. 7 of the O2 Catalogue Paper.

Waveform reconstructions

To check our results, we reconstruct the waveforms from the data to see that they match our expectations for binary black hole waveforms (and that there’s not anything extra there). To do this, we use unmodelled analyses which assume that there is a coherent signal in the detectors: we use both cWB and BayesWave. The results agree pretty well. The reconstructions beautifully match our templates when the signal is loud, but, as you might expect, cannot resolve the quieter details. You’ll also notice the reconstructions sometimes pick up a bit of background noise away from the signal. This gives you an idea of potential fluctuations.

Spectrograms and waveforms

Time–frequency maps and reconstructed signal waveforms for the binary black holes. For each event we show the results from the detector where the signal was loudest. The left panel for each shows the time–frequency spectrogram with the upward-sweeping chirp. The right panels show the waveforms: blue are the modelled waveforms used to infer parameters (LALInference; top panel); red are the wavelet reconstructions (BayesWave; top panel); black is the maximum-likelihood cWB reconstruction (bottom panel), and green (bottom panel) shows reconstructions for simulated similar signals. I think the agreement is pretty good! All the data have been whitened as this is how we perform the statistical analysis of our data. Fig. 10 of the O2 Catalogue Paper.

I still think GW170814 looks like a slug. Some people think they look like crocodiles.

We’ll be doing more tests of the consistency of our signals with general relativity in a future paper.

Merger rates

Given all our observations now, we can set better limits on the merger rates. Going from the number of detections seen to the number of mergers out in the Universe depends upon what you assume about the mass distribution of the sources. Therefore, we make a few different assumptions.

For binary black holes, we use (i) a power-law model for the more massive black hole similar to the initial mass function of stars, with a uniform distribution on the mass ratio, and (ii) a uniform-in-logarithm distribution for both masses. These were designed to bracket the two extremes of potential distributions. With our observations, we’re starting to see that the true distribution is more like the power law, so I expect we’ll be abandoning these soon. Taking the range of possible values from our calculations, the rate is in the range of 9.7–101~\mathrm{Gpc^{-3}\,yr^{-1}} for black holes between 5 M_\odot and 50 M_\odot [bonus note].

For binary neutron stars, which are perhaps more interesting to astronomers, we use a uniform distribution of masses between 0.8 M_\odot and 2.3 M_\odot, and a Gaussian distribution to match electromagnetic observations. We find that these bracket the range 97–4440~\mathrm{Gpc^{-3}\,yr^{-1}}. This is larger than our previous range, as we hadn’t considered the Gaussian distribution previously.

NSBH rate upper limits

90% upper limits for neutron star–black hole binaries. Three black hole masses and two spin distributions were tried. Results are shown for the two matched-filter search algorithms. Fig. 14 of the O2 Catalogue Paper.

Finally, what about neutron star–black holes? Since we don’t have any detections, we can only place an upper limit. This is a maximum of 610~\mathrm{Gpc^{-3}\,yr^{-1}}. This is about a factor of 2 better than our O1 results, and is starting to get interesting!

We are sure to discover lots more in O3… [bonus note].

The O2 Populations Paper

Synopsis: O2 Populations Paper
Read this if: You want the best family portrait of binary black holes
Favourite part: A maximum black hole mass?

Each detection is exciting. However, we can squeeze even more science out of our observations by looking at the entire population. Using all 10 of our binary black hole observations, we start to trace out the population of binary black holes. Since we still only have 10, we can’t yet be too definite in our conclusions. Our results give us some things to ponder, while we are waiting for the results of O3. I think now is a good time to start making some predictions.

We look at the distribution of black hole masses, black hole spins, and the redshift (cosmological time) of the mergers. The black hole masses tell us something about how you go from a massive star to a black hole. The spins tell us something about how the binaries form. The redshift tells us something about how these processes change as the Universe evolves. Ideally, we would look at these all together allowing for mixtures of binary black holes formed through different means. Given that we only have a few observations, we stick to a few simple models.

To work out the properties of the population, we perform a hierarchical analysis of our 10 binary black holes. We infer the properties of the individual systems, assuming that they come from a given population, and then see how well that population fits our data compared with a different distribution.

In doing this inference, we account for selection effects. Our detectors are not equally sensitive to all sources. For example, nearby sources produce louder signals, and we can’t detect signals that are too far away, so if you didn’t account for this you’d conclude that binary black holes only merged in the nearby Universe. Perhaps less obvious is that we are not equally sensitive to all source masses. More massive binaries produce louder signals, so we can detect these further away than lighter binaries (up to the point where the binaries are so high mass that the signals are too low frequency for us to easily spot). This is why we detect more binary black holes than binary neutron stars, even though there are more binary neutron stars out there in the Universe.
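
To get a feel for the mass part of the selection effect, here is the crude scaling for inspiral-dominated signals (it ignores the high-mass turnover and cosmology mentioned above, so treat it as an illustration rather than what the analysis actually uses):

    # Crude selection-effect scaling for inspiral-dominated signals: the amplitude
    # grows as (chirp mass)^(5/6), so the horizon distance does too, and the surveyed
    # volume as the cube of that. The reference chirp mass is a placeholder.
    def relative_volume(chirp_mass, reference_chirp_mass=1.2):
        """Detectable volume relative to a binary-neutron-star-like reference system."""
        horizon_ratio = (chirp_mass / reference_chirp_mass) ** (5.0 / 6.0)
        return horizon_ratio ** 3

    print(round(relative_volume(28.0)))  # ~2600: a GW150914-like binary is seen in a much larger volume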

Masses

When looking at masses, we try three models of increasing complexity:

  • Model A is a simple power law for the mass of the more massive black hole m_1. There’s no real reason to expect the masses to follow a power law, but the masses of stars when they form do, and astronomers generally like power laws as they’re friendly, so it’s a sensible thing to try (a rough numerical sketch of this kind of parameterisation follows this list). We fit for the power-law index. The power law goes from a lower limit of 5 M_\odot to an upper limit which we also fit for. The mass of the lighter black hole m_2 is assumed to be uniformly distributed between 5 M_\odot and the mass of the other black hole.
  • Model B is the same power law, but we also allow the lower mass limit to vary from 5 M_\odot. We don’t have much sensitivity to low masses, so this lower bound is restricted to be above 5 M_\odot. I’d be interested in exploring lower masses in the future. Additionally, we allow the mass ratio q = m_2/m_1 of the black holes to vary, trying q^{\beta_q} instead of Model A’s q^0.
  • Model C has the same power law, but now with some smoothing at the low-mass end, rather than a sharp turn-on. Additionally, it includes a Gaussian component towards higher masses. This was inspired by the possibility of pulsational pair-instability supernova causing a build up of black holes at certain masses: stars which undergo this lose extra mass, so you’d end up with lower mass black holes than if the stars hadn’t undergone the pulsations. The Gaussian could fit other effects too, for example if there was a secondary formation channel, or just reflect that the pure power law is a bad fit.
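
A minimal sketch of the power-law parameterisation behind Models A and B is below; the index, mass-ratio exponent and mass limits are placeholders that the analysis would infer, not results, and setting \beta_q = 0 with m_\mathrm{min} = 5 M_\odot recovers Model A:

    # Sketch of the Model A/B parameterisation: an (unnormalised) power law in the
    # primary mass between m_min and m_max, and a power law in the mass ratio.
    # All parameter values below are placeholders, not inferred values.
    def mass_model_density(m_1, q, alpha=1.5, beta_q=0.0, m_min=5.0, m_max=45.0):
        """Unnormalised population density p(m_1, q) ~ m_1^(-alpha) * q^(beta_q)."""
        if not (m_min <= q * m_1 <= m_1 <= m_max):
            return 0.0  # outside the allowed mass range
        return m_1 ** (-alpha) * q ** beta_q

    print(mass_model_density(m_1=30.0, q=0.8))
    print(mass_model_density(m_1=60.0, q=0.8))  # above m_max, so zero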

In allowing the mass distributions to vary, we find overall rates which match pretty well those we obtain with the main power-law rate calculation included in the O2 Catalogue Paper, and which are higher than those from the main uniform-in-log distribution.

The fitted mass distributions are shown in the plot below. The error bars are pretty broad, but I think the models agree on some broad features: there are more light black holes than heavy black holes; the minimum black hole mass is below about 9 M_\odot, but we can’t place a lower bound on it; the maximum black hole mass is above about 35 M_\odot and below about 50 M_\odot, and we prefer black holes to have more similar masses than different ones. The upper bound on the black hole minimum mass, and the lower bound on the black hole upper mass are set by the smallest and biggest black holes we’ve detected, respectively.

Population vs black hole mass

Binary black hole merger rate as a function of the primary mass (m_1; top) and mass ratio (q; bottom). The solid lines and bands show the medians and 90% intervals. The dashed line shows the posterior predictive distribution: our expectation for future observations averaging over our uncertainties. Fig. 2 of the O2 Populations Paper.

That there does seem to be a drop off at higher masses is interesting. There could be something which stops stars forming black holes in this range. It has been proposed that there is a mass gap due to pair instability supernovae. These explosions completely disrupt their progenitor stars, leaving nothing behind. (I’m not sure if they are accompanied by a flash of green light). You’d expect this to kick in for black holes of about 50–60 M_\odot. We infer that 99% of merging black holes have masses below 44.0 M_\odot with Model A, 41.8 M_\odot with Model B, and 41.8 M_\odot with Model C. Therefore, our results are not inconsistent with a mass gap. However, we don’t really have enough evidence to be sure.

We can compare how well each of our three models fits the data by looking at their Bayes factors. These naturally incorporate the complexity of the models: models with more parameters (which can be more easily tweaked to match the data) are penalised so that you don’t need to worry about overfitting. We have a preference for Model C. It’s not strong, but I think good evidence that we can’t use a simple power law.

Spins

To model the spins:

  • For the magnitude, we assume a beta distribution. There’s no reason for this, but these are convenient distributions for things between 0 and 1, which are the limits on black hole spin (0 is nonspinning, 1 is as fast as you can spin). We assume that both spins are drawn from the same distribution.
  • For the spin orientations, we use a mix of an isotropic distribution and a Gaussian centred on being aligned with the orbital angular momentum. You’d expect an isotropic distribution if binaries were assembled dynamically, and perhaps something with spins generally aligned with each other if the binary evolved in isolation (a rough sketch of drawing spins from this kind of model follows the list).
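
A hand-wavy sketch of drawing spins from this kind of model is below; the beta-distribution shape parameters, mixture fraction and width of the aligned component are all placeholders, and the exact form of the aligned component here is just illustrative:

    # Sketch of the spin model: magnitudes from a beta distribution; orientations from
    # a mixture of an isotropic component and a component peaked at alignment with the
    # orbital angular momentum. All parameter values are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    def draw_spin(a=2.0, b=5.0, f_iso=0.5, sigma=0.3):
        """Return (magnitude, cosine of tilt angle) for one black hole spin."""
        magnitude = rng.beta(a, b)
        if rng.random() < f_iso:
            cos_tilt = rng.uniform(-1.0, 1.0)  # isotropic orientations
        else:
            cos_tilt = 2.0                     # preferentially aligned orientations
            while not -1.0 <= cos_tilt <= 1.0:
                cos_tilt = rng.normal(1.0, sigma)
        return magnitude, cos_tilt

    print([draw_spin() for _ in range(3)])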

We don’t get any useful information on the mixture fraction. Looking at the spin magnitudes, we have a preference towards smaller spins, but still have support for large spins. The more misaligned spins are, the larger the spin magnitudes can be: for the isotropic distribution, we have support all the way up to maximal values.

Parametric and binned spin magnitude distributions

Inferred spin magnitude distributions. The left shows results for the parametric distribution, assuming a mixture of almost aligned and isotropic spin, with the median (solid), 50% and 90% intervals shaded, and the posterior predictive distribution as the dashed line. Results are included both for beta distributions which can be singular at 0 and 1, and with these excluded. Model V is a very low spin model shown for comparison. The right shows a binned reconstruction of the distribution for aligned and isotropic distributions, showing the median and 90% intervals. Fig. 8 of the O2 Populations Paper.

Since spins are harder to measure than masses, it is not surprising that we can’t make strong statements yet. If we were to find something with definitely negative \chi_\mathrm{eff}, we would be able to deduce that spins can be seriously misaligned.

Redshift evolution

As a simple model of evolution over cosmological time, we allow the merger rate to evolve as (1+z)^\lambda. That’s right, another power law! Since we’re only sensitive to relatively small redshifts for the masses we detect (z < 1), this gives a good approximation to a range of different evolution schemes.

Rate versus redshift

Evolution of the binary black hole merger rate (blue), showing median, 50% and 90% intervals. For comparison, a non-evolving rate calculated using Model B is shown too. Fig. 6 of the O2 Populations Paper.

We find that we prefer evolutions that increase with redshift. There’s an 88% probability that \lambda > 0, but we’re still consistent with no evolution. We might expect the rate to increase, as star formation was higher back towards z = 2. If we can measure the time delay between forming stars and black holes merging, we could figure out what happens to these systems in the meantime.

The local merger rate is broadly consistent with what we infer with our non-evolving distributions, but is a little on the lower side.

Bonus notes

Naming

Gravitational waves are named as GW-year-month-day, so our first observation from 14 September 2015 is GW150914. We realise that this convention suffers from a Y2K-style bug, but by the time we hit 2100, we’ll have so many detections we’ll need a new scheme anyway.

Previously, we had a second designation for less significant potential detections. They were LIGO–Virgo Triggers (LVT), the one example being LVT151012. No-one was really happy with this designation, but it stems from us being cautious with our first announcement, and not wishing to appear over bold with claiming we’d seen two gravitational waves when the second wasn’t that certain. Now we’re a bit more confident, and we’ve decided to simplify naming by labelling everything a GW on the understanding that this now includes more uncertain events. Under the old scheme, GW170729 would have been LVT170729. The idea is that the broader community can decide which events they want to consider as real for their own studies. The current condition for being called a GW is that the probability of it being a real astrophysical signal is at least 50%. Our 11 GWs are safely above that limit.

The naming change has hidden the fact that, now that we use our improved search pipelines, the significance of GW151012 has increased. It would now be a GW even under the old scheme. Congratulations LVT151012, I always believed in you!

Trust LIGO

Is it of extraterrestrial origin, or is it just a blurry figure? GW151012: the truth is out there!

Burning bright

We are lacking nicknames for our new events. They came in so fast that we kind of lost track. Ilya Mandel has suggested that GW170729 should be the Tiger, as it happened on the International Tiger Day. Since tigers are the biggest of the big cats, this seems apt.

Carl-Johan Haster argues that LIGO + tiger = Liger. Since ligers are even bigger than tigers, this seems like an excellent case to me! I’d vote for calling the bigger of the two progenitor black holes GW170729-tiger, the smaller GW170729-lion, and the final black hole GW170729-liger.

Suggestions for other nicknames are welcome, leave your ideas in the comments.

August 2017—Something fishy or just Poisson statistics?

The final few weeks of O2 were exhausting. I was trying to write job applications at the time, and each time I sat down to work on my research proposal, my phone went off with another alert. You may be wondering what was special about August. Some have hypothesised that it is because Aaron Zimmerman, my partner for the analysis of GW170104, was on the Parameter Estimation rota to analyse the last few weeks of O2. The legend goes that Aaron is especially lucky as he was bitten by a radioactive Leprechaun. I can neither confirm nor deny this. However, I make a point of playing any lottery numbers suggested by him.

A slightly more mundane explanation is that August was when the detectors were running nice and stably. They were observing for a large fraction of the time. LIGO Livingston reached its best sensitivity at this time, although Hanford was less happy. We often quantify the sensitivity of our detectors using their binary neutron star range, the average distance at which they could see a binary neutron star system with a signal-to-noise ratio of 8. If this increases by a factor of 2, you can see twice as far, which means you survey 8 times the volume. This cubed factor means even small improvements can have a big impact. The LIGO Livingston range peaked at a little over 100~\mathrm{Mpc}. We’re targeting at least 120~\mathrm{Mpc} for O3, so August 2017 gives an indication of what you can expect.

Detector sensitivity across O2

Binary neutron star range for the instruments across O2. The break around week 3 was for the holidays (We did work Christmas 2015). The break at week 23 was to tune-up the instruments, and clean the mirrors. At week 31 there was an earthquake in Montana, and the Hanford sensitivity didn’t recover by the end of the run. Part of Fig. 1 of the O2 Catalogue Paper.

Of course, in the case of GW170817, we just got lucky.

Sign errors

GW170809 was the first event we identified with Virgo after it joined observing. The signal in Virgo is very quiet. We actually got better results when we flipped the sign of the Virgo data. We were just starting to get paranoid when GW170814 came along and showed us that everything was set up right at Virgo. When I get some time, I’d like to investigate how often this type of confusion happens for quiet signals.

SEOBNRv3

One of the waveforms we use in our analysis, which includes the most complete prescription of the precession of the spins of the black holes, goes by the technical name of SEOBNRv3. It is extremely computationally expensive. Work has been done to improve that, but this hasn’t been implemented in our reviewed codes yet. We managed to complete an analysis for the GW170104 Discovery Paper, which was a huge effort. I said then not to expect it for all future events. This time we did it for all the black holes, even the lowest mass sources, which have the longest signals. I was responsible for the GW151226 runs (as well as GW170104), and I started these back at the start of the summer. Eve Chase put in a heroic effort to get the GW170608 results; we pulled out all the stops for that.

Thanksgiving

I have recently enjoyed my first Thanksgiving in the US. I was lucky enough to be hosted for dinner by Shane Larson and his family (and cats). I ate so much I thought I might collapse to a black hole. Apparently, a Thanksgiving dinner can be 3000–4500 calories. That sounds like a lot, but the merger of GW170729 would have emitted about 5 \times 10^{40} times more energy. In conclusion, I don’t need to go on a diet.

Confession

We cheated a little bit in calculating the rates. Roughly speaking, the merger rate is given by

\displaystyle R = \frac{N}{\langle VT\rangle},

where N is the number of detections and \langle VT\rangle is the amount of volume and time we’ve searched. You expect to detect more events if you increase the sensitivity of the detectors (and hence V), or observe for longer (and hence increase T). In our calculation, we included GW170608 in N, even though it was found outside of standard observing time. Really, we should increase \langle VT\rangle to factor in the extra time outside of standard observing time when we could have made a detection. This is messy to calculate though, as there’s not really a good way to check this. However, it’s only a small fraction of the time (so the extra T should be small), and for much of it the sensitivity of the detectors will be poor (so V will be small too). Therefore, we estimated that any bias from neglecting this is smaller than our uncertainty from the calibration of the detectors, and not worth worrying about.

New sources

We saw our first binary black hole shortly after turning on the Advanced LIGO detectors. We saw our first binary neutron star shortly after turning on the Advanced Virgo detector. My money is therefore on our first neutron star–black hole binary shortly after we turn on the KAGRA detector. Because science…

Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

Where do gravitational waves like GW170817 come from? Using our network of detectors, we cannot pinpoint a source, but we can make a good estimate—the amplitude of the signal tells us about the distance; the time delay between the signal arriving at different detectors, and the relative amplitudes of the signal in different detectors, tell us about the sky position (see the excellent video by Leo Singer below).

In this paper we look at full three-dimensional localization of gravitational-wave sources; we import a (rather cunning) technique from computer vision to construct a probability distribution for the source’s location, and then explore how well we could localise a set of simulated binary neutron stars. Knowing the source location enables lots of cool science. First, it aids direct follow-up observations with non-gravitational-wave observatories, searching for electromagnetic or neutrino counterparts. It’s especially helpful if you can cross-reference with galaxy catalogues, to find the most probable source locations (this technique was used to find the kilonova associated with GW170817). Even without finding a counterpart, knowing the most probable host galaxy helps us figure out how the source formed (have lots of stars been born recently, or are all the stars old?), and allows us to measure the expansion of the Universe. Having a reliable technique to reconstruct source locations is useful!

This was a fun paper to write [bonus note]. I’m sure it will be valuable, both for showing how to perform this type of reconstruction of a multi-dimensional probability density, and for its implications for source localization and follow-up of gravitational-wave signals. I go into details of both below, first discussing our statistical model (this is a bit technical), then looking at our results for a set of binary neutron stars (which have implications for hunting for counterparts).

Dirichlet process Gaussian mixture model

When we analyse gravitational-wave data to infer the source properties (location, masses, etc.), we map out parameter space with a set of samples: a list of points in the parameter space, with there being more around more probable locations and fewer in less probable locations. These samples encode everything about the probability distribution for the different parameters; we just need to extract it…

For our application, we want a nice smooth probability density. How do we convert a bunch of discrete samples to a smooth distribution? The simplest thing is to bin the samples. However, picking the right bin size is difficult, and becomes much harder in higher dimensions. Another popular option is to use kernel density estimation. This is better at ensuring smooth results, but you now have to worry about the size of your kernels.

Our approach is in essence to use a kernel density estimate, but to learn the size and position of the kernels (as well as the number) from the data as an extra layer of inference. The “Gaussian mixture model” part of the name refers to the kernels—we use several different Gaussians. The “Dirichlet process” part refers to how we assign their properties (their means and standard deviations). What I really like about this technique, as opposed to the usual rule-of-thumb approaches used for kernel density estimation,  is that it is well justified from a theoretical point of view.

I hadn’t come across a Dirichlet process before. Section 2 of the paper is a walkthrough of how I built up an understanding of this mathematical object, and it contains lots of helpful references if you’d like to dig deeper.

In our application, you can think of the Dirichlet process as being a probability distribution for probability distributions. We want a probability distribution describing the source location. Given our samples, we infer what this looks like. We could put all the probability into one big Gaussian, or we could put it into lots of little Gaussians. The Gaussians could be wide or narrow or a mix. The Dirichlet distribution allows us to assign probabilities to each configuration of Gaussians; for example, if our samples are all in the northern hemisphere, we probably want Gaussians centred around there, rather than in the southern hemisphere.

With the resulting probability distribution for the source location, we can quickly evaluate it at a single point. This means we can rapidly produce a list of most probable source galaxies—extremely handy if you need to know where to point a telescope before a kilonova fades away (or someone else finds it).
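
If you want to play with the idea, here is a minimal sketch using scikit-learn’s variational approximation to a Dirichlet process Gaussian mixture. This is not the implementation from the paper (that is the 3d_volume code linked below), and the sample positions and galaxy coordinates are invented purely for illustration:

```python
# Minimal sketch: fit a Dirichlet process Gaussian mixture to posterior samples of a
# source's (x, y, z) position, then evaluate the density at candidate galaxy positions.
# This uses scikit-learn's variational approximation, not the paper's own code.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Stand-in posterior samples of the source position (Mpc); real ones would come from
# gravitational-wave parameter estimation.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[100.0, -50.0, 30.0], scale=[20.0, 15.0, 10.0], size=(5000, 3))

# Allow up to 20 Gaussians; the Dirichlet process prior decides how many are actually used.
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(samples)

# The fitted model is a smooth density we can evaluate anywhere, e.g. at galaxy positions
# (invented here), to rank candidate hosts for follow-up.
galaxies = np.array([[110.0, -45.0, 28.0], [300.0, 200.0, -100.0]])
print(np.exp(dpgmm.score_samples(galaxies)))  # probability density per Mpc^3 at each galaxy
```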

Gravitational-wave localization

To verify our technique works, and develop an intuition for three-dimensional localizations, we studied a set of simulated binary neutron star signals created for the First 2 Years trilogy of papers. This data set is now well studied; it illustrates the performance we anticipated for the first two observing runs of the advanced detectors, which turned out to be not too far from the truth. We have previously looked at three-dimensional localizations for these signals using a super rapid approximation.

The plots below show how well we could localise the sources of our binary neutron star sources. Specifically, the plots show the size of the volume which has a 90% probability of containing the source versus the signal-to-noise ratio (the loudness) of the signal. Typically, volumes are 10^4–10^5~\mathrm{Mpc}^3, which is about 10^{68}–10^{69} Olympic swimming pools. Such a volume would contain something like 100–1000 galaxies.

Volume versus signal-to-noise ratio

Localization volume as a function of signal-to-noise ratio. The top panel shows results for two-detector observations: the LIGO-Hanford and LIGO-Livingston (HL) network similar to in the first observing run, and the LIGO and Virgo (HLV) network similar to the second observing run. The bottom panel shows all observations for the HLV network including those with all three detectors which are colour coded by the fraction of the total signal-to-noise ratio from Virgo. In both panels, there are fiducial lines scaling inversely with the sixth power of the signal-to-noise ratio. Adapted from Fig. 4 of Del Pozzo et al. (2018).

Looking at the results in detail, we can learn a number of things:

  1. The localization volume is roughly inversely proportional to the sixth power of the signal-to-noise ratio [bonus note]. Loud signals are localized much better than quieter ones!
  2. The localization dramatically improves when we have three-detector observations. The extra detector improves the sky localization, which reduces the localization volume.
  3. To get the benefit of the extra detector, the source needs to be close enough that all the detectors could get a decent amount of the signal-to-noise ratio. In our case, Virgo is the least sensitive, and we see that the best localizations are when it has a fair share of the signal-to-noise ratio.
  4. Considering the cases where we only have two detectors, localization volumes get bigger at a given signal-to-noise ratio as the detectors get more sensitive. This is because we can detect sources at greater distances.

Putting all these bits together, I think in the future, when we have lots of detections, it would make most sense to prioritise following up the loudest signals. These are the best localised, and will also be the brightest since they are the closest, meaning there’s the greatest potential for actually finding a counterpart. As the sensitivity of the detectors improves, it’s only going to get more difficult to find a counterpart to a typical gravitational-wave signal, as sources will be further away and less well localized. However, having more sensitive detectors also means that we are more likely to have a really loud signal, which should be really well localized.

Banana vs cucumber

Left: Localization (yellow) with a network of two low-sensitivity detectors. The sky location is uncertain, but we know the source must be nearby. Right: Localization (green) with a network of three high-sensitivity detectors. We have good constraints on the source location, but it could now be at a much greater range of distances. Not to scale.

Using our localization volumes as a guide, you would only need to search one galaxy to find the true source in about 7% of cases with a three-detector network similar to that at the end of our second observing run. Similarly, only ten would need to be searched in 23% of cases. It might be possible to get even better performance by considering which galaxies are most probable because they are the biggest or the most likely to produce merging binary neutron stars. This is definitely a good approach to follow.

Three-dimensional localization with galaxy catalogue

Galaxies within the 90% credible volume of an example simulated source, colour coded by probability. The galaxies are from the GLADE Catalog; incompleteness in the plane of the Milky Way causes the missing wedge of galaxies. The true source location is marked by a cross [bonus note]. Part of Figure 5 of Del Pozzo et al. (2018).

arXiv: 1801.08009 [astro-ph.IM]
Journal: Monthly Notices of the Royal Astronomical Society; 479(1):601–614; 2018
Code: 3d_volume
Buzzword bingo: Interdisciplinary (we worked with computer scientist Tom Haines); machine learning (the inference involving our Dirichlet process Gaussian mixture model); multimessenger astronomy (as our results are useful for following up gravitational-wave signals in the search for counterparts)

Bonus notes

Writing

We started writing this paper back before the first observing run of Advanced LIGO. We had a pretty complete draft on Friday 11 September 2015. We just needed to gather together a few extra numbers and polish up the figures and we’d be done! At 10:50 am on Monday 14 September 2015, we made our first detection of gravitational waves. The paper was put on hold. The pace of discoveries over the coming years meant we never quite found enough time to get it together—I’ve rewritten the introduction a dozen times. This is a shame, as it meant that this study came out much later than our other three-dimensional localization study. Still, it’s extremely satisfying to have it done, and the delay has the advantage of justifying one of my favourite acknowledgement sections.

Sixth power

We find that the localization volume \Delta V is inversely proportional to the sixth power of the signal-to-noise ratio \varrho. This is what you would expect. The localization volume depends upon the angular uncertainty on the sky \Delta \Omega, the distance to the source D, and the distance uncertainty \Delta D,

\Delta V \sim D^2 \Delta \Omega \Delta D.

Typically, the uncertainty on a parameter (like the masses) scales inversely with the signal-to-noise ratio. This is the case for the logarithm of the distance, which means

\displaystyle \frac{\Delta D}{D} \propto \varrho^{-1}.

The uncertainty in the sky location (being two dimensional) scales inversely with the square of the signal-to-noise ratio,

\Delta \Omega \propto \varrho^{-2}.

The signal-to-noise ratio itself is inversely proportional to the distance to the source (sources further away are quieter). Therefore, putting everything together gives

\Delta V \propto \varrho^{-6}.
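
As a quick worked example of how steep this scaling is, doubling the signal-to-noise ratio shrinks the localization volume by a factor of

\displaystyle 2^6 = 64,

so a signal at twice the detection threshold should be localized to a volume roughly 64 times smaller than one just at threshold.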

Treasure

We all know that treasure is marked by a cross. In the case of a binary neutron star merger, dense material ejected from the neutron stars will decay to heavy elements like gold and platinum, so there is definitely a lot of treasure at the source location.

Prospects for observing and localizing gravitational-wave transients with Advanced LIGO, Advanced Virgo and KAGRA

This paper, known as the Observing Scenarios Document with the Collaboration, outlines the observing plans of the ground-based detectors over the coming decade. If you want to search for electromagnetic or neutrino signals from our gravitational-wave sources, this is the paper for you. It is a living review—a document that is continuously updated.

This is the second published version, the big changes since the last version are

  1. We have now detected gravitational waves
  2. We have observed our first gravitational wave with a multimessenger counterpart [bonus note]
  3. We now include KAGRA, along with LIGO and Virgo

As you might imagine, these are quite significant updates! The first showed that we can do gravitational-wave astronomy. The second showed that we can do exactly the science this paper is about. The third makes this the first joint publication of the LIGO Scientific, Virgo and KAGRA Collaborations—hopefully the first of many to come.

I led both this and the previous version. In my blog on the previous version, I explained how I got involved, and the long road that a collaboration must follow to get published. In this post, I’ll give an overview of the key details from the new version together with some behind-the-scenes background (working as part of a large scientific collaboration allows you to do amazing science, but it can also be exhausting). If you’d like a digest of this paper’s science, check out the LIGO science summary.

Commissioning and observing phases

The first section of the paper outlines the progression of detector sensitivities. The instruments are incredibly sensitive—we’ve never made machines to make these types of measurements before, so it takes a lot of work to get them to run smoothly. We can’t just switch them on and have them work at design sensitivity [bonus note].

Possible advanced detector sensitivity

Target evolution of the Advanced LIGO and Advanced Virgo detectors with time. The lower the sensitivity curve, the further away we can detect sources. The distances quoted are binary neutron star (BNS) ranges, the average distance we could detect a binary neutron star system. The BNS-optimized curve is a proposal to tweak the detectors for finding BNSs. Figure 1 of the Observing Scenarios Document.

The plots above show the planned progression of the different detectors. We had to get these agreed before we could write the later parts of the paper because the sensitivity of the detectors determines how many sources we will see and how well we will be able to localize them. I had anticipated that KAGRA would be the most challenging here, as we had not previously put together this sequence of curves. However, this was not the case; instead, it was Virgo which was tricky. They had a problem with the silica fibres which suspended their mirrors (they snapped, which is definitely not what you want). The silica fibres were replaced with steel ones, but it wasn’t immediately clear what sensitivity they’d achieve and when. The final word was that they’d observe in August 2017 and that their projections were unchanged. I was sceptical, but they did pull it out of the bag! We had our first clear three-detector observation of a gravitational wave on 14 August 2017. Bravo Virgo!

LIGO, Virgo and KAGRA observing runs

Plausible time line of observing runs with Advanced LIGO (Hanford and Livingston), advanced Virgo and KAGRA. It is too early to give a timeline for LIGO India. The numbers above the bars give binary neutron star ranges (italic for achieved, roman for target); the colours match those in the plot above. Currently our third observing run (O3) looks like it will start in early 2019; KAGRA might join with an early sensitivity run at the end of it. Figure 2 of the Observing Scenarios Document.

Searches for gravitational-wave transients

The second section explains our data analysis techniques: how we find signals in the data, how we work out probable source locations, and how we share these results with the broader astronomical community—from the start of our third observing run (O3), information will be shared publicly!

The information in this section hasn’t changed much [bonus note]. There is a nice collection of references on the follow-up of different events, including GW170817 (I’d recommend my blog for more on the electromagnetic story). The main update I wanted to include was information on the detection of our first gravitational waves. It turned out to be more difficult than I imagined to come up with a plot which showed results from the five different search algorithms (two which used templates, and three which did not) which found GW150914, and harder still to make a plot which everyone liked. This plot became somewhat infamous for the amount of discussion it generated. I think we ended up with something which was a good compromise and clearly shows our detections sticking out above the background of noise.

CBC and burst search results

Offline transient search results from our first observing run (O1). The plot shows the number of events found versus false alarm rate: if there were no gravitational waves we would expect the points to follow the dashed line. The left panel shows the results of the templated search for compact binary coalescences (binary black holes, binary neutron stars and neutron star–black hole binaries), the right panel shows the unmodelled burst search. GW150914, GW151226 and LVT151012 are found by the templated search; GW150914 is also seen in the burst search. Arrows indicate bounds on the significance. Figure 3 of the Observing Scenarios Document.

Observing scenarios

The third section brings everything together and looks at what the prospects are for (gravitational-wave) multimessenger astronomy during each observing run. It’s really all about the big table.

Ranges, binary neutron star detections, and localization precision

Summary of different observing scenarios with the advanced detectors. We assume a 70–75% duty factor for each instrument (including Virgo for the second scenario’s sky localization, even though it only joined our second observing run for the final month). Table 3 from the Observing Scenarios Document.

I think there are three really awesome take-aways from this:

  1. Actual binary neutron stars detected = 1. We did it!
  2. Using the rates inferred from our observations so far (including GW170817), once we have the full five-detector network of LIGO-Hanford, LIGO-Livingston, Virgo, KAGRA and LIGO-India, we could detect 11–180 binary neutron stars a year. That’s something like between one a month and one every other day! I’m kind of scared…
  3. With the five-detector network the sky localization is really good. The median localization is about 9–12 square degrees, about the area the LSST could cover in a single pointing! This really shows the benefit of adding more detectors to the network. The improvement comes not because a source is much better localized with five detectors than four, but because when you have five detectors you almost always have at least three detectors (the number needed to get a good triangulation) online at any moment, so you get a nice localization for pretty much everything.

In summary, the prospects for observing and localizing gravitational-wave transients are pretty great. If you are an astronomer, make the most of the quiet before O3 begins next year.

arXiv: 1304.0670 [gr-qc]
Journal: Living Reviews in Relativity; 21:3(57); 2018
Science summary: A Bright today and brighter tomorrow: Prospects for gravitational-wave astronomy With Advanced LIGO, Advanced Virgo, and KAGRA
Prospects for the next update: After two updates, I’ve stepped down from preparing the next one. Wooh!

Bonus notes

GW170817 announcement

The announcement of our first multimessenger detection came between us submitting this update and us getting referee reports. We wanted an updated version of this paper, with the current details of our observing plans, to be available for our astronomer partners to be able to cite when writing their papers on GW170817.

Predictably, when the referee reports came back, we were told we really should include reference to GW170817. This type of discovery is exactly what this paper is about! There was an avalanche of results surrounding GW170817, so I had to read through a lot of papers. The reference list swelled from 8 to 13 pages, but this effort was handy for my blog writing. After including all these new results, it really felt like this was version 2.5 of the Observing Scenarios, rather than version 2.

Design sensitivity

We use the term design sensitivity to indicate the performance the current detectors were designed to achieve. They are the targets we aim to achieve with Advanced LIGO, Advanced Virgo and KAGRA. One thing I’ve had to try to train myself not to say is that design sensitivity is the final sensitivity of our detectors. Teams are currently working on plans for how we can upgrade our detectors beyond design sensitivity. Reaching design sensitivity will not be the end of our journey.

Binary black holes vs binary neutron stars

Our first gravitational-wave detections were from binary black holes. Therefore, when we were starting on this update there was a push to switch from focusing on binary neutron stars to binary black holes. I resisted this, partly because I’m lazy, but mostly because I still thought that binary neutron stars were our best bet for multimessenger astronomy. This worked out nicely.

Accuracy of inference on the physics of binary evolution from gravitational-wave observations

Gravitational-wave astronomy lets us observe binary black holes. These systems, being made up of two black holes, are pretty difficult to study by any other means. It has long been argued that with this new information we can unravel the mysteries of stellar evolution. Just as a palaeontologist can discover how long-dead animals lived from their bones, we can discover how massive stars lived by studying their black hole remnants. In this paper, we quantify how much we can really learn from this black hole palaeontology—after 1000 detections, we should pin down some of the most uncertain parameters in binary evolution to a few percent precision.

Life as a binary

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics which we are currently most uncertain of:

  • Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
  • Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion that both stars live in a single fluffy envelope. Orbiting inside the envelope helps drag the two stars closer together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
  • Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low: there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

We use parameters describing potential variations in these properties as ingredients to the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question now is which parameters affect our gravitational-wave measurements, and how accurately can we measure those which do?

Merger rate with redshift and chirp mass

Binary black hole merger rate at three different redshifts z as calculated by COMPAS. We show the rate in 30 different chirp mass bins for our default population parameters. The caption gives the total rate for all masses. Figure 2 of Barrett et al. (2018)

Gravitational-wave observations

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distributions of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries which have a long inspiral, but is less well measured for higher mass systems. In reality we’ll have much more information, so these results should be the minimum we can actually do.
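
For reference, the chirp mass is the standard combination of the component masses m_1 and m_2,

\displaystyle \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}},

which is the quantity that sets, at leading order, how quickly the frequency of the inspiral evolves.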

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct we are more likely to get results similar to its predictions than ones further away, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember, it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

\displaystyle F_{ij} = -\left\langle\frac{\partial^2\ln \mathcal{L}(\mathcal{D}|\left\{\lambda\right\})}{\partial \lambda_i \partial\lambda_j}\right\rangle,

where \mathcal{L}(\mathcal{D}|\left\{\lambda\right\}) is the likelihood for data \mathcal{D} (the number of observations and their chirp mass distribution in our case), \left\{\lambda\right\} are our parameters (natal kick, etc.), and the angular brackets indicate the average over possible realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance in a normal distribution) for the parameters \left\{\lambda\right\}. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these measurements.

Results

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Correlations between population parameters

Fisher information matrix estimates for fractional measurement precision of the four population parameters: the black hole natal kick \sigma_\mathrm{kick}, the common envelope efficiency \alpha_\mathrm{CE}, the Wolf–Rayet mass loss rate f_\mathrm{WR}, and the luminous blue variable mass loss rate f_\mathrm{LBV}. There is an anticorrelation between f_\mathrm{WR} and \alpha_\mathrm{CE}, and hints of a similar anticorrelation between f_\mathrm{LBV} and \alpha_\mathrm{CE}. We show 1500 different realisations of the binary population to give an idea of scatter. Figure 6 of Barrett et al. (2018).

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

Standard deviation of measurements of population parameters

Measurement precision for the four population parameters after 1000 detections. We quantify the precision with the standard deviation estimated from the Fisher information matrix. We show results from 1500 realisations of the population to give an idea of scatter. Figure 5 of Barrett et al. (2018).

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.

arXiv: 1711.06287 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 477(4):4685–4695; 2018
Favourite dinosaur: Professor Science

Bonus notes

Channel selection

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

Kick distribution

We model the supernova kicks v_\mathrm{kick} as following a Maxwell–Boltzmann distribution,

\displaystyle p(v_\mathrm{kick}) = \sqrt{\frac{2}{\pi}}  \frac{v_\mathrm{kick}^2}{\sigma_\mathrm{kick}^3} \exp\left(\frac{-v_\mathrm{kick}^2}{2\sigma_\mathrm{kick}^2}\right),

where \sigma_\mathrm{kick} is the unknown population parameter. The natal kick received by the black hole v^*_\mathrm{kick} is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

v^*_\mathrm{kick} = (1-f_\mathrm{fb})v_\mathrm{kick},

where f_\mathrm{fb} is the fraction that falls back, taken from Fryer et al. (2012). The fraction is greater for larger black holes, so the biggest black holes get no kicks. This means that the largest black holes are unaffected by the value of \sigma_\mathrm{kick}.
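
As an illustration of this prescription (not COMPAS itself; both numbers below, including the fallback fraction, are made up for the example, whereas COMPAS takes f_\mathrm{fb} from the mass-dependent Fryer et al. (2012) fits), drawing kicks might look like:

```python
# Minimal sketch of drawing black hole natal kicks: sample a supernova kick from a
# Maxwell-Boltzmann distribution with scale sigma_kick, then reduce it by the fallback
# fraction. Both numbers below are illustrative only.

from scipy.stats import maxwell

sigma_kick = 250.0   # km/s; the population parameter (illustrative value)
f_fb = 0.6           # fallback fraction (illustrative; mass dependent in COMPAS)

v_kick = maxwell.rvs(scale=sigma_kick, size=10000)   # supernova kicks, km/s
v_kick_bh = (1.0 - f_fb) * v_kick                    # kicks received by the black holes

print(f"Mean supernova kick:  {v_kick.mean():.0f} km/s")
print(f"Mean black hole kick: {v_kick_bh.mean():.0f} km/s")
```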

The likelihood

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the merger rate \mu and the probability of falling in each chirp mass bin p_k (we factor measurement uncertainty into this). Our observations are the total number of detections N_\mathrm{obs} and the number in each chirp mass bin c_k (N_\mathrm{obs} = \sum_k c_k). The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

\mathcal{L} = \mathcal{L}_\mathrm{rate} \times \mathcal{L}_\mathrm{mass}.

For the rate likelihood, we need the probability of observing N_\mathrm{obs} given the predicted rate \mu. This is given by a Poisson distribution,

\displaystyle \mathcal{L}_\mathrm{rate} = \exp(-\mu t_\mathrm{obs}) \frac{(\mu t_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!},

where t_\mathrm{obs} is the total observing time. For the chirp mass likelihood, we need the probability of getting a number of detections in each bin, given the predicted fractions. This is given by a multinomial distribution,

\displaystyle \mathcal{L}_\mathrm{mass} = \frac{N_\mathrm{obs}!}{\prod_k c_k!} \prod_k p_k^{c_k}.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.

When we substitute in our likelihood into the expression for the Fisher information matrix, we get

\displaystyle F_{ij} = \mu t_\mathrm{obs} \left[ \frac{1}{\mu^2} \frac{\partial \mu}{\partial \lambda_i} \frac{\partial \mu}{\partial \lambda_j}  + \sum_k\frac{1}{p_k} \frac{\partial p_k}{\partial \lambda_i} \frac{\partial p_k}{\partial \lambda_j} \right].

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix is defined in terms of second derivatives. The expected number of events is \langle N_\mathrm{obs} \rangle = \mu t_\mathrm{obs}. Therefore, we can see that the measurement uncertainty defined by the inverse of the Fisher information matrix scales on average as N_\mathrm{obs}^{-1/2}.
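
To make this concrete, here is a minimal numerical sketch of the expression above. The toy functions standing in for the COMPAS predictions \mu(\{\lambda\}) and p_k(\{\lambda\}) are invented for illustration; only the finite-difference evaluation of the Fisher matrix and its inversion follow the recipe in the text:

```python
# Minimal numerical sketch of the Fisher matrix above: plug toy stand-ins for the
# predicted rate mu(lambda) and chirp-mass-bin probabilities p_k(lambda) into
#   F_ij = mu t_obs [ (1/mu^2) dmu_i dmu_j + sum_k (1/p_k) dp_k_i dp_k_j ],
# using central finite differences for the first-order derivatives.

import numpy as np

def mu(lam):
    """Toy predicted detection rate (per year) for two population parameters."""
    return 50.0 * np.exp(-0.5 * lam[0]) * (1.0 + 0.3 * lam[1])

def p(lam):
    """Toy chirp-mass-bin probabilities (three bins, summing to one)."""
    w = np.array([1.0 + lam[0], 1.0 + 0.5 * lam[1], 1.0])
    return w / w.sum()

def fisher(lam, t_obs, eps=1e-5):
    lam = np.asarray(lam, dtype=float)
    n_par, n_bin = len(lam), len(p(lam))
    dmu = np.zeros(n_par)
    dp = np.zeros((n_par, n_bin))
    for i in range(n_par):
        step = np.zeros(n_par)
        step[i] = eps
        dmu[i] = (mu(lam + step) - mu(lam - step)) / (2 * eps)
        dp[i] = (p(lam + step) - p(lam - step)) / (2 * eps)
    mass_term = sum(np.outer(dp[:, k], dp[:, k]) / p(lam)[k] for k in range(n_bin))
    return mu(lam) * t_obs * (np.outer(dmu, dmu) / mu(lam) ** 2 + mass_term)

F = fisher([0.5, 1.0], t_obs=2.0)
cov = np.linalg.inv(F)            # lower bound on the covariance matrix
print(np.sqrt(np.diag(cov)))      # estimated one-sigma measurement uncertainties
```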

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, meaning that the shape of the posterior is dictated by the shape of the likelihood.

Interpretation of the Fisher information matrix

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn’t be the maximum). The maximum likelihood values of N_\mathrm{obs} = \mu t_\mathrm{obs} and c_k = N_\mathrm{obs} p_k are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large N_\mathrm{obs}. As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of N_\mathrm{obs} \rightarrow \infty, you’ll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our N_\mathrm{obs} = 1000 was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.

 

Science with the space-based interferometer LISA. V. Extreme mass-ratio inspirals

The space-based observatory LISA will detect gravitational waves from massive black holes (giant black holes residing in the centres of galaxies). One particularly interesting signal will come from the inspiral of a regular stellar-mass black hole into a massive black hole. These are called extreme mass-ratio inspirals (or EMRIs, pronounced emries, to their friends) [bonus note]. We have never observed such a system. This means that there’s a lot we have to learn about them. In this work, we systematically investigated the prospects for observing EMRIs. We found that even though there’s a wide range in predictions for what EMRIs we will detect, they should be a safe bet for the LISA mission.

EMRI spacetime

Artistic impression of the spacetime for an extreme-mass-ratio inspiral, with a smaller stellar-mass black hole orbiting a massive black hole. This image is mandatory when talking about extreme-mass-ratio inspirals. Credit: NASA

LISA & EMRIs

My previous post discussed some of the interesting features of EMRIs. Because of the extreme difference in masses of the two black holes, it takes a long time for them to complete their inspiral. We can measure tens of thousands of orbits, which allows us to make wonderfully precise measurements of the source properties (if we can accurately pick out the signal from the data). Here, we’ll examine exactly what we could learn with LISA from EMRIs [bonus note].

First we build a model to investigate how many EMRIs there could be.  There is a lot of astrophysics which we are currently uncertain about, which leads to a large spread in estimates for the number of EMRIs. Second, we look at how precisely we could measure properties from the EMRI signals. The astrophysical uncertainties are less important here—we could get a revolutionary insight into the lives of massive black holes.

The number of EMRIs

To build a model of how many EMRIs there are, we need a few different inputs:

  1. The population of massive black holes
  2. The distribution of stellar clusters around massive black holes
  3. The range of orbits of EMRIs

We examine each of these in turn, building a more detailed model than has previously been constructed for EMRIs.

We currently know little about the population of massive black holes. This means we’ll discover lots when we start measuring signals (yay), but it’s rather inconvenient now, when we’re trying to predict how many EMRIs there are (boo). We take two different models for the mass distribution of massive black holes. One is based upon a semi-analytic model of massive black hole formation; the other is at the pessimistic end allowed by current observations. The semi-analytic model predicts massive black hole spins around 0.98, but we also consider spins being uniformly distributed between 0 and 1, and spins of 0. This gives us a picture of the bigger black hole; now we need the smaller.

Observations show that the masses of massive black holes are correlated with their surrounding cluster of stars—bigger black holes have bigger clusters. We consider four different versions of this trend: Gültekin et al. (2009); Kormendy & Ho (2013); Graham & Scott (2013), and Shankar et al. (2016). The stars and black holes about a massive black hole should form a cusp, with the density of objects increasing towards the massive black hole. This is great for EMRI formation. However, the cusp is disrupted if two galaxies (and their massive black holes) merge. This tends to happen—it’s how we get bigger galaxies (and black holes). It then takes some time for the cusp to reform, during which time we don’t expect as many EMRIs. Therefore, we factor in the amount of time for which there is a cusp for massive black holes of different masses and spins.

Colliding galaxies

That’s a nice galaxy you have there. It would be a shame if it were to collide with something… Hubble image of The Mice. Credit: ACS Science & Engineering Team.

Given a cusp about a massive black hole, we then need to know how often an EMRI forms. Simulations give us a starting point. However, these only consider a snapshot, and we need to consider how things evolve with time. As stellar-mass black holes inspiral, the massive black hole will grow in mass and the surrounding cluster will become depleted. Both these effects are amplified because, for each inspiral, there’ll be many more stars or stellar-mass black holes which will just plunge directly into the massive black hole. We therefore need to limit the number of EMRIs so that we don’t have an unrealistically high rate. We do this by adding in a couple of feedback factors, one to cap the rate so that we don’t deplete the cusp quicker than new objects will be added to it, and one to limit the maximum amount of mass the massive black hole can grow from inspirals and plunges. This gives us an idea for the total number of inspirals.

Finally, we calculate the orbits that EMRIs will be on. We again base this upon simulations, and factor in how the spin of the massive black hole affects the distribution of orbital inclinations.

Putting all the pieces together, we can calculate the population of EMRIs. We now need to work out how many LISA would be able to detect. This means we need models for the gravitational-wave signal. Since we are simulating a large number, we use a computationally inexpensive analytic model. We know that this isn’t too accurate, but we consider two different options for setting the end of the inspiral (where the smaller black hole finally plunges) which should bound the true range of results.

Number of detected EMRIs

Number of EMRIs for different size massive black holes in different astrophysical models. M1 is our best estimate, the others explore variations on this. M11 and M12 are designed to cover the extremes, being the most pessimistic and optimistic combinations. The solid and dashed lines are for two different signal models (AKK and AKS), which are designed to give an indication of potential variation. They agree where the massive black hole is not spinning (M10 and M11). The range of masses is similar for all models, as it is set by the sensitivity of LISA. We can detect higher mass systems assuming the AKK signal model as it includes extra inspiral close to highly spinning black holes: for the heaviest black holes, this is the only part of the signal at high enough frequency to be detectable. Figure 8 of Babak et al. (2017).

Allowing for all the different uncertainties, we find that there should be somewhere between 1 and 4200 EMRIs detected per year. (The model we used when studying transient resonances predicted about 250 per year, albeit with a slightly different detector configuration, which is fairly typical of all the models we consider here). This range is encouraging. The lower end means that EMRIs are a pretty safe bet, we’d be unlucky not to get at least one over the course of a multi-year mission (LISA should have at least four years observing). The upper end means there could be lots—we might actually need to worry about them forming a background source of noise if we can’t individually distinguish them!

EMRI measurements

Having shown that EMRIs are a good LISA source, we now need to consider what we could learn by measuring them.

We estimate the precision with which we will be able to measure parameters using the Fisher information matrix. The Fisher matrix measures how sensitive our observations are to changes in the parameters (the more sensitive we are, the better we should be able to measure that parameter). It should give a lower bound on the actual measurement uncertainty, and well approximate the uncertainty in the high signal-to-noise (loud signal) limit. The combination of our use of the Fisher matrix and our approximate signal models means our results will not be perfect estimates of real performance, but they should give an indication of the typical size of measurement uncertainties.

Given that we measure a huge number of cycles from the EMRI signal, we can make really precise measurements of the mass and spin of the massive black hole, as these parameters control the orbital frequencies. Below are plots for the typical measurement precision from our Fisher matrix analysis. The orbital eccentricity is measured to similar accuracy, as it influences the range of orbital frequencies too. We also get pretty good measurements of the mass of the smaller black hole, as this sets how quickly the inspiral proceeds (how quickly the orbital frequencies change). EMRIs will allow us to do precision astronomy!

EMRI redshifted mass measurements

Distribution of (one standard deviation) fractional uncertainties for measurements of the  massive black hole (redshifted) mass M_z. Results are shown for the different astrophysical models, and for the different signal models.  The astrophysical model has little impact on the uncertainties. M4 shows a slight difference as it assumes heavier stellar-mass black holes. The results with the two signal models agree when the massive black hole is not spinning (M10 and M11). Otherwise, measurements are more precise with the AKK signal model, as this includes extra signal from the end of the inspiral. Part of Figure 11 of Babak et al. (2017).

EMRI spin measurements

Distribution of (one standard deviation) uncertainties for measurements of the massive black hole spin a. The results mirror those for the masses above. Part of Figure 11 of Babak et al. (2017).

Now, before you get too excited that we’re going to learn everything about massive black holes, there is one confession I should make. In the plot above I show the measurement accuracy for the redshifted mass of the massive black hole. The cosmological expansion of the Universe causes gravitational waves to become stretched to lower frequencies in the same way light is (this makes visible light more red, hence the name). The measured frequency is f_z = f/(1 + z), where f is the frequency emitted, and z is the redshift (z = 0 for a nearby source, and is larger for further away sources). Lower frequency gravitational waves correspond to higher mass systems, so it is often convenient to work with the redshifted mass, the mass corresponding to the signal you measure if you ignore redshifting. The redshifted mass of the massive black hole is M_z = (1+z)M where M is the true mass. To work out the true mass, we need the redshift, which means we need to measure the distance to the source.
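
As a rough sketch of that last step (assuming a standard Planck cosmology via astropy, which is not necessarily the cosmology used in the paper, and with made-up numbers), converting a measured luminosity distance and redshifted mass into a source-frame mass might look like:

```python
# Minimal sketch: invert the luminosity distance for the redshift, then remove the
# (1 + z) factor from the measured (redshifted) mass. Numbers are illustrative only.

import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

M_z = 1.0e6          # measured redshifted mass, solar masses (illustrative)
d_L = 3.0 * u.Gpc    # measured luminosity distance (illustrative)

z = float(z_at_value(Planck15.luminosity_distance, d_L))  # redshift from the distance
M_source = M_z / (1.0 + z)                                 # true (source-frame) mass

print(f"z = {z:.2f}, source-frame mass = {M_source:.3g} Msun")
```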

EMRI luminosity distance measurement

Distribution of (one standard deviation) fractional uncertainties for measurements of the luminosity distance D_\mathrm{L}. The signal model is not as important here, as the uncertainty only depends on how loud the signal is. Part of Figure 12 of Babak et al. (2017).

The plot above shows the fractional uncertainty on the distance. We don’t measure this too well, as it is determined from the amplitude of the signal, rather than its frequency components. The situation is much as for LIGO. The larger uncertainties on the distance will dominate the overall uncertainty on the black hole masses. We won’t be getting all these to fractions of a percent. However, that doesn’t mean we can’t still figure out what the distribution of masses looks like!

One of the really exciting things we can do with EMRIs is check that the signal matches our expectations for a black hole in general relativity. Since we get such an excellent map of the spacetime of the massive black hole, it is easy to check for deviations. In general relativity, everything about the black hole is fixed by its mass and spin (often referred to as the no-hair theorem). Using the measured EMRI signal, we can check if this is the case. One convenient way of doing this is to describe the spacetime of the massive object in terms of a multipole expansion. The first (most important) term gives the mass, and the next term the spin. The third term (the quadrupole) is set by the first two, so if we can measure it, we can check if it is consistent with the expected relation. We estimated how precisely we could measure a deviation in the quadrupole. Fortunately, for this consistency test, all factors from redshifting cancel out, so we can get really detailed results, as shown below. Using EMRIs, we’ll be able to check for really small differences from general relativity!
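
For reference, the relation being tested is the Kerr one: in units where G = c = 1, and taking a to be the dimensionless spin (as in the plots above), the quadrupole moment of a Kerr black hole is fixed to be

\displaystyle M_2 = -a^2 M^3,

so the measured parameter \mathcal{Q} quantifies how far the quadrupole moment departs from this value.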

EMRI measurement of bumpy black hole spacetime

Distribution of (one standard deviation) uncertainties for deviations in the quadrupole moment of the massive object spacetime \mathcal{Q}. Results are similar to the mass and spin measurements. Figure 13 of Babak et al. (2017).

In summary: EMRIs are awesome. We’re not sure how many we’ll detect with LISA, but we’re confident there will be some, perhaps a couple of hundred per year. From the signals we’ll get new insights into the masses and spins of black holes. This should tell us something about how they, and their surrounding galaxies, evolved. We’ll also be able to do some stringent tests of whether the massive objects are black holes as described by general relativity. It’s all pretty exciting for when LISA launches, which is currently planned for around 2034…

Sometimes, it leads to very little, and it seems like it's not worth it, and you wonder why you waited so long for something so disappointing

One of the most valuable traits a student or soldier can have: patience. Credit: Sony/Marvel

arXiv: 1703.09722 [gr-qc]
Journal: Physical Review D; 95(10):103012; 2017
Conference proceedings: 1704.00009 [astro-ph.GA] (from when work was still in-progress)
Estimated number of Marvel films before LISA launch: 48 (starting with Ant-Man and the Wasp)

Bonus notes

Hyphenation

Is it “extreme-mass-ratio inspiral”, “extreme mass-ratio inspiral” or “extreme mass ratio inspiral”? All are used in the literature. This is one of the advantages of using “EMRI”. The important thing is that we’re talking about inspirals that have a mass ratio which is extreme. For this paper, we used “extreme mass-ratio inspiral”, but when I first started my PhD, I was introduced to “extreme-mass-ratio inspirals”, so they have always been stuck that way in my mind.

I think hyphenation is a bit of an art, and there’s no definitive answer here, just like there isn’t for superhero names, where you can have Iron Man, Spider-Man or Iceman.

Science with LISA

This paper is part of a series looking at what LISA could tell us about different gravitational wave sources. So far, this series covers:

  1. Massive black hole binaries
  2. Cosmological phase transitions
  3. Standard sirens (for measuring the expansion of the Universe)
  4. Inflation
  5. Extreme-mass-ratio inspirals

You’ll notice there’s a change in the name of the mission from eLISA to LISA part-way through, as things have evolved. (Or devolved?) I think the main take-away so far is that the cosmology group is the most enthusiastic.

Importance of transient resonances in extreme-mass-ratio inspirals

Extreme-mass-ratio inspirals (EMRIs for short) are a promising source for the planned space-borne gravitational-wave observatory LISA. To detect and analyse them we need accurate models for the signals, which are exquisitely intricate. In this paper, we investigated a feature, transient resonances, which have not previously been included in our models. They are difficult to incorporate, but can have a big impact on the signal. Fortunately, we find that we can still detect the majority of EMRIs, even without including resonances. Phew!

EMRIs and orbits

EMRIs are a beautiful gravitational wave source. They occur when a stellar-mass black hole slowly inspirals into a massive black hole (as found in the centre of galaxies). The massive black hole can be tens of thousands or millions of times more massive than the stellar-mass black hole (hence extreme mass ratio). This means that the inspiral is slow—we can potentially measure tens of thousands of orbits. This is both the blessing and the curse of EMRIs. The huge numbers of cycles means that we can closely follow the inspiral, and build a detailed map of the massive black hole’s spacetime. EMRIs will give us precision measurements of the properties of massive black holes. However, to do this, we need to be able to find the EMRI signals in the data, we need models which can match the signals over all these cycles. Analysing EMRIs is a huge challenge.

 

EMRI orbits are complicated. At any moment, the orbit can be described by three orbital frequencies: one for radial (in/out) motion \Omega_r, one for polar (north/south if we think of the spin of the massive black hole like the rotation of the Earth) motion \Omega_\theta and one for axial (around in the east/west direction) motion \Omega_\phi. As gravitational waves are emitted, and the orbit shrinks, these frequencies evolve. The animation above, made by Steve Drasco, illustrates the evolution of an EMRI. Every so often, we can see the pattern freeze—the orbit stays in a constant shape (although this still rotates). This is a transient resonance. Two of the orbital frequencies become commensurate (so we might have 3 north/south cycles and 2 in/out cycles over the same period [bonus note])—this is the resonance. However, because the frequencies are still evolving, we don’t stay locked like this forever—which is why the resonance is transient. To calculate an EMRI, you need to know how the orbital frequencies evolve.

The evolution of an EMRI is slow—the time taken to inspiral is much longer than the time taken to complete one orbit. Therefore, we can usually split the problem of calculating the trajectory of an EMRI into two parts. On short timescales, we can consider orbits as having fixed frequencies. On long timescales, we can calculate the evolution by averaging over many orbits. You might see the problem with this—around resonances, this averaging breaks down. Whereas normally averaging over many orbits means averaging over a complicated trajectory that hits pretty much all possible points in the orbital range, on resonance, you just average over the same bit again and again. On resonance, terms which usually average to zero can become important. Éanna Flanagan and Tanja Hinderer first pointed out that around resonances the usual scheme (referred to as the adiabatic approximation) doesn’t work.

A non-resonant orbit

A non-resonant EMRI orbit in three dimensions (left) and two dimensions (right), ignoring the rotation in the axial direction. A non-resonant orbit will eventually fill the r\theta plane. Credit: Rob Cole

A 2:3 resonance

For comparison, a resonant EMRI orbit. A 2:3 resonance traces the same parts of the r\theta plane over and over. Credit: Rob Cole

Around a resonance, the evolution will be enhanced or decreased a little relative to the standard adiabatic evolution. We get a kick. This is only small, but because we observe EMRIs for so many orbits, a small difference can grow to become a significant difference later on. Does this mean that we won’t be able to detect EMRIs with our standard models? This was a concern, so back at the end of my PhD I began to investigate [bonus note]. The first step is to understand the size of the kick.

Jump for 2:3 resonance

A jump in the orbital energy across a 2:3 resonance. The plot shows the difference between the approximate adiabatic evolution and the instantaneous evolution including the resonance. The thickness of the blue line is from oscillations on the orbital timescale, which are too short to resolve here. The dotted red line shows the fitted size of the jump. Time is measured in terms of the resonance time \tau_\mathrm{res}, which is defined below. Figure 4 of Berry et al. (2016).

Resonance kicks

If there were no gravitational waves, the orbit would not evolve; it would be fixed. The orbit could then be described by a set of constants of motion. The most commonly used when describing orbits about black holes are the energy, angular momentum and Carter constant. For the purposes of this blog, we’ll not worry too much about what these constants are; we’ll just consider some constant I.

The resonance kick is a change in this constant \Delta I. What should this depend on? There are three ingredients. The first is the rate of change of this constant F on the resonant orbit. The second is the time spent on resonance \tau_\mathrm{res}. (The third, a phase, we’ll come to in a moment.) The bigger the first two are, the bigger the size of the jump. Therefore,

|\Delta I| \propto F \tau_\mathrm{res}.

However, the jump could be positive or negative. This depends upon the relative phase of the radial and polar motion [bonus note]—for example, do they both reach their maximum point at the same time, or does one lag behind the other? We’ll call this relative phase q. By varying q, we can get our resonant trajectory to go through any possible point in space. Therefore, averaging over q should get us back to the adiabatic approximation: the average value of \Delta I must be zero. To complete our picture for the jump, we need a periodic function of the phase,

\Delta I = F \tau_\mathrm{res} f(q),

with \langle f(q) \rangle_q = 0. Now that we know the pieces, we can try to figure out their forms.

The rate of change F is proportional to the mass ratio \eta \ll 1: the smaller the stellar-mass black hole is relative to the massive one, the smaller F is. The exact details depend upon gravitational self-force calculations, which we’ll skip over, as they’re pretty hard, but they are the same for all orbits (resonant or not).

We can think of the resonance timescale either as the time for the orbital frequencies to drift apart or the time for the orbit to start filling the space again (so that it’s safe to average). The two pictures yield the same answer—there’s a fuller explanation in Section III A of the paper. To define the resonance timescale, it is useful to define the frequency \Omega = n_r \Omega_r - n_\theta \Omega_\theta, which is zero exactly on resonance. If this is evolving at rate \dot{\Omega}, then the resonance timescale is

\displaystyle \tau_\mathrm{res} = \left[\frac{2\pi}{\dot{\Omega}}\right]^{1/2}.

This bridges the two timescales that usually define EMRIs: the short orbital timescale T and the long evolution timescale \tau_\mathrm{ev}:

T \sim \eta^{1/2} \tau_\mathrm{res} \sim \eta \tau_\mathrm{ev}.
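
To get a feel for these scales, here is a small sketch (my own, with illustrative numbers rather than the paper’s) that evaluates \tau_\mathrm{res} from an assumed drift rate \dot{\Omega} \sim \eta \Omega_r^2 and checks that the timescale hierarchy above holds to within factors of order 2\pi.

```python
import math

# Illustrative numbers (not from the paper); times in units of the massive
# black hole's mass M, with G = c = 1.
eta = 1e-5                 # extreme mass ratio
Omega_r = 0.02             # radial orbital frequency near resonance
T = 2 * math.pi / Omega_r  # short (orbital) timescale

# Radiation reaction drives the drift of Omega = n_r*Omega_r - n_theta*Omega_theta;
# a rough scaling for the drift rate is eta * Omega_r**2.
Omega_dot = eta * Omega_r**2

tau_res = math.sqrt(2 * math.pi / Omega_dot)  # resonance timescale
tau_ev = Omega_r / Omega_dot                  # long (inspiral) timescale

print(f"T = {T:.0f} M, tau_res = {tau_res:.0f} M, tau_ev = {tau_ev:.0f} M")
print(f"T/tau_res = {T / tau_res:.1e} vs eta^(1/2) = {eta**0.5:.1e}")
print(f"T/tau_ev  = {T / tau_ev:.1e} vs eta       = {eta:.1e}")
```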

To find the form of f(q), we need to do some quite involved maths (given in Appendix B of the paper) [bonus note]. This works by treating the evolution far from resonance as depending upon two independent times (effectively defining T and \tau_\mathrm{ev}), and then matching the evolution close to resonance using an expansion in terms of a different time (something like \tau_\mathrm{res}). The solution shows that the jump depends sensitively upon the phase q at resonance, which makes jumps extremely difficult to calculate.

We numerically evaluated the size of kicks for different orbits and resonances. We found a number of trends. First, higher-order resonances (those with larger n_r and n_\theta) have smaller jumps than lower-order ones. This makes sense, as higher-order resonances come closer to covering all the points in the space, and so are more like averaging over the entire space. Second, jumps are larger for higher eccentricity orbits. This also makes sense, as you can’t have resonances for circular (zero eccentricity) orbits as there’s no radial motion, so the size of the jumps must tend to zero. We’ll see that these two points are important when it comes to observational consequences of transient resonances.

Astrophysical EMRIs

Now we’ve figured out the impact of passing through a transient resonance, let’s look at what this means for detecting EMRIs. The jump means that the evolution post-resonance can soon become out of phase with that pre-resonance. We can’t match both parts with the same adiabatic template. This could significantly hamper our prospects for detection, as we’re limited to the bits of signal we can pick up between resonances.

We created an astrophysical population of simulated EMRIs. We used numerical simulations to estimate a plausible population of massive black holes and distribution of stellar-mass black holes inspiralling into them. We then used adiabatic models to see how many LISA (or eLISA as it was called at the time) could potentially detect. We found there were ~510 EMRIs detectable (with a signal-to-noise ratio of 15 or above) for a two-year mission.

We then calculated how much the signal-to-noise ratio would be reduced by passing through transient resonances. The plot below shows the distribution of signal-to-noise ratio for the original population, ignoring resonances, and then after factoring in the reduction. There are now ~490 detectable EMRIs, a loss of 4%. We can still detect the majority of EMRIs!

Signal-to-noise ratio distribution

Distribution of signal-to-noise ratios for EMRIs. In blue (solid outline), we have the results ignoring transient resonances. In orange (dashed outline), we have the distribution including the reduction due to resonance jumps. Events falling below 15 are deemed to be undetectable. Figure 10 of Berry et al. (2016).
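
The bookkeeping behind this comparison is simple: apply a fractional signal-to-noise ratio loss to each simulated EMRI and re-count how many stay above the threshold. Here is a toy version (the population and the losses are invented for illustration; they are not the values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: log-normally distributed signal-to-noise ratios standing in
# for the simulated EMRIs (illustrative only).
snr = rng.lognormal(mean=np.log(20), sigma=0.6, size=510)

# Toy fractional losses from using resonance-free (adiabatic) templates.
loss = rng.uniform(0.0, 0.1, size=snr.size)
snr_with_resonances = snr * (1 - loss)

threshold = 15  # detection threshold used in the paper
print("Detectable ignoring resonances: ", np.sum(snr > threshold))
print("Detectable including resonances:", np.sum(snr_with_resonances > threshold))
```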

We were worried about the impact of transient resonances: we know that jumps can cause signals to become undetectable, so why aren’t we seeing a big effect in our population? The answer lies in the trends we saw earlier. Jumps are large for low-order resonances with high eccentricities. These were the ones first highlighted, as they are obviously the most important. However, low-order resonances are only encountered really close to the massive black hole. This means late in the inspiral, after we have already accumulated lots of signal-to-noise ratio. Losing a little bit of signal right at the end doesn’t hurt detectability too much. On top of this, gravitational wave emission efficiently damps down eccentricity. Orbits typically have low eccentricities by the time they hit low-order resonances, meaning that the jumps are actually quite small. Although small jumps lead to some mismatch, we can still use our signal templates without jumps. Therefore, resonances don’t hamper us (too much) in finding EMRIs!

This may seem like a happy ending, but it is not the end of the story. While we can detect EMRIs, we still need to be able to accurately infer their source properties. Features not included in our signal templates (like jumps), could bias our results. For example, it might be that we can better match a jump by using a template for a different black hole mass or spin. However, if we include jumps, these extra features could give us extra precision in our measurements. The question of what jumps could mean for parameter estimation remains to be answered.

arXiv: 1608.08951 [gr-qc]
Journal: Physical Review D; 94(12):124042(24); 2016
Conference proceedings: 1702.05481 [gr-qc] (only 2 pages—ideal for emergency journal club presentations)
Favourite jumpers: Woolly, Mario, Kangaroos

Bonus notes

Radial and polar only

When discussing resonances, and their impact on orbital evolution, we’ll only care about \Omega_r–\Omega_\theta resonances. Resonances with \Omega_\phi are not important because the spacetime is axisymmetric. The equations are exactly identical for all values of the axial angle \phi, so it doesn’t matter where you are (or if you keep cycling over the same spot) for the evolution of the EMRI.

This, however, doesn’t mean that \Omega_\phi resonances aren’t interesting. They can lead to small kicks to the binary, because you are preferentially emitting gravitational waves in one direction. For EMRIs these are negligibly small, but for more equal mass systems, they could have some interesting consequences, as pointed out by Maarten van de Meent.

Extra time

I’m grateful to the Cambridge Philosophical Society for giving me some extra funding to work on resonances. If you’re a Cambridge PhD student, make sure to become a member so you can take advantage of the opportunities they offer.

Calculating jumps

The theory of how to evolve through a transient resonance was developed by Kevorkian and coauthors. I spent a long time studying these calculations before working up the courage to attempt them myself. There are a few technical details which need to be adapted for the case of EMRIs. I finally figured everything out while in Warsaw Airport, coming back from a conference. It was the most I had ever felt like a real physicist.

No you won't

Transient resonances remind me of Spirographs. Thanks Frinkiac

GW170817—The papers

After three months (and one binary black hole detection announcement), I finally have time to write about the suite of LIGO–Virgo papers put together to accompany GW170817.

The papers

There are currently 9 papers in the GW170817 family. Further papers, for example looking at parameter estimation in detail, are in progress. Papers are listed below in order of arXiv posting. My favourite is the GW170817 Discovery Paper. Many of the highlights, especially from the Discovery and Multimessenger Astronomy Papers, are described in my GW170817 announcement post.

Keeping up with all the accompanying observational results is a task not even Sisyphus would envy. I’m sure that the details of these will be debated for a long time to come. I’ve included references to a few below (mostly as [citation notes]), but these are not guaranteed to be complete (I’ll continue to expand these in the future).

0. The GW170817 Discovery Paper

Title: GW170817: Observation of gravitational waves from a binary neutron star inspiral
arXiv: 1710.05832 [gr-qc]
Journal: Physical Review Letters; 119(16):161101(18); 2017
LIGO science summary: GW170817: Observation of gravitational waves from a binary neutron star inspiral

This is the paper announcing the gravitational-wave detection. It gives an overview of the properties of the signal, initial estimates of the parameters of the source (see the GW170817 Properties Paper for updates) and the binary neutron star merger rate, as well as an overview of results from the other companion papers.

I was disappointed that “the era of gravitational-wave multi-messenger astronomy has opened with a bang” didn’t make the conclusion of the final draft.

More details: The GW170817 Discovery Paper summary

−1. The Multimessenger Astronomy Paper

Title: Multi-messenger observations of a binary neutron star merger
arXiv: 1710.05833 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 848(2):L12(59); 2017
LIGO science summary: The dawn of multi-messenger astrophysics: observations of a binary neutron star merger

I’ve numbered this paper as −1 as it gives an overview of all the observations—gravitational wave, electromagnetic and neutrino—accompanying GW170817. I feel a little sorry for the neutrino observers, as they’re the only ones not to make a detection. Drawing together the gravitational wave and electromagnetic observations, we can confirm that binary neutron star mergers are the progenitors of (at least some) short gamma-ray bursts and kilonovae.

Do not print this paper, the author list stretches across 23 pages.

More details: The Multimessenger Astronomy Paper summary

1. The GW170817 Gamma-ray Burst Paper

Title: Gravitational waves and gamma-rays from a binary neutron star merger: GW170817 and GRB 170817A
arXiv: 1710.05834 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 848(2):L13(27); 2017
LIGO science summary: Gravitational waves and gamma-rays from a binary neutron star merger: GW170817 and GRB 170817A

Here we bring together the LIGO–Virgo observations of GW170817 and the Fermi and INTEGRAL observations of GRB 170817A. From the spatial and temporal coincidence of the gravitational waves and gamma rays, we establish that the two are associated with each other. There is a 1.7 s time delay between the merger time estimated from gravitational waves and the arrival of the gamma-rays. From this, we make some inferences about the structure of the jet which is the source of the gamma rays. We can also use this to constrain deviations from general relativity, which is cool. Finally, we estimate that there will be 0.3–1.7 joint gamma-ray–gravitational-wave detections per year once our gravitational-wave detectors reach design sensitivity!

More details: The GW170817 Gamma-ray Burst Paper summary

2. The GW170817 Hubble Constant Paper

Title: A gravitational-wave standard siren measurement of the Hubble constant [bonus note]
arXiv: 1710.05835 [astro-ph.CO]
Journal: Nature; 551(7678):85–88; 2017 [bonus note]
LIGO science summary: Measuring the expansion of the Universe with gravitational waves

The Hubble constant quantifies the current rate of expansion of the Universe. If you know how far away an object is, and how fast it is moving away (due to the expansion of the Universe, not because it’s on a bus or something, that is important), you can estimate the Hubble constant. Gravitational waves give us an estimate of the distance to the source of GW170817. The observations of the optical transient AT 2017gfo allow us to identify the galaxy NGC 4993 as the host of GW170817’s source. We know the redshift of the galaxy (which indicates how fast it’s moving). Therefore, putting the two together we can infer the Hubble constant in a completely new way.

More details: The GW170817 Hubble Constant Paper summary

3. The GW170817 Kilonova Paper

Title: Estimating the contribution of dynamical ejecta in the kilonova associated with GW170817
arXiv: 1710.05836 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 850(2):L39(13); 2017
LIGO science summary: Predicting the aftermath of the neutron star collision that produced GW170817

During the coalescence of two neutron stars, lots of neutron-rich matter gets ejected. This undergoes rapid radioactive decay, which powers a kilonova, an optical transient. The observed signal depends upon the material ejected. Here, we try to use our gravitational-wave measurements to predict the properties of the ejecta ahead of the flurry of observational papers.

More details: The GW170817 Kilonova Paper summary

4. The GW170817 Stochastic Paper

Title: GW170817: Implications for the stochastic gravitational-wave background from compact binary coalescences
arXiv: 1710.05837 [gr-qc]
Journal: Physical Review Letters; 120(9):091101(12); 2018
LIGO science summary: The background symphony of gravitational waves from neutron star and black hole mergers

We can detect signals if they are loud enough, but there will be many quieter ones that we cannot pick out from the noise. These add together to form an overlapping background of signals, a background rumbling in our detectors. We use the inferred rate of binary neutron star mergers to estimate their background. This is smaller than the background from binary black hole mergers (black holes are more massive, so they’re intrinsically louder), but they all add up. It’ll still be a few years before we could detect a background signal.

More details: The GW170817 Stochastic Paper summary

5. The GW170817 Progenitor Paper

Title: On the progenitor of binary neutron star merger GW170817
arXiv: 1710.05838 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 850(2):L40(18); 2017
LIGO science summary: Making GW170817: neutron stars, supernovae and trick shots (I’d especially recommend reading this one)

We know that GW170817 came from the coalescence of two neutron stars, but where did these neutron stars come from? Here, we combine the parameters inferred from our gravitational-wave measurements, the observed position of AT 2017gfo in NGC 4993 and models for the host galaxy, to estimate properties like the kick imparted to neutron stars during the supernova explosion and how long it took the binary to merge.

More details: The GW170817 Progenitor Paper summary

6. The GW170817 Neutrino Paper

Title: Search for high-energy neutrinos from binary neutron star merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory
arXiv: 1710.05839 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 850(2):L35(18); 2017

This is the search for neutrinos from the source of GW170817. Lots of neutrinos are emitted during the collision, but not enough to be detectable on Earth. Indeed, we don’t find any neutrinos, but we combine results from three experiments to set upper limits.

More details: The GW170817 Neutrino Paper summary

7. The GW170817 Post-merger Paper

Title: Search for post-merger gravitational waves from the remnant of the binary neutron star merger GW170817
arXiv: 1710.09320 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 851(1):L16(13); 2017
LIGO science summary: Searching for the neutron star or black hole resulting from GW170817

After the two neutron stars merged, what was left? A larger neutron star or a black hole? Potentially we could detect gravitational waves from a wibbling neutron star, as it sloshes around following the collision. We don’t. It would have to be a lot closer for this to be plausible. However, this paper outlines how to search for such signals; the GW170817 Properties Paper contains a more detailed look at any potential post-merger signal.

More details: The GW170817 Post-merger Paper summary

8. The GW170817 Properties Paper

Title: Properties of the binary neutron star merger GW170817
arXiv: 1805.11579 [gr-qc]

In the GW170817 Discovery Paper we presented initial estimates for the properties of GW170817’s source. These were the best we could do on the tight deadline for the announcement (it was a pretty good job in my opinion). Now we have had a bit more time, we can present a new, improved analysis. This uses recalibrated data and a wider selection of waveform models. We also fold in our knowledge of the source location, thanks to the observation of AT 2017gfo by our astronomer partners, for our best results. If you want to know the details of GW170817’s source, this is the paper for you!

If you’re looking for the most up-to-date results regarding GW170817, check out the O2 Catalogue Paper.

More details: The GW170817 Properties Paper summary

9. The GW170817 Equation-of-state Paper

Title: GW170817: Measurements of neutron star radii and equation of state
arXiv: 1805.11581 [gr-qc]

Neutron stars are made of weird stuff: nuclear density material which we cannot replicate here on Earth. Neutron star matter is often described in terms of an equation of state, a relationship that explains how the material changes at different pressures or densities. A stiffer equation of state means that the material is harder to squash, and a softer equation of state is easier to squish. This means that for a given mass, a stiffer equation of state will predict a larger, fluffier neutron star, while a softer equation of state will predict a more compact, denser neutron star. In this paper, we assume that GW170817’s source is a binary neutron star system, where both neutron stars have the same equation of state, and see what we can infer about neutron star stuff™.

More details: The GW170817 Equation-of-state Paper summary

The GW170817 Discovery Paper

Synopsis: GW170817 Discovery Paper
Read this if: You want all the details of our first gravitational-wave observation of a binary neutron star coalescence
Favourite part: Look how well we measure the chirp mass!

GW170817 was a remarkable gravitational-wave discovery. It is the loudest signal observed to date, and the source with the lowest mass components. I’ve written about some of the highlights of the discovery in my previous GW170817 discovery post.

Binary neutron stars are one of the principal targets for LIGO and Virgo. The first observational evidence for the existence of gravitational waves came from observations of binary pulsars—a binary neutron star system where at least one of the components is a pulsar. Therefore (unlike binary black holes), we knew that these sources existed before we turned on our detectors. What was less certain was how often they merge. In our first advanced-detector observing run (O1), we didn’t find any, allowing us to estimate an upper limit on the merger rate of 12600~\mathrm{Gpc^{-3}\,yr^{-1}}. Now, we know much more about merging binary neutron stars.

GW170817, as a loud and long signal, is a highly significant detection. You can see it in the data by eye. Therefore, it should have been an easy detection. As is often the case with real experiments, it wasn’t quite that simple. Data transfer from Virgo had stopped overnight, and there was a glitch (a non-stationary and non-Gaussian noise feature) in the Livingston detector, which meant that these data weren’t automatically analysed. Nevertheless, GstLAL flagged something interesting in the Hanford data, and there was a mad flurry to get the other data in place so that we could analyse the signal in all three detectors. I remember being sceptical in these first few minutes until I saw the plot of Livingston data, which blew me away: the chirp was clearly visible despite the glitch!

Normalised spectrograms for GW170817

Time–frequency plots for GW170817 as measured by Hanford, Livingston and Virgo. The Livingston data have had the glitch removed. The signal is clearly visible in the two LIGO detectors as the upward sweeping chirp; it is not visible in Virgo because of its lower sensitivity and the source’s position in the sky. Figure 1 of the GW170817 Discovery Paper.

Using data from both of our LIGO detectors (as discussed for GW170814, our offline algorithms searching for coalescing binaries only use these two detectors during O2), GW170817 is an absolutely gold-plated detection. GstLAL estimates a false alarm rate (the rate at which you’d expect something at least this signal-like to appear in the detectors due to a random noise fluctuation) of less than one in 1,100,000 years, while PyCBC estimates the false alarm rate to be less than one in 80,000 years.

Parameter estimation (inferring the source properties) used data from all three detectors. We present a (remarkably thorough given the available time) initial analysis in this paper (more detailed results are given in the GW170817 Properties Paper, and the most up-to-date results are in the O2 Catalogue Paper). This signal is challenging to analyse because of the glitch and because binary neutron stars are made of stuff™, which can leave an imprint on the waveform. We’ll be looking at the effects of these complications in more detail in the future. Our initial results are

  • The source is localized to a region of about 28~\mathrm{deg^2} at a distance of 40^{+8}_{-14}~\mathrm{Mpc} (we typically quote results at the 90% credible level). This is the closest gravitational-wave source yet.
  • The chirp mass is measured to be 1.188_{-0.002}^{+0.004} M_\odot, much lower than for our binary black hole detections.
  • The spins are not well constrained, and the uncertainty from this means that we don’t get precise measurements of the individual component masses. We quote results with two choices of spin prior: the astrophysically motivated limit of 0.05, and the more agnostic and conservative upper bound of 0.89. I’ll stick to using the low-spin prior results by default.
  • Using the low-spin prior, the component masses are m_1 = 1.36–1.60 M_\odot and m_2 = 1.17–1.36 M_\odot. We have the convention that m_1 \geq m_2, which is why the masses look unequal; there’s a lot of support for them being nearly equal. These masses match what you’d expect for neutron stars.

As mentioned above, neutron stars are made of stuff™, and the properties of this leave an imprint on the waveform. If neutron stars are big and fluffy, they will get tidally distorted. Raising tides sucks energy and angular momentum out of the orbit, making the inspiral quicker. If neutron stars are small and dense, tides are smaller and the inspiral looks like that for two black holes. For this initial analysis, we used waveforms which include some tidal effects, so we get some preliminary information on the tides. We cannot exclude zero tidal deformation, meaning we cannot rule out from gravitational waves alone that the source contains at least one black hole (although this would be surprising, given the masses). However, we can place a weak upper limit on the combined dimensionless tidal deformability of \tilde{\Lambda} \leq 900. This isn’t too informative, in terms of working out what neutron stars are made from, but we’ll come back to this in the GW170817 Properties Paper and the GW170817 Equation-of-state Paper.

Given the source masses, and all the electromagnetic observations, we’re pretty sure this is a binary neutron star system—there’s nothing to suggest otherwise.

Having observed one (and only one) binary neutron star coalescence in O1 and O2, we can now put better constraints on the merger rate. As a first estimate, we assume that component masses are uniformly distributed between 1 M_\odot and 2 M_\odot, and that spins are below 0.4 (in between the limits used for parameter estimation). Given this, we infer that the merger rate is 1540_{-1220}^{+3200}~\mathrm{Gpc^{-3}\,yr^{-1}}, safely within our previous upper limit [citation note].
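
The shape of such a rate estimate is easy to sketch: with one detection, the likelihood is Poisson for a single event in the surveyed volume–time \langle VT \rangle, and you then pick a prior. The following sketch uses a 1/\sqrt{R} prior and a made-up \langle VT \rangle (the real analysis folds in the pipelines’ sensitivities and the assumed mass distribution, so the numbers here are only indicative):

```python
import numpy as np

VT = 6e-4  # Gpc^3 yr; hypothetical surveyed volume-time, chosen only for illustration

rate = np.linspace(1.0, 20000.0, 200000)  # Gpc^-3 yr^-1
dr = rate[1] - rate[0]

# Poisson likelihood for exactly one detected event, times a 1/sqrt(R) prior
posterior = rate * VT * np.exp(-rate * VT) / np.sqrt(rate)
posterior /= posterior.sum() * dr

cdf = np.cumsum(posterior) * dr
median = rate[np.searchsorted(cdf, 0.50)]
low, high = rate[np.searchsorted(cdf, 0.05)], rate[np.searchsorted(cdf, 0.95)]
print(f"R ~ {median:.0f} Gpc^-3 yr^-1 (90% interval {low:.0f}-{high:.0f})")
```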

There’s a lot more we can learn from GW170817, especially as we don’t just have gravitational waves as a source of information, and this is explained in the companion papers.

The Multimessenger Paper

Synopsis: Multimessenger Paper
Read this if: Don’t. Use it to look up which other papers to read.
Favourite part: The figures! It was a truly amazing observational effort to follow up GW170817

The remarkable thing about this paper is that it exists. Bringing together such a diverse (and competitive) group was a huge effort. Alberto Vecchio was one of the editors, and each evening when leaving the office, he was convinced that the paper would have fallen apart by morning. However, it hung together—the story was too compelling. This paper explains how gravitational waves, short gamma-ray bursts, and kilonovae all come from a single source [citation note]. This is the greatest collaborative effort in the history of astronomy.

The paper outlines the discoveries and all of the initial set of observations. If you want to understand the observations themselves, this is not the paper to read. However, using it, you can track down the papers that you do want. A huge amount of care went in to trying to describe how discoveries were made: for example, Fermi observed GRB 170817A independently of the gravitational-wave alert, and we found GW170817 without relying on the GRB alert; however, the communication between teams meant that we took everything much more seriously and pushed out alerts as quickly as possible. For more on the history of observations, I’d suggest scrolling through the GCN archive.

The paper starts with an overview of the gravitational-wave observations from the inspiral, then the prompt detection of GRB 170817A, before describing how the gravitational-wave localization enabled discovery of the optical transient AT 2017gfo. This source, in the nearby galaxy NGC 4993, was then the subject of follow-up across the electromagnetic spectrum. We have a huge amount of photometry and spectroscopy of the source, showing general agreement with models for a kilonova. X-ray and radio afterglows were observed 9 days and 16 days after the merger, respectively [citation note]. No neutrinos were found, which isn’t surprising.

The GW170817 Gamma-ray Burst Paper

Synopsis: GW170817 Gamma-ray Burst Paper
Read this if: You’re interested in the jets from which short gamma-ray bursts originate, or in tests of general relativity
Favourite part: How much science can come from a simple time delay measurement

This joint LIGO–Virgo–Fermi–INTEGRAL paper combines our observations of GW170817 and GRB 170817A. The result is one of the most contentful of the companion papers.

Gravitational-wave chirp and short gamma-ray burst

Detection of GW170817 and GRB 170817A. The top three panels show the gamma-ray lightcurves (first: GBM detectors 1, 2, and 5 for 10–50 keV; second: GBM data for 50–300 keV; third: the SPI-ACS data starting approximately at 100 keV and with a high energy limit of at least 80 MeV); the red line indicates the background. The bottom panel shows a time–frequency representation of coherently combined gravitational-wave data from LIGO-Hanford and LIGO-Livingston. Figure 2 of the GW170817 Gamma-ray Burst Paper.

The first item on the to-do list for joint gravitational-wave–gamma-ray science, is to establish that we are really looking at the same source.

From the GW170817 Discovery Paper, we know that its source is consistent with being a binary neutron star system. Hence, there is matter around which can create the gamma-rays. The Fermi-GBM and INTEGRAL observations of GRB 170817A indicate that it falls into the short class, as hypothesised to be the result of a binary neutron star coalescence. Therefore, it looks like we could have the right ingredients.

Now, given that it is possible that the gravitational waves and gamma rays have the same source, we can calculate the probability of the two occurring by chance. The probability of temporal coincidence is 5.0 \times 10^{-6}; adding in spatial coincidence too, the probability becomes 5.0 \times 10^{-8}. It’s safe to conclude that the two are associated: merging binary neutron stars are the source of at least some short gamma-ray bursts!

Testing gravity

There is a \sim1.74\pm0.05~\mathrm{s} delay time between the inferred merger time and the gamma-ray burst. Given that the signal has travelled for about 85 million years (taking the 5% lower limit on the inferred distance), this is a really small difference: gravity and light must travel at almost exactly the same speed. To derive an exact limit you need to make some assumptions about when the gamma-rays were created. We’d expect some delay, as it takes time for the jet to be created, and then for the gamma-rays to blast their way out of the surrounding material. We conservatively (and arbitrarily) take a window of the delay being 0 to 10 seconds, which gives

\displaystyle -3 \times 10^{-15} \leq \frac{v_\mathrm{GW} - v_\mathrm{EM}}{v_\mathrm{EM}} \leq 7 \times 10^{-16}.

That’s pretty small!
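
You can reproduce the order of magnitude of these bounds with nothing more than the numbers above: the fractional speed difference is roughly the difference in travel times divided by the total travel time. A quick sketch (using the 5% lower distance limit of roughly 26 Mpc and the 0–10 s emission-delay window stated above):

```python
Mpc = 3.086e22  # metres
c = 2.998e8     # metres per second

travel_time = 26 * Mpc / c  # seconds; about 85 million years of travel
observed_delay = 1.74       # seconds between the merger and the gamma-ray burst

# If the gamma rays were emitted a time emission_delay after the merger, the
# fractional speed difference is roughly (observed_delay - emission_delay) / travel_time.
for emission_delay in (0.0, 10.0):
    dv_over_v = (observed_delay - emission_delay) / travel_time
    print(f"emission delay {emission_delay:4.1f} s: (v_GW - v_EM)/v_EM = {dv_over_v:+.1e}")
```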

General relativity predicts that gravity and light should travel at the same speed, so I wasn’t too surprised by this result. I was surprised, however, that this result seems to have caused a flurry of activity in effectively ruling out several modified theories of gravity. I guess there’s not much point in explaining what these are now, but they are mostly theories which add in extra fields, which allow you to tweak how gravity works so you can explain some of the effects attributed to dark energy or dark matter. I’d recommend Figure 2 of Ezquiaga & Zumalacárregui (2017) for a summary of which theories pass the test and which are in trouble; Kase & Tsujikawa (2018) give a good review.

Viable and non-viable scalar–tensor theories

Table showing viable (left) and non-viable (right) scalar–tensor theories after discovery of GW170817/GRB 170817A. The theories are grouped as Horndeski theories and (the more general) beyond Horndeski theories. General relativity is a tensor theory, so these models add in an extra scalar component. Figure 2 of Ezquiaga & Zumalacárregui (2017).

We don’t discuss the theoretical implications of the relative speeds of gravity and light in this paper, but we do use the time delay to place bounds on particular potential deviations from general relativity.

  1. We look at a particular type of Lorentz invariance violation. This is similar to what we did for GW170104, where we looked at the dispersion of gravitational waves, but here it is for the case of \alpha = 2, which we couldn’t test.
  2. We look at the Shapiro delay, which is the time difference travelling in a curved spacetime relative to a flat one. That light and gravity are affected the same way is a test of the weak equivalence principle—that everything falls the same way. The effects of the curvature can be quantified with the parameter \gamma, which describes the amount of curvature per unit mass. In general relativity \gamma_\mathrm{GW} = \gamma_\mathrm{EM} = 1. Considering the gravitational potential of the Milky Way, we find that -2.6 \times 10^{-7} \leq \gamma_\mathrm{GW} - \gamma_\mathrm{EM} \leq 1.2 \times 10^{-6} [citation note].

As you’d expect given the small time delay, these bounds are pretty tight! If you’re working on a modified theory of gravity, you have some extra checks to do now.

Gamma-ray bursts and jets

From our gravitational-wave and gamma-ray observations, we can also make some deductions about the engine which created the burst. The complication here is that we’re not exactly sure what generates the gamma rays, and so deductions are model dependent. Section 5 of the paper uses the time delay between the merger and the burst, together with how quickly the burst rises and fades, to place constraints on the size of the emitting region in different models. The paper goes through the derivation in a step-by-step way, so I’ll not summarise that here: if you’re interested, check it out.

Energy and luminosity distribution of gamma-ray bursts

Isotropic energies (left) and luminosities (right) for all gamma-ray bursts with measured distances. These isotropic quantities assume equal emission in all directions, which gives an upper bound on the true value if we are observing on-axis. The short and long gamma-ray bursts are separated by the standard T_{90} = 2~\mathrm{s} duration. The green line shows an approximate detection threshold for Fermi-GBM. Figure 4 from the GW170817 Gamma-ray Burst Paper; you may have noticed that the first version of this paper contained two copies of the energy plot by mistake.

GRB 170817A was unusually dim [citation note]. The plot above compares it to other gamma-ray bursts. It is definitely in the tail. Since it appears so dim, we think that we are not looking at a standard gamma-ray burst. The most obvious explanation is that we are not looking directly down the jet: we don’t expect to see many off-axis bursts, since they are dimmer. We expect that a gamma-ray burst would originate from a jet of material launched along the direction of the total angular momentum. From the gravitational waves alone, we can estimate that the misalignment angle between the orbital angular momentum axis and the line of sight is \leq 55~\mathrm{deg} (adding in the identification of the host galaxy, this becomes \leq 28~\mathrm{deg} using the Planck value for the Hubble constant and 36~\mathrm{deg} with the SH0ES value), so this is consistent with viewing the burst off-axis (updated numbers are given in the GW170817 Properties Paper). There are multiple models for such gamma-ray emission, as illustrated below. We could have a uniform top-hat jet (the simplest model) which we are viewing from slightly to the side, we could have a structured jet, which is concentrated on-axis but we are seeing from off-axis, or we could have a cocoon of material pushed out of the way by the main jet, which we are viewing emission from. Other electromagnetic observations will tell us more about the inclination and the structure of the jet [citation note].

GRB 170817A jet structure and viewing angle

Cartoon showing three possible viewing geometries and jet profiles which could explain the observed properties of GRB 170817A. Figure 5 of the GW170817 Gamma-ray Burst Paper.

Now that we know gamma-ray bursts can be this dim, if we observe faint bursts (with unknown distances), we have to consider the possibility that they are dim-and-close in addition to the usual bright-and-far-away.

The paper closes by considering how many more joint gravitational-wave–gamma-ray detections of binary neutron star coalescences we should expect in the future. In our next observing run, we could expect 0.1–1.4 joint detections per year, and when LIGO and Virgo get to design sensitivity, this could be 0.3–1.7 detections per year.

The GW170817 Hubble Constant Paper

Synopsis: GW170817 Hubble Constant Paper
Read this if: You have an interest in cosmology
Favourite part: In the future, we may be able to settle the argument between the cosmic microwave background and supernova measurements

The Universe is expanding. In the nearby Universe, this can be described using the Hubble relation

v_H = H_0 D,

where v_H is the expansion velocity, H_0 is the Hubble constant and D is the distance to the source. GW170817 is sufficiently nearby for this relationship to hold. We know the distance from the gravitational-wave measurement, and we can estimate the velocity from the redshift of the host galaxy. Therefore, it should be simple to combine the two to find the Hubble constant. Of course, there are a few complications…

This work is built upon the identification of the optical counterpart AT 2017gfo. This allows us to identify the galaxy NGC 4993 as the host of GW170817’s source: we calculate that there’s a 4 \times 10^{-5} probability that AT 2017gfo would be as close to NGC 4993 on the sky by chance. Without a counterpart, it would still be possible to infer the Hubble constant statistically by cross-referencing the inferred gravitational-wave source location with the ensemble of compatible galaxies in a catalogue (you assign a probability to the source being associated with each galaxy, instead of saying it’s definitely in this one). The identification of NGC 4993 makes things much simpler.

As a first ingredient, we need the distance from gravitational waves. For this, a slightly different analysis was done than in the GW170817 Discovery Paper. We fix the sky location of the source to match that of AT 2017gfo, and we use (binary black hole) waveforms which don’t include any tidal effects. The sky position needs to be fixed, because for this analysis we are assuming that we definitely know where the source is. The tidal effects were not included (but precessing spins were) because we needed results quickly: the details of spins and tides shouldn’t make much difference to the distance. From this analysis, we find the distance is 41^{+6}_{-13}~\mathrm{Mpc} if we follow our usual convention of quoting the median and symmetric 90% credible interval; however, this paper primarily quotes the most probable value and minimal (not-necessarily symmetric) 68.3% credible interval. Following this convention, we write the distance as 44^{+3}_{-7}~\mathrm{Mpc}.

While NGC 4993 being close by makes the relationship for calculating the Hubble constant simple, it adds a complication for calculating the velocity. The motion of the galaxy is not only due to the expansion of the Universe, but also to how it is moving within the gravitational potentials of nearby groups and clusters. This is referred to as peculiar motion. Adding this in increases our uncertainty on the velocity. Combining results from the literature, our final estimate for the velocity is v_H = 3017 \pm 166~\mathrm{km\,s^{-1}}.

We put together the velocity and the distance in a Bayesian analysis. This is a little more complicated than simply dividing the numbers (although that gives you a similar result). You have to be careful about writing things down, otherwise you might implicitly assume a prior that you didn’t intend (my most useful contribution to this paper is probably a whiteboard conversation with Will Farr where we tracked down a difference in prior assumptions approaching the problem two different ways). This is all explained in the Methods, it’s not easy to read, but makes sense when you work through. The result is H_0 = 70^{+12}_{-8}~\mathrm{km\,s^{-1}\,Mpc^{-1}} (quoted as maximum a posteriori value and 68% interval, or 74^{+33}_{-12}~\mathrm{km\,s^{-1}\,Mpc^{-1}} in the usual median-and-90%-interval convention). An updated set of results is given in the GW170817 Properties Paper: H_0 = 70^{+19}_{-8}~\mathrm{km\,s^{-1}\,Mpc^{-1}} (68% interval using the low-spin prior). This is nicely (and diplomatically) consistent with existing results.
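
As a sanity check of the headline number (and nothing more; the proper analysis is the careful Bayesian one described above), you can simply divide the velocity by the distance and bracket the answer with the quoted uncertainties:

```python
# Naive estimate: H0 = v / D, using the numbers quoted above.
v, dv = 3017, 166              # km/s, Hubble velocity and its uncertainty
d, d_plus, d_minus = 44, 3, 7  # Mpc, most probable distance and 68.3% interval

H0 = v / d
H0_high = (v + dv) / (d - d_minus)  # closer and faster gives a larger H0
H0_low = (v - dv) / (d + d_plus)    # farther and slower gives a smaller H0
print(f"H0 ~ {H0:.0f} km/s/Mpc, roughly spanning {H0_low:.0f}-{H0_high:.0f} km/s/Mpc")
```

This lands in the right ballpark; what the full analysis adds is the proper (asymmetric) posterior and the careful prior choices discussed above.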

The distance has considerable uncertainty because there is a degeneracy between the distance and the orbital inclination (the angle of the normal to the orbital plane relative to the line of sight). If you could figure out the inclination from another observation, then you could tighten constraints on the Hubble constant, or if you’re willing to adopt one of the existing values of the Hubble constant, you can pin down the inclination. Data (updated data) to help you try this yourself are available [citation note].

GW170817 Hubble constant vs inclination

Two-dimensional posterior probability distribution for the Hubble constant and orbital inclination inferred from GW170817. The contours mark 68% and 95% levels. The coloured bands are measurements from the cosmic microwave background (Planck) and supernovae (SH0ES). Figure 2 of the GW170817 Hubble Constant Paper.

In the future we’ll be able to combine multiple events to produce a more precise gravitational-wave estimate of the Hubble constant. Chen, Fishbach & Holz (2017) is a recent study of how measurements should improve with more events: we should get to 4% precision after around 100 detections.

The GW170817 Kilonova Paper

Synopsis: GW170817 Kilonova Paper
Read this if: You want to check our predictions for ejecta against observations
Favourite part: We might be able to create all of the heavy r-process elements—including the gold used to make Nobel Prizes—from merging neutron stars

When two neutron stars collide, lots of material gets ejected outwards. This neutron-rich material undergoes nuclear decay—now no longer being squeezed by the strong gravity inside the neutron star, it is unstable, and decays from the strange neutron star stuff™ to become more familiar elements (elements heavier than iron, including gold and platinum). As these r-process elements are created, the nuclear reactions power a kilonova, the optical (infrared–ultraviolet) transient accompanying the merger. The properties of the kilonova depend upon how much material is ejected.

In this paper, we try to estimate how much material made up the dynamical ejecta from the GW170817 collision. Dynamical ejecta is material which escapes as the two neutron stars smash into each other (either from tidal tails or material squeezed out from the collision shock). There are other sources of ejected material, such as winds from the accretion disk which forms around the remnant (whether black hole or neutron star) following the collision, so this is only part of the picture; however, we can estimate the mass of the dynamical ejecta from our gravitational-wave measurements using simulations of neutron star mergers. These estimates can then be compared with electromagnetic observations of the kilonova [citation note].

The amount of dynamical ejecta depends upon the masses of the neutron stars, how rapidly they are rotating, and the properties of the neutron star material (described by the equation of state). Here, we use the masses inferred from our gravitational-wave measurements and feed these into fitting formulae calibrated against simulations for different equations of state. These don’t include spin, and they have quite large uncertainties (we include a 72% relative uncertainty when producing our results), so these are not precision estimates. Neutron star physics is a little messy.

We find that the dynamical ejecta is 10^{-3}–10^{-2} M_\odot (assuming the low-spin mass results). These estimates can be fed into models for kilonovae to produce lightcurves, which we do. There is plenty of this type of modelling in the literature as observers try to understand their observations, so this is nothing special in terms of understanding this event. However, it could be useful in the future (once we have hoverboards), as we might be able to use gravitational-wave data to predict how bright a kilonova will be at different times, and so help astronomers decide upon their observing strategy.

Finally, we can consider how much r-process material we can create from the dynamical ejecta. Again, we don’t consider winds, which may also contribute to the total budget of r-process elements from binary neutron stars. Our estimate for r-process elements needs several ingredients: (i) the mass of the dynamical ejecta, (ii) the fraction of the dynamical ejecta converted to r-process elements, (iii) the merger rate of binary neutron stars, and (iv) the convolution of the star formation rate and the time delay between binary formation and merger (which we take to be \propto t^{-1}). Together (i) and (ii) give the mass of r-process elements per binary neutron star (assuming that GW170817 is typical); (iii) and (iv) give the total density of mergers throughout the history of the Universe, and combining everything together you get the total mass of r-process elements accumulated over time. Using the estimated binary neutron star merger rate of 1540_{-1220}^{+3200}~\mathrm{Gpc^{-3}\,yr^{-1}}, we can explain the Galactic abundance of r-process elements if more than about 10% of the dynamical ejecta is converted.
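
To see why the numbers roughly work out, here is a deliberately crude back-of-the-envelope version (my own sketch, not the paper’s calculation: it assumes a constant merger rate over cosmic time, skips the t^{-1} delay-time convolution, and uses rough literature-style values for the Galactic r-process mass and the galaxy number density):

```python
# All masses in solar masses, volumes in Gpc^3, times in years.
rate = 1540         # binary neutron star mergers per Gpc^3 per yr (central value above)
ejecta = 5e-3       # dynamical ejecta per merger, middle of the 1e-3 to 1e-2 range
fraction = 0.5      # assumed fraction of the ejecta converted to r-process elements
cosmic_time = 1e10  # yr, order of the age of the Universe

produced = rate * ejecta * fraction * cosmic_time  # r-process mass density produced

# Rough assumed values (not from the paper):
milky_way_r_process = 1e4  # r-process mass in a Milky Way-like galaxy
galaxy_density = 1e7       # Milky Way-equivalent galaxies per Gpc^3
required = milky_way_r_process * galaxy_density

print(f"produced ~ {produced:.1e} M_sun/Gpc^3, required ~ {required:.1e} M_sun/Gpc^3")
```

With these rough inputs the two densities come out within a factor of a few of each other, which is why only a modest conversion fraction is needed.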

Binary neutron star merger rate, ejecta mass and r-process element abundance

Present day binary neutron star merger rate density versus dynamical ejecta mass. The grey region shows the inferred 90% range for the rate, the blue shows the approximate range of ejecta masses, and the red band shows the band where the Galactic elemental abundance can be reproduced if at least 50% of the dynamical mass gets converted. Part of Figure 5 of the GW170817 Kilonova Paper.

The GW170817 Stochastic Paper

Synopsis: GW170817 Stochastic Paper
Read this if: You’re impatient for finding a background of gravitational waves
Favourite part: The background symphony

For every loud gravitational-wave signal, there are many more quieter ones. We can’t pick these out of the detector noise individually, but they are still there, in our data. They add together to form a stochastic background, which we might be able to detect by correlating the data across our detector network.

Following the detection of GW150914, we considered the background due to binary black holes. This is quite loud, and might be detectable in a few years. Here, we add in binary neutron stars. This doesn’t change the overall picture too much, but gives a more complete estimate.

Binary black holes have higher masses than binary neutron stars. This means that their gravitational-wave signals are louder, and shorter (they chirp more quickly, and merge at a lower frequency). Being louder, binary black holes dominate the overall background. Being shorter, they have a different character: binary black holes form a popcorn background of short chirps which rarely overlap, but binary neutron star signals are long enough to overlap, forming a more continuous hum.

The dimensionless energy density at a gravitational-wave frequency of 25 Hz from binary black holes is 1.1_{-0.7}^{+1.2} \times 10^{-9}, and from binary neutron stars it is 0.7_{-0.6}^{+1.5} \times 10^{-9}. There are on average 0.06_{-0.04}^{+0.06} binary black hole signals in detectors at a given time, and 15_{-12}^{+31} binary neutron star signals.

Simulated background of overlapping binary signals

Simulated time series illustrating the difference between binary black hole (green) and binary neutron star (red) signals. Each chirp increases in amplitude until the point at which the binary merges. Binary black hole signals are short, loud chirps, while the longer, quieter binary neutron star signals form an overlapping background. Figure 2 from the GW170817 Stochastic Paper.
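
The difference in character boils down to a duty-cycle estimate: the mean number of signals in band at any moment is the merger rate across the Universe multiplied by the average time a signal spends in band. A sketch with rough, illustrative inputs (chosen only to land near the numbers quoted above, not taken from the paper):

```python
seconds_per_year = 3.15e7

bns_rate_density = 1540  # Gpc^-3 yr^-1, local binary neutron star merger rate
effective_volume = 300   # Gpc^3, rough comoving volume weighted by rate evolution (assumed)
bns_duration = 1000      # s, rough time a binary neutron star signal spends in band (assumed)

mergers_per_second = bns_rate_density * effective_volume / seconds_per_year
mean_in_band = mergers_per_second * bns_duration
print(f"mean number of binary neutron star signals in band ~ {mean_in_band:.0f}")
```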

To calculate the background, we need the rate of mergers. We now have an estimate for binary neutron stars, and we take the most recent estimate from the GW170104 Discovery Paper for binary black holes. We use the rates assuming the power law mass distribution for this, but the result isn’t too sensitive to this choice: we care about the number of signals in the detector, and the rates are derived from this, so they agree when working backwards. We evolve the merger rate density across cosmic history by factoring in the star formation rate and delay time between formation and merger. A similar thing was done in the GW170817 Kilonova Paper; here we used a slightly different star formation rate, but results are basically the same with either. The addition of binary neutron stars increases the stochastic background from compact binaries by about 60%.

Detection in our next observing run, at a moderate significance, is possible, but I think unlikely. It will be a few years until detection is plausible, but the addition of binary neutron stars will bring this closer. When we do detect the background, it will give us another insight into the merger rate of binaries.

The GW170817 Progenitor Paper

Synopsis: GW170817 Progenitor Paper
Read this if: You want to know about neutron star formation and supernovae
Favourite part: The Spirography figures

The identification of NGC 4993 as the host galaxy of GW170817’s binary neutron star system allows us to make some deductions about how it formed. In this paper, we simulate a large number of binaries, tracing the later stages of their evolution, to see which ones end up similar to GW170817. By doing so, we learn something about the supernova explosion which formed the second of the two neutron stars.

The neutron stars started life as a pair of regular stars [bonus note]. These burned through their hydrogen fuel, and once this was exhausted, they exploded as supernovae. The core of the star collapses down to become a neutron star, and the outer layers are blasted off. The more massive star evolves faster, and goes supernova first. We’ll consider the effects of the second supernova, and the kick it gives to the binary: the orbit changes both because of the rocket effect of material being blasted off, and because one of the components loses mass.

From the combination of the gravitational-wave and electromagnetic observations of GW170817, we know the masses of the neutron stars, the type of galaxy they are found in, and the position of the binary within the galaxy at the time of merger (we don’t know the exact position, just its projection as viewed from Earth, but that’s something).

Post-supernova orbits in model NGC 4993

Orbital trajectories of simulated binaries which led to a GW170817-like merger. The coloured lines show the 2D projection of the orbits in our model galaxy. The white lines mark the initial (projected) circular orbit of the binary pre-supernova, and the red arrows indicate the projected direction of the supernova kick. The background shading indicates the stellar density. Figure 4 of the GW170817 Progenitor Paper; animated equivalents can be found in the Science Summary.

We start by simulating lots of binaries just before the second supernova explodes. These are scattered at different distances from the centre of the galaxy, have different orbital separations, and have different masses of the pre-supernova star. We then add the effects of the supernova, adding in a kick. We fix the neutron star masses to match those we inferred from the gravitational wave measurements. If the supernova kick is too big, the binary flies apart and will never merge (boo). If the binary remains bound, we follow its evolution as it moves through the galaxy. The structure of the galaxy is simulated as a simple spherical model, a Hernquist profile for the stellar component and a Navarro–Frenk–White profile for the dark matter halo [citation note], which are pretty standard. The binary shrinks as gravitational waves are emitted, and eventually merges. If the merger happens at a position which matches our observations (yay), we know that the initial conditions could explain GW170817.
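
The key bound/unbound test in such simulations is just Newtonian two-body bookkeeping: mass is lost instantaneously, a kick is added to the orbital velocity, and you check the sign of the new orbital energy. Here is a minimal sketch of that test (my own illustration with roughly GW170817-like numbers, not the paper’s code):

```python
import numpy as np

G = 6.674e-11  # SI units throughout
M_sun = 1.989e30
R_sun = 6.957e8

def survives_supernova(m_he, m_companion, m_ns, separation, kick, kick_angle):
    """Does the binary stay bound after the second supernova?

    Assumes a circular pre-supernova orbit, instantaneous mass loss from the
    helium star (m_he) down to the new neutron star (m_ns), and a kick applied
    in the orbital plane at kick_angle to the orbital velocity.
    """
    v_orb = np.sqrt(G * (m_he + m_companion) / separation)  # pre-supernova relative speed
    v_new = np.array([v_orb + kick * np.cos(kick_angle),    # new relative velocity
                      kick * np.sin(kick_angle)])
    m_post = m_ns + m_companion
    energy = 0.5 * v_new @ v_new - G * m_post / separation  # specific orbital energy
    return energy < 0

# A 3 M_sun helium star with a 1.4 M_sun companion, 4 R_sun apart, collapsing
# to a 1.3 M_sun neutron star (illustrative values).
args = (3 * M_sun, 1.4 * M_sun, 1.3 * M_sun, 4 * R_sun)
print(survives_supernova(*args, kick=300e3, kick_angle=np.pi))  # retrograde kick: stays bound
print(survives_supernova(*args, kick=600e3, kick_angle=0.0))    # prograde kick: unbinds
```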

Inferred supernova kick, progenitor stellar mass, pre-supernova orbital separation and supernova galactic radius

Inferred progenitor properties: (second) supernova kick velocity, pre-supernova progenitor mass, pre-supernova binary separation and galactic radius at time of the supernova. The top row shows how the properties vary for different delay times between supernova and merger. The middle row compares all the binaries which survive the second supernova with the GW170817-like ones. The bottom row shows parameters for GW170817-like binaries with different galactic offsets than the 1.8~\mathrm{kpc} to 2.2~\mathrm{kpc} range used for GW170817. The middle and bottom rows assume a delay time of at least 1~\mathrm{Gyr}. Figure 5 of the GW170817 Progenitor Paper; to see correlations between parameters, check out Figure 8 of the GW170817 Progenitor Paper.

The plot above shows the constraints on the progenitor’s properties. The inferred second supernova kick is V_\mathrm{kick} \simeq 300_{-200}^{+250}~\mathrm{km\,s^{-1}}, similar to what has been observed for neutron stars in the Milky Way; the pre-supernova stellar mass is M_\mathrm{He} \simeq 3.0_{-1.5}^{+3.5} M_\odot (we assume that the star is just a helium core, with the outer hydrogen layers having been stripped off, hence the subscript); the pre-supernova orbital separation was 3.5_{-1.5}^{+5.0} R_\odot, and the offset from the centre of the galaxy at the time of the supernova was 2.0_{-1.5}^{+4.0}~\mathrm{kpc}. The strongest constraints come from keeping the binary bound after the supernova; results are largely independent of the delay time once this gets above 1~\mathrm{Gyr} [citation note].

As we collect more binary neutron star detections, we’ll be able to deduce more about how they form. If you’re interested in how to build a binary neutron star system, the introduction to this paper is well referenced; Tauris et al. (2017) is a detailed (pre-GW170817) review, and Stevance et al. (2023) do some detailed investigations of potential binary evolution to see how to form GW170817’s source (finding the stars were probably born 5–12.5~\mathrm{Gyr} ago from stars of 13–24 M_\odot and 10–12 M_\odot).

The GW170817 Neutrino Paper

Synopsis: GW170817 Neutrino Paper
Read this if: You want a change from gravitational wave–electromagnetic multimessenger astronomy
Favourite part: There’s still something to look forward to with future detections—GW170817 hasn’t stolen all the firsts. Also this paper is not Abbott et al.

This is a joint search by ANTARES, IceCube and the Pierre Auger Observatory for neutrinos coincident with GW170817. Knowing both the location and the time of the binary neutron star merger makes it easy to search for counterparts. No matching neutrinos were detected.

GW170817 localization and neutrino candidates

Neutrino candidates at the time of GW170817. The map is in equatorial coordinates. The gravitational-wave localization is indicated by the red contour, and the galaxy NGC 4993 is indicated by the black cross. Up-going and down-going regions for each detector are indicated; the detectors are more sensitive to up-going neutrinos, because the Cherenkov detectors are subject to a background from cosmic rays hitting the atmosphere. Figure 1 from the GW170817 Neutrino Paper.

Using the non-detections, we can place upper limits on the neutrino flux. These are summarised in the plots below. Optimistic models for prompt emission from an on-axis gamma-ray burst would lead to a detectable flux, but otherwise theoretical predictions indicate that a non-detection is expected. From electromagnetic observations, it doesn’t seem like we are on-axis, so the story all fits together.

Neutrino upper limits

90% confidence upper limits on neutrino spectral fluence F per flavour (electron, muon and tau) as a function of energy E in a \pm 500~\mathrm{s} window about the GW170817 trigger time (top), and in a 14~\mathrm{day} window following GW170817 (bottom). IceCube is also sensitive to MeV neutrinos (none were detected). Fluences are the per-flavour sum of neutrino and antineutrino fluence, assuming equal fluence in all flavours. These are compared to theoretical predictions from Kimura et al. (2017) and Fang & Metzger (2017), scaled to a distance of 40 Mpc. The angles labelling the models are viewing angles in excess of the jet opening angle. Figure 2 from the GW170817 Neutrino Paper.

Super-Kamiokande have done their own search for neutrinos, from 3.5~\mathrm{MeV} to around 100~\mathrm{PeV} (Abe et al. 2018). They found nothing in either the \pm 500~\mathrm{s} window around the event or the 14~\mathrm{day} window following it. Similarly, BUST looked for muon neutrinos and antineutrinos and found nothing in the \pm 500~\mathrm{s} window around the event, and no excess in the 14~\mathrm{day} window following it (Petkov et al. 2019). NOvA looked for neutrinos and cosmic rays 1000~\mathrm{s} around the event and found nothing (Acero et al. 2020).

The only post-detection neutrino modelling paper I’ve seen is Biehl, Heinze & Winter (2017). They model prompt emission from the same source as the gamma-ray burst and find that neutrino fluxes would be about 10^{-4} of current sensitivity.

The GW170817 Post-merger Paper

Synopsis: GW170817 Post-merger Paper
Read this if: You are an optimist
Favourite part: We really do check everywhere for signals

Following the inspiral of two black holes, we know what happens next: the black holes merge to form a bigger black hole, which quickly settles down to its final stable state. We have a complete model of the gravitational waves from the inspiral–merger–ringdown life of coalescing binary black holes. Binary neutron stars are more complicated.

The inspiral of two binary neutron stars is similar to that for black holes. As they get closer together, we might see some imprint of tidal distortions not present for black holes, but the main details are the same. It is the chirp of the inspiral which we detect. As the neutron stars merge, however, we don’t have a clear picture of what goes on. Material gets shredded and ejected from the neutron stars; the neutron stars smash together; it’s all rather messy. We don’t have a good understanding of what should happen when our neutron stars merge, the details depend upon the properties of the stuff™ neutron stars are made of—if we could measure the gravitational-wave signal from this phase, we would learn a lot.

There are four plausible outcomes of a binary neutron star merger:

  1. If the total mass is below the maximum mass for a (non-rotating) neutron star (M < M^\mathrm{Static}), we end up with a bigger, but still stable neutron star. Given our inferences from the inspiral (see the plot from the GW170817 Gamma-ray Burst Paper below), this is unlikely.
  2. If the total mass is above the limit for a stable, non-rotating neutron star, but can still be supported by uniform rotation (M^\mathrm{Static} < M < M^\mathrm{Uniform}), we have a supramassive neutron star. The rotation will slow down due to the emission of electromagnetic and gravitational radiation, and eventually the neutron star will collapse to a black hole. The time until collapse could be something like 10–5 \times 10^4~\mathrm{s}; it is unclear if this is long enough for supramassive neutron stars to have a mid-life crisis.
  3. If the total mass is above the limit for support from uniform rotation, but can still be supported through differential rotation and thermal gradients (M^\mathrm{Uniform} < M < M^\mathrm{Differential}), then we have a hypermassive neutron star. The hypermassive neutron star cools quickly through neutrino emission, and its rotation slows through magnetic braking, meaning that it promptly collapses to a black hole in \lesssim 1~\mathrm{s}.
  4. If the total mass is big enough (M^\mathrm{Differential} < M), the merging neutron stars collapse down to a black hole.

In the case of the collapse to a black hole, we get a ringdown as in the case of a binary black hole merger. The frequency is around 6~\mathrm{kHz}, too high for us to currently measure. However, if there is a neutron star, there may be slightly lower frequency gravitational waves from the neutron star matter wibbling about. We’re not exactly sure of the form of these signals, so we perform an unmodelled search for them (knowing the position of GW170817’s source helps for this).
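As a rough illustration of how the four outcomes above depend on the remnant mass, here is a minimal classification sketch. The threshold masses are equation-of-state dependent and unknown; the numbers used below are placeholders I have picked for illustration, not values from the paper.

```python
def remnant_fate(total_mass, m_static=2.2, m_uniform=2.6, m_differential=3.2):
    """Toy classification of a binary neutron star merger remnant.

    total_mass is the remnant mass in solar masses (ignoring ejecta and
    binding energy). The three thresholds depend on the equation of state;
    the defaults here are illustrative placeholders only.
    """
    if total_mass < m_static:
        return "stable neutron star"
    elif total_mass < m_uniform:
        return "supramassive neutron star (collapses after spinning down)"
    elif total_mass < m_differential:
        return "hypermassive neutron star (collapses within ~1 s)"
    return "prompt collapse to a black hole"

# GW170817's total mass is around 2.7 solar masses, so with these example
# thresholds we would land in the hypermassive neutron star regime (the
# real answer depends on the unknown thresholds).
print(remnant_fate(2.7))
```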

Maximum neutron star masses

Comparison of inferred component masses with critical mass boundaries for different equations of state. The left panel shows the maximum mass of a non-rotating neutron star compared to the initial baryonic mass (ignoring material ejected during merger and gravitational binding energy); the middle panel shows the maximum mass for a uniformly rotating neutron star; the right panel shows the maximum mass of a non-rotating neutron star compared to the gravitational mass of the heavier component neutron star. Figure 3 of the GW170817 Gamma-ray Burst Paper.

Several different search algorithms were used to hunt for a post-merger signal:

  1. coherent WaveBurst (cWB) was used to look for short duration (< 1~\mathrm{s}) bursts. This searched a 2~\mathrm{s} window including the merger time and covering the 1.7~\mathrm{s} delay to the gamma-ray burst detection, and frequencies of 1024–4096~\mathrm{Hz}. Only LIGO data were used, as Virgo data suffered from large noise fluctuations above 2.5~\mathrm{kHz}.
  2. cWB was used to look for intermediate duration (< 500~\mathrm{s}) bursts. This searched a 1000~\mathrm{s} window from the merger time, and frequencies 24–2048~\mathrm{Hz}. This used LIGO and Virgo data.
  3. The Stochastic Transient Analysis Multi-detector Pipeline (STAMP) was also used to look for intermediate duration signals. This searched the merger time until the end of O2 (in 500~\mathrm{s} chunks), and frequencies 24–4000~\mathrm{Hz}. This used only LIGO data. There are two variations of STAMP: Zebragard and Lonetrack, and both are used here.

Although GEO is similar to LIGO and Virgo in sensitivity at the searched high frequencies, its data were not used as we have not yet studied its noise properties in enough detail. Since the LIGO detectors are the most sensitive, their data are the most important for the search.

No plausible candidates were found, so we set some upper limits on what could have been detected. From these, it is not surprising that nothing was found, as we would need pretty much all of the mass of the remnant to somehow be converted into gravitational waves to see something. Results are shown in the plot below. An updated analysis which puts upper limits on the post-merger signal is given in the GW170817 Properties Paper.

Detector sensitivities and search upper limits

Noise amplitude spectral density \sqrt{S_n} for the four detectors, and search upper limits h_\mathrm{rss} as a function of frequency. The noise amplitude spectral densities compare the sensitivities of the detectors. The search upper limits are root-sum-squared strain amplitudes at 50% detection efficiency. The colour code of the upper-limit markers indicates the search algorithm and the shape indicates the waveform injected to set the limits (the frequency is the average for this waveform). The bar mode waveform comes from the rapid rotation of the supramassive neutron star leading to it becoming distorted (stretched) in a non-axisymmetric way (Lasky, Sarin & Sammut 2017); the magnetar waveform assumes that the (rapidly rotating) supramassive neutron star’s magnetic field generates significant ellipticity (Corsi & Mészáros 2009); the short-duration merger waveforms are from a selection of numerical simulations (Bauswein et al. 2013; Takami et al. 2015; Kawamura et al. 2016; Ciolfi et al. 2017). The open squares are merger waveforms scaled to the distance and orientation inferred from the inspiral of GW170817. The dashed black lines show strain amplitudes for a narrow-band signal with fixed energy content: the top line is the maximum possible value for GW170817. Figure 1 of the GW170817 Post-merger Paper.
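To get a feel for the energy scale behind those dashed lines, there is a standard rule of thumb relating the root-sum-squared strain of a narrow-band burst to the energy radiated, assuming isotropic emission. This is only a back-of-the-envelope sketch (not the calculation used in the paper), with an illustrative strain value of around the search upper limits:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
MPC = 3.086e22     # one megaparsec in metres
M_SUN = 1.989e30   # solar mass in kg

def burst_energy(h_rss, distance_mpc, f0):
    """Isotropic-emission estimate E_GW = (pi^2 c^3 / G) D^2 f0^2 h_rss^2."""
    D = distance_mpc * MPC
    return (math.pi ** 2 * c ** 3 / G) * D ** 2 * f0 ** 2 * h_rss ** 2

# A 2 kHz signal at 40 Mpc with h_rss ~ 1e-22 (roughly the limits above)
E = burst_energy(1e-22, 40, 2000)
print(f"E_GW ~ {E:.1e} J ~ {E / (M_SUN * c ** 2):.1f} solar masses c^2")
```

which is why nothing short of converting a solar mass or so into gravitational waves would have been detectable.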

We can’t tell the fate of GW170817’s neutron stars from gravitational waves alone [citation note]. As high-frequency sensitivity is improved in the future, we might be able to see something from a really close by binary neutron star merger.

The GW170817 Properties Paper

Synopsis: GW170817 Properties Paper
Read this if: You want the best results for GW170817’s source, our best measurement of the Hubble constant, or limits on the post-merger signal
Favourite part: Look how tiny the uncertainties are!

As time progresses, we often refine our analyses of gravitational-wave data. This can be because we’ve had time to recalibrate data from our detectors, because better analysis techniques have been developed, or just because we’ve had time to allow more computationally intensive analyses to finish. This paper is our first attempt at improving our inferences about GW170817. The results use an improved calibration of Virgo data, analyse more of the signal (down to a low frequency of 23 Hz, instead of 30 Hz, which gives us about an extra 1500 cycles), use improved models of the waveforms, and include a new analysis looking at the post-merger signal. The results update those given in the GW170817 Discovery Paper, the GW170817 Hubble Constant Paper and the GW170817 Post-merger Paper.

Inspiral

Our initial analysis was based upon a quick-to-calculate post-Newtonian waveform known as TaylorF2. We thought this should be a conservative choice: any results with more complicated waveforms should give tighter results. This worked out. We try several different waveform models, each based upon the point particle waveforms we use for analysing binary black hole signals with extra bits to model the tidal deformation of neutron stars. The results are broadly consistent, so I’ll concentrate on discussing our preferred results calculated using the IMRPhenomPNRT waveform (which uses IMRPhenomPv2 as a base and adds on numerical-relativity calibrated tides). As in the GW170817 Discovery Paper, we perform the analysis with two priors on the binary spins, one with spins up to 0.89 (which should safely encompass all possibilities for neutron stars), and one with spins of up to 0.05 (which matches observations of binary neutron stars in our Galaxy).

The first analysis we did was to check the location of the source. Reassuringly, we are still perfectly consistent with the location of AT 2017gfo (phew!). The localization is much improved: the 90% sky area is down to just 16~\mathrm{deg^2}! Go Virgo!

Having established that it still makes sense that AT 2017gfo pin-points the source location, we use this as the position in subsequent analyses. We always use the sky position of the counterpart and the redshift of the host galaxy (Levan et al. 2017), but we don’t typically use the distance. This is because we want to be able to measure the Hubble constant, which relies on using the distance inferred from gravitational waves.

We use the distance from Cantiello et al. (2018) [citation note] for one calculation: an estimation of the inclination angle. The inclination is degenerate with the distance (both affect the amplitude of the signal), so having constraints on one lets us measure the other with improved precision. Without the distance information, we find that the angle between the binary’s total angular momentum and the line of sight is 152^{+21}_{-27}~\mathrm{deg} for the high-spin prior and 146^{+25}_{-27}~\mathrm{deg} with the low-spin prior. The difference between the two results is because the spin angular momentum slightly shifts the direction of the total angular momentum. Incorporating the distance information, for the high-spin prior the angle is 153^{+15}_{-11}~\mathrm{deg} (so the misalignment angle is 27^{+11}_{-15}~\mathrm{deg}), and for the low-spin prior it is 151^{+15}_{-11}~\mathrm{deg} (misalignment 29^{+11}_{-15}~\mathrm{deg}) [citation note].
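As a reminder of why knowing the distance helps, the two gravitational-wave polarisations scale with the inclination \iota and luminosity distance d_L roughly as

h_+ \propto \frac{1 + \cos^2 \iota}{2 d_L}, \qquad h_\times \propto \frac{\cos \iota}{d_L},

so pinning down d_L from an independent measurement breaks the degeneracy and tightens \iota. The misalignment angles quoted above are just the inclination folded about 90^\circ, e.g. 180^\circ - 153^\circ = 27^\circ.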

Orientation and magnitudes of the two spins

Estimated orientation and magnitude of the two component spins. The left pair is for the high-spin prior and so magnitudes extend to 0.89, and the right pair are for the low-spin prior and extend to 0.05. In each, the distribution for the more massive component is on the left, and for the smaller component on the right. The probability is binned into areas which have uniform prior probabilities. The low-spin prior truncates the posterior distribution, but this is less of an issue for the high-spin prior. Results are shown at a point in the inspiral corresponding to a gravitational-wave frequency of 100~\mathrm{Hz}. Parts of Figures 8 and 9 of the GW170817 Properties Paper.

Main results include:

  • The luminosity distance is 38.7_{-14.3}^{+7.4}~\mathrm{Mpc} with the low-spin prior and 40.8_{-12.3}^{+5.6}~\mathrm{Mpc} with the high-spin prior. The difference is for the same reason as the difference in inclination measurements. The results are consistent with the distance to NGC 4993 [citation note].
  • The chirp mass redshifted to the detector-frame is measured to be 1.1975^{+0.0001}_{-0.0001} M_\odot with the low-spin prior and 1.1976^{+0.0001}_{-0.0001} M_\odot with the high-spin prior. This corresponds to a physical chirp mass of 1.186_{-0.001}^{+0.001} M_\odot.
  • The spins are not well constrained. We get the best measurement along the direction of the orbital angular momentum. For the low-spin prior, this is enough to disfavour the spins being antialigned, but that’s about it. For the high-spin prior, we rule out large spins aligned or antialigned, and very large spins in the plane. The aligned components of the spin are best described by the effective inspiral spin parameter \chi_\mathrm{eff}, for the low-spin prior it is 0.00^{+0.02}_{-0.01} and for the high-spin prior it is 0.02^{+0.08}_{-0.02}.
  • Using the low-spin prior, the component masses are m_1 = 1.36–1.60 M_\odot and m_2 = 1.16–1.36 M_\odot, and for the high-spin prior they are m_1 = 1.36–1.89 M_\odot and m_2 = 1.00–1.36 M_\odot.

These are largely consistent with our previous results. There are small shifts, but the biggest change is that the errors are a little smaller.
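As a quick consistency check on the chirp masses quoted above, the detector-frame and source-frame values differ by a factor of (1 + z), so the implied redshift is

1 + z = \frac{\mathcal{M}_\mathrm{det}}{\mathcal{M}_\mathrm{source}} \approx \frac{1.1975 M_\odot}{1.186 M_\odot} \approx 1.0097,

i.e. z \approx 0.01, which is indeed about right for a source at around 40~\mathrm{Mpc}.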

Binary neutron star masses

Estimated masses for the two neutron stars in the binary using the high-spin (left) and low-spin (right) priors. The two-dimensional plot follows a line of constant chirp mass which is too narrow to resolve on this scale. Results are shown for four different waveform models. TaylorF2 (used in the initial analysis), IMRPhenomDNRT and SEOBNRT have aligned spins, while IMRPhenomPNRT includes spin precession. IMRPhenomPNRT is used for the main results. Figure 5 of the GW170817 Properties Paper.

For the Hubble constant, we find H_0 = 70^{+19}_{-8}~\mathrm{km\,s^{-1}\,Mpc^{-1}} with the low-spin prior and H_0 = 70^{+13}_{-7}~\mathrm{km\,s^{-1}\,Mpc^{-1}} with the high-spin prior. Here, we quote the maximum a posteriori value and narrowest 68% intervals, as opposed to the usual median and symmetric 90% credible interval. You might think it’s odd that the uncertainty is smaller when using the wider high-spin prior, but this is just another consequence of the difference in the inclination measurements. The values are largely in agreement with our initial values.
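If you want to quote numbers the same way from posterior samples (maximum a posteriori value plus the narrowest 68% interval, instead of the median and a symmetric interval), a minimal sketch is below. The fake samples are only a stand-in for the real H_0 posterior, not something from the paper.

```python
import numpy as np

def map_and_narrowest_interval(samples, level=0.68):
    """Approximate MAP value and narrowest credible interval from samples.

    The MAP is estimated from a simple histogram; the interval is the
    shortest window containing a fraction `level` of the sorted samples.
    """
    samples = np.sort(np.asarray(samples))
    counts, edges = np.histogram(samples, bins=100)
    peak = np.argmax(counts)
    map_estimate = 0.5 * (edges[peak] + edges[peak + 1])
    n = len(samples)
    width = int(np.floor(level * n))
    spans = samples[width:] - samples[: n - width]
    lo = np.argmin(spans)
    return map_estimate, (samples[lo], samples[lo + width])

# Fake, skewed samples standing in for an H0 posterior (km/s/Mpc):
fake_h0 = np.random.gamma(shape=20.0, scale=3.8, size=100_000)
print(map_and_narrowest_interval(fake_h0))
```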

The best measured tidal parameter is the combined dimensionless tidal deformability \tilde{\Lambda}. With the high-spin prior, we can only set an upper bound of \tilde{\Lambda} < 630 . With the low-spin prior, we find that we are still consistent with zero deformation, but the distribution peaks away from zero. We have \tilde{\Lambda} = 300^{+500}_{-190} using the usual median and symmetric 90% credible interval, and \tilde{\Lambda} = 300^{+420}_{-230} if we take the narrowest 90% interval. This looks like we have detected matter effects, but since we’ve had to use the low-spin prior, which is only appropriate for neutron stars, this would be a circular argument. More details on what we can learn about tidal deformations and what neutron stars are made of, under the assumption that we do have neutron stars, are given in the GW170817 Equation-of-state Paper.

Post-merger

Previously, in the GW170817 Post-merger Paper, we searched for a post-merger signal. We didn’t find anything. Now, we try to infer the shape of the signal, assuming it is there (with a peak within 250~\mathrm{ms} of the coalescence time). We still don’t find anything, but now we set much tighter upper limits on what signal could be there.

For this analysis, we use data from the two LIGO detectors, and from GEO 600! We don’t use Virgo data, as it is not well behaved at these high frequencies. We use BayesWave to try to constrain the signal.

Detector sensitivities and signal strain upper limits

Noise amplitude spectral density for the detectors used, prior and posterior strain upper limits, and selected numerical simulations as a function of frequency. The signal upper limits are Bayesian 90% credible bounds for the signal in Hanford, but are derived from a coherent analysis of all three indicated detectors. Figure 13 of the GW170817 Properties Paper.

While the upper limits are much better, they are still about 12–215 times larger than expectations from simulations. Therefore, we’d need to improve our detector sensitivity by about a factor of 3.5–15 to detect a similar signal. Fingers crossed!

The GW170817 Equation-of-state Paper

Synopsis: GW170817 Equation-of-state Paper
Read this if: You want to know what neutron stars are made of
Favourite part: The beautiful butterfly plots

Usually in our work, we like to remain open minded and not make too many assumptions. In our analysis of GW170817, as presented in the GW170817 Properties Paper, we have remained agnostic about the components of the binary, seeing what the data tell us. However, from the electromagnetic observations, there is solid evidence that the source is a binary neutron star system. In this paper, we take it for granted that the source is made of two neutron stars, and that these neutron stars are made of similar stuff™ [citation note], to see what we can learn about the properties of neutron stars.

When two neutron stars get close together, they become distorted by each other’s gravity. Tides are raised, kind of like how the Moon creates tides on Earth. Creating tides takes energy out of the orbit, causing the inspiral to proceed faster. This is something we can measure from the gravitational wave signal. Tides are larger when the neutron stars are bigger. The size of neutron stars and how easy they are to stretch and squash depends upon their equation of state. We can use the measurements of the neutron star masses and amount of tidal deformation to infer their size and their equation of state.
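For reference, the dimensionless tidal deformability being measured is set by the star’s (second) Love number k_2, its mass m and its radius R,

\Lambda = \frac{2}{3} k_2 \left(\frac{c^2 R}{G m}\right)^5,

so bigger, fluffier stars have much larger values of \Lambda than compact ones.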

The signal is analysed as in the GW170817 Properties Paper (IMRPhenomPNRT waveform, low-spin prior, position set to match AT 2017gfo). However, we also add in some information about the composition of neutron stars.

Calculating the behaviour of this incredibly dense material is difficult, but there are some relations (called universal relations) between the tidal deformability of neutron stars and their radii which are insensitive to the details of the equation of state. One, which relates symmetric and antisymmetric combinations of the tidal deformations of the two neutron stars as a function of the mass ratio, allows us to calculate consistent tidal deformations. Another, which relates the tidal deformation to the compactness (mass divided by radius), allows us to convert tidal deformations to radii. The analysis includes the uncertainty in these relations.
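As a sketch of how the second relation gets used, here is a minimal conversion from a tidal deformability to a radius via a quadratic-in-\ln\Lambda fit for the compactness (sometimes called the C–Love relation). The coefficients below are rough placeholder values of the kind quoted in the universal-relations literature, not the exact ones used in the paper, and the real analysis also folds in the uncertainty of the fit:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def compactness_from_lambda(lam, a0=0.371, a1=-0.0391, a2=0.001056):
    """Equation-of-state-insensitive fit C = a0 + a1*ln(L) + a2*ln(L)^2.

    The default coefficients are approximate placeholders, assumed here
    for illustration only.
    """
    x = math.log(lam)
    return a0 + a1 * x + a2 * x ** 2

def radius_from_lambda(lam, mass_msun):
    """Convert tidal deformability and mass to a radius via R = G m / (C c^2)."""
    C = compactness_from_lambda(lam)
    return G * mass_msun * M_SUN / (C * c ** 2)

# Example: Lambda ~ 190 for a 1.4 solar-mass star (the value quoted below)
print(f"R ~ {radius_from_lambda(190, 1.4) / 1e3:.1f} km")
```

This lands at about 10–11 km, in the same ballpark as the radii quoted later in this section.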

In addition to this, we also use a parametric model of the equation of state to model the tidal deformations. By sampling directly in terms of the equation of state, it is easy to impose constraints on the allowed values. For example, we impose that the speed of sound inside the neutron star is less than the speed of light, that the equation of state can support neutron stars of that mass, that it is possible to explain the most massive confirmed neutron star (we use a lower limit for this mass of 1.97 M_\odot), as well as it being thermodynamically stable. Accommodating the most massive neutron star turns out to be an important piece of information.

The plot below shows the inferred tidal deformation parameters for the two neutron stars. The two techniques, using the equation-of-state insensitive relations and using the parametrised equation-of-state model without including the constraint of matching the 1.97 M_\odot neutron star, give similar results. For a 1.4 M_\odot neutron star, these results indicate that the tidal deformation parameter would be \Lambda_{1.4} = 190^{+390}_{-120}. We favour softer equations of state over stiffer ones [citation note]. I think this means that neutron stars are more huggable.

Tidal deformations assuming neutron star components for GW170817's source

Probability distributions for the tidal parameters of the two neutron stars. The tidal deformation of the more massive neutron star \Lambda_1 must be greater than that for the smaller neutron star \Lambda_2. The green shading and (50% and 90%) contours are calculated using the equation-of-state insensitive relations. The blue contours are for the parametrised equation-of-state model. The orange contours are from the GW170817 Properties Paper, where we don’t assume a common equation of state. The black lines are predictions from a selection of different equations of state. Figure 1 of the GW170817 Equation-of-state Paper.

We can translate our results into estimates on the size of the neutron stars. The plots below show the inferred radii. The results for the parametrised equation-of-state model now include the constraint of accommodating a 1.97 M_\odot neutron star, which is the main reason for the difference in the plots. Using the equation-of-state insensitive relations we find that the radius of the heavier (m_1 = 1.36–1.62 M_\odot) neutron star is R_1 = 10.8^{+2.0}_{-1.7}~\mathrm{km} and the radius of the lighter (m_2 = 1.15–1.36 M_\odot) neutron star is R_2 = 10.7^{+2.1}_{-1.5}~\mathrm{km}. With the parametrised equation-of-state model, the radii are R_1 = 11.9^{+1.4}_{-1.4}~\mathrm{km} (m_1 = 1.36–1.58 M_\odot) and R_2 = 11.9^{+1.4}_{-1.4}~\mathrm{km} (m_2 = 1.18–1.36 M_\odot).

Neutron star masses and radii

Posterior probability distributions for neutron star masses and radii (blue for the more massive neutron star, orange for the lighter). The left plot uses the equation-of-state insensitive relations, and the right uses the parametrised equation-of-state model. In the one-dimensional plots, the dashed lines indicate the priors. The lines in the top left indicate the size of a Schwarzschild black hole and the Buchdahl limit for the collapse of a neutron star. Figure 3 of the GW170817 Equation-of-state Paper.

When I was an undergraduate, I remember learning that neutron stars were about 15~\mathrm{km} in radius. We now know that’s not the case.

If you want to investigate further, you can download the posterior samples from these analyses.

Bonus notes

Standard sirens

In astronomy, we often use standard candles, objects like type Ia supernovae of known luminosity, to infer distances. If you know how bright something should be, and how bright you measure it to be, you know how far away it is. By analogy, we can infer how far away a gravitational-wave source is by how loud it is. It is thus not a candle, but a siren. Sean Carroll explains more about this term on his blog.
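In equation form, the analogy is: for a standard candle you compare the known luminosity L with the measured flux F, whereas for a standard siren the measured strain amplitude h falls off inversely with distance,

F = \frac{L}{4 \pi d^2} \;\Rightarrow\; d = \sqrt{\frac{L}{4 \pi F}}, \qquad h \propto \frac{1}{d} \;\Rightarrow\; d \propto \frac{1}{h},

with the intrinsic loudness of a compact binary being set by the chirp mass, which we can read off from the signal’s frequency evolution.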

Nature

I know… Nature published the original Schutz paper on measuring the Hubble constant using gravitational waves; therefore, there’s a nice symmetry in publishing the first real result doing this in Nature too.

Globular clusters

Instead of a binary neutron star system forming from a binary of two stars born together, it is possible for two neutron stars to come close together in a dense stellar environment like a globular cluster. A significant fraction of binary black holes could be formed this way. Binary neutron stars, being less massive, are not as commonly formed this way. We wouldn’t expect GW170817 to have formed this way. In the GW170817 Progenitor Paper, we argue that the probability of GW170817’s source coming from a globular cluster is small—for predicted rates, see Bae, Kim & Lee (2014).

Levan et al. (2017) check for a stellar cluster at the site of AT 2017gfo, and find nothing. The smallest 30% of the Milky Way’s globular clusters would evade this limit, but these account for just 5% of the stellar mass in globular clusters, and a tiny fraction of dynamical interactions. Fong et al. (2019) perform some detailed observations looking for a globular cluster, and also find nothing. This excludes a cluster down to 1.3 \times 10^4 M_\odot, which is basically all (99.996%) of them. Therefore, it’s unlikely that a cluster is the source of this binary.

Citation notes

Merger rates

From our gravitational-wave data, we estimate the current binary neutron star merger rate density is 1540_{-1220}^{+3200}~\mathrm{Gpc^{-3}\,yr^{-1}}. Several electromagnetic observers performed their own rate estimates from the frequency of detection (or lack thereof) of electromagnetic transients.

Kasliwal et al. (2017) consider transients seen by the Palomar Transient Factory, and estimate a rate density of approximately 320~\mathrm{Gpc^{-3}\,yr^{-1}} (3-sigma upper limit of 800~\mathrm{Gpc^{-3}\,yr^{-1}}), towards the bottom end of our range, but their rate increases if not all mergers are as bright as AT 2017gfo.

Siebert et al. (2017) work out the rate of AT 2017gfo-like transients in the Swope Supernova Survey. They obtain an upper limit of 16000~\mathrm{Gpc^{-3}\,yr^{-1}}. They use this to estimate the probability that AT 2017gfo and GW170817 are just a chance coincidence and are actually unrelated. The probability is 9 \times 10^{-6} at 90% confidence.

Smartt et al. (2017) estimate the kilonova rate from the ATLAS survey; they calculate a 95% upper limit of 30000~\mathrm{Gpc^{-3}\,yr^{-1}}, safely above our range.

Yang et al. (2017) calculate upper limits from the DLT40 Supernova survey. Depending upon the reddening assumed, this is between 93000^{+16000}_{-18000}~\mathrm{Gpc^{-3}\,yr^{-1}} and 109000^{+28000}_{-18000}~\mathrm{Gpc^{-3}\,yr^{-1}}. Their figure 3 shows that this is well above expected rates.

Zhang et al. (2017) is interested in the rate of gamma-ray bursts. If you know the rate of short gamma-ray bursts and of binary neutron star mergers, you can learn something about the beaming angle of the jet. The smaller the jet, the less likely we are to observe a gamma-ray burst. In order to do this, they do their own back-of-the-envelope estimate of the gravitational-wave rate. They get 1100_{-910}^{+2500}~\mathrm{Gpc^{-3}\,yr^{-1}}. That’s not too bad, but do stick with our result.

If you’re interested in the future prospects for kilonova detection, I’d recommend Scolnic et al. (2017). Check out their Table 2 for detection rates (assuming a rate of 1000~\mathrm{Gpc^{-3}\,yr^{-1}}): LSST and WFIRST will see lots, about 7 and 8 per year respectively.
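To see how a rate density translates into expected detections, here is a back-of-the-envelope sketch: multiply the rate density by the surveyed volume and the observing time. The 170 Mpc binary neutron star range below is just an illustrative number for advanced detectors around design sensitivity, not a value from these papers:

```python
import math

def expected_detections(rate_density_gpc3_yr, range_mpc, years=1.0):
    """Expected detections for a given rate density and detector range.

    The 'range' is defined so that (4/3) * pi * range^3 is the volume
    surveyed after averaging over sky position and orientation.
    """
    volume_gpc3 = (4.0 / 3.0) * math.pi * (range_mpc / 1000.0) ** 3
    return rate_density_gpc3_yr * volume_gpc3 * years

# Our median rate of 1540 Gpc^-3 yr^-1 with an assumed 170 Mpc range
# gives a few tens of binary neutron star detections per year.
print(f"{expected_detections(1540, 170):.0f} per year")
```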

Using later observational constraints on the jet structure, Gupta & Bartos (2018) use the short gamma-ray burst rate to estimate a binary neutron star merger rate of 500~\mathrm{Gpc^{-3}\,yr^{-1}}. They project that around 30% of gravitational-wave detections will be accompanied by gamma-ray bursts, once LIGO and Virgo reach design sensitivity.

Della Valle et al. (2018) calculate an observable kilonova rate of 352_{-281}^{+810}~\mathrm{Gpc^{-3}\,yr^{-1}}. To match up to our binary neutron star merger rate, we either need only a fraction of binary neutron star mergers to produce kilonovae or for them to only be observable for viewing angles of less than 40^\circ. Their table 2 contains a nice compilation of rates for short gamma-ray bursts.

The electromagnetic story

Here are some notes giving an incomplete overview of the papers describing the electromagnetic discovery. For observational data, I’d recommend looking at the Open Kilonova Project.

Independently of our gravitational-wave detection, a short gamma-ray burst GRB 170817A was observed by Fermi-GBM (Goldstein et al. 2017). Fermi-LAT did not see anything, as it was offline while crossing through the South Atlantic Anomaly. At the time of the merger, INTEGRAL was following up the location of GW170814; fortunately, this meant it could still observe the location of GW170817, and following the alert they found GRB 170817A in their data (Savchenko et al. 2017).

Following up on our gravitational-wave localization, an optical transient AT 2017gfo was discovered. The discovery was made by the One-Meter Two-Hemisphere (1M2H) collaboration using the Swope telescope at the Las Campanas Observatory in Chile; they designated the transient as SSS17a (Coulter et al. 2017). That same evening, several other teams also found the transient within an hour of each other:

  • The Distance Less Than 40 Mpc (DLT40) search found the transient using the PROMPT 0.4-m telescope at the Cerro Tololo Inter-American Observatory in Chile; they designated the transient DLT17ck (Valenti et al. 2017).
  • The VINROUGE collaboration (I think, they don’t actually identify themselves in their own papers) found the transient using VISTA at the European Southern Observatory in Chile (Tanvir et al. 2017). Their paper also describes follow-up observations with the Very Large Telescope, the Hubble Space Telescope, the Nordic Optical Telescope and the Danish 1.54-m Telescope, and has one of my favourite introduction sections of the discovery papers.
  • The MASTER collaboration followed up with their network of global telescopes, and it was their telescope at the San Juan National University Observatory in Argentina which found the transient (Lipunov et al. 2017); they, rather catchily, denote the transient as OTJ130948.10-232253.3.
  • The Dark Energy Survey and the Dark Energy Camera GW–EM (DES and DECam) Collaboration found the transient with the DECam on the Blanco 4-m telescope, which is also at the Cerro Tololo Inter-American Observatory in Chile (Soares-Santos et al. 2017).
  • The Las Cumbres Observatory Collaboration used their global network of telescopes, with, unsurprisingly, their 1-m telescope at the Cerro Tololo Inter-American Observatory in Chile first imaging the transient (Arcavi et al. 2017). Their observing strategy is described in a companion paper (Arcavi et al. 2017), which also describes follow-up of GW170814.

From these, you can see that South America was the place to be for this event: it was night at just the right time.

There was a huge amount of follow-up across the infrared–optical–ultraviolet range of AT 2017gfo. Villar et al. (2017) attempts to bring these together in a consistent way. Their Figure 1 is beautiful.

Ultraviolet–infrared lightcurves

Assembled lightcurves from ultraviolet, optical and infrared observations of AT 2017gfo. The data points are the homogenized data, and the lines are fitted kilonova models. The blue light initially dominates but rapidly fades, while the red light undergoes a slower decay. Figure 1 of Villar et al. (2017).

Hinderer et al. (2018) use numerical relativity simulations to compare theory and observations for gravitational-wave constraints on the tidal deformation and the kilonova lightcurve. They find that observations could be consistent with a neutron star–black hole binary as well as a binary neutron star. Coughlin & Dietrich (2019) come to a similar conclusion. I think it’s unlikely that there would be a black hole this low mass, but it’s interesting that there are some simulations which can fit the observations.

AT 2017gfo was also the target of observations across the electromagnetic spectrum. An X-ray afterglow was observed 9 days post merger, and 16 days post merger, just as we thought the excitement was over, a radio afterglow was found.

The afterglow will continue to brighten for a while, so we can expect a series of updates:

  • Pooley, Kumar & Wheeler (2017) observed with Chandra 108 and 111 days post merger. Ruan et al. (2017) observed with Chandra 109 days post merger. The large gap in the X-ray observations from the initial observations is because the Sun got in the way.
  • Mooley et al. (2017) update the GROWTH radio results up to 107 days post merger (the largest span whilst still pre-empting new X-ray observations), observing with the Very Large Array, Australia Telescope Compact Array and Giant Metrewave Radio Telescope.

Excitingly, the afterglow has also now been spotted in the optical:

  • Lyman et al. (2018) observed with Hubble 110 (rest-frame) days post-merger (which is when the Sun was out of the way for Hubble). At this point the kilonova should have faded away, but they found something, and this is quite blue. The conclusion is that it’s the afterglow, and it will peak in about a year.
  • Margutti et al. (2018) bring together Chandra X-ray observations, Very Large Array radio observations and Hubble optical observations. The Hubble observations are 137 days post merger, and the Chandra observations are 153 days and 163 days post-merger. They find that they all agree (including the tentative radio signal at 10 days post-merger). They argue that the emission disfavours on-axis jets and spherical fireballs.
Evolution of radio, optical and X-ray fluxes to 160 days

Evolution of radio, optical and X-ray spectral energy density of the counterpart to GW170817. The radio and X-ray are always dominated by the afterglow, as indicated by them following the same power law. At early times, the optical is dominated by the kilonova, but as this fades, the afterglow starts to dominate. Figure 1 of Margutti et al. (2018).

The afterglow is fading.

  • D’Avanzo et al. (2018) observed in X-ray 135 days post-merger with XMM-Newton. They find that the flux has faded compared to the previous trend. They suggest that we’re just at the turn-over, so this is consistent with the most recent Hubble observations.
  • Resmi et al. (2018) observed at low radio frequencies with the Giant Metrewave Radio Telescope. They saw the signal at 1390~\mathrm{MHz} from 67 days post-merger, but this evolves little over the duration of their observations (to day 152 post-merger), also suggesting a turn-over.
  • Dobie et al. (2018) observed in radio 125–200 days post-merger with the Very Large Array and Australia Telescope Compact Array, and they find that the afterglow is starting to fade, with a peak at 149 ± 2 days post-merger.
  • Nynka et al. (2018) made X-ray observations at 260 days post-merger. They conclude the afterglow is definitely fading, and that this is not because of passing of the synchrotron cooling frequency.
  • Mooley et al. (2018) observed in radio to 298 days. They find the turn-over around 170 days. They argue that results support a narrow, successful jet.
  • Troja et al. (2018) observed in radio and X-ray to 359 days. The fading is now obvious, and starting to reveal something about the jet structure. Their best fits seem to favour a structured relativistic jet or a wide-angled cocoon.
  • Lamb et al. (2018) observed in optical to 358 days. They infer a peak around 140–160 days. Their observations are well fit either by a Gaussian structured jet or a two-component jet (with the second component being the cocoon), although the two-component model doesn’t fit early X-ray observations well. They conclude there must have been a successful jet of some form.
Light curves for Gaussian jet and observations

Radio, optical and X-ray observations to 358 days after merger. The coloured lines show fitted Gaussian jet models. Figure 3 of Lamb et al. (2018).

  • Fong et al. (2019) observe in optical to 584 days post-merger, combined with observation in radio to 585 days post-merger and in X-ray 583 days post-merger. These observations favour a structured jet over a quasi-spherical outflow. Hajela et al. (2019) extend the radio and X-ray observations even further, out to 743 days post-merger.
Optical, radio and X-ray observations of GW170817's afterglow

Left: Optical afterglow observed until 584 days post-merger together with predictions for a structured jet and a quasi-spherical outflow (Wu & MacFadyen 2018). Right: Radio, optical and X-ray observations to 535 days, 534 days and 533 days post-merger, respectively. Triangles denote upper limits. Figures 2 and 3 of Fong et al. (2019).

  • Troja et al. (2020) observed with Chandra between 935 and 942 days post-merger, and see a nice decline, consistent with a spreading jet. They also looked in radio, but didn’t find anything.
  • Makhathini et al. (2020) compile a uniform set of radio, optical and X-ray afterglow observations. Their data set covers 0.5 to 940 days post-merger. It really is a lovely data set!
Scaled optical, radio and X-ray observations of GW170817's afterglow

Optical, radio and X-ray light-curves, scaled by a best-fit spectral index so that the different observations lie on top of each other, for GW170817’s afterglow. The top panel shows the individual observations, labelled by observatory and observing band. The bottom panel shows a moving average. Figure 1 of Makhathini et al. (2020).

  • Balasubramanian et al. (2021) continue to obtain radio and X-ray observations until 1270 days post-merger. The radio is as expected for a structured jet, but there may be some brightening in the X-ray?
  • Hajela et al. (2021) do find that there is a brightening in the X-ray after around 900 days. However, there is nothing in the radio. This could suggest some form of kilonova afterglow (which may argue against a prompt collapse to a black hole), or it could be from accretion onto the remnant. Either would be an interesting observation.
  • Troja et al. (2021) reanalyse the X-ray data, checking the calibration. They do not find a rise, but do find an excess at late times that is difficult to explain with just the jet afterglow, suggesting that there is some extra emission like a kilonova afterglow.
  • Balasubramanian et al. (2022) perform 3 GHz Very Large Array observations until 29 March 2022. They no longer detect the radio emission, but instead place an upper limit. This suggests no rebrightening.
X-ray and radio observations of GW170817's afterglow

X-ray (top) and radio (bottom) observations from Chandra and the Very Large Array, respectively. The X-ray observations show an excess after around 900 days, but there is no sign of this in radio. The red and orange lines show estimated synchrotron emission for different power laws. The grey curve shows synchrotron emission from the dynamical ejecta of a kilonova from a numerical relativity simulation of a neutron star merger. Figure 2 of Hajela et al. (2021).

The story of the most ambitious cross-over of astronomical observations might now be coming to an end?

Shapiro delay

Using the time delay between GW170817 and GRB 170817A, a few other teams also did their own estimation of the Shapiro delay before they knew what was in our GW170817 Gamma-ray Burst Paper.

Our estimate of -2.6 \times 10^{-7} \leq \gamma_\mathrm{GW} - \gamma_\mathrm{EM} \leq 1.2 \times 10^{-6} is the most conservative.

Comparison to other gamma-ray bursts

Are the electromagnetic counterparts to GW170817 similar to what has been observed before?

Yue et al. (2017) compare GRB 170817A with other gamma-ray bursts. It is low luminosity, but it may not be alone. There could be other bursts like it (perhaps GRB 070923, GRB 080121 and GRB 090417A), if indeed they are from nearby sources. They suggest that GRB 130603B may be the on-axis equivalent of GRB 170817A [citation note]; however, the non-detection of kilonovae for several bursts indicates that there needs to be some variation in their properties too. This agrees with the results of Gompertz et al. (2017), who compare the GW170817 observations with other kilonovae: it is fainter than the other candidate kilonovae (GRB 050709, GRB 060614, GRB 130603B and tentatively GRB 160821B), but brighter than upper limits from other bursts. There must be a diversity in kilonovae observations. Fong et al. (2017) look at the diversity of afterglows (across X-ray to radio), and again find GW170817’s counterpart to be faint. This is probably because we are off-axis. The most comprehensive study is von Kienlin et al. (2019), who search ten years of Fermi archives and find 13 GRB 170817A-like short gamma-ray bursts: GRB 081209A, GRB 100328A, GRB 101224A, GRB 110717A, GRB 111024C, GRB 120302B, GRB 120915A, GRB 130502A, GRB 140511A, GRB 150101B, GRB 170111B, GRB 170817A and GRB 180511A. There is a range of behaviours in these, with the shorter GRBs showing fast variability. Future observations will help unravel how much variation there is from viewing different angles, and how much intrinsic variation there is from the source—perhaps some short gamma-ray bursts come from neutron star–black hole binaries?

Inclination, jets and ejecta

Pretty much every observational paper has a go at estimating the properties of the ejecta, the viewing angle or something about the structure of the jet. I may try to pull these together later, but I’ve not had time yet as it is a very long list! Most of the inclination measurements assumed a uniform top-hat jet, which we now know is not a good model.

In my non-expert opinion, the later results seem more interesting. With very-long baseline interferometry radio observations to 230 days post-merger, Mooley et al. (2018) claim that while the early radio emission was powered by the wide cocoon of a structured jet, the later emission is dominated by a narrow, energetic jet. There was a successful jet, so we would have seen something like a regular short gamma-ray burst on axis. They estimate that the jet opening angle is < 5~\mathrm{deg}, and that we are viewing it at an angle of 20 \pm 5~\mathrm{deg}. With X-ray and radio observations to 359 days, Troja et al. (2018) estimate (folding in gravitational-wave constraints too) that the viewing angle is 22 \pm 6~\mathrm{deg}, and the width of a Gaussian structured jet would be 3.4 \pm 1.1~\mathrm{deg}. Using a combination of gravitational-wave, optical, radio and X-ray data, Gianfagna et al. (2022) find a viewing angle of 34^{+2}_{-2}~\mathrm{deg} for a Gaussian structured jet, where they estimate the width to be 6.2^{+0.4}_{-0.5}~\mathrm{deg}. Using broadband synchrotron data to 800 days with a boosted fireball model, McDowell & MacFadyen (2023) estimate the viewing angle is 30^{+7}_{-8}~\mathrm{deg}.

Hubble constant and misalignment

Guidorzi et al. (2017) try to tighten the measurement of the Hubble constant by using radio and X-ray observations. Their modelling assumes a uniform jet, which doesn’t look like a currently favoured option [citation note], so there is some model-based uncertainty to be included here. Additionally, the jet is unlikely to be perfectly aligned with the orbital angular momentum, which may add a couple of degrees more uncertainty.

Mandel (2018) works the other way and uses the recent Dark Energy Survey Hubble constant estimate to bound the misalignment angle to less than 28~\mathrm{deg}, which (unsurprisingly) agrees pretty well with the result we obtained using the Planck value. Finstad et al. (2018) use the luminosity distance from Cantiello et al. (2018) [citation note] as a (Gaussian) prior for an analysis of the gravitational-wave signal, and get a misalignment of 32^{+10}_{-13}\pm 2~\mathrm{deg} (where the errors are statistical uncertainty and an estimate of systematic error from calibration of the strain).

Hotokezaka et al. (2018) use the inclination results from Mooley et al. (2018) [citation note] (together with the updated posterior samples from the GW170817 Properties Paper) to infer a value of h = 0.689^{+0.047}_{-0.046} (quoting median and 68% symmetric credible interval). Using different jet models changes their value for the Hubble constant a little; the choice of spin prior does not (since we get basically all of the inclination information from their radio observations). The result is still consistent with Planck and SH0ES, but is closer to the Planck value.

GW170817 Hubble constant with inclination measurements

Posterior probability distribution for the Hubble constant inferred from GW170817 using only gravitational waves (GWs), and folding in models for the power-law jet (PLJ) model and very-long baseline interferometry (VLBI) radio observations. The lines mark symmetric 68% intervals. The coloured bands are measurements from the cosmic microwave background (Planck) and supernovae (SH0ES). Figure 2 of Hotokezaka et al. (2018).

Dhawan et al. (2019) use broadband photometry of the kilonova to estimate the observation angle as 32.5^{+11.7}_{-9.7}~\mathrm{deg}. Combining this with results from the Hubble Constant Paper they find h = 0.724^{+0.079}_{-0.073}.

Palmese et al. (2023) use afterglow observations until 3.5 years post-merger to measure the Hubble constant. They infer a viewing angle of 30.4^{+2.9}_{-1.7}~\mathrm{deg}, and hence h = 0.755^{+0.053}_{-0.054}.

NGC 4993 properties

In the GW170817 Progenitor Paper we used component properties for NGC 4993 from Lim et al. (2017): a stellar mass of (10^{10.454}/h^2) M_\odot and a dark matter halo mass of (10^{12.2}/h) M_\odot, where we use the Planck value of h = 0.679 (but conclusions are similar using the SH0ES value for this).
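Plugging in h = 0.679, those scaled values work out to roughly

M_\ast = \frac{10^{10.454}}{h^2} M_\odot \approx 6.2 \times 10^{10} M_\odot, \qquad M_\mathrm{halo} = \frac{10^{12.2}}{h} M_\odot \approx 2.3 \times 10^{12} M_\odot,

which makes it easier to compare with the stellar masses quoted by the other papers below.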

Blanchard et al. (2017) estimate a stellar mass of about \log(M_\ast/M_\odot) = 10.65^{+0.03}_{-0.03}. They also look at the star formation history: 90% of the stars were formed by 6.8^{+2.2}_{-0.8}~\mathrm{Gyr} ago, and the median mass-weighted stellar age is 13.2^{+0.5}_{-0.9}~\mathrm{Gyr}. From this they infer a merger delay time of 6.8–13.6~\mathrm{Gyr}. From this, and assuming that the system was born close to its current location, they estimate that the supernova kick V_\mathrm{kick} \leq 200~\mathrm{km\,s^{-1}}, towards the lower end of our estimate. They use h = 0.677.

Im et al. (2017) find a mean stellar mass of 0.3–1.2 \times 10^{11} M_\odot and the mean stellar age is greater than about 3~\mathrm{Gyr}. They also give a luminosity distance estimate of 38.4 \pm 8.9~\mathrm{Mpc}, which overlaps with our gravitational-wave estimate. I’m not sure what value of h they are using.

Levan et al. (2017) suggest a stellar mass of around 1.4 \times 10^{11} M_\odot. They find that 60% of stars by mass are older than 5~\mathrm{Gyr} and that less than 1% are less than 0.5~\mathrm{Gyr} old. Their Figure 5 has some information on likely supernova kicks; they conclude it was probably small, but don’t quantify this. They use h = 0.696.

Pan et al. (2017) find \log(M_\ast/M_\odot) = 10.49^{+0.08}_{-0.20}. They calculate a mass-weighted mean stellar age of 10.97~\mathrm{Gyr} and a likely minimum age for GW170817’s source system of 2.8~\mathrm{Gyr}. They use h = 0.7.

Troja et al. (2017) find a stellar mass of \log(M_\ast/M_\odot) \sim 10.88, and suggest an old stellar population of age > 2~\mathrm{Gyr}.

Ebrová & Bílek (2018) assume a distance of 41.0~\mathrm{Mpc} and find a halo mass of 1.9–3.9 \times 10^{12} M_\odot. They suggest that NGC 4993 swallowed a smaller late-type galaxy somewhere between 0.2~\mathrm{Gyr} and 1~\mathrm{Gyr} ago, most probably around 0.4~\mathrm{Gyr} ago.

The consensus seems to be that the stellar population is old (and not much else). Fortunately, the conclusions of the GW170817 Progenitor Paper are pretty robust for delay times longer than 1~\mathrm{Gyr} as seems likely.

A couple of other papers look at the distance of the galaxy:

The values are consistent with our gravitational-wave estimates.

The remnant’s fate

We cannot be certain what happened to the merger remnant from gravitational-wave observations alone. However, electromagnetic observations do give some hints here.

Evans et al. (2017) argue that their non-detection of X-rays when observing with Swift and NuSTAR indicates that there is no neutron star remnant at this point, meaning the remnant must have collapsed to form a black hole by 0.6 days post-merger. This isn’t too restrictive in terms of the different ways the remnant could collapse, but does exclude a stable neutron star remnant. MAXI also didn’t detect any X-rays 4.6 hours after the merger (Sugita et al. 2018).

Pooley, Kumar & Wheeler (2017) consider X-ray observations of the afterglow. They calculate that if the remnant was a hypermassive neutron star with a large magnetic field, the early (10 day post-merger) luminosity would be much higher (and we could expect to see magnetar outbursts). Therefore, they think it is more likely that the remnant is a black hole. However, Piro et al. (2018) suggest that if the spin-down of the neutron star remnant is dominated by losses due to gravitational wave emission, rather than electromagnetic emission, then the scenario is still viable. They argue that a tentatively identified X-ray flare seen 155 days post-merger could be evidence of dissipation of the neutron star’s toroidal magnetic field.

Kasen et al. (2017) use the observed red component of the kilonova to argue that the remnant must have collapsed to a black hole in < 10~\mathrm{ms}. A neutron star would irradiate the ejecta with neutrinos, lowering the neutron fraction and making the ejecta bluer. Since it is red, the neutrino flux must have been shut off, and the neutron star must have collapsed. We are in case b in their figure below.

Kilonova ejecta components

Cartoon of the different components of matter ejected from neutron star mergers. Red colours show heavy r-process elements and blue colours light r-process elements. There is a tidal tail of material forming a torus in the orbital plane, roughly spherical winds from the accretion disk, and material squeezed into the polar regions during the collision. In case a, we have a long-lived neutron star, and its neutrino irradiation leads to blue ejecta. In case b the neutron star collapses, cutting off the neutrino flux. In case c, there is a neutron star–black hole merger, and we don’t have the polar material from the collision. Figure 1 of Kasen et al. (2017); also see Figure 1 of Margalit & Metzger (2017).

Ai et al. (2018) find that there are some corners of parameter space for certain equations of state where a long-lived neutron star is possible, even given the observations. Therefore, we should remain open minded.

Margalit & Metzger (2017) and Bauswein et al. (2017) note that the relatively large amount of ejecta inferred from observations [citation note] is easier to explain when the collapse to a black hole is delayed (on timescales of > 10~\mathrm{ms}). This is difficult to resolve unless neutron star radii are small (\lesssim 11~\mathrm{km}). Metzger, Thompson & Quataert (2018) derive how this tension could be resolved if the remnant was a rapidly spinning magnetar with a lifetime of 0.1–1~\mathrm{s}. Matsumoto et al. (2018) suggest that the optical emission is powered by the jet and material accreting onto the central object, rather than r-process decay, and this permits much smaller amounts of ejecta, which could also solve the issue. Yu & Dai (2017) suggest that accretion onto a long-lived neutron star could power the emission, and would only require a single opacity for the ejecta. Li et al. (2018) put forward a similar theory, arguing that both the high ejecta mass and low opacity are problems for the standard r-process explanation, but fallback onto a neutron star could work. However, Margutti et al. (2018) say that X-ray emission powered by a central engine is disfavoured at all times.

In conclusion, it seems probable that we ended up with a black hole, and we had an unstable neutron star for a short time after merger, but I don’t think it’s yet settled how long this was around.

Gill, Nathanail & Rezzolla (2019) considered how long it would take to produce the observed amount of ejecta, and the relative amounts of red and blue ejecta, as well as the delay time between the gravitational-wave measurement of the merger and the observation of the gamma-ray burst, to estimate how long it took the remnant to collapse to a black hole. They find a lifetime of 0.98^{+0.31}_{-0.26}~\mathrm{s}.

Twin stars

We might not have two neutron stars with the same equation of state if they can undergo a phase transition. This would be kind of like if one were made up of fluffy marshmallow, and the other of gooey toasted marshmallow: they have the same ingredients, but in one the type of stuff has changed, giving it different physical properties. Standard neutron stars could be made of hadronic matter, kind of like a giant nucleus, but we could have another type where the hadrons break down into their component quarks. We could therefore have two neutron stars with similar masses but with very different equations of state. This is referred to as the twin star scenario. Hybrid stars which have quark cores surrounded by hadronic outer layers are often discussed in this context.

Neutron star equation of state

Several papers have explored what we can deduce about the nature of neutron star stuff™ from gravitational-wave or electromagnetic observations of the neutron star coalescence. It is quite a tricky problem. Below are some investigations into the radii of neutron stars and their tidal deformations; these seem compatible with the radii inferred in the GW170817 Equation-of-state Paper.

Bauswein et al. (2017) argue that the amount of ejecta inferred from the kilonova is too large for there to have been a prompt collapse to a black hole [citation note]. Using this, they estimate that a non-rotating neutron star of mass 1.6~\mathrm{M_\odot} has a radius of at least 10.68_{-0.04}^{+0.15}~\mathrm{km}. They also estimate that the radius for the maximum mass non-rotating neutron star must be greater than 9.60_{-0.03}^{+0.14}~\mathrm{km}. Köppel, Bovard & Rezzolla (2019) calculate a similar, updated analysis, using a new approach to fit for the maximum mass of a neutron star; they find that the radius of a 1.6~\mathrm{M_\odot} neutron star is greater than 10.90~\mathrm{km}, and that of a 1.4~\mathrm{M_\odot} neutron star is greater than 10.92~\mathrm{km}.

Annala et al. (2018) combine our initial measurement of the tidal deformation with the requirement that the equation of state supports a 2 M_\odot neutron star (which they argue requires that the tidal deformation of a 1.4 M_\odot neutron star is at least 120). They argue that the latter condition implies that the radius of a 1.4 M_\odot neutron star is at least 9.9~\mathrm{km} and the former that it is less than 13.6~\mathrm{km}.

Radice et al. (2018) combine together observations of the kilonova (the amount of ejecta inferred) with gravitational-wave measurements of the masses to place constraints on the tidal deformation. From their simulations, they argue that to explain the ejecta, the combined dimensionless tidal deformability must be \tilde{\Lambda} > 400. This is consistent with results in the GW170817 Properties Paper, but would eliminate the main peak of the distribution we inferred from gravitational waves alone. However, Kiuchi et al. (2019) show that it is possible to get the required ejecta for smaller tidal deformations, depending upon assumptions about the maximum neutron star mass (higher masses allow smaller tidal deformations) and the asymmetry of the binary components.

Lim & Holt (2018) perform some equation-of-state calculations. They find that their particular method (chiral effective theory) is already in good agreement with estimates of the maximum neutron star mass and tidal deformations. Which is nice. Using their models, they predict that for GW170817’s chirp mass \tilde{\Lambda} = 532^{+106}_{-119}.

Raithel, Özel & Psaltis (2018) argue that for a given chirp mass, \tilde{\Lambda} is only a weak function of component masses, and depends mostly on the radii. Therefore, from our initial inferred value, they put a 90% upper limit on the radii of 13~\mathrm{km}.

Most et al. (2018) consider a wide range of parametrised equations of state. They consider both hadronic (made up of particles like neutrons and protons) equations of state, and ones where they undergo phase transitions (with hadrons breaking into quarks), which could potentially mean that the two neutron stars have quite different properties [citation note]. A number of different constraints are imposed, to give a selection of potential radius ranges. Combining the requirement that neutron stars can be up to 2.01 M_\odot (Antoniadis et al. 2013), the maximum neutron star mass of 2.17 M_\odot inferred by Margalit & Metzger (2017), our initial gravitational-wave upper limit on the tidal deformation and the lower limit from Radice et al. (2018), they estimate that the radius of a 1.4 M_\odot neutron star is 12.00–13.45~\mathrm{km} for the hadronic equation of state. For the equation of state with the phase transition, they do the same, but without the tidal deformation from Radice et al. (2018), and find the radius of a 1.4 M_\odot neutron star is 8.53–13.74~\mathrm{km}.

Paschalidis et al. (2018) consider in more detail the idea equations of state with hadron–quark phase transitions, and the possibility that one of the components of GW170817’s source was a hadron–quark hybrid star. They find that the initial tidal measurements are consistent with this.

Burgio et al. (2018) further explore the possibility that the two binary components have different properties. They consider both there being a hadron–quark phase transition, and also that one star is hadronic and the other is a quark star (made up of deconfined quarks, rather than ones packaged up inside hadrons). X-ray observations indicate that neutron stars have radii in the range 9.9–11.2~\mathrm{km}, whereas most of the radii inferred for GW170817’s components are larger. This paper argues that this can be resolved if one of the components of GW170817’s source was a hadron–quark hybrid star or a quark star.

De et al. (2018) perform their own analysis of the gravitational-wave signal, with a variety of different priors on the component masses. They assume that the two neutron stars have the same radii. In the GW170817 Equation-of-state Paper we find that the difference can be up to about 2~\mathrm{km}, which I think makes this an OK approximation; Zhao & Lattimer (2018) look at this in more detail. Within their approximation, they estimate the neutron stars to have a common radius of 8.9–13.2~\mathrm{km}.

Malik et al. (2018) use the initial gravitational-wave upper bound on tidal deformation and the lower bound from Radice et al. (2018) in combination with several equations of state (calculated using relativistic mean field and Skyrme Hartree–Fock recipes, which sound delicious). For a 1.4 M_\odot neutron star, they obtain a tidal deformation in the range 344–859 and a radius in the range 11.82–13.72~\mathrm{km}.

Radice & Dai (2018) do their own analysis of our gravitational-wave data (using relative binning) and combine this with an analysis of the electromagnetic observations using models for the accretion disc. They find that the areal radius of a 1.4 M_\odot neutron star is 12.2^{+1.0}_{-0.8} \pm 0.2~\mathrm{km}. These results are in good agreement with ours; their inclusion of electromagnetic data pushes their combined results towards larger values for the tidal deformation.

Montaña et al. (2018) consider twin star scenarios [citation note] where we have a regular hadronic neutron star and a hybrid hadron–quark star. They find the data are consistent with neutron star–neutron star, neutron star–hybrid star or hybrid star–hybrid star binaries. Their Table II is a useful collection of results for the radius of a 1.4 M_\odot neutron star, including the possibility of phase transitions.

Coughlin et al. (2018) use our LIGO–Virgo results and combine them with constraints from the observation of the kilonova (combined with fits to numerical simulations) and the gamma-ray burst. The electromagnetic observations give some extra information on the tidal deformability, mass ratio and inclination. They use the approximation that the neutron stars have equal radii. They find that the tidal deformability \tilde{\Lambda} has a 90% interval of 279–822 and the neutron star radius is 11.1–13.4~\mathrm{km}.

Zhou, Chen & Zhang (2019) use data from heavy ion collider experiments, which constrain the properties of nuclear density stuff™ at one end of the spectrum, the existence of 2 M_\odot neutron stars, and our GW170817 Equation-of-state Paper constraints on the tidal deformation to determine that the radius of a 1.4 M_\odot neutron star is 11.1–13.3~\mathrm{km}.

Kumar & Landry (2019) use the GW170817 Equation-of-state Paper constraints, and combine these with electromagnetic constraints to get an overall tidal deformability measurement. They use observations of X-ray bursters from Özel et al. (2016), which give mass and radius measurements, and translate these using universal relations. Their overall result is that the tidal deformability of a 1.4 M_\odot neutron star is 112^{+46}_{-33}.

Gamba, Read & Wade (2019) estimate the systematic error in the GW170817 Equation-of-state Paper results for the neutron star radius which may have been introduced from assumptions about the crust’s equation of state. They find that the error could be 0.3~\mathrm{km} (about 3%).

Later papers start to use GW190425 (spoilers), so I’ll not go further here. However, Zhu, Li & Liu (2022) combine our gravitational-wave data with kilonova observations, and the results of NICER. They find the radius of a 1.4 M_\odot neutron star is 11.64^{+0.21}_{-0.23}~\mathrm{km}.

GW170608—The underdog

Detected in June, GW170608 has had a difficult time. It was challenging to analyse, and neglected in favour of its louder and shinier siblings. However, we can now introduce you to our smallest chirp-mass binary black hole system!

Family of adorable black holes

The growing family of black holes. From Dawn Finney.

Our family of binary black holes is now growing large. During our first observing run (O1) we found three: GW150914, LVT151012 and GW151226. The second advanced-detector observing run (O2) ran from 30 November 2016 to 25 August 2017 (with a couple of short breaks). From our O1 detections, we were expecting roughly one binary black hole per month. The first came in January, GW170104, and we have announced GW170814, the first detection to involve Virgo, from August, so you might be wondering what happened in-between? Pretty much everything was dropped following the detection of our first binary neutron star system, GW170817, as a sizeable fraction of the astronomical community managed to observe its electromagnetic counterparts. Now, we are starting to dig our way out of the O2 back-log.

On 8 June 2017, a chirp was found in data from LIGO Livingston. At the time, LIGO Hanford was undergoing planned engineering work [bonus explanation]. We would not normally analyse this data, as the detector is disturbed; however, we had to follow up on the potential signal in Livingston. Only low frequency data in Hanford should have been affected, so we limited our analysis to above 30 Hz (this sounds easier than it is—I was glad I was not on rota to analyse this event [bonus note]). A coincident signal was found [bonus note]. Hello GW170608, the June event!

Normalised spectrograms for GW170608

Time–frequency plots for GW170608 as measured by LIGO Hanford and Livingston. The chirp is clearer in Hanford, despite it being the less sensitive detector, because of the source’s position. Figure 1 of the GW170608 Paper.

Analysing data from both Hanford and Livingston (limiting Hanford to above 30 Hz) [bonus note], GW170608 was found by both of our offline searches for binary signals. PyCBC detected it with a false alarm rate of less than 1 in 3000 years, and GstLAL estimated a false alarm rate of 1 in 160,000 years. The signal was also picked up by coherent WaveBurst, which doesn’t use waveform templates, and so is more flexible in what it can detect at the cost of sensitivity: this analysis estimates a false alarm rate of about 1 in 30 years. GW170608 probably isn’t a bit of random noise.

GW170608 comes from a low-mass binary. Well, relatively low mass for a binary black hole. For low-mass systems, we can measure the chirp mass \mathcal{M}, the particular combination of the two black hole masses which governs the inspiral, well. For GW170608, the chirp mass is 7.9_{-0.2}^{+0.2} M_\odot. This is the smallest chirp mass we’ve ever measured; the next smallest is GW151226 with 8.9_{-0.3}^{+0.3} M_\odot. GW170608 is probably the lowest mass binary we’ve found—the total mass and individual component masses aren’t as well measured as the chirp mass, so there is a small probability (~11%) that GW151226 is actually lower mass. The plot below compares the two.
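
As a rough illustration of why the chirp mass is the quantity we pin down, here is a minimal Python sketch of the chirp mass formula \mathcal{M} = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5}; the component masses below are illustrative round numbers of about the right size for GW170608, not our inferred values:

def chirp_mass(m1, m2):
    """Chirp mass in the same units as the component masses."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

# Illustrative component masses in solar masses (roughly the right scale for GW170608)
print(chirp_mass(12.0, 7.0))   # ~7.9, close to the measured chirp mass
print(chirp_mass(10.0, 8.3))   # a quite different mass ratio gives almost the same value

Quite different component masses can give nearly the same chirp mass, which is why the individual masses (and hence the total mass) are much less well constrained than \mathcal{M}.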

Binary black hole masses

Estimated masses m_1 \geq m_2 for the two black holes in the binary. The two-dimensional plot shows the probability distribution for GW170608 as well as 50% and 90% contours for GW151226, the other contender for the lightest black hole binary. The one-dimensional plots on the sides show results using different waveform models. The dotted lines mark the edge of our 90% probability intervals. The one-dimensional plots at the top show the probability distributions for the total mass M and chirp mass \mathcal{M}. Figure 2 of the GW170608 Paper. I think this plot is neat.

One caveat with regards to the masses is that the current results only consider spin magnitudes up to 0.89, as opposed to the usual 0.99. There is a correlation between the mass ratio and the spins: you can have a more unequal mass binary with larger spins. There’s not a lot of support for large spins, so it shouldn’t make too much difference. We use the full range in the updated analysis in the O2 Catalogue Paper.

Speaking of spins, GW170608 seems to prefer small spins aligned with the angular momentum; spins are difficult to measure, so there’s a lot of uncertainty here. The best measured combination is the effective inspiral spin parameter \chi_\mathrm{eff}. This is a mass-weighted combination of the spin components aligned with the orbital angular momentum. For GW170608 it is 0.07_{-0.09}^{+0.23}, so consistent with zero and leaning towards being small and positive. For GW151226 it was 0.21_{-0.10}^{+0.20}, and we could exclude zero spin (at least one of the black holes must have some spin). The plot below shows the probability distribution for the two component spins (you can see the cut at a maximum magnitude of 0.89). We prefer small spins, and generally prefer spins in the upper half of the plots, but we can’t make any definite statements other than that both spins aren’t large and antialigned with the orbital angular momentum.
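
Explicitly, the effective inspiral spin is (using the standard definition)

\chi_\mathrm{eff} = \frac{m_1 \chi_1 \cos\theta_1 + m_2 \chi_2 \cos\theta_2}{m_1 + m_2},

where \chi_1 and \chi_2 are the dimensionless spin magnitudes and \theta_1 and \theta_2 are the tilt angles between each spin and the orbital angular momentum. It runs from -1 (both spins maximal and antialigned with the orbital angular momentum) to +1 (both maximal and aligned).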

Orientation and magnitudes of the two spins

Estimated orientation and magnitude of the two component spins. The distribution for the more massive black hole is on the left, and for the smaller black hole on the right. The probability is binned into areas which have uniform prior probabilities, so if we had learnt nothing, the plot would be uniform. This analysis assumed spin magnitudes less than 0.89, which is why there is an apparent cut-off. Part of Figure 3 of the GW170608 Paper. For the record, I voted against this colour scheme.

The properties of GW170608’s source are consistent with those inferred from observations of low-mass X-ray binaries (here the low-mass refers to the companion star, not the black hole). These are systems where mass overflows from a star onto a black hole, swirling around in an accretion disc before plunging in. We measure the X-rays emitted from the hot gas from the disc, and these measurements can be used to estimate the mass and spin of the black hole. The similarity suggests that all these black holes—observed with X-rays or with gravitational waves—may be part of the same family.

Inferred black hole masses

Estimated black hole masses inferred from low-mass X-ray binary observations. Figure 1 of Farr et al. (2011). The masses overlap those of the lower mass binary black holes found by LIGO and Virgo.

We’ll present updated merger rates and results for testing general relativity in our end-of-O2 paper. The low mass of GW170608’s source will make it a useful addition to our catalogue here. Small doesn’t mean unimportant.

Title: GW170608: Observation of a 19 solar-mass binary black hole coalescence
Journal: Astrophysical Journal Letters; 851(2):L35(11); 2017
arXiv: 1711.05578 [gr-qc] [bonus note]
Science summary: GW170608: LIGO’s lightest black hole binary?
Data release: LIGO Open Science Center

If you’re looking for the most up-to-date results regarding GW170608, check out the O2 Catalogue Paper.

Bonus notes

Detector engineering

A lot of time and effort goes into monitoring, maintaining and tweaking the detectors so that they achieve the best possible performance. The majority of work on the detectors happens during engineering breaks between observing runs, as we progress towards design sensitivity. However, some work is also needed during observing runs, to keep the detectors healthy.

On 8 June, Hanford was undergoing angle-to-length (A2L) decoupling, a regular maintenance procedure which minimises the coupling between the angular position of the test-mass mirrors and the measurement of strain. Our gravitational-wave detectors carefully measure the time taken for laser light to bounce between the test-mass mirrors in their arms. If one of these mirrors gets slightly tilted, then the laser could bounce off part of the mirror which is slightly closer or further away than usual: we would measure a change in travel time even though the length of the arm is the same. To avoid this, the detectors have control systems designed to minimise angular disturbances. Every so often, it is necessary to check that these are calibrated properly. To do this, the mirrors are given little pushes to rotate them in various directions, and we measure the output to see the impact.
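
The leading-order geometry is simple: if the laser spot sits a distance d away from the axis the mirror rotates about, a small tilt \theta shifts where the light reflects, giving an apparent length change of roughly

\delta l \approx d\,\theta.

So keeping the spot centred (small d) and the angular motion small (small \theta) keeps this spurious signal out of the strain measurement. (This is just the simplest picture; the full coupling budget is more involved.)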

Coupling of angular disturbances to length

Examples of how angular fluctuations can couple to length measurements: pitch rotations p in the suspension level above the test mass (L3 is the test mass, L2 is the level above) can couple to the length measurement l. Yaw fluctuations (rotations about the vertical axis) can also have an impact. Figure 1 of Kasprzack & Yu (2016).

The angular pushes are done at specific frequencies, so we can tease apart the different effects of rotations in different directions. The frequencies are in the range 19–23 Hz, and 30 Hz is a safe cut-off for effects of the procedure (we see no disturbances above this frequency).
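
To give a flavour of what limiting the analysis to above 30 Hz means, here is a minimal Python sketch which suppresses the contaminated band with a high-pass filter. The actual analyses impose the cut-off in the likelihood calculation rather than by filtering the data like this, and the sample rate and toy signal below are made up for illustration:

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 4096.0                       # assumed sample rate (Hz)
t = np.arange(0, 4, 1 / fs)
# Toy "strain": a 20 Hz disturbance (like the A2L excitations) plus some higher-frequency content
strain = 1e-21 * (np.sin(2 * np.pi * 20 * t) + 0.1 * np.sin(2 * np.pi * 100 * t))

# 8th-order Butterworth high-pass at 30 Hz, applied forwards and backwards (zero phase shift)
sos = butter(8, 30.0, btype="highpass", fs=fs, output="sos")
clean = sosfiltfilt(sos, strain)  # the 20 Hz line is strongly suppressed, the 100 Hz content survives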

Impact of commissioning on Hanford data

Imprint of angular coupling testing in Hanford. The left panel shows a spectrogram of strain data; you can clearly see the excitations between ~19 Hz and ~23 Hz. The right panel shows the amplitude spectral density for Hanford before and during the procedure, as well as for Livingston. The procedure adds extra noise in the broad peak around 20 Hz. There are no disturbances above ~30 Hz. Figure 4 of the GW170608 Paper.

While we normally wouldn’t analyse data from during maintenance, we think it is safe to do so, after discarding the low-frequency data. If you are worried about the impact of including additional data in our rate estimates (there may be a bias from only using times when you know there are signals), you can be reassured that it’s only a small percentage of the total time, and so should introduce an error less significant than the uncertainty from the calibration accuracy of the detectors.

Parameter estimation rota

Unusually for an O2 event, Aaron Zimmerman was not on shift for the Parameter Estimation rota at the time of GW170608. Instead, it was Patricia Schmidt and Eve Chase who led this analysis. Due to the engineering work in Hanford, and the low mass of the system (which means a long inspiral signal), this was one of the trickiest signals to analyse: I’d say only GW170817 was more challenging (if you ignore all the extra work we did for GW150914 as it was the first time).

Alerts and follow-up

Since this wasn’t a standard detection, it took a while to send out an alert (about thirteen and a half hours). Since this is a binary black hole merger, we wouldn’t expect there to be anything to see with telescopes, so the delay isn’t as important as it would be for a binary neutron star. Several observing teams did follow up the alert. Details can be found in the GCN Circular archive. So far, papers on follow-up have appeared from:

  • CALET—a gamma-ray search. This paper includes upper limits for GW151226, GW170104, GW170608, GW170814 and GW170817.
  • DLT40—an optical search designed for supernovae. This paper covers the whole of O2 including GW170104–GW170814, GW170817, plus GW170809 and GW170823.
  • Mini-GWAC—an optical survey (the precursor to GWAC). This paper covers the whole of their O2 follow-up (including GW170104).
  • NOvA—a search for neutrinos and cosmic rays over a wide range of energies. This paper covers all the events from O1 and O2, plus triggers from O3.
  • The VLA and VLITE—radio follow-up, particularly targeting a potentially interesting gamma-ray transient spotted by Fermi.

Virgo?

If you are wondering about the status of Virgo: on 8 June it was still in commissioning ahead of officially joining the run on 1 August. We have data from the time of the event, but the sensitivity of the detector was not great. We often quantify detector sensitivity by quoting the binary neutron star range (the average distance at which a binary neutron star could be detected). Around the time of the event, this was something like 7–8 Mpc for Virgo. During O2, the LIGO detectors were typically in the 60–100 Mpc region; when Virgo joined O2, it had a range of around 25–30 Mpc. Unsurprisingly, Virgo didn’t detect the signal. We could have folded the data in for parameter estimation, but it was decided that the data were probably not well enough understood at the time to be worthwhile.

Journal

The GW170608 Paper is the first discovery paper to be made public before journal acceptance (although the GW170814 Paper was close, and we would probably have gone ahead with the announcement anyway). I have mixed feelings about this. On one hand, I like that the Collaboration is seen to take its detections seriously and follow the etiquette of peer review. On the other hand, I think it is good that we can get some feedback from the broader community on papers before they’re finalised. I think it was good that the first few were peer reviewed, as it gives us credibility, and it’s OK to relax now. Binary black holes are becoming routine.

This is also the first discovery paper not to go to Physical Review Letters. I don’t think there’s any deep meaning to this; the Collaboration just wanted some variety. Perhaps GW170817 sold everyone on the idea that we are astrophysicists now? Perhaps people thought that we’ve abused Physical Review Letters’ page limits too many times, and we really do need that appendix. I was still in favour of Physical Review Letters for this paper, if they would have had us, but I approve of sharing the love. There’ll be plenty more events.