However, it is important to have plans for research. While you might not be sure of the outcome, it is necessary to weigh the risks and rewards associated with the probable results before you invest your time and taxpayers’ money!

To help with planning and prioritising, researchers in astrophysics often pull together **white papers** [bonus note]. These are sketches of ideas for future research, arguing why they might be interesting. These can then be discussed within the community to help shape the direction of the field. If other scientists find the paper convincing, you can build support which helps push for funding. If there are gaps in the logic, others can point these out to save you from heading the wrong way. This type of consensus building is especially important for large experiments or missions—you don’t want to spend a billion dollars on something unless you’re *really* sure it is a good idea and *lots* of people agree.

I have been involved with a few white papers recently. Here are some key ideas for where research should go.

We’ve done some awesome things with Advanced LIGO and Advanced Virgo. In just a couple of years we have revolutionized our understanding of binary black holes. That’s not bad. However, our current gravitational-wave observatories are limited in what they can detect. What amazing things could we achieve with a new generation of detectors?

It can take decades to develop new instruments, so it’s important to start thinking about them early. Obviously, what we would most like is an observatory which can detect *everything*, but that’s not feasible. In this white paper, we pick the questions we most want answered, and see what the requirements for a new detector would be. A design which satisfies these specifications would therefore be a solid choice for future investment.

Binary black holes are the perfect source for ground-based detectors. What do we most want to know about them?

- **How many mergers are there, and how does the merger rate change over the history of the Universe?** We want to know how binary black holes are made. The merger rate encodes lots of information about how to make binaries, and comparing how it evolves with the rate at which the Universe forms stars will give us a deeper understanding of how black holes are made.
- **What are the properties (masses and spins) of black holes?** The merger rate tells us some things about how black holes form, but other properties like the masses, spins and orbital eccentricity complete the picture. We want to make precise measurements for individual systems, and also understand the population.
- **Where do supermassive black holes come from?** We know that stars can collapse to produce stellar-mass black holes. We also know that the centres of galaxies contain massive black holes. Where do these massive black holes come from? Do they grow from our smaller black holes, or do they form in a different way? Looking for intermediate-mass black holes in the gap in between will tell us whether there is a missing link in the evolution of black holes.

What can we do to answer these questions?

- **Increase sensitivity!** Advanced LIGO and Advanced Virgo can detect a binary out to a redshift of about . The planned detector upgrade A+ will see them out to redshift . That’s pretty impressive: it means we’re covering 10 billion years of history. However, the peak in the Universe’s star formation happens at around , so we’d really like to see beyond this in order to measure how the merger rate evolves. Ideally, we would see all the way back to cosmic dawn at , when the Universe was only 200 million years old and the first stars lit up.
- **Increase our frequency range!** Our current detectors are limited in the range of frequencies they can detect. Pushing to lower frequencies helps us to detect heavier systems. If we want to detect intermediate-mass black holes of , we need this low-frequency sensitivity. At the moment, Advanced LIGO could get down to about . The plot below shows the signal from a binary at . The signal is completely undetectable at .

Increasing sensitivity means that we will have higher signal-to-noise ratio detections. For these loudest sources, we will be able to make more precise measurements of the source properties. We will also have more detections overall, as we can survey a larger volume of the Universe. Increasing the frequency range means we can observe a longer stretch of the signal (for the systems we currently see), which makes it easier to measure spin precession and orbital eccentricity. We also get to measure a wider range of masses. Putting the improved sensitivity and frequency range together means that we’ll get better measurements of individual systems *and* a more complete picture of the population.

How much do we need to improve our observatories to achieve our goals? To quantify this, let’s consider the boost in sensitivity relative to A+, which I’ll call . If the questions can be answered with , then we don’t need anything beyond the currently planned A+. If we need a slightly larger , we should start investigating extra ways to improve the A+ design. If we need a much larger , we need to think about new facilities.

The plot below shows the boost necessary to detect a binary (with equal-mass nonspinning components) out to a given redshift. With a boost of (blue line) we can survey black holes around – across cosmic time.

The plot above shows that to see intermediate-mass black holes, we do need to completely overhaul the low-frequency sensitivity. What do we need to detect a binary at ? If we parameterize the noise spectrum (power spectral density) of our detector as , with a lower cut-off frequency of , we can investigate the various possibilities. The plot below shows the possible combinations of parameters which meet our requirements.

To build up information about the population of black holes, we need lots of detections. Uncertainties scale inversely with the square root of the number of detections, so you would expect a few percent uncertainty after 1000 detections. If we want to see how the population evolves, we need this many per redshift bin! The plot below shows the number of detections per year of observing time for different boost factors. The rate starts to saturate once we detect *all* the binaries in the redshift range. That is as good as you’re ever going to get.
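This square-root scaling is easy to check. A minimal sketch (the per-event scatter here is just a hypothetical normalisation):

```python
# Population uncertainties shrink as 1/sqrt(N) for N independent detections.
import math

def fractional_uncertainty(n_detections, per_event_scatter=1.0):
    """Fractional uncertainty on a population-average quantity
    after n_detections independent observations."""
    return per_event_scatter / math.sqrt(n_detections)

for n in (10, 100, 1000):
    print(n, fractional_uncertainty(n))
# After 1000 detections the uncertainty is ~3% of the per-event scatter.
```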

Looking at the plots above, it is clear that A+ is not going to satisfy our requirements. We need something with a boost factor of : a next-generation observatory. Both the Cosmic Explorer and Einstein Telescope designs do satisfy our goals.

**Title:** Deeper, wider, sharper: Next-generation ground-based gravitational-wave observations of binary black holes
**arXiv:** 1903.09220 [astro-ph.HE]

We have seen gravitational waves from a stellar-mass black hole merging with another stellar-mass black hole; can we observe a stellar-mass black hole merging with a massive black hole? Yes: these are a perfect source for a space-based gravitational wave observatory. We call these systems *extreme mass-ratio inspirals* (or **EMRIs**, pronounced em-rees, for short) [bonus note].

Having such an extreme mass ratio, with one black hole much bigger than the other, gives EMRIs interesting properties. The number of orbits over the course of an inspiral scales with the mass ratio: the more extreme the mass ratio, the more orbits there are. Each of these gives us something to measure in the gravitational wave signal.

As EMRIs are so intricate, we can make exquisite measurements of the source properties. These will enable us to:

- Measure the masses of both black holes to better than 10% precision
- Reconstruct the mass distribution of massive black holes out to redshift
- Measure massive black hole spins to a precision of better than 0.001, giving us an insight into how they formed
- Perform precision tests of the no-hair theorem describing black holes in general relativity, and test alternative theories of gravity in the strong-field regime
- Cross-correlate locations with galaxy catalogues to measure the expansion of the Universe

Event rates for EMRIs are currently uncertain: there could be just one per year or *thousands*. From the rate we can figure out the details of what is going on in the nuclei of galaxies, and what types of objects you find there.

With EMRIs you can unravel mysteries in astrophysics, fundamental physics and cosmology.

Have we sold you that EMRIs are awesome? Well then, what do we need to do to observe them? There is only *one* currently planned mission which can enable us to study EMRIs: **LISA**. To maximise the science from EMRIs, we have to support LISA.

**Title:** The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy
**arXiv:** 1903.03686 [astro-ph.HE]

Since white papers are proposals for future research, they aren’t as rigorous as usual academic papers. They are really attempts to figure out a good question to ask, rather than being answers. White papers are not usually peer reviewed before publication—the point is that you want everybody to comment on them, rather than just one or two anonymous referees.

Whilst white papers aren’t quite in the same class as journal articles, they do still contain some interesting ideas, so I thought they still merit a blog post.

I have blogged about EMRIs before, so I won’t go into too much detail here. It was one of my former blog posts which inspired the LISA Science Team to get in touch to ask me to write the white paper.

The new detections are largely consistent with our previous findings. GW170809, GW170818 and GW170823 are all similar to our first detection GW150914. Their black holes have masses around 20 to 40 times the mass of our Sun. I would lump GW170104 and GW170814 into this class too. Although there were models that predicted black holes of these masses, we weren’t sure they existed until our gravitational wave observations. The family of black holes continues outside this range. GW151012, GW151226 and GW170608 fall on the lower mass side. These overlap with the population of black holes previously observed in X-ray binaries. Lower mass systems can’t be detected as far away, so we find fewer of these. On the higher end we have GW170729 [bonus note]. Its source is made up of black holes with masses and (where is the mass of our Sun). The larger black hole is a contender for the most massive black hole we’ve found in a binary (the other probable contender is GW170823’s source, which has a black hole). We have a big happy family of black holes!

Of the new detections, GW170729, GW170809 and GW170818 were all observed by the Virgo detector as well as the two LIGO detectors. Virgo joined O2 for an exciting August [bonus note], and we decided that the data at the time of GW170729 were good enough to use too. Unfortunately, Virgo wasn’t observing at the time of GW170823. GW170729 and GW170809 are very quiet in Virgo: you can’t confidently say there is a signal there [bonus note]. However, GW170818 is a clear detection like GW170814. Well done, Virgo!

Using the collection of results, we can start to understand the physics of these binary systems. We will be summarising our findings in a series of papers. A huge amount of work went into these.

**Title:** GWTC-1: A gravitational-wave transient catalog of compact binary mergers observed by LIGO and Virgo during the first and second observing runs
**arXiv:** 1811.12907 [astro-ph.HE]

The paper summarises all our observations of binaries to date. It covers our first and second observing runs (O1 and O2). This is the paper to start with if you want any information. It contains estimates of parameters for all our sources, including updates for previous events. It also contains merger rate estimates for binary neutron stars and binary black holes, and an upper limit for neutron star–black hole binaries. We’re still missing a neutron star–black hole detection to complete the set.

**More details:** The O2 Catalogue Paper

**Title:** Binary black hole population properties inferred from the first and second observing runs of Advanced LIGO and Advanced Virgo
**arXiv:** 1811.12940 [astro-ph.HE]

Using our set of ten binary black holes, we can start to make some statistical statements about the population: the distribution of masses, the distribution of spins, the distribution of mergers over cosmic time. With only ten observations, we still have a lot of uncertainty, and can’t make too many definite statements. However, if you were wondering why we don’t see any black holes more massive than GW170729’s, even though we can see these out to significant distances, so were we. We infer that almost all stellar-mass black holes have masses less than .

**More details:** The O2 Populations Paper

**Synopsis:** O2 Catalogue Paper

**Read this if:** You want the most up-to-date gravitational results

**Favourite part:** It’s out! We can tell everyone about our FOUR new detections

This is a BIG paper. It covers our first two observing runs and our main searches for coalescing stellar mass binaries. There will be separate papers going into more detail on searches for other gravitational wave signals.

Gravitational wave detectors are complicated machines. You don’t just take them out of the box and press go. We’ll be slowly improving the sensitivity of our detectors as we commission them over the next few years. O2 marks the best sensitivity achieved to date. The paper gives a brief overview of the detector configurations in O2 for both LIGO detectors, which did differ, and Virgo.

During O2, we realised that one source of noise was beam jitter, disturbances in the shape of the laser beam. This was particularly notable in Hanford, where there was a spot on one of the optics. Fortunately, we are able to measure the effects of this, and hence subtract out this noise. This has now been done for the whole of O2. It makes a big difference! Derek Davis and TJ Massinger won the first LIGO Laboratory Award for Excellence in Detector Characterization and Calibration for implementing this noise subtraction scheme (the award citation almost spilled the beans on our new detections). I’m happy that GW170104 now has an increased signal-to-noise ratio, which means smaller uncertainties on its parameters.

We use three search algorithms in this paper. We have two matched-filter searches (GstLAL and PyCBC). These compare a bank of templates to the data to look for matches. We also use coherent WaveBurst (cWB), which is a search for generic short signals, but here has been tuned to find the characteristic chirp of a binary. Since cWB is more flexible in the signals it can find, it’s slightly less sensitive than the matched-filter searches, but it gives us confidence that we’re not missing things.

The two matched-filter searches both identify all 11 signals with the exception of GW170818, which is only found by GstLAL. This is because PyCBC only flags signals above a threshold in each detector. We’re confident it’s real though, as it is seen in all three detectors, albeit below PyCBC’s threshold in Hanford and Virgo. (PyCBC only looked for signals coincident between Livingston and Hanford in O2; I suspect they would have found it had they been looking at all three detectors, as that would have let them lower their threshold.)

The search pipelines try to distinguish between signal-like features in the data and noise fluctuations. Having multiple detectors is a big help here, although we still need to be careful in checking for correlated noise sources. The background of noise falls off quickly, so there’s a rapid transition between almost-certainly noise and almost-certainly signal. Most of the signals are off the charts in terms of significance, with GW170818, GW151012 and GW170729 being the least significant. GW170729 is found with the best significance by cWB, which reports a false alarm rate of .

The false alarm rate indicates how often you would expect to find something at least as signal-like if you were to analyse a stretch of data with the same statistical properties as the data considered, assuming that there is only noise in the data. The false alarm rate does not fold in the probability that there are real gravitational waves occurring at some average rate. Therefore, we need to do an extra layer of inference to work out the probability that something flagged by a search pipeline is a real signal rather than noise.
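The pipeline-specific details are beyond a blog post, but the logic of this extra layer can be sketched with a toy two-component model. Everything here (the densities and rates) is made up purely for illustration:

```python
# Toy model: the probability a trigger is astrophysical is the rate-weighted
# signal likelihood divided by the total (signal plus noise) likelihood.

def p_astro(signal_density, noise_density, signal_rate, noise_rate):
    """Probability of astrophysical origin for a trigger, given the densities
    of its ranking statistic under the signal and noise models, weighted by
    the expected rate of each category."""
    s = signal_rate * signal_density
    n = noise_rate * noise_density
    return s / (s + n)

# A loud trigger, far more likely under the signal model than the noise model:
print(p_astro(signal_density=5.0, noise_density=0.01,
              signal_rate=1.0, noise_rate=100.0))
```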

The results of this calculation are given in Table IV. GW170729 has a 94% probability of being real using the cWB results, 98% using the GstLAL results, but only 52% according to PyCBC. Therefore, if you’re feeling bold, you might, say, only wager the entire economy of the UK on it being real.

We also list the most marginal triggers. These all have probabilities well below 50% of being real: if you were to add them all up, you wouldn’t get a total of 1 real event. (In my professional opinion, they are garbage.) However, if you want to check for what we might have missed, these may be a place to start. Some of these can be explained away as instrumental noise, say scattered light. Others show no obvious signs of disturbance, so are probably just noise fluctuations.

We give updated parameter estimates for all 11 sources. These use updated estimates of calibration uncertainty (which doesn’t make too much difference), an improved estimate of the noise spectrum (which makes some difference to the less well measured parameters like the mass ratio), the cleaned data (which helps for GW170104), and our current most complete waveform models [bonus note].

This plot shows the masses of the two binary components (you can just make out GW170817 down in the corner). We use the convention that the more massive of the two is and the lighter is . We are now really filling in the mass plot! Implications for the population of black holes are discussed in the Populations Paper.

As well as mass, black holes have spin. For the final black hole formed in the merger, the spin is always around 0.7, with a little more or less depending upon which way the spins of the two initial black holes were pointing. As well as probably being the most massive, GW170729’s final black hole could have the highest final spin! It is a record breaker. It radiated a colossal amount of energy in gravitational waves [bonus note].

There is considerable uncertainty on the spins as they are hard to measure. The best constrained combination is the effective inspiral spin parameter . This is a mass-weighted combination of the spins which has the most impact on the signal we observe. It could be zero if the spins are misaligned with each other, point in the orbital plane, or are zero. If it is non-zero, then at least one black hole definitely has some spin. GW151226 and GW170729 have with more than 99% probability. The rest are consistent with zero. The spin distribution for GW170104 has tightened up as its signal-to-noise ratio has increased, and there’s less support for negative , but there’s been no move towards larger positive .
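The effective inspiral spin parameter itself is just a mass weighting, standard enough to sketch. The masses and spin components below are illustrative:

```python
# Effective inspiral spin: the mass-weighted sum of the spin components
# aligned with the orbital angular momentum.

def chi_eff(m1, m2, chi1_z, chi2_z):
    """chi_z is each black hole's dimensionless spin projected along the
    orbital angular momentum (between -1 and 1)."""
    return (m1 * chi1_z + m2 * chi2_z) / (m1 + m2)

print(chi_eff(30.0, 30.0, 0.5, 0.5))   # equal aligned spins: 0.5
print(chi_eff(30.0, 30.0, 0.5, -0.5))  # equal but opposed spins cancel: 0.0
```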

For our analysis, we use two different waveform models to check for potential sources of systematic error. They agree pretty well. The spins are where they show most difference (which makes sense, as this is where they differ in terms of formulation). For GW151226, the effective precession waveform IMRPhenomPv2 gives and the full precession model gives and extends to negative . I panicked a little bit when I first saw this, as GW151226 having a non-zero spin was one of our headline results when first announced. Fortunately, when I worked out the numbers, all our conclusions were safe. The probability of is less than 1%. In fact, we can now say that at least one spin is greater than at 99% probability compared with previously, because the full precession model likes spins in the orbital plane a bit more. Who says data analysis can't be thrilling?

Our measurement of tells us about the part of the spins aligned with the orbital angular momentum, but not the part in the orbital plane. In general, the in-plane components of the spin are only weakly constrained: we basically only get back the information we put in. The leading-order effect of in-plane spins is summarised by the effective precession spin parameter . The plot below shows the inferred distributions for . The left half for each event shows our results; the right half shows our prior after imposing the constraints on spin we get from . We get the most information for GW151226 and GW170814, but even then it’s not much, and we generally cover the entire allowed range of values.

One final measurement which we can make (albeit with considerable uncertainty) is the distance to the source. The distance influences how loud the signal is (the further away, the quieter it is). The loudness also depends upon the inclination of the source (a binary edge-on is quieter than a binary face-on/off). Therefore, the distance is correlated with the inclination, and we end up with some butterfly-like plots. GW170729 is again a record setter. It comes from a luminosity distance of away. That means its signal has travelled across the Universe for – billion years: it potentially started its journey before the Earth formed!

To check our results, we reconstruct the waveforms from the data to see that they match our expectations for binary black hole waveforms (and that there’s not anything extra there). To do this, we use unmodelled analyses which assume that there is a coherent signal in the detectors: we use both cWB and BayesWave. The results agree pretty well. The reconstructions beautifully match our templates when the signal is loud but, as you might expect, can’t resolve the quieter details. You’ll also notice the reconstructions sometimes pick up a bit of background noise away from the signal. This gives you an idea of potential fluctuations.

I still think GW170814 looks like a slug. Some people think they look like crocodiles.

We’ll be doing more tests of the consistency of our signals with general relativity in a future paper.

Given all our observations now, we can set better limits on the merger rates. Going from the number of detections seen to the number of mergers out in the Universe depends upon what you assume about the mass distribution of the sources. Therefore, we make a few different assumptions.

For binary black holes, we use (i) a power-law model for the more massive black hole similar to the initial mass function of stars, with a uniform distribution on the mass ratio, and (ii) a uniform-in-logarithm distribution for both masses. These were designed to bracket the two extremes of potential distributions. With our observations, we’re starting to see that the true distribution is more like the power law, so I expect we’ll be abandoning these soon. Taking the range of possible values from our calculations, the rate is in the range of – for black holes between and [bonus note].

For binary neutron stars, which are perhaps more interesting to astronomers, we use a uniform distribution of masses between and , and a Gaussian distribution to match electromagnetic observations. We find that these bracket the range –. This is larger than our previous range, as we hadn’t considered the Gaussian distribution previously.

Finally, what about neutron star–black holes? Since we don’t have any detections, we can only place an upper limit. This is a maximum of . This is about a factor of 2 better than our O1 results, and is starting to get interesting!
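The basic logic of these rate estimates can be sketched: divide the number of detections by the time-volume the search was sensitive to, which is where the assumed mass distribution enters (it sets the sensitive volume). The numbers below are purely illustrative, not our published values:

```python
# Toy rate estimate: detections per unit of surveyed time-volume.

def merger_rate(n_detections, sensitive_volume_gpc3, observing_time_yr):
    """Merger rate in events per Gpc^3 per year, for a hypothetical
    sensitive volume and observing time."""
    return n_detections / (sensitive_volume_gpc3 * observing_time_yr)

# e.g. 10 detections with a made-up 0.2 Gpc^3 sensitive volume over 1 year:
print(merger_rate(10, 0.2, 1.0))  # 50 mergers per Gpc^3 per year
```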

We are sure to discover lots more in O3… [bonus note].

**Synopsis:** O2 Populations Paper

**Read this if:** You want the best family portrait of binary black holes

**Favourite part:** A maximum black hole mass?

Each detection is exciting. However, we can squeeze even more science out of our observations by looking at the entire population. Using all 10 of our binary black hole observations, we start to trace out the population of binary black holes. Since we still only have 10, we can’t yet be too definite in our conclusions. Our results give us some things to ponder, while we are waiting for the results of O3. I think now is a good time to start making some predictions.

We look at the distribution of black hole masses, black hole spins, and the redshift (cosmological time) of the mergers. The black hole masses tell us something about how you go from a massive star to a black hole. The spins tell us something about how the binaries form. The redshift tells us something about how these processes change as the Universe evolves. Ideally, we would look at these all together allowing for mixtures of binary black holes formed through different means. Given that we only have a few observations, we stick to a few simple models.

To work out the properties of the population, we perform a hierarchical analysis of our 10 binary black holes. We infer the properties of the individual systems, assuming that they come from a given population, and then see how well that population fits our data compared with a different distribution.

In doing this inference, we account for selection effects. Our detectors are not equally sensitive to all sources. For example, nearby sources produce louder signals and we can’t detect signals that are too far away, so if you didn’t account for this you’d conclude that binary black holes only merge in the nearby Universe. Perhaps less obvious is that we are not equally sensitive to all source masses. More massive binaries produce louder signals, so we can detect them further away than lighter binaries (up to the point where these binaries are so high mass that the signals are too low frequency for us to easily spot). This is why we detect more binary black holes than binary neutron stars, even though there are more binary neutron stars out there in the Universe.
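The mass-dependent selection effect follows from how the signal amplitude scales: to leading order, the distance out to which a binary is detectable grows with the chirp mass to the 5/6 power, so the surveyed volume grows with the 5/2 power. A hedged sketch (the leading-order scaling and the reference chirp mass are simplifying assumptions):

```python
# Leading-order selection effect: detectable distance ~ M_chirp^(5/6),
# so sensitive volume ~ M_chirp^(5/2), relative to a reference system.

def relative_sensitive_volume(m_chirp, reference_chirp=1.2):
    """Sensitive volume relative to a reference chirp mass (solar masses),
    using the leading-order inspiral amplitude scaling."""
    return (m_chirp / reference_chirp) ** 2.5

# A heavy binary black hole versus a binary-neutron-star-like system:
print(relative_sensitive_volume(30.0))  # thousands of times the volume
```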

When looking at masses, we try three models of increasing complexity:

- Model A is a simple power law for the mass of the more massive black hole . There’s no real reason to expect the masses to follow a power law, but the masses of stars when they form do, and astronomers generally like power laws as they’re friendly, so it’s a sensible thing to try. We fit for the power-law index. The power law goes from a lower limit of to an upper limit which we also fit for. The mass of the lighter black hole is assumed to be uniformly distributed between and the mass of the other black hole.
- Model B is the same power law, but we also allow the lower mass limit to vary from . We don’t have much sensitivity to low masses, so this lower bound is restricted to be above . I’d be interested in exploring lower masses in the future. Additionally, we allow the mass ratio of the black holes to vary, trying instead of Model A’s .
- Model C has the same power law, but now with some smoothing at the low-mass end, rather than a sharp turn-on. Additionally, it includes a Gaussian component towards higher masses. This was inspired by the possibility of pulsational pair-instability supernovae causing a build-up of black holes at certain masses: stars which undergo this lose extra mass, so you’d end up with lower mass black holes than if the stars hadn’t undergone the pulsations. The Gaussian could fit other effects too, for example if there were a secondary formation channel, or it could just reflect that the pure power law is a bad fit.

In allowing the mass distributions to vary, we find overall rates which match pretty well with those from the main power-law rates calculation included in the O2 Catalogue Paper, and are higher than with the main uniform-in-log distribution.

The fitted mass distributions are shown in the plot below. The error bars are pretty broad, but I think the models agree on some broad features: there are more light black holes than heavy black holes; the minimum black hole mass is below about , but we can’t place a lower bound on it; the maximum black hole mass is above about and below about ; and we prefer black holes to have similar masses rather than different ones. The upper bound on the black hole minimum mass, and the lower bound on the black hole maximum mass, are set by the smallest and biggest black holes we’ve detected, respectively.

That there does seem to be a drop-off at higher masses is interesting. There could be something which stops stars forming black holes in this range. It has been proposed that there is a mass gap due to pair-instability supernovae. These explosions completely disrupt their progenitor stars, leaving nothing behind. (I’m not sure if they are accompanied by a flash of green light.) You’d expect this to kick in for black holes of about –. We infer that 99% of merging black holes have masses below with Model A, with Model B, and with Model C. Therefore, our results are not inconsistent with a mass gap. However, we don’t really have enough evidence to be sure.

We can compare how well each of our three models fits the data by looking at their Bayes factors. These naturally incorporate the complexity of the models: models with more parameters (which can be more easily tweaked to match the data) are penalised, so you don’t need to worry about overfitting. We have a preference for Model C. It’s not strong, but I think it’s good evidence that we can’t use a simple power law.

To model the spins:

- For the magnitude, we assume a beta distribution. There’s no reason for this, but these are convenient distributions for things between 0 and 1, which are the limits on black hole spin (0 is nonspinning, 1 is as fast as you can spin). We assume that both spins are drawn from the same distribution.
- For the spin orientations, we use a mix of an isotropic distribution and a Gaussian centred on being aligned with the orbital angular momentum. You’d expect an isotropic distribution if binaries were assembled dynamically, and perhaps something with spins generally aligned with each other if the binary evolved in isolation.

We don’t get any useful information on the mixture fraction. Looking at the spin magnitudes, we have a preference towards smaller spins, but still have support for large spins. The more misaligned spins are, the larger the spin magnitudes can be: for the isotropic distribution, we have support all the way up to maximal values.

Since spins are harder to measure than masses, it is not surprising that we can’t make strong statements yet. If we were to find something with definitely negative , we would be able to deduce that spins can be seriously misaligned.
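The beta distribution model is simple enough to play with using the Python standard library. The shape parameters below are hypothetical, picked only to illustrate a preference for smaller spins, and are not our fitted values:

```python
# Draw spin magnitudes from a beta distribution, a convenient family for
# quantities between 0 (nonspinning) and 1 (maximally spinning).
import random

random.seed(42)  # reproducible draws

ALPHA, BETA = 1.5, 3.0  # hypothetical shape parameters favouring small spins
spins = [random.betavariate(ALPHA, BETA) for _ in range(10_000)]

mean_spin = sum(spins) / len(spins)
print(round(mean_spin, 2))  # near the analytic mean ALPHA/(ALPHA+BETA) = 1/3
```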

As a simple model of evolution over cosmological time, we allow the merger rate to evolve as . That’s right, another power law! Since we’re only sensitive to relatively small redshifts for the masses we detect (), this gives a good approximation to a range of different evolution schemes.

We find that we prefer evolutions where the rate increases with redshift. There’s an 88% probability that , but we’re still consistent with no evolution. We might expect the rate to increase, as star formation was higher back towards . If we can measure the time delay between forming stars and black holes merging, we could figure out what happens to these systems in the meantime.

The local merger rate is broadly consistent with what we infer with our non-evolving distributions, but is a little on the lower side.

Gravitational waves are named as GW-year-month-day, so our first observation from 14 September 2015 is GW150914. We realise that this convention suffers from a Y2K-style bug, but by the time we hit 2100, we’ll have so many detections we’ll need a new scheme anyway.

Previously, we had a second designation for less significant potential detections. They were LIGO–Virgo Triggers (LVT), the one example being LVT151012. No-one was really happy with this designation, but it stems from us being cautious with our first announcement, and not wishing to appear overbold by claiming we’d seen *two* gravitational waves when the second wasn’t that certain. Now we’re a bit more confident, and we’ve decided to simplify naming by labelling everything a GW on the understanding that this now includes more uncertain events. Under the old scheme, GW170729 would have been LVT170729. The idea is that the broader community can decide which events they want to consider as real for their own studies. The current condition for being called a GW is that the probability of it being a real astrophysical signal is at least 50%. Our 11 GWs are safely above that limit.

The naming change has hidden the fact that, with our improved search pipelines, the significance of GW151012 has increased. It would now be a GW even under the old scheme. Congratulations LVT151012, I always believed in you!

We are lacking nicknames for our new events. They came in so fast that we kind of lost track. Ilya Mandel has suggested that GW170729 should be the Tiger, as it happened on the International Tiger Day. Since tigers are the biggest of the big cats, this seems apt.

Carl-Johan Haster argues that LIGO+tiger = Liger. Since ligers are even bigger than tigers, this seems like an excellent case to me! I’d vote for calling the bigger of the two progenitor black holes GW170729-tiger, the smaller GW170729-lion, and the final black hole GW170729-liger.

Suggestions for other nicknames are welcome, leave your ideas in the comments.

The final few weeks of O2 were exhausting. I was trying to write job applications at the time, and each time I sat down to work on my research proposal, my phone went off with another alert. You may be wondering what was special about August. Some have hypothesised that it is because Aaron Zimmerman, my partner for the analysis of GW170104, was on the Parameter Estimation rota to analyse the last few weeks of O2. The legend goes that Aaron is especially lucky as he was bitten by a radioactive Leprechaun. I can neither confirm nor deny this. However, I make a point of playing any lottery numbers suggested by him.

A slightly more mundane explanation is that August was when the detectors were running nice and stably. They were observing for a large fraction of the time. LIGO Livingston reached its best sensitivity at this time, although it was less happy for Hanford. We often quantify the sensitivity of our detectors using their binary neutron star range, the average distance they could see a binary neutron star system with a signal-to-noise ratio of 8. If this increases by a factor of 2, you can see twice as far, which means you survey 8 times the volume. This cubed factor means even small improvements can have a big impact. The LIGO Livingston range peaked at a little over 100 Mpc. We’re targeting at least 120 Mpc for O3, so August 2017 gives an indication of what you can expect.
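The cubed scaling of surveyed volume with range is easy to check numerically. A minimal sketch, using illustrative round-number ranges rather than official sensitivity figures:

```python
# Sensitivity improvements compound: the surveyed volume grows with the
# cube of the binary neutron star range. The 100 Mpc and 120 Mpc values
# below are illustrative round numbers, not official figures.

def volume_gain(old_range_mpc, new_range_mpc):
    """Factor by which the surveyed volume grows when the range improves."""
    return (new_range_mpc / old_range_mpc) ** 3

# Doubling the range means surveying 8 times the volume...
print(volume_gain(100, 200))             # 8.0
# ...and even a modest step from 100 Mpc to 120 Mpc adds ~73% more volume.
print(round(volume_gain(100, 120), 2))   # 1.73
```

This is why detector commissioning pays off so handsomely: a 20% range improvement is nearly a doubling of the detection rate.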

Of course, in the case of GW170817, we just got lucky.

GW170809 was the first event we identified with Virgo after it joined observing. The signal in Virgo is very quiet. We actually got better results when we flipped the sign of the Virgo data. We were just starting to get paranoid when GW170814 came along and showed us that everything was set up right at Virgo. When I get some time, I’d like to investigate how often this type of confusion happens for quiet signals.

One of the waveforms we use in our analysis, which includes the most complete prescription of the precession of the spins of the black holes, goes by the technical name of SEOBNRv3. It is extremely computationally expensive. Work has been done to improve that, but this hasn’t been implemented in our reviewed codes yet. We managed to complete an analysis for the GW170104 Discovery Paper, which was a huge effort, and I said then not to expect it for all future events. This time, we did it for *all* the black holes, even for the lowest mass sources which have the longest signals. I was responsible for the GW151226 runs (as well as GW170104), and I started these back at the start of the summer. Eve Chase put in a heroic effort to get GW170608 results; we pulled out all the stops for that.

I have recently enjoyed my first Thanksgiving in the US. I was lucky enough to be hosted for dinner by Shane Larson and his family (and cats). I ate so much I thought I might collapse to a black hole. Apparently, a Thanksgiving dinner can be 3000–4500 calories. That sounds like a lot, but the merger of GW170729 would have emitted about $10^{40}$ times more energy. In conclusion, I don’t need to go on a diet.
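For the curious, here is the back-of-the-envelope arithmetic; treat the radiated energy (roughly 4.8 solar masses’ worth for GW170729) and the calorie count as rough assumptions:

```python
# Rough check of the Thanksgiving comparison: radiated energy of the
# merger (assumed ~4.8 solar masses, converted via E = m c^2) versus a
# 4000 kcal dinner.

M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
KCAL = 4184.0      # joules per kilocalorie

e_merger = 4.8 * M_SUN * C**2   # ~8.6e47 J radiated in gravitational waves
e_dinner = 4000 * KCAL          # ~1.7e7 J in one very large dinner

print(f"{e_merger / e_dinner:.1e}")  # ratio of order 10^40
```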

We cheated a little bit in calculating the rates. Roughly speaking, the merger rate is given by

$R \approx \dfrac{N}{\langle VT \rangle}$,

where $N$ is the number of detections and $\langle VT \rangle$ is the amount of volume and time we’ve searched. You expect to detect more events if you increase the sensitivity of the detectors (and hence the volume $V$), or observe for longer (and hence increase $T$). In our calculation, we included GW170608 in $N$, even though it was found outside of standard observing time. Really, we should increase $\langle VT \rangle$ to factor in the extra time outside of standard observing time when we could have made a detection. This is messy to calculate though, as there’s not really a good way to check this. However, it’s only a small fraction of the time (so the extra $T$ should be small), and for much of this time the sensitivity of the detectors will be poor (so $V$ will be small too). Therefore, we estimated that any bias from neglecting this is smaller than our uncertainty from the calibration of the detectors, and not worth worrying about.
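A toy version of this calculation, with invented numbers (the real analysis folds in calibration and population uncertainties):

```python
import math

# Toy version of the rate estimate R ~ N / <VT>, with a rough Poisson
# (sqrt-N) uncertainty. The numbers are invented for illustration.

def rate_estimate(n_detections, vt_gpc3_yr):
    """Point estimate and rough counting uncertainty on the merger rate."""
    rate = n_detections / vt_gpc3_yr
    return rate, math.sqrt(n_detections) / vt_gpc3_yr

rate, err = rate_estimate(n_detections=10, vt_gpc3_yr=0.1)
print(f"R = {rate:.0f} +/- {err:.0f} per Gpc^3 per yr")  # R = 100 +/- 32
```

The counting uncertainty shrinks only as the square root of the number of detections, which is why rate estimates sharpen slowly.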

We saw our first binary black hole shortly after turning on the Advanced LIGO detectors. We saw our first binary neutron star shortly after turning on the Advanced Virgo detector. My money is therefore on our first neutron star–black hole binary shortly after we turn on the KAGRA detector. Because science…

In **this paper** we look at full *three-dimensional* localization of gravitational-wave sources; we import a (rather cunning) technique from computer vision to construct a probability distribution for the source’s location, and then explore how well we could localise a set of simulated binary neutron stars. Knowing the source location enables lots of cool science. First, it aids direct follow-up observations with non-gravitational-wave observatories, searching for electromagnetic or neutrino counterparts. It’s especially helpful if you can cross-reference with galaxy catalogues, to find the most probable source locations (this technique was used to find the kilonova associated with GW170817). Even without finding a counterpart, knowing the most probable host galaxy helps us figure out how the source formed (have lots of stars been born recently, or are all the stars old?), and allows us to measure the expansion of the Universe. Having a reliable technique to reconstruct source locations is useful!

This was a fun paper to write [bonus note]. I’m sure it will be valuable, both for showing how to perform this type of reconstruction of a multi-dimensional probability density, and for its implications for source localization and follow-up of gravitational-wave signals. I go into details of both below, first discussing our statistical model (this is a bit technical), then looking at our results for a set of binary neutron stars (which have implications for hunting for counterparts).

When we analyse gravitational-wave data to infer the source properties (location, masses, etc.), we map out parameter space with a set of samples: a list of points in the parameter space, with there being more around more probable locations and fewer in less probable locations. These samples encode everything about the probability distribution for the different parameters; we just need to extract it…

For our application, we want a nice smooth probability density. How do we convert a bunch of discrete samples to a smooth distribution? The simplest thing is to bin the samples. However, picking the right bin size is difficult, and becomes much harder in higher dimensions. Another popular option is to use kernel density estimation. This is better at ensuring smooth results, but you now have to worry about the size of your kernels.

Our approach is in essence to use a kernel density estimate, but to learn the size and position of the kernels (as well as the number) from the data as an extra layer of inference. The “Gaussian mixture model” part of the name refers to the kernels—we use several different Gaussians. The “Dirichlet process” part refers to how we assign their properties (their means and standard deviations). What I really like about this technique, as opposed to the usual rule-of-thumb approaches used for kernel density estimation, is that it is well justified from a theoretical point of view.

I hadn’t come across a Dirichlet process before. Section 2 of the paper is a walkthrough of how I built up an understanding of this mathematical object, and it contains lots of helpful references if you’d like to dig deeper.

In our application, you can think of the Dirichlet process as being a probability distribution for probability distributions. We want a probability distribution describing the source location. Given our samples, we infer what this looks like. We could put all the probability into one big Gaussian, or we could put it into lots of little Gaussians. The Gaussians could be wide or narrow or a mix. The Dirichlet distribution allows us to assign probabilities to each configuration of Gaussians; for example, if our samples are all in the northern hemisphere, we probably want Gaussians centred around there, rather than in the southern hemisphere.
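One way to build intuition for how a Dirichlet process spreads probability across components is the stick-breaking construction. A minimal numpy sketch (not our actual inference code) that draws one set of component weights:

```python
import numpy as np

# Stick-breaking construction of a Dirichlet process: repeatedly break a
# unit "stick", each piece giving the weight of one Gaussian component.
# Truncated to n_max components for illustration; the concentration
# parameter alpha controls how evenly the stick gets split.

def stick_breaking_weights(alpha, n_max, rng):
    betas = rng.beta(1.0, alpha, size=n_max)                 # fraction of the remaining stick
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining                                 # weight of each component

rng = np.random.default_rng(42)
w = stick_breaking_weights(alpha=1.0, n_max=100, rng=rng)
print(w[:3], w.sum())  # weights sum to (almost) 1; early pieces tend to be biggest
```

Small values of `alpha` pile most of the probability into a few big Gaussians; large values spread it across many small ones.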

With the resulting probability distribution for the source location, we can quickly evaluate it at a single point. This means we can rapidly produce a list of most probable source galaxies—extremely handy if you need to know where to point a telescope before a kilonova fades away (or someone else finds it).
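Evaluating the mixture at a catalogue of positions is cheap. This sketch, with an invented two-component mixture and a made-up three-galaxy catalogue (positions in Mpc), shows the ranking step:

```python
import numpy as np

# Rank candidate host galaxies by evaluating a Gaussian-mixture
# localization density at each galaxy's 3D position. The mixture and
# the "catalogue" here are invented for illustration.

def gmm_density(points, weights, means, sigmas):
    """Density of an isotropic 3D Gaussian mixture at each point."""
    points = np.atleast_2d(points)
    dens = np.zeros(len(points))
    for w, mu, s in zip(weights, means, sigmas):
        r2 = np.sum((points - mu) ** 2, axis=1)
        dens += w * np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2) ** 1.5
    return dens

weights = [0.7, 0.3]
means = np.array([[40.0, 0.0, 0.0], [45.0, 5.0, 0.0]])
sigmas = [3.0, 5.0]
galaxies = np.array([[41.0, 1.0, 0.0], [60.0, 0.0, 0.0], [44.0, 4.0, 1.0]])

ranked = np.argsort(gmm_density(galaxies, weights, means, sigmas))[::-1]
print(ranked)  # indices of galaxies, most probable host first
```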

To verify our technique works, and develop an intuition for three-dimensional localizations, we studied a set of simulated binary neutron star signals created for the First 2 Years trilogy of papers. This data set is well studied now; it illustrates performance in what we anticipated to be the first two observing runs of the advanced detectors, which turned out to be not too far from the truth. We have previously looked at three-dimensional localizations for these signals using a super rapid approximation.

The plots below show how well we could localise the sources of our binary neutron star sources. Specifically, the plots show the size of the volume which has a 90% probability of containing the source versus the signal-to-noise ratio (the loudness) of the signal. Typically, volumes are –, which is about – Olympic swimming pools. Such a volume would contain something like – galaxies.

Looking at the results in detail, we can learn a number of things:

- The localization volume is roughly inversely proportional to the *sixth* power of the signal-to-noise ratio [bonus note]. Loud signals are localized *much* better than quieter ones!
- The localization dramatically improves when we have three-detector observations. The extra detector improves the sky localization, which reduces the localization volume.
- To get the benefit of the extra detector, the source needs to be close enough that all the detectors receive a decent amount of the signal-to-noise ratio. In our case, Virgo is the least sensitive, and we see that the best localizations are when it has a fair share of the signal-to-noise ratio.
- Considering the cases where we only have two detectors, localization volumes get bigger at a given signal-to-noise ratio as the detectors get more sensitive. This is because we can detect sources at greater distances.

Putting all these bits together, I think in the future, when we have lots of detections, it would make most sense to prioritise following up the loudest signals. These are the best localised, and will also be the brightest since they are the closest, meaning there’s the greatest potential for actually finding a counterpart. As the sensitivity of the detectors improves, it’s only going to get more difficult to find a counterpart to a typical gravitational-wave signal, as sources will be further away and less well localized. However, having more sensitive detectors also means that we are more likely to have a really loud signal, which should be really well localized.

Using our localization volumes as a guide, you would only need to search *one* galaxy to find the true source in about 7% of cases with a three-detector network similar to the one we had at the end of our second observing run. Similarly, only ten would need to be searched in 23% of cases. It might be possible to get even better performance by considering which galaxies are most probable because they are the biggest or the most likely to produce merging binary neutron stars. This is definitely a good approach to follow.

**arXiv:** 1801.08009 [astro-ph.IM]

**Journal:** *Monthly Notices of the Royal Astronomical Society*; **479**(1):601–614; 2018

**Code:** 3d_volume

**Buzzword bingo:** Interdisciplinary (we worked with computer scientist Tom Haines); machine learning (the inference involving our Dirichlet process Gaussian mixture model); multimessenger astronomy (as our results are useful for following up gravitational-wave signals in the search for counterparts)

We started writing this paper back before the first observing run of Advanced LIGO. We had a pretty complete draft on Friday 11 September 2015. We just needed to gather together a few extra numbers and polish up the figures and we’d be done! At 10:50 am on Monday 14 September 2015, we made our first detection of gravitational waves. The paper was put on hold. The pace of discoveries over the coming years meant we never quite found enough time to get it together—I’ve rewritten the introduction a dozen times. This is a shame, as it meant that this study came out much later than our other three-dimensional localization study. It’s extremely satisfying to finally have it done. The delay has the advantage of justifying one of my favourite acknowledgement sections.

We find that the localization volume $V$ is inversely proportional to the sixth power of the signal-to-noise ratio $\rho$. This is what you would expect. The localization volume depends upon the angular uncertainty on the sky $\Delta\Omega$, the distance to the source $D$, and the distance uncertainty $\Delta D$,

$V \sim D^2 \, \Delta\Omega \, \Delta D$.

Typically, the uncertainty on a parameter (like the masses) scales inversely with the signal-to-noise ratio. This is the case for the logarithm of the distance, which means

$\dfrac{\Delta D}{D} \propto \rho^{-1}$.

The uncertainty in the sky location (being two dimensional) scales inversely with the square of the signal-to-noise ratio,

$\Delta\Omega \propto \rho^{-2}$.

The signal-to-noise ratio itself is inversely proportional to the distance to the source (sources further away are quieter). Therefore, putting everything together gives

$V \propto \rho^{-6}$.
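The bookkeeping can be checked numerically; each factor below just implements the scalings above in arbitrary units:

```python
# Numerical check of the scaling argument: with the distance uncertainty
# scaling as D/rho, the sky area as rho^-2, and the distance itself as
# 1/rho, the volume V ~ D^2 * (sky area) * (distance uncertainty) falls
# as the sixth power of the signal-to-noise ratio.

def localization_volume(rho):
    d = 1.0 / rho            # distance (arbitrary units)
    sky_area = rho ** -2     # angular uncertainty
    delta_d = d / rho        # distance uncertainty
    return d**2 * sky_area * delta_d

# Doubling the signal-to-noise ratio shrinks the volume by 2^6 = 64.
print(localization_volume(8) / localization_volume(16))  # 64.0
```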

We all know that treasure is marked by a cross. In the case of a binary neutron star merger, dense material ejected from the neutron stars will decay to heavy elements like gold and platinum, so there is definitely a lot of treasure at the source location.

This is the second published version; the **big** changes since the last version are

- We have now detected gravitational waves
- We have observed our first gravitational wave with a multimessenger counterpart [bonus note]
- We now include KAGRA, along with LIGO and Virgo

As you might imagine, these are quite significant updates! The first showed that we can do gravitational-wave astronomy. The second showed that we can do exactly the science this paper is about. The third makes this the first joint publication of the LIGO Scientific, Virgo and KAGRA Collaborations—hopefully the first of many to come.

I lead both this and the previous version. In my **blog on the previous version**, I explained how I got involved, and the long road that a collaboration must follow to get published. In this post, I’ll give an overview of the key details from the new version together with some behind-the-scenes background (working as part of a large scientific collaboration allows you to do *amazing* science, but it can also be exhausting). If you’d like a digest of this paper’s science, check out the **LIGO science summary**.

The first section of the paper outlines the progression of detector sensitivities. The instruments are incredibly sensitive—we’ve never made machines to make these types of measurements before, so it takes a lot of work to get them to run smoothly. We can’t just switch them on and have them work at design sensitivity [bonus note].

The plots above show the planned progression of the different detectors. We had to get these agreed before we could write the later parts of the paper because the sensitivity of the detectors determines how many sources we will see and how well we will be able to localize them. I had anticipated that KAGRA would be the most challenging here, as we had not previously put together this sequence of curves. However, this was not the case; instead it was Virgo which was tricky. They had a problem with the silica fibres which suspended their mirrors (they snapped, which is definitely not what you want). The silica fibres were replaced with steel ones, but it wasn’t immediately clear what sensitivity they’d achieve and when. The final word was they’d observe in August 2017 and that their projections were unchanged. I was sceptical, but they did pull it out of the bag! We had our first clear three-detector observation of a gravitational wave on 14 August 2017. Bravo Virgo!

The second section explains our data analysis techniques: how we find signals in the data, how we work out probable source locations, and how we communicate these results with the broader astronomical community—from the start of our third observing run (O3), information will be shared publicly!

The information in this section hasn’t changed much [bonus note]. There is a nice collection of references on the follow-up of different events, including GW170817 (I’d recommend my blog for more on the electromagnetic story). The main update I wanted to include was information on the detection of our first gravitational waves. It turned out to be more difficult than I imagined to come up with a plot which showed results from the five different search algorithms (two which used templates, and three which did not) which found GW150914, and harder still to make a plot which everyone liked. This plot became somewhat infamous for the amount of discussion it generated. I think we ended up with something which was a good compromise and clearly shows our detections sticking out above the background of noise.

The third section brings everything together and looks at what the prospects are for (gravitational-wave) multimessenger astronomy during each observing run. It’s really all about the big table.

I think there are three really awesome take-aways from this:

- Actual binary neutron stars detected = 1. We did it!
- Using the rates inferred from our observations so far (including GW170817), once we have the full *five*-detector network of LIGO-Hanford, LIGO-Livingston, Virgo, KAGRA and LIGO-India, we could be detecting 11–180 binary neutron stars a year. That’s something like between one a month and one every other day! I’m kind of scared…
- With the five-detector network the sky localization is really good. The median localization is about 9–12 square degrees, about the area LSST could cover in a single pointing! This really shows the benefit of adding more detectors to the network. The improvement comes not because a source is much better localized with five detectors than with four, but because with five detectors you almost always have at least three detectors (the number needed to get a good triangulation) online at any moment, so you get a nice localization for pretty much everything.

In summary, the prospects for observing and localizing gravitational-wave transients are pretty great. If you are an astronomer, make the most of the quiet before O3 begins next year.

**arXiv:** 1304.0670 [gr-qc]

**Journal:** *Living Reviews In Relativity*; **21**:3(57); 2018

**Science summary:** A bright today and brighter tomorrow: Prospects for gravitational-wave astronomy with Advanced LIGO, Advanced Virgo, and KAGRA

**Prospects for the next update:** After two updates, I’ve stepped down from preparing the next one. Wooh!

The announcement of our first multimessenger detection came between us submitting this update and us getting referee reports. We wanted an updated version of this paper, with the current details of our observing plans, to be available for our astronomer partners to be able to cite when writing their papers on GW170817.

Predictably, when the referee reports came back, we were told we really should include reference to GW170817. This type of discovery is exactly what this paper is about! There was an avalanche of results surrounding GW170817, so I had to read through a lot of papers. The reference list swelled from 8 to 13 pages, but this effort was handy for my blog writing. After including all these new results, it really felt like this was version 2.5 of the Observing Scenarios, rather than version 2.

We use the term design sensitivity to indicate the performance the current detectors were designed to achieve. They are the targets we aim to achieve with Advanced LIGO, Advanced Virgo and KAGRA. One thing I’ve had to try to train myself not to say is that design sensitivity is the *final* sensitivity of our detectors. Teams are currently working on plans for how we can upgrade our detectors beyond design sensitivity. Reaching design sensitivity will not be the end of our journey.

Our first gravitational-wave detections were from binary black holes. Therefore, when we were starting on this update there was a push to switch from focusing on binary neutron stars to binary black holes. I resisted on this, partially because I’m lazy, but mostly because I still thought that binary neutron stars were our best bet for multimessenger astronomy. This worked out nicely.

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics which we are currently most uncertain of:

- Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
- Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion, that both stars live in a single fluffy envelope. Orbiting inside the envelope helps drag the two stars closer together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
- Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low; there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

Parameters describing potential variations in these properties are ingredients of the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question now is: which parameters affect our gravitational-wave measurements, and how accurately can we measure those which do?

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distributions of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries which have a long inspiral, but is less well measured for higher mass systems. In reality we’ll have much more information, so these results should be the *minimum* we can actually do.

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct, we are more likely to get results similar to its predictions than ones further away, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember that it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

$F_{ij} = \left\langle \dfrac{\partial \ln L(D|\lambda)}{\partial \lambda_i} \dfrac{\partial \ln L(D|\lambda)}{\partial \lambda_j} \right\rangle$,

where $L(D|\lambda)$ is the likelihood for data $D$ (the number of observations and their chirp mass distribution in our case), $\lambda_i$ are our population parameters (natal kick, etc.), and the angular brackets indicate the expectation value, averaging over possible realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance in a normal distribution) for the parameters $\lambda_i$. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].
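To make “variance of the score” concrete, here is a Monte Carlo estimate for the simplest piece of our likelihood, a Poisson number of detections; the rate and observing time are toy values:

```python
import numpy as np

# The Fisher information is the variance of the score (the derivative of
# the log-likelihood). Monte Carlo check for a Poisson number of
# detections with mean rate * t_obs: analytically F = t_obs / rate,
# which the sample variance of the score should approach.

rng = np.random.default_rng(0)
rate, t_obs = 50.0, 10.0            # toy values

n = rng.poisson(rate * t_obs, size=200_000)
score = n / rate - t_obs            # d ln L / d rate for a Poisson likelihood
fisher_mc = score.var()

print(fisher_mc, t_obs / rate)      # both close to 0.2
```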

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these measurements.

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.

**arXiv:** 1711.06287 [astro-ph.HE]

**Journal:** *Monthly Notices of the Royal Astronomical Society*; **477**(4):4685–4695; 2018

**Favourite dinosaur:** Professor Science

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

We model the supernova kick velocity $v_\mathrm{kick}$ as following a Maxwell–Boltzmann distribution,

$p(v_\mathrm{kick}) = \sqrt{\dfrac{2}{\pi}} \dfrac{v_\mathrm{kick}^2}{\sigma^3} \exp\left(-\dfrac{v_\mathrm{kick}^2}{2\sigma^2}\right)$,

where $\sigma$ is the unknown population parameter. The natal kick received by the black hole is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

$v_\mathrm{natal} = (1 - f_\mathrm{fb})\, v_\mathrm{kick}$,

where $f_\mathrm{fb}$ is the fraction that falls back, taken from Fryer *et al*. (2012). The fraction is greater for larger black holes, so the biggest black holes ($f_\mathrm{fb} = 1$) get no kicks. This means that the largest black holes are unaffected by the value of $\sigma$.
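A sketch of drawing natal kicks under these assumptions; the kick scale of 265 km/s and the fallback fractions below are illustrative stand-ins, not the Fryer et al. (2012) prescription itself:

```python
import numpy as np

# Draw natal kicks: a Maxwell-Boltzmann speed is the magnitude of an
# isotropic 3D Gaussian with scale sigma; fallback then reduces the kick
# by a factor (1 - f_fb). Values of sigma and f_fb are illustrative.

def natal_kicks(sigma, f_fb, n, rng):
    v_kick = np.linalg.norm(rng.normal(0.0, sigma, size=(n, 3)), axis=1)
    return (1.0 - f_fb) * v_kick

rng = np.random.default_rng(0)
light = natal_kicks(sigma=265.0, f_fb=0.2, n=100_000, rng=rng)  # little fallback
heavy = natal_kicks(sigma=265.0, f_fb=1.0, n=100_000, rng=rng)  # complete fallback

# Mean Maxwell-Boltzmann speed is 2 * sigma * sqrt(2/pi) ~ 423 km/s before fallback.
print(light.mean(), heavy.max())  # the heaviest black holes get no kick at all
```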

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the merger rate $R$ and the probability $p_k$ of falling in each chirp mass bin (we factor measurement uncertainty into this). Our observations are the total number of detections $N_\mathrm{obs}$ and the number in each chirp mass bin ($N_k$). The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

$L = L_\mathrm{rate}(N_\mathrm{obs}|R)\, L_\mathrm{mass}(\{N_k\}|\{p_k\})$.

For the rate likelihood, we need the probability of observing $N_\mathrm{obs}$ events given the predicted rate $R$. This is given by a Poisson distribution,

$L_\mathrm{rate}(N_\mathrm{obs}|R) = \dfrac{(R T_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!} \exp(-R T_\mathrm{obs})$,

where $T_\mathrm{obs}$ is the total observing time. For the chirp mass likelihood, we need the probability of getting a number of detections $N_k$ in each bin, given the predicted fractions $p_k$. This is given by a multinomial distribution,

$L_\mathrm{mass}(\{N_k\}|\{p_k\}) = \dfrac{N_\mathrm{obs}!}{\prod_k N_k!} \prod_k p_k^{N_k}$.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.
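After taking the logarithm, the two pieces can be sketched in a few lines; the counts, rate and bin probabilities below are invented for illustration:

```python
import math

# Sketch of the combined log-likelihood described above: a Poisson term
# for the total number of detections and a multinomial term for how they
# fall into chirp-mass bins. Logs turn the factorials into lgamma terms.
# All numbers are invented for illustration.

def ln_poisson(n_obs, rate, t_obs):
    mean = rate * t_obs
    return n_obs * math.log(mean) - mean - math.lgamma(n_obs + 1)

def ln_multinomial(n_bins, p_bins):
    n_obs = sum(n_bins)
    ln_l = math.lgamma(n_obs + 1)
    for n_k, p_k in zip(n_bins, p_bins):
        ln_l += n_k * math.log(p_k) - math.lgamma(n_k + 1)
    return ln_l

def ln_likelihood(n_bins, rate, t_obs, p_bins):
    return ln_poisson(sum(n_bins), rate, t_obs) + ln_multinomial(n_bins, p_bins)

# A model whose rate matches the ten observed events beats one that
# predicts three times too many.
good = ln_likelihood([6, 3, 1], rate=10.0, t_obs=1.0, p_bins=[0.6, 0.3, 0.1])
bad = ln_likelihood([6, 3, 1], rate=30.0, t_obs=1.0, p_bins=[0.6, 0.3, 0.1])
print(good > bad)  # True
```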

When we substitute our likelihood into the expression for the Fisher information matrix, we get

$F_{ij} = \langle N_\mathrm{obs} \rangle \left[ \dfrac{1}{R^2} \dfrac{\partial R}{\partial \lambda_i} \dfrac{\partial R}{\partial \lambda_j} + \sum_k \dfrac{1}{p_k} \dfrac{\partial p_k}{\partial \lambda_i} \dfrac{\partial p_k}{\partial \lambda_j} \right]$.

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix can also be defined in terms of second derivatives. The expected number of events is $\langle N_\mathrm{obs} \rangle = R T_\mathrm{obs}$. Therefore, we can see that the measurement uncertainty, defined by the inverse of the Fisher information matrix, scales on average as $1/\sqrt{\langle N_\mathrm{obs} \rangle}$.

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, meaning that the shape of the posterior is dictated by the shape of the likelihood.

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn't be the maximum). The maximum likelihood values of \(N_\mathrm{obs}\) and \(N_k\) are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large \(N_\mathrm{obs}\). As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of \(N_\mathrm{obs} \to \infty\), you'll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our \(N_\mathrm{obs}\) was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.


My previous post discussed some of the interesting features of EMRIs. Because of the extreme difference in masses of the two black holes, it takes a long time for them to complete their inspiral. We can measure tens of *thousands* of orbits, which allows us to make wonderfully precise measurements of the source properties (if we can accurately pick out the signal from the data). Here, we’ll examine exactly what we could learn with LISA from EMRIs [bonus note].

First we build a model to investigate how many EMRIs there could be. There is a lot of astrophysics which we are currently uncertain about, which leads to a large spread in estimates for the number of EMRIs. Second, we look at how precisely we could measure properties from the EMRI signals. The astrophysical uncertainties are less important here—we could get a revolutionary insight into the lives of massive black holes.

To build a model of how many EMRIs there are, we need a few different inputs:

- The population of massive black holes
- The distribution of stellar clusters around massive black holes
- The range of orbits of EMRIs

We examine each of these in turn, building a more detailed model than has previously been constructed for EMRIs.

We currently know little about the population of massive black holes. This means we’ll discover lots when we start measuring signals (yay), but it’s rather inconvenient now, when we’re trying to predict how many EMRIs there are (boo). We take two different models for the mass distribution of massive black holes. One is based upon a semi-analytic model of massive black hole formation, the other is at the pessimistic end allowed by current observations. The semi-analytic model predicts massive black hole spins around 0.98, but we also consider spins being uniformly distributed between 0 and 1, and spins of 0. This gives us a picture of the bigger black hole, now we need the smaller.

Observations show that the masses of massive black holes are correlated with their surrounding cluster of stars—bigger black holes have bigger clusters. We consider four different versions of this trend: Gültekin *et al*. (2009); Kormendy & Ho (2013); Graham & Scott (2013), and Shankar *et al*. (2016). The stars and black holes about a massive black hole should form a cusp, with the density of objects increasing towards the massive black hole. This is great for EMRI formation. However, the cusp is disrupted if two galaxies (and their massive black holes) merge. This tends to happen—it’s how we get bigger galaxies (and black holes). It then takes some time for the cusp to reform, during which time, we don’t expect as many EMRIs. Therefore, we factor in the amount of time for which there is a cusp for massive black holes of different masses and spins.

Given a cusp about a massive black hole, we then need to know how often an EMRI forms. Simulations give us a starting point. However, these only consider a snap-shot, and we need to consider how things evolve with time. As stellar-mass black holes inspiral, the massive black hole will grow in mass and the surrounding cluster will become depleted. Both these effects are amplified because for each inspiral, there’ll be many more stars or stellar-mass black holes which will just plunge directly into the massive black hole. We therefore need to limit the number of EMRIs so that we don’t have an unrealistically high rate. We do this by adding in a couple of feedback factors, one to cap the rate so that we don’t deplete the cusp quicker than new objects will be added to it, and one to limit the maximum amount of mass the massive black hole can grow from inspirals and plunges. This gives us an idea for the total number of inspirals.

Finally, we calculate the orbits that EMRIs will be on. We again base this upon simulations, and factor in how the spin of the massive black hole affects the distribution of orbital inclinations.

Putting all the pieces together, we can calculate the population of EMRIs. We now need to work out how many LISA would be able to detect. This means we need models for the gravitational-wave signal. Since we are simulating a large number, we use a computationally inexpensive analytic model. We know that this isn’t too accurate, but we consider two different options for setting the end of the inspiral (where the smaller black hole finally plunges) which should bound the true range of results.

Allowing for all the different uncertainties, we find that there should be somewhere between 1 and 4200 EMRIs detected per year. (The model we used when studying transient resonances predicted about 250 per year, albeit with a slightly different detector configuration; this is fairly typical of the models we consider here.) This range is encouraging. The lower end means that EMRIs are a pretty safe bet; we'd be unlucky not to get at least one over the course of a multi-year mission (LISA should have at least four years observing). The upper end means there could be lots—we might actually need to worry about them forming a background source of noise if we can't individually distinguish them!
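Even the pessimistic end of that range is reassuring. Treating detections as a Poisson process (a simplifying assumption, with the rate and mission length taken from the range quoted above), the chance of catching at least one EMRI is:

```python
import math

def prob_at_least_one(rate_per_year, mission_years):
    """Probability of at least one detection, assuming Poisson-distributed counts."""
    expected = rate_per_year * mission_years
    return 1.0 - math.exp(-expected)

# Even at the pessimistic rate of 1 detection per year, a 4-year mission
# is very likely to catch at least one EMRI
p = prob_at_least_one(1, 4)
```

With an expected count of 4, the probability of coming away empty-handed is only \(e^{-4}\), under 2%.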

Having shown that EMRIs are a good LISA source, we now need to consider what we could learn by measuring them.

We estimate the precision with which we will be able to measure parameters using the Fisher information matrix. The Fisher matrix measures how sensitive our observations are to changes in the parameters (the more sensitive we are, the better we should be able to measure that parameter). Its inverse gives a lower bound on the actual measurement uncertainty, and should be a good approximation to it in the high signal-to-noise (loud signal) limit. The combination of our use of the Fisher matrix and our approximate signal models means our results will not be perfect estimates of real performance, but they should give an indication of the typical size of measurement uncertainties.

Given that we measure a huge number of cycles from the EMRI signal, we can make really precise measurements of the mass and spin of the massive black hole, as these parameters control the orbital frequencies. Below are plots for the typical measurement precision from our Fisher matrix analysis. The orbital eccentricity is measured to similar accuracy, as it influences the range of orbital frequencies too. We also get pretty good measurements of the mass of the smaller black hole, as this sets how quickly the inspiral proceeds (how quickly the orbital frequencies change). EMRIs will allow us to do precision astronomy!

Now, before you get too excited that we're going to learn *everything* about massive black holes, there is one confession I should make. In the plot above I show the measurement accuracy for the redshifted mass of the massive black hole. The cosmological expansion of the Universe causes gravitational waves to become stretched to lower frequencies in the same way light is (this makes visible light more red, hence the name). The measured frequency is \(f_\mathrm{obs} = f_\mathrm{source}/(1+z)\), where \(f_\mathrm{source}\) is the frequency emitted, and \(z\) is the redshift (\(z = 0\) for a nearby source, and larger for further away sources). Lower frequency gravitational waves correspond to higher mass systems, so it is often convenient to work with the redshifted mass, the mass corresponding to the signal you measure if you ignore redshifting. The redshifted mass of the massive black hole is \(M_z = (1+z)M\), where \(M\) is the true mass. To work out the true mass, we need the redshift, which means we need to measure the distance to the source.
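These relations are simple to apply in code. Here is a minimal sketch (with hypothetical function names) converting between detector-frame and source-frame quantities:

```python
def source_frame_mass(redshifted_mass, z):
    """Convert the redshifted (detector-frame) mass to the true source-frame mass."""
    return redshifted_mass / (1.0 + z)

def observed_frequency(emitted_frequency, z):
    """Gravitational-wave frequency measured at the detector, stretched by expansion."""
    return emitted_frequency / (1.0 + z)

# A massive black hole with redshifted mass 1.2e6 solar masses at z = 0.2
# has a true mass of 1e6 solar masses
m_true = source_frame_mass(1.2e6, 0.2)
```

The uncertainty on the true mass therefore combines the (tiny) uncertainty on the redshifted mass with the (much larger) uncertainty on the redshift inferred from the distance.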

The plot above shows the fractional uncertainty on the distance. We don’t measure this too well, as it is determined from the amplitude of the signal, rather than its frequency components. The situation is much as for LIGO. The larger uncertainties on the distance will dominate the overall uncertainty on the black hole masses. We won’t be getting all these to fractions of a percent. However, that doesn’t mean we can’t still figure out what the distribution of masses looks like!

One of the really exciting things we can do with EMRIs is check that the signal matches our expectations for a black hole in general relativity. Since we get such an excellent map of the spacetime of the massive black hole, it is easy to check for deviations. In general relativity, everything about the black hole is fixed by its mass and spin (often referred to as the no-hair theorem). Using the measured EMRI signal, we can check if this is the case. One convenient way of doing this is to describe the spacetime of the massive object in terms of a multipole expansion. The first (most important) terms gives the mass, and the next term the spin. The third term (the quadrupole) is set by the first two, so if we can measure it, we can check if it is consistent with the expected relation. We estimated how precisely we could measure a deviation in the quadrupole. Fortunately, for this consistency test, all factors from redshifting cancel out, so we can get really detailed results, as shown below. Using EMRIs, we’ll be able to check for really small differences from general relativity!

In summary: EMRIs are awesome. We're not sure how many we'll detect with LISA, but we're confident there will be some, perhaps a couple of hundred per year. From the signals we'll get new insights into the masses and spins of black holes. This should tell us something about how they, and their surrounding galaxies, evolved. We'll also be able to do some stringent tests of whether the massive objects are black holes as described by general relativity. It's all pretty exciting for when LISA launches, which is currently planned for around 2034…

**arXiv:** 1703.09722 [gr-qc]

**Journal:** *Physical Review D*; **95**(10):103012; 2017

**Conference proceedings:** 1704.00009 [astro-ph.GA] (from when work was still in-progress)

**Estimated number of Marvel films before LISA launch:** 48 (starting with *Ant-Man and the Wasp*)

Is it “extreme-mass-ratio inspiral”, “extreme mass-ratio inspiral” or “extreme mass ratio inspiral”? All are used in the literature. This is one of the advantages of using “EMRI”. The important thing is that we’re talking about inspirals that have a mass ratio which is extreme. For this paper, we used “extreme mass-ratio inspiral”, but when I first started my PhD, I was introduced to “extreme-mass-ratio inspirals”, so they are always stuck that way in my mind.

I think hyphenation is a bit of an art, and there’s no definitive answer here, just like there isn’t for superhero names, where you can have Iron Man, Spider-Man or Iceman.

This paper is part of a series looking at what LISA could tell us about different gravitational wave sources. So far, this series covers

- Massive black hole binaries
- Cosmological phase transitions
- Standard sirens (for measuring the expansion of the Universe)
- Inflation
- Extreme-mass-ratio inspirals

You’ll notice there’s a change in the name of the mission from eLISA to LISA part-way through, as things have evolved. (Or devolved?) I think the main take-away so far is that the cosmology group is the most enthusiastic.

EMRIs are a beautiful gravitational wave source. They occur when a stellar-mass black hole slowly inspirals into a massive black hole (as found in the centre of galaxies). The massive black hole can be tens of thousands or millions of times more massive than the stellar-mass black hole (hence *extreme* mass ratio). This means that the inspiral is slow—we can potentially measure tens of thousands of orbits. This is both the blessing and the curse of EMRIs. The huge number of cycles means that we can closely follow the inspiral, and build a detailed map of the massive black hole’s spacetime. EMRIs will give us precision measurements of the properties of massive black holes. However, to do this, we need to be able to find the EMRI signals in the data; we need models which can match the signals over all these cycles. Analysing EMRIs is a huge challenge.

EMRI orbits are complicated. At any moment, the orbit can be described by three orbital frequencies: one for radial (in/out) motion \(\Omega_r\), one for polar (north/south, if we think of the spin of the massive black hole like the rotation of the Earth) motion \(\Omega_\theta\), and one for axial (around in the east/west direction) motion \(\Omega_\phi\). As gravitational waves are emitted, and the orbit shrinks, these frequencies evolve. The animation above, made by Steve Drasco, illustrates the evolution of an EMRI. Every so often, we can see the pattern freeze—the orbit stays in a constant shape (although this still rotates). This is a transient resonance. Two of the orbital frequencies become commensurate (so we might have 3 north/south cycles and 2 in/out cycles over the same period [bonus note])—this is the resonance. However, because the frequencies are still evolving, we don’t stay locked like this forever—which is why the resonance is transient. To calculate an EMRI, you need to know how the orbital frequencies evolve.
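As a toy illustration of the resonance condition (not how the frequencies are actually computed, which requires Kerr geodesic calculations), we can test whether two frequencies are close to a low-order integer ratio:

```python
from fractions import Fraction

def find_resonance(omega_theta, omega_r, max_order=10, tol=1e-3):
    """Check whether two orbital frequencies are (nearly) commensurate.

    Returns integers (n, k) with omega_theta/omega_r close to n/k, looking only
    at low-order ratios (k <= max_order); returns None if no such ratio fits.
    """
    ratio = omega_theta / omega_r
    approx = Fraction(ratio).limit_denominator(max_order)
    if abs(float(approx) - ratio) < tol:
        return approx.numerator, approx.denominator
    return None

# A 3:2 resonance: 3 polar (north/south) cycles for every 2 radial (in/out) cycles
res = find_resonance(3.0, 2.0)
```

Because the frequencies keep evolving, the system only satisfies a condition like this momentarily, which is exactly why the resonance is transient.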

The evolution of an EMRI is slow—the time taken to inspiral is much longer than the time taken to complete one orbit. Therefore, we can usually split the problem of calculating the trajectory of an EMRI into two parts. On short timescales, we can consider orbits as having fixed frequencies. On long timescales, we can calculate the evolution by averaging over many orbits. You might see the problem with this—around resonances, this averaging breaks down. Whereas normally averaging over many orbits means averaging over a complicated trajectory that hits pretty much all possible points in the orbital range, on resonance, you just average over the same bit again and again. On resonance, terms which usually average to zero can become important. Éanna Flanagan and Tanja Hinderer first pointed out that around resonances the usual scheme (referred to as the adiabatic approximation) doesn’t work.

Around a resonance, the evolution will be enhanced or decreased a little relative to the standard adiabatic evolution. We get a kick. This is only small, but because we observe EMRIs for so many orbits, a small difference can grow to become a significant difference later on. Does this mean that we won’t be able to detect EMRIs with our standard models? This was a concern, so back at the end of my PhD I began to investigate [bonus note]. The first step is to understand the size of the kick.

If there were no gravitational waves, the orbit would not evolve, it would be fixed. The orbit could then be described by a set of constants of motion. The most commonly used when describing orbits about black holes are the energy, angular momentum and Carter constant. For the purposes of this blog, we’ll not worry too much about what these constants are, we’ll just consider some generic constant \(C\).

The resonance kick is a change \(\Delta C\) in this constant. What should this depend on? There are three ingredients. First, the rate of change \(\dot{C}\) of this constant on the resonant orbit. Second, the time spent on resonance \(\tau_\mathrm{res}\). The bigger these are, the bigger the size of the jump. Therefore,

\(\Delta C \propto \dot{C}\tau_\mathrm{res}\).

However, the jump could be positive or negative. This depends upon the relative phase of the radial and polar motion [bonus note]—for example, do they both reach their maximum point at the same time, or does one lag behind the other? We’ll call this relative phase \(\chi\). By varying \(\chi\), we can get our resonant trajectory to go through any possible point in space. Therefore, averaging over \(\chi\) should get us back to the adiabatic approximation: the average value of the jump must be zero. To complete our picture for the jump, we need a periodic function of the phase,

\(\Delta C = \dot{C}\tau_\mathrm{res}F(\chi)\),

with \(\langle F(\chi)\rangle = 0\). Now we know what the pieces are, we can try to figure out the form they take.

The rate of change \(\dot{C}\) is proportional to the mass ratio: the smaller the stellar-mass black hole is relative to the massive one, the smaller \(\dot{C}\) is. The exact details depend upon gravitational self-force calculations, which we’ll skip over, as they’re pretty hard, but they are the same for all orbits (resonant or not).

We can think of the resonance timescale either as the time for the orbital frequencies to drift apart or the time for the orbit to start filling the space again (so that it’s safe to average). The two pictures yield the same answer—there’s a fuller explanation in Section III A of the paper. To define the resonance timescale, it is useful to define the frequency \(\Omega\), which is zero exactly on resonance. If this is evolving at rate \(\dot{\Omega}\), then the resonance timescale is

\(\tau_\mathrm{res} \sim \left(\dfrac{2\pi}{\dot{\Omega}}\right)^{1/2}\).

This bridges the two timescales that usually define EMRIs: the short orbital timescale \(T_\mathrm{orb}\) and the long evolution timescale \(T_\mathrm{ev}\):

\(T_\mathrm{orb} \ll \tau_\mathrm{res} \sim \sqrt{T_\mathrm{orb}T_\mathrm{ev}} \ll T_\mathrm{ev}\).
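Plugging in order-of-magnitude numbers (hypothetical values chosen purely for illustration) shows why resonances last much longer than an orbit but much less than the inspiral:

```python
import math

def resonance_timescale(t_orbital, t_evolution):
    """Resonance timescale as the geometric mean of the orbital and evolution times."""
    return math.sqrt(t_orbital * t_evolution)

# Illustrative numbers: orbits of ~1e4 s and an inspiral of ~1e8 s give
# a resonance lasting ~1e6 s, squarely between the two timescales
t_res = resonance_timescale(1e4, 1e8)
```

The geometric mean guarantees the ordering: the resonance is many orbits long (so the phase matters), yet brief compared with the full inspiral (so the lock is transient).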

To find the form of the periodic function \(F(\chi)\), we need to do some quite involved maths (given in Appendix B of the paper) [bonus note]. This works by treating the evolution far from resonance as depending upon two independent times (effectively a fast orbital time and a slow evolution time), and then matching the evolution close to resonance using an expansion in terms of a different, intermediate time. The solution shows that the jump depends sensitively upon the phase at resonance, which makes jumps extremely difficult to calculate.

We numerically evaluated the size of kicks for different orbits and resonances. We found a number of trends. First, higher-order resonances (those where more cycles are needed before the orbital pattern repeats) have smaller jumps than lower-order ones. This makes sense, as higher-order resonances come closer to covering all the points in the space, and so are more like averaging over the entire space. Second, jumps are larger for higher eccentricity orbits. This also makes sense, as you can’t have resonances for circular (zero eccentricity) orbits, as there’s no radial frequency, so the size of the jumps must tend to zero. We’ll see that these two points are important when it comes to observational consequences of transient resonances.

Now we’ve figured out the impact of passing through a transient resonance, let’s look at what this means for detecting EMRIs. The jump can mean that the evolution post-resonance can soon become out of phase with that pre-resonance. We can’t match both parts with the same adiabatic template. This could significantly hamper our prospects for detection, as we’re limited to the bits of signal we can pick up between resonances.

We created an astrophysical population of simulated EMRIs. We used numerical simulations to estimate a plausible population of massive black holes and distribution of stellar-mass black holes inspiralling into them. We then used adiabatic models to see how many LISA (or eLISA as it was called at the time) could potentially detect. We found there were ~510 EMRIs detectable (with a signal-to-noise ratio of 15 or above) for a two-year mission.

We then calculated how much the signal-to-noise ratio would be reduced by passing through transient resonances. The plot below shows the distribution of signal-to-noise ratio for the original population, ignoring resonances, and then after factoring in the reduction. There are now ~490 detectable EMRIs, a loss of 4%. We can still detect the majority of EMRIs!

We were worried about the impact of transient resonances, and we know that jumps can cause signals to become undetectable, so why aren’t we seeing a big effect in our population? The answer lies in the trends we saw earlier. Jumps are large for low-order resonances with high eccentricities. These were the ones first highlighted, as they are obviously the most important. However, low-order resonances are only encountered really close to the massive black hole. This means late in the inspiral, after we have already accumulated lots of signal-to-noise ratio. Losing a little bit of signal right at the end doesn’t hurt detectability too much. On top of this, gravitational wave emission efficiently damps down eccentricity. Orbits typically have low eccentricities by the time they hit low-order resonances, meaning that the jumps are actually quite small. Although small jumps lead to some mismatch, we can still use our signal templates without jumps. Therefore, resonances don’t hamper us (too much) in finding EMRIs!

This may seem like a happy ending, but it is not the end of the story. While we can detect EMRIs, we still need to be able to accurately infer their source properties. Features not included in our signal templates (like jumps), could bias our results. For example, it might be that we can better match a jump by using a template for a different black hole mass or spin. However, if we include jumps, these extra features could give us extra precision in our measurements. The question of what jumps could mean for parameter estimation remains to be answered.

**arXiv:** 1608.08951 [gr-qc]

**Journal:** *Physical Review D*; **94**(12):124042(24); 2016

**Conference proceedings:** 1702.05481 [gr-qc] (only 2 pages—ideal for emergency journal club presentations)

**Favourite jumpers:** Woolly, Mario, Kangaroos

When discussing resonances, and their impact on orbital evolution, we’ll only care about resonances between the radial and polar frequencies. Resonances involving the axial frequency \(\Omega_\phi\) are not important because the spacetime is axisymmetric. The equations are exactly identical for all values of the axial angle \(\phi\), so it doesn’t matter where you are (or if you keep cycling over the same spot) for the evolution of the EMRI.

This, however, doesn’t mean that these resonances aren’t interesting. They can lead to small kicks to the binary, because you are preferentially emitting gravitational waves in one direction. For EMRIs these are negligibly small, but for more equal mass systems, they could have some interesting consequences, as pointed out by Maarten van de Meent.

I’m grateful to the Cambridge Philosophical Society for giving me some extra funding to work on resonances. If you’re a Cambridge PhD student, make sure to become a member so you can take advantage of the opportunities they offer.

The theory of how to evolve through a transient resonance was developed by Kevorkian and coauthors. I spent a long time studying these calculations before working up the courage to attempt them myself. There are a few technical details which need to be adapted for the case of EMRIs. I finally figured everything out while in Warsaw Airport, coming back from a conference. It was the most I had ever felt like a real physicist.

There are currently 9 papers in the GW170817 family. Further papers, for example looking at parameter estimation in detail, are in progress. Papers are listed below in order of arXiv posting. My favourite is the GW170817 Discovery Paper. Many of the highlights, especially from the Discovery and Multimessenger Astronomy Papers, are described in my **GW170817 announcement post**.

Keeping up with all the accompanying observational results is a task not even Sisyphus would envy. I’m sure that the details of these will be debated for a long time to come. I’ve included references to a few below (mostly as [citation notes]), but these are not guaranteed to be complete (I’ll continue to expand these in the future).

**Title:** GW170817: Observation of gravitational waves from a binary neutron star inspiral
**arXiv:** 1710.05832 [gr-qc]

Journal:

LIGO science summary:

This is the paper announcing the gravitational-wave detection. It gives an overview of the properties of the signal, initial estimates of the parameters of the source (see the GW170817 Properties Paper for updates) and the binary neutron star merger rate, as well as an overview of results from the other companion papers.

I was disappointed that “the era of gravitational-wave multi-messenger astronomy has opened with a bang” didn’t make the conclusion of the final draft.

**More details:** The GW170817 Discovery Paper summary

**Title:** Multi-messenger observations of a binary neutron star merger
**arXiv:** 1710.05833 [astro-ph.HE]

Journal:

LIGO science summary:

I’ve numbered this paper as −1 as it gives an overview of *all* the observations—gravitational wave, electromagnetic and neutrino—accompanying GW170817. I feel a little sorry for the neutrino observers, as they’re the only ones not to make a detection. Drawing together the gravitational wave and electromagnetic observations, we can confirm that binary neutron star mergers are the progenitors of (at least some) short gamma-ray bursts and kilonovae.

Do *not* print this paper: the author list stretches across 23 pages.

**More details:** The Multimessenger Astronomy Paper summary

**Title:** Gravitational waves and gamma-rays from a binary neutron star merger: GW170817 and GRB 170817A
**arXiv:** 1710.05834 [astro-ph.HE]

Journal:

LIGO science summary:

Here we bring together the LIGO–Virgo observations of GW170817 and the Fermi and INTEGRAL observations of GRB 170817A. From the spatial and temporal coincidence of the gravitational waves and gamma rays, we establish that the two are associated with each other. There is a 1.7 s time delay between the merger time estimated from gravitational waves and the arrival of the gamma-rays. From this, we make some inferences about the structure of the jet which is the source of the gamma rays. We can also use this to constrain deviations from general relativity, which is cool. Finally, we estimate that there will be 0.3–1.7 joint gamma-ray–gravitational-wave detections per year once our gravitational-wave detectors reach design sensitivity!

**More details:** The GW170817 Gamma-ray Burst Paper summary

**Title:** A gravitational-wave standard siren measurement of the Hubble constant [bonus note]
**arXiv:** 1710.05835 [astro-ph.CO]

Journal:

LIGO science summary:

The Hubble constant quantifies the current rate of expansion of the Universe. If you know how far away an object is, and how fast it is moving away (due to the expansion of the Universe, not because it’s on a bus or something, that is important), you can estimate the Hubble constant. Gravitational waves give us an estimate of the distance to the source of GW170817. The observations of the optical transient AT 2017gfo allow us to identify the galaxy NGC 4993 as the host of GW170817’s source. We know the redshift of the galaxy (which indicates how fast it’s moving). Therefore, putting the two together we can infer the Hubble constant in a completely new way.
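The arithmetic behind the standard-siren measurement is pleasingly simple in the nearby-Universe limit. A sketch with rough, round numbers chosen for illustration (not the published analysis, which marginalises over many uncertainties):

```python
C_KM_S = 299792.458  # speed of light in km/s

def hubble_constant(distance_mpc, redshift):
    """Estimate the Hubble constant in km/s/Mpc from a standard siren.

    For nearby sources, the recession velocity is approximately c * z,
    so H0 ~ c * z / d.
    """
    return C_KM_S * redshift / distance_mpc

# Roughly GW170817-like numbers: a distance of ~40 Mpc (from the gravitational
# waves) and a recession velocity of ~3000 km/s (from the galaxy's redshift)
h0 = hubble_constant(40.0, 3000.0 / C_KM_S)
```

With these round inputs the estimate comes out near 75 km/s/Mpc; the real measurement has a broad uncertainty dominated by the distance, which is exactly why combining many standard sirens is the long-term goal.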

**More details:** The GW170817 Hubble Constant Paper summary

**Title:** Estimating the contribution of dynamical ejecta in the kilonova associated with GW170817
**arXiv:** 1710.05836 [astro-ph.HE]

Journal:

LIGO science summary:

During the coalescence of two neutron stars, lots of neutron-rich matter gets ejected. This undergoes rapid radioactive decay, which powers a kilonova, an optical transient. The observed signal depends upon the material ejected. Here, we try to use our gravitational-wave measurements to predict the properties of the ejecta ahead of the flurry of observational papers.

**More details:** The GW170817 Kilonova Paper summary

**Title:** GW170817: Implications for the stochastic gravitational-wave background from compact binary coalescences
**arXiv:** 1710.05837 [gr-qc]

We can detect signals if they are loud enough, but there will be many quieter ones that we cannot pick out from the noise. These add together to form an overlapping background of signals, a background rumbling in our detectors. We use the inferred rate of binary neutron star mergers to estimate their background. This is smaller than the background from binary black hole mergers (black holes are more massive, so they’re intrinsically louder), but they all add up. It’ll still be a few years before we could detect a background signal.

**More details:** The GW170817 Stochastic Paper summary

**Title:** On the progenitor of binary neutron star merger GW170817
**arXiv:** 1710.05838 [astro-ph.HE]

Journal:

LIGO science summary:

We know that GW170817 came from the coalescence of two neutron stars, but where did these neutron stars come from? Here, we combine the parameters inferred from our gravitational-wave measurements, the observed position of AT 2017gfo in NGC 4993 and models for the host galaxy, to estimate properties like the kick imparted to neutron stars during the supernova explosion and how long it took the binary to merge.

**More details:** The GW170817 Progenitor Paper summary

**Title:** Search for high-energy neutrinos from binary neutron star merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory
**arXiv:** 1710.05839 [astro-ph.HE]

Journal:

This is the search for neutrinos from the source of GW170817. Lots of neutrinos are emitted during the collision, but not enough to be detectable on Earth. Indeed, we don’t find any neutrinos, but we combine results from three experiments to set upper limits.

**More details:** The GW170817 Neutrino Paper summary

**Title:** Search for post-merger gravitational waves from the remnant of the binary neutron star merger GW170817
**arXiv:** 1710.09320 [astro-ph.HE]

Journal:

LIGO science summary:

After the two neutron stars merged, what was left? A larger neutron star or a black hole? Potentially we could detect gravitational waves from a wibbling neutron star, as it sloshes around following the collision. We don’t. It would have to be a lot closer for this to be plausible. However, this paper outlines how to search for such signals; the GW170817 Properties Paper contains a more detailed look at any potential post-merger signal.

**More details:** The GW170817 Post-merger Paper summary

**Title:** Properties of the binary neutron star merger GW170817
**arXiv:** 1805.11579 [gr-qc]

In the GW170817 Discovery Paper we presented initial estimates for the properties of GW170817’s source. These were the best we could do on the tight deadline for the announcement (it was a pretty good job in my opinion). Now we have had a bit more time, we can present a new, improved analysis. This uses recalibrated data and a wider selection of waveform models. We also fold in our knowledge of the source location, thanks to the observation of AT 2017gfo by our astronomer partners, for our best results. If you want to know the details of GW170817’s source, this is the paper for you!

If you’re looking for the most up-to-date results regarding GW170817, check out the **O2 Catalogue Paper**.

**More details:** The GW170817 Properties Paper summary

**Title:** GW170817: Measurements of neutron star radii and equation of state
**arXiv:** 1805.11581 [gr-qc]

Neutron stars are made of weird stuff: nuclear density material which we cannot replicate here on Earth. Neutron star matter is often described in terms of an equation of state, a relationship that explains how the material changes at different pressures or densities. A stiffer equation of state means that the material is harder to squash, and a softer equation of state is easier to squish. This means that for a given mass, a stiffer equation of state will predict a larger, fluffier neutron star, while a softer equation of state will predict a more compact, denser neutron star. In this paper, we assume that GW170817’s source is a binary neutron star system, where both neutron stars have the same equation of state, and see what we can infer about neutron star stuff.

**More details:** The GW170817 Equation-of-state Paper summary

**Synopsis:** GW170817 Discovery Paper

**Read this if:** You want all the details of our first gravitational-wave observation of a binary neutron star coalescence

**Favourite part:** Look how well we measure the chirp mass!

GW170817 was a remarkable gravitational-wave discovery. It is the loudest signal observed to date, and the source with the lowest mass components. I’ve written about some of the highlights of the discovery in my previous **GW170817 discovery post**.

Binary neutron stars are one of the principal targets for LIGO and Virgo. The first observational evidence for the existence of gravitational waves came from observations of binary pulsars—binary neutron star systems where (at least) one of the components is a pulsar. Therefore (unlike binary black holes), we knew that these sources existed before we turned on our detectors. What was less certain was how often they merge. In our first advanced-detector observing run (O1), we didn’t find any, allowing us to estimate an upper limit on the merger rate of . Now, we know much more about merging binary neutron stars.

GW170817, as a loud and long signal, is a highly significant detection. You can see it in the data by eye. Therefore, it should have been an easy detection. As is often the case with real experiments, it wasn’t quite that simple. Data transfer from Virgo had stopped overnight, and there was a glitch (a non-stationary and non-Gaussian noise feature) in the Livingston detector, which meant that these data weren’t automatically analysed. Nevertheless, GstLAL flagged something interesting in the Hanford data, and there was a mad flurry to get the other data in place so that we could analyse the signal in all three detectors. I remember being sceptical in these first few minutes until I saw the plot of Livingston data, which blew me away: the chirp was clearly visible despite the glitch!

Using data from both of our LIGO detectors (as discussed for GW170814, our offline algorithms searching for coalescing binaries only use these two detectors during O2), GW170817 is an absolutely gold-plated detection. GstLAL estimates a false alarm rate (the rate at which you’d expect something at least this signal-like to appear in the detectors due to a random noise fluctuation) of less than one in 1,100,000 years, while PyCBC estimates the false alarm rate to be less than one in 80,000 years.

Parameter estimation (inferring the source properties) used data from all three detectors. We present a (remarkably thorough given the available time) initial analysis in this paper (more detailed results are given in the GW170817 Properties Paper, and the most up-to-date results are in the O2 Catalogue Paper). This signal is challenging to analyse because of the glitch and because binary neutron stars are made of stuff, which can leave an imprint on the waveform. We’ll be looking at the effects of these complications in more detail in the future. Our initial results are:

- The source is localized to a region of about at a distance of (we typically quote results at the 90% credible level). This is the closest gravitational-wave source yet.
- The chirp mass is measured to be , much lower than for our binary black hole detections.
- The spins are not well constrained, and the uncertainty from this means that we don’t get precise measurements of the individual component masses. We quote results with two choices of spin prior: the astrophysically motivated limit of 0.05, and the more agnostic and conservative upper bound of 0.89. I’ll stick to using the low-spin prior results by default.
- Using the low-spin prior, the component masses are – and –. We have the convention that , which is why the masses look unequal; there’s a lot of support for them being nearly equal. These masses match what you’d expect for neutron stars.

As mentioned above, neutron stars are made of stuff, and the properties of this leave an imprint on the waveform. If neutron stars are big and fluffy, they will get tidally distorted. Raising tides sucks energy and angular momentum out of the orbit, making the inspiral quicker. If neutron stars are small and dense, tides are smaller and the inspiral looks like that for two black holes. For this initial analysis, we used waveforms which include some tidal effects, so we get some preliminary information on the tides. We cannot exclude zero tidal deformation, meaning we cannot rule out from gravitational waves alone that the source contains at least one black hole (although this would be surprising, given the masses). However, we can place a weak upper limit on the combined dimensionless tidal deformability of . This isn’t too informative, in terms of working out what neutron stars are made from, but we’ll come back to this in the GW170817 Properties Paper and the GW170817 Equation-of-state Paper.

Given the source masses, and all the electromagnetic observations, we’re pretty sure this is a binary neutron star system—there’s nothing to suggest otherwise.

Having observed one (and only one) binary neutron star coalescence in O1 and O2, we can now put better constraints on the merger rate. As a first estimate, we assume that component masses are uniformly distributed between and , and that spins are below 0.4 (in between the limits used for parameter estimation). Given this, we infer that the merger rate is , safely within our previous upper limit [citation note].

There’s a lot more we can learn from GW170817, especially as we don’t *just* have gravitational waves as a source of information, and this is explained in the companion papers.

**Synopsis:** Multimessenger Paper

**Read this if:** Don’t. Use it to look up which other papers to read.

**Favourite part:** The figures! It was a truly amazing observational effort to follow up GW170817

The remarkable thing about this paper is that it exists. Bringing together such a diverse (and competitive) group was a huge effort. Alberto Vecchio was one of the editors, and each evening when leaving the office, he was convinced that the paper would have fallen apart by morning. However, it hung together—the story was too compelling. This paper explains how gravitational waves, short gamma-ray bursts, and kilonovae all come from a single source [citation note]. This is the greatest collaborative effort in the history of astronomy.

The paper outlines the discoveries and all of the initial set of observations. If you want to understand the observations themselves, this is not the paper to read. However, using it, you can track down the papers that you do want. A huge amount of care went into trying to describe how the discoveries were made: for example, Fermi observed GRB 170817A independently of the gravitational-wave alert, and we found GW170817 without relying on the GRB alert; however, the communication between teams meant that we took everything much more seriously and pushed out alerts as quickly as possible. For more on the history of observations, I’d suggest scrolling through the **GCN archive**.

The paper starts with an overview of the gravitational-wave observations from the inspiral, then the prompt detection of GRB 170817A, before describing how the gravitational-wave localization enabled discovery of the optical transient AT 2017gfo. This source, in the nearby galaxy NGC 4993, was then the subject of follow-up across the electromagnetic spectrum. We have a huge amount of photometry and spectroscopy of the source, showing general agreement with models for a kilonova. X-ray and radio afterglows were observed 9 days and 16 days after the merger, respectively [citation note]. No neutrinos were found, which isn’t surprising.

**Synopsis:** GW170817 Gamma-ray Burst Paper

**Read this if:** You’re interested in the jets which power short gamma-ray bursts or in tests of general relativity

**Favourite part:** How much science can come from a simple time delay measurement

This joint LIGO–Virgo–Fermi–INTEGRAL paper combines our observations of GW170817 and GRB 170817A. The result is one of the most contentful of the companion papers.

The first item on the to-do list for joint gravitational-wave–gamma-ray science is to establish that we are really looking at the same source.

From the GW170817 Discovery Paper, we know that its source is consistent with being a binary neutron star system. Hence, there is matter around which can create the gamma-rays. The Fermi-GBM and INTEGRAL observations of GRB 170817A indicate that it falls into the short class, which is hypothesised to be the result of a binary neutron star coalescence. Therefore, it looks like we could have the right ingredients.

Now, given that it is possible that the gravitational waves and gamma rays have the same source, we can calculate the probability of the two occurring by chance. The probability of temporal coincidence is ; adding in spatial coincidence too, the probability becomes . It’s safe to conclude that the two are associated: merging binary neutron stars *are* the source of at least some short gamma-ray bursts!
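As a rough illustration of how such a chance-coincidence probability is built, here is a back-of-the-envelope sketch. The rate, window, and sky fraction below are assumptions for illustration, not the paper’s actual inputs:

```python
import math

# Illustrative inputs (assumed, not the paper's exact values):
grb_rate_per_day = 0.2   # all-sky detected short gamma-ray burst rate
window_s = 10.0          # coincidence window around the merger (s)

# Expected number of unrelated short GRBs landing in the window
expected = grb_rate_per_day * window_s / 86400.0

# Poisson probability of at least one chance temporal coincidence
p_temporal = 1.0 - math.exp(-expected)

# Folding in spatial coincidence: multiply by the (assumed) fraction
# of the sky covered by the gravitational-wave localization
sky_fraction = 0.01
p_spatiotemporal = p_temporal * sky_fraction

print(f"temporal: {p_temporal:.1e}; temporal + spatial: {p_spatiotemporal:.1e}")
```

The real analysis is more careful about trials factors and rate estimates, but the structure (rate times window, then a sky-overlap factor) is the same.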

There is a delay time between the inferred merger time and the gamma-ray burst. Given that the signal has travelled for about 85 million years (taking the 5% lower limit on the inferred distance), this is a really small difference: gravity and light must travel at almost exactly the same speed. To derive an exact limit you need to make some assumptions about when the gamma-rays were created. We’d expect some delay, as it takes time for the jet to be created, and then for the gamma-rays to blast their way out of the surrounding material. We conservatively (and arbitrarily) take a window for the delay of 0 to 10 seconds, which gives

.

That’s pretty small!
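The arithmetic behind this bound can be sketched as follows. The roughly 1.7 s observed delay (the value reported for GRB 170817A) and the ~85-million-year travel time come from the observations; the 0–10 s emission window is the paper’s conservative assumption:

```python
# Sketch of the speed-of-gravity bound from the GW/gamma-ray delay.
SECONDS_PER_YEAR = 3.156e7
travel_time_s = 85e6 * SECONDS_PER_YEAR   # ~85 million years, in seconds

delay_observed_s = 1.7       # gamma rays arrived ~1.7 s after the merger
delay_emission_max_s = 10.0  # assumed maximum delay in launching the gamma rays

# If gravity were faster, the whole observed delay could be a speed difference
upper = delay_observed_s / travel_time_s
# If light were faster, an emission delay of up to 10 s could hide up to 8.3 s
lower = -(delay_emission_max_s - delay_observed_s) / travel_time_s

print(f"{lower:.1e} < (v_gw - v_light)/v_light < {upper:.1e}")
```

The tiny fractional bounds (parts in 10¹⁵) fall straight out of dividing seconds of delay by petaseconds of travel time.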

General relativity predicts that gravity and light should travel at the same speed, so I wasn’t too surprised by this result. I was surprised, however, that this result seems to have caused a flurry of activity in effectively ruling out several modified theories of gravity. I guess there’s not much point in explaining what these are now, but they are mostly theories which add in extra fields, which allow you to tweak how gravity works so you can explain some of the effects attributed to dark energy or dark matter. I’d recommend Figure 2 of Ezquiaga & Zumalacárregui (2017) for a summary of which theories pass the test and which are in trouble; Kase & Tsujikawa (2018) give a good review.

We don’t discuss the theoretical implications of the relative speeds of gravity and light in this paper, but we do use the time delay to place bounds on particular potential deviations from general relativity.

- We look at a particular type of Lorentz invariance violation. This is similar to what we did for GW170104, where we looked at the dispersion of gravitational waves, but here it is for the case of , which we couldn’t test.
- We look at the Shapiro delay, which is the time difference travelling in a curved spacetime relative to a flat one. That light and gravity are affected the same way is a test of the weak equivalence principle—that everything falls the same way. The effects of the curvature can be quantified with the parameter , which describes the amount of curvature per unit mass. In general relativity . Considering the gravitational potential of the Milky Way, we find that [citation note].

As you’d expect given the small time delay, these bounds are pretty tight! If you’re working on a modified theory of gravity, you have some extra checks to do now.

From our gravitational-wave and gamma-ray observations, we can also make some deductions about the engine which created the burst. The complication here is that we’re not exactly sure what generates the gamma rays, and so deductions are model dependent. Section 5 of the paper uses the time delay between the merger and the burst, together with how quickly the burst rises and fades, to place constraints on the size of the emitting region in different models. The paper goes through the derivation in a step-by-step way, so I’ll not summarise that here: if you’re interested, check it out.

GRB 170817A was unusually dim [citation note]. The plot above compares it to other gamma-ray bursts. It is definitely in the tail. Since it appears so dim, we think that we are not looking at a standard gamma-ray burst. The most obvious explanation is that we are not looking directly down the jet: we don’t expect to see many off-axis bursts, since they are dimmer. We expect that a gamma-ray burst would originate from a jet of material launched along the direction of the total angular momentum. From the gravitational waves alone, we can estimate that the misalignment angle between the orbital angular momentum axis and the line of sight is (adding in the identification of the host galaxy, this becomes using the Planck value for the Hubble constant and with the SH0ES value), so this is consistent with viewing the burst off-axis (updated numbers are given in the GW170817 Properties Paper). There are multiple models for such gamma-ray emission, as illustrated below. We could have a uniform top-hat jet (the simplest model) which we are viewing from slightly to the side, we could have a structured jet, which is concentrated on-axis but we are seeing from off-axis, or we could have a cocoon of material pushed out of the way by the main jet, which we are viewing emission from. Other electromagnetic observations will tell us more about the inclination and the structure of the jet [citation note].

Now that we know gamma-ray bursts can be this dim, if we observe faint bursts (with unknown distances), we have to consider the possibility that they are dim-and-close in addition to the usual bright-and-far-away.

The paper closes by considering how many more joint gravitational-wave–gamma-ray detections of binary neutron star coalescences we should expect in the future. In our next observing run, we could expect 0.1–1.4 joint detections per year, and when LIGO and Virgo get to design sensitivity, this could be 0.3–1.7 detections per year.

**Synopsis:** GW170817 Hubble Constant Paper

**Read this if:** You have an interest in cosmology

**Favourite part:** In the future, we may be able to settle the argument between the cosmic microwave background and supernova measurements

The Universe is expanding. In the nearby Universe, this can be described using the Hubble relation

*v* = *H*₀*D*,

where *v* is the expansion velocity, *H*₀ is the Hubble constant and *D* is the distance to the source. GW170817 is sufficiently nearby for this relationship to hold. We know the distance from the gravitational-wave measurement, and we can estimate the velocity from the redshift of the host galaxy. Therefore, it should be simple to combine the two to find the Hubble constant. Of course, there are a few complications…

This work is built upon the identification of the optical counterpart AT 2017gfo. This allows us to identify the galaxy NGC 4993 as the host of GW170817’s source: we calculate that there’s a probability that AT 2017gfo would be as close to NGC 4993 on the sky by chance. Without a counterpart, it would still be possible to infer the Hubble constant statistically by cross-referencing the inferred gravitational-wave source location with the ensemble of compatible galaxies in a catalogue (you assign a probability to the source being associated with each galaxy, instead of saying it’s definitely in this one). The identification of NGC 4993 makes things much simpler.

As a first ingredient, we need the distance from gravitational waves. For this, a slightly different analysis was done than in the GW170817 Discovery Paper. We fix the sky location of the source to match that of AT 2017gfo, and we use (binary black hole) waveforms which don’t include any tidal effects. The sky position needs to be fixed, because for this analysis we are assuming that we definitely know where the source is. The tidal effects were not included (but precessing spins were) because we needed results quickly: the details of spins and tides shouldn’t make much difference to the distance. From this analysis, we find the distance is if we follow our usual convention of quoting the median and symmetric 90% credible interval; however, this paper primarily quotes the most probable value and minimal (not-necessarily symmetric) 68.3% credible interval. Following this convention, we write the distance as .

While NGC 4993 being close by makes the relationship for calculating the Hubble constant simple, it adds a complication for calculating the velocity. The motion of the galaxy is not only due to the expansion of the Universe, but because of how it is moving within the gravitational potentials of nearby groups and clusters. This is referred to as peculiar motion. Adding this in increases our uncertainty on the velocity. Combining results from the literature, our final estimate for the velocity is .

We put together the velocity and the distance in a Bayesian analysis. This is a little more complicated than simply dividing the numbers (although that gives you a similar result). You have to be careful about writing things down, otherwise you might implicitly assume a prior that you didn’t intend (my most useful contribution to this paper is probably a whiteboard conversation with Will Farr where we tracked down a difference in prior assumptions approaching the problem two different ways). This is all explained in the Methods; it’s not easy to read, but it makes sense when you work through it. The result is (quoted as maximum a posteriori value and 68% interval, or in the usual median-and-90%-interval convention). An updated set of results is given in the GW170817 Properties Paper: (68% interval using the low-spin prior). This is nicely (and diplomatically) consistent with existing results.
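A minimal Monte Carlo sketch of the combination, using illustrative (assumed) numbers for the distance and velocity posteriors rather than the actual measurements:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Illustrative posteriors (assumed widths and centres, not the paper's):
d_samples = rng.normal(43.0, 7.0, N)      # luminosity distance (Mpc)
v_samples = rng.normal(3000.0, 170.0, N)  # Hubble-flow velocity (km/s),
                                          # with peculiar-velocity uncertainty folded in

# Each joint sample gives a Hubble constant via v = H0 * d
h0 = v_samples / d_samples                # km/s/Mpc

# Note: doing it this way implicitly inherits the priors used when
# sampling d and v; the paper is careful to make its H0 prior explicit,
# which is exactly the implicit-prior trap described above.

lo, med, hi = np.percentile(h0, [5, 50, 95])
print(f"H0 ~ {med:.0f} (+{hi - med:.0f} / -{med - lo:.0f}) km/s/Mpc")
```

Dividing samples like this reproduces the headline number; the careful Bayesian treatment matters for getting the interval and prior dependence right.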

The distance has considerable uncertainty because there is a degeneracy between the distance and the orbital inclination (the angle of the normal to the orbital plane relative to the line of sight). If you could figure out the inclination from another observation, then you could tighten constraints on the Hubble constant, or if you’re willing to adopt one of the existing values of the Hubble constant, you can pin down the inclination. Data (updated data) to help you try this yourself are available [citation note].

In the future we’ll be able to combine multiple events to produce a more precise gravitational-wave estimate of the Hubble constant. Chen, Fishbach & Holz (2017) is a recent study of how measurements should improve with more events: we should get to 4% precision after around 100 detections.

**Synopsis:** GW170817 Kilonova Paper

**Read this if:** You want to check our predictions for ejecta against observations

**Favourite part:** We might be able to create all of the heavy r-process elements—including the gold used to make Nobel Prizes—from merging neutron stars

When two neutron stars collide, lots of material gets ejected outwards. This neutron-rich material undergoes nuclear decay—now no longer being squeezed by the strong gravity inside the neutron star, it is unstable, and decays from the strange neutron star stuff to become more familiar elements (elements heavier than iron including gold and platinum). As these r-process elements are created, the nuclear reactions power a kilonova, the optical (infrared–ultraviolet) transient accompanying the merger. The properties of the kilonova depend upon how much material is ejected.

In this paper, we try to estimate how much material made up the dynamical ejecta from the GW170817 collision. Dynamical ejecta is material which escapes as the two neutron stars smash into each other (either from tidal tails or material squeezed out from the collision shock). There are other sources of ejected material, such as winds from the accretion disk which forms around the remnant (whether black hole or neutron star) *following* the collision, so this is only part of the picture; however, we can estimate the mass of the dynamical ejecta from our gravitational-wave measurements using simulations of neutron star mergers. These estimates can then be compared with electromagnetic observations of the kilonova [citation note].

The amount of dynamical ejecta depends upon the masses of the neutron stars, how rapidly they are rotating, and the properties of the neutron star material (described by the equation of state). Here, we use the masses inferred from our gravitational-wave measurements and feed these into fitting formulae calibrated against simulations for different equations of state. These don’t include spin, and they have quite large uncertainties (we include a 72% relative uncertainty when producing our results), so these are not precision estimates. Neutron star physics is a little messy.

We find that the dynamical ejecta is – (assuming the low-spin mass results). These estimates can be fed into models for kilonovae to produce lightcurves, which we do. There is plenty of this type of modelling in the literature as observers try to understand their observations, so this is nothing special in terms of understanding this event. However, it could be useful in the future (once we have hoverboards), as we might be able to use gravitational-wave data to predict how bright a kilonova will be at different times, and so help astronomers decide upon their observing strategy.

Finally, we can consider how much r-process material we can create from the dynamical ejecta. Again, we don’t consider winds, which may also contribute to the total budget of r-process elements from binary neutron stars. Our estimate for r-process elements needs several ingredients: (i) the mass of the dynamical ejecta, (ii) the fraction of the dynamical ejecta converted to r-process elements, (iii) the merger rate of binary neutron stars, and (iv) the convolution of the star formation rate and the time delay between binary formation and merger (which we take to be ). Together (i) and (ii) give the mass of r-process elements per binary neutron star (assuming that GW170817 is typical); (iii) and (iv) give the total density of mergers throughout the history of the Universe; and combining everything together you get the total mass of r-process elements accumulated over time. Using the estimated binary neutron star merger rate of , we can explain the Galactic abundance of r-process elements if more than about 10% of the dynamical ejecta is converted.
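The bookkeeping in that estimate is just a product of the four ingredients. Here is a sketch with illustrative (assumed) numbers in place of the paper’s values:

```python
# Order-of-magnitude r-process budget; every number here is an
# illustrative assumption, not a value from the paper.

m_dyn_msun = 0.01          # (i) dynamical ejecta per merger (solar masses)
f_rprocess = 0.5           # (ii) fraction converted to r-process elements
rate_gpc3_yr = 1500.0      # (iii) merger rate (Gpc^-3 yr^-1)
cosmic_time_yr = 1.0e10    # (iv) crude stand-in for the rate-history integral

# Accumulated r-process mass density (solar masses per Gpc^3),
# ignoring the evolution of the merger rate with redshift
rho_rprocess = m_dyn_msun * f_rprocess * rate_gpc3_yr * cosmic_time_yr
print(f"~{rho_rprocess:.1e} solar masses of r-process material per Gpc^3")
```

The real calculation replaces the final factor with the convolution of the star formation rate and the delay-time distribution, but the structure is this simple multiplication.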

**Synopsis:** GW170817 Stochastic Paper

**Read this if:** You’re impatient to find a background of gravitational waves

**Favourite part:** The background symphony

For every loud gravitational-wave signal, there are many more quieter ones. We can’t pick these out of the detector noise individually, but they are still there, in our data. They add together to form a stochastic background, which we might be able to detect by correlating the data across our detector network.
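A toy demonstration of why cross-correlation works, with made-up amplitudes: a common signal too weak to see in either stream survives the averaging, while the independent noise does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A weak common "background" buried in two independent noise streams
# (all amplitudes here are illustrative, not calibrated to real detectors)
background = 0.1 * rng.standard_normal(n)
det1 = background + rng.standard_normal(n)
det2 = background + rng.standard_normal(n)

# Each stream alone looks like unit-variance noise, but multiplying the
# two and averaging cancels the uncorrelated noise, while the shared
# background variance (0.1**2 = 0.01) survives
cross = np.mean(det1 * det2)
print(f"cross-correlation: {cross:.4f}")
```

In a real search the correlation is done in the frequency domain with a filter weighted by the expected background spectrum, but the principle is this averaging.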

Following the detection of GW150914, we considered the background due to binary black holes. This is quite loud, and might be detectable in a few years. Here, we add in binary neutron stars. This doesn’t change things too much, but gives a more accurate picture.

Binary black holes have higher masses than binary neutron stars. This means that their gravitational-wave signals are louder, and shorter (they chirp quicker and chirp up to a lower frequency). Being louder, binary black holes dominate the overall background. Being shorter, they have a different character: binary black holes form a popcorn background of short chirps which rarely overlap, but binary neutron stars are long enough to overlap, forming a more continuous hum.
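The popcorn/hum distinction comes down to how many signals are in band at any instant, which is roughly the merger rate times the in-band signal duration. The rates and durations below are assumptions for illustration:

```python
# Popcorn or hum? Average number of signals in band at any instant
# ~ (merger rate across the Universe) x (time each signal spends in band).
# All rates and durations below are illustrative assumptions.

SECONDS_PER_YEAR = 3.156e7

bbh_rate_per_yr = 2.4e5   # binary black hole mergers per year (assumed)
bbh_duration_s = 10.0     # heavy systems chirp through the band quickly (assumed)

bns_rate_per_yr = 3.0e5   # binary neutron star mergers per year (assumed)
bns_duration_s = 1000.0   # lighter systems spend far longer in band (assumed)

bbh_overlap = bbh_rate_per_yr * bbh_duration_s / SECONDS_PER_YEAR
bns_overlap = bns_rate_per_yr * bns_duration_s / SECONDS_PER_YEAR

print(f"BBH signals in band at once: {bbh_overlap:.2f} (popcorn)")
print(f"BNS signals in band at once: {bns_overlap:.1f} (continuous hum)")
```

With numbers in this ballpark, a binary black hole signal is usually alone in the data (popcorn), while binary neutron star signals always overlap (hum).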

The dimensionless energy density at a gravitational-wave frequency of 25 Hz from binary black holes is , and from binary neutron stars it is . There are on average binary black hole signals in detectors at a given time, and binary neutron star signals.

To calculate the background, we need the merger rate. We now have an estimate for binary neutron stars, and we take the most recent estimate from the GW170104 Discovery Paper for binary black holes. We use the rates assuming the power law mass distribution for this, but the result isn’t too sensitive to this: we care about the number of signals in the detector, and the rates are derived from this, so they agree when working backwards. We evolve the merger rate density across cosmic history by factoring in the star formation rate and the delay time between formation and merger. A similar thing was done in the GW170817 Kilonova Paper; here we used a slightly different star formation rate, but results are basically the same with either. The addition of binary neutron stars increases the stochastic background from compact binaries by about 60%.

Detection in our next observing run, at a moderate significance, is possible, but I think unlikely. It will be a few years until detection is plausible, but the addition of binary neutron stars will bring this closer. When we do detect the background, it will give us another insight into the merger rate of binaries.

**Synopsis:** GW170817 Progenitor Paper

**Read this if:** You want to know about neutron star formation and supernovae

**Favourite part:** The Spirography figures

The identification of NGC 4993 as the host galaxy of GW170817’s binary neutron star system allows us to make some deductions about how it formed. In this paper, we simulate a large number of binaries, tracing the later stages of their evolution, to see which ones end up similar to GW170817. By doing so, we learn something about the supernova explosion which formed the second of the two neutron stars.

The neutron stars started life as a pair of regular stars [bonus note]. These burned through their nuclear fuel, and once this was exhausted, they exploded as supernovae. The core of the star collapses down to become a neutron star, and the outer layers are blasted off. The more massive star evolves faster, and goes supernova first. We’ll consider the effects of the second supernova, and the kick it gives to the binary: the orbit changes both because of the rocket effect of material being blasted off, and because one of the components loses mass.

From the combination of the gravitational-wave and electromagnetic observations of GW170817, we know the masses of the neutron stars, the type of galaxy they are found in, and the position of the binary within the galaxy at the time of merger (we don’t know the exact position, just its projection as viewed from Earth, but that’s something).

We start by simulating lots of binaries just before the second supernova explodes. These are scattered at different distances from the centre of the galaxy, have different orbital separations, and have different masses of the pre-supernova star. We then add the effects of the supernova, adding in a kick. We fix the neutron star masses to match those we inferred from the gravitational-wave measurements. If the supernova kick is too big, the binary flies apart and will never merge (boo). If the binary remains bound, we follow its evolution as it moves through the galaxy. The structure of the galaxy is simulated as a simple spherical model, a Hernquist profile for the stellar component and a Navarro–Frenk–White profile for the dark matter halo [citation note], which are pretty standard. The binary shrinks as gravitational waves are emitted, and eventually merges. If the merger happens at a position which matches our observations (yay), we know that the initial conditions could explain GW170817.
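The bound-or-unbound step of such simulations can be sketched with a toy Monte Carlo: instantaneous mass loss plus an isotropic kick, then a check of the post-supernova orbital energy. All masses, separations, and kick distributions below are illustrative assumptions, not the paper’s inputs.

```python
import numpy as np

G = 1.327e11  # G * (1 solar mass) in km^3 s^-2; work in km, s, solar masses

rng = np.random.default_rng(1)
N = 100_000

# Illustrative pre-supernova binary (assumed values):
m_he = 3.0    # helium star about to collapse (solar masses)
m_ns1 = 1.4   # first-born neutron star companion (solar masses)
m_ns2 = 1.4   # neutron star left behind by the supernova
a_km = 5.0e6  # pre-supernova orbital separation (km)

# Relative orbital speed for a circular pre-supernova orbit
v_orb = np.sqrt(G * (m_he + m_ns1) / a_km)  # km/s

# Isotropic kicks: magnitude uniform (for illustration), direction random
v_kick = rng.uniform(0.0, 1000.0, N)     # km/s
cos_theta = rng.uniform(-1.0, 1.0, N)    # angle between kick and orbital velocity

# New relative speed after the kick (law of cosines)
v_new_sq = v_orb**2 + v_kick**2 + 2.0 * v_orb * v_kick * cos_theta

# The binary stays bound if the post-supernova specific orbital energy
# is negative: v^2/2 - G*M_new/a < 0
bound = v_new_sq / 2.0 < G * (m_ns1 + m_ns2) / a_km
frac = bound.mean()
print(f"fraction of binaries surviving the kick: {frac:.2f}")
```

The real simulations track the full orbit, the galactic potential, and the subsequent gravitational-wave inspiral, but this bound-versus-unbound filter is why the surviving systems constrain the kick so strongly.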

The plot above shows the constraints on the progenitor’s properties. The inferred second supernova kick is , similar to what has been observed for neutron stars in the Milky Way; the pre-supernova stellar mass is (we assume that the star is just a helium core, with the outer hydrogen layers having been stripped off, hence the subscript); the pre-supernova orbital separation was , and the offset from the centre of the galaxy at the time of the supernova was . The strongest constraints come from keeping the binary bound after the supernova; results are largely independent of the delay time once this gets above [citation note].

As we collect more binary neutron star detections, we’ll be able to deduce more about how they form. If you’re interested in how to build a binary neutron star system, the introduction to this paper is well referenced; Tauris *et al*. (2017) is a detailed (pre-GW170817) review.

**Synopsis:** GW170817 Neutrino Paper

**Read this if:** You want a change from gravitational wave–electromagnetic multimessenger astronomy

**Favourite part:** There’s still something to look forward to with future detections—GW170817 hasn’t stolen all the firsts. Also this paper is *not* Abbott *et al*.

This is a joint search by ANTARES, IceCube and the Pierre Auger Observatory for neutrinos coincident with GW170817. Knowing both the location and the time of the binary neutron star merger makes it easy to search for counterparts. No matching neutrinos were detected.

Using the non-detections, we can place upper limits on the neutrino flux. These are summarised in the plots below. Optimistic models for prompt emission from an on-axis gamma-ray burst would lead to a detectable flux, but otherwise theoretical predictions indicate that a non-detection is expected. From electromagnetic observations, it doesn’t seem like we are on-axis, so the story all fits together.

Super-Kamiokande have done their own search for neutrinos, from to around (Abe *et al*. 2018). They found nothing in either the window around the event or the window following it. Similarly, BUST looked for muon neutrinos and antineutrinos, and found nothing in the window around the event, and no excess in the window following it (Petkov *et al*. 2019).

The only post-detection neutrino modelling paper I’ve seen is Biehl, Heinze & Winter (2017). They model prompt emission from the same source as the gamma-ray burst and find that neutrino fluxes would be of current sensitivity.

**Synopsis:** GW170817 Post-merger Paper

**Read this if:** You are an optimist

**Favourite part:** We really do check everywhere for signals

Following the inspiral of two black holes, we know what happens next: the black holes merge to form a bigger black hole, which quickly settles down to its final stable state. We have a complete model of the gravitational waves from the inspiral–merger–ringdown life of coalescing binary black holes. Binary neutron stars are more complicated.

The inspiral of two binary neutron stars is similar to that for black holes. As they get closer together, we might see some imprint of tidal distortions not present for black holes, but the main details are the same. It is the chirp of the inspiral which we detect. As the neutron stars merge, however, we don’t have a clear picture of what goes on. Material gets shredded and ejected from the neutron stars; the neutron stars smash together; it’s all rather messy. We don’t have a good understanding of what should happen when our neutron stars merge; the details depend upon the properties of the stuff neutron stars are made of—if we could measure the gravitational-wave signal from this phase, we would learn a lot.

There are four plausible outcomes of a binary neutron star merger:

- If the total mass is below the maximum mass for a (non-rotating) neutron star (), we end up with a bigger, but still stable neutron star. Given our inferences from the inspiral (see the plot from the GW170817 Gamma-ray Burst Paper below), this is unlikely.
- If the total mass is above the limit for a stable, non-rotating neutron star, but can still be supported by uniform rotation (), we have a supramassive neutron star. The rotation will slow down due to the emission of electromagnetic and gravitational radiation, and eventually the neutron star will collapse to a black hole. The time until collapse could take something like –; it is unclear if this is long enough for supramassive neutron stars to have a mid-life crisis.
- If the total mass is above the limit for support from uniform rotation, but can still be supported through differential rotation and thermal gradients (), then we have a hypermassive neutron star. The hypermassive neutron star cools quickly through neutrino emission, and its rotation slows through magnetic braking, meaning that it promptly collapses to a black hole in .
- If the total mass is big enough (), the merging neutron stars collapse down to a black hole.
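The classification above can be sketched as a simple threshold comparison. This is purely my illustration, not anything from the paper: the maximum mass and the rotation-support factors below are assumed placeholder values, since the true numbers depend on the unknown neutron star equation of state.

```python
# Illustrative sketch of the four merger outcomes described above.
# All threshold values are assumptions for illustration only.

M_TOV = 2.2     # assumed maximum mass of a non-rotating neutron star (solar masses)
K_SUPRA = 1.2   # assumed support factor from uniform rotation
K_HYPER = 1.5   # assumed support factor from differential rotation + thermal gradients

def remnant(total_mass):
    """Return the expected merger outcome for a given total mass (solar masses)."""
    if total_mass < M_TOV:
        return "stable neutron star"
    if total_mass < K_SUPRA * M_TOV:
        return "supramassive neutron star (delayed collapse)"
    if total_mass < K_HYPER * M_TOV:
        return "hypermassive neutron star (prompt collapse)"
    return "black hole"

# GW170817's total mass was around 2.74 solar masses, which with these
# assumed thresholds lands in the hypermassive range.
print(remnant(2.74))
```

With these (assumed) numbers, GW170817 would have produced a hypermassive neutron star that promptly collapsed, which is indeed one of the favoured scenarios.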

In the case of the collapse to a black hole, we get a ringdown as in the case of a binary black hole merger. The frequency is around , too high for us to currently measure. However, if there is a neutron star, there may be slightly lower frequency gravitational waves from the neutron star matter wibbling about. We’re not exactly sure of the form of these signals, so we perform an unmodelled search for them (knowing the position of GW170817’s source helps for this).

Several different search algorithms were used to hunt for a post-merger signal:

- coherent WaveBurst (cWB) was used to look for short duration () bursts. This searched a window including the merger time and covering the delay to the gamma-ray burst detection, and frequencies of –. Only LIGO data were used, as Virgo data suffered from large noise fluctuations above .
- cWB was used to look for intermediate duration () bursts. This searched a window from the merger time, and frequencies –. This used LIGO and Virgo data.
- The Stochastic Transient Analysis Multi-detector Pipeline (STAMP) was also used to look for intermediate duration signals. This searched the merger time until the end of O2 (in chunks), and frequencies –. This used only LIGO data. There are two variations of STAMP: Zebragard and Lonetrack, and both are used here.

Although GEO is comparable to LIGO and Virgo at the searched high frequencies, its data were not used as we have not yet studied its noise properties in enough detail. Since the LIGO detectors are the most sensitive, their data are the most important for the search.

No plausible candidates were found, so we set some upper limits on what could have been detected. From these, it is not surprising that nothing was found, as we would need pretty much all of the mass of the remnant to somehow be converted into gravitational waves to see something. Results are shown in the plot below. An updated analysis which puts upper limits on the post-merger signal is given in the GW170817 Properties Paper.

We can’t tell the fate of GW170817’s neutron stars from gravitational waves alone [citation note]. As high-frequency sensitivity is improved in the future, we might be able to see something from a *really* close by binary neutron star merger.

**Synopsis:** GW170817 Properties Paper

**Read this if:** You want the best results for GW170817’s source, our best measurement of the Hubble constant, or limits on the post-merger signal

**Favourite part:** Look how tiny the uncertainties are!

As time progresses, we often refine our analyses of gravitational-wave data. This can be because we’ve had time to recalibrate data from our detectors, because better analysis techniques have been developed, or just because we’ve had time to allow more computationally intensive analyses to finish. This paper is our first attempt at improving our inferences about GW170817. The results use an improved calibration of Virgo data, analyse more of the signal (down to a low frequency of 23 Hz, instead of 30 Hz, which gives us about an extra 1500 cycles), use improved models of the waveforms, and include a new analysis looking at the post-merger signal. The results update those given in the GW170817 Discovery Paper, the GW170817 Hubble Constant Paper and the GW170817 Post-merger Paper.

Our initial analysis was based upon a quick-to-calculate post-Newtonian waveform known as TaylorF2. We thought this should be a conservative choice: any results with more complicated waveforms should give tighter constraints. This worked out. We try several different waveform models, each based upon the point-particle waveforms we use for analysing binary black hole signals, with extra pieces added to model the tidal deformation of neutron stars. The results are broadly consistent, so I’ll concentrate on discussing our preferred results calculated using the IMRPhenomPNRT waveform (which uses IMRPhenomPv2 as a base and adds numerical-relativity calibrated tides). As in the GW170817 Discovery Paper, we perform the analysis with two priors on the binary spins: one with spins up to 0.89 (which should safely encompass all possibilities for neutron stars), and one with spins up to 0.05 (which matches observations of binary neutron stars in our Galaxy).

The first analysis we did was to check the location of the source. Reassuringly, we are still perfectly consistent with the location of AT 2017gfo (phew!). The localization is much improved: the 90% sky area is down to just ! Go Virgo!

Having established that it still makes sense that AT 2017gfo pin-points the source location, we use this as the position in subsequent analyses. We always use the sky position of the counterpart and the redshift of the host galaxy (Levan *et al*. 2017), but we don’t typically use the distance. This is because we want to be able to measure the Hubble constant, which relies on using the distance inferred from gravitational waves.

We use the distance from Cantiello *et al*. (2018) [citation note] for one calculation: an estimation of the inclination angle. The inclination is degenerate with the distance (both affect the amplitude of the signal), so having constraints on one lets us measure the other with improved precision. Without the distance information, we find that the angle between the binary’s total angular momentum and the line of sight is for the high-spin prior and for the low-spin prior. The difference between the two results is because the spin angular momentum slightly shifts the direction of the total angular momentum. Incorporating the distance information, for the high-spin prior the angle is (so the misalignment angle is ), and for the low-spin prior it is (misalignment ) [citation note].
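As a reminder of why these two quantities are degenerate, the amplitudes of the two gravitational-wave polarisations depend on the inclination $\iota$ and the luminosity distance $d_L$ as

```latex
h_{+} \propto \frac{1}{d_L}\,\frac{1 + \cos^2\iota}{2}, \qquad
h_{\times} \propto \frac{\cos\iota}{d_L}.
```

For a nearly face-on binary ($\cos\iota \approx \pm 1$), both amplitudes vary only slowly with $\iota$, so a change in inclination can be traded against a change in distance with little effect on the observed signal; an independent distance measurement breaks this trade-off.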

Main results include:

- The luminosity distance is with the low-spin prior and with the high-spin prior. The difference is for the same reason as the difference in the inclination measurements. The results are consistent with the distance to NGC 4993 [citation note].
- The chirp mass redshifted to the detector-frame is measured to be with the low-spin prior and with the high-spin prior. This corresponds to a physical chirp mass of .
- The spins are not well constrained. We get the best measurement along the direction of the orbital angular momentum. For the low-spin prior, this is enough to disfavour the spins being antialigned, but that’s about it. For the high-spin prior, we rule out large spins aligned or antialigned, and very large spins in the plane. The aligned components of the spins are best described by the effective inspiral spin parameter; for the low-spin prior it is and for the high-spin prior it is .
- Using the low-spin prior, the component masses are – and –, and for the high-spin prior they are – and –.

These are largely consistent with our previous results. There are small shifts, but the biggest change is that the errors are a little smaller.

For the Hubble constant, we find with the low-spin prior and with the high-spin prior. Here, we quote the maximum a posteriori value and narrowest 68% interval, as opposed to the usual median and symmetric 90% credible interval. You might think it’s odd that the uncertainty is smaller when using the wider high-spin prior, but this is just another consequence of the difference in the inclination measurements. The values are largely in agreement with our initial values.
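To illustrate the difference between the two quoting conventions, here is a small sketch (my own illustration, not the collaboration’s code) of how a rough maximum a posteriori value and the narrowest 68% interval can be read off from a set of posterior samples:

```python
import numpy as np

def map_and_narrowest_interval(samples, fraction=0.68):
    """Return a crude MAP estimate and the narrowest interval containing
    the requested fraction of the posterior samples."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(fraction * n))
    # Slide a window containing k samples and pick the narrowest one
    widths = s[k - 1:] - s[: n - k + 1]
    i = int(np.argmin(widths))
    lo, hi = s[i], s[i + k - 1]
    # Midpoint of the narrowest interval as a crude MAP estimate
    # (a kernel density estimate would do better)
    return 0.5 * (lo + hi), (lo, hi)

# Toy posterior: Gaussian samples, so MAP ~ mean and the narrowest
# interval is symmetric about it
rng = np.random.default_rng(0)
samples = rng.normal(70.0, 8.0, size=100_000)
map_value, (lo, hi) = map_and_narrowest_interval(samples)
```

For a skewed posterior, like the one for the Hubble constant here, the narrowest interval is asymmetric about the MAP value, which is why the quoted plus and minus uncertainties differ.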

The best measured tidal parameter is the combined dimensionless tidal deformability . With the high-spin prior, we can only set an upper bound of . With the low-spin prior, we find that we are still consistent with zero deformation, but the distribution peaks away from zero. We have using the usual median and symmetric 90% credible interval, and if we take the narrowest 90% interval. This looks like we have detected matter effects, but since we’ve had to use the low-spin prior, which is only appropriate for neutron stars, this would be a circular argument. More details on what we can learn about tidal deformations and what neutron stars are made of, under the assumption that we do have neutron stars, are given in the GW170817 Equation-of-state Paper.

Previously, in the GW170817 Post-merger Paper, we searched for a post-merger signal. We didn’t find anything. Now, we try to infer the shape of the signal, assuming it is there (with a peak within of the coalescence time). We still don’t find anything, but now we set much tighter upper limits on what signal could be there.

For this analysis, we use data from the two LIGO detectors, and from GEO 600! We don’t use Virgo data, as it is not well behaved at these high frequencies. We use BayesWave to try to constrain the signal.

While the upper limits are much better, they are still about 12–215 times larger than expectations from simulations. Since the energy carried by a gravitational wave scales with the square of its amplitude, this means we’d need to improve our detector sensitivity by a factor of about 3.5–15 to detect a similar signal. Fingers crossed!

**Synopsis:** GW170817 Equation-of-state Paper

**Read this if:** You want to know what neutron stars are made of

**Favourite part:** The beautiful butterfly plots

Usually in our work, we like to remain open minded and not make too many assumptions. In our analysis of GW170817, as presented in the GW170817 Properties Paper, we have remained agnostic about the components of the binary, seeing what the data tell us. However, from the electromagnetic observations, there is solid evidence that the source is a binary neutron star system. In this paper, we take it as given that the source *is* made of two neutron stars, and that these neutron stars are made of similar stuff [citation note], to see what we can learn about the properties of neutron stars.

When two neutron stars get close together, they become distorted by each other’s gravity. Tides are raised, kind of like how the Moon creates tides on Earth. Creating tides takes energy out of the orbit, causing the inspiral to proceed faster. This is something we can measure from the gravitational-wave signal. Tides are larger when the neutron stars are bigger. The size of neutron stars, and how easy they are to stretch and squash, depends upon their equation of state. We can use the measurements of the neutron star masses and the amount of tidal deformation to infer their size and their equation of state.

The signal is analysed as in the GW170817 Properties Paper (IMRPhenomPNRT waveform, low-spin prior, position set to match AT 2017gfo). However, we also add in some information about the composition of neutron stars.

Calculating the behaviour of this incredibly dense material is difficult, but there are some relations (called universal relations) between the tidal deformability of neutron stars and their radii which are insensitive to the details of the equation of state. One, which relates symmetric and antisymmetric combinations of the tidal deformations of the two neutron stars as a function of the mass ratio, allows us to calculate consistent tidal deformations. Another, which relates the tidal deformation to the compactness (mass divided by radius), allows us to convert tidal deformations to radii. The analysis includes the uncertainty in these relations.

In addition to this, we also use a parametric model of the equation of state to model the tidal deformations. By sampling directly in terms of the equation of state, it is easy to impose constraints on the allowed values. For example, we impose that the speed of sound inside the neutron star is less than the speed of light, that the equation of state can support neutron stars of the observed masses, that it is possible to explain the most massive confirmed neutron star (we use a lower limit for this mass of ), and that it is thermodynamically stable. Accommodating the most massive neutron star turns out to be an important piece of information.

The plot below shows the inferred tidal deformation parameters for the two neutron stars. The two techniques, using the equation-of-state insensitive relations and using the parametrised equation-of-state model *without* including the constraint of matching the most massive neutron star, give similar results. For a neutron star, these results indicate that the tidal deformation parameter would be . We favour softer equations of state over stiffer ones [citation note]. I think this means that neutron stars are more huggable.

We can translate our results into estimates of the size of the neutron stars. The plots below show the inferred radii. The results for the parametrised equation-of-state model now include the constraint of accommodating a neutron star, which is the main reason for the difference between the plots. Using the equation-of-state insensitive relations, we find that the radius of the heavier (–) neutron star is and the radius of the lighter (–) neutron star is . With the parametrised equation-of-state model, the radii are (–) and (–).

When I was an undergraduate, I remember learning that neutron stars were about in radius. We now know that’s not the case.

If you want to investigate further, you can download the posterior samples from these analyses.

In astronomy, we often use standard candles, objects like type Ia supernovae of known luminosity, to infer distances. If you know how bright something should be, and how bright you measure it to be, you know how far away it is. By analogy, we can infer how far away a gravitational-wave source is by how loud it is. It is thus not a candle, but a siren. Sean Carroll explains more about this term on his blog.
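For a nearby source, the standard-siren idea boils down to Hubble’s law: the gravitational-wave amplitude gives the luminosity distance, the host galaxy’s redshift gives the recession velocity, and their ratio gives the Hubble constant. A toy version (my illustration, using roughly the published numbers for GW170817 and NGC 4993):

```python
def hubble_constant(recession_velocity_km_s, distance_mpc):
    """Nearby-Universe Hubble constant estimate, H0 = v / d, in km/s/Mpc."""
    return recession_velocity_km_s / distance_mpc

# Roughly the published values: a Hubble-flow velocity of ~3017 km/s for
# NGC 4993 and a gravitational-wave distance of ~44 Mpc give a value near
# the measured H0 of about 70 km/s/Mpc.
h0 = hubble_constant(3017.0, 44.0)
print(round(h0, 1))  # ≈ 68.6
```

The real analysis marginalises over the full posterior on the distance and the peculiar-velocity uncertainty, rather than dividing two point estimates, which is where the large (and asymmetric) error bars come from.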

I know… *Nature* published the original Schutz paper on measuring the Hubble constant using gravitational waves; therefore, there’s a nice symmetry in publishing the first real result doing this in *Nature* too.

Instead of a binary neutron star system forming from a binary of two stars born together, it is possible for two neutron stars to come close together in a dense stellar environment like a globular cluster. A significant fraction of binary black holes could be formed this way. Binary neutron stars, being less massive, are not as commonly formed this way. We wouldn’t expect GW170817 to have formed this way. In the GW170817 Progenitor Paper, we argue that the probability of GW170817’s source coming from a globular cluster is small—for predicted rates, see Bae, Kim & Lee (2014).

Levan *et al*. (2017) check for a stellar cluster at the site of AT 2017gfo, and find nothing. The smallest 30% of the Milky Way’s globular clusters would evade this limit, but these account for just 5% of the stellar mass in globular clusters, and a tiny fraction of dynamical interactions. Fong *et al*. (2019) perform some detailed observations looking for a globular cluster, and also find nothing. This excludes a cluster down to , which is basically all (99.996%) of them. Therefore, it’s unlikely that a cluster is the source of this binary.

From our gravitational-wave data, we estimate the current binary neutron star merger rate density is . Several electromagnetic observers performed their own rate estimates from the frequency of detection (or lack thereof) of electromagnetic transients.

Kasliwal *et al*. (2017) consider transients seen by the Palomar Transient Factory, and estimate a rate density of approximately (3-sigma upper limit of ), towards the bottom end of our range, but their rate increases if not all mergers are as bright as AT 2017gfo.

Siebert *et al*. (2017) work out the rate of AT 2017gfo-like transients in the Swope Supernova Survey. They obtain an upper limit of . They use this to estimate the probability that AT 2017gfo and GW170817 are just a chance coincidence and are actually unrelated: the probability is at 90% confidence.

Smartt *et al*. (2017) estimate the kilonova rate from the ATLAS survey; they calculate a 95% upper limit of , safely above our range.

Yang *et al*. (2017) calculate upper limits from the DLT40 Supernova survey. Depending upon the reddening assumed, these are between and . Their figure 3 shows that this is well above expected rates.

Zhang *et al*. (2017) are interested in the rate of gamma-ray bursts. If you know the rate of short gamma-ray bursts and of binary neutron star mergers, you can learn something about the beaming angle of the jet. The smaller the jet, the less likely we are to observe a gamma-ray burst. In order to do this, they do their own back-of-the-envelope calculation for the gravitational-wave rate. They get . That’s not too bad, but do stick with our result.
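The connection between the two rates can be sketched as follows (my own back-of-the-envelope, not Zhang *et al*.’s code): if every merger launches a pair of jets with half-opening angle θ_j, only the fraction f_b = 1 − cos θ_j of mergers has a jet pointing at us, so the observed short gamma-ray burst rate should be roughly f_b times the merger rate.

```python
import math

def beaming_fraction(theta_j_deg):
    """Fraction of mergers whose (bipolar) jet points towards us."""
    return 1.0 - math.cos(math.radians(theta_j_deg))

def jet_half_opening_angle(rate_grb, rate_bns):
    """Half-opening angle (degrees) implied by a GRB-to-merger rate ratio,
    assuming every merger launches a jet."""
    return math.degrees(math.acos(1.0 - rate_grb / rate_bns))

# Illustrative numbers only: a short gamma-ray burst rate that is 1% of
# the merger rate implies a half-opening angle of about 8 degrees.
print(round(jet_half_opening_angle(10.0, 1000.0), 1))  # ≈ 8.1
```

In reality not every merger need launch a successful jet, and jets are structured rather than uniform, so this gives an upper bound on the opening angle rather than a measurement.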

If you’re interested in the future prospects for kilonova detection, I’d recommend Scolnic *et al*. (2017). Check out their Table 2 for detection rates (assuming a rate of ): LSST and WFIRST will see lots, about 7 and 8 per year respectively.

Using later observational constraints on the jet structure, Gupta & Bartos (2018) use the short gamma-ray burst rate to estimate a binary neutron star merger rate of . They project that around 30% of gravitational-wave detections will be accompanied by gamma-ray bursts, once LIGO and Virgo reach design sensitivity.

Della Valle *et al*. (2018) calculate an observable kilonova rate of . To match up to our binary neutron star merger rate, we either need only a fraction of binary neutron star mergers to produce kilonovae, or for them to only be observable for viewing angles of less than . Their table 2 contains a nice compilation of rates for short gamma-ray bursts.

Some notes on an incomplete overview of papers describing the electromagnetic discovery. A list of the first wave of papers was compiled by Maria Drout, Stefano Valenti, and Iair Arcavi as a starting point for further reading.

Independently of our gravitational-wave detection, a short gamma-ray burst GRB 170817A was observed by Fermi-GBM (Goldstein *et al*. 2017). Fermi-LAT did not see anything, as it was offline while passing through the South Atlantic Anomaly. At the time of the merger, INTEGRAL was following up the location of GW170814; fortunately, this meant it could still observe the location of GW170817, and following the alert they found GRB 170817A in their data (Savchenko *et al*. 2017).

Following up on our gravitational-wave localization, an optical transient AT 2017gfo was discovered. The discovery was made by the One-Meter Two-Hemisphere (1M2H) collaboration using the Swope telescope at the Las Campanas Observatory in Chile; they designated the transient as SSS17a (Coulter *et al*. 2017). That same evening, several other teams also found the transient within an hour of each other:

- The Distance Less Than 40 Mpc (DLT40) search found the transient using the PROMPT 0.4-m telescope at the Cerro Tololo Inter-American Observatory in Chile; they designated the transient DLT17ck (Valenti *et al*. 2017).
- The VINROUGE collaboration (I think, they don’t actually identify themselves in their own papers) found the transient using VISTA at the European Southern Observatory in Chile (Tanvir *et al*. 2017). Their paper also describes follow-up observations with the Very Large Telescope, the Hubble Space Telescope, the Nordic Optical Telescope and the Danish 1.54-m Telescope, and has one of my favourite introduction sections of the discovery papers.
- The MASTER collaboration followed up with their network of global telescopes, and it was their telescope at the San Juan National University Observatory in Argentina which found the transient (Lipunov *et al*. 2017); they rather catchily denote the transient as OTJ130948.10-232253.3.
- The Dark Energy Survey and the Dark Energy Camera GW–EM (DES and DECam) Collaboration found the transient with the DECam on the Blanco 4-m telescope, which is also at the Cerro Tololo Inter-American Observatory in Chile (Soares-Santos *et al*. 2017).
- The Las Cumbres Observatory Collaboration used their global network of telescopes, with, unsurprisingly, their 1-m telescope at the Cerro Tololo Inter-American Observatory in Chile first imaging the transient (Arcavi *et al*. 2017). Their observing strategy is described in a companion paper (Arcavi *et al*. 2017), which also describes follow-up of GW170814.

From these, you can see that South America was the place to be for this event: it was night at just the right time.

There was a huge amount of follow-up across the infrared–optical–ultraviolet range of AT 2017gfo. Villar *et al*. (2017) attempt to bring these together in a consistent way. Their Figure 1 is beautiful.

Hinderer *et al*. (2018) use numerical-relativity simulations to compare theory and observations for gravitational-wave constraints on the tidal deformation and the kilonova lightcurve. They find that observations could be consistent with a neutron star–black hole binary as well as a binary neutron star. Coughlin & Dietrich (2019) come to a similar conclusion. I think it’s unlikely that there would be a black hole this low mass, but it’s interesting that there are some simulations which can fit the observations.

AT 2017gfo was also the target of observations across the electromagnetic spectrum. An X-ray afterglow was observed 9 days post merger, and 16 days post merger, just as we thought the excitement was over, a radio afterglow was found:

- The X-rays were first observed by the Chandra X-ray Observatory, 9 days post merger (Troja *et al*. 2017). This paper also describes optical follow-up with the Hubble Space Telescope, the Gemini Multi-Object Spectrograph, the Korea Microlensing Telescope Network, and a radio non-detection with the Australia Telescope Compact Array. Margutti *et al*. (2017) observed with Chandra 2.3 days post-merger (when they found nothing) and 15 days post-merger (when they found something). Haggard *et al*. (2017) describe deep Chandra observations 15 and 16 days post merger.
- The GROWTH Collaboration found radio emission initially 16 days post merger with the Very Large Array (Hallinan *et al*. 2017): there’s a marginal signal after 10 days, but there’s no definitely identifiable source at that time. They also observed with the Australia Telescope Compact Array (which saw the afterglow when observing 19 days post merger), the Giant Metrewave Radio Telescope, the VLA Low Band Ionosphere and Transient Experiment and the Green Bank Telescope (which didn’t make detections). Alexander *et al*. (2017) first detect radio emission when observing 19 and 39 days post merger with the Very Large Array. They do not detect anything with the Atacama Large Millimeter/submillimeter Array.

The afterglow will continue to brighten for a while, so we can expect a series of updates:

- Pooley, Kumar & Wheeler (2017) observed with Chandra 108 and 111 days post merger. Ruan *et al*. (2017) observed with Chandra 109 days post merger. The large gap in the X-ray observations from the initial observations is because the Sun got in the way.
- Mooley *et al*. (2017) update the GROWTH radio results up to 107 days post merger (the largest span whilst still pre-empting new X-ray observations), observing with the Very Large Array, Australia Telescope Compact Array and Giant Metrewave Radio Telescope.

Excitingly, the afterglow has also now been spotted in the optical:

- Lyman *et al*. (2018) observed with Hubble 110 (rest-frame) days post-merger (which is when the Sun was out of the way for Hubble). At this point the kilonova should have faded away, but they found something, and it is quite blue. The conclusion is that it’s the afterglow, and it will peak in about a year.
- Margutti *et al*. (2018) bring together Chandra X-ray observations, Very Large Array radio observations and Hubble optical observations. The Hubble observations are 137 days post merger, and the Chandra observations are 153 days and 163 days post-merger. They find that they all agree (including the tentative radio signal at 10 days post-merger). They argue that the emission disfavours on-axis jets and spherical fireballs.

The afterglow is fading.

- D’Avanzo *et al*. (2018) observed in X-ray 135 days post-merger with XMM-Newton. They find that the flux has faded compared to the previous trend. They suggest that we’re just at the turn-over, so this is consistent with the most recent Hubble observations.
- Resmi *et al*. (2018) observed at low radio frequencies with the Giant Metrewave Radio Telescope. They saw the signal from 67 days post-merger, but it evolves little over the duration of their observations (to day 152 post-merger), also suggesting a turn-over.
- Dobie *et al*. (2018) observed in radio 125–200 days post-merger with the Very Large Array and Australia Telescope Compact Array, and they find that the afterglow is starting to fade, with a peak at 149 ± 2 days post-merger.
- Nynka *et al*. (2018) made X-ray observations at 260 days post-merger. They conclude that the afterglow is definitely fading, and that this is not because of the passing of the synchrotron cooling frequency.
- Mooley *et al*. (2018) observed in radio to 298 days. They find the turn-over to be around 170 days. They argue that the results support a narrow, successful jet.
- Troja *et al*. (2018) observed in radio and X-ray to 359 days. The fading is now obvious, and is starting to reveal something about the jet structure. Their best fits seem to favour a structured relativistic jet or a wide-angled cocoon.
- Lamb *et al*. (2018) observed in optical to 358 days. They infer a peak around 140–160 days. Their observations are well fit either by a Gaussian structured jet or a two-component jet (with the second component being the cocoon), although the two-component model doesn’t fit early X-ray observations well. They conclude there must have been a successful jet of some form.

- Fong *et al*. (2019) observe in optical to 584 days post-merger, combined with observations in radio to 585 days post-merger and in X-ray to 583 days post-merger. These observations favour a structured jet over a quasi-spherical outflow.

The story of the most ambitious cross-over of astronomical observations may now be coming to an end.

Using the time delay between GW170817 and GRB 170817A, a few other teams also did their own estimation of the Shapiro delay before they knew what was in our GW170817 Gamma-ray Burst Paper.

- Wang *et al*. (2017) consider the Milky Way potential and large-scale structure to estimate .
- Boran *et al*. (2017) consider all the galaxies in the GLADE catalogue which are within a radius of of the line of sight, and derive .
- Wei *et al*. (2017) estimate using the Milky Way’s potential and using the Virgo cluster’s potential.

Our estimate of is the most conservative.

Are the electromagnetic counterparts to GW170817 similar to what has been observed before?

Yue *et al*. (2017) compare GRB 170817A with other gamma-ray bursts. It is low luminosity, but it may not be alone: there could be other bursts like it (perhaps GRB 070923, GRB 080121 and GRB 090417A), if indeed they are from nearby sources. They suggest that GRB 130603B may be the on-axis equivalent of GRB 170817A [citation note]; however, the non-detection of kilonovae for several bursts indicates that there needs to be some variation in their properties too. This agrees with the results of Gompertz *et al*. (2017), who compare the GW170817 observations with other kilonovae: it is fainter than the other candidate kilonovae (GRB 050709, GRB 060614, GRB 130603B and tentatively GRB 160821B), but, equally, brighter than upper limits from other bursts. There must be diversity in kilonova observations. Fong *et al*. (2017) look at the diversity of afterglows (across X-ray to radio), and again find GW170817’s counterpart to be faint. This is probably because we are off-axis. The most comprehensive study is von Kienlin *et al*. (2019), who search ten years of Fermi archives and find 13 GRB 170817A-like short gamma-ray bursts: GRB 081209A, GRB 100328A, GRB 101224A, GRB 110717A, GRB 111024C, GRB 120302B, GRB 120915A, GRB 130502A, GRB 140511A, GRB 150101B, GRB 170111B, GRB 170817A and GRB 180511A. There is a range of behaviours in these, with the shorter GRBs showing fast variability. Future observations will help unravel how much variation there is from viewing different angles, and how much intrinsic variation there is from the source—perhaps some short gamma-ray bursts come from neutron star–black hole binaries?

Pretty much every observational paper has a go at estimating the properties of the ejecta, the viewing angle or something about the structure of the jet. I may try to pull these together later, but I’ve not had time yet as it is a very long list! Most of the inclination measurements assumed a uniform top-hat jet, which we now know is not a good model.

In my non-expert opinion, the later results seem more interesting. With very-long baseline interferometry radio observations to 230 days post-merger, Mooley *et al*. (2018) claim that while the early radio emission was powered by the wide cocoon of a structured jet, the later emission is dominated by a narrow, energetic jet. There was a successful jet, so we would have seen something like a regular short gamma-ray burst on axis. They estimate that the jet opening angle is , and that we are viewing it at an angle of . With X-ray and radio observations to 359 days, Troja *et al*. (2018) estimate (folding in gravitational-wave constraints too) that the viewing angle is , and the width of a Gaussian structured jet would be .

Guidorzi *et al*. (2017) try to tighten the measurement of the Hubble constant by using radio and X-ray observations. Their modelling assumes a uniform jet, which doesn’t look like a currently favoured option [citation note], so there is some model-based uncertainty to be included here. Additionally, the jet is unlikely to be perfectly aligned with the orbital angular momentum, which may add a couple of degrees more uncertainty.

Mandel (2018) works the other way and uses the recent Dark Energy Survey Hubble constant estimate to bound the misalignment angle to less than , which (unsurprisingly) agrees pretty well with the result we obtained using the Planck value. Finstad *et al*. (2018) use the luminosity distance from Cantiello *et al*. (2018) [citation note] as a (Gaussian) prior for an analysis of the gravitational-wave signal, and get a misalignment of (where the errors are the statistical uncertainty and an estimate of the systematic error from the calibration of the strain).

Hotokezaka *et al*. (2018) use the inclination results from Mooley *et al*. (2018) [citation note] (together with the updated posterior samples from the GW170817 Properties Paper) to infer a value of (quoting the median and 68% symmetric credible interval). Using different jet models changes their value for the Hubble constant a little; the choice of spin prior does not (since we get basically all of the inclination information from their radio observations). The result is still consistent with Planck and SH0ES, but is closer to the Planck value.

In the GW170817 Progenitor Paper we used component properties for NGC 4993 from Lim *et al*. (2017): a stellar mass of and a dark matter halo mass of , where we use the Planck value of (but conclusions are similar using the SH0ES value for this).

Blanchard *et al*. (2017) estimate a stellar mass of about . They also look at the star formation history: 90% of the stars were formed by ago, and the median mass-weighted stellar age is . From this they infer a merger delay time of −. Using this, and assuming that the system was born close to its current location, they estimate that the supernova kick was , towards the lower end of our estimate. They use .

Im *et al*. (2017) find a mean stellar mass of − and the mean stellar age is greater than about . They also give a luminosity distance estimate of , which overlaps with our gravitational-wave estimate. I’m not sure what value of they are using.

Levan *et al*. (2017) suggest a stellar mass of around . They find that 60% of stars by mass are older than and that less than 1% are less than old. Their Figure 5 has some information on likely supernova kicks; they conclude it was probably small, but don’t quantify this. They use .

Pan *et al*. (2017) find . They calculate a mass-weighted mean stellar age of and a likely minimum age for GW170817’s source system of . They use .

Troja *et al*. (2017) find a stellar mass of , and suggest an old stellar population of age .

Ebrová & Bílek (2018) assume a distance of and find a halo mass of . They suggest that NGC 4993 swallowed a smaller late-type galaxy somewhere between and ago, most probably around ago.

The consensus seems to be that the stellar population is old (and not much else). Fortunately, the conclusions of the GW170817 Progenitor Paper are pretty robust for delay times longer than as seems likely.

A couple of other papers look at the distance of the galaxy:

- Hjorth *et al*. (2017) combine a redshift measurement from MUSE and a fundamental plane estimate based upon Hubble observations to obtain a distance of .
- Cantiello *et al*. (2018) use Hubble observations to estimate the distance using surface brightness fluctuations. They obtain a distance of . This implies a value for the Hubble constant of .

The values are consistent with our gravitational-wave estimates.

We cannot be certain what happened to the merger remnant from gravitational-wave observations alone. However, electromagnetic observations do give some hints here.

Evans *et al*. (2017) argue that their non-detection of X-rays when observing with Swift and NuSTAR indicates that there is no neutron star remnant at this point, meaning the remnant must have collapsed to form a black hole by 0.6 days post-merger. This isn’t too restricting in terms of the different ways the remnant could collapse, but does exclude a stable neutron star remnant. MAXI also didn’t detect any X-rays 4.6 hours after the merger (Sugita *et al*. 2018).

Pooley, Kumar & Wheeler (2017) consider X-ray observations of the afterglow. They calculate that if the remnant was a hypermassive neutron star with a large magnetic field, the early (10 day post-merger) luminosity would be much higher (and we could expect to see magnetar outbursts). Therefore, they think it is more likely that the remnant is a black hole. However, Piro *et al*. (2018) suggest that if the spin-down of the neutron star remnant is dominated by losses due to gravitational wave emission, rather than electromagnetic emission, then the scenario is still viable. They argue that a tentatively identified X-ray flare seen 155 days post-merger could be evidence of dissipation of the neutron star’s toroidal magnetic field.

Kasen *et al*. (2017) use the observed red component of the kilonova to argue that the remnant must have collapsed to a black hole in . A neutron star would irradiate the ejecta with neutrinos, lowering the neutron fraction and making the ejecta bluer. Since it is red, the neutrino flux must have been shut off, and the neutron star must have collapsed. We are in case b in their figure below.

Ai *et al*. (2018) find that there are some corners of parameter space for certain equations of state where a long-lived neutron star is possible, even given the observations. Therefore, we should remain open minded.

Margalit & Metzger (2017) and Bauswein *et al*. (2017) note that the relatively large amount of ejecta inferred from observations [citation note] is easier to explain when collapse is delayed (on timescales of ). This is difficult to resolve unless neutron star radii are small (). Metzger, Thompson & Quataert (2018) derive how this tension could be resolved if the remnant was a rapidly spinning magnetar with a lifetime of –. Matsumoto *et al*. (2018) suggest that the optical emission is powered by the jet and material accreting onto the central object, rather than r-process decay, and this permits much smaller amounts of ejecta, which could also solve the issue. Yu & Dai (2017) suggest that accretion onto a long-lived neutron star could power the emission, and would only require a single opacity for the ejecta. Li *et al*. (2018) put forward a similar theory, arguing that both the high ejecta mass and low opacity are problems for the standard r-process explanation, but fallback onto a neutron star could work. However, Margutti *et al*. (2018) say that X-ray emission powered by a central engine is disfavoured at all times.

In conclusion, it seems probable that we ended up with a black hole, and that there was an unstable neutron star for a short time after merger, but I don’t think it’s yet settled how long this was around.

Gill, Nathanail & Rezzolla (2019) consider how long it would take to produce the observed amount of ejecta and the relative amounts of red and blue ejecta, as well as the delay time between the gravitational-wave measurement of the merger and the observation of the gamma-ray burst, to estimate how long it took the remnant to collapse to a black hole. They find a lifetime of .

We might not have two neutron stars with the same equation of state if they can undergo a phase transition. This would be kind of like if one were made up of fluffy marshmallow, and the other of gooey toasted marshmallow: they have the same ingredients, but in one the stuff has changed state, giving it different physical properties. Standard neutron stars could be made of hadronic matter, kind of like a giant nucleus, but we could have another type where the hadrons break down into their component quarks. We could therefore have two neutron stars with similar masses but with very different equations of state. This is referred to as the twin star scenario. Hybrid stars which have quark cores surrounded by hadronic outer layers are often discussed in this context.

Several papers have explored what we can deduce about the nature of neutron star stuff from gravitational-wave or electromagnetic observations of the neutron star coalescence. It is quite a tricky problem. Below are some investigations into the radii of neutron stars and their tidal deformations; these seem compatible with the radii inferred in the GW170817 Equation-of-state Paper.

Bauswein *et al*. (2017) argue that the amount of ejecta inferred from the kilonova is too large for there to have been a prompt collapse to a black hole [citation note]. Using this, they estimate that a non-rotating neutron star of mass has a radius of at least . They also estimate that the radius for the maximum mass nonrotating neutron star must be greater than . Köppel, Bovard & Rezzolla (2019) perform a similar, updated analysis, using a new approach to fit for the maximum mass of a neutron star, and they find that the radius for is greater than , and for is greater than .

Annala *et al*. (2018) combine our initial measurement of the tidal deformation with the requirement that the equation of state supports a neutron star (which they argue requires that the tidal deformation of a neutron star is at least ). They argue that the latter condition implies that the radius of a neutron star is at least and the former that it is less than .

Radice *et al*. (2018) combine together observations of the kilonova (the amount of ejecta inferred) with gravitational-wave measurements of the masses to place constraints on the tidal deformation. From their simulations, they argue that to explain the ejecta, the combined dimensionless tidal deformability must be . This is consistent with results in the GW170817 Properties Paper, but would eliminate the main peak of the distribution we inferred from gravitational waves alone. However, Kiuchi *et al*. (2019) show that it is possible to get the required ejecta for smaller tidal deformations, depending upon assumptions about the maximum neutron star mass (higher masses allow smaller tidal deformations) and the asymmetry of the binary components.

Lim & Holt (2018) perform some equation-of-state calculations. They find that their particular method (chiral effective theory) is already in good agreement with estimates of the maximum neutron star mass and tidal deformations. Which is nice. Using their models, they predict that for GW170817’s chirp mass .

Raithel, Özel & Psaltis (2018) argue that for a given chirp mass, is only a weak function of component masses, and depends mostly on the radii. Therefore, from our initial inferred value, they put a 90% upper limit on the radii of .

Most *et al*. (2018) consider a wide range of parametrised equations of state. They consider both hadronic (made up of particles like neutrons and protons) equations of state, and ones where they undergo phase transitions (with hadrons breaking into quarks), which could potentially mean that the two neutron stars have quite different properties [citation note]. A number of different constraints are imposed, to give a selection of potential radius ranges. Combining the requirement that neutron stars can be up to (Antoniadis *et al*. 2013), the maximum neutron star mass of inferred by Margalit & Metzger (2017), our initial gravitational-wave upper limit on the tidal deformation and the lower limit from Radice *et al*. (2018), they estimate that the radius of a neutron star is – for the hadronic equation of state. For the equation of state with the phase transition, they do the same, but without the tidal deformation from Radice *et al*. (2018), and find the radius of a neutron star is –.

Paschalidis *et al*. (2018) consider in more detail the idea of equations of state with hadron–quark phase transitions, and the possibility that one of the components of GW170817’s source was a hadron–quark hybrid star. They find that the initial tidal measurements are consistent with this.

Burgio *et al*. (2018) further explore the possibility that the two binary components have different properties. They consider both there being a hadron–quark phase transition, and also that one star is hadronic and the other is a quark star (made up of deconfined quarks, rather than ones packaged up inside hadrons). X-ray observations indicate that neutron stars have radii in the range –, whereas most of the radii inferred for GW170817’s components are larger. This paper argues that this can be resolved if one of the components of GW170817’s source was a hadron–quark hybrid star or a quark star.

De *et al*. (2018) perform their own analysis of the gravitational signal, with a variety of different priors on the component masses. They assume that the two neutron stars have the same radii. In the GW170817 Equation-of-state Paper we find that the difference can be up to about , which I think makes this an OK approximation; Zhao & Lattimer (2018) look at this in more detail. Within their approximation, they estimate the neutron stars to have a common radius of –.

Malik *et al*. (2018) use the initial gravitational-wave upper bound on tidal deformation and the lower bound from Radice *et al*. (2018) in combination with several equations of state (calculated using relativistic mean field and Skyrme Hartree–Fock recipes, which sound delicious). For a neutron star, they obtain a tidal deformation in the range – and the radius in the range –.

Radice & Dai (2018) do their own analysis of our gravitational-wave data (using relative binning) and combine this with an analysis of the electromagnetic observations using models for the accretion disc. They find that the areal radius of a is . These results are in good agreement with ours; their inclusion of electromagnetic data pushes their combined results towards larger values for the tidal deformation.

Montaña *et al*. (2018) consider twin star scenarios [citation note] where we have a regular hadronic neutron star and a hybrid hadron–quark star. They find the data are consistent with neutron star–neutron star, neutron star–hybrid star or hybrid star–hybrid star binaries. Their Table II is a useful collection of results for the radius of a neutron star, including the possibility of phase transitions.

Coughlin *et al*. (2018) use our LIGO–Virgo results and combine them with constraints from the observation of the kilonova (combined with fits to numerical simulations) and the gamma-ray burst. The electromagnetic observations give some extra information on the tidal deformability, mass ratio and inclination. They use the approximation that the neutron stars have equal radii. They find that the tidal deformability has a 90% interval – and the neutron star radius is –.

Zhou, Chen & Zhang (2019) use data from heavy ion collider experiments, which constrain the properties of nuclear density stuff at one end of the spectrum, the existence of neutron stars, and our GW170817 Equation-of-state Paper constraints on the tidal deformation to determine that the radius of a neutron star is –.

Kumar & Landry (2019) use the GW170817 Equation-of-state Paper constraints, and combine these with electromagnetic constraints to get an overall tidal deformability measurement. They use observations of X-ray bursters from Özel *et al*. (2016), which give mass and radius measurements, and translate these using universal relations. Their overall result is that the tidal deformability of a neutron star is .

Gamba, Read & Wade (2019) estimate the systematic error in the GW170817 Equation-of-state Paper results for the neutron star radius which may have been introduced from assumptions about the crust’s equation of state. They find that the error could be (about 3%).


Our family of binary black holes is now growing large. During our first observing run (O1) we found three: GW150914, LVT151012 and GW151226. The advanced detector observing run (O2) ran from 30 November 2016 to 25 August 2017 (with a couple of short breaks). From our O1 detections, we were expecting roughly one binary black hole per month. The first came in January, GW170104, and we have announced the first detection which involved Virgo from August, GW170814, so you might be wondering what happened in-between? Pretty much everything was dropped following the detection of our first binary neutron star system, GW170817, as a sizeable fraction of the astronomical community managed to observe its electromagnetic counterparts. Now, we are starting to dig our way out of the O2 back-log.

On 8 June 2017, a chirp was found in data from LIGO Livingston. At the time, LIGO Hanford was undergoing planned engineering work [bonus explanation]. We would not normally analyse this data, as the detector is disturbed; however, we had to follow up on the potential signal in Livingston. Only low frequency data in Hanford should have been affected, so we limited our analysis to above 30 Hz (this sounds easier than it is—I was glad I was not on rota to analyse this event [bonus note]). A coincident signal was found [bonus note]. Hello GW170608, the June event!

Analysing data from both Hanford and Livingston (limiting Hanford to above 30 Hz) [bonus note], GW170608 was found by both of our offline searches for binary signals. PyCBC detected it with a false alarm rate of less than 1 in 3000 years, and GstLAL estimated a false alarm rate of 1 in 160000 years. The signal was also picked up by coherent WaveBurst, which doesn’t use waveform templates, and so is more flexible in what it can detect at the cost of sensitivity: this analysis estimates a false alarm rate of about 1 in 30 years. GW170608 probably isn’t a bit of random noise.

GW170608 comes from a low mass binary. Well, relatively low mass for a binary black hole. For low mass systems, we can measure the chirp mass , the particular combination of the two black hole masses which governs the inspiral, well. For GW170608, the chirp mass is . This is the smallest chirp mass we’ve ever measured, the next smallest is GW151226 with . GW170608 is probably the lowest mass binary we’ve found—the total mass and individual component masses aren’t as well measured as the chirp mass, so there is small probability (~11%) that GW151226 is actually lower mass. The plot below compares the two.
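For reference, the chirp mass is the combination (m1 m2)^(3/5)/(m1 + m2)^(1/5) of the component masses. A quick sketch of how it is computed (the masses below are made-up illustrative values, not our measurements for GW170608):

```python
# Chirp mass: the combination of the component masses that sets the rate at
# which the inspiral's frequency sweeps upwards.

def chirp_mass(m1, m2):
    """Return (m1 * m2)**(3/5) / (m1 + m2)**(1/5); units follow the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Two illustrative black hole masses (in solar masses):
print(round(chirp_mass(12.0, 7.0), 2))  # → 7.92
```

Because many waveform cycles accumulate during the inspiral, this one combination is pinned down far more precisely than either component mass individually.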

One caveat with regards to the masses is that the current results only consider spin magnitudes up to 0.89, as opposed to the usual 0.99. There is a correlation between the mass ratio and the spins: you can have a more unequal mass binary with larger spins. There’s not a lot of support for large spins, so it shouldn’t make too much difference. We use the full range in updated analysis in the O2 Catalogue Paper.

Speaking of spins, GW170608 seems to prefer small spins aligned with the angular momentum; spins are difficult to measure, so there’s a lot of uncertainty here. The best measured combination is the effective inspiral spin parameter . This is a combination of the spins aligned with the orbital angular momentum. For GW170608 it is , so consistent with zero and leaning towards being small and positive. For GW151226 it was , and we could exclude zero spin (at least one of the black holes must have some spin). The plot below shows the probability distribution for the two component spins (you can see the cut at a maximum magnitude of 0.89). We prefer small spins, and generally prefer spins in the upper half of the plots, but we can’t make any definite statements other than both spins aren’t large and antialigned with the orbital angular momentum.
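The effective inspiral spin parameter is just the mass-weighted average of the two spin components along the orbital angular momentum. A sketch with made-up numbers (not the GW170608 measurements):

```python
# Effective inspiral spin: mass-weighted average of the spin components
# aligned with the orbital angular momentum. It ranges from -1 (both spins
# maximal and antialigned) to +1 (both maximal and aligned).

def chi_eff(m1, m2, chi1z, chi2z):
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

# Small aligned spins on both black holes give a small positive value:
print(round(chi_eff(12.0, 7.0, 0.1, 0.05), 3))  # → 0.082
```

Since only this particular combination enters the waveform at leading order, it is measured far better than the individual spins.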

The properties of GW170608’s source are consistent with those inferred from observations of low-mass X-ray binaries (here the low-mass refers to the companion star, not the black hole). These are systems where mass overflows from a star onto a black hole, swirling around in an accretion disc before plunging in. We measure the X-rays emitted from the hot gas from the disc, and these measurements can be used to estimate the mass and spin of the black hole. The similarity suggests that all these black holes—observed with X-rays or with gravitational waves—may be part of the same family.

We’ll present updated merger rates and results for testing general relativity in our end-of-O2 paper. The low mass of GW170608’s source will make it a useful addition to our catalogue here. Small doesn’t mean unimportant.

**Title:** GW170608: Observation of a 19 solar-mass binary black hole coalescence

**Journal:** *Astrophysical Journal Letters*; **851**(2):L35(11); 2017

**arXiv:** 1711.05578 [gr-qc] [bonus note]

**Science summary:** GW170608: LIGO’s lightest black hole binary?

**Data release:** LIGO Open Science Center

If you’re looking for the most up-to-date results regarding GW170608, check out the **O2 Catalogue Paper**.

A lot of time and effort goes into monitoring, maintaining and tweaking the detectors so that they achieve the best possible performance. The majority of work on the detectors happens during engineering breaks between observing runs, as we progress towards design sensitivity. However, some work is also needed during observing runs, to keep the detectors healthy.

On 8 June, Hanford was undergoing angle-to-length (A2L) decoupling, a regular maintenance procedure which minimises the coupling between the angular position of the test-mass mirrors and the measurement of strain. Our gravitational-wave detectors carefully measure the time taken for laser light to bounce between the test-mass mirrors in their arms. If one of these mirrors gets slightly tilted, then the laser could bounce off part of the mirror which is slightly closer or further away than usual: we measure a change in travel time even though the length of the arm is the same. To avoid this, the detectors have control systems designed to minimise angular disturbances. Every so often, it is necessary to check that these are calibrated properly. To do this, the mirrors are given little pushes to rotate them in various directions, and we measure the output to see the impact.

The angular pushes are done at specific frequencies, so we can tease apart the different effects of rotations in different directions. The frequencies are in the range 19–23 Hz. 30 Hz is a safe cut-off for effects of the procedure (we see no disturbances above this frequency).

While we normally wouldn’t analyse data from during maintenance, we think it is safe to do so, after discarding the low-frequency data. If you are worried about the impact of including additional data in our rate estimates (there may be a bias only using time when you know there are signals), you can be reassured that it’s only a small percentage of the total time, and so should introduce an error less significant than uncertainty from the calibration accuracy of the detectors.

Unusually for an O2 event, Aaron Zimmerman was not on shift for the Parameter Estimation rota at the time of GW170608. Instead, it was Patricia Schmidt and Eve Chase who led this analysis. Due to the engineering work in Hanford, and the low mass of the system (which means a long inspiral signal), this was one of the trickiest signals to analyse: I’d say only GW170817 was more challenging (if you ignore all the extra work we did for GW150914 as it was the first time).

Since this wasn’t a standard detection, it took a while to send out an alert (about thirteen and a half hours). As this is a binary black hole merger, we wouldn’t expect that there is anything to see with telescopes, so the delay isn’t as important as it would be for a binary neutron star. Several observing teams did follow up the alert. Details can be found in the GCN Circular archive. So far, papers on follow-up have appeared from:

- CALET—a gamma-ray search. This paper includes upper limits for GW151226, GW170104, GW170608, GW170814 and GW170817.
- DLT40—an optical search designed for supernovae. This paper covers the whole of O2 including GW170104, GW170814, GW170817 plus GW170809 and GW170823.
- Mini-GWAC—an optical survey (the precursor to GWAC). This paper covers the whole of their O2 follow-up (including GW170104).
- The VLA and VLITE—radio follow-up, particularly targeting a potentially interesting gamma-ray transient spotted by Fermi.

If you are wondering about the status of Virgo: on June 8 it was still in commissioning ahead of officially joining the run on 1 August. We have data at the time of the event. The sensitivity of the detector was not great. We often quantify detector sensitivity by quoting the binary neutron star range (the average distance a binary neutron star could be detected). Around the time of the event, this was something like 7–8 Mpc for Virgo. During O2, the LIGO detectors have been typically in the 60–100 Mpc region; when Virgo joined O2, it had a range of around 25–30 Mpc. Unsurprisingly, Virgo didn’t detect the signal. We could have folded the data in for parameter estimation, but it was decided that it was probably not well enough understood at the time to be worthwhile.

The GW170608 Paper is the first discovery paper to be made public before journal acceptance (although the GW170814 Paper was close, and we would have probably gone ahead with the announcement anyway). I have mixed feelings about this. On one hand, I like that the Collaboration is seen to take their detections seriously and follow the etiquette of peer review. On the other hand, I think it is good that we can get some feedback from the broader community on papers before they’re finalised. I think it is good that the first few were peer reviewed, it gives us credibility, and it’s OK to relax now. Binary black holes are becoming routine.

This is also the first discovery paper not to go to *Physical Review Letters*. I don’t think there’s any deep meaning to this, the Collaboration just wanted some variety. Perhaps GW170817 sold everyone that we were astrophysicists now? Perhaps people thought that we’ve abused *Physical Review Letters*’ page limits too many times, and we really do need that appendix. I was still in favour of *Physical Review Letters* for *this* paper, if they would have had us, but I approve of sharing the love. There’ll be plenty more events.

In this post, I’ll go through some of the story of GW170817. As for GW150914, I’ll write another post on the more **technical details of our papers**, once I’ve had time to catch up on sleep.

The second observing run (O2) of the advanced gravitational-wave detectors started on 30 November 2016. The first detection came in January—GW170104. I was heavily involved in the analysis and paper writing for this. We finally finished up in June, at which point I was thoroughly exhausted. I took some time off in July [bonus note], and was back at work for August. With just one month left in the observing run, it would all be downhill from here, right?

August turned out to be the lava-filled, super-difficult final level of O2. As we have now announced, on August 14, we detected a binary black hole coalescence—GW170814. This was the first clear detection including Virgo, giving us superb sky localization. This is fantastic for astronomers searching for electromagnetic counterparts to our gravitational-wave signals. There was a flurry of excitement, and we thought that this was a fantastic conclusion to O2. We were wrong, this was just the save point before the final opponent. On August 17, we met the final, fire-ball throwing boss.

At 1:58 pm BST my phone buzzed with a text message, an automated alert of a gravitational-wave trigger. I was obviously excited—I recall that my exact thoughts were “What fresh hell is this?” I checked our online event database and saw that it was a single-detector trigger: it was only seen by our Hanford instrument. I started to relax; this was probably going to turn out to be a glitch. The template masses were low, in the neutron star range, not like the black holes we’ve been finding. Then I saw the false alarm rate was better than one in 9000 years. Perhaps it wasn’t just some noise after all—even though it’s difficult to estimate false alarm rates accurately online, especially for single-detector triggers, this was significant! I kept reading. Scrolling down the page there was an external coincident trigger, a gamma-ray burst (GRB 170817A) within a couple of seconds…

Short gamma-ray bursts are some of the most powerful explosions in the Universe. I’ve always found it mildly disturbing that we didn’t know what causes them. The leading theory has been that they are the result of two neutron stars smashing together. Here seemed to be the proof.

The rapid response call was under way by the time I joined. There was a clear chirp in Hanford; you could see it by eye! We also had data from Livingston and Virgo. It was bad luck that they weren’t folded into the online alert. There had been a drop out in the data transfer from Italy to the US, breaking the flow for Virgo. In Livingston, there was a glitch at the time of the signal which meant the data wasn’t automatically included in the search. My heart sank. Glitches are common—check out Gravity Spy for some examples—so it was only a matter of time until one overlapped with a signal [bonus note], and with GW170817 being such a long signal, it wasn’t that surprising. However, this would complicate the analysis. Fortunately, the glitch is short and the signal is long (if this had been a high-mass binary black hole, things might not have been so smooth). We *were* able to exorcise the glitch. A preliminary sky map using all three detectors was sent out at 12:54 am BST. Not only did we defeat the final boss, we did a speed run on the hard difficulty setting first time [bonus note].

The three-detector sky map provided a great localization for the source—this preliminary map had a 90% area of ~30 square degrees. It was just in time for that night’s observations. The plot below shows our gravitational-wave localizations in green—the long band is without Virgo, and the smaller is with all three detectors—as with GW170814, Virgo makes a big difference. The blue areas are the localizations from Fermi and INTEGRAL, the gamma-ray observatories which measured the gamma-ray burst. The inset is something new…

That night, the discoveries continued. Following up on our sky location, an optical counterpart (AT 2017gfo) was found. The source is just on the outskirts of galaxy NGC 4993, which is right in the middle of the distance range we inferred from the gravitational wave signal. At around 40 Mpc, this is the closest gravitational wave source.

After this source was reported, I think just about every telescope possible was pointed at it. It may well be the most studied transient in the history of astronomy: there are *~250 circulars* about follow-up. Not only did we find an optical counterpart, but there was emission in X-ray and radio. There was a delay in these appearing; I remember there being excitement at our Collaboration meeting as the X-ray emission was reported (there was a lack of cake though).

The figure below tries to summarise all the observations. As you can see, it’s a mess because there is too much going on!

The observations paint a compelling story. Two neutron stars inspiralled together and merged. Colliding two balls of nuclear density material at around a third of the speed of light causes a big explosion. We get a jet blasted outwards and a gamma-ray burst. The ejected, neutron-rich material decays to heavy elements, and we see this hot material as a kilonova [bonus material]. The X-ray and radio may then be the afterglow formed by the bubble of ejected material pushing into the surrounding interstellar material.

What have we learnt from our results? Here are some gravitational wave highlights.

We measure several thousand cycles from the inspiral. It is the most beautiful chirp! This is the loudest gravitational wave signal yet found, beating even GW150914. GW170817 has a signal-to-noise ratio of 32, while for GW150914 it is just 24.

The signal-to-noise ratios in Hanford, Livingston and Virgo were 19, 26 and 2 respectively. The signal is quiet in Virgo, which is why you can’t spot it by eye in the plots above. The lack of a clear signal is really useful information, as it restricts where on the sky the source could be, as beautifully illustrated in the video below.

While we measure the inspiral nicely, we don’t detect the merger: we can’t tell if a hypermassive neutron star is formed or if there is immediate collapse to a black hole. This isn’t too surprising at current sensitivity, the system would basically need to convert all of its energy into gravitational waves for us to see it.

From measuring all those gravitational wave cycles, we can measure the chirp mass stupidly well. Unfortunately, converting the chirp mass into the component masses is not easy. The ratio of the two masses is degenerate with the spins of the neutron stars, and we don’t measure these well. In the plot below, you can see the probability distributions for the two masses trace out bananas of roughly constant chirp mass. How far along the banana you go depends on what spins you allow. We show results for two ranges: one with spins (aligned with the orbital angular momentum) up to 0.89, the other with spins up to 0.05. There’s nothing physical about 0.89 (it was just convenient for our analysis), but it is designed to be agnostic, and above the limit you’d plausibly expect for neutron stars (they should rip themselves apart at spins of ~0.7); the lower limit of 0.05 should safely encompass the spins of the binary neutron stars (which are close enough to merge in the age of the Universe) we have estimated from pulsar observations. The masses roughly match what we have measured for the neutron stars in our Galaxy. (The combinations at the tip of the banana for the high spins would be a bit odd).

If we were dealing with black holes, we’d be done: they are only described by mass and spin. Neutron stars are more complicated. Black holes are just made of warped spacetime; neutron stars are made of delicious nuclear material. This can get distorted during the inspiral—tides are raised on one by the gravity of the other. These extract energy from the orbit and accelerate the inspiral. The tidal deformability depends on the properties of the neutron star matter (described by its equation of state). The fluffier a neutron star is, the bigger the impact of tides; the more compact, the smaller the impact. We don’t know enough about neutron star material to predict this with certainty—by measuring the tidal deformation we can learn about the allowed range. Unfortunately, we also didn’t yet have good model waveforms including tides, so to start with we’ve just done a preliminary analysis (an improved analysis was done for the GW170817 Properties Paper). We find that some of the stiffer equations of state (the ones which predict larger neutron stars and bigger tides) are disfavoured; however, we cannot rule out zero tides. This means we can’t rule out the possibility that we have found two low-mass black holes from the gravitational waves alone. This would be an interesting discovery; however, the electromagnetic observations mean that the more obvious explanation of neutron stars is more likely.

From the gravitational wave signal, we can infer the source distance. Combining this with the electromagnetic observations we can do some cool things.

First, the gamma-ray burst arrived at Earth 1.7 seconds after the merger. 1.7 seconds is not a lot of difference after travelling something like 85–160 million years (that’s roughly the time since the Cretaceous or Late Jurassic periods). Of course, we don’t expect the gamma-rays to be emitted at exactly the moment of merger, but allowing for a sensible range of emission times, we can bound the difference between the speed of gravity and the speed of light. In general relativity they should be the same, and we find that the difference should be no more than a few parts in 10^15.
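The size of this bound follows from simple arithmetic: the fractional speed difference is roughly the arrival-time gap divided by the total travel time. A rough sketch, using round numbers I've assumed for illustration:

```python
# Order-of-magnitude estimate: a 1.7 s arrival-time difference after a
# ~130 million year journey bounds the fractional difference between the
# speed of gravity and the speed of light (assumed round numbers).
SECONDS_PER_YEAR = 3.15e7
travel_time = 130e6 * SECONDS_PER_YEAR  # ~4e15 s of travel
delta_t = 1.7                           # s, delay of the gamma-ray burst

fractional_difference = delta_t / travel_time
print(f"|v_gw - v_light| / v_light < ~{fractional_difference:.0e}")
```

This lands at a few parts in 10^16; allowing for an uncertain emission time relaxes the bound to parts in 10^15, consistent with the published constraint.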

Second, we can combine the gravitational wave distance with the redshift of the galaxy to measure the Hubble constant, the rate of expansion of the Universe. Our best estimates for the Hubble constant, from the cosmic microwave background and from supernova observations, are inconsistent with each other (the most recent supernova analysis only increases the tension). Which is awkward. Gravitational wave observations should have different sources of error and help to resolve the difference. Unfortunately, with only one event our uncertainties are rather large, which leads to a diplomatic outcome.
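The measurement is conceptually just Hubble's law, v = H0 × d, with the distance from the gravitational waves and the velocity from the host galaxy. A sketch with round numbers I've assumed (roughly appropriate for NGC 4993, the host galaxy):

```python
# Hubble's law sketch: v = H0 * d, rearranged for H0.
# Both input values are assumed round numbers for illustration.
velocity = 3000.0  # km/s, rough recession velocity of the host galaxy
distance = 40.0    # Mpc, rough gravitational-wave luminosity distance

H0 = velocity / distance
print(f"H0 ~ {H0:.0f} km/s/Mpc")
```

This gives a value in the 70s of km/s/Mpc, sitting between the cosmic microwave background and supernova estimates; the large distance uncertainty from a single event is what makes the real result so diplomatically broad.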

Finally, we can now change from estimating upper limits on binary neutron star merger rates to estimating the rates! We estimate the merger rate density is in the range 320–4740 mergers per cubic gigaparsec per year (assuming a uniform distribution of neutron star masses between one and two solar masses). This is surprisingly close to what the Collaboration expected back in 2010: a rate of between 10 and 10000 per cubic gigaparsec per year, with a *realistic* rate of 1000 per cubic gigaparsec per year. This means that we are on track to see many more binary neutron stars—perhaps one a week at design sensitivity!
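The "one a week" figure comes from multiplying the rate density by the volume the detectors can survey. A back-of-the-envelope sketch, with an assumed ~200 Mpc binary neutron star range at design sensitivity and a rate density near the middle of the estimated range:

```python
import math

# Expected detections = rate density x surveyed volume (rough sketch,
# both numbers assumed for illustration)
rate_density = 1540  # mergers per Gpc^3 per year, mid-range estimate
bns_range = 0.2      # Gpc, assumed binary neutron star range at design sensitivity

volume = (4 / 3) * math.pi * bns_range**3   # Gpc^3 surveyed
detections_per_year = rate_density * volume
print(f"~{detections_per_year:.0f} detections per year")
```

That works out to roughly fifty a year, or about one a week, matching the optimistic outlook above.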

Advanced LIGO and Advanced Virgo observed a binary neutron star inspiral. The rest of the astronomical community has observed what happened next (sadly there are no neutrinos). This is the first time we have such complementary observations—hopefully there will be many more to come. There’ll be a *huge* number of results coming out over the following days and weeks. From these, we’ll start to piece together more information on what neutron stars are made of, and what happens when you smash them together (take that particle physicists).

Also: I’m exhausted, my inbox is overflowing, and I will have far too many papers to read tomorrow.

**GW170817 Discovery Paper:** GW170817: Observation of gravitational waves from a binary neutron star inspiral

**Multimessenger Astronomy Paper:** Multi-messenger observations of a binary neutron star merger

**Data release:** LIGO Open Science Center

If you’re looking for the most up-to-date results regarding GW170817, check out the **O2 Catalogue Paper**.

Over my vacation I cleaned up my email. I had a backlog starting around September 2015. I think there were over 6000 which I sorted or deleted. I had about 20 left to deal with when I got back to work. GW170817 undid that. Despite doing my best to keep up, there are over 1000 emails in my inbox…

Around the start of O2, I was asked when I expected our results to be public. I said it would depend upon what we found. If it was only high-mass black holes, those are quick to analyse and we know what to do with them, so results shouldn’t take long now we have the first few out of the way. In this case, perhaps a couple of months, as we would have been generating results as we went along. However, the worst-case scenario would be a binary neutron star overlapping with non-Gaussian noise. Binary neutron stars are more difficult to analyse (they are longer signals, and there are matter effects to worry about), and it would be complicated to get everyone to be happy with our results because we were doing lots of things for the first time. Obviously, if one of these happened at the end of the run, there’d be quite a delay…

I think I got that half-right. We’ve done amazingly well analysing GW170817 to get results out in just two months, but I think it will be a while before we get the full O2 set of results out, as we’ve been neglecting other things (you’ll notice we’ve not updated our binary black hole merger rate estimate since GW170104, nor given detailed results for testing general relativity with the more recent detections).

At the time of the GW170817 alert, I was working on writing a research proposal. As part of this, I was explaining why it was important to continue working on gravitational-wave parameter estimation, in particular how to deal with non-Gaussian or non-stationary noise. I think I may be a bit of a jinx. For GW170817, the glitch wasn’t a big problem: these types of blips can be removed. I’m more concerned about the longer duration ones, which are less easy to separate out from background noise. Don’t say I didn’t warn you in O3.

The duty of analysing signals to infer their source properties was divided up into shifts for O2. On January 4, the time of GW170104, I was on shift with my partner Aaron Zimmerman. It was his first day. Having survived that madness, Aaron signed back up for the rota. Can you guess who was on shift for the week which contained GW170814 and GW170817? Yep, Aaron (this time partnered with the excellent Carl-Johan Haster). Obviously, we’ll need to have Aaron on the rota for the entirety of O3. In preparation, he has already started on paper drafting.

Methods Section: Chained rota member to a terminal; ignored his cries for help. Detections followed swiftly.

The lightest elements (hydrogen, helium and lithium) were made during the Big Bang. Stars burn these to make heavier elements. Energy can be released by fusion up to around iron. Therefore, heavier elements need to be made elsewhere, for example in the material ejected from supernovae or (as we have now seen) neutron star mergers, where there are lots of neutrons flying around to be absorbed. Elements (like gold and platinum) formed by this rapid neutron capture are known as r-process elements, I think because they are beloved by pirates.

A couple of weeks ago, the Nobel Prize in Physics was announced for the observation of gravitational waves. In December, the laureates will be presented with a gold (not chocolate) medal. I love the idea that this gold may have come from merging neutron stars.
