In **this paper** we look at full *three-dimensional* localization of gravitational-wave sources; we import a (rather cunning) technique from computer vision to construct a probability distribution for the source’s location, and then explore how well we could localise a set of simulated binary neutron stars. Knowing the source location enables lots of cool science. First, it aids direct follow-up observations with non-gravitational-wave observatories, searching for electromagnetic or neutrino counterparts. It’s especially helpful if you can cross-reference with galaxy catalogues to find the most probable source locations (this technique was used to find the kilonova associated with GW170817). Even without finding a counterpart, knowing the most probable host galaxy helps us figure out how the source formed (have lots of stars been born recently, or are all the stars old?), and allows us to measure the expansion of the Universe. Having a reliable technique to reconstruct source locations is useful!

This was a fun paper to write [bonus note]. I’m sure it will be valuable, both for showing how to perform this type of reconstruction of a multi-dimensional probability density, and for its implications for source localization and follow-up of gravitational-wave signals. I go into details of both below, first discussing our statistical model (this is a bit technical), then looking at our results for a set of binary neutron stars (which have implications for hunting for counterparts).

When we analyse gravitational-wave data to infer the source properties (location, masses, etc.), we map out parameter space with a set of samples: a list of points in the parameter space, with there being more around more probable locations and fewer in less probable locations. These samples encode everything about the probability distribution for the different parameters; we just need to extract it…

For our application, we want a nice smooth probability density. How do we convert a bunch of discrete samples to a smooth distribution? The simplest thing is to bin the samples. However, picking the right bin size is difficult, and becomes much harder in higher dimensions. Another popular option is to use kernel density estimation. This is better at ensuring smooth results, but you now have to worry about the size of your kernels.
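To make the kernel-size worry concrete, here is a minimal kernel density estimate using SciPy’s `gaussian_kde` (this is not the method of the paper; the bandwidth comes from Scott’s rule of thumb by default, which is exactly the tuning choice discussed above):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Mock one-dimensional posterior samples (e.g. a distance in Mpc).
samples = rng.normal(100.0, 5.0, size=2000)

# Kernel density estimate: a small Gaussian kernel on every sample.
# The bandwidth is set by Scott's rule of thumb, which is exactly the
# kernel-size tuning the text worries about.
kde = gaussian_kde(samples)

density_peak = kde(100.0)[0]  # density in the bulk of the samples
density_tail = kde(130.0)[0]  # density far out in the tail
```

Changing the bandwidth by hand (the `bw_method` argument) shows how sensitive the smooth result is to this choice.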

Our approach is in essence to use a kernel density estimate, but to learn the size and position of the kernels (as well as the number) from the data as an extra layer of inference. The “Gaussian mixture model” part of the name refers to the kernels—we use several different Gaussians. The “Dirichlet process” part refers to how we assign their properties (their means and standard deviations). What I really like about this technique, as opposed to the usual rule-of-thumb approaches used for kernel density estimation, is that it is well justified from a theoretical point of view.
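For illustration, scikit-learn’s `BayesianGaussianMixture` implements a (truncated) Dirichlet process Gaussian mixture of this flavour. This is a sketch on mock samples, not the paper’s actual code or settings:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Mock localization samples in 3D (two clusters standing in for a
# multimodal posterior from a parameter-estimation run).
samples = np.vstack([
    rng.normal([10.0, 20.0, 100.0], [1.0, 1.0, 5.0], size=(800, 3)),
    rng.normal([12.0, 22.0, 120.0], [0.5, 0.5, 3.0], size=(200, 3)),
])

# Dirichlet process Gaussian mixture: the number of active Gaussians,
# and their means and covariances, are inferred from the samples.
dpgmm = BayesianGaussianMixture(
    n_components=10,  # an upper bound; unused components are pruned away
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(samples)

# The result is a smooth density we can evaluate anywhere, for example
# at the position of a candidate host galaxy.
log_density = dpgmm.score_samples([[10.0, 20.0, 100.0]])[0]
```

The `n_components` argument only caps the number of Gaussians; the Dirichlet process prior decides how many are actually used.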

I hadn’t come across a Dirichlet process before. Section 2 of the paper is a walkthrough of how I built up an understanding of this mathematical object, and it contains lots of helpful references if you’d like to dig deeper.

In our application, you can think of the Dirichlet process as being a probability distribution for probability distributions. We want a probability distribution describing the source location. Given our samples, we infer what this looks like. We could put all the probability into one big Gaussian, or we could put it into lots of little Gaussians. The Gaussians could be wide or narrow or a mix. The Dirichlet distribution allows us to assign probabilities to each configuration of Gaussians; for example, if our samples are all in the northern hemisphere, we probably want Gaussians centred around there, rather than in the southern hemisphere.

With the resulting probability distribution for the source location, we can quickly evaluate it at a single point. This means we can rapidly produce a list of most probable source galaxies—extremely handy if you need to know where to point a telescope before a kilonova fades away (or someone else finds it).
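As a toy illustration of the ranking step (with a single Gaussian standing in for the reconstructed mixture, and a made-up galaxy catalogue):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Stand-in for the reconstructed 3D posterior: a single Gaussian blob
# (the real thing would be the fitted mixture of Gaussians).
posterior = multivariate_normal(mean=[10.0, 20.0, 100.0],
                                cov=np.diag([1.0, 1.0, 25.0]))

# Hypothetical galaxy catalogue: (x, y, distance) positions.
galaxies = {
    "NGC-A": [10.2, 19.8, 102.0],
    "NGC-B": [13.0, 24.0, 140.0],
    "NGC-C": [10.5, 20.5, 95.0],
}

# Rank by posterior density at each galaxy's position: the most
# probable hosts go to the top of the observing list.
ranked = sorted(galaxies, key=lambda g: posterior.pdf(galaxies[g]),
                reverse=True)
```

Each evaluation is a cheap density look-up, which is what makes the rapid ranking possible.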

To verify that our technique works, and to develop an intuition for three-dimensional localizations, we studied a set of simulated binary neutron star signals created for the First 2 Years trilogy of papers. This data set is now well studied; it illustrates performance in what we anticipated to be the first two observing runs of the advanced detectors, which turned out to be not too far from the truth. We have previously looked at three-dimensional localizations for these signals using a super rapid approximation.

The plots below show how well we could localise our binary neutron star sources. Specifically, the plots show the size of the volume which has a 90% probability of containing the source versus the signal-to-noise ratio (the loudness) of the signal. Typically, volumes are –, which is about – Olympic swimming pools. Such a volume would contain something like – galaxies.

Looking at the results in detail, we can learn a number of things:

- The localization volume is roughly inversely proportional to the *sixth* power of the signal-to-noise ratio [bonus note]. Loud signals are localized *much* better than quieter ones!
- The localization dramatically improves when we have three-detector observations. The extra detector improves the sky localization, which reduces the localization volume.
- To get the benefit of the extra detector, the source needs to be close enough that all the detectors receive a decent amount of the signal-to-noise ratio. In our case, Virgo is the least sensitive, and we see that the best localizations are when it has a fair share of the signal-to-noise ratio.
- Considering the cases where we only have two detectors, localization volumes get bigger at a given signal-to-noise ratio as the detectors get more sensitive. This is because we can detect sources at greater distances.

Putting all these bits together, I think in the future, when we have lots of detections, it would make most sense to prioritise following up the loudest signals. These are the best localised, and will also be the brightest since they are the closest, meaning there’s the greatest potential for actually finding a counterpart. As the sensitivity of the detectors improves, it’s only going to get more difficult to find a counterpart to a typical gravitational-wave signal, as sources will be further away and less well localized. However, having more sensitive detectors also means that we are more likely to have a really loud signal, which should be really well localized.

Using our localization volumes as a guide, you would only need to search *one* galaxy to find the true source in about 7% of cases with a three-detector network similar to that at the end of our second observing run. Similarly, you would only need to search ten galaxies in 23% of cases. It might be possible to get even better performance by considering which galaxies are most probable because they are the biggest or the most likely to produce merging binary neutron stars. This is definitely a good approach to follow.

**arXiv:** 1801.08009 [astro-ph.IM]

**Journal:** *Monthly Notices of the Royal Astronomical Society*; **479(**1):601–614; 2018

**Code:** 3d_volume

**Buzzword bingo:** Interdisciplinary (we worked with computer scientist Tom Haines); machine learning (the inference involving our Dirichlet process Gaussian mixture model); multimessenger astronomy (as our results are useful for following up gravitational-wave signals in the search for counterparts)

We started writing this paper back before the first observing run of Advanced LIGO. We had a pretty complete draft on Friday 11 September 2015. We just needed to gather together a few extra numbers and polish up the figures and we’d be done! At 10:50 am on Monday 14 September 2015, we made our first detection of gravitational waves. The paper was put on hold. The pace of discoveries over the coming years meant we never quite found enough time to get it together—I’ve rewritten the introduction a dozen times. This is a shame, as it meant that this study came out much later than our other three-dimensional localization study. It’s extremely satisfying to have it done, and the delay has the advantage of justifying one of my favourite acknowledgement sections.

We find that the localization volume is inversely proportional to the sixth power of the signal-to-noise ratio $latex \rho$. This is what you would expect. The localization volume depends upon the angular uncertainty on the sky $latex \Delta \Omega$, the distance to the source $latex D$, and the distance uncertainty $latex \Delta D$,

$latex \Delta V \sim \Delta \Omega D^2 \Delta D$.

Typically, the uncertainty on a parameter (like the masses) scales inversely with the signal-to-noise ratio. This is the case for the logarithm of the distance, which means

$latex \Delta D \propto \dfrac{D}{\rho}$.

The uncertainty in the sky location (being two dimensional) scales inversely with the square of the signal-to-noise ratio,

$latex \Delta \Omega \propto \dfrac{1}{\rho^2}$.

The signal-to-noise ratio itself is inversely proportional to the distance to the source (sources further away are quieter). Therefore, putting everything together gives

$latex \Delta V \propto \dfrac{1}{\rho^6}$.
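We can sanity-check this chain of proportionalities numerically (a toy model with all constants of proportionality set to one):

```python
def localization_volume(rho):
    """Toy credible volume: Delta_V ~ Delta_Omega * D**2 * Delta_D."""
    distance = 1.0 / rho           # rho is inversely proportional to D
    sky_area = 1.0 / rho**2        # Delta_Omega ~ 1 / rho**2
    distance_err = distance / rho  # Delta(ln D) ~ 1/rho => Delta_D ~ D/rho
    return sky_area * distance**2 * distance_err

# Doubling the signal-to-noise ratio should shrink the volume by 2**6 = 64.
ratio = localization_volume(10.0) / localization_volume(20.0)
```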

We all know that treasure is marked by a cross. In the case of a binary neutron star merger, dense material ejected from the neutron stars will decay to heavy elements like gold and platinum, so there is definitely a lot of treasure at the source location.

This is the second published version; the **big** changes since the last version are:

- We have now detected gravitational waves
- We have observed our first gravitational wave with a multimessenger counterpart [bonus note]
- We now include KAGRA, along with LIGO and Virgo

As you might imagine, these are quite significant updates! The first showed that we can do gravitational-wave astronomy. The second showed that we can do exactly the science this paper is about. The third makes this the first joint publication of the LIGO Scientific, Virgo and KAGRA Collaborations—hopefully the first of many to come.

I lead both this and the previous version. In my **blog on the previous version**, I explained how I got involved, and the long road that a collaboration must follow to get published. In this post, I’ll give an overview of the key details from the new version together with some behind-the-scenes background (working as part of a large scientific collaboration allows you to do *amazing* science, but it can also be exhausting). If you’d like a digest of this paper’s science, check out the **LIGO science summary**.

The first section of the paper outlines the progression of detector sensitivities. The instruments are incredibly sensitive—we’ve never made machines to make these types of measurements before, so it takes a lot of work to get them to run smoothly. We can’t just switch them on and have them work at design sensitivity [bonus note].

The plots above show the planned progression of the different detectors. We had to get these agreed before we could write the later parts of the paper because the sensitivity of the detectors determines how many sources we will see and how well we will be able to localize them. I had anticipated that KAGRA would be the most challenging here, as we had not previously put together this sequence of curves. However, this was not the case: instead, it was Virgo which was tricky. They had a problem with the silica fibres which suspended their mirrors (they snapped, which is definitely not what you want). The silica fibres were replaced with steel ones, but it wasn’t immediately clear what sensitivity they’d achieve and when. The final word was they’d observe in August 2017 and that their projections were unchanged. I was sceptical, but they did pull it out of the bag! We had our first clear three-detector observation of a gravitational wave on 14 August 2017. Bravo Virgo!

The second section explains our data analysis techniques: how we find signals in the data, how we work out probable source locations, and how we communicate these results to the broader astronomical community—from the start of our third observing run (O3), information will be shared publicly!

The information in this section hasn’t changed much [bonus note]. There is a nice collection of references on the follow-up of different events, including GW170817 (I’d recommend my blog for more on the electromagnetic story). The main update I wanted to include was information on the detection of our first gravitational waves. It turned out to be more difficult than I imagined to come up with a plot which showed results from the five different search algorithms (two which used templates, and three which did not) which found GW150914, and harder still to make a plot which everyone liked. This plot became somewhat infamous for the amount of discussion it generated. I think we ended up with something which was a good compromise and clearly shows our detections sticking out above the background of noise.

The third section brings everything together and looks at what the prospects are for (gravitational-wave) multimessenger astronomy during each observing run. It’s really all about the big table.

I think there are three really awesome take-aways from this:

- Actual binary neutron stars detected = 1. We did it!
- Using the rates inferred from our observations so far (including GW170817), once we have the full *five*-detector network of LIGO-Hanford, LIGO-Livingston, Virgo, KAGRA and LIGO-India, we could detect 11–180 binary neutron stars a year. That’s something like between one a month and one every other day! I’m kind of scared…
- With the five-detector network the sky localization is really good. The median localization is about 9–12 square degrees, about the area LSST could cover in a single pointing! This really shows the benefit of adding more detectors to the network. The improvement comes not because a source is much better localized with five detectors than with four, but because with five detectors you almost always have at least three detectors (the number needed to get a good triangulation) online at any moment, so you get a nice localization for pretty much everything.
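That last point is easy to check with a simple binomial model. The duty cycle below is an assumed illustrative value, not a number from the paper:

```python
from math import comb

def prob_at_least(k, n, p):
    """Probability that at least k of n independent detectors are
    observing, each with duty cycle p (a simple binomial model)."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m)
               for m in range(k, n + 1))

p = 0.7  # assumed illustrative duty cycle, not a number from the paper
three_of_three = prob_at_least(3, 3, p)  # all of a 3-detector network up
three_of_five = prob_at_least(3, 5, p)   # >= 3 of a 5-detector network up
```

With these numbers, a three-detector network has all three up only about a third of the time, while a five-detector network has at least three up over 80% of the time.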

In summary, the prospects for observing and localizing gravitational-wave transients are pretty great. If you are an astronomer, make the most of the quiet before O3 begins next year.

**arXiv:** 1304.0670 [gr-qc]

**Journal:** *Living Reviews In Relativity*; **21**:3(57); 2018

**Science summary:** A Bright Today and Brighter Tomorrow: Prospects for Gravitational-Wave Astronomy With Advanced LIGO, Advanced Virgo, and KAGRA

**Prospects for the next update:** After two updates, I’ve stepped down from preparing the next one. Wooh!

The announcement of our first multimessenger detection came between us submitting this update and us getting referee reports. We wanted an updated version of this paper, with the current details of our observing plans, to be available for our astronomer partners to be able to cite when writing their papers on GW170817.

Predictably, when the referee reports came back, we were told we really should include reference to GW170817. This type of discovery is exactly what this paper is about! There was an avalanche of results surrounding GW170817, so I had to read through a lot of papers. The reference list swelled from 8 to 13 pages, but this effort was handy for my blog writing. After including all these new results, it really felt like this was version 2.5 of the Observing Scenarios, rather than version 2.

We use the term design sensitivity to indicate the performance the current detectors were designed to achieve. These are the targets we aim to achieve with Advanced LIGO, Advanced Virgo and KAGRA. One thing I’ve had to try to train myself not to say is that design sensitivity is the *final* sensitivity of our detectors. Teams are currently working on plans for how we can upgrade our detectors beyond design sensitivity. Reaching design sensitivity will not be the end of our journey.

Our first gravitational-wave detections were from binary black holes. Therefore, when we were starting on this update there was a push to switch from focusing on binary neutron stars to binary black holes. I resisted this, partially because I’m lazy, but mostly because I still thought that binary neutron stars were our best bet for multimessenger astronomy. This worked out nicely.

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics which we are currently most uncertain of:

- Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
- Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion, that both stars live in a single fluffy envelope. Orbiting inside the envelope helps drag the two stars closer together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
- Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low: there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

We use parameters describing potential variations in these properties as ingredients to the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question is now: which parameters affect our gravitational-wave measurements, and how accurately can we measure those which do?

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distributions of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries which have a long inspiral, but is less well measured for higher mass systems. In reality we’ll have much more information, so these results should be the *minimum* we can actually do.
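The chirp mass has a standard closed form, easy to compute from the component masses:

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1 * m2)**(3/5) / (m1 + m2)**(1/5), in the same
    units as the input masses (e.g. solar masses)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

mc_bns = chirp_mass(1.4, 1.4)    # a typical binary neutron star
mc_bbh = chirp_mass(30.0, 30.0)  # a heavier binary black hole
```

For equal masses, the chirp mass is just the component mass divided by $latex 2^{1/5}$.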

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct we are more likely to get results similar to its predictions than further away, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember that it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

$latex F_{ij} = \left\langle \dfrac{\partial \ln \Lambda(d|\lambda)}{\partial \lambda_i} \dfrac{\partial \ln \Lambda(d|\lambda)}{\partial \lambda_j} \right\rangle$,

where $latex \Lambda(d|\lambda)$ is the likelihood for data $latex d$ (the number of observations and their chirp mass distribution in our case), $latex \lambda_i$ are our population parameters (natal kick, etc.), and the angular brackets indicate the average over possible realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance of a normal distribution) for the parameters $latex \lambda_i$. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these observations.

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.

**arXiv:** 1711.06287 [astro-ph.HE]

**Journal:** *Monthly Notices of the Royal Astronomical Society*; **477**(4):4685–4695; 2018

**Favourite dinosaur:** Professor Science

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

We model the supernova kicks as following a Maxwell–Boltzmann distribution,

$latex p(v) = \sqrt{\dfrac{2}{\pi}} \dfrac{v^2}{\sigma^3} \exp\left(-\dfrac{v^2}{2\sigma^2}\right)$,

where $latex \sigma$ is the unknown population parameter. The natal kick received by the black hole is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

$latex v_\mathrm{kick} = (1 - f_\mathrm{fb})v$,

where $latex f_\mathrm{fb}$ is the fraction that falls back, taken from Fryer *et al*. (2012). The fraction is greater for larger black holes, so the biggest black holes get no kicks. This means that the largest black holes are unaffected by the value of $latex \sigma$.
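A quick sketch of sampling kicks under this prescription (illustrative numbers, not the COMPAS settings; the Maxwell–Boltzmann draw is implemented as the magnitude of an isotropic three-dimensional Gaussian):

```python
import numpy as np

rng = np.random.default_rng(1)

def natal_kick(sigma, f_fb, size):
    """Draw natal kicks: a Maxwell-Boltzmann kick of scale sigma (the
    magnitude of an isotropic 3D Gaussian), reduced by the fallback
    fraction f_fb as v_kick = (1 - f_fb) * v."""
    v = np.linalg.norm(rng.normal(0.0, sigma, size=(size, 3)), axis=1)
    return (1.0 - f_fb) * v

low_fallback = natal_kick(sigma=250.0, f_fb=0.1, size=10_000)   # small BH
full_fallback = natal_kick(sigma=250.0, f_fb=1.0, size=10_000)  # biggest BHs
```

With complete fallback every kick is zero, which is why the heaviest black holes carry no information about $latex \sigma$.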

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the merger rate $latex \mu$ and the probability $latex f_k$ of falling in each chirp mass bin (we factor measurement uncertainty into this). Our observations are the total number of detections $latex N_\mathrm{obs}$ and the number $latex N_k$ in each chirp mass bin. The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

$latex \Lambda = \Lambda_\mathrm{rate}(N_\mathrm{obs}) \Lambda_\mathrm{mass}(\{N_k\})$.

For the rate likelihood, we need the probability of observing $latex N_\mathrm{obs}$ events given the predicted rate $latex \mu$. This is given by a Poisson distribution,

$latex \Lambda_\mathrm{rate}(N_\mathrm{obs}) = \dfrac{(\mu t_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!} \exp(-\mu t_\mathrm{obs})$,

where $latex t_\mathrm{obs}$ is the total observing time. For the chirp mass likelihood, we need the probability of getting the observed number of detections in each bin, given the predicted fractions. This is given by a multinomial distribution,

$latex \Lambda_\mathrm{mass}(\{N_k\}) = \dfrac{N_\mathrm{obs}!}{\prod_k N_k!} \prod_k f_k^{N_k}$.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.
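Here is a sketch of the combined log-likelihood using SciPy’s distributions (toy numbers, not the COMPAS predictions):

```python
import numpy as np
from scipy.stats import poisson, multinomial

def log_likelihood(mu, f, n_obs, n_k, t_obs):
    """Log-likelihood of n_obs total detections (Poisson with predicted
    rate mu over observing time t_obs) and counts n_k per chirp-mass
    bin (multinomial with predicted bin fractions f)."""
    return (poisson.logpmf(n_obs, mu * t_obs)
            + multinomial.logpmf(n_k, n=n_obs, p=f))

# Toy predictions: 500 mergers per year, three chirp-mass bins.
mu, t_obs = 500.0, 2.0
f = np.array([0.5, 0.3, 0.2])
n_k = np.array([510, 290, 200])  # observed counts, close to expectations
logL = log_likelihood(mu, f, n_k.sum(), n_k, t_obs)
```

Counts far from the predicted fractions give a much lower log-likelihood, which is the information the Fisher matrix quantifies.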

When we substitute our likelihood into the expression for the Fisher information matrix, we get

$latex F_{ij} = \dfrac{t_\mathrm{obs}}{\mu} \dfrac{\partial \mu}{\partial \lambda_i} \dfrac{\partial \mu}{\partial \lambda_j} + \mu t_\mathrm{obs} \displaystyle\sum_k \dfrac{1}{f_k} \dfrac{\partial f_k}{\partial \lambda_i} \dfrac{\partial f_k}{\partial \lambda_j}$.

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix can be defined in terms of second derivatives. The expected number of events is $latex \langle N_\mathrm{obs} \rangle = \mu t_\mathrm{obs}$. Therefore, we can see that the measurement uncertainty, defined by the inverse of the Fisher information matrix, scales on average as $latex 1/\sqrt{\mu t_\mathrm{obs}}$.
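The rate term of this Fisher matrix is simple enough to sketch numerically, using finite differences for the derivatives (the population model below is a made-up one-parameter toy, not COMPAS):

```python
import numpy as np

def fisher_rate(mu_of_lambda, lam, t_obs, eps=1e-6):
    """Fisher matrix from the Poisson rate term,
    F_ij = (t_obs / mu) (dmu/dlam_i)(dmu/dlam_j),
    with the derivatives taken by central finite differences."""
    lam = np.asarray(lam, dtype=float)
    grad = np.empty_like(lam)
    for i in range(lam.size):
        step = np.zeros_like(lam)
        step[i] = eps
        grad[i] = (mu_of_lambda(lam + step)
                   - mu_of_lambda(lam - step)) / (2 * eps)
    return (t_obs / mu_of_lambda(lam)) * np.outer(grad, grad)

# Made-up one-parameter population model: mu(lam) = 100 * exp(-lam).
mu_model = lambda lam: 100.0 * np.exp(-lam[0])
sigma_1yr = np.sqrt(1.0 / fisher_rate(mu_model, [0.5], t_obs=1.0)[0, 0])
sigma_4yr = np.sqrt(1.0 / fisher_rate(mu_model, [0.5], t_obs=4.0)[0, 0])
```

Quadrupling the observing time (and hence the expected number of events) halves the estimated uncertainty, as the $latex 1/\sqrt{\mu t_\mathrm{obs}}$ scaling predicts.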

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, meaning that the shape of the posterior is dictated by the shape of the likelihood.

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn’t be the maximum). The maximum likelihood values of $latex N_\mathrm{obs} = \mu t_\mathrm{obs}$ and $latex N_k = f_k N_\mathrm{obs}$ are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large $latex N_\mathrm{obs}$. As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of $latex N_\mathrm{obs} \rightarrow \infty$, you’ll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our $latex N_\mathrm{obs}$ was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.


My previous post discussed some of the interesting features of EMRIs. Because of the extreme difference in masses of the two black holes, it takes a long time for them to complete their inspiral. We can measure tens of *thousands* of orbits, which allows us to make wonderfully precise measurements of the source properties (if we can accurately pick out the signal from the data). Here, we’ll examine exactly what we could learn with LISA from EMRIs [bonus note].

First we build a model to investigate how many EMRIs there could be. There is a lot of astrophysics which we are currently uncertain about, which leads to a large spread in estimates for the number of EMRIs. Second, we look at how precisely we could measure properties from the EMRI signals. The astrophysical uncertainties are less important here—we could get a revolutionary insight into the lives of massive black holes.

To build a model of how many EMRIs there are, we need a few different inputs:

- The population of massive black holes
- The distribution of stellar clusters around massive black holes
- The range of orbits of EMRIs

We examine each of these in turn, building a more detailed model than has previously been constructed for EMRIs.

We currently know little about the population of massive black holes. This means we’ll discover lots when we start measuring signals (yay), but it’s rather inconvenient now, when we’re trying to predict how many EMRIs there are (boo). We take two different models for the mass distribution of massive black holes. One is based upon a semi-analytic model of massive black hole formation, the other is at the pessimistic end allowed by current observations. The semi-analytic model predicts massive black hole spins around 0.98, but we also consider spins being uniformly distributed between 0 and 1, and spins of 0. This gives us a picture of the bigger black hole, now we need the smaller.

Observations show that the masses of massive black holes are correlated with their surrounding cluster of stars—bigger black holes have bigger clusters. We consider four different versions of this trend: Gültekin *et al*. (2009); Kormendy & Ho (2013); Graham & Scott (2013), and Shankar *et al*. (2016). The stars and black holes about a massive black hole should form a cusp, with the density of objects increasing towards the massive black hole. This is great for EMRI formation. However, the cusp is disrupted if two galaxies (and their massive black holes) merge. This tends to happen—it’s how we get bigger galaxies (and black holes). It then takes some time for the cusp to reform, during which time, we don’t expect as many EMRIs. Therefore, we factor in the amount of time for which there is a cusp for massive black holes of different masses and spins.

Given a cusp about a massive black hole, we then need to know how often an EMRI forms. Simulations give us a starting point. However, these only consider a snap-shot, and we need to consider how things evolve with time. As stellar-mass black holes inspiral, the massive black hole will grow in mass and the surrounding cluster will become depleted. Both these effects are amplified because for each inspiral, there’ll be many more stars or stellar-mass black holes which will just plunge directly into the massive black hole. We therefore need to limit the number of EMRIs so that we don’t have an unrealistically high rate. We do this by adding in a couple of feedback factors, one to cap the rate so that we don’t deplete the cusp quicker than new objects will be added to it, and one to limit the maximum amount of mass the massive black hole can grow from inspirals and plunges. This gives us an idea of the total number of inspirals.

Finally, we calculate the orbits that EMRIs will be on. We again base this upon simulations, and factor in how the spin of the massive black hole affects the distribution of orbital inclinations.

Putting all the pieces together, we can calculate the population of EMRIs. We now need to work out how many LISA would be able to detect. This means we need models for the gravitational-wave signal. Since we are simulating a large number, we use a computationally inexpensive analytic model. We know that this isn’t too accurate, but we consider two different options for setting the end of the inspiral (where the smaller black hole finally plunges) which should bound the true range of results.

Allowing for all the different uncertainties, we find that there should be somewhere between 1 and 4200 EMRIs detected per year. (The model we used when studying transient resonances predicted about 250 per year, albeit with a slightly different detector configuration, which is fairly typical of the models we consider here.) This range is encouraging. The lower end means that EMRIs are a pretty safe bet: we’d be unlucky not to get at least one over the course of a multi-year mission (LISA should have at least four years observing). The upper end means there could be lots—we might actually need to worry about them forming a background source of noise if we can’t individually distinguish them!

Having shown that EMRIs are a good LISA source, we now need to consider what we could learn by measuring them.

We estimate the precision with which we will be able to measure parameters using the Fisher information matrix. The Fisher matrix measures how sensitive our observations are to changes in the parameters (the more sensitive we are, the better we should be able to measure that parameter). Its inverse gives a lower bound on the actual measurement uncertainty, and well approximates it in the high signal-to-noise ratio (loud signal) limit. The combination of our use of the Fisher matrix and our approximate signal models means our results will not be perfect estimates of real performance, but they should give an indication of the typical size of measurement uncertainties.
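To make the Fisher-matrix idea concrete, here is a toy sketch (not the paper’s waveform model: the chirp signal, white noise and parameter values are all invented for illustration). The matrix is built from numerical derivatives of the signal with respect to each parameter; inverting it gives the Cramér–Rao bound on the parameter uncertainties:

```python
import numpy as np

def toy_waveform(t, params):
    """A toy chirp: amplitude A, frequency f0, frequency drift fdot."""
    A, f0, fdot = params
    return A * np.sin(2 * np.pi * (f0 * t + 0.5 * fdot * t**2))

def fisher_matrix(t, params, noise_var=1.0):
    """Fisher matrix for white noise: F_ij = sum over samples of
    (dh/dtheta_i)(dh/dtheta_j) / noise variance."""
    params = np.asarray(params, dtype=float)
    derivs = []
    for i in range(len(params)):
        step = 1e-6 * max(abs(params[i]), 1.0)
        hi, lo = params.copy(), params.copy()
        hi[i] += step
        lo[i] -= step
        derivs.append((toy_waveform(t, hi) - toy_waveform(t, lo)) / (2 * step))
    return np.array([[np.dot(a, b) / noise_var for b in derivs] for a in derivs])

t = np.linspace(0.0, 10.0, 4096)
F = fisher_matrix(t, (1.0, 1.0, 0.01))
sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # Cramér–Rao uncertainty estimates
```

A louder signal (bigger amplitude) gives a bigger Fisher matrix and hence smaller uncertainties, just as you’d expect.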

Given that we measure a huge number of cycles from the EMRI signal, we can make really precise measurements of the mass and spin of the massive black hole, as these parameters control the orbital frequencies. Below are plots for the typical measurement precision from our Fisher matrix analysis. The orbital eccentricity is measured to similar accuracy, as it influences the range of orbital frequencies too. We also get pretty good measurements of the mass of the smaller black hole, as this sets how quickly the inspiral proceeds (how quickly the orbital frequencies change). EMRIs will allow us to do precision astronomy!

Now, before you get too excited that we’re going to learn *everything* about massive black holes, there is one confession I should make. In the plot above I show the measurement accuracy for the redshifted mass of the massive black hole. The cosmological expansion of the Universe causes gravitational waves to become stretched to lower frequencies in the same way light is (this makes visible light more red, hence the name). The measured frequency is $f = f_\mathrm{e}/(1 + z)$, where $f_\mathrm{e}$ is the frequency emitted and $z$ is the redshift ($z = 0$ for a nearby source, and larger for further away sources). Lower frequency gravitational waves correspond to higher mass systems, so it is often convenient to work with the redshifted mass, the mass corresponding to the signal you measure if you ignore redshifting. The redshifted mass of the massive black hole is $M_z = (1 + z)M$, where $M$ is the true mass. To work out the true mass, we need the redshift, which means we need to measure the distance to the source.
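Undoing the redshifting is simple once you have the redshift. Here is a minimal sketch; the low-redshift approximation $z \approx H_0 d/c$ and the value of $H_0$ are my simplifications (a real analysis would use a full cosmological model):

```python
C_KM_S = 299792.458   # speed of light (km/s)
H0 = 70.0             # Hubble constant (km/s/Mpc), an assumed value

def redshift_from_distance(d_mpc):
    """Low-redshift (Hubble-law) approximation: z ~ H0 * d / c."""
    return H0 * d_mpc / C_KM_S

def true_mass(m_redshifted, z):
    """Undo the redshifting of the measured mass: M = M_z / (1 + z)."""
    return m_redshifted / (1.0 + z)

z = redshift_from_distance(1000.0)   # a source at 1 Gpc
M = true_mass(1.0e6, z)              # redshifted mass of 10^6 solar masses
```

The true mass is always smaller than the redshifted one, and an error in the distance feeds directly into an error in the mass.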

The plot above shows the fractional uncertainty on the distance. We don’t measure this too well, as it is determined from the amplitude of the signal, rather than its frequency components. The situation is much as for LIGO. The larger uncertainties on the distance will dominate the overall uncertainty on the black hole masses. We won’t be getting all these to fractions of a percent. However, that doesn’t mean we can’t still figure out what the distribution of masses looks like!

One of the really exciting things we can do with EMRIs is check that the signal matches our expectations for a black hole in general relativity. Since we get such an excellent map of the spacetime of the massive black hole, it is easy to check for deviations. In general relativity, everything about the black hole is fixed by its mass and spin (often referred to as the no-hair theorem). Using the measured EMRI signal, we can check if this is the case. One convenient way of doing this is to describe the spacetime of the massive object in terms of a multipole expansion. The first (most important) term gives the mass, and the next term the spin. The third term (the quadrupole) is set by the first two, so if we can measure it, we can check if it is consistent with the expected relation. We estimated how precisely we could measure a deviation in the quadrupole. Fortunately, for this consistency test, all factors from redshifting cancel out, so we can get really detailed results, as shown below. Using EMRIs, we’ll be able to check for really small differences from general relativity!
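In geometric units, the no-hair property of a Kerr black hole can be stated very compactly in terms of its mass multipole moments $M_\ell$ and current multipole moments $S_\ell$ (this is the standard general-relativity result, not something new to this paper):

```latex
M_\ell + \mathrm{i} S_\ell = M (\mathrm{i} a)^\ell ,
```

so $M_0 = M$ is the mass, $S_1 = Ma$ is the spin angular momentum, and the quadrupole is completely fixed to be $M_2 = -Ma^2$. Any measured deviation from this quadrupole would indicate that the central object is not a Kerr black hole.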

In summary: EMRIs are awesome. We’re not sure how many we’ll detect with LISA, but we’re confident there will be some, perhaps a couple of hundred per year. From the signals we’ll get new insights into the masses and spins of black holes. This should tell us something about how they, and their surrounding galaxies, evolved. We’ll also be able to do some stringent tests of whether the massive objects are black holes as described by general relativity. It’s all pretty exciting ahead of LISA’s launch, which is currently planned for around 2034…

**arXiv:** 1703.09722 [gr-qc]

**Journal:** *Physical Review D*; **95**(10):103012; 2017

**Conference proceedings:** 1704.00009 [astro-ph.GA] (from when work was still in-progress)

**Estimated number of Marvel films before LISA launch:** 48 (starting with *Ant-Man and the Wasp*)

Is it “extreme-mass-ratio inspiral”, “extreme mass-ratio inspiral” or “extreme mass ratio inspiral”? All are used in the literature. This is one of the advantages of using “EMRI”. The important thing is that we’re talking about inspirals that have a mass ratio which is extreme. For this paper, we used “extreme mass-ratio inspiral”, but when I first started my PhD, I was introduced to “extreme-mass-ratio inspirals”, so they are always stuck that way in my mind.

I think hyphenation is a bit of an art, and there’s no definitive answer here, just like there isn’t for superhero names, where you can have Iron Man, Spider-Man or Iceman.

This paper is part of a series looking at what LISA could tell us about different gravitational wave sources. So far, this series covers

- Massive black hole binaries
- Cosmological phase transitions
- Standard sirens (for measuring the expansion of the Universe)
- Inflation
- Extreme-mass-ratio inspirals

You’ll notice there’s a change in the name of the mission from eLISA to LISA part-way through, as things have evolved. (Or devolved?) I think the main take-away so far is that the cosmology group is the most enthusiastic.

EMRIs are a beautiful gravitational wave source. They occur when a stellar-mass black hole slowly inspirals into a massive black hole (as found in the centre of galaxies). The massive black hole can be tens of thousands or millions of times more massive than the stellar-mass black hole (hence *extreme* mass ratio). This means that the inspiral is slow—we can potentially measure tens of thousands of orbits. This is both the blessing and the curse of EMRIs. The huge number of cycles means that we can closely follow the inspiral, and build a detailed map of the massive black hole’s spacetime. EMRIs will give us precision measurements of the properties of massive black holes. However, to do this, we need to be able to find the EMRI signals in the data; we need models which can match the signals over all these cycles. Analysing EMRIs is a huge challenge.

EMRI orbits are complicated. At any moment, the orbit can be described by three orbital frequencies: one for radial (in/out) motion, one for polar (north/south, if we think of the spin of the massive black hole like the rotation of the Earth) motion, and one for axial (around in the east/west direction) motion. As gravitational waves are emitted, and the orbit shrinks, these frequencies evolve. The animation above, made by Steve Drasco, illustrates the evolution of an EMRI. Every so often, we can see the pattern freeze—the orbit stays in a constant shape (although this still rotates). This is a transient resonance. Two of the orbital frequencies become commensurate (so we might have 3 north/south cycles and 2 in/out cycles over the same period [bonus note])—this is the resonance. However, because the frequencies are still evolving, we don’t stay locked like this forever—which is why the resonance is transient. To calculate an EMRI, you need to know how the orbital frequencies evolve.
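As a toy illustration, we can evolve two slowly drifting frequencies and find the moment a 3:2 commensurability (3 polar cycles per 2 radial cycles) is crossed. The linear frequency drifts and all the numbers here are invented for illustration:

```python
import numpy as np

# Illustrative, slowly drifting orbital frequencies (arbitrary units)
t = np.linspace(0.0, 1000.0, 100001)
omega_r = 1.0 + 1e-4 * t        # radial (in/out) frequency
omega_theta = 1.4 + 3e-4 * t    # polar (north/south) frequency

# A 3:2 resonance occurs where the polar frequency is 1.5x the radial one,
# i.e. where the detuning 2*omega_theta - 3*omega_r passes through zero.
detuning = 2 * omega_theta - 3 * omega_r
i_res = int(np.argmin(np.abs(detuning)))
t_res = t[i_res]   # the moment of resonance crossing
```

Because the frequencies keep drifting, the detuning passes straight through zero: the orbit is only locked in the resonant pattern for a finite time.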

The evolution of an EMRI is slow—the time taken to inspiral is much longer than the time taken to complete one orbit. Therefore, we can usually split the problem of calculating the trajectory of an EMRI into two parts. On short timescales, we can consider orbits as having fixed frequencies. On long timescales, we can calculate the evolution by averaging over many orbits. You might see the problem with this—around resonances, this averaging breaks down. Whereas normally averaging over many orbits means averaging over a complicated trajectory that hits pretty much all possible points in the orbital range, on resonance, you just average over the same bit again and again. On resonance, terms which usually average to zero can become important. Éanna Flanagan and Tanja Hinderer first pointed out that around resonances the usual scheme (referred to as the adiabatic approximation) doesn’t work.

Around a resonance, the evolution will be enhanced or decreased a little relative to the standard adiabatic evolution. We get a kick. This is only small, but because we observe EMRIs for so many orbits, a small difference can grow to become a significant difference later on. Does this mean that we won’t be able to detect EMRIs with our standard models? This was a concern, so back at the end of my PhD I began to investigate [bonus note]. The first step is to understand the size of the kick.

If there were no gravitational waves, the orbit would not evolve; it would be fixed. The orbit could then be described by a set of constants of motion. The most commonly used when describing orbits about black holes are the energy, angular momentum and Carter constant. For the purposes of this blog, we’ll not worry too much about what these constants are, we’ll just consider some constant $C$.

The resonance kick is a change $\Delta C$ in this constant $C$. What should this depend on? There are three ingredients. First, the rate of change $\dot{C}$ of this constant on the resonant orbit. Second, the time $\tau$ spent on resonance. The bigger these are, the bigger the size of the jump. Therefore,

$\Delta C \propto \dot{C} \tau$.

However, the jump could be positive or negative. This depends upon the relative phase of the radial and polar motion [bonus note]—for example, do they both reach their maximum point at the same time, or does one lag behind the other? We’ll call this relative phase $\chi$. By varying $\chi$, we can get our resonant trajectory to go through any possible point in space. Therefore, averaging over $\chi$ should get us back to the adiabatic approximation: the average value of the jump must be zero. To complete our picture for the jump, we need a periodic function of the phase,

$\Delta C = \dot{C} \tau f(\chi)$,

with $\langle f(\chi) \rangle = 0$. Now we know the form, we can try to figure out what the pieces are.
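We can check this picture numerically with a toy model. Taking the simplest zero-mean periodic function, $f(\chi) = \sin\chi$ (my illustrative choice, not the actual self-force result), individual kicks can be positive or negative, but they average to zero over the phase:

```python
import numpy as np

def resonance_kick(Cdot, tau, chi):
    """Toy jump model: Delta C = Cdot * tau * f(chi), with f = sin (illustrative)."""
    return Cdot * tau * np.sin(chi)

# Sample the on-resonance phase uniformly over a full cycle
chi = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
kicks = resonance_kick(Cdot=1e-6, tau=1e4, chi=chi)

mean_kick = kicks.mean()        # ~0: phase-averaging recovers the adiabatic picture
max_kick = np.abs(kicks).max()  # individual kicks can be as large as Cdot * tau
```

So a single system gets a kick set by its (hard-to-predict) phase at resonance, even though the population-averaged effect vanishes.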

The rate of change $\dot{C}$ is proportional to the mass ratio: the smaller the stellar-mass black hole is relative to the massive one, the smaller $\dot{C}$ is. The exact details depend upon gravitational self-force calculations, which we’ll skip over, as they’re pretty hard, but they are the same for all orbits (resonant or not).

We can think of the resonance timescale either as the time for the orbital frequencies to drift apart or the time for the orbit to start filling the space again (so that it’s safe to average). The two pictures yield the same answer—there’s a fuller explanation in Section III A of the paper. To define the resonance timescale, it is useful to define the frequency $\Omega$, a commensurate combination of the radial and polar frequencies, which is zero exactly on resonance. If this is evolving at rate $\dot{\Omega}$, then the resonance timescale is

$\tau \sim 1/\sqrt{\dot{\Omega}}$.

This bridges the two timescales that usually define EMRIs: the short orbital timescale $T_\mathrm{orbit}$ and the long evolution timescale $T_\mathrm{evolve}$:

$\tau \sim \sqrt{T_\mathrm{orbit} T_\mathrm{evolve}}$.
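Numerically, this bridging is just a geometric mean. With illustrative numbers of roughly the right size for an EMRI (orbital periods of around a thousand seconds, inspirals lasting months to years):

```python
import math

T_orbit = 1.0e3    # orbital timescale (s), illustrative
T_evolve = 1.0e7   # evolution (inspiral) timescale (s), illustrative

# The resonance timescale is the geometric mean of the two, so it sits
# in between the orbital and evolution timescales.
T_res = math.sqrt(T_orbit * T_evolve)
```

For these numbers the resonance lasts about a hundred orbits: long enough for on-resonance effects to accumulate, short compared with the inspiral as a whole.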

To find the form of $f(\chi)$, we need to do some quite involved maths (given in Appendix B of the paper) [bonus note]. This works by treating the evolution far from resonance as depending upon two independent times (effectively defining the orbital and evolution timescales), and then matching the evolution close to resonance using an expansion in terms of a different time (something like the resonance timescale). The solution shows that the jump depends sensitively upon the phase at resonance, which makes the jumps extremely difficult to calculate.

We numerically evaluated the size of kicks for different orbits and resonances. We found a number of trends. First, higher-order resonances (those involving larger integer multiples of the orbital frequencies) have smaller jumps than lower-order ones. This makes sense, as higher-order resonances come closer to covering all the points in the space, and so are more like averaging over the entire space. Second, jumps are larger for higher eccentricity orbits. This also makes sense, as you can’t have resonances for circular (zero-eccentricity) orbits as there’s no radial motion, so the size of the jumps must tend to zero. We’ll see that these two points are important when it comes to observational consequences of transient resonances.

Now we’ve figured out the impact of passing through a transient resonance, let’s look at what this means for detecting EMRIs. The jump can mean that the evolution post-resonance can soon become out of phase with that pre-resonance. We can’t match both parts with the same adiabatic template. This could significantly hamper our prospects for detection, as we’re limited to the bits of signal we can pick up between resonances.

We created an astrophysical population of simulated EMRIs. We used numerical simulations to estimate a plausible population of massive black holes and the distribution of stellar-mass black holes inspiralling into them. We then used adiabatic models to see how many LISA (or eLISA as it was called at the time) could potentially detect. We found there were ~510 EMRIs detectable (with a signal-to-noise ratio of 15 or above) for a two-year mission.

We then calculated how much the signal-to-noise ratio would be reduced by passing through transient resonances. The plot below shows the distribution of signal-to-noise ratio for the original population, ignoring resonances, and then after factoring in the reduction. There are now ~490 detectable EMRIs, a loss of 4%. We can still detect the majority of EMRIs!

We were worried about the impact of transient resonances, since we know that jumps can cause signals to become undetectable, so why aren’t we seeing a big effect in our population? The answer lies in the trends we saw earlier. Jumps are large for low-order resonances with high eccentricities. These were the ones first highlighted, as they are obviously the most important. However, low-order resonances are only encountered really close to the massive black hole. This means late in the inspiral, after we have already accumulated lots of signal-to-noise ratio. Losing a little bit of signal right at the end doesn’t hurt detectability too much. On top of this, gravitational wave emission efficiently damps down eccentricity. Orbits typically have low eccentricities by the time they hit low-order resonances, meaning that the jumps are actually quite small. Although small jumps lead to some mismatch, we can still use our signal templates without jumps. Therefore, resonances don’t hamper us (too much) in finding EMRIs!

This may seem like a happy ending, but it is not the end of the story. While we can detect EMRIs, we still need to be able to accurately infer their source properties. Features not included in our signal templates (like jumps), could bias our results. For example, it might be that we can better match a jump by using a template for a different black hole mass or spin. However, if we include jumps, these extra features could give us extra precision in our measurements. The question of what jumps could mean for parameter estimation remains to be answered.

**arXiv:** 1608.08951 [gr-qc]

**Journal:** *Physical Review D*; **94**(12):124042(24); 2016

**Conference proceedings:** 1702.05481 [gr-qc] (only 2 pages—ideal for emergency journal club presentations)

**Favourite jumpers:** Woolly, Mario, Kangaroos

When discussing resonances, and their impact on orbital evolution, we’ll only care about radial–polar resonances. Resonances involving the axial frequency are not important because the spacetime is axisymmetric. The equations are exactly identical for all values of the axial angle, so it doesn’t matter where you are (or if you keep cycling over the same spot) for the evolution of the EMRI.

This, however, doesn’t mean that these resonances aren’t interesting. They can lead to small kicks to the binary, because you are preferentially emitting gravitational waves in one direction. For EMRIs these are negligibly small, but for more equal mass systems, they could have some interesting consequences, as pointed out by Maarten van de Meent.

I’m grateful to the Cambridge Philosophical Society for giving me some extra funding to work on resonances. If you’re a Cambridge PhD student, make sure to become a member so you can take advantage of the opportunities they offer.

The theory of how to evolve through a transient resonance was developed by Kevorkian and coauthors. I spent a long time studying these calculations before working up the courage to attempt them myself. There are a few technical details which need to be adapted for the case of EMRIs. I finally figured everything out while in Warsaw Airport, coming back from a conference. It was the most I had ever felt like a real physicist.

There are currently 9 papers in the GW170817 family. Further papers, for example looking at parameter estimation in detail, are in progress. Papers are listed below in order of arXiv posting. My favourite is the GW170817 Discovery Paper. Many of the highlights, especially from the Discovery and Multimessenger Astronomy Papers, are described in my **GW170817 announcement post**.

Keeping up with all the accompanying observational results is a task not even Sisyphus would envy. I’m sure that the details of these will be debated for a long time to come. I’ve included references to a few below (mostly as [citation notes]), but these are not guaranteed to be complete (I’ll continue to expand these in the future).

**Title:** GW170817: Observation of gravitational waves from a binary neutron star inspiral
**arXiv:** 1710.05832 [gr-qc]

Journal:

LIGO science summary:

This is the paper announcing the gravitational-wave detection. It gives an overview of the properties of the signal, initial estimates of the parameters of the source (see the GW170817 Properties Paper for updates) and the binary neutron star merger rate, as well as an overview of results from the other companion papers.

I was disappointed that “the era of gravitational-wave multi-messenger astronomy has opened with a bang” didn’t make the conclusion of the final draft.

**More details:** The GW170817 Discovery Paper summary

**Title:** Multi-messenger observations of a binary neutron star merger
**arXiv:** 1710.05833 [astro-ph.HE]

Journal:

LIGO science summary:

I’ve numbered this paper as −1 as it gives an overview of *all* the observations—gravitational wave, electromagnetic and neutrino—accompanying GW170817. I feel a little sorry for the neutrino observers, as they’re the only ones not to make a detection. Drawing together the gravitational wave and electromagnetic observations, we can confirm that binary neutron star mergers are the progenitors of (at least some) short gamma-ray bursts and kilonovae.

Do *not* print this paper, the author list stretches across 23 pages.

**More details:** The Multimessenger Astronomy Paper summary

**Title:** Gravitational waves and gamma-rays from a binary neutron star merger: GW170817 and GRB 170817A
**arXiv:** 1710.05834 [astro-ph.HE]

Journal:

LIGO science summary:

Here we bring together the LIGO–Virgo observations of GW170817 and the Fermi and INTEGRAL observations of GRB 170817A. From the spatial and temporal coincidence of the gravitational waves and gamma-rays, we establish that the two are associated with each other. There is a 1.7 s time delay between the merger time estimated from gravitational waves and the arrival of the gamma-rays. From this, we make some inferences about the structure of the jet which is the source of the gamma-rays. We can also use this to constrain deviations from general relativity, which is cool. Finally, we estimate that there will be 0.3–1.7 joint gamma-ray–gravitational-wave detections per year once our gravitational-wave detectors reach design sensitivity!

**More details:** The GW170817 Gamma-ray Burst Paper summary

**Title:** A gravitational-wave standard siren measurement of the Hubble constant [bonus note]
**arXiv:** 1710.05835 [astro-ph.CO]

Journal:

LIGO science summary:

The Hubble constant quantifies the current rate of expansion of the Universe. If you know how far away an object is, and how fast it is moving away (due to the expansion of the Universe, not because it’s on a bus or something, that is important), you can estimate the Hubble constant. Gravitational waves give us an estimate of the distance to the source of GW170817. The observations of the optical transient AT 2017gfo allow us to identify the galaxy NGC 4993 as the host of GW170817’s source. We know the redshift of the galaxy (which indicates how fast it’s moving). Therefore, putting the two together, we can infer the Hubble constant in a completely new way.
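At low redshift, the standard-siren measurement boils down to one line: $H_0 = cz/d$, with the redshift $z$ from the host galaxy and the distance $d$ from the gravitational-wave amplitude. The round numbers below are illustrative, not the measured values:

```python
C_KM_S = 299792.458  # speed of light (km/s)

def hubble_constant(redshift, distance_mpc):
    """Low-redshift standard siren: H0 = c * z / d, in km/s/Mpc."""
    return C_KM_S * redshift / distance_mpc

# Illustrative values of roughly the right size for a nearby source:
H0 = hubble_constant(redshift=0.01, distance_mpc=43.0)
```

The uncertainty on the gravitational-wave distance (and on the galaxy’s peculiar velocity) feeds straight into the uncertainty on $H_0$.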

**More details:** The GW170817 Hubble Constant Paper summary

**Title:** Estimating the contribution of dynamical ejecta in the kilonova associated with GW170817
**arXiv:** 1710.05836 [astro-ph.HE]

Journal:

LIGO science summary:

During the coalescence of two neutron stars, lots of neutron-rich matter gets ejected. This undergoes rapid radioactive decay, which powers a kilonova, an optical transient. The observed signal depends upon the material ejected. Here, we try to use our gravitational-wave measurements to predict the properties of the ejecta ahead of the flurry of observational papers.

**More details:** The GW170817 Kilonova Paper summary

**Title:** GW170817: Implications for the stochastic gravitational-wave background from compact binary coalescences
**arXiv:** 1710.05837 [gr-qc]

We can detect signals if they are loud enough, but there will be many quieter ones that we cannot pick out from the noise. These add together to form an overlapping background of signals, a background rumbling in our detectors. We use the inferred rate of binary neutron star mergers to estimate their background. This is smaller than the background from binary black hole mergers (black holes are more massive, so they’re intrinsically louder), but they all add up. It’ll still be a few years before we can detect a background signal.

**More details:** The GW170817 Stochastic Paper summary

**Title:** On the progenitor of binary neutron star merger GW170817
**arXiv:** 1710.05838 [astro-ph.HE]

Journal:

LIGO science summary:

We know that GW170817 came from the coalescence of two neutron stars, but where did these neutron stars come from? Here, we combine the parameters inferred from our gravitational-wave measurements, the observed position of AT 2017gfo in NGC 4993 and models for the host galaxy, to estimate properties like the kick imparted to neutron stars during the supernova explosion and how long it took the binary to merge.

**More details:** The GW170817 Progenitor Paper summary

**Title:** Search for high-energy neutrinos from binary neutron star merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory
**arXiv:** 1710.05839 [astro-ph.HE]

Journal:

This is the search for neutrinos from the source of GW170817. Lots of neutrinos are emitted during the collision, but not enough to be detectable on Earth. Indeed, we don’t find any neutrinos, but we combine results from three experiments to set upper limits.

**More details:** The GW170817 Neutrino Paper summary

**Title:** Search for post-merger gravitational waves from the remnant of the binary neutron star merger GW170817
**arXiv:** 1710.09320 [astro-ph.HE]

Journal:

LIGO science summary:

After the two neutron stars merged, what was left? A larger neutron star or a black hole? Potentially we could detect gravitational waves from a wibbling neutron star, as it sloshes around following the collision. We don’t. It would have to be a lot closer for this to be plausible. However, this paper outlines how to search for such signals; the GW170817 Properties Paper contains a more detailed look at any potential post-merger signal.

**More details:** The GW170817 Post-merger Paper summary

**Title:** Properties of the binary neutron star merger GW170817
**arXiv:** 1805.11579 [gr-qc]

In the GW170817 Discovery Paper we presented initial estimates for the properties of GW170817’s source. These were the best we could do on the tight deadline for the announcement (it was a pretty good job in my opinion). Now we have had a bit more time, we can present a new, improved analysis. This uses recalibrated data and a wider selection of waveform models. We also fold in our knowledge of the source location, thanks to the observation of AT 2017gfo by our astronomer partners, for our best results. If you want to know the details of GW170817’s source, this is the paper for you!

**More details:** The GW170817 Properties Paper summary

**Title:** GW170817: Measurements of neutron star radii and equation of state
**arXiv:** 1805.11581 [gr-qc]

Neutron stars are made of weird stuff: nuclear density material which we cannot replicate here on Earth. Neutron star matter is often described in terms of an equation of state, a relationship that explains how the material changes at different pressures or densities. A stiffer equation of state means that the material is harder to squash, and a softer equation of state is easier to squish. This means that for a given mass, a stiffer equation of state will predict a larger, fluffier neutron star, while a softer equation of state will predict a more compact, denser neutron star. In this paper, we assume that GW170817’s source is a binary neutron star system, where both neutron stars have the same equation of state, and see what we can infer about neutron star stuff.
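The squashiness of each star is usually quantified by its dimensionless tidal deformability (this is the standard definition, not specific to this paper):

```latex
\Lambda = \frac{2}{3} k_2 \left( \frac{c^2 R}{G m} \right)^{5} ,
```

where $k_2$ is the second Love number, $R$ is the star’s radius and $m$ its mass. The fifth power means a stiffer equation of state (bigger $R$) gives a dramatically larger $\Lambda$, which is what makes tidal effects such a sensitive probe of neutron star stuff.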

**More details:** The GW170817 Equation-of-state Paper summary

**Synopsis:** GW170817 Discovery Paper

**Read this if:** You want all the details of our first gravitational-wave observation of a binary neutron star coalescence

**Favourite part:** Look how well we measure the chirp mass!

GW170817 was a remarkable gravitational-wave discovery. It is the loudest signal observed to date, and the source with the lowest mass components. I’ve written about some of the highlights of the discovery in my previous **GW170817 discovery post**.

Binary neutron stars are one of the principal targets for LIGO and Virgo. The first observational evidence for the existence of gravitational waves came from observations of binary pulsars—binary neutron star systems where at least one of the components is a pulsar. Therefore (unlike binary black holes), we knew that these sources existed before we turned on our detectors. What was less certain was how often they merge. In our first advanced-detector observing run (O1), we didn’t find any, allowing us to estimate an upper limit on the merger rate of about 12,600 Gpc⁻³ yr⁻¹. Now, we know much more about merging binary neutron stars.

GW170817, as a loud and long signal, is a highly significant detection. You can see it in the data by eye. Therefore, it should have been an easy detection. As is often the case with real experiments, it wasn’t quite that simple. Data transfer from Virgo had stopped overnight, and there was a glitch (a non-stationary and non-Gaussian noise feature) in the Livingston detector, which meant that these data weren’t automatically analysed. Nevertheless, GstLAL flagged something interesting in the Hanford data, and there was a mad flurry to get the other data in place so that we could analyse the signal in all three detectors. I remember being sceptical in these first few minutes until I saw the plot of Livingston data, which blew me away: the chirp was clearly visible despite the glitch!

Using data from both of our LIGO detectors (as discussed for GW170814, our offline algorithms searching for coalescing binaries only use these two detectors during O2), GW170817 is an absolutely gold-plated detection. GstLAL estimates a false alarm rate (the rate at which you’d expect something at least this signal-like to appear in the detectors due to a random noise fluctuation) of less than one in 1,100,000 years, while PyCBC estimates the false alarm rate to be less than one in 80,000 years.

Parameter estimation (inferring the source properties) used data from all three detectors. We present a (remarkably thorough given the available time) initial analysis in this paper (updated results are given in the GW170817 Properties Paper). This signal is challenging to analyse because of the glitch and because binary neutron stars are made of stuff, which can leave an imprint on the waveform. We’ll be looking at the effects of these complications in more detail in the future. Our initial results are

- The source is localized to a region of about 28 deg² at a distance of about 40 Mpc (we typically quote results at the 90% credible level). This is the closest gravitational-wave source yet.
- The chirp mass is measured to be about 1.188 solar masses, much lower than for our binary black hole detections.
- The spins are not well constrained; the uncertainty from this means that we don’t get precise measurements of the individual component masses. We quote results with two choices of spin prior: the astrophysically motivated limit of 0.05, and the more agnostic and conservative upper bound of 0.89. I’ll stick to using the low-spin prior results by default.
- Using the low-spin prior, the component masses are 1.36–1.60 solar masses and 1.17–1.36 solar masses. We have the convention that the first component is the more massive one, which is why the masses look unequal; there’s a lot of support for them being nearly equal. These masses match what you’d expect for neutron stars.

As mentioned above, neutron stars are made of stuff, and the properties of this leave an imprint on the waveform. If neutron stars are big and fluffy, they will get tidally distorted. Raising tides sucks energy and angular momentum out of the orbit, making the inspiral quicker. If neutron stars are small and dense, tides are smaller and the inspiral looks like that for two black holes. For this initial analysis, we used waveforms which include some tidal effects, so we get some preliminary information on the tides. We cannot exclude zero tidal deformation, meaning we cannot rule out from gravitational waves alone that the source contains at least one black hole (although this would be surprising, given the masses). However, we can place a weak upper limit on the combined dimensionless tidal deformability of $\tilde{\Lambda} \leq 800$. This isn’t too informative, in terms of working out what neutron stars are made from, but we’ll come back to this in the GW170817 Properties Paper and the GW170817 Equation-of-state Paper.

Given the source masses, and all the electromagnetic observations, we’re pretty sure this is a binary neutron star system—there’s nothing to suggest otherwise.

Having observed one (and only one) binary neutron star coalescence in O1 and O2, we can now put better constraints on the merger rate. As a first estimate, we assume that component masses are uniformly distributed between and , and that spins are below 0.4 (in between the limits used for parameter estimation). Given this, we infer that the merger rate is , safely within our previous upper limit [citation note].

There’s a lot more we can learn from GW170817, especially as we don’t *just* have gravitational waves as a source of information, and this is explained in the companion papers.

**Synopsis:** Multimessenger Paper

**Read this if:** Don’t. Use it to look up which other papers to read.

**Favourite part:** The figures! It was a truly amazing observational effort to follow up GW170817.

The remarkable thing about this paper is that it exists. Bringing together such a diverse (and competitive) group was a huge effort. Alberto Vecchio was one of the editors, and each evening when leaving the office, he was convinced that the paper would have fallen apart by morning. However, it hung together—the story was too compelling. This paper explains how gravitational waves, short gamma-ray bursts, and kilonovae all come from a single source [citation note]. This is the greatest collaborative effort in the history of astronomy.

The paper outlines the discoveries and all of the initial set of observations. If you want to understand the observations themselves, this is not the paper to read. However, using it, you can track down the papers that you do want. A huge amount of care went into trying to describe how discoveries were made: for example, Fermi observed GRB 170817A independently of the gravitational-wave alert, and we found GW170817 without relying on the GRB alert; however, the communication between teams meant that we took everything much more seriously and pushed out alerts as quickly as possible. For more on the history of observations, I’d suggest scrolling through the **GCN archive**.

The paper starts with an overview of the gravitational-wave observations from the inspiral, then the prompt detection of GRB 170817A, before describing how the gravitational-wave localization enabled discovery of the optical transient AT 2017gfo. This source, in the nearby galaxy NGC 4993, was then the subject of follow-up across the electromagnetic spectrum. We have a huge amount of photometry and spectroscopy of the source, showing general agreement with models for a kilonova. X-ray and radio afterglows were observed 9 days and 16 days after the merger, respectively [citation note]. No neutrinos were found, which isn’t surprising.

**Synopsis:** GW170817 Gamma-ray Burst Paper

**Read this if:** You’re interested in the jets from which short gamma-ray bursts originate, or in tests of general relativity

**Favourite part:** How much science can come from a simple time-delay measurement

This joint LIGO–Virgo–Fermi–INTEGRAL paper combines our observations of GW170817 and GRB 170817A. The result is one of the most contentful of the companion papers.

The first item on the to-do list for joint gravitational-wave–gamma-ray science is to establish that we are really looking at the same source.

From the GW170817 Discovery Paper, we know that its source is consistent with being a binary neutron star system. Hence, there is matter around which can create the gamma-rays. The Fermi-GBM and INTEGRAL observations of GRB 170817A indicate that it falls into the short class, as hypothesised to be the result of a binary neutron star coalescence. Therefore, it looks like we could have the right ingredients.

Now, given that it is possible that the gravitational waves and gamma rays have the same source, we can calculate the probability of the two occurring by chance. The probability of temporal coincidence is , adding in spatial coincidence too, and the probability becomes . It’s safe to conclude that the two are associated: merging binary neutron stars *are* the source of at least some short gamma-ray bursts!
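To get a feel for where numbers like these come from, here is a back-of-envelope sketch of the chance-coincidence calculation. The burst rate, window length and sky-overlap fraction below are my own assumed, illustrative values, not the paper’s exact inputs:

```python
# Back-of-envelope chance-coincidence estimate (illustrative numbers only).
# Expected number of unrelated short GRBs landing in a small window around the
# merger, optionally demanding spatial overlap with the GW localization too.

YEAR_S = 3.156e7  # seconds per year

def chance_coincidence(grb_rate_per_year, window_s, sky_overlap_fraction=1.0):
    """Poisson-expected count of chance GRBs in the window, scaled by the
    fraction of bursts that would also overlap the GW sky map."""
    rate_per_s = grb_rate_per_year / YEAR_S
    return rate_per_s * window_s * sky_overlap_fraction

# Assume ~40 detected short GRBs per year and a +/-2 s coincidence window:
p_time = chance_coincidence(40, 4.0)
# Additionally require spatial overlap (say ~1% of bursts would match):
p_time_space = chance_coincidence(40, 4.0, sky_overlap_fraction=0.01)

print(f"temporal only:      {p_time:.1e}")
print(f"temporal + spatial: {p_time_space:.1e}")
```

With these assumed inputs the chance probability comes out at the level of a few per million for the temporal coincidence alone, and a hundred times smaller once spatial overlap is demanded—small enough to be confident in the association.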

There is a delay between the inferred merger time and the gamma-ray burst. Given that the signal has travelled for about 85 million years (taking the 5% lower limit on the inferred distance), this is a really small difference: gravity and light must travel at almost exactly the same speed. To derive an exact limit, you need to make some assumptions about when the gamma-rays were created. We’d expect some delay, as it takes time for the jet to be created, and then for the gamma-rays to blast their way out of the surrounding material. We conservatively (and arbitrarily) take a window for the delay of 0 to 10 seconds, which gives

.

That’s pretty small!
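You can reproduce the order of magnitude of this bound yourself. A sketch, using the ~85 million year travel time quoted above and an observed delay of about 1.7 s (roughly the published value; treat both as illustrative inputs):

```python
# Order-of-magnitude bound on the fractional speed difference between gravity
# and light. If the gamma rays were emitted t_emission after the merger and
# arrived OBSERVED_DELAY_S later after a travel time T, then roughly
#   (v_gw - v_light) / v_light ~ (OBSERVED_DELAY_S - t_emission) / T.

TRAVEL_TIME_S = 85e6 * 3.156e7  # ~85 million years of travel, in seconds
OBSERVED_DELAY_S = 1.7          # observed GW-to-GRB delay (approximate)

def speed_bound(t_emission_s):
    return (OBSERVED_DELAY_S - t_emission_s) / TRAVEL_TIME_S

upper = speed_bound(0.0)   # gamma rays emitted at the moment of merger
lower = speed_bound(10.0)  # gamma rays emitted 10 s after the merger

print(f"{lower:.0e} < dv/v < {upper:.0e}")
```

The fractional difference is bounded at the level of one part in 10^15, which is why so many modified gravity theories suddenly found themselves in trouble.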

General relativity predicts that gravity and light should travel at the same speed, so I wasn’t too surprised by this result. I was surprised, however, that this result seems to have caused a flurry of activity in effectively ruling out several modified theories of gravity. I guess there’s not much point in explaining what these are now, but they are mostly theories which add in extra fields, which allow you to tweak how gravity works so you can explain some of the effects attributed to dark energy or dark matter. I’d recommend Figure 2 of Ezquiaga & Zumalacárregui (2017) for a summary of which theories pass the test and which are in trouble; Kase & Tsujikawa (2018) give a good review.

We don’t discuss the theoretical implications of the relative speeds of gravity and light in this paper, but we do use the time delay to place bounds on particular potential deviations from general relativity.

- We look at a particular type of Lorentz invariance violation. This is similar to what we did for GW170104, where we looked at the dispersion of gravitational waves, but here it is for the case of , which we couldn’t test.
- We look at the Shapiro delay, which is the time difference travelling in a curved spacetime relative to a flat one. That light and gravity are affected the same way is a test of the weak equivalence principle—that everything falls the same way. The effects of the curvature can be quantified with the parameter , which describes the amount of curvature per unit mass. In general relativity . Considering the gravitational potential of the Milky Way, we find that [citation note].

As you’d expect given the small time delay, these bounds are pretty tight! If you’re working on a modified theory of gravity, you have some extra checks to do now.

From our gravitational-wave and gamma-ray observations, we can also make some deductions about the engine which created the burst. The complication here is that we’re not exactly sure what generates the gamma rays, so deductions are model dependent. Section 5 of the paper uses the time delay between the merger and the burst, together with how quickly the burst rises and fades, to place constraints on the size of the emitting region in different models. The paper goes through the derivation in a step-by-step way, so I’ll not summarise it here: if you’re interested, check it out.

GRB 170817A was unusually dim [citation note]. The plot above compares it to other gamma-ray bursts. It is definitely in the tail. Since it appears so dim, we think that we are not looking at a standard gamma-ray burst. The most obvious explanation is that we are not looking directly down the jet: we don’t expect to see many off-axis bursts, since they are dimmer. We expect that a gamma-ray burst would originate from a jet of material launched along the direction of the total angular momentum. From the gravitational waves alone, we can estimate that the misalignment angle between the orbital angular momentum axis and the line of sight is (adding in the identification of the host galaxy, this becomes using the Planck value for the Hubble constant and with the SH0ES value), so this is consistent with viewing the burst off-axis (updated numbers are given in the GW170817 Properties Paper). There are multiple models for such gamma-ray emission, as illustrated below. We could have a uniform top-hat jet (the simplest model) which we are viewing from slightly to the side, we could have a structured jet, which is concentrated on-axis but we are seeing from off-axis, or we could have a cocoon of material pushed out of the way by the main jet, which we are viewing emission from. Other electromagnetic observations will tell us more about the inclination and the structure of the jet [citation note].

Now that we know gamma-ray bursts can be this dim, if we observe faint bursts (with unknown distances), we have to consider the possibility that they are dim-and-close in addition to the usual bright-and-far-away.

The paper closes by considering how many more joint gravitational-wave–gamma-ray detections of binary neutron star coalescences we should expect in the future. In our next observing run, we could expect 0.1–1.4 joint detections per year, and when LIGO and Virgo get to design sensitivity, this could be 0.3–1.7 detections per year.

**Synopsis:** GW170817 Hubble Constant Paper

**Read this if:** You have an interest in cosmology

**Favourite part:** In the future, we may be able to settle the argument between the cosmic microwave background and supernova measurements

The Universe is expanding. In the nearby Universe, this can be described using the Hubble relation

,

where is the expansion velocity, is the Hubble constant and is the distance to the source. GW170817 is sufficiently nearby for this relationship to hold. We know the distance from the gravitational-wave measurement, and we can estimate the velocity from the redshift of the host galaxy. Therefore, it should be simple to combine the two to find the Hubble constant. Of course, there are a few complications…

This work is built upon the identification of the optical counterpart AT 2017gfo. This allows us to identify the galaxy NGC 4993 as the host of GW170817’s source: we calculate that there’s a probability that AT 2017gfo would be as close to NGC 4993 on the sky by chance. Without a counterpart, it would still be possible to infer the Hubble constant statistically by cross-referencing the inferred gravitational-wave source location with the ensemble of compatible galaxies in a catalogue (you assign a probability to the source being associated with each galaxy, instead of saying it’s definitely in this one). The identification of NGC 4993 makes things much simpler.

As a first ingredient, we need the distance from gravitational waves. For this, a slightly different analysis was done than in the GW170817 Discovery Paper. We fix the sky location of the source to match that of AT 2017gfo, and we use (binary black hole) waveforms which don’t include any tidal effects. The sky position needs to be fixed, because for this analysis we are assuming that we definitely know where the source is. The tidal effects were not included (but precessing spins were) because we needed results quickly: the details of spins and tides shouldn’t make much difference to the distance. From this analysis, we find the distance is if we follow our usual convention of quoting the median and symmetric 90% credible interval; however, this paper primarily quotes the most probable value and minimal (not-necessarily-symmetric) 68.3% credible interval. Following this convention, we write the distance as .
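The two quoting conventions are easy to compute from posterior samples. A sketch with a synthetic, skewed posterior standing in for the real distance samples (the distribution and its parameters are purely illustrative):

```python
# Two ways to summarise a 1D posterior from samples:
#  1. median plus a symmetric 90% credible interval;
#  2. most-probable region via the narrowest interval containing 68.3% of
#     the samples (a highest-posterior-density estimate).
import numpy as np

rng = np.random.default_rng(0)
# A skewed stand-in for a distance posterior (illustrative only):
samples = rng.gamma(shape=20.0, scale=2.0, size=100_000)

# Convention 1: median and symmetric 90% interval.
lo, med, hi = np.percentile(samples, [5, 50, 95])

# Convention 2: narrowest interval containing 68.3% of the samples.
sorted_s = np.sort(samples)
n = len(sorted_s)
k = int(0.683 * n)
widths = sorted_s[k:] - sorted_s[: n - k]
i = np.argmin(widths)
hpd_lo, hpd_hi = sorted_s[i], sorted_s[i + k]

print(f"median and symmetric 90%: {med:.1f} [{lo:.1f}, {hi:.1f}]")
print(f"narrowest 68.3% interval: [{hpd_lo:.1f}, {hpd_hi:.1f}]")
```

For a skewed posterior the two conventions give noticeably different-looking numbers, which is why it matters to say which one you are using.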

While NGC 4993 being close by makes the relationship for calculating the Hubble constant simple, it adds a complication for calculating the velocity. The motion of the galaxy is not only due to the expansion of the Universe, but also to how it is moving within the gravitational potentials of nearby groups and clusters. This is referred to as peculiar motion. Adding this in increases our uncertainty on the velocity. Combining results from the literature, our final estimate for the velocity is .

We put together the velocity and the distance in a Bayesian analysis. This is a little more complicated than simply dividing the numbers (although that gives you a similar result). You have to be careful about writing things down, otherwise you might implicitly assume a prior that you didn’t intend (my most useful contribution to this paper is probably a whiteboard conversation with Will Farr where we tracked down a difference in prior assumptions approaching the problem two different ways). This is all explained in the Methods; it’s not easy to read, but it makes sense when you work through it. The result is (quoted as maximum a posteriori value and 68% interval, or in the usual median-and-90%-interval convention). An updated set of results is given in the GW170817 Properties Paper: (68% interval using the low-spin prior). This is nicely (and diplomatically) consistent with existing results.
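A quick way to see why dividing the numbers gives a *similar* but not identical answer is to propagate the uncertainties by Monte Carlo. All the central values and uncertainties below are assumed, illustrative inputs (Gaussians of roughly the right size), and this sketch sidesteps the careful prior bookkeeping the real analysis does:

```python
# Monte Carlo propagation of uncertainties through H0 = v / d.
# Assumed, illustrative inputs; the published analysis is a proper Bayesian
# calculation with explicit priors, which shifts the answer slightly.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
v = rng.normal(3000.0, 170.0, n)  # km/s: assumed velocity and uncertainty
d = rng.normal(43.0, 7.0, n)      # Mpc: assumed distance and uncertainty

H0 = v / d
lo, med, hi = np.percentile(H0, [16, 50, 84])
print(f"H0 ~ {med:.0f} (+{hi - med:.0f}/-{med - lo:.0f}) km/s/Mpc")
```

Note that the resulting distribution is skewed (a ratio of Gaussians is not Gaussian), and implicitly this procedure assumes particular priors on velocity and distance—exactly the kind of hidden assumption you have to track down when doing it properly.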

The distance has considerable uncertainty because there is a degeneracy between the distance and the orbital inclination (the angle of the normal to the orbital plane relative to the line of sight). If you could figure out the inclination from another observation, then you could tighten constraints on the Hubble constant, or if you’re willing to adopt one of the existing values of the Hubble constant, you can pin down the inclination. Data (updated data) to help you try this yourself are available [citation note].

In the future we’ll be able to combine multiple events to produce a more precise gravitational-wave estimate of the Hubble constant. Chen, Fishbach & Holz (2017) is a recent study of how measurements should improve with more events: we should get to 4% precision after around 100 detections.

**Synopsis:** GW170817 Kilonova Paper

**Read this if:** You want to check our predictions for ejecta against observations

**Favourite part:** We might be able to create all of the heavy r-process elements—including the gold used to make Nobel Prizes—from merging neutron stars

When two neutron stars collide, lots of material gets ejected outwards. This neutron-rich material undergoes nuclear decay—now no longer being squeezed by the strong gravity inside the neutron star, it is unstable, and decays from the strange neutron star stuff to become more familiar elements (elements heavier than iron including gold and platinum). As these r-process elements are created, the nuclear reactions power a kilonova, the optical (infrared–ultraviolet) transient accompanying the merger. The properties of the kilonova depend upon how much material is ejected.

In this paper, we try to estimate how much material made up the dynamical ejecta from the GW170817 collision. Dynamical ejecta is material which escapes as the two neutron stars smash into each other (either from tidal tails or material squeezed out from the collision shock). There are other sources of ejected material, such as winds from the accretion disk which forms around the remnant (whether black hole or neutron star) *following* the collision, so this is only part of the picture; however, we can estimate the mass of the dynamical ejecta from our gravitational-wave measurements using simulations of neutron star mergers. These estimates can then be compared with electromagnetic observations of the kilonova [citation note].

The amount of dynamical ejecta depends upon the masses of the neutron stars, how rapidly they are rotating, and the properties of the neutron star material (described by the equation of state). Here, we use the masses inferred from our gravitational-wave measurements and feed these into fitting formulae calibrated against simulations for different equations of state. These don’t include spin, and they have quite large uncertainties (we include a 72% relative uncertainty when producing our results), so these are not precision estimates. Neutron star physics is a little messy.

We find that the dynamical ejecta is – (assuming the low-spin mass results). These estimates can be fed into models for kilonovae to produce lightcurves, which we do. There is plenty of this type of modelling in the literature as observers try to understand their observations, so this is nothing special in terms of understanding this event. However, it could be useful in the future (once we have hoverboards), as we might be able to use gravitational-wave data to predict how bright a kilonova will be at different times, and so help astronomers decide upon their observing strategy.

Finally, we can consider how much r-process material we can create from the dynamical ejecta. Again, we don’t consider winds, which may also contribute to the total budget of r-process elements from binary neutron stars. Our estimate for r-process elements needs several ingredients: (i) the mass of the dynamical ejecta, (ii) the fraction of the dynamical ejecta converted to r-process elements, (iii) the merger rate of binary neutron stars, and (iv) the convolution of the star formation rate and the time delay between binary formation and merger (which we take to be ). Together (i) and (ii) give the mass of r-process elements per binary neutron star (assuming that GW170817 is typical); (iii) and (iv) give the total density of mergers throughout the history of the Universe; and combining everything together you get the total mass of r-process elements accumulated over time. Using the estimated binary neutron star merger rate of , we can explain the Galactic abundance of r-process elements if more than about 10% of the dynamical ejecta is converted.
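For intuition, here is a crude back-of-envelope version of this budget for a single Milky-Way-like galaxy. Every number below is an assumed, round-figure input, and this sketch ignores the delay-time convolution the paper actually uses:

```python
# Back-of-envelope Galactic r-process budget (all inputs assumed/illustrative).
# Mergers per Milky-Way-like galaxy ~ volumetric rate / galaxy number density,
# accumulated over ~10 Gyr, times the r-process mass produced per merger.

merger_rate_gpc3_yr = 1500.0  # volumetric BNS merger rate, per Gpc^3 per yr (assumed)
galaxy_density_gpc3 = 1.0e7   # Milky-Way-equivalent galaxies per Gpc^3 (assumed)
t_yr = 1.0e10                 # rough duration of star-forming history, yr
m_dyn_msun = 0.01             # dynamical ejecta per merger, solar masses (assumed)
f_rprocess = 0.1              # fraction converted to r-process elements (assumed)

mergers_per_galaxy = merger_rate_gpc3_yr / galaxy_density_gpc3 * t_yr
m_rprocess = mergers_per_galaxy * m_dyn_msun * f_rprocess
print(f"~{mergers_per_galaxy:.0e} mergers -> ~{m_rprocess:.0f} solar masses of r-process material")
```

With these rough inputs you accumulate of order a thousand solar masses of r-process material per galaxy, which is the right ballpark for Galactic abundances—consistent with the paper’s conclusion that converting ~10% of the dynamical ejecta would do the job.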

**Synopsis:** GW170817 Stochastic Paper

**Read this if:** You’re impatient to find a background of gravitational waves

**Favourite part:** The background symphony

For every loud gravitational-wave signal, there are many more quieter ones. We can’t pick these out of the detector noise individually, but they are still there, in our data. They add together to form a stochastic background, which we might be able to detect by correlating the data across our detector network.

Following the detection of GW150914, we considered the background due to binary black holes. This is quite loud, and might be detectable in a few years. Here, we add in binary neutron stars. This doesn’t change things too much, but gives a more accurate picture.

Binary black holes have higher masses than binary neutron stars. This means that their gravitational-wave signals are louder and shorter (they chirp more quickly, and up to a lower frequency). Being louder, binary black holes dominate the overall background. Being shorter, they have a different character: binary black holes form a popcorn background of short chirps which rarely overlap, but binary neutron stars are long enough to overlap, forming a more continuous hum.
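The popcorn-versus-hum distinction follows from the leading-order chirp time. A sketch using the Newtonian inspiral-time formula, with illustrative chirp masses (one typical neutron star binary, one heavy black hole binary):

```python
# Leading-order (Newtonian) time from a given gravitational-wave frequency to
# coalescence: t = (5/256) * (pi*f)^(-8/3) * (G*Mc/c^3)^(-5/3).
# Chirp masses below are illustrative.
import math

G_MSUN_OVER_C3 = 4.925e-6  # G * (solar mass) / c^3, in seconds

def inspiral_time(chirp_mass_msun, f_gw_hz):
    """Newtonian chirp time from gravitational-wave frequency f_gw to merger."""
    m = chirp_mass_msun * G_MSUN_OVER_C3
    return (5.0 / 256.0) * (math.pi * f_gw_hz) ** (-8.0 / 3.0) * m ** (-5.0 / 3.0)

t_bns = inspiral_time(1.2, 25.0)   # typical binary neutron star
t_bbh = inspiral_time(28.0, 25.0)  # heavy binary black hole

print(f"BNS from 25 Hz: ~{t_bns:.0f} s; BBH from 25 Hz: ~{t_bbh:.1f} s")
```

From 25 Hz a binary neutron star spends of order a minute in band while a heavy binary black hole spends well under a second, so at a given merger rate the neutron star signals pile on top of one another while the black hole signals arrive as isolated pops.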

The dimensionless energy density at a gravitational-wave frequency of 25 Hz from binary black holes is , and from binary neutron stars it is . There are on average binary black hole signals in detectors at a given time, and binary neutron star signals.

To calculate the background, we need the merger rate. We now have an estimate for binary neutron stars, and we take the most recent estimate from the GW170104 Discovery Paper for binary black holes. We use the rates assuming the power law mass distribution for this, but the result isn’t too sensitive to this: we care about the number of signals in the detector, and the rates are derived from this, so they agree when working backwards. We evolve the merger rate density across cosmic history by factoring in the star formation rate and delay time between formation and merger. A similar thing was done in the GW170817 Kilonova Paper; here we used a slightly different star formation rate, but results are basically the same with either. The addition of binary neutron stars increases the stochastic background from compact binaries by about 60%.

Detection in our next observing run, at a moderate significance, is possible, but I think unlikely. It will be a few years until detection is plausible, but the addition of binary neutron stars will bring this closer. When we do detect the background, it will give us another insight into the merger rate of binaries.

**Synopsis:** GW170817 Progenitor Paper

**Read this if:** You want to know about neutron star formation and supernovae

**Favourite part:** The Spirography figures

The identification of NGC 4993 as the host galaxy of GW170817’s binary neutron star system allows us to make some deductions about how it formed. In this paper, we simulate a large number of binaries, tracing the later stages of their evolution, to see which ones end up similar to GW170817. By doing so, we learn something about the supernova explosion which formed the second of the two neutron stars.

The neutron stars started life as a pair of regular stars [bonus note]. These burn through their hydrogen fuel, and once this is exhausted, they explode as supernovae. The core of the star collapses down to become a neutron star, and the outer layers are blasted off. The more massive star evolves faster, and goes supernova first. We’ll consider the effects of the second supernova, and the kick it gives to the binary: the orbit changes both because of the rocket effect of material being blasted off, and because one of the components loses mass.

From the combination of the gravitational-wave and electromagnetic observations of GW170817, we know the masses of the neutron stars, the type of galaxy they are found in, and the position of the binary within the galaxy at the time of merger (we don’t know the exact position, just its projection as viewed from Earth, but that’s something).

We start by simulating lots of binaries just before the second supernova explodes. These are scattered at different distances from the centre of the galaxy, have different orbital separations, and have different masses for the pre-supernova star. We then add the effects of the supernova, including a kick. We fix the neutron star masses to match those we inferred from the gravitational-wave measurements. If the supernova kick is too big, the binary flies apart and will never merge (boo). If the binary remains bound, we follow its evolution as it moves through the galaxy. The structure of the galaxy is simulated as a simple spherical model, with a Hernquist profile for the stellar component and a Navarro–Frenk–White profile for the dark matter halo [citation note], which are pretty standard. The binary shrinks as gravitational waves are emitted, and eventually merges. If the merger happens at a position which matches our observations (yay), we know that the initial conditions could explain GW170817.

The plot above shows the constraints on the progenitor’s properties. The inferred second supernova kick is , similar to what has been observed for neutron stars in the Milky Way; the pre-supernova stellar mass is (we assume that the star is just a helium core, with the outer hydrogen layers having been stripped off, hence the subscript); the pre-supernova orbital separation was ; and the offset from the centre of the galaxy at the time of the supernova was . The strongest constraints come from keeping the binary bound after the supernova; results are largely independent of the delay time once this gets above [citation note].

As we collect more binary neutron star detections, we’ll be able to deduce more about how they form. If you’re interested in how to build a binary neutron star system, the introduction to this paper is well referenced; Tauris *et al*. (2017) is a detailed (pre-GW170817) review.

**Synopsis:** GW170817 Neutrino Paper

**Read this if:** You want a change from gravitational wave–electromagnetic multimessenger astronomy

**Favourite part:** There’s still something to look forward to with future detections—GW170817 hasn’t stolen all the firsts. Also this paper is *not* Abbott *et al*.

This is a joint search by ANTARES, IceCube and the Pierre Auger Observatory for neutrinos coincident with GW170817. Knowing both the location and the time of the binary neutron star merger makes it easy to search for counterparts. No matching neutrinos were detected.

Using the non-detections, we can place upper limits on the neutrino flux. These are summarised in the plots below. Optimistic models for prompt emission from an on-axis gamma-ray burst would lead to a detectable flux, but otherwise theoretical predictions indicate that a non-detection is expected. From electromagnetic observations, it doesn’t seem like we are on-axis, so the story all fits together.

Super-Kamiokande have done their own search for neutrinos, from to around (Abe *et al*. 2018). They found nothing in either the window around the event or the window following it.

The only post-detection neutrino modelling paper I’ve seen is Biehl, Heinze & Winter (2017). They model prompt emission from the same source as the gamma-ray burst and find that neutrino fluxes would be of current sensitivity.

**Synopsis:** GW170817 Post-merger Paper

**Read this if:** You are an optimist

**Favourite part:** We really do check everywhere for signals

Following the inspiral of two black holes, we know what happens next: the black holes merge to form a bigger black hole, which quickly settles down to its final stable state. We have a complete model of the gravitational waves from the inspiral–merger–ringdown life of coalescing binary black holes. Binary neutron stars are more complicated.

The inspiral of two binary neutron stars is similar to that for black holes. As they get closer together, we might see some imprint of tidal distortions not present for black holes, but the main details are the same. It is the chirp of the inspiral which we detect. As the neutron stars merge, however, we don’t have a clear picture of what goes on. Material gets shredded and ejected from the neutron stars; the neutron stars smash together; it’s all rather messy. We don’t have a good understanding of what should happen when our neutron stars merge: the details depend upon the properties of the stuff neutron stars are made of. If we could measure the gravitational-wave signal from this phase, we would learn a lot.

There are four plausible outcomes of a binary neutron star merger:

- If the total mass is below the maximum mass for a (non-rotating) neutron star (), we end up with a bigger, but still stable neutron star. Given our inferences from the inspiral (see the plot from the GW170817 Gamma-ray Burst Paper below), this is unlikely.
- If the total mass is above the limit for a stable, non-rotating neutron star, but can still be supported by uniform rotation (), we have a supramassive neutron star. The rotation will slow down due to the emission of electromagnetic and gravitational radiation, and eventually the neutron star will collapse to a black hole. The time until collapse could take something like –; it is unclear if this is long enough for supramassive neutron stars to have a mid-life crisis.
- If the total mass is above the limit for support from uniform rotation, but can still be supported through differential rotation and thermal gradients (), then we have a hypermassive neutron star. The hypermassive neutron star cools quickly through neutrino emission, and its rotation slows through magnetic braking, meaning that it promptly collapses to a black hole in .
- If the total mass is big enough (), the merging neutron stars collapse down to a black hole.
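The four outcomes above amount to comparing the total mass against a ladder of thresholds. A toy classifier, where the maximum non-rotating mass and the rotation-support factors are assumed round numbers for illustration (the true values depend on the unknown equation of state):

```python
# Toy classification of a binary neutron star merger remnant by total mass.
# All thresholds are assumed, illustrative values.
M_TOV = 2.2          # max non-rotating neutron star mass, solar masses (assumed)
SUPRA_FACTOR = 1.2   # uniform rotation supports ~20% more mass (assumed)
HYPER_FACTOR = 1.5   # differential rotation supports ~50% more mass (assumed)

def remnant(total_mass_msun):
    if total_mass_msun < M_TOV:
        return "stable neutron star"
    if total_mass_msun < SUPRA_FACTOR * M_TOV:
        return "supramassive neutron star (delayed collapse)"
    if total_mass_msun < HYPER_FACTOR * M_TOV:
        return "hypermassive neutron star (prompt collapse)"
    return "prompt collapse to a black hole"

print(remnant(2.74))  # a GW170817-like total mass
```

With these assumed thresholds a GW170817-like total mass lands in the hypermassive regime, but shifting the thresholds within their uncertainties can change the verdict—which is exactly why we can’t pin down the fate from the inspiral alone.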

In the case of the collapse to a black hole, we get a ringdown as in the case of a binary black hole merger. The frequency is around , too high for us to currently measure. However, if there is a neutron star, there may be slightly lower frequency gravitational waves from the neutron star matter wibbling about. We’re not exactly sure of the form of these signals, so we perform an unmodelled search for them (knowing the position of GW170817’s source helps for this).

Several different search algorithms were used to hunt for a post-merger signal:

- coherent WaveBurst (cWB) was used to look for short duration () bursts. This searched a window including the merger time and covering the delay to the gamma-ray burst detection, and frequencies of –. Only LIGO data were used, as Virgo data suffered from large noise fluctuations above .
- cWB was used to look for intermediate duration () bursts. This searched a window from the merger time, and frequencies –. This used LIGO and Virgo data.
- The Stochastic Transient Analysis Multi-detector Pipeline (STAMP) was also used to look for intermediate duration signals. This searched the merger time until the end of O2 (in chunks), and frequencies –. This used only LIGO data. There are two variations of STAMP: Zebragard and Lonetrack, and both are used here.

Although GEO has sensitivity comparable to LIGO and Virgo at the high frequencies searched, its data were not used as we have not yet studied its noise properties in enough detail. Since the LIGO detectors are the most sensitive, their data are the most important for the search.

No plausible candidates were found, so we set some upper limits on what could have been detected. From these, it is not surprising that nothing was found, as we would need pretty much all of the mass of the remnant to somehow be converted into gravitational waves to see something. Results are shown in the plot below. An updated analysis which puts upper limits on the post-merger signal is given in the GW170817 Properties Paper.

We can’t tell the fate of GW170817’s neutron stars from gravitational waves alone [citation note]. As high-frequency sensitivity is improved in the future, we might be able to see something from a *really* close by binary neutron star merger.

**Synopsis:** GW170817 Properties Paper

**Read this if:** You want the best results for GW170817’s source, our best measurement of the Hubble constant, or limits on the post-merger signal

**Favourite part:** Look how tiny the uncertainties are!

As time progresses, we often refine our analyses of gravitational-wave data. This can be because we’ve had time to recalibrate data from our detectors, because better analysis techniques have been developed, or just because we’ve had time to allow more computationally intensive analyses to finish. This paper is our first attempt at improving our inferences about GW170817. The results use an improved calibration of Virgo data, analyse more of the signal (down to a low frequency of 23 Hz, instead of 30 Hz, which gives us about an extra 1500 cycles), use improved models of the waveforms, and include a new analysis looking at the post-merger signal. The results update those given in the GW170817 Discovery Paper, the GW170817 Hubble Constant Paper and the GW170817 Post-merger Paper.

Our initial analysis was based upon a quick-to-calculate post-Newtonian waveform known as TaylorF2. We thought this should be a conservative choice: any results with more complicated waveforms should give tighter results. This worked out. We try several different waveform models, each based upon the point-particle waveforms we use for analysing binary black hole signals, with extra pieces to model the tidal deformation of neutron stars. The results are broadly consistent, so I’ll concentrate on discussing our preferred results, calculated using the IMRPhenomPNRT waveform (which uses IMRPhenomPv2 as a base and adds numerical-relativity-calibrated tides). As in the GW170817 Discovery Paper, we perform the analysis with two priors on the binary spins, one with spins up to 0.89 (which should safely encompass all possibilities for neutron stars), and one with spins of up to 0.05 (which matches observations of binary neutron stars in our Galaxy).

The first analysis we did was to check the location of the source. Reassuringly, we are still perfectly consistent with the location of AT 2017gfo (phew!). The localization is much improved: the 90% sky area is down to just ! Go Virgo!

Having established that it still makes sense that AT 2017gfo pin-points the source location, we use this as the position in subsequent analyses. We always use the sky position of the counterpart and the redshift of the host galaxy (Levan *et al*. 2017), but we don’t typically use the distance. This is because we want to be able to measure the Hubble constant, which relies on using the distance inferred from gravitational waves.

We use the distance from Cantiello *et al*. (2018) [citation note] for one calculation: an estimation of the inclination angle. The inclination is degenerate with the distance (both affect the amplitude of the signal), so having constraints on one lets us measure the other with improved precision. Without the distance information, we find that the angle between the binary’s total angular momentum and the line of sight is for the high-spin prior and with the low-spin prior. The difference between the two results is because the spin angular momentum slightly shifts the direction of the total angular momentum. Incorporating the distance information, for the high-spin prior the angle is (so the misalignment angle is ), and for the low-spin prior it is (misalignment ) [citation note].
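The degeneracy arises because the distance and the inclination both set the signal amplitude. Schematically (this is the standard quadrupole-order scaling, not anything specific to this paper’s analysis), the two gravitational-wave polarisations go as

```latex
h_+ \propto \frac{1}{d_L}\,\frac{1 + \cos^2\iota}{2}\,, \qquad
h_\times \propto \frac{1}{d_L}\,\cos\iota\,,
```

so a closer binary viewed at a larger inclination \(\iota\) can produce much the same amplitude as a more distant, face-on one; an independent distance measurement breaks the degeneracy.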

Main results include:

- The luminosity distance is with the low-spin prior and with the high-spin prior. The difference is for the same reason as the difference in the inclination measurements. The results are consistent with the distance to NGC 4993 [citation note].
- The chirp mass redshifted to the detector-frame is measured to be with the low-spin prior and with the high-spin. This corresponds to a physical chirp mass of .
- The spins are not well constrained. We get the best measurement along the direction of the orbital angular momentum. For the low-spin prior, this is enough to disfavour the spins being antialigned, but that’s about it. For the high-spin prior, we rule out large spins aligned or antialigned, and very large spins in the plane. The aligned components of the spin are best described by the effective inspiral spin parameter ; for the low-spin prior it is and for the high-spin prior it is .
- Using the low-spin prior, the component masses are – and –, and for the high-spin prior they are – and –.
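For reference, the quantities quoted in this list have their standard definitions: the (source-frame) chirp mass, its redshifted detector-frame counterpart, and the effective inspiral spin parameter,

```latex
\mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}\,, \qquad
\mathcal{M}^{\mathrm{det}} = (1 + z)\,\mathcal{M}\,, \qquad
\chi_{\mathrm{eff}} = \frac{m_1 \chi_{1,z} + m_2 \chi_{2,z}}{m_1 + m_2}\,,
```

where \(\chi_{i,z}\) are the components of the dimensionless spins along the orbital angular momentum.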

These are largely consistent with our previous results. There are small shifts, but the biggest change is that the errors are a little smaller.

For the Hubble constant, we find with the low-spin prior and with the high-spin prior. Here, we quote the maximum a posteriori value and narrowest 68% interval, as opposed to the usual median and symmetric 90% credible interval. You might think it’s odd that the uncertainty is smaller when using the wider high-spin prior, but this is just another consequence of the difference in the inclination measurements. The values are largely in agreement with our initial values.
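The narrowest interval (sometimes called the highest-posterior-density interval) can be found from posterior samples by sliding a fixed-size window over the sorted samples and keeping the tightest one. A minimal sketch of the idea (my own illustration, not the collaboration’s code):

```python
import numpy as np

def narrowest_interval(samples, fraction=0.68):
    """Find the narrowest interval containing `fraction` of the samples."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.ceil(fraction * n))       # samples per candidate window
    widths = x[k - 1:] - x[:n - k + 1]   # width of each window of k samples
    i = np.argmin(widths)                # index of the narrowest window
    return x[i], x[i + k - 1]

# For a symmetric distribution the narrowest interval matches the central
# one; for a skewed posterior it hugs the peak instead.
rng = np.random.default_rng(42)
lo, hi = narrowest_interval(rng.normal(70.0, 5.0, 100_000))
```

For a Gaussian this recovers roughly the usual ±1σ range; the two conventions only differ noticeably when the posterior is asymmetric, as here.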

The best measured tidal parameter is the combined dimensionless tidal deformability . With the high-spin prior, we can only set an upper bound of . With the low-spin prior, we find that we are still consistent with zero deformation, but the distribution peaks away from zero. We have using the usual median and symmetric 90% credible interval, and if we take the narrowest 90% interval. This looks like we have detected matter effects, but since we've had to use the low-spin prior, which is only appropriate for neutron stars, this would be a circular argument. More details on what we can learn about tidal deformations and what neutron stars are made of, under the assumption that we do have neutron stars, are given in the GW170817 Equation-of-state Paper.
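The combined dimensionless tidal deformability is the particular mass-weighted combination of the two stars’ deformabilities \(\Lambda_1\) and \(\Lambda_2\) that is best measured from the waveform; its standard definition is

```latex
\tilde{\Lambda} = \frac{16}{13}\,
\frac{(m_1 + 12 m_2)\,m_1^4\,\Lambda_1 + (m_2 + 12 m_1)\,m_2^4\,\Lambda_2}{(m_1 + m_2)^5}\,.
```

Point particles (and black holes) have \(\Lambda_1 = \Lambda_2 = 0\), which is why consistency with zero matters for claiming a detection of matter effects.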

Previously, in the GW170817 Post-merger Paper, we searched for a post-merger signal. We didn’t find anything. Now, we try to infer the shape of the signal, assuming it is there (with a peak within of the coalescence time). We still don’t find anything, but now we set much tighter upper limits on what signal could be there.

For this analysis, we use data from the two LIGO detectors, and from GEO 600! We don’t use Virgo data, as it is not well behaved at these high frequencies. We use BayesWave to try to constrain the signal.

While the upper limits are much better, they are still about 12–215 times larger than expectations from simulations. Therefore, we’d need to improve our detector sensitivity by about a factor of 3.5–15 to detect a similar signal. Fingers crossed!
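The two quoted ranges are consistent with the upper limits being on an energy-like quantity (proportional to amplitude squared), in which case the required improvement in amplitude sensitivity is roughly the square root of the excess. A quick check of that arithmetic (my own inference about the scaling, not a statement from the paper):

```python
import math

# Upper limits exceed simulation expectations by factors of 12-215.
excess_low, excess_high = 12, 215

# If the limits scale with amplitude squared, the needed improvement in
# detector (amplitude) sensitivity is the square root of the excess.
improvement = (math.sqrt(excess_low), math.sqrt(excess_high))
# roughly (3.5, 14.7), matching the quoted factor of ~3.5-15
```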

**Synopsis:** GW170817 Equation-of-state Paper

**Read this if:** You want to know what neutron stars are made of

**Favourite part:** The beautiful butterfly plots

Usually in our work, we like to remain open minded and not make too many assumptions. In our analysis of GW170817, as presented in the GW170817 Properties Paper, we have remained agnostic about the components of the binary, seeing what the data tell us. However, from the electromagnetic observations, there is solid evidence that the source is a binary neutron star system. In this paper, we take it for granted that the source *is* made of two neutron stars, and that these neutron stars are made of similar stuff, to see what we can learn about the properties of neutron stars.

When two neutron stars get close together, they become distorted by each other’s gravity. Tides are raised, kind of like how the Moon creates tides on Earth. Creating tides takes energy out of the orbit, causing the inspiral to proceed faster. This is something we can measure from the gravitational-wave signal. Tides are larger when the neutron stars are bigger. The size of neutron stars, and how easy they are to stretch and squash, depends upon their equation of state. We can use the measurements of the neutron star masses and the amount of tidal deformation to infer their size and their equation of state.
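The link between size and tidal deformation is usually expressed through the dimensionless tidal deformability, which grows very steeply with stellar radius. For a star of mass \(m\), radius \(R\), and tidal Love number \(k_2\),

```latex
\Lambda = \frac{2}{3}\,k_2\,\left(\frac{c^2 R}{G m}\right)^5
        = \frac{2}{3}\,\frac{k_2}{C^5}\,,
```

where \(C = Gm/(Rc^2)\) is the compactness. The fifth-power scaling is why bigger (less compact) stars raise much bigger tides, and why measuring \(\Lambda\) constrains the radius.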

The signal is analysed as in the GW170817 Properties Paper (IMRPhenomPNRT waveform, low-spin prior, position set to match AT 2017gfo). However, we also add in some information about the composition of neutron stars.

Calculating the behaviour of this incredibly dense material is difficult, but there are some relations between the tidal deformability of neutron stars and their radii which are insensitive to the details of the equation of state. One relates symmetric and antisymmetric combinations of the tidal deformations of the two neutron stars as a function of the mass ratio, and allows us to calculate consistent tidal deformations. Another relates the tidal deformation to the compactness (mass divided by radius), and allows us to convert tidal deformations to radii. The analysis includes the uncertainty in these relations.

In addition to this, we also use a parametric model of the equation of state to model the tidal deformations. By sampling directly in terms of the equation of state, it is easy to impose constraints on the allowed values. For example, we impose that the speed of sound inside the neutron star is less than the speed of light, that the equation of state can support neutron stars of that mass, that it is possible to explain the most massive confirmed neutron star (we use a lower limit for this mass of ), as well as it being thermodynamically stable. Accommodating the most massive neutron star turns out to be an important piece of information.
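Schematically, sampling directly in equation-of-state parameters turns these physical requirements into simple accept/reject conditions on each draw. A toy sketch of the idea (every function and number here is a hypothetical stand-in; the real analysis solves the stellar-structure equations for each candidate equation of state):

```python
import random

# Hypothetical stand-ins for the real calculations: a one-parameter
# "stiffness" controls both the maximum sound speed and the maximum
# supportable neutron-star mass. These are placeholders, not physics.
def max_sound_speed(params):
    return 0.4 + 0.7 * params["stiffness"]  # fraction of the speed of light

def max_ns_mass(params):
    return 1.5 + 1.2 * params["stiffness"]  # solar masses (toy model)

def sample_eos_prior():
    return {"stiffness": random.random()}

def accept(params, m_heaviest=2.0):  # placeholder mass threshold
    """Keep equations of state that are causal and can support the
    most massive confirmed neutron star."""
    return (max_sound_speed(params) <= 1.0        # causality
            and max_ns_mass(params) >= m_heaviest)

random.seed(1)
draws = [sample_eos_prior() for _ in range(10_000)]
allowed = [p for p in draws if accept(p)]
```

In the actual analysis these constraints reshape the prior over equation-of-state parameters, and the gravitational-wave likelihood then does the rest.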

The plot below shows the inferred tidal deformation parameters for the two neutron stars. The two techniques, using the equation-of-state insensitive relations and using the parametrised equation-of-state model *without* including the constraint of matching the most massive neutron star, give similar results. For a neutron star, these results indicate that the tidal deformation parameter would be . We favour softer equations of state over stiffer ones [citation note]. I think this means that neutron stars are more huggable.

We can translate our results into estimates on the size of the neutron stars. The plots below show the inferred radii. The results for the parametrised equation-of-state model now include the constraint of accommodating a neutron star, which is the main reason for the difference in the plots. Using the equation-of-state insensitive relations we find that the radius of the heavier (–) neutron star is and the radius of the lighter (–) neutron star is . With the parametrised equation-of-state model, the radii are (–) and (–).

When I was an undergraduate, I remember learning that neutron stars were about in radius. We now know that’s not the case.

If you want to investigate further, you can download the posterior samples from these analyses.

In astronomy, we often use standard candles, objects like type Ia supernovae of known luminosity, to infer distances. If you know how bright something should be, and how bright you measure it to be, you know how far away it is. By analogy, we can infer how far away a gravitational-wave source is by how loud it is. It is thus not a candle, but a siren. Sean Carroll explains more about this term on his blog.
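In its simplest form, the standard-siren idea is just Hubble’s law rearranged: the gravitational-wave signal gives the distance, the host galaxy’s recession velocity comes from electromagnetic observations, and their ratio is the Hubble constant. A minimal sketch with illustrative numbers (not the values from the actual analysis, which also marginalises over peculiar velocities and the inclination):

```python
def hubble_constant(recession_velocity_km_s, distance_mpc):
    """H0 = v / d, valid for nearby sources where v << c."""
    return recession_velocity_km_s / distance_mpc

# Illustrative numbers only: a source at ~44 Mpc receding at ~3000 km/s
h0 = hubble_constant(3000.0, 44.0)  # ~68 km/s/Mpc
```

The hard part in practice is that the gravitational-wave distance is correlated with the inclination, which is why the uncertainties on a single-event measurement are wide.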

I know… *Nature* published the original Schutz paper on measuring the Hubble constant using gravitational waves; therefore, there’s a nice symmetry in publishing the first real result doing this in *Nature* too.

Instead of a binary neutron star system forming from a binary of two stars born together, it is possible for two neutron stars to come close together in a dense stellar environment like a globular cluster. A significant fraction of binary black holes could be formed this way. Binary neutron stars, being less massive, are not as commonly formed this way. We wouldn’t expect GW170817 to have formed this way. In the GW170817 Progenitor Paper, we argue that the probability of GW170817’s source coming from a globular cluster is small—for predicted rates, see Bae, Kim & Lee (2014).

Levan *et al*. (2017) check for a stellar cluster at the site of AT 2017gfo, and find nothing. The smallest 30% of the Milky Way’s globular clusters would evade this limit, but these account for just 5% of the stellar mass in globular clusters, and a tiny fraction of dynamical interactions. Therefore, it’s unlikely that a cluster is the source of this binary.

From our gravitational-wave data, we estimate the current binary neutron star merger rate density is . Several electromagnetic observers performed their own rate estimates from the frequency of detection (or lack thereof) of electromagnetic transients.

Kasliwal *et al*. (2017) consider transients seen by the Palomar Transient Factory, and estimate a rate density of approximately (3-sigma upper limit of ), towards the bottom end of our range, but their rate increases if not all mergers are as bright as AT 2017gfo.

Siebert *et al*. (2017) work out the rate of AT 2017gfo-like transients in the Swope Supernova Survey. They obtain an upper limit of . They use this to estimate the probability that AT 2017gfo and GW170817 are just a chance coincidence and are actually unrelated. The probability is at 90% confidence.

Smartt *et al*. (2017) estimate the kilonova rate from the ATLAS survey; they calculate a 95% upper limit of , safely above our range.

Yang *et al*. (2017) calculate upper limits from the DLT40 Supernova Survey. Depending upon the reddening assumed, this is between and . Their figure 3 shows that this is well above expected rates.

Zhang *et al*. (2017) are interested in the rate of gamma-ray bursts. If you know the rate of short gamma-ray bursts and of binary neutron star mergers, you can learn something about the beaming angle of the jet. The smaller the jet, the less likely we are to observe a gamma-ray burst. In order to do this, they do their own back-of-the-envelope estimate of the gravitational-wave rate. They get . That’s not too bad, but do stick with our result.

If you’re interested in the future prospects for kilonova detection, I’d recommend Scolnic *et al*. (2017). Check out their Table 2 for detection rates (assuming a rate of ): LSST and WFIRST will see lots, about 7 and 8 per year respectively.

Using later observational constraints on the jet structure, Gupta & Bartos (2018) use the short gamma-ray burst rate to estimate a binary neutron star merger rate of . They project that around 30% of gravitational-wave detections will be accompanied by gamma-ray bursts, once LIGO and Virgo reach design sensitivity.

Della Valle *et al*. (2018) calculate an observable kilonova rate of . To match up to our binary neutron star merger rate, we either need only a fraction of binary neutron star mergers to produce kilonova or for them to only be observable for viewing angles of less than . Their table 2 contains a nice compilation of rates for short gamma-ray bursts.

Some notes on an incomplete overview of papers describing the electromagnetic discovery. A list of the first wave of papers was compiled by Maria Drout, Stefano Valenti, and Iair Arcavi as a starting point for further reading.

Independently of our gravitational-wave detection, a short gamma-ray burst GRB 170817A was observed by Fermi-GBM (Goldstein *et al*. 2017). Fermi-LAT did not see anything, as it was offline while passing through the South Atlantic Anomaly. At the time of the merger, INTEGRAL was following up the location of GW170814; fortunately, this meant it could still observe the location of GW170817, and following the alert they found GRB 170817A in their data (Savchenko *et al*. 2017).

Following up on our gravitational-wave localization, an optical transient AT 2017gfo was discovered. The discovery was made by the One-Meter Two-Hemisphere (1M2H) collaboration using the Swope telescope at the Las Campanas Observatory in Chile; they designated the transient as SSS17a (Coulter *et al*. 2017). That same evening, several other teams also found the transient within an hour of each other:

- The Distance Less Than 40 Mpc (DLT40) search found the transient using the PROMPT 0.4-m telescope at the Cerro Tololo Inter-American Observatory in Chile; they designated the transient DLT17ck (Valenti *et al*. 2017).
- The VINROUGE collaboration (I think, they don’t actually identify themselves in their own papers) found the transient using VISTA at the European Southern Observatory in Chile (Tanvir *et al*. 2017). Their paper also describes follow-up observations with the Very Large Telescope, the Hubble Space Telescope, the Nordic Optical Telescope and the Danish 1.54-m Telescope, and has one of my favourite introduction sections of the discovery papers.
- The MASTER collaboration followed up with their network of global telescopes, and it was their telescope at the San Juan National University Observatory in Argentina which found the transient (Lipunov *et al*. 2017); they, rather catchily, denote the transient as OTJ130948.10-232253.3.
- The Dark Energy Survey and the Dark Energy Camera GW–EM (DES and DECam) Collaboration found the transient with the DECam on the Blanco 4-m telescope, which is also at the Cerro Tololo Inter-American Observatory in Chile (Soares-Santos *et al*. 2017).
- The Las Cumbres Observatory Collaboration used their global network of telescopes, with, unsurprisingly, their 1-m telescope at the Cerro Tololo Inter-American Observatory in Chile first imaging the transient (Arcavi *et al*. 2017). Their observing strategy is described in a companion paper (Arcavi *et al*. 2017), which also describes follow-up of GW170814.

From these, you can see that South America was the place to be for this event: it was night at just the right time.

There was a huge amount of follow-up across the infrared–optical–ultraviolet range of AT 2017gfo. Villar *et al*. (2017) attempt to bring these together in a consistent way. Their Figure 1 is beautiful.

Hinderer *et al*. (2018) use numerical relativity simulations to compare theory and observations for gravitational-wave constraints on the tidal deformation and the kilonova lightcurve. They find that the observations could be consistent with a neutron star–black hole binary as well as a binary neutron star. I think it’s unlikely that there would be a black hole this low mass, but it’s interesting that there are some simulations which can fit the observations.

AT 2017gfo was also the target of observations across the electromagnetic spectrum. An X-ray afterglow was observed 9 days post merger, and 16 days post merger, just as we thought the excitement was over, a radio afterglow was found:

- The X-rays were first observed by the Chandra X-ray Observatory, 9 days post merger (Troja *et al*. 2017). This paper also describes optical follow-up with the Hubble Space Telescope, the Gemini Multi-Object Spectrograph, the Korea Microlensing Telescope Network, and a radio non-detection with the Australia Telescope Compact Array. Margutti *et al*. (2017) observed with Chandra 2.3 days post merger (when they found nothing) and 15 days post merger (when they found something). Haggard *et al*. (2017) describe deep Chandra observations 15 and 16 days post merger.
- The GROWTH Collaboration found radio emission initially 16 days post merger with the Very Large Array (Hallinan *et al*. 2017): there’s a marginal signal after 10 days, but there’s no definitely identifiable source at that time. They also observed with the Australia Telescope Compact Array (which saw the afterglow when observing 19 days post merger), the Giant Metrewave Radio Telescope, the VLA Low Band Ionosphere and Transient Experiment and the Green Bank Telescope (which didn’t make detections). Alexander *et al*. (2017) first detect radio emission when observing 19 and 39 days post merger with the Very Large Array. They do not detect anything with the Atacama Large Millimeter/submillimeter Array.

The afterglow will continue to brighten for a while, so we can expect a series of updates:

- Pooley, Kumar & Wheeler (2017) observed with Chandra 108 and 111 days post merger. Ruan *et al*. (2017) observed with Chandra 109 days post merger. The large gap in the X-ray observations since the initial observations is because the Sun got in the way.
- Mooley *et al*. (2017) update the GROWTH radio results up to 107 days post merger (the largest span whilst still pre-empting new X-ray observations), observing with the Very Large Array, Australia Telescope Compact Array and Giant Metrewave Radio Telescope.

Excitingly, the afterglow has also now been spotted in the optical:

- Lyman *et al*. (2018) observed with Hubble 110 (rest-frame) days post merger (which is when the Sun was out of the way for Hubble). At this point the kilonova should have faded away, but they found something, and it is quite blue. The conclusion is that it’s the afterglow, and it will peak in about a year.
- Margutti *et al*. (2018) bring together Chandra X-ray observations, Very Large Array radio observations and Hubble optical observations. The Hubble observations are 137 days post merger, and the Chandra observations are 153 and 163 days post merger. They find that they all agree (including the tentative radio signal at 10 days post merger). They argue that the emission disfavours on-axis jets and spherical fireballs.

The afterglow is now starting to fade.

- D’Avanzo *et al*. (2018) observed in X-ray 135 days post merger with XMM-Newton. They find that the flux has faded compared to the previous trend. They suggest that we’re just at the turn-over, so this is consistent with the most recent Hubble observations.
- Resmi *et al*. (2018) observed at low radio frequencies with the Giant Metrewave Radio Telescope. They saw the signal at 67 days post merger, but it evolves little over the duration of their observations (to day 152 post merger), also suggesting a turn-over.
- Dobie *et al*. (2018) observed in radio 125–200 days post merger with the Very Large Array and Australia Telescope Compact Array, and they find that the afterglow is starting to fade, with a peak at 149 ± 2 days post merger.
- Nynka *et al*. (2018) made X-ray observations 260 days post merger. They conclude that the afterglow is definitely fading, and that this is not because of the passing of the synchrotron cooling frequency.
- Troja *et al*. (2018) observed in radio and X-ray to 359 days post merger. The fading is now obvious, and starting to reveal something about the jet structure. Their best fits seem to favour a structured relativistic jet or a wide-angled cocoon.

The story isn’t over yet!

Using the time delay between GW170817 and GRB 170817A, a few other teams also did their own estimation of the Shapiro delay before they knew what was in our GW170817 Gamma-ray Burst Paper.

- Wang *et al*. (2017) consider the Milky Way potential and large-scale structure to estimate .
- Boran *et al*. (2017) consider all the galaxies in the GLADE catalogue which are within a radius of of the line of sight, and derive .
- Wei *et al*. (2017) estimate using the Milky Way’s potential and using the Virgo cluster’s potential.

Our estimate of is the most conservative.

Are the electromagnetic counterparts to GW170817 similar to what has been observed before? Yue *et al*. (2017) compare GRB 170817A with other gamma-ray bursts. It is low luminosity, but it may not be alone. There could be other bursts like it (perhaps GRB 070923, GRB 080121 and GRB 090417A), if indeed they are from nearby sources. They suggest that GRB 130603B may be the on-axis equivalent of GRB 170817A [citation note]; however, the non-detection of kilonovae for several bursts indicates that there needs to be some variation in their properties too. This agrees with the results of Gompertz *et al*. (2017), who compare the GW170817 observations with other kilonovae: it is fainter than the other candidate kilonovae (GRB 050709, GRB 060614, GRB 130603B and tentatively GRB 160821B), but brighter than upper limits from other bursts. There must be a diversity in kilonova observations. Fong *et al*. (2017) look at the diversity of afterglows (across X-ray to radio), and again find GW170817’s counterpart to be faint. This is probably because we are off-axis. Future observations will help unravel how much variation there is from viewing different angles, and how much intrinsic variation there is from the source—perhaps some short gamma-ray bursts come from neutron star–black hole binaries?

Pretty much every observational paper has a go at estimating the properties of the ejecta, the viewing angle or something about the structure of the jet. I may try to pull these together later, but I’ve not had time yet as it is a very long list! Most of the inclination measurements assumed a uniform top-hat jet, which we now know is not a good model.

In my non-expert opinion, the later results seem more interesting. With very-long baseline interferometry radio observations to 230 days post-merger, Mooley *et al*. (2018) claim that while the early radio emission was powered by the wide cocoon of a structured jet, the later emission is dominated by a narrow, energetic jet. There was a successful jet, so we would have seen something like a regular short gamma-ray burst on axis. They estimate that the jet opening angle is , and that we are viewing it at an angle of . With X-ray and radio observations to 359 days, Troja *et al*. (2018) estimate (folding in gravitational-wave constraints too) that the viewing angle is , and the width of a Gaussian structured jet would be .

Guidorzi *et al*. (2017) try to tighten the measurement of the Hubble constant by using radio and X-ray observations. Their modelling assumes a uniform jet, which doesn’t look like a currently favoured option [citation note], so there is some model-based uncertainty to be included here. Additionally, the jet is unlikely to be perfectly aligned with the orbital angular momentum, which may add a couple of degrees more uncertainty.

Mandel (2018) works the other way and uses the recent Dark Energy Survey Hubble constant estimate to bound the misalignment angle to less than , which (unsurprisingly) agrees pretty well with the result we obtained using the Planck value. Finstad *et al*. (2018) use the luminosity distance from Cantiello *et al*. (2018) [citation note] as a (Gaussian) prior for an analysis of the gravitational-wave signal, and get a misalignment of (where the errors are statistical uncertainty and an estimate of systematic error from the calibration of the strain).

Hotokezaka *et al*. (2018) use the inclination results from Mooley *et al*. (2018) [citation note] (together with the updated posterior samples from the GW170817 Properties Paper) to infer a value of (quoting the median and 68% symmetric credible interval). Using different jet models changes their value for the Hubble constant a little; the choice of spin prior does not (since we get basically all of the inclination information from their radio observations). The result is still consistent with Planck and SH0ES, but is closer to the Planck value.

In the GW170817 Progenitor Paper we used component properties for NGC 4993 from Lim *et al*. (2017): a stellar mass of and a dark matter halo mass of , where we use the Planck value of (but conclusions are similar using the SH0ES value for this).

Blanchard *et al*. (2017) estimate a stellar mass of about . They also look at the star formation history: 90% of the stars were formed by ago, and the median mass-weighted stellar age is . From this they infer a merger delay time of –. Assuming that the system was born close to its current location, they estimate that the supernova kick was , towards the lower end of our estimate. They use .

Im *et al*. (2017) find a mean stellar mass of – and a mean stellar age greater than about . They also give a luminosity distance estimate of , which overlaps with our gravitational-wave estimate. I’m not sure what value of they are using.

Levan *et al*. (2017) suggest a stellar mass of around . They find that 60% of stars by mass are older than and that less than 1% are less than old. Their Figure 5 has some information on likely supernova kicks; they conclude it was probably small, but don’t quantify this. They use .

Pan *et al*. (2017) find . They calculate a mass-weighted mean stellar age of and a likely minimum age for GW170817’s source system of . They use .

Troja *et al*. (2017) find a stellar mass of , and suggest an old stellar population of age .

Ebrová & Bílek (2018) assume a distance of and find a halo mass of . They suggest that NGC 4993 swallowed a smaller late-type galaxy somewhere between and ago, most probably around ago.

The consensus seems to be that the stellar population is old (and not much else). Fortunately, the conclusions of the GW170817 Progenitor Paper are pretty robust for delay times longer than as seems likely.

A couple of other papers look at the distance of the galaxy:

- Hjorth *et al*. (2017) combine a redshift measurement from MUSE, and a fundamental plane estimate based upon Hubble observations, to obtain a distance of .
- Cantiello *et al*. (2018) use Hubble observations to estimate the distance using surface brightness fluctuations. They obtain a distance of . This implies a value for the Hubble constant of .

The values are consistent with our gravitational-wave estimates.

We cannot be certain what happened to the merger remnant from gravitational-wave observations alone. However, electromagnetic observations do give some hints here.

Evans *et al*. (2017) argue that their non-detection of X-rays when observing with Swift and NuSTAR indicates that there is no neutron star remnant at this point, meaning the remnant must have collapsed to form a black hole by 0.6 days post merger. This isn’t too restricting in terms of the different ways the remnant could collapse, but it does exclude a stable neutron star remnant. MAXI also didn’t detect any X-rays 4.6 hours after the merger (Sugita *et al*. 2018).

Pooley, Kumar & Wheeler (2017) consider X-ray observations of the afterglow. They calculate that if the remnant was a hypermassive neutron star with a large magnetic field, the early (10 days post merger) luminosity would be much higher (and we could expect to see magnetar outbursts). Therefore, they think it is more likely that the remnant is a black hole. However, Piro *et al*. (2018) suggest that if the spin-down of the neutron star remnant is dominated by losses due to gravitational-wave emission, rather than electromagnetic emission, then the scenario is still viable. They argue that a tentatively identified X-ray flare, seen 155 days post merger, could be evidence of dissipation of the neutron star’s toroidal magnetic field.

Kasen *et al*. (2017) use the observed red component of the kilonova to argue that the remnant must have collapsed to a black hole in . A neutron star would irradiate the ejecta with neutrinos, lowering the neutron fraction and making the ejecta bluer. Since it is red, the neutrino flux must have been shut off, and the neutron star must have collapsed. We are in case b in their figure below.

Ai *et al*. (2018) find that there are some corners of parameter space for certain equations of state where a long-lived neutron star is possible, even given the observations. Therefore, we should remain open minded.

Margalit & Metzger (2017) and Bauswein *et al*. (2017) note that the relatively large amount of ejecta inferred from observations [citation note] is easier to explain when there is a delayed collapse (on timescales of ). This is difficult to resolve unless neutron star radii are small (). Metzger, Thompson & Quataert (2018) derive how this tension could be resolved if the remnant was a rapidly spinning magnetar with a lifetime of –. Matsumoto *et al*. (2018) suggest that the optical emission is powered by the jet and material accreting onto the central object, rather than r-process decay, and this permits much smaller amounts of ejecta, which could also solve the issue. Yu & Dai (2017) suggest that accretion onto a long-lived neutron star could power the emission, and would only require a single opacity for the ejecta. Li *et al*. (2018) put forward a similar theory, arguing that both the high ejecta mass and low opacity are problems for the standard r-process explanation, but fallback onto a neutron star could work. However, Margutti *et al*. (2018) say that X-ray emission powered by a central engine is disfavoured at all times.

In conclusion, it seems probable that we ended up with a black hole, and that we had an unstable neutron star for a short time after merger, but I don’t think it’s yet settled how long this was around.

Several papers have explored what we can deduce about the nature of neutron star stuff from gravitational-wave or electromagnetic observations of the neutron star coalescence. It is quite a tricky problem. Below are some investigations into the radii of neutron stars and their tidal deformations; these seem compatible with the radii inferred in the GW170817 Equation-of-state Paper.

Bauswein *et al*. (2017) argue that the amount of ejecta inferred from the kilonova is too large for there to have been a prompt collapse to a black hole [citation note]. Using this, they estimate that a non-rotating neutron star of mass has a radius of at least . They also estimate that the radius for the maximum-mass non-rotating neutron star must be greater than .

Annala *et al*. (2018) combine our initial measurement of the tidal deformation with the requirement that the equation of state supports a neutron star (which they argue requires that the tidal deformation of a neutron star is at least ). They argue that the latter condition implies that the radius of a neutron star is at least , and the former that it is less than .

Radice *et al*. (2018) combine observations of the kilonova (the amount of ejecta inferred) with gravitational-wave measurements of the masses to place constraints on the tidal deformation. From their simulations, they argue that to explain the ejecta, the combined dimensionless tidal deformability must be . This is consistent with results in the GW170817 Properties Paper, but would eliminate the main peak of the distribution we inferred from gravitational waves alone.

Lim & Holt (2018) perform some equation-of-state calculations. They find that their particular method (chiral effective theory) is already in good agreement with estimates of the maximum neutron star mass and tidal deformations. Which is nice. Using their models, they predict that for GW170817’s chirp mass .

Raithel, Özel & Psaltis (2018) argue that for a given chirp mass, is only a weak function of component masses, and depends mostly on the radii. Therefore, from our initial inferred value, they put a 90% upper limit on the radii of .

Most *et al*. (2018) consider a wide range of parametrised equations of state. They consider both hadronic (made up of particles like neutrons and protons) equations of state, and ones where they undergo phase transitions (with hadrons breaking into quarks), which could potentially mean that the two neutron stars have quite different properties. A number of different constraints are imposed, to give a selection of potential radius ranges. Combining the requirement that neutron stars can be up to (Antoniadis *et al*. 2013), the maximum neutron star mass of inferred by Margalit & Metzger (2017), our initial gravitational-wave upper limit on the tidal deformation and the lower limit from Radice *et al*. (2018), they estimate that the radius of a neutron star is – for the hadronic equation of state. For the equation of state with the phase transition, they do the same, but without the tidal deformation from Radice *et al*. (2018), and find the radius of a neutron star is –.

Paschalidis *et al*. (2018) consider in more detail the idea equations of state with hadron–quark phase transitions, and the possibility that one of the components of GW170817’s source was a hadron–quark hybrid star. They find that the initial tidal measurements are consistent with this.

Burgio *et al*. (2018) further explore the possibility that the two binary components have different properties. They consider both a hadron–quark phase transition, and the case that one star is hadronic and the other is a quark star (made up of deconfined quarks, rather than ones packaged up inside hadrons). X-ray observations indicate that neutron stars have radii in the range –, whereas most of the radii inferred for GW170817’s components are larger. This paper argues that this can be resolved if one of the components of GW170817’s source was a hadron–quark hybrid star or a quark star.

De *et al*. (2018) perform their own analysis of the gravitational signal, with a variety of different priors on the component masses. They assume that the two neutron stars have the same radii. In the GW170817 Equation-of-state Paper we find that the difference can be up to about , which I think makes this an OKish approximation; Zhao & Lattimer (2018) look at this in more detail. Within their approximation, they estimate the neutron stars to have a common radius of –.

Malik *et al*. (2018) use the initial gravitational-wave upper bound on tidal deformation and the lower bound from Radice *et al*. (2018) in combination with several equations of state (calculated using relativistic mean field and Skyrme Hartree–Fock recipes, which sound delicious). For a neutron star, they obtain a tidal deformation in the range – and the radius in the range –.

Our family of binary black holes is now growing large. During our first observing run (O1) we found three: GW150914, LVT151012 and GW151226. The advanced detector observing run (O2) ran from 30 November 2016 to 25 August 2017 (with a couple of short breaks). From our O1 detections, we were expecting roughly one binary black hole per month. The first came in January, GW170104, and we have announced GW170814, the first detection involving Virgo, from August, so you might be wondering what happened in between? Pretty much everything was dropped following the detection of our first binary neutron star system, GW170817, as a sizeable fraction of the astronomical community managed to observe its electromagnetic counterparts. Now, we are starting to dig our way out of the O2 back-log.

On 8 June 2017, a chirp was found in data from LIGO Livingston. At the time, LIGO Hanford was undergoing planned engineering work [bonus explanation]. We would not normally analyse this data, as the detector is disturbed; however, we had to follow up on the potential signal in Livingston. Only low frequency data in Hanford should have been affected, so we limited our analysis to above 30 Hz (this sounds easier than it is—I was glad I was not on rota to analyse this event [bonus note]). A coincident signal was found. Hello GW170608, the June event!

Analysing data from both Hanford and Livingston (limiting Hanford to above 30 Hz) [bonus note], GW170608 was found by both of our offline searches for binary signals. PyCBC detected it with a false alarm rate of less than 1 in 3000 years, and GstLAL estimated a false alarm rate of 1 in 160000 years. The signal was also picked up by coherent WaveBurst, which doesn’t use waveform templates, and so is more flexible in what it can detect at the cost of sensitivity: this analysis estimates a false alarm rate of about 1 in 30 years. GW170608 probably isn’t a bit of random noise.

GW170608 comes from a low mass binary. Well, relatively low mass for a binary black hole. For low mass systems, we can measure the chirp mass, the particular combination of the two black hole masses which governs the inspiral, well. For GW170608, the chirp mass is . This is the smallest chirp mass we’ve ever measured; the next smallest is GW151226 with . GW170608 is probably the lowest mass binary we’ve found—the total mass and individual component masses aren’t as well measured as the chirp mass, so there is a small probability (~11%) that GW151226 is actually lower mass. The plot below compares the two.
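
The chirp mass has a standard closed form in terms of the component masses. A minimal sketch of the formula (the input masses below are round illustrative values, not the measured ones):

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 * m2)**(3/5) / (m1 + m2)**(1/5),
    in the same units as the input masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Illustrative values in solar masses, not the measured ones:
print(round(chirp_mass(10.0, 12.0), 2))  # → 9.53
```

Because the chirp mass depends only weakly on the mass ratio, many different component-mass pairs give almost the same value, which is part of why the individual masses are less well measured.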

One caveat with regards to the masses is that the current results only consider spin magnitudes up to 0.89, as opposed to the usual 0.99. There is a correlation between the mass ratio and the spins: you can have a more unequal mass binary with larger spins. There’s not a lot of support for large spins, so it shouldn’t make too much difference.

Speaking of spins, GW170608 seems to prefer small spins aligned with the angular momentum; spins are difficult to measure, so there’s a lot of uncertainty here. The best measured combination is the effective inspiral spin parameter . This is a combination of the spins aligned with the orbital angular momentum. For GW170608 it is , so consistent with zero and leaning towards being small and positive. For GW151226 it was , and we could exclude zero spin (at least one of the black holes must have some spin). The plot below shows the probability distribution for the two component spins (you can see the cut at a maximum magnitude of 0.89). We prefer small spins, and generally prefer spins in the upper half of the plots, but we can’t make any definite statements other than both spins aren’t large and antialigned with the orbital angular momentum.
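
The effective inspiral spin parameter is the mass-weighted average of the spin components along the orbital angular momentum. A minimal sketch of the standard definition (all input values below are illustrative):

```python
def chi_eff(m1, chi1z, m2, chi2z):
    """Effective inspiral spin: mass-weighted average of the spin
    components aligned with the orbital angular momentum."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

# Equal masses with equal and opposite aligned spins cancel exactly:
print(chi_eff(10.0, 0.3, 10.0, -0.3))  # → 0.0
```

This cancellation is one reason a value consistent with zero is ambiguous: it could mean small spins, or larger spins that partially cancel.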

The properties of GW170608’s source are consistent with those inferred from observations of low-mass X-ray binaries (here the low-mass refers to the companion star, not the black hole). These are systems where mass overflows from a star onto a black hole, swirling around in an accretion disc before plunging in. We measure the X-rays emitted from the hot gas from the disc, and these measurements can be used to estimate the mass and spin of the black hole. The similarity suggests that all these black holes—observed with X-rays or with gravitational waves—may be part of the same family.

We’ll present updated merger rates and results for testing general relativity in our end-of-O2 paper. The low mass of GW170608’s source will make it a useful addition to our catalogue here. Small doesn’t mean unimportant.

**Title:** GW170608: Observation of a 19 solar-mass binary black hole coalescence

**Journal:** *Astrophysical Journal Letters*; **851**(2):L35(11); 2017

**arXiv:** 1711.05578 [gr-qc] [bonus note]

**Science summary:** GW170608: LIGO’s lightest black hole binary?

**Data release:** LIGO Open Science Center

A lot of time and effort goes into monitoring, maintaining and tweaking the detectors so that they achieve the best possible performance. The majority of work on the detectors happens during engineering breaks between observing runs, as we progress towards design sensitivity. However, some work is also needed during observing runs, to keep the detectors healthy.

On 8 June, Hanford was undergoing angle-to-length (A2L) decoupling, a regular maintenance procedure which minimises the coupling between the angular position of the test-mass mirrors and the measurement of strain. Our gravitational-wave detectors carefully measure the time taken for laser light to bounce between the test-mass mirrors in their arms. If one of these mirrors gets slightly tilted, then the laser could bounce off part of the mirror which is slightly closer or further away than usual: we measure a change in travel time even though the length of the arm is the same. To avoid this, the detectors have control systems designed to minimise angular disturbances. Every so often, it is necessary to check that these are calibrated properly. To do this, the mirrors are given little pushes to rotate them in various directions, and we measure the output to see the impact.

The angular pushes are done at specific frequencies, so we can tease apart the different effects of rotations in different directions. The frequencies are in the range 19–23 Hz. 30 Hz is a safe cut-off for effects of the procedure (we see no disturbances above this frequency).

While we normally wouldn’t analyse data from during maintenance, we think it is safe to do so, after discarding the low-frequency data. If you are worried about the impact of including additional data in our rate estimates (there may be a bias in only using times when you know there are signals), you can be reassured that it’s only a small percentage of the total time, and so should introduce an error less significant than the uncertainty from the calibration accuracy of the detectors.

Unusually for an O2 event, Aaron Zimmerman was not on shift for the Parameter Estimation rota at the time of GW170608. Instead, it was Patricia Schmidt and Eve Chase who led this analysis. Due to the engineering work in Hanford, and the low mass of the system (which means a long inspiral signal), this was one of the trickiest signals to analyse: I’d say only GW170817 was more challenging (if you ignore all the extra work we did for GW150914 as it was the first time).

If you are wondering about the status of Virgo: on 8 June it was still in commissioning ahead of officially joining the run on 1 August. We have data at the time of the event, but the sensitivity of the detector was not great. We often quantify detector sensitivity by quoting the binary neutron star range (the average distance a binary neutron star could be detected). Around the time of the event, this was something like 7–8 Mpc for Virgo. During O2, the LIGO detectors have been typically in the 60–100 Mpc region; when Virgo joined O2, it had a range of around 25–30 Mpc. Unsurprisingly, Virgo didn’t detect the signal. We could have folded the data in for parameter estimation, but it was decided that it was probably not well enough understood at the time to be worthwhile.

The GW170608 Paper is the first discovery paper to be made public before journal acceptance (although the GW170814 Paper was close, and we would have probably gone ahead with the announcement anyway). I have mixed feelings about this. On one hand, I like that the Collaboration is seen to take their detections seriously and follow the etiquette of peer review. On the other hand, I think it is good that we can get some feedback from the broader community on papers before they’re finalised. I think it is good that the first few were peer reviewed, it gives us credibility, and it’s OK to relax now. Binary black holes are becoming routine.

This is also the first discovery paper not to go to *Physical Review Letters*. I don’t think there’s any deep meaning to this, the Collaboration just wanted some variety. Perhaps GW170817 sold everyone that we were astrophysicists now? Perhaps people thought that we’ve abused *Physical Review Letters*‘ page limits too many times, and we really do need that appendix. I was still in favour of *Physical Review Letters* for *this* paper, if they would have had us, but I approve of sharing the love. There’ll be plenty more events.

In this post, I’ll go through some of the story of GW170817. As for GW150914, I’ll write another post on the more **technical details of our papers**, once I’ve had time to catch up on sleep.

The second observing run (O2) of the advanced gravitational-wave detectors started on 30 November 2016. The first detection came in January—GW170104. I was heavily involved in the analysis and paper writing for this. We finally finished up in June, at which point I was thoroughly exhausted. I took some time off in July [bonus note], and was back at work for August. With just one month left in the observing run, it would all be downhill from here, right?

August turned out to be the lava-filled, super-difficult final level of O2. As we have now announced, on August 14, we detected a binary black hole coalescence—GW170814. This was the first clear detection including Virgo, giving us superb sky localization. This is fantastic for astronomers searching for electromagnetic counterparts to our gravitational-wave signals. There was a flurry of excitement, and we thought that this was a fantastic conclusion to O2. We were wrong, this was just the save point before the final opponent. On August 17, we met the final, fire-ball throwing boss.

At 1:58 pm BST my phone buzzed with a text message, an automated alert of a gravitational-wave trigger. I was obviously excited—I recall that my exact thoughts were “What fresh hell is this?” I checked our online event database and saw that it was a single-detector trigger: it was only seen by our Hanford instrument. I started to relax; this was probably going to turn out to be a glitch. The template masses were low, in the neutron star range, not like the black holes we’ve been finding. Then I saw the false alarm rate was better than one in 9000 years. Perhaps it wasn’t just some noise after all—even though it’s difficult to estimate false alarm rates accurately online, especially for single-detector triggers, this was significant! I kept reading. Scrolling down the page there was an external coincident trigger, a gamma-ray burst (GRB 170817A) within a couple of seconds…

Short gamma-ray bursts are some of the most powerful explosions in the Universe. I’ve always found it mildly disturbing that we didn’t know what causes them. The leading theory has been that they are the result of two neutron stars smashing together. Here seemed to be the proof.

The rapid response call was under way by the time I joined. There was a clear chirp in Hanford; you could see it by eye! We had data from Livingston and Virgo too. It was bad luck that they weren’t folded into the online alert. There had been a drop out in the data transfer from Italy to the US, breaking the flow for Virgo. In Livingston, there was a glitch at the time of the signal which meant the data wasn’t automatically included in the search. My heart sank. Glitches are common—check out Gravity Spy for some examples—so it was only a matter of time until one overlapped with a signal [bonus note], and with GW170817 being such a long signal, it wasn’t that surprising. However, this would complicate the analysis. Fortunately, the glitch is short and the signal is long (if this had been a high-mass binary black hole, things might not have been so smooth). We *were* able to exorcise the glitch. A preliminary sky map using all three detectors was sent out at 12:54 am BST. Not only did we defeat the final boss, we did a speed run on the hard difficulty setting first time [bonus note].

The three-detector sky map provided a great localization for the source—this preliminary map had a 90% area of ~30 square degrees. It was just in time for that night’s observations. The plot below shows our gravitational-wave localizations in green—the long band is without Virgo, and the smaller is with all three detectors—as with GW170814, Virgo makes a big difference. The blue areas are the localizations from Fermi and INTEGRAL, the gamma-ray observatories which measured the gamma-ray burst. The inset is something new…

That night, the discoveries continued. Following up on our sky location, an optical counterpart (AT 2017gfo) was found. The source is just on the outskirts of galaxy NGC 4993, which is right in the middle of the distance range we inferred from the gravitational wave signal. At around 40 Mpc, this is the closest gravitational wave source.

After this source was reported, I think about every single telescope possible was pointed at it. It may well be the most studied transient in the history of astronomy: there are *~250 circulars* about follow-up. Not only did we find an optical counterpart, but there was emission in X-ray and radio. There was a delay in these appearing; I remember there being excitement at our Collaboration meeting as the X-ray emission was reported (there was a lack of cake though).

The figure below tries to summarise all the observations. As you can see, it’s a mess because there is too much going on!

The observations paint a compelling story. Two neutron stars inspiralled together and merged. Colliding two balls of nuclear density material at around a third of the speed of light causes a big explosion. We get a jet blasted outwards and a gamma-ray burst. The ejected, neutron-rich material decays to heavy elements, and we see this hot material as a kilonova [bonus material]. The X-ray and radio may then be the afterglow formed by the bubble of ejected material pushing into the surrounding interstellar material.

What have we learnt from our results? Here are some gravitational wave highlights.

We measure several thousand cycles from the inspiral. It is the most beautiful chirp! This is the loudest gravitational wave signal yet found, beating even GW150914. GW170817 has a signal-to-noise ratio of 32, while for GW150914 it is just 24.

The signal-to-noise ratios in Hanford, Livingston and Virgo were 19, 26 and 2 respectively. The signal is quiet in Virgo, which is why you can’t spot it by eye in the plots above. The lack of a clear signal is really useful information, as it restricts where on the sky the source could be, as beautifully illustrated in the video below.
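
The network signal-to-noise ratio quoted above is (to a good approximation) the quadrature sum of the individual detector values, which is why Virgo’s quiet 2 barely changes the total:

```python
import math

# Individual detector signal-to-noise ratios quoted above
snrs = [19, 26, 2]

# Network value: quadrature sum over detectors
snr_network = math.sqrt(sum(s ** 2 for s in snrs))
print(round(snr_network))  # → 32
```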

While we measure the inspiral nicely, we don’t detect the merger: we can’t tell if a hypermassive neutron star is formed or if there is immediate collapse to a black hole. This isn’t too surprising at current sensitivity, the system would basically need to convert all of its energy into gravitational waves for us to see it.

From measuring all those gravitational wave cycles, we can measure the chirp mass stupidly well. Unfortunately, converting the chirp mass into the component masses is not easy. The ratio of the two masses is degenerate with the spins of the neutron stars, and we don’t measure these well. In the plot below, you can see the probability distributions for the two masses trace out bananas of roughly constant chirp mass. How far along the banana you go depends on what spins you allow. We show results for two ranges: one with spins (aligned with the orbital angular momentum) up to 0.89, the other with spins up to 0.05. There’s nothing physical about 0.89 (it was just convenient for our analysis), but it is designed to be agnostic, and above the limit you’d plausibly expect for neutron stars (they should rip themselves apart at spins of ~0.7); the lower limit of 0.05 should safely encompass the spins of the binary neutron stars (which are close enough to merge in the age of the Universe) we have estimated from pulsar observations. The masses roughly match what we have measured for the neutron stars in our Galaxy. (The combinations at the tip of the banana for the high spins would be a bit odd).

If we were dealing with black holes, we’d be done: they are only described by mass and spin. Neutron stars are more complicated. Black holes are just made of warped spacetime, neutron stars are made of delicious nuclear material. This can get distorted during the inspiral—tides are raised on one by the gravity of the other. These extract energy from the orbit and accelerate the inspiral. The tidal deformability depends on the properties of the neutron star matter (described by its equation of state). The fluffier a neutron star is, the bigger the impact of tides; the more compact, the smaller the impact. We don’t know enough about neutron star material to predict this with certainty—by measuring the tidal deformation we can learn about the allowed range. Unfortunately, we also didn’t yet have good model waveforms including tides, so to start we’ve just done a preliminary analysis (an improved analysis was done for the GW170817 Properties Paper). We find that some of the stiffer equations of state (the ones which predict larger neutron stars and bigger tides) are disfavoured; however, we cannot rule out zero tides. This means we can’t rule out the possibility that we have found two low-mass black holes from the gravitational waves alone. This would be an interesting discovery; however, the electromagnetic observations mean that the more obvious explanation of neutron stars is more likely.

From the gravitational wave signal, we can infer the source distance. Combining this with the electromagnetic observations we can do some cool things.

First, the gamma-ray burst arrived at Earth 1.7 seconds after the merger. 1.7 seconds is not a lot of difference after travelling for something like 85–160 million years (that’s roughly the time since the Cretaceous or Late Jurassic periods). Of course, we don’t expect the gamma-rays to be emitted at exactly the moment of merger, but allowing for a sensible range of emission times, we can bound the difference between the speed of gravity and the speed of light. In general relativity they should be the same, and we find that the difference should be no more than three parts in .
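
The scale of this bound is easy to check on the back of an envelope: if both signals travel a distance D and arrive within Δt of each other, the fractional speed difference is roughly c Δt / D. A sketch with round illustrative values (the published bound uses conservative assumptions about the distance and the emission time):

```python
c = 3.0e8        # speed of light in m/s
Mpc = 3.086e22   # metres per megaparsec
D = 40 * Mpc     # roughly the distance to the source
delta_t = 1.7    # observed arrival-time difference in seconds

# Fractional difference between the speed of gravity and the speed of light
fractional_difference = c * delta_t / D
print(f"{fractional_difference:.0e}")  # → 4e-16
```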

Second, we can combine the gravitational wave distance with the redshift of the galaxy to measure the Hubble constant, the rate of expansion of the Universe. Our best estimates for the Hubble constant, from the cosmic microwave background and from supernova observations, are inconsistent with each other (the most recent supernova analysis only increases the tension). Which is awkward. Gravitational wave observations should have different sources of error and help to resolve the difference. Unfortunately, with only one event our uncertainties are rather large, which leads to a diplomatic outcome.
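
The core of this standard-siren measurement is simple at low redshift: the Hubble constant is roughly the recession velocity (c times redshift) divided by the gravitational-wave luminosity distance. A sketch with round illustrative numbers, not the published values:

```python
c = 3.0e5    # speed of light in km/s
z = 0.01     # illustrative recession redshift for the host galaxy
d = 40.0     # illustrative luminosity distance in Mpc

H0 = c * z / d    # Hubble constant in km/s/Mpc
print(round(H0))  # → 75
```

With a single event, the distance uncertainty (largely from the unknown binary inclination) dominates, hence the wide error bars.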

Finally, we can move from estimating upper limits on binary neutron star merger rates to estimating the rates themselves! We estimate the merger rate density is in the range (assuming a uniform distribution of neutron star masses between one and two solar masses). This is surprisingly close to what the Collaboration expected back in 2010: a rate of between and , with a *realistic* rate of . This means that we are on track to see many more binary neutron stars—perhaps one a week at design sensitivity!

Advanced LIGO and Advanced Virgo observed a binary neutron star inspiral. The rest of the astronomical community has observed what happened next (sadly there are no neutrinos). This is the first time we have such complementary observations—hopefully there will be many more to come. There’ll be a *huge* number of results coming out over the following days and weeks. From these, we’ll start to piece together more information on what neutron stars are made of, and what happens when you smash them together (take that particle physicists).

Also: I’m exhausted, my inbox is overflowing, and I will have far too many papers to read tomorrow.

**GW170817 Discovery Paper:** GW170817: Observation of gravitational waves from a binary neutron star inspiral

**Multimessenger Astronomy Paper:** Multi-messenger observations of a binary neutron star merger

**Data release:** LIGO Open Science Center

Over my vacation I cleaned up my email. I had a backlog starting around September 2015. I think there were over 6000 which I sorted or deleted. I had about 20 left to deal with when I got back to work. GW170817 undid that. Despite doing my best to keep up, there are over 1000 emails in my inbox…

Around the start of O2, I was asked when I expected our results to be public. I said it would depend upon what we found. If it was only high-mass black holes, those are quick to analyse and we know what to do with them, so results shouldn’t take long, now we have the first few out of the way. In this case, perhaps a couple of months, as we would have been generating results as we went along. However, the worst case scenario would be a binary neutron star overlapping with non-Gaussian noise. Binary neutron stars are more difficult to analyse (they are longer signals, and there are matter effects to worry about), and it would be complicated to get everyone to be happy with our results because we were doing lots of things for the first time. Obviously, if one of these happened at the end of the run, there’d be quite a delay…

I think I got that half-right. We’ve done amazingly well analysing GW170817 to get results out in just two months, but I think it will be a while before we get the full O2 set of results out, as we’ve been neglecting other things (you’ll notice we’ve not updated our binary black hole merger rate estimate since GW170104, nor given detailed results for testing general relativity with the more recent detections).

At the time of the GW170817 alert, I was working on writing a research proposal. As part of this, I was explaining why it was important to continue working on gravitational-wave parameter estimation, in particular how to deal with non-Gaussian or non-stationary noise. I think I may be a bit of a jinx. For GW170817, the glitch wasn’t a big problem; these types of blips can be removed. I’m more concerned about the longer duration ones, which are less easy to separate out from background noise. Don’t say I didn’t warn you in O3.

The duty of analysing signals to infer their source properties was divided up into shifts for O2. On January 4, the time of GW170104, I was on shift with my partner Aaron Zimmerman. It was his first day. Having survived that madness, Aaron signed back up for the rota. Can you guess who was on shift for the week which contained GW170814 and GW170817? Yep, Aaron (this time partnered with the excellent Carl-Johan Haster). Obviously, we’ll need to have Aaron on rota for the entirety of O3. In preparation, he has already started on paper drafting:

Methods Section: Chained rota member to a terminal, ignored his cries for help. Detections followed swiftly.

The lightest elements (hydrogen, helium and lithium) were made during the Big Bang. Stars burn these to make heavier elements. Energy can be released up to around iron. Therefore, heavier elements need to be made elsewhere, for example in the material ejected from supernovae or (as we have now seen) neutron star mergers, where there are lots of neutrons flying around to be absorbed. Elements (like gold and platinum) formed by this rapid neutron capture are known as r-process elements, I think because they are beloved by pirates.

A couple of weeks ago, the Nobel Prize in Physics was announced for the observation of gravitational waves. In December, the laureates will be presented with a gold (not chocolate) medal. I love the idea that this gold may have come from merging neutron stars.

Advanced Virgo joined O2, the second observing run of the advanced detector era, on 1 August. This was a huge achievement. It has not been an easy route commissioning the new detector—it never ceases to amaze me how sensitive these machines are. Together, Advanced Virgo (near Pisa) and the two Advanced LIGO detectors (in Livingston and Hanford in the US) would take data until the end of O2 on 25 August.

On 14 August, we found a signal. A signal that was observable in all three detectors [bonus note]. Virgo is less sensitive than the LIGO instruments, so there is no impressive plot that shows something clearly popping out, but the Virgo data do complement the LIGO observations, indicating a consistent signal in all three detectors [bonus note].

The signal originated from the coalescence of two black holes. GW170814 is thus added to the growing family of GW150914, LVT151012, GW151226 and GW170104.

GW170814 most closely resembles GW150914 and GW170104 (perhaps there’s something about ending with a 4). If we compare the masses of the two component black holes of the binary ( and ), and the black hole they merge to form (), they are all quite similar:

- GW150914: , , ;
- GW170104: , , ;
- GW170814: , , .

GW170814’s source is another high-mass black hole system. It’s not too surprising (now we know that these systems exist) that we observe lots of these, as more massive black holes produce louder gravitational wave signals.

GW170814 is also comparable in terms of black hole spins. Spins are more difficult to measure than masses, so we’ll just look at the effective inspiral spin , a particular combination of the two component spins that influences how they inspiral together, and the spin of the final black hole:

- GW150914: , ;
- GW170104: , ;
- GW170814: , .

There’s some spread, but the effective inspiral spins are all consistent with being close to zero. Small values occur when the individual spins are small, if the spins are misaligned with each other, or some combination of the two. I’m starting to ponder if high-mass black holes might have small spins. We don’t have enough information to tease these apart yet, but this new system is consistent with the story so far.

One of the things Virgo helps a lot with is localizing the source on the sky. Most of the information about the source location comes from the difference in arrival times at the detectors (since we know that gravitational waves should travel at the speed of light). With two detectors, the time delay constrains the source to a ring on the sky; with three detectors, time delays can narrow the possible locations down to a couple of blobs. Folding in the amplitude of the signal as measured by the different detectors adds extra information, since detectors are not equally sensitive to all points on the sky (they are most sensitive to sources overhead or underneath). This can even help when you don’t observe the signal in all detectors, as you know the source must be in a direction that detector isn’t too sensitive to. GW170814 arrived at LIGO Livingston first (although it’s not a competition), then ~8 ms later at LIGO Hanford, and finally ~14 ms later at Virgo. If we only had the two LIGO detectors, we’d have an uncertainty on the source’s sky position of over 1000 square degrees, but adding in Virgo, we get this down to 60 square degrees. That’s still pretty large by astronomical standards (the full Moon is about a quarter of a square degree), but a fantastic improvement [bonus note]!
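
The ring-on-the-sky constraint follows from simple geometry: a delay Δt between two detectors separated by a baseline d fixes the angle θ between the baseline and the source direction through cos θ = c Δt / d. A sketch using a round approximation to the Hanford–Livingston baseline:

```python
import math

c = 3.0e8          # speed of light in m/s
baseline = 3.0e6   # ~3000 km between the two LIGO sites, in metres
delta_t = 8.0e-3   # arrival-time difference in seconds, as quoted above

# Angle between the inter-detector baseline and the source direction;
# every direction with this angle forms a ring on the sky.
cos_theta = c * delta_t / baseline
theta = math.degrees(math.acos(cos_theta))
print(round(theta, 1))  # → 36.9
```

Each detector pair gives such a ring; with three detectors the rings intersect, cutting the localization down to a couple of blobs.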

Having additional detectors can help improve gravitational wave measurements in other ways too. One of the predictions of general relativity is that gravitational waves come in two polarizations. These polarizations describe the pattern of stretching and squashing as the wave passes, and are illustrated below.

These two polarizations are the two tensor polarizations, but other patterns of squeezing could be present in modified theories of gravity. If we could detect any of these we would immediately know that general relativity is wrong. The two LIGO detectors are almost exactly aligned, so it’s difficult to get any information on other polarizations. (We tried with GW150914 and couldn’t say anything either way). With Virgo, we get a little more information. As a first illustration of what we may be able to do, we compared how well the observed pattern of radiation at the detectors matched different polarizations, to see how general relativity’s tensor polarizations compared to a signal of entirely vector or scalar radiation. The tensor polarizations are clearly preferred, so general relativity lives another day. This isn’t too surprising, as most modified theories of gravity with other polarizations predict mixtures of the different polarizations (rather than all of one). To be able to constrain all the mixtures with these short signals we really need a network of five detectors, so we’ll have to wait for KAGRA and LIGO-India to come on-line.

We’ll be presenting a more detailed analysis of GW170814 later, in papers summarising our O2 results, so stay tuned for more.

**Title:** GW170814: A three-detector observation of gravitational waves from a binary black hole coalescence

**arXiv:** 1709.09660 [gr-qc]

**Journal:** *Physical Review Letters*; **119**(14):141101(16) [bonus note]

**Data release:** LIGO Open Science Center

**Science summary:** GW170814: A three-detector observation of gravitational waves from a binary black hole coalescence

Those of you who have been following the story of gravitational waves for a while may remember the case of the Big Dog. This was a blind injection of a signal during the initial detector era. One of the things that made it an interesting signal to analyse, was that it had been injected with an inconsistent sign in Virgo compared to the two LIGO instruments (basically it was upside down). Making this type of sign error is easy, and we were a little worried that we might make this sort of mistake when analysing the real data. The Virgo calibration team were extremely careful about this, and confident in their results. Of course, we’re quite paranoid, so during the preliminary analysis of GW170814, we tried some parameter estimation runs with the data from Virgo flipped. This was clearly disfavoured compared to the right sign, so we all breathed easily.

I am starting to believe that God may be a detector commissioner. At the start of O1, we didn’t have the hardware injection systems operational, but GW150914 showed that things were working properly. Now, with a third detector on-line, GW170814 shows that the network is functioning properly. Astrophysical injections are definitely the best way to confirm things are working!

Our usual way to search for binary black hole signals is to compare the data to a bank of waveform templates. Since Virgo is less sensitive than the two LIGO detectors, and would only be running for a short amount of time, these main searches weren’t extended to use data from all three detectors. This seemed like a sensible plan: we were confident that this wouldn’t cause us to miss anything, and we can detect GW170814 with high significance using just data from Livingston and Hanford—the false alarm rate is estimated to be less than 1 in 27000 years (meaning that if the detectors were left running in the same state, we’d expect random noise to make something this signal-like less than once every 27000 years). However, we realised that we wanted to be able to show that Virgo had indeed seen something, and the search wasn’t set up for this.

Therefore, for the paper, we list three different checks to show that Virgo did indeed see the signal.

- In a similar spirit to the main searches, we took the best fitting template (it doesn’t matter in terms of results if this is the best matching template found by the search algorithms, or the maximum likelihood waveform from parameter estimation), and compared this to a stretch of data. We then calculated the probability of seeing a peak in the signal-to-noise ratio (as shown in the top row of Figure 1) at least as large as identified for GW170814, within the time window expected for a real signal. Little blips of noise can cause peaks in the signal-to-noise ratio; for example, there’s a glitch about 50 ms after GW170814 which shows up. We find that there’s a 0.3% probability of getting a signal-to-noise-ratio peak as large as GW170814. That’s pretty solid evidence for Virgo having seen the signal, but perhaps not overwhelming.
- Binary black hole coalescences can also be detected (if the signals are short) by our searches for unmodelled signals. This was the case for GW170814. These searches were using data from all three detectors, so we can compare results with and without Virgo. Using just the two LIGO detectors, we calculate a false alarm rate of 1 per 300 years. This is good enough to claim a detection. Adding in Virgo, the false alarm rate drops to 1 per 5900 years! We see adding in Virgo improves the significance by almost a factor of 20.
- Using our parameter estimation analysis, we calculate the evidence (marginal likelihood) for (i) there being a coherent signal in Livingston and Hanford, and Gaussian noise in Virgo, and (ii) there being a coherent signal in all three detectors. We then take the ratio to calculate the Bayes factor. We find that a coherent signal in all three detectors is preferred by a factor of over 1600. This is a variant of a test proposed in Veitch & Vecchio (2010); it could be fooled if the noise in Virgo is non-Gaussian (if there is a glitch), but together with the above we think that the simplest explanation for Virgo’s data is that there is a signal.
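The last check is just a ratio of evidences. As a rough sketch of how the Bayes factor falls out (the log-evidence values below are hypothetical, chosen only to illustrate the arithmetic and reproduce the order of magnitude quoted):

```python
import math

# Hypothetical log evidences from parameter-estimation runs (illustrative only):
log_Z_coherent = -5120.0    # coherent signal in all three detectors
log_Z_hl_noise_v = -5127.4  # coherent signal in H+L, Gaussian noise in Virgo

# The Bayes factor is the ratio of evidences, i.e. exp of the log difference
bayes_factor = math.exp(log_Z_coherent - log_Z_hl_noise_v)
print(f"Coherent signal in all three detectors preferred by a factor of {bayes_factor:.0f}")
```

Since the evidences are computed in log space, a difference of just a few in the log translates to a large preference for one hypothesis.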

In conclusion: Virgo works. Probably.

Adding Virgo to the network greatly improves localization of the source, which is a huge advantage when searching for counterparts. For a binary black hole, as we have here, we don’t expect a counterpart (which would make finding one even more exciting). So far, no counterpart has been reported.

- Arcavi *et al*. (2017) reported an optical search from the Las Cumbres Observatory.
- Smith *et al*. (2018) reported an optical search, targeting strong-lensing galaxy clusters, with Gemini South and the Very Large Telescope.
- Adriani *et al*. (2018) describe their gamma-ray observations with CALET and report upper limits for GW151226, GW170104, GW170608, GW170814 and GW170817.

This is the first observation we’ve announced *before* being published. The draft made public at the time of the announcement was accepted, pending fixing up some minor points raised by the referees (who were fantastically quick in reporting back). I guess that binary black holes are now familiar enough that we are on solid ground claiming them. I’d be interested to know if people think that it would be good if we didn’t always wait for the rubber stamp of peer review, or whether they would prefer for detections to be externally vetted? Sharing papers before publication would mean that we get more chance for feedback from the community, which would be good, but perhaps the Collaboration should be seen to do things properly?

One reason that the draft paper is being shared early is because of an opportunity to present to the G7 Science Ministers Meeting in Italy. I think any excuse to remind politicians that international collaboration is a good thing is worth taking. Although I would have liked the paper to be a little more polished [bonus advice]. The opportunity to present here only popped up recently, which is one reason why things aren’t as perfect as usual.

I also suspect that Virgo were keen to demonstrate that they had detected something prior to any Nobel Prize announcement. There’s a big difference between stories being written about LIGO *and* Virgo’s discoveries, and having as an afterthought that Virgo also ran in August.

The main reason, however, was to get this paper out before the announcement of GW170817. The identification of GW170817’s counterpart relied on us being able to localize the source. In that case, there wasn’t a clear signal in Virgo (the lack of a signal tells us the source was in a direction Virgo wasn’t particularly sensitive to). People agreed that we really needed to demonstrate that Virgo *can* detect gravitational waves in order to be convincing that not seeing a signal is useful information. We needed to demonstrate that Virgo does work so that our case for GW170817 was watertight *and* bulletproof (it’s important to be prepared).

Some useful advice I was given when a PhD student was that done is better than perfect. Having something finished is often more valuable than having lots of really polished bits that don’t fit together to make a cohesive whole, and having *everything* absolutely perfect takes forever. This is useful to remember when writing up a thesis. I think it might apply here too: the Paper Writing Team have done a truly heroic job in getting something this advanced in little over a month. There’s always one more thing to do… [one more bonus note]

One point I was hoping that the Paper Writing Team would clarify is our choice of prior probability distribution for the black hole spins. We don’t get a lot of information about the spins from the signal, so our choice of prior has an impact on the results.

The paper says that we assume “no restrictions on the spin orientations”, which doesn’t make much sense, as one of the two waveforms we use to analyse the signal only includes spins aligned with the orbital angular momentum! What the paper meant was that we assume a prior distribution which has an isotropic distribution of spins, and for the aligned spin (no precession) waveform, we assume a prior probability distribution on the aligned components of the spins which matches what you would have for an isotropic distribution of spins (in effect, assuming that we can only measure the aligned components of the spins, which is a good approximation).

I’ll add to this post as I get time, and as papers are published. I’ve started off with papers searching for compact binary coalescences (as these are closest to my own research). There are separate posts on our detections GW150914 (and its follow-up papers: set I, set II) and GW151226 (this post includes our end-of-run summary of the search for binary black holes, including details of LVT151012).

**Title:** Upper limits on the rates of binary neutron star and neutron-star–black-hole mergers from Advanced LIGO’s first observing run

**arXiv:** 1607.07456 [astro-ph.HE]

**Journal:** *Astrophysical Journal Letters*; **832**(2):L21(15); 2016

Our main search for compact binary coalescences targets binary black holes (binaries of two black holes), binary neutron stars (two neutron stars) and neutron-star–black-hole binaries (one of each). Having announced the results of our search for binary black holes, this paper gives the details of the rest. Since we didn’t make any detections, we set some new, stricter upper limits on their merger rates. For binary neutron stars, this is .

**More details:** O1 Binary Neutron Star/Neutron Star–Black Hole Paper Paper summary

**Title:** Search for gravitational waves associated with gamma-ray bursts during the first Advanced LIGO observing run and implications for the origin of GRB 150906B

**arXiv:** 1611.07947 [astro-ph.HE]

**Journal:** *Astrophysical Journal*; **841**(2):89(18); 2016

**LIGO science summary:** What’s behind the mysterious gamma-ray bursts? LIGO’s search for clues to their origins

Some binary neutron star or neutron-star–black-hole mergers may be accompanied by a gamma-ray burst. This paper describes our search for signals coinciding with observations of gamma-ray bursts (including GRB 150906B, which was potentially especially close by). Knowing when to look makes it easy to distinguish a signal from noise. We don’t find anything, so we can exclude any close binary mergers as sources of these gamma-ray bursts.

**More details:** O1 Gamma-Ray Burst Paper summary

**Title:** Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO

**arXiv:** 1704.04628 [gr-qc]

**Journal:** *Physical Review D*; **96**(2):022001(14); 2017

**LIGO science summary:** Search for mergers of intermediate-mass black holes

Our main search for binary black holes in O1 targeted systems with masses less than about 100 solar masses. There could be more massive black holes out there. Our detectors are sensitive to signals from binaries up to a few hundred solar masses, but these are difficult to detect because they are so short. This paper describes our search specially designed for such systems. This combines techniques which use waveform templates and those which look for unmodelled transients (bursts). Since we don’t find anything, we set some new upper limits on merger rates.

**More details:** O1 Intermediate Mass Black Hole Binary Paper summary

**Title:** All-sky search for short gravitational-wave bursts in the first Advanced LIGO run

**arXiv:** 1611.02972 [gr-qc]

**Journal:** *Physical Review D*; **95**(4):042003(14); 2017

If we only search for signals for which we have models, we’ll never discover something new. Unmodelled (burst) searches are more flexible and don’t assume a particular form for the signal. This paper describes our search for short bursts. We successfully find GW150914, as it is short and loud, and burst searches are good for this type of signal, but don’t find anything else. (It’s not too surprising that GW151226 and LVT151012 are below the threshold for detection because they are longer and quieter than GW150914.)

**More details:** O1 Burst Paper summary

**Synopsis:** O1 Binary Neutron Star/Neutron Star–Black Hole Paper

**Read this if:** You want a change from black holes

**Favourite part:** We’re getting closer to detection (and it’ll still be interesting if we don’t find anything)

The Compact Binary Coalescence (CBC) group target gravitational waves from three different flavours of binary in our main search: binary neutron stars, neutron star–black hole binaries and binary black holes. Before O1, I would have put my money on us detecting a binary neutron star first, around-about O3. Reality had other ideas, and we discovered binary black holes. Those results were reported in the O1 Binary Black Hole Paper; this paper goes into our results for the others (which we didn’t detect).

To search for signals from compact binaries, we use a bank of gravitational wave signals to match against the data. This bank goes up to total masses of 100 solar masses. We split the bank up, so that objects below 2 solar masses are considered neutron stars. This doesn’t make too much difference to the waveforms we use to search (neutron stars, being made of stuff, can be tidally deformed by their companion, which adds some extra features to the waveform, but we don’t include these in the search). However, we do limit the spins for neutron stars to less than 0.05, as this encloses the range of spins estimated for neutron star binaries from binary pulsars. This choice shouldn’t impact our ability to detect neutron stars with moderate spins too much.

We didn’t find any interesting events: the results were consistent with there just being background noise. If you read *really* carefully, you might have deduced this already from the O1 Binary Black Hole Paper, as the results from the different types of binaries are completely decoupled. Since we didn’t find anything, we can set some upper limits on the merger rates for binary neutron stars and neutron star–black hole binaries.

The expected number of events found in the search is given by

$\langle N \rangle = R \langle VT \rangle$,

where $R$ is the merger rate, and $\langle VT \rangle$ is the surveyed time–volume (you expect more detections if your detectors are more sensitive, so that they can find signals from further away, or if you leave them on for longer). We can estimate $\langle VT \rangle$ by performing a set of injections and seeing how many are found/missed at a given threshold. Here, we use a false alarm rate of one per century. Given our estimate for $\langle VT \rangle$ and our observation of zero detections, we can calculate a probability distribution for $R$ using Bayes’ theorem. This requires a choice for a prior distribution of $R$. We use a uniform prior, for consistency with what we’ve done in the past.

With a uniform prior, the upper limit on the rate at confidence level $c$ is

$R_c = \dfrac{-\ln(1 - c)}{\langle VT \rangle}$,

so the 90% confidence upper limit is $R_{90\%} = \ln(10)/\langle VT \rangle \approx 2.30/\langle VT \rangle$. This is quite commonly used, for example we make use of it in the O1 Intermediate Mass Black Hole Binary Search. For comparison, if we had used a Jeffreys prior of $p(R) \propto R^{-1/2}$, the equivalent result is

$\mathrm{erf}\left(\sqrt{R_c \langle VT \rangle}\right) = c$,

and hence $R_{90\%} \approx 1.35/\langle VT \rangle$, so results would be the same to within a factor of 2, but the results with the uniform prior are more conservative.
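The prior factors are easy to check numerically, assuming a Poisson likelihood with zero observed events (a quick sketch, not the collaboration’s code):

```python
import math

def r90_uniform(vt):
    """90% upper limit with a uniform prior: posterior CDF is 1 - exp(-R*VT)."""
    return math.log(10) / vt

def r90_jeffreys(vt):
    """90% upper limit with a Jeffreys prior p(R) ~ R^(-1/2):
    posterior CDF is erf(sqrt(R*VT)); solve erf(sqrt(R*VT)) = 0.9 by bisection."""
    lo, hi = 0.0, 10.0 / vt
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.erf(math.sqrt(mid * vt)) < 0.9:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

vt = 1.0  # surveyed time-volume in arbitrary units
print(r90_uniform(vt))                     # ~2.30
print(r90_jeffreys(vt))                    # ~1.35
print(r90_uniform(vt) / r90_jeffreys(vt))  # ~1.7: same to within a factor of 2
```

The ratio of the two limits is about 1.7 whatever the value of the time–volume, which is where the "within a factor of 2" comes from.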

The plot below shows upper limits for different neutron star masses, assuming that neutron star spins are (uniformly distributed) between 0 and 0.05 and isotropically orientated. From our observations of binary pulsars, we have seen that most of these neutron stars have masses of ~1.35 solar masses, so we can also put a limit on the binary neutron star merger rate assuming that their masses are normally distributed with mean of 1.35 solar masses and standard deviation of 0.13 solar masses. This gives an upper limit of for isotropic spins up to 0.05, and if you allow the spins up to 0.4.

For neutron star–black hole binaries there’s a greater variation in possible merger rates because the black holes can have a greater range of masses and spins. The upper limits range from about to for a 1.4 solar mass neutron star and a black hole between 30 and 5 solar masses and a range of different spins (Table II of the paper).

It’s not surprising that we didn’t see anything in O1, but what about in future runs? The plots below compare projections for our future sensitivity with various predictions for the merger rates of binary neutron stars and neutron star–black hole binaries. A few things have changed since we made these projections, for example O2 ended up being 9 months instead of 6 months, but I think we’re still somewhere in the O2 band. We’ll have to see for O3. From these, it’s clear that a detection in O1 was overly optimistic. In O2 and O3 it becomes more plausible. This means even if we don’t see anything, we’ll still be doing some interesting astrophysics as we can start ruling out some models.

Binary neutron star or neutron star–black hole mergers may be the sources of gamma-ray bursts. These are some of the most energetic explosions in the Universe, but we’re not sure where they come from (I actually find that kind of worrying). We look at this connection a bit more in the O1 Gamma-Ray Burst Paper. The theory is that during the merger, neutron star matter gets ripped apart, squeezed and heated, and as part of this we get jets blasted outwards from the swirling material. There are always jets in these type of things. We see the gamma-ray burst if we are looking down the jet: the wider the jet, the larger the fraction of gamma-ray bursts we see. By comparing our estimated merger rates with the estimated rate of gamma-ray bursts, we can place some lower limits on the opening angle of the jet. If all gamma-ray bursts come from binary neutron stars, the opening angle needs to be bigger than , and if they all come from neutron star–black hole mergers the angle needs to be bigger than .
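The beaming argument can be sketched in a couple of lines. A two-sided jet with half-opening angle θ covers a fraction 1 − cos θ of the sky, so if all observed short gamma-ray bursts come from mergers, the observed burst rate is the merger rate times this fraction, which gives a lower limit on θ. The rates below are hypothetical, just to show the geometry (the real numbers are in the paper):

```python
import math

grb_rate = 10.0        # hypothetical observed short GRB rate (per Gpc^3 per yr)
merger_rate = 12600.0  # hypothetical upper limit on the merger rate (same units)

# Beaming fraction (1 - cos theta) must be at least grb_rate / merger_rate
theta_min = math.degrees(math.acos(1 - grb_rate / merger_rate))
print(f"Jet half-opening angle > {theta_min:.1f} degrees")
```

A higher merger-rate limit means a smaller required beaming fraction, so tighter merger-rate limits translate into wider minimum opening angles.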

**Synopsis:** O1 Gamma-Ray Burst Paper

**Read this if:** You like explosions. But from a safe distance

**Favourite part:** We exclude GRB 150906B from being associated with galaxy NGC 3313

Gamma-ray bursts are extremely violent explosions. They come in two (overlapping) classes: short and long. Short gamma-ray bursts are typically shorter than ~2 seconds and have a harder spectrum (more high energy emission). We think that these may come from the coalescence of neutron star binaries. Long gamma-ray bursts are (shockingly) typically longer than ~2 seconds, and have a softer spectrum (less high energy emission). We think that these could originate from the collapse of massive stars (like a supernova explosion). The introduction of the paper contains a neat review of the physics of both these types of sources. Both types of progenitors would emit gravitational waves that could be detected if the source was close enough.

The binary mergers could be picked up by our templated search (as reported in the O1 Binary Neutron Star/Neutron Star–Black Hole Paper): we have good models for what these signals look like, which allows us to efficiently search for them. We don’t have good models for the collapse of stars, but our unmodelled searches could pick these up. These look for the same signal in multiple detectors, but since they don’t know what they are looking for, it is harder to distinguish a signal from noise than for the templated search. Cross-referencing our usual searches with the times of gamma-ray bursts could help us boost the significance of a trigger: it might not be noteworthy as just a weak gravitational-wave (or gamma-ray) candidate, but considering them *together* makes it much more unlikely that a coincidence would happen by chance. The on-line RAVEN pipeline monitors for alerts to minimise the chance that we miss a coincidence. As well as relying on our standard searches, we also do targeted searches following up on gamma-ray bursts, using the information from these external triggers.

We used two search algorithms:

- X-Pipeline is an unmodelled search (similar to cWB) which looks for a coherent signal, consistent with the sky position of the gamma-ray burst. This was run for all the gamma-ray bursts (long and short) for which we have good data from both LIGO detectors and a good sky location.
- PyGRB is a modelled search which looks for binary signals using templates. Our main binary search algorithms check for coincident signals: a signal matching the same template in both detectors with compatible times. This search instead looks for coherent signals, factoring in the source direction. This gives extra sensitivity (~20%–25% in terms of distance). Since we know what the signal looks like, we can also use this algorithm to look for signals when only one detector is taking data. We used this algorithm on all short (or ambiguously classified) gamma-ray bursts for which we have data from at least one detector.

In total we analysed times corresponding to 42 gamma-ray bursts: 41 which occurred during O1 plus GRB 150906B. This happened in the engineering run before the start of O1, and luckily Hanford was in a stable observing state at the time. GRB 150906B was localised to come from a part of the sky close to the galaxy NGC 3313, which is only 54 megaparsec away. This is within the regime where we could have detected a binary merger. This caused much excitement at the time—people thought that this could be the most interesting result of O1—but the excitement dampened down a week later with the detection of GW150914.

We didn’t find any gravitational-wave counterparts. This means that we could place some lower limits on how far away their sources could be. We performed injections of signals—using waveforms from binaries, collapsing stars (approximated with circular sine–Gaussian waveforms), and unstable discs (using an accretion disc instability model)—to see how far away we could have detected a signal, and set 90% probability limits on the distances (see Table 3 of the paper). The best of these are ~100–200 megaparsec (the worst is just 4 megaparsec, which is basically next door). These results aren’t too interesting yet; they will become more so in the future, and around the time we hit design sensitivity we will start overlapping with electromagnetic measurements of distances for short gamma-ray bursts. However, we can rule out GRB 150906B coming from NGC 3313 at high probability!

**Synopsis:** O1 Intermediate Mass Black Hole Binary Paper

**Read this if:** You like intermediate mass black holes (black holes of ~100 solar masses)

**Favourite part:** The teamwork between different searches

Black holes could come in many sizes. We know of stellar-mass black holes, the collapsed remains of dead stars, which are a few to a few tens of times the mass of our Sun, and we know of (super)massive black holes, lurking in the centres of galaxies, which are tens of thousands to billions of times the mass of our Sun. Between the two, lie the elusive *intermediate mass* black holes. There have been repeated claims of observational evidence for their existence, but these are notoriously difficult to confirm. Gravitational waves provide a means of confirming the reality of intermediate mass black holes, if they do exist.

The gravitational wave signal emitted by a binary depends upon the mass of its components. More massive objects produce louder signals, but these signals also end at lower frequencies. The merger frequency of a binary is inversely proportional to the total mass. Ground-based detectors can’t detect massive black hole binaries as they are too low frequency, but they can detect binaries of a few hundred solar masses. We look for these in this search.

Our flagship search for binary black holes looks for signals using matched filtering: we compare the data to a bank of template waveforms. The bank extends up to a total mass of 100 solar masses. This search continues above this limit (there’s actually some overlap as we didn’t want to miss anything, but we shouldn’t have worried). Higher mass binaries are hard to detect as they are shorter, and so more difficult to distinguish from a little blip of noise, which is why this search was treated differently.

As well as using templates, we can do an unmodelled (burst) search for signals by looking for coherent signals in both detectors. This type of search isn’t as sensitive, as you don’t know what you are looking for, but can pick up short signals (like GW150914).

Our search for intermediate mass black holes uses both a modelled search (with templates spanning total masses of 50 to 600 solar masses) and a specially tuned burst search. Both make sure to include low frequency data in their analysis. This work is one of the few cross-working group (CBC for the templated search, and Burst for the unmodelled) projects, and I was pleased with the results.

This is probably where you expect me to say that we didn’t detect anything so we set upper limits. That is actually not the case here: we *did* detect something! Unfortunately, it wasn’t what we were looking for. We detected GW150914, which was a relief as it did lie within the range we were searching, as well as LVT151012 and GW151226. These were more of a surprise. GW151226 has a total mass of just ~24 solar masses (as measured with cosmological redshift), and so is well outside our bank. It was actually picked up *just* on the edge, but still, it’s impressive that the searches can find things beyond what they are aiming to pick up. Having found no intermediate mass black holes, we went and set some upper limits. (Yay!)

To set our upper limits, we injected some signals from binaries with specific masses and spins, and then saw how many would have been found with greater significance than our most significant trigger (after excluding GW150914, LVT151012 and GW151226). This is effectively asking the question of when we would see something as significant as this trigger, which we think is just noise. This gives us a sensitive time–volume $\langle VT \rangle$ which we have surveyed and found no mergers. We use this to set 90% upper limits on the merger rates $R_{90\%}$, and define an effective distance $D_{\langle VT \rangle}$ so that $\langle VT \rangle = T_a \frac{4}{3} \pi D_{\langle VT \rangle}^3$, where $T_a$ is the analysed amount of time. The plot below shows our limits on rate and effective distance for our different injections.
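Inverting that definition gives the effective distance directly from the surveyed time–volume. A sketch with made-up numbers (not values from the paper):

```python
import math

def effective_distance(vt, t_analysed):
    """Radius of the sphere whose volume, times the analysed time, gives <VT>."""
    return (3 * vt / (4 * math.pi * t_analysed)) ** (1 / 3)

vt = 10.0  # hypothetical surveyed time-volume (Gpc^3 yr)
t_a = 0.1  # hypothetical analysed time (yr)
print(effective_distance(vt, t_a))  # roughly 2.9 Gpc
```

The effective distance is a convenient single number for comparing sensitivity across different injection sets, since it folds out the analysed time.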

There are a couple of caveats associated with our limits. The waveforms we use don’t include all the relevant physics (like orbital eccentricity and spin precession). Including everything is hard: we may use some numerical relativity waveforms in the future. However, they should give a good impression of our sensitivity. There’s quite a big improvement compared to previous searches (S6 Burst Search; S6 Templated Search). This comes from the improvement of Advanced LIGO’s sensitivity at low frequencies compared to initial LIGO. Future improvements to the low frequency sensitivity should increase our probability of making a detection.

I spent a lot of time working on this search as I was the review chair. As a reviewer, I had to make sure everything was done properly, and then reported accurately. I think our review team did a thorough job. I was glad when we were done, as I dislike being the bad cop.

**Synopsis:** O1 Burst Paper

**Read this if:** You like to keep an open mind about what sources could be out there

**Favourite part:** GW150914 (of course)

The best way to find a signal is to know what you are looking for. This makes it much easier to distinguish a signal from random noise. However, what about the sources for which we don’t have good models? Burst searches aim to find signals regardless of their shape. To do this, they look for coherent signals in multiple detectors. Their flexibility means that they are less sensitive than searches targeting a specific signal—the signal needs to be louder before we can be confident in distinguishing it from noise—but they could potentially detect a wider number of sources, and crucially catch signals missed by other searches.

This paper presents our main results looking for short burst signals (up to a few seconds in length). Complementary burst searches were done as part of the search for intermediate mass black hole binaries (whose signals can be so short that it doesn’t matter too much if you have a model or not) and for counterparts to gamma-ray bursts.

There are two-and-a-half burst search pipelines: coherent WaveBurst (cWB), Omicron–LALInferenceBurst (oLIB), and a BayesWave follow-up to cWB. More details of each are found in the GW150914 Burst Companion Paper.

cWB looks for coherent power in the detectors—it looks for clusters of excess power in time and frequency. The search in O1 was split into a low-frequency component (signals below 1024 Hz) and a high-frequency component (signals above 1024 Hz). The low-frequency search was further divided into three classes:

- C1 for signals which have a small range of frequencies (80% of the power in just a 5 Hz range). This is designed to catch blip glitches, short bursts of transient noise in our detectors. We’re not sure what causes blip glitches yet, but we know they are not real signals as they are seen independently in both detectors.
- C3 looks for signals which increase in frequency with time—chirps. I suspect that this was (cheekily) designed to find binary black hole coalescences.
- C2 (no, I don’t understand the ordering either) is everything else.

The false alarm rate is calculated independently for each division using time slides. We analyse data from the two detectors which has been shifted in time, so that there can be no real coincident signals between the two, and compare this background of noise-only triggers to the unshifted data.
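A toy version of the time-slide idea (nothing like the real cWB implementation, but it shows the trick): shift one detector’s trigger times by much more than the light travel time between the sites, so that any coincidences which survive must be accidental.

```python
import numpy as np

rng = np.random.default_rng(42)
duration = 1e5  # seconds of (fake) data
h_triggers = np.sort(rng.uniform(0, duration, 200))  # fake noise triggers, Hanford
l_triggers = np.sort(rng.uniform(0, duration, 200))  # fake noise triggers, Livingston
window = 0.015  # coincidence window (s), a bit more than the light travel time

def count_coincidences(a, b, window):
    """Count triggers in a with a partner in (sorted) b within the window."""
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, len(b) - 1)]) < window
    right = np.abs(a - b[np.clip(idx, 0, len(b) - 1)]) < window
    return np.count_nonzero(left | right)

# Background estimate: average coincidences over many (unphysical) time slides
slides = [count_coincidences(h_triggers,
                             np.sort((l_triggers + shift) % duration), window)
          for shift in np.arange(1.0, 101.0)]
print(np.mean(slides))  # expected accidental coincidences per analysis
```

Each slide gives an independent realisation of pure-chance coincidences, so averaging over many slides estimates how often noise alone would produce something as loud as a candidate.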

oLIB works in two stages. First (the Omicron bit), data from the individual detectors are searched for excess power. If there is anything interesting, the data from both detectors are analysed coherently. We use a sine–Gaussian template, and compare the probability that the same signal is in both detectors, to there being independent noise (potentially a glitch) in the two. This analysis is split too: there is a high quality-factor vs low quality-factor split, which is similar to cWB’s splitting off C1 to catch narrow-band features (the low quality-factor group catches the blip glitches). The false alarm rate is computed with time slides.

BayesWave is run as follow-up to triggers produced by cWB: it is too computationally expensive to run on all the data. BayesWave’s approach is similar to oLIB’s. It compares three hypotheses: just Gaussian noise, Gaussian noise and a glitch, and Gaussian noise and a signal. It constructs its signal using a variable number of sine–Gaussian wavelets. There are no cuts on its data. Again, time slides are used to estimate the false alarm rate.

The search does find a signal: GW150914. It is clearly found by all three algorithms. It is in cWB’s class C3, with a false alarm rate of less than 1 per 350 years; it is in oLIB’s high quality-factor bin with a false alarm rate of less than 1 per 230 years, and is found by BayesWave with a false alarm rate of less than 1 per 1000 years. You might notice that these results are less stringent than in the initial search results presented at the time of the detection. This is because only a limited number of time slides were done: we could get higher significance if we did more, but it was decided that it wasn’t worth the extra computing time, as we’re already convinced that GW150914 is a real signal. I’m a little sad they took GW150914 out of their plots (I guess it distorted the scale since it’s such an outlier from the background). Aside from GW150914, there are no detections.

Given the lack of detections, we can set some upper limits. I’ll skip over the limits for binary black holes, since our templated search is more sensitive here. The plot below shows limits on the amount of gravitational-wave energy emitted by a burst source at 10 kpc, which could be detected with a false alarm rate of 1 per century 50% of the time. We use some simple waveforms for this calculation. The detectable energy scales with the distance squared, so at a distance of 20 kpc, you need to increase the energy by a factor of 4.
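The distance scaling follows from the strain: h ∝ √E/D, so the energy needed for the same detectability grows as D². A one-liner to make the factor of 4 explicit:

```python
def required_energy(e_at_10kpc, distance_kpc):
    """Energy needed for the same detectability, scaling as distance squared."""
    return e_at_10kpc * (distance_kpc / 10.0) ** 2

print(required_energy(1.0, 20.0))  # doubling the distance: 4x the energy
```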

Maybe next time we’ll find something unexpected, but it will either need to be really energetic (like a binary black hole merger) or really close by (like a supernova in our own Galaxy).
