# GW190412—A new flavour of binary black hole

On 1 April 2019 LIGO and Virgo began their third observing run (O3). Never before had we observed with such sensitive gravitational wave detectors. Throughout O3 discoveries came rapidly. Binary black holes are our most common source, and as we built a larger collection we started to find some unusual systems. GW190412 is our first observation from a binary with two distinctly different-sized black holes. This observation lets us test our predictions for gravitational wave signals in a new way, and is another piece in the puzzle of understanding how binary black holes form.

### The discovery

On 12 April 2019 I awoke to the news that we had a new gravitational wave candidate [bonus note]. The event was picked up by our searches and sent out as a public alert under the name S190412m. The signal is a real beauty. There’s a striking chirp visible in the Livingston data, and a respectable chirp in the Hanford data. You can’t see a chirp in Virgo, where the signal-to-noise ratio is only about 4, but this is why we have cunning search algorithms instead of looking at the data by eye. In our final search results, our matched-filter searches GstLAL and PyCBC (which use templates of gravitational wave signals to comb through the data) identified the event with false alarm rates of better than 1 in 100,000 years and 1 in 30,000 years, respectively. Our unmodelled search coherent WaveBurst (which looks for compatible signals in multiple detectors, rather than a specific template) also identified the event with a false alarm rate of better than 1 in 1,000 years. This is a confident detection!

Time–frequency plots for GW190412 as measured by LIGO Hanford, LIGO Livingston and Virgo. The chirp of a binary coalescence is clearer in the two LIGO detectors, with the signal being loudest in Livingston. Figure 1 of the GW190412 Discovery Paper.

### Vanilla black holes

Our first gravitational wave detection, GW150914, was amazing. We had never seen a black hole around 30 times the mass of our Sun, and here we had two merging together (which we had also never seen). By the end of our second observing run, we had discovered that GW150914 was not rare! Many of our detections consisted of two roughly equal mass black holes around 20 to 40 times the mass of our Sun. We now call these systems vanilla binary black holes. They are nice and easy to analyse: we know what to do, and it’s not too difficult. I think that these signals are delicious.

GW190412’s source, however, is different. We estimate that the binary had one black hole $m_1 = 29.7^{+5.0}_{-5.3}$ times the mass of our Sun (quoting the 90% range for parameters), and the other $m_2 = 8.4^{+1.7}_{-1.0}$ times the mass of our Sun. Neither of these masses is too surprising on their own. We know black holes come in these sizes. What is new is the ratio of the masses $q = m_2/m_1 = 0.28^{+0.13}_{-0.07}$ [bonus note]. This is roughly equal to the ratio of the filling in a regular Oreo to that in a Mega Stuf Oreo. Investigations of connections between Oreos and black hole formation are ongoing. All our previous observations have mass ratios close to 1, or at least with uncertainties stretching all the way to 1. GW190412’s mass ratio is the exception.
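As a quick sanity check (a sketch using just the median masses quoted above; the real analysis propagates the full posterior distributions), the mass ratio follows directly from the component masses:

```python
# Median component masses for GW190412, in solar masses (from the text above)
m1 = 29.7  # more massive black hole
m2 = 8.4   # less massive black hole

q = m2 / m1  # mass ratio, using the 0 < q <= 1 convention
print(f"q = {q:.2f}")  # matches the quoted median of 0.28
```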

Estimated mass ratio $q$ for the two components in the binary and the effective inspiral spin $\chi_\mathrm{eff}$ (a mass-weighted combination of the spins perpendicular to the orbital plane). We show results for two different model waveforms: Phenom PHM and EOB PHM (the PHM stands for precession and higher order multipoles). Systems with unequal masses are difficult to model, so we have some extra uncertainty from the accuracy of our models. The two-dimensional plot shows the 90% probability contour. The one-dimensional plots show the probability distributions and the dotted lines mark the central 90%. Figure 2 of the GW190412 Discovery Paper.

The interesting mass ratio has a few awesome implications:

1. We get a really wonderful measurement of the spin of the more massive black hole.
2. We can observe a new feature of the gravitational wave signal (higher order multipole moments).
3. We understand a bit more about the population of binary black holes.

### Spin

Black holes have two important properties: mass (how much they bend spacetime) and spin (how much they swirl spacetime around). The black hole masses are most important for determining what a gravitational wave signal looks like, so we measure the masses pretty well. Spins leave a more subtle imprint, and so are more difficult to measure.

A well measured, and convenient to work with, combination of the two spins is the effective inspiral spin parameter

$\displaystyle \chi_\mathrm{eff} = \frac{m_1 \chi_1 \cos \theta_1 + m_2 \chi_2 \cos \theta_2}{m_1 + m_2}$,

where $\chi_1$ and $\chi_2$ are the spins of the two black holes [bonus note], and $\theta_1$ and $\theta_2$ are the tilt angles measuring the alignment of the spins with the orbital angular momentum. The spins change orientations during the inspiral if they are not perfectly aligned with the orbital angular momentum, which is referred to as precession, but $\chi_\mathrm{eff}$ is roughly constant. It also affects the rate of inspiral: binaries with larger $\chi_\mathrm{eff}$ merge when they’re a bit closer. For GW190412, we measure $\chi_\mathrm{eff} = 0.25^{+0.09}_{-0.11}$.
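The definition translates directly into code. Here is a minimal sketch (the spin magnitudes and tilt angles below are illustrative choices, not the measured posterior values):

```python
import math

def chi_effective(m1, m2, chi1, chi2, theta1, theta2):
    """Mass-weighted combination of the spin components aligned
    with the orbital angular momentum."""
    return (m1 * chi1 * math.cos(theta1)
            + m2 * chi2 * math.cos(theta2)) / (m1 + m2)

# A primary spin of 0.43 tilted by 30 degrees and a non-spinning
# secondary give a value in the right ballpark for GW190412.
print(chi_effective(29.7, 8.4, 0.43, 0.0, math.radians(30), 0.0))
```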

This is only the second time we’ve had a definite non-zero measurement of $\chi_\mathrm{eff}$, after GW151226. GW170729 had a reasonably large value, but the uncertainties did stretch to include zero. The measurement of a non-zero $\chi_\mathrm{eff}$ means that we know at least one of the black holes has spin.

The effective inspiral spin parameter $\chi_\mathrm{eff}$ measures the spin components aligned with the orbital angular momentum. To measure the spin components in the orbital plane, we typically use the effective precession spin parameter [bonus note]

$\displaystyle \chi_\mathrm{p} = \max\left\{\chi_1 \sin \theta_1 , \frac{q(4q + 3)}{(4 + 3q)}\chi_2 \sin \theta_2\right\}$.

This characterises how much spin precession we have: 1 means significant in-plane spin and maximal precession, and zero means no in-plane spin and no precession.

For GW190412, we measure $\chi_\mathrm{p} = 0.30^{+0.19}_{-0.15}$. This is the best measurement of $\chi_\mathrm{p}$ so far. It shows that we don’t see strong precession, but also suggests that there is some in-plane spin.
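The definition of $\chi_\mathrm{p}$ can be sketched the same way (again, the tilt angle here is an illustrative choice, not the measured posterior):

```python
import math

def chi_precession(m1, m2, chi1, chi2, theta1, theta2):
    """Dominant in-plane spin contribution, which sets how much
    the orbital plane precesses."""
    q = m2 / m1  # mass ratio, 0 < q <= 1
    return max(chi1 * math.sin(theta1),
               (q * (4 * q + 3) / (4 + 3 * q)) * chi2 * math.sin(theta2))

# A primary spin of 0.43 tilted by 45 degrees, with a non-spinning secondary:
print(chi_precession(29.7, 8.4, 0.43, 0.0, math.radians(45), 0.0))
```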

Estimated effective precession spin parameter $\chi_\mathrm{p}$. Results are shown for two different waveform models. To indicate how much (or little) we’ve learnt, the prior probability distribution is shown: the global prior is what we would get if we had learnt nothing, the restricted prior is what we would have after placing cuts on the effective inspiral spin parameter and mass ratio to match our observations. We are definitely getting information on precession from the data. Figure 5 of the GW190412 Discovery Paper.

Now, since we know that the masses are unequal in the binary, the contribution to $\chi_\mathrm{eff}$ is dominated by the spin of the larger black hole, or at least the component of the spin aligned with the orbital angular momentum ($\chi_\mathrm{eff} \approx \chi_1 \cos \theta_1$), and similarly $\chi_\mathrm{p}$ is dominated by the in-plane component of the larger black hole’s spin ($\chi_\mathrm{p} \approx \chi_1 \sin \theta_1$). Combining all this information, we can actually get a good measurement of the spin of the bigger black hole. We infer that $\chi_1 = 0.43^{+0.16}_{-0.26}$. This is the first time we’ve really been able to measure an individual spin!
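Under these approximations, $\chi_\mathrm{eff}$ and $\chi_\mathrm{p}$ act like the aligned and in-plane components of the primary’s spin vector, so they combine as a simple Pythagorean sum. Combining the median values is only a back-of-the-envelope estimate (the real analysis combines the full posteriors, which is why this lands near, but not exactly on, the quoted $\chi_1 = 0.43$):

```python
import math

# Median values quoted in the text
chi_eff = 0.25  # approximately chi_1 * cos(theta_1)
chi_p = 0.30    # approximately chi_1 * sin(theta_1)

chi_1 = math.hypot(chi_eff, chi_p)  # sqrt(chi_eff**2 + chi_p**2)
theta_1 = math.degrees(math.atan2(chi_p, chi_eff))
print(f"chi_1 ~ {chi_1:.2f}, tilt ~ {theta_1:.0f} degrees")
```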

We don’t yet have a really good understanding of the spins black holes are born with. Their spins can increase if they accrete material, but it needs to be a lot of stuff to change it significantly. When we make a few more spin measurements, I’m looking forward to using the information to help figure out the histories of our black holes.

### Higher order multipoles

When calculating gravitational wave signals, we often use spin-weighted spherical harmonics. These are a set of functions which describe possible patterns on a sphere. Using them, we can describe the amount of gravitational waves emitted in a particular direction. Any gravitational wave signal can be approximated as a sum of the spin-weighted spherical harmonics ${}_{-2}Y_{\ell m}(\vartheta, \phi)$, where we use $\{\vartheta, \phi\}$ as the angles on the sphere, and $(\ell, m)$ specify the harmonic. The majority of the gravitational radiation emitted from a binary is from the $(2, \pm2)$ harmonic, so we usually start with this. Larger values of $\ell$ contribute less and less. For exactly equal mass binaries with non-spinning components, only harmonics with even $\ell$ are non-zero, so really the $(2, \pm2)$ harmonic is all you need. For unequal mass binaries this is not the case. Here odd $\ell$ are important, and harmonics with $\ell = |m|$ are expected to contribute a significant amount. In previous detections, we’ve not had to worry too much about the harmonics with $\ell > 2$, which we refer to as higher order multipole moments, as they contributed little to the signal. GW190412’s unequal masses mean that they are important here.

During the inspiral, the frequency of the part of the gravitational wave signal corresponding to a given $(\ell, m)$ is $f_{\ell m} \simeq m f_\mathrm{orb}$, where $f_\mathrm{orb}$ is the orbital frequency. Most of the signal is emitted at twice the orbital frequency, but the emission from the higher order multipoles is at higher frequencies. If the $m = 2$ multipole were a musical A, then the $m = 3$ multipole would correspond to an E, and if the $m = 2$ multipole were a C, the $m = 3$ would be a G. There’s a family of chirps [bonus note]. For GW190412, we clearly pick out the frequency component at $3 f_\mathrm{orb}$ showing the significance of the $(3,\pm3)$ mode. This shows that the harmonic structure of gravitational waves is as expected [bonus note]. We have observed a perfect fifth, as played by the inspiral of two black holes.
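A sketch of this harmonic structure (the orbital frequency here is an arbitrary illustrative value; only the ratios matter):

```python
f_orb = 220.0  # illustrative orbital frequency in Hz

# Each (l, m) multipole chirps at roughly m times the orbital frequency.
f_22 = 2 * f_orb  # dominant quadrupole emission
f_33 = 3 * f_orb  # the higher multipole picked out in GW190412

print(f_33 / f_22)  # 1.5, the 3:2 frequency ratio of a perfect fifth
```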

Using waveforms which include higher order multipoles is important to get good measurements of the source’s parameters. We would not get a good measurement of the mass ratio or the distance ($730^{+140}_{-170}~\mathrm{Mpc}$, corresponding to a travel time for the signal of around 2 billion years) using templates calculated using only the $(2,\pm2)$ harmonic.

### The black hole population

GW190412’s source has two unequal mass black holes, unlike our vanilla binary black holes. Does this indicate a new flavour of binary black hole, and what can we learn about how it formed from its properties?

After our second observing run, we analysed our family of ten binary black holes to infer what the population looked like. This included fitting for the distribution of mass ratios. We assumed that the mass ratios were drawn from a distribution something like $p(q) \propto q^{\beta_q}$ and estimated the value of $\beta_q$. A result of $\beta_q = 0$ would mean that all mass ratios were equally common, while larger values would mean that black holes preferred more equal mass partners. Our analysis preferred larger values of $\beta_q$, making it appear that black holes were picky about their partners. However, with only ten systems, our uncertainties spanned the entire range we’d allowed for $\beta_q$. It was too early to say anything definite about the mass ratio distribution.
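A minimal sketch of what such a power-law model implies (a toy calculation, not the collaboration’s hierarchical analysis): drawing mass ratios from $p(q) \propto q^{\beta_q}$ shows how quickly mass ratios as extreme as GW190412’s become rare as $\beta_q$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_q(beta_q, n=100_000):
    """Draw mass ratios from p(q) proportional to q**beta_q on (0, 1]
    via inverse transform sampling (valid for beta_q > -1)."""
    u = rng.random(n)
    return u ** (1.0 / (beta_q + 1.0))

# beta_q = 0 means all mass ratios are equally common; larger values
# mean binaries pile up near equal masses (q near 1).
for beta_q in (0.0, 2.0, 6.0):
    q = sample_q(beta_q)
    print(beta_q, np.mean(q < 0.28))  # fraction as extreme as GW190412
```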

If we add in GW190412 to the previous ten observations, we get a much tighter measurement of $\beta_q$, and generally prefer values towards the lower end of what we found previously. Really, we shouldn’t just add in GW190412 when making statements about the entire population, we should fold in everything we saw in our observing run. We’re working on that. For now, consider these as preliminary results which would be similar to those we would have got if the observing run was only a couple of weeks long.

Estimated power-law slope $\beta_q$ for the binary black hole mass ratio distribution $p(q) \propto q^{\beta_q}$. Dotted lines show the results with our first ten detections, and solid lines include GW190412. Results are shown for two different waveform models. Figure 11 of the GW190412 Discovery Paper.

Since most of the other binaries are more equal mass, we can see the effects of folding this information into our analysis of GW190412. Instead of making weak assumptions about what we expect the masses to be (we normally assume uniform prior probability on the masses as redshifted and measured in the detector, as that’s easy to work with), we can use our knowledge of the population. In this case, our prior expectation that we should have something near equal mass does shift the result a little, the 90% upper limit for the mass ratio shifts from $q < 0.38$ to $q < 0.43$, but we see that the mass ratio is still clearly unequal.
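The reweighting described above can be sketched with importance weights: each sample drawn under the default prior gets a weight proportional to the population prior. This is a toy illustration with synthetic samples standing in for the real posterior, and an assumed slope $\beta_q = 2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for posterior samples of the mass ratio q.
q_samples = np.clip(rng.normal(0.28, 0.06, size=50_000), 0.01, 1.0)

# Population-informed prior p(q) ~ q**beta_q, relative to a flat prior.
beta_q = 2.0  # assumed slope preferring equal masses
weights = q_samples ** beta_q
weights /= weights.sum()

# 90% upper limits without and with the population prior.
order = np.argsort(q_samples)
flat_90 = np.quantile(q_samples, 0.9)
pop_90 = np.interp(0.9, np.cumsum(weights[order]), q_samples[order])
print(flat_90, pop_90)  # the population prior nudges the limit upwards
```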

Have we detected a new flavour of binary black hole? Should we be lumping in GW190412 with the others, or should it be its own category? Going back to our results from the second observing run, we find that we’d expect that in a set of eleven observations at least one would have a mass ratio as extreme as GW190412 $1.7^{+10.3}_{-1.3}\%$ of the time. Therefore, GW190412 is exceptional, but not completely inconsistent with our previous observations. If we repeat the calculation using the population inferred folding in GW190412, we (unsurprisingly) find it is much less unusual, with such systems being found in a set of eleven observations $25^{+47}_{-17}\%$ of the time. In conclusion, GW190412 is not vanilla, but is possibly raspberry ripple or Neapolitan: there’s still a trace of vanilla in there to connect it to the more usual binaries.

Now we’ve compared GW190412 to our previous observations, where does its source fit in with predictions? The two main options for making a merging binary black hole are via isolated evolution, where two stars live their lives together, and dynamical formation, where you have lots of black holes in a dense environment like a globular cluster and two get close enough together to capture each other. Both of these favour more equal mass binaries, with unequal mass binaries like GW190412’s source being rare (but not impossible). Since we’ve only seen one system with such a mass ratio in amongst our detections so far, either channel could possibly explain things. My money is on a mixture.

In case you were curious, calculations from Chase Kimball indicate that GW190412 is not a hierarchical merger with the primary black hole being formed from the merger of two smaller black holes.

Odds of binary black holes being a hierarchical merger versus being an original generation binary. 1G indicates first generation black holes formed from the collapse of stars, 2G indicates a black hole formed from the merger of two 1G black holes. These are preliminary results using the GWTC-1 results plus GW190412. Figure 15 of Kimball et al. (2020).

As we build up a larger collection of detections, we’ll be able to use our constraints on the population to better understand the relative contributions from the different formation mechanisms, and hence the physics of black hole manufacturing.

### Einstein is not wrong yet

Finally, since GW190412 is beautifully loud and has a respectably long inspiral, we were able to perform our usual tests of general relativity and confirm that all is as predicted.

We performed the inspiral/merger–ringdown consistency test, where we check that parameters inferred from the early, low frequency part of the signal match those from the later, high frequency part. They do.

We also performed the parameterized test, where we allow different pieces of the signal template to vary. We found that all the deviations were consistent with zero, as expected. The results are amongst the tightest we have from a single event, being comparable to results from GW151226 and GW170608. These are the lowest mass binary black holes we’ve observed so far, and so have the longest chirps.

We’ll keep checking for any evidence that Einstein’s theory of gravity is wrong. If Columbo has taught us anything, it is that the guest star is usually guilty. If it’s taught us something else, it’s the importance of a good raincoat. After that, however, it’s taught us the importance of perseverance, and always asking one more thing. Maybe we’ll catch Einstein out eventually.

### Just a taste of what’s to come

GW190412 was observed on the 12th day of O3. There were many detections to follow. Using this data set, we’ll be able to understand the properties of black holes and gravitational waves better than ever before. There are exciting results still being finalised.

Perhaps there will be a salted caramel binary black hole, or even a rocky road flavoured one? We might need to wait for our next observing run in 2021 for sprinkles though.

Title: GW190412: Observation of a binary-black-hole coalescence with asymmetric masses
arXiv: 2004.08342 [astro-ph.HE]
Science summary: GW190412: The first observation of an unequal-mass black hole merger
Data release: Gravitational Wave Open Science Center
Rating: 🍨🐦🎶🐦🥴

### Bonus notes

#### Sleep

I like sleep. I’d strongly recommend it.

#### Notation

Possibly the greatest dispute in gravitational wave astronomy is the definition of $q$. We pretty much all agree that the larger mass in a binary is $m_1$ and the lesser mass $m_2$. However, there are two camps on the mass ratio: those enlightened individuals who define $q = m_2/m_1$, meaning that the mass ratio spans the entirely sensible range of $0 \leq q \leq 1$, and those heretics who define $q = m_1/m_2$, meaning that it covers the ridiculous range of $1 \leq q \leq \infty$. Within LIGO and Virgo, we have now settled on the correct convention. Many lives may have been lost, but I’m sure you’ll agree that it is a sacrifice worth making in the cause of consistent notation.

The second greatest dispute may be what to call the spin magnitudes. In LIGO and Virgo we’ve often used both $\chi$ (the Greek letter chi) and $a$. After a tense negotiation, conflict was happily avoided, and we have settled on $\chi$, with only the minimum amount of bloodshed. If you’re reading some of our older stuff, please bear in mind that we’ve not been consistent about the meaning of these symbols.

#### Effective spins

Sadly, my suggestions to call $\chi_\mathrm{p}$ and $\chi_\mathrm{eff}$ Chip and Dale have not caught on.

#### Hey! Listen!

Here are two model waveforms (made by Florian Wicke and Frank Ohme) consistent with the properties of GW190412, but shifted in frequency by a factor of 25 to make them easier to hear:

Can you tell the difference? I prefer the more proper one with harmonics.

#### Exactly as predicted

The presence of higher order multipole moments, as predicted, could be seen as another win for Einstein’s theory of general relativity. However, we expect the same pattern of emission in any theory, as it’s really set by the geometry of the source. If the frequency were not an integer multiple of the orbital frequency, the gravitational waves would get out of phase with their source, which would not make any sense.

The really cool thing, in my opinion, is that we now have detectors sensitive enough to pick out these subtle details.

# Classifying the unknown: Discovering novel gravitational-wave detector glitches using similarity learning

Gravity Spy is an awesome project that combines citizen science and machine learning to classify glitches in LIGO and Virgo data. Glitches are short bursts of noise in our detectors which make analysing our data more difficult. Some glitches have known causes, others are more mysterious. Classifying glitches into different types helps us better understand their properties, and in some cases track down their causes and eliminate them! In this paper, led by Scotty Coughlin, we demonstrated the effectiveness of a new tool which our citizen scientists can use to identify new glitch classes.

### The Gravity Spy project

Gravitational-wave detectors are complicated machines. It takes a lot of engineering to achieve the accuracy needed to observe gravitational waves. Most of the time, our detectors perform well. The background noise in our detectors is easy to understand and model. However, our detectors are also subject to glitches, unusual (sometimes extremely loud and complicated) noise that doesn’t fit the usual properties of noise. Glitches are short, only appearing in a small fraction of the total data, but they are common. This makes detection and analysis of gravitational-wave signals more difficult. Detection is tricky because you need to be careful to distinguish glitches from signals (and possibly glitches and signals together), and understanding the signal is complicated as we may need to model a signal and a glitch together [bonus note]. Understanding glitches is essential if gravitational-wave astronomy is to be a success.

To understand glitches, we need to be able to classify them. We can search for glitches by looking for loud pops, whooshes and splats in our data. The task is then to spot similarities between them. Once we have a set of glitches of the same type, we can examine the state of the instruments at these times. In the best cases, we can identify the cause, and then work to improve the detectors so that this no longer happens. Other times, we might not be able to find the cause, but we can find one of the monitors in our detectors which acts as a witness to the glitch. Then we know that if something appears in that monitor, we expect a glitch of a particular form. This might mean that we throw away that bit of data, or perhaps we can use the witness data to subtract out the glitch. Since glitches are so common, classifying them is a huge amount of work. It is too much for our detector characterisation experts to do by hand.

There are two cunning options for classifying large numbers of glitches:

1. Get a computer to do it. The difficulty is teaching a computer to identify the different classes. Machine-learning algorithms can do this, if they are properly trained. Training can require a large training set, and careful validation, so the process is still labour intensive.
2. Get lots of people to help. The difficulty here is getting non-experts up-to-speed on what to look for, and then checking that they are doing a good job. Crowdsourcing classifications is something citizen scientists can do, but we will need a large number of dedicated volunteers to tackle the full set of data.

The idea behind Gravity Spy is to combine the two approaches. We start with a small training set from our detector characterization experts, and train a machine-learning algorithm on them. We then ask citizen scientists (thanks Zooniverse) to classify the glitches. We start them off with glitches the machine-learning algorithm is confident in its classification; these should be easy to identify. As citizen scientists get more experienced, they level up and start tackling more difficult glitches. The citizen scientists validate the classifications of the machine-learning algorithm, and provide a larger training set (especially helpful for the rarer glitch classes) for it. We can then happily apply the machine-learning algorithm to classify the full data set [bonus note].

How Gravity Spy works: the interconnection of machine-learning classification and citizen-scientist classification. The similarity search is used to identify glitches similar to one which do not fit into current classes. Figure 2 of Coughlin et al. (2019).

I especially like the levelling-up system in Gravity Spy. I think it helps keep citizen scientists motivated, as it both prevents them from being overwhelmed when they start and helps them see their own progress. I am currently Level 4.

Gravity Spy works using images of the data. We show spectrograms, plots of how loud the output of the detectors is at different frequencies at different times. A gravitational wave from a binary would show a chirp structure, starting at lower frequencies and sweeping up.

Spectrogram showing the upward-sweeping chirp of gravitational wave GW170104 as seen in Gravity Spy. I correctly classified this as a Chirp.

### New glitches

The Gravity Spy system works smoothly. However, it is set up to work with a fixed set of glitch classes. We may be missing new glitch classes, either because they are rare and hadn’t been spotted by our detector characterization team, or because we changed something in our detectors and a new class arose (we expect this to happen as we tune up the detectors between observing runs). We can add more classes for our citizen scientists and machine-learning algorithm to use, but how do we spot new classes in the first place?

Our citizen scientists managed to identify a few new glitches by spotting things which didn’t fit into any of the classes. These get put in the None-of-the-Above class. Occasionally, you’ll come across similar looking glitches, and by collecting a few of these together, build a new class. The Paired Dove and Helix classes were identified early on by our citizen scientists this way; my favourite suggested new class is the Falcon [bonus note]. The difficulty is finding a large number of examples of a new class—you might only recognise a common feature after going past a few examples, backtracking to find the previous examples is hard, and you just have to keep working until you are lucky enough to be given more of the same.

Example Helix (left) and Paired Dove (right) glitches. These classes were identified by Gravity Spy citizen scientists. Helix glitches are related to hiccups in the auxiliary lasers used to calibrate the detectors by pushing on the mirrors. Paired Dove glitches are related to motion of the beamsplitter in the interferometer. Adapted from Figure 8 of Zevin et al. (2017).

To help our citizen scientists find new glitches, we created a similarity search. Having found an interesting glitch, you can search for similar examples, and quickly put together a collection of your new class. The video below shows how it works. The thing we had to work out was how to define similar.

#### Transfer learning

Our machine-learning algorithm only knows about the classes we tell it about. It then works out the features which distinguish the different classes, and are common to glitches of the same class. Working in this feature space, glitches form clusters of different classes.

Visualisation showing the clustering of different glitches in the Gravity Spy feature space. Each point is a different glitch from our training set. The feature space has more than three dimensions: this visualisation was made using a technique which preserves the separation and clustering of different and similar points. Figure 1 of Coughlin et al. (2019).

For our similarity search, our idea was to measure distances in feature space [bonus note for experts]. This should work well if our current set of classes has a wide enough set of features to capture the characteristics of the new class; however, it won’t be effective if the new class is completely different, so that its unique features are not recognised. As an analogy, imagine that you had an algorithm which classified M&M’s by colour. It would probably do well if you asked it to distinguish a new colour, but would probably do poorly if you asked it to distinguish peanut butter filled M&M’s, as they are identified by flavour, which is not a feature it knows about. The strategy of using what a machine-learning algorithm learnt about one problem to tackle a new problem is known as transfer learning, and we found this strategy worked well for our similarity search.
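A sketch of the idea (a generic cosine-similarity search over feature vectors, not the actual Gravity Spy implementation): having mapped each glitch to a point in feature space, similar glitches are simply the nearest points to the query.

```python
import numpy as np

def most_similar(query, features, top_k=5):
    """Rank glitches by cosine similarity to a query glitch
    in the machine-learning feature space."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = f @ q  # cosine similarity of every glitch to the query
    return np.argsort(scores)[::-1][:top_k]

# Toy example: 100 glitches embedded in a 16-dimensional feature space.
rng = np.random.default_rng(2)
features = rng.normal(size=(100, 16))
print(most_similar(features[0], features))  # the query itself ranks first
```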

### Raven Pecks and Water Jets

To test our similarity search, we applied it to two glitches classes not in the Gravity Spy set:

1. Raven Peck glitches are caused by thirsty ravens pecking ice built up along nitrogen vent lines outside of the Hanford detector. Raven Pecks look like horizontal lines in spectrograms, similar to other Gravity Spy glitch classes (like the Power Line, Low Frequency Line and 1080 Line). The similarity search should therefore do a good job, as we should be able to recognise their important features.
2. Water Jet glitches were caused by local seismic noise at the Hanford detector which causes loud bands which disturb the input laser optics. The Water Jet glitch doesn’t have anything to do with water: it is named based on its appearance (like a fountain, not a weasel). Its features are subtle, and unlike other classes, so we would expect this to be difficult for our similarity search to handle.

These glitches appeared in the data from the second observing run. Raven Pecks appeared between 14 April and 9 August 2017, and Water Jets between 4 January and 28 May 2017. Over these intervals there are a total of 13,513 and 26,871 Gravity Spy glitches from all types, so even if you knew exactly when to look, you would have a large number to search through to find examples.

Example Raven Peck (left) and Water Jet (right) glitches. These classes of glitch are not included in the usual Gravity Spy scheme. Adapted from Figure 3 of Coughlin et al. (2019).

We tested using our machine-learning feature space for the similarity search against simpler approaches: using the raw difference in pixels, and using a principal component analysis to create a feature space. Results are shown in the plots below. These show the fraction of glitches we want returned by the similarity search versus the total number of glitches rejected. Ideally, we would want to reject all the glitches except the ones we want, so the search would return 100% of the wanted class and reject almost 100% of the total set. However, the actual results depend on the adopted threshold for the similarity search: if we’re very strict, we’ll reject pretty much everything and only get the most similar glitches of the class we want; if we are too accepting, we get everything back, regardless of class. The plots can be read as increasing the range of the similarity search (becoming less strict) as you go from left to right.
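The threshold sweep behind these plots can be sketched as follows (with toy similarity scores, not the real Gravity Spy data): for each threshold we record the fraction of the wanted class returned and the fraction of everything else rejected.

```python
import numpy as np

def retrieval_curve(scores_wanted, scores_rest):
    """For each similarity threshold, return (fraction of the wanted
    class returned, fraction of the rest rejected)."""
    thresholds = np.sort(np.concatenate([scores_wanted, scores_rest]))
    returned = np.array([(scores_wanted >= t).mean() for t in thresholds])
    rejected = np.array([(scores_rest < t).mean() for t in thresholds])
    return returned, rejected

# Toy scores: the wanted class tends to score higher than the rest.
rng = np.random.default_rng(3)
wanted = rng.normal(2.0, 1.0, 200)
rest = rng.normal(0.0, 1.0, 5000)
returned, rejected = retrieval_curve(wanted, rest)
# Loosest threshold: everything is returned, nothing is rejected.
print(returned[0], rejected[0])
```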

Performance of the similarity search for Raven Peck (left) and Water Jet (right) glitches: the fraction of known glitches of the desired class that have a higher similarity score (compared to an example of that glitch class) than a given percentage of the full data set. Results are shown for three different ways of defining similarity: the DIRECT machine-learning algorithm feature space (thick line), a principal component analysis (medium line) and a comparison of pixels (thin line). Adapted from Figure 3 of Coughlin et al. (2019).

For the Raven Peck, the similarity search always performs well. We have 50% of Raven Pecks returned while rejecting 99% of the total set of glitches, and we can get the full set while rejecting 92% of the total set! The performance is pretty similar between the different ways of defining feature space. Raven Pecks are easy to spot.

Water Jets are more difficult. When we have 50% of Water Jets returned by the search, our machine-learning feature space can still reject almost all glitches. The simpler approaches do much worse, and will only reject about 30% of the full data set. To get the full set of Water Jets we would need to loosen the similarity search so that it only rejects 55% of the full set using our machine-learning feature space; for the simpler approaches we’d basically get the full set of glitches back. They do not do a good job at narrowing down the hunt for glitches. Despite our suspicion that our machine-learning approach would struggle, it still seems to do a decent job [bonus note for experts].

### Do try this at home

Having developed and tested our similarity search tool, we have now made it live. Citizen scientists can use it to hunt down new glitch classes. Several new glitch classes have been identified in data from LIGO and Virgo's (currently ongoing) third observing run. If you are looking for a new project, why not give it a go yourself? (Or get your students to give it a go; I've had some reasonable results with high-schoolers.) There is the real possibility that your work could help us with the next big gravitational-wave discovery.

arXiv: arXiv:1903.04058 [astro-ph.IM]
Journal: Physical Review D; 99(8):082002(8); 2019
Websites: Gravity Spy; Gravity Spy Tools
Gravity Spy blog: Introducing Gravity Spy Tools
Current stats: Gravity Spy has 15,500 registered users, who have made 4.4 million glitch classifications, leading to 200,000 successfully identified glitches.

### Bonus notes

#### Signals and glitches

The best example of a gravitational wave overlapping a glitch is GW170817. The glitch meant that the signal in the LIGO Livingston detector wasn't immediately recognised. Fortunately, the signal in the Hanford detector was easy to spot. The glitch was analysed and categorised in Gravity Spy. It is a simple glitch, so it wasn't too difficult to remove from the data. As our detectors become more sensitive, so that detections become more frequent, we expect that signals overlapping with glitches will become a more common occurrence. Unless we can eliminate glitches, it is only a matter of time before we get a glitch that prevents us from analysing an important signal.

In the third observing run of LIGO and Virgo, we send out automated alerts when we have a new gravitational-wave candidate. Astronomers can then pounce into action to see if they can spot anything coinciding with the source. It is important to quickly check the state of the instruments to ensure we don’t have a false alarm. To help with this, a data quality report is automatically prepared, containing many diagnostics. The classification from the Gravity Spy algorithm is one of many pieces of information included. It is the one I check first.

#### The Falcon

Excellent Gravity Spy moderator EcceruElme suggested a new glitch class, Falcon. This suggestion was followed up by Oli Patane, who found that all the examples identified occurred between 6:30 am and 8:30 am on 20 June 2017 in the Hanford detector. The instrument was misbehaving at the time. To solve this, the detector was taken out of observing mode and relocked (the equivalent of switching it off and on again). Since this glitch class was only found in this one 2-hour window, we've not added it as a class. I love how it was possible to identify this problematic stretch of time using only Gravity Spy images (which don't identify when they are from). I think this could be the seed of a good detective story. The Hanfordese Falcon?

Examples of the proposed Falcon glitch class, illustrating the key features (and where the name comes from). This new glitch class was suggested by Gravity Spy citizen scientist EcceruElme.

#### Distance measure

We chose a cosine distance to measure similarity in feature space. We found this worked better than a Euclidean metric. Possibly because for identifying classes it is more important to have the right mix of features, rather than how significant the individual features are. However, we didn’t do a systematic investigation of the optimal means of measuring similarity.
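For illustration, here is a minimal sketch of the two distance measures (nothing Gravity Spy specific). Two feature vectors with the same mix of features but different overall strengths have (near) zero cosine distance, while their Euclidean distance is not zero:

```python
# A minimal sketch of the two similarity measures: cosine distance cares
# only about the direction (the mix of features), while Euclidean distance
# also cares about overall magnitude.
import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def euclidean_distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Same mix of features, different strengths: parallel vectors.
u = [1.0, 2.0, 3.0]
v = [2.0, 4.0, 6.0]
print(cosine_distance(u, v))     # ~0: same direction
print(euclidean_distance(u, v))  # > 0: different magnitude
```

This is consistent with the intuition in the note: for deciding class membership, the mix of features matters more than their overall strength.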

#### Retraining the neural net

We tested the performance of the machine-learning feature space in the similarity search after modifying properties of our machine-learning algorithm. The algorithm we are using is a deep multiview convolutional neural net. We switched the activation function in the fully connected layer of the net, trying tanh and leaky ReLU. We also varied the number of training rounds and the number of pairs of similar and dissimilar images that are drawn from the training set each round. We found that there was little variation in results. We found that leaky ReLU performed a little better than tanh, possibly because it covers a larger dynamic range, and so can allow for cleaner separation of similar and dissimilar features. The number of training rounds and pairs makes negligible difference, possibly because the classes are sufficiently distinct that you don't need many inputs to identify the basic features to tell them apart. Overall, our results appear robust. The machine-learning approach works well for the similarity search.
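To illustrate the dynamic-range point: tanh saturates between −1 and 1, while leaky ReLU is unbounded for positive inputs (the negative-side slope of 0.01 below is the common default, an assumption on my part rather than the value used in the actual net):

```python
# Comparing the two activation functions: tanh squashes everything into
# (-1, 1), while leaky ReLU passes positive values through unchanged and
# only attenuates negative ones (slope 0.01 is a common default).
import math

def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x

print(math.tanh(10.0), leaky_relu(10.0))   # ~1.0 vs 10.0
print(math.tanh(-10.0), leaky_relu(-10.0)) # ~-1.0 vs -0.1
```

A large activation from tanh looks much like a moderate one, whereas leaky ReLU keeps the difference, which may be what allows the cleaner separation of features.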

# GW190425—First discovery from O3

The first gravitational wave detection of LIGO and Virgo's third observing run (O3) has been announced: GW190425! [bonus note] The signal comes from the inspiral of two objects which have a combined mass of about 3.4 times the mass of our Sun. These masses are in the range expected for neutron stars, making GW190425 the second observation of gravitational waves from a binary neutron star inspiral (after GW170817). While the individual masses of the two components agree with the masses of neutron stars found in binaries, the overall mass of the binary (about 3.4 times the mass of our Sun) is noticeably larger than that of any previously known binary neutron star system. GW190425 may be the first evidence for multiple ways of forming binary neutron stars.

### The gravitational wave signal

On 25 April 2019 the LIGO–Virgo network observed a signal. This was promptly shared with the world as candidate event S190425z [bonus note]. The initial source classification was as a binary neutron star. This caused a flurry of excitement in the astronomical community [bonus note], as the smashing together of two neutron stars should lead to the emission of light. Unfortunately, the sky localization was HUGE (the initial 90% area was about a quarter of the sky, and the refined localization provided the next day wasn't much of an improvement), and the distance was four times that of GW170817 (meaning that any counterpart would be about 16 times fainter). Covering all this area is almost impossible. No convincing counterpart has been found [bonus note].

Early sky localization for GW190425. Darker areas are more probable. This localization was circulated in GCN 24228 on 26 April and was used to guide follow-up, even though it covers a huge amount of the sky (the 90% area is about 18% of the sky).

The localization for GW190425 was so large because LIGO Hanford (LHO) was offline at the time. Only LIGO Livingston (LLO) and Virgo were online. The Livingston detector was about 2.8 times more sensitive than Virgo, so pretty much all the information came from Livingston. I'm looking forward to when we have a larger network of detectors at comparable sensitivity online (we really need three detectors observing for a good localization).

We typically search for gravitational waves by looking for coincident signals in our detectors. When looking for binaries, we have templates for what the signals look like, so we match these to the data and look for good overlaps. The overlap is quantified by the signal-to-noise ratio. Since our detectors contain all sorts of noise, you'd expect them to randomly match templates from time to time. On average, you'd expect the signal-to-noise ratio to be about 1. The higher the signal-to-noise ratio, the less likely it is that a random noise fluctuation could account for it.
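A toy version of matched filtering shows the idea (this ignores the noise weighting and all the other care taken in real searches): slide a template through the data and keep the largest normalised overlap.

```python
# Toy sketch of matched filtering (illustrative only, no noise weighting):
# slide a template along the data and record the best normalised overlap.
import math
import random

def matched_filter_snr(data, template):
    """Maximum normalised overlap of the template with the data."""
    norm = math.sqrt(sum(t * t for t in template))
    best = 0.0
    for i in range(len(data) - len(template) + 1):
        overlap = sum(d * t for d, t in zip(data[i:], template)) / norm
        best = max(best, abs(overlap))
    return best

random.seed(0)
template = [math.sin(0.3 * i * i) for i in range(100)]  # a toy "chirp"
noise = [random.gauss(0, 1) for _ in range(300)]        # white noise
signal = list(noise)
for i, t in enumerate(template):
    signal[100 + i] += 5 * t                            # inject the chirp

print(matched_filter_snr(noise, template))   # small: just noise
print(matched_filter_snr(signal, template))  # large: template matches
```

With only noise, the best overlap hovers around the few-sigma level expected from random matches; with an injected chirp, it jumps well above that.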

Our search algorithms don’t just rely on the signal-to-noise ratio. The complication is that there are frequently glitches in our detectors. Glitches can be extremely loud, and so can have a significant overlap with a template, even though they don’t look anything like one. Therefore, our search algorithms also look at the overlap for different parts of the template, to check that these match the expected distribution (for example, there’s not one bit which is really loud, while the others don’t match). Each of our different search algorithms has their own way of doing this, but they are largely based around the ideas from Allen (2005), which is pleasantly readable if you like these sort of things. It’s important to collect lots of data so that we know the expected distribution of signal-to-noise ratio and signal-consistency statistics (sometimes things change in our detectors and new types of noise pop up, which can confuse things).

It is extremely important to check the state of the detectors at the time of an event candidate. In O3, we have unfortunately had to retract various candidate events after we've identified that our detectors were in a disturbed state. The signal consistency checks take care of most of the instances, but they are not perfect. Fortunately, it is usually easy to identify that there is a glitch—the difficult question is whether there is a glitch on top of a signal (as was the case for GW170817). Our checks revealed nothing wrong with the detectors which could explain the signal (there was a small glitch in Livingston about 60 seconds before the merger time, but this doesn't overlap with the signal).

Now, the search that identified GW190425 was actually just looking for single-detector events: outliers in the distribution of signal-to-noise ratio and signal-consistency as expected for signals. This was a Good Thing™. While the signal-to-noise ratio in Livingston was 12.9 (pretty darn good), the signal-to-noise ratio in Virgo was only 2.5 (pretty meh) [bonus note]. This is below the threshold (signal-to-noise ratio of 4) the search algorithms use to look for coincidences (a threshold is there to cut computational expense: the lower the threshold, the more triggers need to be checked) [bonus note]. The Bad Thing™ about GW190425 being found by the single-detector search, and being missed by the usual multiple detector search, is that it is much harder to estimate the false-alarm rate—it's much harder to rule out the possibility of some unusual noise when you don't have another detector to cross-reference against. We don't have a final estimate for the significance yet. The initial estimate was 1 in 69,000 years (which relies on significant extrapolation). What we can be certain of is that this event is a noticeable outlier: across the whole of O1, O2 and the first 50 days of O3, it comes second only to GW170817. In short, we can say that GW190425 is worth betting on, but I'm not sure (yet) how heavily you want to bet.

Detection statistics for GW190425 showing how it stands out from the background. The left plot shows the signal-to-noise ratio (SNR) and signal-consistency statistic from the GstLAL algorithm, which made the detection. The coloured density plot shows the distribution of background triggers. The right plot shows the detection statistic from PyCBC, which combines the SNR and their signal-consistency statistic. The lines show the background distributions. GW190425 is more significant than everything apart from GW170817. Adapted from Figures 1 and 6 of the GW190425 Discovery Paper.

I’m always cautious of single-detector candidates. If you find a high-mass binary black hole (which would be an extremely short template), or something with extremely high spins (indicating that the templates don’t match unless you push to the bounds of what is physical), I would be suspicious. Here, we do have consistent Virgo data, which is good for backing up what is observed in Livingston. It may be a single-detector detection, but it is a multiple-detector observation. To further reassure ourselves about GW190425, we ran our full set of detection algorithms on the Livingston data to check that they all find similar signals, with reasonable signal-consistency test values. Indeed, they do! The best explanation for the data seems to be a gravitational wave.

### The source

Given that we have a gravitational wave, where did it come from? The best-measured property of a binary inspiral is its chirp mass—a particular combination of the two component masses. For GW190425, this is $1.44^{+0.02}_{-0.02}$ solar masses (quoting the 90% range for parameters). This is larger than GW170817’s $1.186^{+0.001}_{-0.001}$ solar masses: we have a heavier binary.
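For reference, the chirp mass is $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}$. A quick check (with illustrative equal-mass components, not the actual inferred masses) shows how the quoted values correspond to neutron-star-scale components:

```python
# The chirp mass is the combination of component masses best measured from
# the inspiral: M_chirp = (m1 * m2)**(3/5) / (m1 + m2)**(1/5).

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Equal-mass binaries chosen for illustration: components of about 1.65
# solar masses give GW190425's chirp mass, and about 1.365 solar masses
# give GW170817's.
print(round(chirp_mass(1.65, 1.65), 2))   # 1.44
print(round(chirp_mass(1.365, 1.365), 3)) # 1.188
```

For equal masses the formula reduces to $\mathcal{M} = m/2^{1/5}$, so the chirp mass sits a little below the individual component masses.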

Estimated masses for the two components in the binary. We show results for two different spin limits. The two-dimensional plot shows the 90% probability contour, which follows a line of constant chirp mass. The one-dimensional plot shows individual masses; the dotted lines mark 90% bounds away from equal mass. The masses are in the range expected for neutron stars. Figure 3 of the GW190425 Discovery Paper.

Figuring out the component masses is trickier. There is a degeneracy between the spins and the mass ratio—by increasing the spins of the components it is possible to get more extreme mass ratios to fit the signal. As we did for GW170817, we quote results with two ranges of spins. The low-spin results use a maximum spin of 0.05, which matches the range of spins we see for binary neutron stars in our Galaxy, while the high-spin results use a limit of 0.89, which safely encompasses the upper limit for neutron stars (if they spin faster than about 0.7 they'll tear themselves apart). We find that the heavier component of the binary has a mass of $1.62$–$1.88$ solar masses with the low-spin assumption, and $1.61$–$2.52$ solar masses with the high-spin assumption; the lighter component has a mass of $1.45$–$1.69$ solar masses with the low-spin assumption, and $1.12$–$1.68$ solar masses with the high-spin assumption. These are in the range of masses expected for neutron stars.

Without an electromagnetic counterpart, we cannot be certain that we have two neutron stars. We could tell from the gravitational wave by measuring the imprint in the signal left by the tidal distortion of the neutron star. Black holes have a tidal deformability of 0, so measuring a nonzero tidal deformability would be the smoking gun that we have a neutron star. Unfortunately, the signal isn't loud enough to find any evidence of these effects. This isn't surprising—we couldn't say anything for GW170817, without assuming its source was a binary neutron star, and GW170817 was louder and had a lower mass source (where tidal effects are easier to measure). We did check—it's probably not the case that the components were made of marshmallow, but there's not much more we can say (although we can still make pretty simulations). It would be really odd to have black holes this small, but we can't rule out that at least one of the components was a black hole.

Two binary neutron stars is the most likely explanation for GW190425. How does it compare to other binary neutron stars? Looking at the 17 known binary neutron stars in our Galaxy, we see that GW190425's source is much heavier. This is intriguing—could there be a different, previously unknown formation mechanism for this binary? Perhaps the survey of Galactic binary neutron stars (thanks to radio observations) is incomplete? Maybe the more massive binaries form in close binaries, which are hard to spot in the radio (as the neutron star moves so quickly, the radio signal gets smeared out), or maybe such heavy binaries only form from stars with low metallicity (few elements heavier than hydrogen and helium) from earlier in the Universe's history, so that they are no longer emitting in the radio today? I think it's too early to tell—but it's still fun to speculate. I expect there'll be a flurry of explanations out soon.

Comparison of the total binary mass of the 10 known binary neutron stars in our Galaxy that will merge within a Hubble time and GW190425’s source (with both the high-spin and low-spin assumptions). We also show a Gaussian fit to the Galactic binaries. GW190425’s source is higher mass than previously known binary neutron stars. Figure 5 of the GW190425 Discovery Paper.

Since the source seems to be an outlier in terms of mass compared to the Galactic population, I'm a little cautious about using the low-spin results—if this sample doesn't reflect the full range of masses, perhaps it doesn't reflect the full range of spins too? I think it's good to keep an open mind. The fastest spinning neutron star we know of has a spin of around 0.4; maybe neutron stars in binaries can spin this fast too?

One thing we can measure is the distance to the source: $160^{+70}_{-70}~\mathrm{Mpc}$. That means the signal was travelling across the Universe for about half a billion years. That distance is as many times bigger than the diameter of Earth's orbit about the Sun as the diameter of the orbit is than the height of a LEGO brick. Space is big.
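You can check the arithmetic of this comparison yourself (the brick height of 9.6 mm is my assumption for a standard LEGO brick):

```python
# Sanity check of the scale comparison: distance to GW190425's source
# versus the diameter of Earth's orbit, and that diameter versus the
# height of a LEGO brick (9.6 mm assumed).
MPC_IN_M = 3.086e22   # metres in a megaparsec
AU_IN_M = 1.496e11    # metres in an astronomical unit

distance = 160 * MPC_IN_M      # distance to the source
orbit_diameter = 2 * AU_IN_M   # diameter of Earth's orbit
brick = 9.6e-3                 # LEGO brick height in metres

ratio_1 = distance / orbit_diameter
ratio_2 = orbit_diameter / brick
print(f"{ratio_1:.1e}, {ratio_2:.1e}")  # both ~10^13, so the analogy holds
```

Both ratios come out around $10^{13}$ (within a factor of a couple), so the analogy holds to the accuracy you'd expect of a LEGO brick.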

We have now observed two gravitational wave signals from binary neutron stars. What does the new observation mean for the merger rate of binary neutron stars? To go from an observed number of signals to how many binaries are out there in the Universe, we need to know how sensitive our detectors are to the sources. This depends on the masses of the sources, since more massive binaries produce louder signals. We're not sure of the mass distribution for binary neutron stars yet. If we assume a uniform mass distribution for neutron stars between 0.8 and 2.3 solar masses, then at the end of O2 we estimated a merger rate of $110$–$2520~\mathrm{Gpc^{-3}\,yr^{-1}}$. Now, adding in the first 50 days of O3, we estimate the rate to be $250$–$2470~\mathrm{Gpc^{-3}\,yr^{-1}}$, so roughly the same (which is nice) [bonus note].

Since GW190425's source looks rather different from other neutron stars, you might be interested in breaking up the merger rates to look at different classes. Using measured masses, we can construct rates for GW170817-like (matching the usual binary neutron star population) and GW190425-like binaries (we did something similar for binary black holes after our first detection). The GW170817-like rate is $110$–$2500~\mathrm{Gpc^{-3}\,yr^{-1}}$, and the GW190425-like rate is lower at $70$–$4600~\mathrm{Gpc^{-3}\,yr^{-1}}$. Combining the two (assuming that binary neutron stars are all one class or the other) gives an overall rate of $290$–$2810~\mathrm{Gpc^{-3}\,yr^{-1}}$, which is not too different from assuming the uniform distribution of masses.

Given these rates, we might expect some more nice binary neutron star signals in the O3 data. There is a lot of science to come.

### Future mysteries

GW190425 hints that there might be a greater variety of binary neutron stars out there than previously thought. As we collect more detections, we can start to reconstruct the mass distribution. Using this, together with the merger rate, we can start to pin down the details of how these binaries form.

As we find more signals, we should also find a few which are loud enough to measure tidal effects. With these, we can start to figure out the properties of the Stuff™ which makes up neutron stars, and potentially figure out if there are small black holes in this mass range. Discovering smaller black holes would be extremely exciting—these wouldn’t be formed from collapsing stars, but potentially could be remnants left over from the early Universe.

Probability distributions for neutron star masses and radii (blue for the more massive neutron star, orange for the lighter), assuming that GW190425's source is a binary neutron star. The left plots use the high-spin assumption, the right plots use the low-spin assumption. The top plots use equation-of-state insensitive relations, and the bottom use parametrised equation-of-state models incorporating the requirement that neutron stars can be 1.97 solar masses. Similar analyses were done in the GW170817 Equation-of-state Paper. In the one-dimensional plots, the dashed lines indicate the priors. Figure 16 of the GW190425 Discovery Paper.

With more detections (especially when we have more detectors online), we should also be lucky enough to have a few which are well localised. These are the events for which we are most likely to find an electromagnetic counterpart. As our gravitational-wave detectors become more sensitive, we can detect sources further out. These are much harder to find counterparts for, so we mustn't expect every detection to have a counterpart. However, for nearby sources, we will be able to localise them better, and so increase our odds of finding a counterpart. From such multimessenger observations we can learn a lot. I'm especially interested to see how typical GW170817 really was.

O3 might see gravitational wave detection becoming routine, but that doesn’t mean gravitational wave astronomy is any less exciting!

Title: GW190425: Observation of a compact binary coalescence with total mass ~ 3.4 solar masses
Journal: Astrophysical Journal Letters; 892(1):L3(24); 2020
arXiv: arXiv:2001.01761 [astro-ph.HE] [bonus note]
Science summary: GW190425: The heaviest binary neutron star system ever seen?
Data release: Gravitational Wave Open Science Center; Parameter estimation results
Rating: 🥇😮🥂🥇

### Bonus notes

#### Exceptional events

The plan for publishing papers in O3 is that we would write a paper for any particularly exciting detections (such as a binary neutron star), and then put out a catalogue of all our results later. The initial discovery papers wouldn't be the full picture, just the key details, so that the entire community could get working on them. Our initial timeline was to get the individual papers out in four months. That's not going so well: it turns out that the most interesting events have lots of interesting properties, which take some time to understand. Who'd have guessed?

We’re still working on getting papers out as soon as possible. We’ll be including full analyses, including results which we can’t do on these shorter timescales in our catalogue papers. The catalogue paper for the first half of O3 (O3a) is currently pencilled in for April 2020.

#### Naming conventions

The name of a gravitational wave signal is set by the date it is observed. GW190425 is hence the gravitational wave (GW) observed on 2019 April 25th. Our candidate alerts don't start out with the GW prefix, as we still need to do lots of work to check if they are real. Their names start with S for superevent (not for hope) [bonus bonus note], then the date, and then a letter indicating the order it was uploaded to our database of candidates (we upload candidates with false alarm rates of around one per hour, so there are multiple database entries per day, and most are false alarms). S190425z was the 26th superevent uploaded on 2019 April 25th.
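For single letters, the suffix-to-order mapping is just position in the alphabet; I assume the database carries on with double letters (aa, ab, …) once z is used up, which the sketch below also handles:

```python
# Hedged sketch of the superevent suffix convention: single letters a-z
# give upload numbers 1-26; continuing with aa, ab, ... beyond 26 is my
# assumption about the convention, not something stated in the post.
import string

def upload_number(suffix):
    """Convert a lowercase letter suffix to its upload number that day."""
    n = 0
    for ch in suffix:
        n = n * 26 + string.ascii_lowercase.index(ch) + 1
    return n

print(upload_number("z"))   # 26: S190425z was the 26th upload that day
print(upload_number("aa"))  # 27, if the convention continues this way
```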

What is a superevent? We call anything flagged by our detection pipelines an event. We have multiple detection pipelines, and often multiple pipelines produce events for the same stretch of data (you’d expect this to happen for real signals). It was rather confusing having multiple events for the same signal (especially when trying to quickly check a candidate to issue an alert), so in O3 we group together events from similar times into SUPERevents.

#### GRB 190425?

Pozanenko et al. (2019) suggest that a gamma-ray burst was observed by INTEGRAL (first reported in GCN 24170). The INTEGRAL team themselves don't find anything in their data, and seem sceptical of the significance of the detection claim. The significance of the claim seems to be based on there being two peaks in the data (one about 0.5 seconds after the merger, one 5.9 seconds after the merger), but it's not clear to me why that should be the case. Nothing was observed by Fermi, possibly because the source was obscured by the Earth for them. I'm interested in seeing more study of this possible gamma-ray burst.

#### EMMA 2019

At the time of GW190425, I was attending the first day of the Enabling Multi-Messenger Astrophysics in the Big Data Era Workshop. This was a meeting bringing together many involved in the search for counterparts to gravitational wave events. The alert for S190425z caused some excitement. I don't think there was much sleep that week.

#### Signal-to-noise ratio ratios

The signal-to-noise ratio reported from our search algorithm for LIGO Livingston is 12.9, and the same code gives 2.5 for Virgo. Virgo was about 2.8 times less sensitive than Livingston at the time, so you might be wondering why we have a signal-to-noise ratio of 2.5, instead of 12.9/2.8 = 4.6? The reason is that our detectors are not equally sensitive in all directions. They are most sensitive to sources directly above and below, and less sensitive to sources from the sides. The relative signal-to-noise ratios, together with the times of arrival at the different detectors, help us to figure out the direction the signal comes from.
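The directional sensitivity can be illustrated with the standard antenna response for an interferometer with perpendicular arms (the textbook formula, a simplification of what goes into real parameter estimation, which also accounts for each detector's location and orientation on Earth):

```python
# Textbook antenna response of an interferometer with perpendicular arms
# along x and y: theta is measured from overhead, phi is the azimuth, and
# psi is the polarisation angle. (A simplified illustration; sign
# conventions vary between references.)
import math

def antenna_response(theta, phi, psi):
    fplus = (0.5 * (1 + math.cos(theta) ** 2)
             * math.cos(2 * phi) * math.cos(2 * psi)
             - math.cos(theta) * math.sin(2 * phi) * math.sin(2 * psi))
    fcross = (0.5 * (1 + math.cos(theta) ** 2)
              * math.cos(2 * phi) * math.sin(2 * psi)
              + math.cos(theta) * math.sin(2 * phi) * math.cos(2 * psi))
    return fplus, fcross

# Directly overhead: full response. In the detector's plane: much weaker.
print(antenna_response(0.0, 0.0, 0.0))          # (1.0, 0.0)
print(antenna_response(math.pi / 2, 0.0, 0.0))  # (0.5, 0.0)
```

A source in an unlucky direction can thus produce a much smaller signal-to-noise ratio than the detector's overall sensitivity would suggest, which is what happened for Virgo here.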

#### Detection thresholds

In O2, GW170818 was only detected by GstLAL because its signal-to-noise ratios in Hanford and Virgo (4.1 and 4.2 respectively) were below the threshold used by PyCBC for their analysis (in O2 it was 5.5). Subsequently, PyCBC has been rerun on the O2 data to produce the second Open Gravitational-wave Catalog (2-OGC). This is an analysis performed by PyCBC experts both inside and outside the LIGO Scientific & Virgo Collaboration. For this, a threshold of 4 was used, and consequently they found GW170818, which is nice.

I expect that if the threshold for our usual multiple-detector detection pipelines were lowered to ~2, they would find GW190425. Doing so would make the analysis much trickier, so I’m not sure if anyone will ever attempt this. Let’s see. Perhaps the 3-OGC team will be feeling ambitious?

#### Rates calculations

In comparing rates calculated for this paper and those from our end-of-O2 paper, my student Chase Kimball (who calculated the new numbers) would like me to remember that it's not exactly an apples-to-apples comparison. The older numbers evaluated our sensitivity to gravitational waves by doing a large number of injections: we simulated signals in our data and saw what fraction our search algorithms could pick out. The newer numbers used an approximation (a simple signal-to-noise ratio threshold) to estimate our sensitivity. Performing injections is computationally expensive, so we're saving that for our end-of-run papers. Given that we currently have only two detections, the uncertainty on the rates is large, and so we don't need to worry too much about the details of calculating the sensitivity. We did calibrate our approximation to past injection results, so I think it's really an apples-to-pears-carved-into-the-shape-of-apples comparison.

#### Paper release

The original plan for GW190425 was to have the paper published before the announcement, as we did with our early detections. The timeline neatly aligned with the AAS meeting, so that seemed like a good place to make the announcement. We managed to get the paper submitted, and referee reports back, but we didn't quite get everything done in time for the AAS announcement, so Plan B was to have the paper appear on the arXiv just after the announcement. Unfortunately, there was a problem uploading files to the arXiv (too large), and by the time that was fixed the posting deadline had passed. Therefore, we went with Plan C of sharing the paper on the LIGO DCC. Next time you're struggling to upload something online, remember that it happens to Nobel-Prize-winning scientific collaborations too.

On the question of when it is best to share a paper, I'm still not decided. I like the idea of being peer-reviewed before making a big splash in the media. I think it is important to show that science works by having lots of people study a topic before coming to a consensus. Evidence needs to be evaluated by independent experts. On the other hand, engaging the entire community can lead to greater insights than a couple of journal reviewers, and posting to the arXiv gives you the opportunity to make adjustments before you have the finished article.

I think I am leaning towards early posting in general—the amount of internal review that our Collaboration papers receive satisfies my requirement that scientists are seen to be careful, and I like getting a wider range of comments—I think this leads to having the best paper in the end.

#### S

The joke that S stands for super, not hope, is recycled from an article I wrote for the LIGO Magazine. The editor, Hannah Middleton, wasn't sure that many people would get the reference, but graciously printed it anyway. Did people get it, or do I need to fly around the world really fast?

# Prospects for observing and localizing gravitational-wave transients with Advanced LIGO, Advanced Virgo and KAGRA

This paper, known as the Observing Scenarios Document within the Collaboration, outlines the observing plans of the ground-based detectors over the coming decade. If you want to search for electromagnetic or neutrino signals from our gravitational-wave sources, this is the paper for you. It is a living review—a document that is continuously updated.

This is the second published version, the big changes since the last version are

1. We have now detected gravitational waves
2. We have observed our first gravitational wave with a multimessenger counterpart [bonus note]
3. We now include KAGRA, along with LIGO and Virgo

As you might imagine, these are quite significant updates! The first showed that we can do gravitational-wave astronomy. The second showed that we can do exactly the science this paper is about. The third makes this the first joint publication of the LIGO Scientific, Virgo and KAGRA Collaborations—hopefully the first of many to come.

I led both this and the previous version. In my blog on the previous version, I explained how I got involved, and the long road that a collaboration must follow to get published. In this post, I'll give an overview of the key details from the new version together with some behind-the-scenes background (working as part of a large scientific collaboration allows you to do amazing science, but it can also be exhausting). If you'd like a digest of this paper's science, check out the LIGO science summary.

### Commissioning and observing phases

The first section of the paper outlines the progression of detector sensitivities. The instruments are incredibly sensitive—we’ve never made machines to make these types of measurements before, so it takes a lot of work to get them to run smoothly. We can’t just switch them on and have them work at design sensitivity [bonus note].

Target evolution of the Advanced LIGO and Advanced Virgo detectors with time. The lower the sensitivity curve, the further away we can detect sources. The distances quoted are binary neutron star (BNS) ranges, the average distance we could detect a binary neutron star system. The BNS-optimized curve is a proposal to tweak the detectors for finding BNSs. Figure 1 of the Observing Scenarios Document.

The plots above show the planned progression of the different detectors. We had to get these agreed before we could write the later parts of the paper because the sensitivity of the detectors determines how many sources we will see and how well we will be able to localize them. I had anticipated that KAGRA would be the most challenging here, as we had not previously put together this sequence of curves. However, this was not the case; instead it was Virgo which was tricky. They had a problem with the silica fibres which suspended their mirrors (they snapped, which is definitely not what you want). The silica fibres were replaced with steel ones, but it wasn't immediately clear what sensitivity they'd achieve and when. The final word was they'd observe in August 2017 and that their projections were unchanged. I was sceptical, but they did pull it out of the bag! We had our first clear three-detector observation of a gravitational wave on 14 August 2017. Bravo Virgo!

Plausible time line of observing runs with Advanced LIGO (Hanford and Livingston), Advanced Virgo and KAGRA. It is too early to give a timeline for LIGO India. The numbers above the bars give binary neutron star ranges (italic for achieved, roman for target); the colours match those in the plot above. Currently our third observing run (O3) looks like it will start in early 2019; KAGRA might join with an early sensitivity run at the end of it. Figure 2 of the Observing Scenarios Document.

### Searches for gravitational-wave transients

The second section explains our data analysis techniques: how we find signals in the data, how we work out probable source locations, and how we communicate these results to the broader astronomical community—from the start of our third observing run (O3), information will be shared publicly!

The information in this section hasn’t changed much [bonus note]. There is a nice collection of references on the follow-up of different events, including GW170817 (I’d recommend my blog for more on the electromagnetic story). The main update I wanted to include was information on the detection of our first gravitational waves. It turned out to be more difficult than I imagined to come up with a plot which showed results from the five different search algorithms (two which used templates, and three which did not) which found GW150914, and harder still to make a plot which everyone liked. This plot became somewhat infamous for the amount of discussion it generated. I think we ended up with something which was a good compromise and clearly shows our detections sticking out above the background of noise.

Offline transient search results from our first observing run (O1). The plot shows the number of events found versus false alarm rate: if there were no gravitational waves we would expect the points to follow the dashed line. The left panel shows the results of the templated search for compact binary coalescences (binary black holes, binary neutron stars and neutron star–black hole binaries); the right panel shows the unmodelled burst search. GW150914, GW151226 and LVT151012 are found by the templated search; GW150914 is also seen in the burst search. Arrows indicate bounds on the significance. Figure 3 of the Observing Scenarios Document.

### Observing scenarios

The third section brings everything together and looks at what the prospects are for (gravitational-wave) multimessenger astronomy during each observing run. It’s really all about the big table.

Summary of different observing scenarios with the advanced detectors. We assume a 70–75% duty factor for each instrument (including Virgo for the second scenario’s sky localization, even though it only joined our second observing run for the final month). Table 3 from the Observing Scenarios Document.

I think there are three really awesome take-aways from this:

1. Actual binary neutron stars detected = 1. We did it!
2. Using the rates inferred from our observations so far (including GW170817), once we have the full five-detector network of LIGO-Hanford, LIGO-Livingston, Virgo, KAGRA and LIGO-India, we could be detecting 11–180 binary neutron stars a year. That’s something like between one a month and one every other day! I’m kind of scared…
3. With the five-detector network the sky localization is really good. The median localization is about 9–12 square degrees, about the area the LSST could cover in a single pointing! This really shows the benefit of adding more detectors to the network. The improvement comes not because a source is much better localized with five detectors than with four, but because when you have five detectors you almost always have at least three detectors (the number needed to get a good triangulation) online at any moment, so you get a nice localization for pretty much everything.
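Both of these take-aways are easy to sanity-check with a little arithmetic. The sketch below is my own back-of-the-envelope illustration (not from the paper), treating the detectors as independent, each online with an assumed 75% duty factor from the range quoted above:

```python
from math import comb

def p_at_least(k, n, p):
    """Binomial probability that at least k of n independent
    detectors are online, each with duty factor p."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

# Point 2: convert the inferred rate range into mean time between detections.
for rate in (11, 180):
    print(f"{rate}/yr -> one every {365.25 / rate:.0f} days")

# Point 3: chance of having three or more detectors online for triangulation.
duty = 0.75  # assumed per-detector duty factor (70-75% in the table)
for n in (4, 5):
    print(f"{n}-detector network: P(>=3 online) = {p_at_least(3, n, duty):.2f}")
# With these assumptions, roughly 0.74 for four detectors versus 0.90 for
# five: the fifth detector mostly buys you more time with three-plus online.
```

Under this simple independence assumption, a fifth detector lifts the chance of a triangulation-capable network from about three-quarters of the time to about nine-tenths, which is why the localizations improve for pretty much everything.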

In summary, the prospects for observing and localizing gravitational-wave transients are pretty great. If you are an astronomer, make the most of the quiet before O3 begins next year.

arXiv: 1304.0670 [gr-qc]
Journal: Living Reviews in Relativity; 21:3(57); 2018
Science summary: A bright today and brighter tomorrow: Prospects for gravitational-wave astronomy with Advanced LIGO, Advanced Virgo, and KAGRA
Prospects for the next update: After two updates, I’ve stepped down from preparing the next one. Wooh!

### Bonus notes

#### GW170817 announcement

The announcement of our first multimessenger detection came between us submitting this update and us getting referee reports. We wanted an updated version of this paper, with the current details of our observing plans, to be available for our astronomer partners to be able to cite when writing their papers on GW170817.

Predictably, when the referee reports came back, we were told we really should include reference to GW170817. This type of discovery is exactly what this paper is about! There was an avalanche of results surrounding GW170817, so I had to read through a lot of papers. The reference list swelled from 8 to 13 pages, but this effort was handy for my blog writing. After including all these new results, it really felt like this was version 2.5 of the Observing Scenarios, rather than version 2.

#### Design sensitivity

We use the term design sensitivity to indicate the performance the current detectors were designed to achieve. They are the targets we aim to achieve with Advanced LIGO, Advanced Virgo and KAGRA. One thing I’ve had to try to train myself not to say is that design sensitivity is the final sensitivity of our detectors. Teams are currently working on plans for how we can upgrade our detectors beyond design sensitivity. Reaching design sensitivity will not be the end of our journey.

#### Binary black holes vs binary neutron stars

Our first gravitational-wave detections were from binary black holes. Therefore, when we were starting on this update there was a push to switch from focusing on binary neutron stars to binary black holes. I resisted this, partially because I’m lazy, but mostly because I still thought that binary neutron stars were our best bet for multimessenger astronomy. This worked out nicely.