GW190814—The mystery of a 2.6 solar mass compact object

GW190814 is an exceptional discovery from the third observing run (O3) of the LIGO and Virgo gravitational wave detectors. The signal came from the coalescence of a binary made up of one component about 23 times the mass of our Sun (solar masses) and another about 2.6 solar masses. The more massive component would be a black hole, similar to past discoveries. The less massive component, however, we’re not sure about. This is a mass range where observations have been lacking. It could be a neutron star. In that case, GW190814 would be the first time we have seen a neutron star–black hole binary. It would also be the most massive neutron star ever found, certainly the most massive in a compact-object (black hole or neutron star) binary. Alternatively, it could be a black hole, in which case it would be the smallest black hole ever found. We have discovered something special, we’re just not sure exactly what…

Black hole and neutron star masses highlighting GW190814

The population of compact objects (black holes and neutron stars) observed with gravitational waves and with electromagnetic astronomy, including a few which are uncertain. GW190814 is highlighted. It is not clear if its lighter component is a black hole or neutron star. Source: Northwestern

Detection

14 August 2019 marked the second birthday of GW170814—the first gravitational wave we clearly detected using all three of our detectors. As a present, we got an even more exciting detection.

I was at the MESA Summer School at the time [bonus advertisement], learning how to model stars. My student Chase came over excitedly as soon as he saw the alert. We snuck a look at the data in a private corner of the class. GW190814 (then simply known as candidate S190814bv) was a beautifully clear chirp. You shouldn’t assess how plausible a candidate signal is by eye (that’s why we spent years building detection algorithms [bonus note]), but GW190814 was a clear slam dunk that hit it out of the park straight into the bullseye. Check mate!

Normalised spectrograms for GW190814

Time–frequency plots for GW190814 as measured by LIGO Hanford, LIGO Livingston and Virgo. The chirp of a binary coalescence is clearest in Livingston. For long signals, like GW190814, it is usually hard to pick out the chirp by eye. Figure 1 of the GW190814 Discovery Paper.

Unlike GW170814, however, it seemed that we only had two detectors observing. LIGO Hanford was undergoing maintenance (the same procedure as when GW170608 occurred). However, after some quick checks, it was established that the Hanford data was actually good to use—the detectors had been left alone in the 5 minutes around the signal (phew), so the data were clean (wooh)! We had another three-detector detection.

The big difference that having three detectors makes is a much better localization of the source. For GW190814 we get a beautifully tight localization. This was exciting, as GW190814 could be a neutron star–black hole. The initial source classification (which is always pretty uncertain as it’s done before we have detailed analysis) went back and forth between being a binary black hole with one component in the 3–5 solar mass range, and a neutron star–black hole (which means the less massive component is below 3 solar masses, not necessarily a neutron star). Neutron star–black hole mergers may potentially have an electromagnetic counterpart which can be found by telescopes. Not all neutron star–black hole binaries will have counterparts as sometimes, when the black hole is much bigger than the neutron star, it will be swallowed whole. Even if there is a counterpart, it may be too faint to see (we expect this to be increasingly common as our detectors detect gravitational waves from more distant sources). GW190814’s source is about 240 Mpc away (six times the distance of GW170817, meaning any light emitted would be about 36 times fainter) [bonus note]. Many teams searched for counterparts, but none have been reported. Despite the excellent localization, we have no multimessenger counterpart this time.

Sky map for GW190814

Sky localizations for GW190814’s source. The blue dashed contour shows the preliminary localization using only LIGO Livingston and Virgo data, and the solid orange shows the preliminary localization adding in Hanford data. The dashed green contour shows an updated localization used by many for their follow-up studies. The solid purple contour shows our final result, which has an area of just 18.5~\mathrm{deg^2}. All contours are for 90% probabilities. Figure 2 of the GW190814 Discovery Paper.

The sky localisation for GW190814 demonstrates nicely how localization works for gravitational-wave sources. We get most of our information from the delay time between the signal reaching the different detectors. With a two-detector network, a single time delay corresponds to a ring on the sky. We kind of see this with the blue dashed localization above, which was the initial result using just LIGO Livingston and Virgo data. There are actually two arcs, corresponding to two different time delays. This is because the signal is quiet in Virgo, and so we don’t get an absolute lock on the arrival time: if you shift the signal so it’s one cycle different, it still matches pretty well, so we get two possibilities. The arcs aren’t full circles because information on the phase of the signals, and on the relative amplitudes (since detectors are not uniformly sensitive in all directions), adds extra information. Adding in LIGO Hanford data gives us more information on the timing. The Hanford–Livingston circle of constant time delay slices through the Livingston–Virgo one, leaving us with just the two overlapping islands as possibilities. The sky localizations shifted a little bit as we refined the analysis, but remained pretty consistent.
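If you want to play with the timing idea yourself, here is a minimal Python sketch (not the real LIGO–Virgo localization code, which is far more sophisticated). A time delay \Delta t between two detectors separated by a baseline of length d confines the source to a ring at angle \theta from the baseline, with c\Delta t = d\cos\theta. The numbers below are illustrative, not the measured values.

```python
import numpy as np

# Minimal sketch of time-of-arrival triangulation: a time delay dt
# between two detectors separated by a baseline d confines the source
# to a ring at angle theta from the baseline, c * dt = d * cos(theta).
c = 299792458.0  # speed of light (m/s)

def ring_angle(dt, baseline):
    """Angle (radians) between the source direction and the baseline."""
    return np.arccos(np.clip(c * dt / baseline, -1.0, 1.0))

# Illustrative numbers: the Hanford-Livingston baseline is ~3000 km,
# so the maximum possible delay is ~10 ms.
baseline_HL = 3.0e6   # metres (approximate)
dt = 5e-3             # an assumed 5 ms delay
print(np.degrees(ring_angle(dt, baseline_HL)))  # 60 degrees
```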

Whodunnit?

From the gravitational wave signal we inferred that GW190814 came from a binary with masses m_1 = 23.2^{+1.1}_{-1.0} solar masses (quoting the 90% range for parameters), and m_2 = 2.59^{+0.08}_{-0.09} solar masses. This is remarkable for two reasons: first, the lower mass object is right in the range where we might hit the maximum mass of a neutron star, and second, these are the most asymmetric masses of any of our gravitational wave sources.

Binary component masses for GW190814

Estimated masses for the two components in the binary m_1 \geq m_2. We show results for several different waveform models (which include spin precession and higher-order multipole moments). The two-dimensional plot shows the 90% probability contour. The one-dimensional plot shows individual masses; the dotted lines mark 90% bounds away from equal mass. Estimates for the maximum neutron star mass are shown for comparison with the mass of the lighter component m_2. Figure 3 of the GW190814 Discovery Paper.

Neutron star or black hole?

Neutron stars are massive balls of stuff™. They are made of matter in its most squished form. A neutron star of about 1.4 solar masses would have a radius of only about 12 kilometres. For comparison, that’s roughly the same as trying to fit the mass of 3\times 10^{33} M&Ms (plain; for peanut butter it would be different, and of course, more delicious) into the volume of just 1.2 \times 10^{19} M&Ms (ignoring the fact that you can’t perfectly pack them)! Neutron stars are about 3 \times 10^{14} times more dense than M&Ms. As you make neutron stars heavier, their gravity gets stronger until at some point the strange stuff™ they are made of can’t take the pressure. At this point the neutron star will collapse down to a black hole. Since we don’t know the properties of neutron star stuff™ we don’t know the maximum mass of a neutron star.
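You can check the M&M arithmetic with a quick back-of-the-envelope calculation; the M&M mass and volume below are my assumed values (about 0.9 g and 0.64 cm³ for a plain one), so take the exact numbers with a pinch of chocolate.

```python
import numpy as np

# Back-of-the-envelope check of the M&M numbers above, using assumed
# values of ~0.9 g and ~0.64 cm^3 for a plain M&M.
M_sun = 1.989e30       # kg
mm_mass = 0.9e-3       # kg
mm_volume = 0.64e-6    # m^3

ns_mass = 1.4 * M_sun
ns_radius = 12e3       # m
ns_volume = (4 / 3) * np.pi * ns_radius**3

print(ns_mass / mm_mass)                              # ~3e33 M&Ms of mass
print(ns_volume / mm_volume)                          # ~1e19 M&Ms of volume
print((ns_mass / ns_volume) / (mm_mass / mm_volume))  # ~3e14 times denser
```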

We have observed neutron stars of a range of masses. The recently discovered pulsar J0740+6620 may be around 2.1 solar masses, and potentially pulsar J1748−2021B may be around 2.7 solar masses (although that measurement is more uncertain as it requires some strong assumptions about the pulsar’s orbit and its companion star). Using observations of GW170817, estimates have been made that the maximum neutron star mass should be below 2.2 or 2.3 solar masses; late-time observations of short gamma-ray bursts (assuming that they all come from binary neutron star mergers) indicate an upper limit of 2.4 solar masses, and looking at the observed population of neutron stars, it could be anywhere between 2 and 3 solar masses. About 3 solar masses is a safe upper limit, as it’s not possible to make stuff™ stiff enough to withstand more pressure than that.

At about 2.6 solar masses, it’s not too much of a stretch to believe that the less massive component is a neutron star. In this case, we have learnt something valuable about the properties of neutron star stuff™. Assuming that we have a neutron star, we can infer the properties of neutron star stuff™. We find that for a typical neutron star of 1.4 solar masses, the radius would be R_{1.4} = 12.9^{+0.8}_{-0.7}~\mathrm{km} and the tidal deformability \Lambda_{1.4} = 616^{+273}_{-158}.

The plot below shows our results fitting the neutron star equation of state, which describes how the density of neutron star stuff™ changes with pressure. The dashed lines show the 90% range of our prior (what the analysis would return with no input information). The blue curve shows results adding in GW170817 (what we would have if GW190814 were a binary black hole): we prefer neutron stars made of softer stuff™ (which is squishier to hug, and would generally result in more compact neutron stars). Adding in GW190814 (assuming a neutron star–black hole) pushes us back up to stiffer stuff™, as we now need to support a massive maximum mass.

Neutron star pressure and density

Constraints on the neutron star equation of state, showing how density \rho changes with pressure p. The blue curve just uses GW170817, implicitly assuming that GW190814 is from a binary black hole, while the orange shows what happens if we include GW190814, assuming it is from a neutron star–black hole binary. The 90% and 50% credible contours are shown as the dark and lighter bands, and the dashed lines indicate the 90% region of the prior. Figure 8 of the GW190814 Discovery Paper.

What if it’s not a neutron star?

In this case we must have a black hole. In theory black holes can be any mass: you just need to squish enough mass into a small enough space. However, from our observations of X-ray binaries, there seem to be no black holes below about 5 solar masses. This is referred to as the lower mass gap, or the core collapse mass gap. The theory was that when the cores of massive stars collapse, there are different types of explosions and implosions depending upon the core’s mass. When you have a black hole, more material from outside the core falls back than when you have a neutron star. All the extra material would always mean that black holes are born above 5 solar masses. If we’ve found a black hole below this, either this theory is wrong and we need a new explanation for the lack of X-ray observations, or we have a black hole formed via a different means.

Potentially, we could tell the difference if we measured the effects of the tidal distortion of the neutron star in the gravitational wave signal. Unfortunately, tidal effects are weaker for more unequal mass binaries. GW190814 is extremely unequal, so we can’t measure anything to say either way. Equally, seeing an electromagnetic counterpart would be evidence for a neutron star, but with such unequal masses the neutron star would likely be eaten whole, like me eating an M&M. The mass ratio means that we can’t be certain what we have.

The calculation we can do is to use past observations of neutron stars and measurements of the stiffness of neutron star stuff™ to estimate the probability that the mass of the less massive component is below the maximum neutron star mass. Using measurements from GW170817 for the stuff™ stiffness, we estimate that there’s only a 3% probability of the mass being below the maximum neutron star mass, and using the observed population of neutron stars the probability is 29%. It seems that it is improbable, but not impossible, that the component is a neutron star.

I’m yet to be convinced one way or the other on black hole vs neutron star [bonus note], but I do like the idea of extra small black holes. They would be especially cute, although you must never try to hug them.

The unequal masses

Most of the binaries we’ve seen with gravitational waves so far are consistent with having equal masses. The exception is GW190412, which has a mass ratio of q = m_2/m_1 = 0.28^{+0.13}_{-0.07}. The mass ratio changes a few things about the gravitational wave signal. When you have unequal masses, it is possible to observe higher harmonics in the gravitational wave signal: chirps at multiples of the orbital frequency (the dominant two form a perfect fifth). We observed higher harmonics for the first time with GW190412. GW190814 has a more extreme mass ratio q = 0.112^{+0.008}_{-0.009}. We again spot the next harmonic in GW190814; this time it is even clearer. Modelling gravitational waves from systems with mass ratios of q \sim 0.1 is tricky; it is important to include the higher-order multipole moments in order to get good estimates of the source parameters.

Having unequal masses makes some of the properties of the lighter component, like its tidal deformability or its spin, harder to measure. Potentially, it can be easier to pick out the spin of the more massive component. In the case of GW190814, we find that the spin is small, \chi_1 < 0.07. This is our best ever measurement of black hole spin!

Orientation and magnitudes of the two spins

Estimated orientation and magnitude of the two component spins. The distribution for the more massive component is on the left, and for the lighter component on the right. The probability is binned into areas which have uniform prior probabilities, so if we had learnt nothing, the plot would be uniform. The maximum spin magnitude of 1 is appropriate for black holes. On account of the mass ratio, we get a good measurement of the spin of the more massive component, but not the lighter one. Figure 6 of the GW190814 Discovery Paper.

Typically, it is easier to measure the amount of spin aligned with the orbital angular momentum. We often characterise this as the effective inspiral spin parameter. In this case, we measure \chi_\mathrm{eff} = -0.002^{+0.060}_{-0.061}. Harder to measure is the spin in the orbital plane. This controls the amount of spin precession (wobbling in the spin orientation as the orbital angular momentum is not aligned with the total angular momentum), and is characterised by the effective precession spin parameter. For GW190814, we find \chi_\mathrm{p} < 0.07, which is our tightest measurement. It might seem odd that we get our best measurement of in-plane spin in the case when there is no precession. However, this is because if there were precession, we would clearly measure it. Since there is no support for precession in the data, we know that it isn’t there, and hence that the amount of in-plane spin is small.
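For reference, here is a minimal sketch of the standard definitions of these two spin parameters (taking m_1 \geq m_2); the input numbers below are purely illustrative, not our measured posteriors.

```python
def chi_eff(m1, m2, chi1z, chi2z):
    """Effective inspiral spin: the mass-weighted spin components
    aligned with the orbital angular momentum."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

def chi_p(m1, m2, chi1_perp, chi2_perp):
    """Effective precession spin (standard definition, with m1 >= m2);
    chi1_perp and chi2_perp are the in-plane spin components."""
    q = m2 / m1
    return max(chi1_perp, q * (4 * q + 3) / (4 + 3 * q) * chi2_perp)

# GW190814-like masses with an assumed small spin on the primary:
print(chi_eff(23.2, 2.59, -0.002, 0.0))   # ~-0.0018
print(chi_p(23.2, 2.59, 0.05, 0.0))       # 0.05
```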

Implications

While we haven’t solved the mystery of neutron star vs black hole, what can we deduce?

  1. Einstein is still not wrong yet. Our tests of general relativity didn’t give us any evidence that something was wrong. We even tried a new test looking for deviations in the spin-induced quadrupole moment. GW190814 was initially thought to be a good case to try this on account of its mass ratio; unfortunately, since there’s little hint of spin, we don’t get particularly informative results. Next time.
  2. The Universe is expanding about as fast as we’d expect. We have a wonderfully tight localization: GW190814 has the best localization of all our gravitational waves except for GW170817. This means we can cross-reference with galaxy catalogues to estimate the Hubble constant, a measure of the expansion rate of the Universe. We get the distance from our gravitational wave measurement, and the redshift from the catalogue, and putting them together gives the Hubble constant H_0 (see the sketch after this list). From GW190814 alone, we get H_0 = 83^{+55}_{-53}~\mathrm{km\,s^{-1}\,Mpc^{-1}} (quoting numbers with our usual median and symmetric 90% interval convention; if you like mode and narrowest 68% region, it’s H_0 = 75^{+59}_{-13}~\mathrm{km\,s^{-1}\,Mpc^{-1}}). If we combine with results for GW170817, we get H_0 = 77^{+33}_{-23}~\mathrm{km\,s^{-1}\,Mpc^{-1}} (or H_0 = 70^{+17}_{-8}~\mathrm{km\,s^{-1}\,Mpc^{-1}}) [bonus note].
  3. The merger rate density for a population of GW190814-like systems is 7^{+16}_{-6}~\mathrm{Gpc^{-3}\,yr^{-1}}. If you think you know how GW190814 formed, you’ll need to make sure to get a compatible rate estimate.
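The core of the Hubble constant calculation is simple enough to sketch, even though the real analysis marginalises over the whole galaxy catalogue and our distance uncertainty. At low redshift, the luminosity distance is roughly d_L \approx cz/H_0, so a distance plus a redshift gives H_0. The redshift below is an assumed illustrative value.

```python
# Minimal sketch of the low-redshift Hubble relation (the real
# analysis marginalises over the galaxy catalogue and the full
# gravitational-wave distance posterior).
c = 299792.458  # speed of light (km/s)

def hubble_constant(z, d_L):
    """H0 in km/s/Mpc from redshift z and luminosity distance in Mpc.
    Valid only for z << 1, where d_L ~ c z / H0."""
    return c * z / d_L

# Illustrative numbers: d_L ~ 241 Mpc for GW190814 and an assumed
# galaxy redshift of z ~ 0.053 give H0 ~ 66 km/s/Mpc.
print(hubble_constant(0.053, 241.0))
```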

What can we say about potential formation channels for the source? This is rather tricky as many predictions assume supernova models which lead to a mass gap, so there’s nothing with a compatible mass for the lighter component. I expect there will be lots of checking what happens without this assumption.

Given the mass of the black hole, we would expect that it formed from a low metallicity star. That is a star which doesn’t have too many of the elements heavier than hydrogen and helium. Heavier elements lead to stronger stellar winds, meaning that stars are smaller at the end of their lives and it is harder to get a black hole that’s 23 solar masses. The same is true for many of the black holes we’ve seen in gravitational waves.

Massive stars have short lives. The bigger they are, the more quickly they burn up all their nuclear fuel. This has an important implication for the mass of the lighter component: it probably has not grown much since it formed. The bigger component could form from the initially bigger star (which is the simpler scenario to imagine). In this case, the black hole forms first, and there is no chance for the lighter component to grow after it forms as it’s sitting next to a black hole. Alternatively, the lighter component could have formed first if, when its parent star started expanding in middle age (as many of us do), it transferred lots of mass to its companion star. The mass transfer would reverse which of the stars was more massive, and we could then have some accretion back onto the lighter compact object to grow it a bit. However, the massive partner star would only have a short lifetime, and compact objects can only swallow a relatively small rate of material, so you wouldn’t be able to grow the lighter component by much more than 0.1 solar masses, not nearly enough to bridge the gap from what we would consider a typical neutron star. We do need to figure out a way to form compact objects of about 2.6 solar masses.

How to form GW190814-like systems through isolated binary evolution.

Two possible ways of forming GW190814-like systems through isolated binary evolution. In Channel A the heavier black hole forms first from the initially more massive star. In Channel B, the initially more massive star transfers so much mass to its companion that we get a mass inversion, and the lighter component forms first. In the plot, a is the orbital separation, e is the orbital eccentricity, t is the time since the stars started their life on the main sequence. The letters on the right indicate the evolution phase: ZAMS is zero-age main sequence, MS is main sequence (burning hydrogen), CHeB is core helium burning (once the hydrogen has been used up), and BH and NS mean black hole and neutron star. At low metallicities Z (when stars have few elements heavier than hydrogen and helium), the two channels are about as common; as metallicity increases, Channel A becomes more common. Figure 6 of Zevin et al. (2020).

The mass ratio is difficult to produce. It’s not what you would expect for dynamically formed binaries in globular clusters (as you’d expect heavier objects to pair up). It could maybe happen in the discs around active galactic nuclei, although there are lots of uncertainties about this, and since this is only a small part of space, I wouldn’t expect a large number of events. Isolated binaries (or higher multiples) can form these mass ratios, but they are rare for binaries that go on to merge. Again, it might be difficult to produce enough systems to explain our observation of GW190814. We need to do some more sleuthing to figure out how binaries form.

Epilogue

The LIGO and Virgo gravitational wave detectors embody decades of work by thousands of scientists across the globe. It took many hard years of research to create the technology capable of observing gravitational waves. Many doubted it would ever be possible. Finally, in 2015, we succeeded. The first detection of gravitational waves opened a new field of astronomy—our goal was not to just detect gravitational waves once, but to use them to explore our Universe. Since then we have continued working to improve our detectors and our analyses. More discoveries have come. LIGO and Virgo are revolutionising our understanding of astrophysics, and GW190814 is the latest advancement in our knowledge. It will not be the last. Gravitational wave astronomy thrives thanks to, and as a consequence of, many people working together towards a common goal.

If a few thousand people can work together to imagine, create and operate gravitational wave detectors, think what we could achieve if millions, or billions, or if we all worked together. Let’s get to work.

Title: GW190814: Gravitational waves from the coalescence of a 23 solar mass black hole with a 2.6 solar mass compact object
Journal: Astrophysical Journal Letters; 896(2):L44(20); 2020
arXiv: 2006.12611 [astro-ph.HE]
Science summary: The curious case of GW190814: The coalescence of a stellar-mass black hole and a mystery compact object
Data release: Gravitational Wave Open Science Center; Parameter estimation results
Rating: 🍩🐦🦚🦆❔

Bonus notes

MESA Summer School

Modules for Experiments in Stellar Astrophysics (MESA) is a code for simulating the evolution of stars. It’s pretty neat, and can do all sorts of cool things. The summer school is a chance to be taught how to use it as well as some of the theory behind the lives of stars. The school is aimed at students (advanced undergrads and postgrads) and postdocs starting out using or developing the code, but they’ll let faculty attend if there’s space. I was lucky enough to get a spot together with my fantastic students Chase, Monica and Kyle. I was extremely impressed by everything. The ratio of demonstrators to students was high, all the sessions were well thought out, and ice cream was plentiful. I would definitely recommend attending if you are interested in stellar evolution, and if you want to build the user base for your scientific code, this is certainly a wonderful model to follow.

Detection significance

For our final (for now) detection significance we only used data from LIGO Livingston and Virgo. Although the Hanford data are good, we wouldn’t have looked at this time without the prompt from the other detectors. We therefore need to be careful not to bias ourselves. For simplicity we’ve stuck with using just the two detectors. Since Hanford would boost the significance, these results should be conservative. GstLAL and PyCBC identified the event with false alarm rates of better than 1 in 100,000 years and 1 in 42,000 years, respectively.

Distance

The luminosity distance of GW190814’s source is estimated as 241^{+41}_{-45}~\mathrm{Mpc}. The luminosity distance is a measure which incorporates the effects of the signal travelling through an expanding Universe, so it’s not quite the same as the actual distance between us and the source. Given the uncertainties on the luminosity distance, it would have taken the signal somewhere between 600 million and 850 million years to reach us. It therefore set out during the Neoproterozoic era here on Earth, which is pretty cool.

In this travel time, the signal would have covered about 6 sextillion kilometres, or to put it in easier to understand units, about 400,000,000,000,000,000,000,000,000 M&Ms laid end-to-end. Eating that many M&Ms would give you about 2 \times 10^{27} calories. That seems like a lot of energy, but it’s less than 2 \times 10^{-16} of the energy emitted as gravitational waves for GW190814.

Betting

Given current uncertainties on what the maximum mass of a neutron star should be, it is hard to offer odds on whether the smaller component of GW190814’s binary is a black hole or a neutron star. Since it does seem higher mass than expected for neutron stars from other observations, a black hole origin does seem more favoured, but as GW190425 showed, we might be missing the full picture of the neutron star population. I wouldn’t be too surprised if our understanding shifted over the next few years. Consequently, I’d stretch to offering odds of one peanut butter M&M to one plain chocolate M&M in favour of black holes over neutron stars.

Hubble constant

Using the Dark Energy Survey galaxy catalogue, Palmese et al. (2020) calculate a Hubble constant of H_0 = 66^{+55}_{-18}~\mathrm{km\,s^{-1}\,Mpc^{-1}} (mode and narrowest 68% region) using GW190814. Adding in GW170814 they get H_0 = 68^{+43}_{-21}~\mathrm{km\,s^{-1}\,Mpc^{-1}} as a gravitational-wave-only measurement, and including GW170817 and its electromagnetic counterpart gives H_0 = 69.0^{+14.0}_{-7.5}~\mathrm{km\,s^{-1}\,Mpc^{-1}}.

 


An introduction to LIGO–Virgo data analysis

LIGO and Virgo make their data open for anyone to try analysing [bonus note]. If you’re a student looking for a project, a teacher planning a class activity, or a scientist working on a paper, this data is waiting for you to use. Understanding how to analyse the data can be tricky. In this post, I’ll share some of the resources made by LIGO and Virgo to help introduce gravitational-wave analysis. These papers together should give you a good grounding in how to get started working with gravitational-wave data.

If you’d like a more in-depth understanding, I’d recommend visiting your local library for Michele Maggiore’s Gravitational Waves: Volume 1.

The Data Analysis Guide

Title: A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals
arXiv: 1908.11170 [gr-qc]
Journal: Classical & Quantum Gravity; 37(5):055002(54); 2020
Tutorial notebook: GitHub; Google Colab; Binder
Code repository: Data Guide
LIGO science summary: A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals

It took many decades to develop the technology necessary to build gravitational-wave detectors. Similarly, gravitational-wave data analysis has developed over many decades—I’d say LIGO analysis was really kicked off in the early 1990s by Kip Thorne’s group. There are now hundreds of papers on various aspects of gravitational-wave analysis. If you are new to the area, where should you start? Don’t panic! For the binary sources discovered so far, this Data Analysis Guide has you covered.

More details: The Data Analysis Guide

The GWOSC Paper

Title: Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo
arXiv: 1912.11716 [gr-qc]
Journal: SoftwareX; 13:100658(20); 2021
Website: Gravitational Wave Open Science Center
LIGO science summary: Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo

Data from the LIGO and Virgo detectors is released by the Gravitational Wave Open Science Center (GWOSC, pronounced, unfortunately, as it is spelt). If you want to try analysing our delicious data yourself, either searching for signals or studying the signals we have found, GWOSC is the place to start. This paper outlines how these data are produced, going from our laser interferometers to your hard drive. The paper specifically looks at the data released for our first and second observing runs (O1 and O2); however, GWOSC also hosts data from the initial detectors’ fifth science run (S5) and sixth science run (S6), and will be updated with new data in the future.

If you do use data from GWOSC, please remember to say thank you.

More details: The GWOSC Paper

001100 010010 011110 100001 101101 110011

I thought I saw a 2! Credit: Fox

The Data Analysis Guide

Synopsis: Data Analysis Guide
Read this if: You want an introduction to signal analysis
Favourite part: This is a great resource for new students [bonus note]

Gravitational-wave detectors measure ripples in spacetime. They record a simple time series of the stretching and squeezing of space as a gravitational wave passes. Well, they measure that, plus a whole lot of noise. Most of the time it is just noise. How do we go from this time series to discoveries about the Universe’s black holes and neutron stars? This paper gives the outline; it covers (in order):

  1. An introduction to observations at the time of writing
  2. The basics of LIGO and Virgo data—what it is that we analyse
  3. The basics of detector noise—how we describe sources of noise in our data
  4. Fourier analysis—how we go from a time series to looking at the data as a function of frequency, which is the most natural way to analyse the data
  5. Time–frequency analysis and stationarity—how we check the stability of data from our detectors
  6. Detector calibration and data quality—how we make sure we have good quality data
  7. The noise model and likelihood—how we use our understanding of the noise, under the assumption of it being stationary, to work out the likelihood of different signals being in the data
  8. Signal detection—how we identify times in the data which have a transient signal present
  9. Inferring waveform and physical parameters—how we estimate the parameters of the source of a gravitational wave
  10. Residuals around GW150914—a consistency check that we have understood the noise surrounding our first detection

The paper works through things thoroughly, and I would encourage you to work through it if you are interested.

I won’t summarise everything here; I want to focus on the (roughly undergraduate-level) foundations of how we do our analysis in the frequency domain. My discussion of the GWOSC Paper goes into more detail on the basics of LIGO and Virgo data, and some details on calibration and data quality. I’ll leave talking about residuals to this bonus note, as it involves a long tangent and me needing to lie down for a while.

Fourier analysis

The signal our detectors measure is a time series d(t). This may just contain noise, d(t) = n(t), or it may also contain a signal, d(t) = n(t) + h(t).

There are many sources of noise for our detectors. The different sources can affect different frequencies. If we assume that the noise is stationary, so that its properties don’t change with time, we can simply describe the properties of the noise with the power spectral density S_n(f). On average we expect the noise at a given frequency to be zero, but with it fluctuating up and down with a variance given by the power spectral density. We typically approximate the noise as Gaussian, such that

n(f) \sim \mathcal{N}(0; S_n(f)/2),

where we use \mathcal{N}(\mu; \sigma^2) to represent a normal distribution with mean \mu and standard deviation \sigma. The approximations of stationary and Gaussian noise are good most of the time. The noise does vary over time, but is usually effectively stationary over the durations we look at for a signal. The noise is also mostly Gaussian except for glitches. These are taken into account when we search for signals, but we’ll ignore them for now. The statistical description of the noise in terms of the power spectral density allows us to understand our data, but this understanding comes as a function of frequency: we must transform our time-domain data into frequency-domain data.
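If you’d like to see this statistical model in action, here is a minimal sketch of simulating stationary Gaussian noise with a given power spectral density (it peeks ahead and uses the Fourier transforms described next; the flat PSD is assumed purely for illustration, and normalisation conventions vary between references).

```python
import numpy as np

# Minimal sketch: draw Gaussian noise at each frequency with variance
# set by the PSD, following the convention <|n(f)|^2> = S_n(f) T / 2
# for data of duration T.
rng = np.random.default_rng(0)

duration, sample_rate = 4.0, 4096.0   # seconds, Hz
n_samples = int(duration * sample_rate)
freqs = np.fft.rfftfreq(n_samples, d=1 / sample_rate)
psd = np.full_like(freqs, 1e-46)      # an assumed flat PSD

sigma = np.sqrt(psd * duration / 4)   # std of real & imaginary parts
n_f = sigma * (rng.standard_normal(freqs.size)
               + 1j * rng.standard_normal(freqs.size))

# Back to the time domain (the sample_rate factor undoes numpy's 1/N).
n_t = sample_rate * np.fft.irfft(n_f, n=n_samples)
```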

To go from d(t) to d(f) we can use a Fourier transform. Fourier transforms are a way of converting a function of one variable into a function of a reciprocal variable—in the case of time you convert to frequency. Fourier transforms encode all the information of the original function, so it is possible to convert back and forth as you like. Really, a Fourier transform is just another way of looking at the same function.

The Fourier transform is defined as

d(f) = \mathcal{F}_f\left\{d(t)\right\} = \int_{-\infty}^{\infty} d(t) \exp(-2\pi i f t) \,\mathrm{d}t.

Now, from this you might notice a problem when it comes to real data analysis, namely that the integral is defined over an infinite amount of time. We don’t have that much data. Instead, we only have a short period.

We could recast the integral above over a shorter time if instead of taking the Fourier transform of d(t), we take the Fourier transform of d(t) \times w(t) where w(t) is some window function which goes to zero outside of the time interval we are looking at. What we end up with is a convolution of the function we want with the Fourier transform of the window function,

\mathcal{F}_f\left\{d(t)w(t)\right\} = d(f) \ast w(f).

It is important to pick a window function which minimises the distortion to the signal that we want. If we just take a tophat (also known as a boxcar or rectangular, possibly on account of its infamous criminal background) function which abruptly cuts off the data at the ends of the time interval, we find that w(f) is a sinc function. This is not a good thing, as it leads to all sorts of unwanted correlations between different frequencies, commonly known as spectral leakage. A much better choice is a function which smoothly tapers to zero at the edges. Using a tapering window, we lose a little data at the edges (we need to be careful choosing the length of the data analysed), but we can avoid the significant nastiness of spectral leakage. A tapering window function should always be used. Our finite-time Fourier transform is then a good approximation to the exact d(f).
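You can see spectral leakage for yourself by Fourier transforming a pure sinusoid with and without a tapering window—a minimal sketch (the tone frequency and taper fraction are arbitrary choices):

```python
import numpy as np
from scipy.signal.windows import tukey

# Sketch of spectral leakage: transform a pure sinusoid using an
# abrupt tophat window and a smoothly tapering Tukey window.
sample_rate = 4096.0
t = np.arange(0, 4.0, 1 / sample_rate)
d = np.sin(2 * np.pi * 60.3 * t)      # a tone that falls between bins
freqs = np.fft.rfftfreq(len(t), d=1 / sample_rate)

windows = {"tophat": np.ones_like(t),
           "tukey": tukey(len(t), alpha=0.2)}  # tapers 10% at each end

for name, w in windows.items():
    spectrum = np.abs(np.fft.rfft(d * w))
    # Fraction of the peak leaked far (>100 Hz) from the 60.3 Hz tone:
    # large for the tophat, tiny for the tapered window.
    print(name, spectrum[freqs > 100].max() / spectrum.max())
```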

Data treatment to highlight a signal

Data processing to reveal GW150914. The top panel shows raw Hanford data. The second panel shows a window function being applied. The third panel shows the data after being whitened. This cleans up the data, making it easier to pick out the signal from all the low frequency noise. The bottom panel shows the whitened data after a bandpass filter is applied to pick out the signal. We don’t use the bandpass filter in our analysis (it is just for illustration), but the other steps reflect how we treat our data. Figure 2 of the Data Analysis Guide.

Now we have our data in the frequency domain, it is simple enough to compare the data to the expected noise at a given frequency. If we measure something loud at a frequency with lots of noise we should be less surprised than if we measure something loud at a frequency which is usually quiet. This is kind of like how someone shouting is less startling at a rock concert than in a library. The appropriate way to weight is to divide by the square root of the power spectral density d_\mathrm{w}(f) \propto d(f)/[S_n(f)]^{1/2}. This is known as whitening. Whitened data should have equal amplitude fluctuations at all frequencies, allowing for easy comparisons.
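As a sketch, whitening is only a few lines, assuming you already have a windowed time series and a PSD estimate on the matching frequency grid (in a real analysis the PSD is measured from neighbouring data, for example with Welch’s method):

```python
import numpy as np

# Minimal whitening sketch: weight each frequency by the noise
# amplitude so quiet and loud frequencies can be compared fairly.
def whiten(d, psd, sample_rate):
    d_f = np.fft.rfft(d) / sample_rate   # frequency-domain data
    d_f_white = d_f / np.sqrt(psd / 2)   # divide by noise amplitude
    return sample_rate * np.fft.irfft(d_f_white, n=len(d))
```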

Now we understand the statistical properties of noise we can do some analysis! We can start by testing our assumption that the data are stationary and Gaussian by checking that after whitening we get the expected distribution. We can also define the likelihood of obtaining the data d(t) given a model of a gravitational-wave signal h(t), as the properties of the noise mean that d(f) - h(f) \sim \mathcal{N}(0; S_n(f)/2). Combining the likelihood for each individual frequency gives the overall likelihood

\displaystyle p(d|h) \propto \exp\left[-\int_{-\infty}^{\infty} \frac{|d(f) - h(f)|^2}{S_n(f)} \mathrm{d}f \right].

This likelihood is at the heart of parameter estimation, as we can work out the probability of there being a signal with a given set of parameters. The Data Analysis Guide goes through many different analyses (including parameter estimation) and demonstrates how to check that noise is nice and Gaussian.
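On a discrete frequency grid, the log-likelihood is only a couple of lines—here is a sketch, with the caveat that the factors of 2 and 4 depend on your Fourier and PSD conventions:

```python
import numpy as np

# Discrete sketch of the likelihood above: a Gaussian in the
# noise-weighted residual. d_f and h_f are the frequency-domain data
# and template on the same rfft grid with bin spacing df, and psd is
# the one-sided power spectral density.
def log_likelihood(d_f, h_f, psd, df):
    residual = np.abs(d_f - h_f) ** 2
    return -2.0 * df * np.sum(residual / psd)
```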

Gaussian residuals for GW150914

Distribution of residuals for 4 seconds of data around GW150914 after subtracting the maximum likelihood waveform. The residuals are the whitened Fourier amplitudes, and they should be consistent with a unit Gaussian. The residuals follow the expected distribution and show no sign of non-Gaussianity. Figure 14 of the Data Analysis Guide.

Homework

The Data Analysis Guide contains much more material on gravitational-wave data analysis. If you wanted to delve further, there are many excellent papers cited. Favourites of mine include Finn (1992); Finn & Chernoff (1993); Cutler & Flanagan (1994); Flanagan & Hughes (1998); Allen (2005), and Allen et al. (2012). I would also recommend the tutorials available from GWOSC and the lectures from the Open Data Workshops.

The GWOSC Paper

Synopsis: GWOSC Paper
Read this if: You want to analyse our gravitational wave data
Favourite part: All the cool projects done with this data

You’re now up-to-speed with some ideas of how to analyse gravitational-wave data, you’ve made yourself a fresh cup of really hot tea, you’re ready to get to work! All you need are the data—this paper explains where they come from.

Data production

The first step in getting gravitational-wave data is the easy one. You need to design a detector, convince science agencies to invest something like half a billion dollars in building one, then spend 40 years carefully researching the necessary technology and putting it all together as part of an international collaboration of hundreds of scientists, engineers and technicians, before painstakingly commissioning the instrument and operating it. For your convenience, we have done this step for you, but do feel free to try it yourself at home.

Gravitational-wave detectors like Advanced LIGO are built around an interferometer: they have two arms at right angles to each other, and we bounce lasers up and down them to measure their length. A passing gravitational wave will change the length of one arm relative to the other. This changes the time taken to travel along one arm compared to the other. Hence, when the two bits of light reach the output of the interferometer, they’ll have a different phase: where normally one light wave would have a peak, it’ll have a trough. This change in phase will change how light from the two arms combines together. When no gravitational wave is present, the light interferes destructively, almost cancelling out so that the output is dark. We measure the brightness of light at the output, which tells us how the length of the arms changes.

We want our detector to measure the gravitational-wave strain. That is the fractional change in length of the arms,

\displaystyle h(t) = \frac{\Delta L(t)}{L},

where \Delta L = L_x - L_y is the relative difference in the length of the two arms, and L is the unperturbed arm length. Since we love jargon in LIGO & Virgo, we’ll often refer to the strain as HOFT (as you would read h(t) as h of t; it took me years to realise this) or DARM (differential arm measurement).
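To get a feeling for the numbers, a strain of h \sim 10^{-21} (typical of the signals detected so far) across a 4 km arm corresponds to a truly tiny length change:

```python
# How small is a gravitational-wave strain? For h ~ 1e-21 and
# LIGO's 4 km arms:
h = 1e-21
L = 4e3        # arm length in metres
print(h * L)   # ~4e-18 m, hundreds of times smaller than a proton
```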

The actual output of the detector is the voltage from a photodiode measuring the intensity of the light. It is necessary to carefully calibrate the detectors. In theory this is simple: we change the position of the mirrors at the end of the arms and see how the output changes. In practice, it is very difficult. The GW150914 Calibration Paper goes into details for O1; more up-to-date descriptions are given in Cahillane et al. (2017) for LIGO and Acernese et al. (2018) for Virgo. The calibration of the detectors can drift over time; improving the calibration is one of the things we do between originally taking the data and releasing the final data.

The data are only calibrated between 10 Hz and 5 kHz, so don’t trust the data outside of that frequency range.

The next stage of our data’s journey is going through detector characterisation and data quality checks. In addition to measuring gravitational-wave strain, we record many other data channels: about 200,000 per detector. These measure all sorts of things, from the internal state of the instrument, to monitoring the physical environment around the detectors. These auxiliary channels are used to check the data quality. In some cases, an auxiliary channel will record a source of noise, like scattered light or the mains power frequency, allowing us to clean up our strain data by subtracting out this noise. In other cases, an auxiliary channel can act as a witness to a glitch in our detector, identifying when it is misbehaving so that we know not to trust that part of the data. The GW150914 Detector Characterisation Paper goes into details of how we check potential detections. In doing data quality checks we are careful to only use the auxiliary channels which record something which would be independent of a passing gravitational wave.

We have 4 flags for data quality:

  1. DATA: All clear. Certified fresh. Eat as much as you like.
  2. CAT1: A critical problem with the instrument. Data from these times are likely to be a dumpster fire of noise. We do not use them in our analyses, and they are currently excluded from our public releases. About 1.7% of Hanford data and 1.0% of Livingston data was flagged with CAT1 in O1. In O2, we got this down to 0.001% for Hanford, 0.003% for Livingston and 0.05% for Virgo.
  3. CAT2: Some activity in an auxiliary channel (possibly the electric boogaloo monitor) which has a well understood correlation with the measured strain channel. You would therefore expect to find some form of glitchiness in the data.
  4. CAT3: There is some correlation in an auxiliary channel and the strain channel which is not understood. We’re not currently using this flag, but it’s kept as an option.

It’s important to verify the data quality before starting your analysis. You don’t want to get excited to discover a completely new form of gravitational wave only to realise that it’s actually some noise from nearby logging. Remember, if a tree falls in the forest and no-one is around, LIGO will still know.

To test our systems, we also occasionally perform a signal injection: we move the mirrors to simulate a signal. This is useful for calibration and for testing analysis algorithms. We don’t perform injections very often (they get in the way of looking for real signals), but these times are flagged. Just as for data quality flags, it is important to check for injections before analysing a stretch of data.

Once passing through all these checks, the data is ready to analyse!

Yes!

Excited Data. Credit: Paramount

Accessing the data

After our data have been lovingly prepared, they are served up in two data formats:

  • Hierarchical Data Format HDF, which is a popular data storage format, as it easily allows for metadata and multiple data sets (like the important data quality flags) to be packaged together.
  • Gravitational Wave Frame GWF, which is the standard format we use internally. Veteran gravitational-wave scientists often get a far-away haunted look when you bring up how the specifications for this file format were decided. It’s best not to mention it unless you are also buying them a stiff drink.

In these files, you will find h(t) sampled at either 4096 Hz or 16384 Hz (either are available). Pick the sampling rate you need depending upon the frequency range you are interested in: the 4096 Hz data are good up to 1.7 kHz, while the 16384 Hz data are good to the limit of the calibration range at 5 kHz.

Files can be downloaded from the GWOSC website. If you want to download a large amount, it is recommended to use the CernVM-FS distributed file system.
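If you’d like a quick start, the gwpy package can fetch the open data for you—a minimal sketch (you’ll need to pip install gwpy first; the GPS time here is GW150914’s):

```python
from gwpy.timeseries import TimeSeries

# Fetch 32 s of open LIGO Hanford data around GW150914 from GWOSC.
gps = 1126259462
data = TimeSeries.fetch_open_data("H1", gps - 16, gps + 16)

# A quick look: whiten the data and zoom in around the event.
white = data.whiten(fftlength=4)
plot = white.crop(gps - 0.2, gps + 0.1).plot()
plot.show()
```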

To check when the gravitational-wave detectors were observing, you can use the Timeline search.

GWOSC Timeline

Screenshot of the GWOSC Timeline showing observing from the fifth science run (S5) in the initial detector era through to the second observing run (O2) of the advanced detector era. Bars show observing of GEO 600 (G1), Hanford (H1 and H2), Livingston (L1) and Virgo (V1). Hanford initially had two detectors housed within its site; the plan in the advanced detector era is to install that second set of equipment as LIGO India instead.

Try this at home

Having gone through all these details, you should now know what our data are, over what ranges they can be analysed, and how to get access to them. Your cup of tea has also probably gone cold. Why not make yourself a new one, and have a couple of biscuits as a reward too. You deserve it!

To help you on your way in starting analysing the data, GWOSC has a set of tutorials (and don’t forget the Data Analysis Guide), and a collection of open source software. Have fun, and remember, it’s never aliens.

Bonus notes

Release schedule

The current policy is that data are released:

  1. In a chunk surrounding an event at time of publication of that event. This enables the new detection to be analysed by anyone. We typically release about an hour of data around an event.
  2. 18 months after the end of the run. This gives us the chance to properly calibrate the data, check the data quality, and then run the analyses we are committed to. A lot of work goes into producing gravitational wave data!

Start marking your calendars now for the release of O3 data.

Summer studenting

In summer 2019, while we were finishing up the Data Analysis Guide, I gave it to one of my summer students, Andrew Kim, as an introduction. Andrew was working on gravitational-wave data analysis, so I hoped that he’d find it useful. He worked through the draft notebook made to accompany the paper and made a number of useful suggestions! He ended up as an author on the paper because of these contributions, which was nice.

The conspiracy of residuals

The Data Analysis Guide is an extremely useful paper. It explains many details of gravitational-wave analysis. The detections made by LIGO and Virgo over the last few years have increased the interest in analysing gravitational waves, making it the perfect time to write such an article. However, that’s not really what motivated us to write it.

In 2017, a paper appeared on the arXiv making claims of suspicious correlations in our LIGO data around GW150914. Could this call into question the very nature of our detection? No. The paper has two serious flaws.

  1. The first argument in the paper was that there were suspicious phase correlations in the data. This is because the authors didn’t window their data before Fourier transforming.
  2. The second argument was that the residuals presented in Figure 1 of the GW150914 Discovery Paper contain a correlation. This is true, but these residuals aren’t actually the results of how we analyse the data. The point of Figure 1 was to show that you don’t need our fancy analysis to see the signal—you can spot it by eye. Unfortunately, doing things by eye isn’t perfect, and this imperfection was picked up on.

The first flaw is a rookie mistake—pretty much everyone does it at some point. I did it starting out as a first-year PhD student, and I’ve run into it with all the undergraduates I’ve worked with writing their own analyses. The authors of this paper are rookies in gravitational-wave analysis, so they shouldn’t be judged too harshly for falling into this trap, and it is something so simple I can’t blame the referee of the paper for not thinking to ask. Any physics undergraduate who has met Fourier transforms (the second year of my degree) should grasp the mistake—it’s not something esoteric you need to be an expert in quantum gravity to understand.

The second flaw is something which could have been easily avoided if we had been more careful in the GW150914 Discovery Paper. We could have easily aligned the waveforms properly, or more clearly explained that the treatment used for Figure 1 is not what we actually do. However, we did write many other papers explaining what we did do, so we were hardly being secretive. While Figure 1 was not perfect, it was not wrong—it might not be what you might hope for, but it is described correctly in the text, and none of the LIGO–Virgo results depend on the figure in any way.

Estimated waveforms from different models

Recovered gravitational waveforms from our analysis of GW150914. The grey line shows the data whitened by the noise spectrum. The dark band shows our estimate for the waveform without assuming a particular source. The light bands show results if we assume it is a binary black hole (BBH) as predicted by general relativity. This plot more accurately represents how we analyse gravitational-wave data. Figure 6 of the GW150914 Parameter Estimation Paper.

Both mistakes are easy to fix. They are at the level of “Oops, that’s embarrassing! Give me 10 minutes. OK, that looks better”. Unfortunately, that didn’t happen.

The paper regrettably got picked up by science blogs, and caused a bit of a flutter. There were demands that LIGO and Virgo publicly explain ourselves. This was difficult—the Collaboration is set up to do careful science, not handle a PR disaster. One of the problems was that we didn’t want to be seen to be policing the use of our data. We can’t check that every paper ever using our data does everything perfectly. We don’t have time, and it probably wouldn’t encourage people to use our data if they knew any mistake would be pulled up by this 1000-person collaboration. A second problem was that getting anything approved as an official Collaboration document takes ages—getting consensus amongst so many people isn’t always easy. What would you do—would you want to be the faceless Collaboration persecuting the helpless, plucky scientists trying to check results?

There were private communications between people in the Collaboration and the authors. It took us a while to isolate the sources of the problems. In the meantime, pressure was mounting for an official™ response. It’s hard to justify why your analysis is correct by gesturing to a stack of a dozen papers—people don’t have time to dig through all that (I actually sent links to 16 papers to a science journalist who contacted me back in July 2017). Our silence may have been perceived as arrogance or guilt.

It was decided that we would put out an unofficial response. Ian Harry had been communicating with the authors, and wrote up his notes which Sean Carroll kindly shared on his blog. Unfortunately, this didn’t really make anyone too happy. The authors of the paper weren’t happy that something was shared via such an informal medium; the post is too technical for the general public to appreciate, and there was a minor typo in the accompanying code which (since fixed) was seized upon. It became necessary to write a formal paper.

Oh, won't somebody please think of the children?

Peer review will save the children! Credit: Fox

We did continue to try to explain the errors to the authors. I have colleagues who spent many hours in a room in Copenhagen trying to explain the mistakes. However, little progress was made, and it was not a fun time™. I can imagine at this point that the authors of the paper were sufficiently angry not to want to listen, which is a shame.

Now that the Data Analysis Guide is published, everyone will be satisfied, right? A refereed journal article should quash all fears, surely? Sadly, I doubt this will be the case. I expect these doubts will keep circulating for years. After all, there are those who still think vaccines cause autism. Fortunately, not believing in gravitational waves won’t kill any children. If anyone asks though, you can tell them that any doubts on LIGO’s analysis have been quashed, and that vaccines cause adults!

For a good account of the back and forth, Natalie Wolchover wrote a nice article in Quanta, and for a more acerbic view, try Mark Hannam’s blog.

Advanced LIGO: O1 is here!

The LIGO sites

Aerial views of LIGO Hanford (left) and LIGO Livingston (right). Both have 4 km long arms (arranged in an L shape) which house the interferometer beams. Credit: LIGO/Caltech/MIT.

The first observing run (O1) of Advanced LIGO began just over a week ago. We officially started at 4 pm British Summer Time, Friday 18 September. It was a little low key: you don’t want lots of fireworks and popping champagne corks next to instruments incredibly sensitive to vibrations. It was a smooth transition from our last engineering run (ER8), so I don’t even think there were any giant switches to throw. Of course, I’m not an instrumentalist, so I’m not qualified to say. In any case, it is an exciting time, and it is good to see some media attention for the Collaboration (with stories from Nature, the BBC and Science).

I would love to keep everyone up to date with the latest happenings from LIGO. However, like everyone in the Collaboration, I am bound by a confidentiality agreement. (You don’t want to cross people with giant lasers.) We can’t have someone saying that we have detected a binary black hole (or that we haven’t) before we’ve properly analysed all the data, finalised calibration, reviewed all the code, double checked our results, and agreed amongst ourselves that we know what’s going on. When we are ready, announcements will come from the LIGO Spokesperson Gabriela González and the Virgo Spokesperson Fulvio Ricci. Event rates are uncertain and we’re not yet at final sensitivity, so don’t expect too much of O1.

There are a couple of things that I can share about our status. Whereas normally everything I write is completely unofficial, these are suggested replies to likely questions.

Have you started taking data?
We began collecting science quality data at the beginning of September, in preparation for the first Observing Run that started on Friday, September 18, and are planning on collecting data for about 4 months.

We certainly do have data, but there’s nothing new about that (other than the improved sensitivity). Data from the fifth and sixth science runs of initial LIGO are now publicly available from the Gravitational Wave Open Science Center. You can go through it and try to find anything we missed (which is pretty cool).

Have you seen anything in the data yet?
We analyse the data “online” in an effort to provide fast information to astronomers for possible follow up of triggers using a relatively low statistical significance (a false alarm rate of ~1/month). We have been tuning the details of the communication procedures, and we have not yet automated all the steps that can be, but we will send alerts to astronomers above the agreed threshold as soon as we can after those triggers are identified. Since the analysis to validate a candidate in gravitational-wave data can take months, we will not be able to say anything about results in the data on short time scales. We will share any and all results when ready, though probably not before the end of the Observing Run.

Analysing the data is tricky, and requires lots of computing time, as well as careful calibration of the instruments (including how many glitches they produce which could look like a gravitational-wave trigger). It takes a while to get everything done.

We heard that you sent a gravitational-wave trigger to astronomers already—is that true?
During O1, we will send alerts to astronomers above a relatively low significance threshold; we have been practising communication with astronomers in ER8. We are following this policy with partners who have signed agreements with us and have observational capabilities ready to follow up triggers. Because we cannot validate gravitational-wave events until we have enough statistics and diagnostics, we have confidentiality agreements about any triggers that are shared, and we hope all involved abide by those rules.

I expect this is a pre-emptive question and answer. It would be amazing if we could see an electromagnetic (optical, gamma-ray, radio, etc.) counterpart to a gravitational wave. (I’ve done some work on how well we can localise gravitational-wave sources on the sky). It’s likely that any explosion or afterglow that is visible will fade quickly, so we want astronomers to be able to start looking straight-away. This means candidate events are sent out before they’re fully vetted: they could just be noise, they could be real, or they could be a blind injection. A blind injection is when a fake signal is introduced to the data secretly; this is done to keep us honest and check that our analysis does work as expected (since we know what results we should get for the signal that was injected). There was a famous blind injection during the run of initial LIGO called Big Dog. (We take gravitational-wave detection seriously). We’ve learnt a lot from injections, even if they are disappointing. Alerts will be sent out for events with false alarm rates of about one per month, so we expect a few across O1 just because of random noise.

While I can’t write more about the science from O1, I will still be posting about astrophysics, theory and how we analyse data. Those who are impatient can be reassured that gravitational waves have been detected, just indirectly, from observations of binary pulsars.

Periastron shift of binary pulsar

The orbital decay of the Hulse–Taylor binary pulsar (PSR B1913+16). The points are measured values, while the curve is the theoretical prediction for gravitational waves. I love this plot. Credit: Weisberg & Taylor (2005).

Update: Advanced LIGO detects gravitational waves!

LIGO Magazine: Issue 7

It is an exciting time in LIGO. The start of the first observing run (O1) is imminent. I think they just need to sort out a button that is big enough and red enough (or maybe gather a little more calibration data… ), and then it’s all systems go. Making the first direct detection of gravitational waves with LIGO would be an enormous accomplishment, but that’s not all we can hope to achieve: what I’m really interested in is what we can learn from these gravitational waves.

The LIGO Magazine gives a glimpse inside the workings of the LIGO Scientific Collaboration, covering everything from the science of the detector to what collaboration members like to get up to in their spare time. The most recent issue was themed around how gravitational-wave science links in with the rest of astronomy. I enjoyed it, as I’ve been recently working on how to help astronomers look for electromagnetic counterparts to gravitational-wave signals. It also features a great interview with Joseph Taylor Jr., one of the discoverers of the famous Hulse–Taylor binary pulsar. The back cover features an article I wrote about parameter estimation: an expanded version is below.

How does parameter estimation work?

Detecting gravitational waves is one of the great challenges in experimental physics. A detection would be hugely exciting, but it is not the end of the story. Having observed a signal, we need to work out where it came from. This is a job for parameter estimation!

How we analyse the data depends upon the type of signal and what information we want to extract. I’ll use the example of a compact binary coalescence, that is, the inspiral (and merger) of two compact objects—neutron stars or black holes (not marshmallows). Parameters that we are interested in measuring are things like the mass and spin of the binary’s components, its orientation, and its position.

For a particular set of parameters, we can calculate what the waveform should look like. This is actually rather tricky; including all the relevant physics, like precession of the binary, can make for some complicated and expensive-to-calculate waveforms. The first part of the video below shows a simulation of the coalescence of a black-hole binary; you can see the gravitational waveform (with its characteristic chirp) at the bottom.

We can compare our calculated waveform with what we measured to work out how well they fit together. If we take away the wave from what we measured with the interferometer, we should be left with just noise. We understand how our detectors work, so we can model how the noise should behave; this allows us to work out how likely it would be to get the precise noise we need to make everything match up.

To work out the probability that the system has a given set of parameters, we take the likelihood for our left-over noise and fold in what we already knew about the values of the parameters—for example, that any location on the sky is equally possible, that neutron-star masses are around 1.4 solar masses, or that the total mass must be larger than that of a marshmallow. For those who like details, this is done using Bayes’ theorem.
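
As a cartoon of how this works, here is a one-parameter toy version (a sketch with entirely invented numbers and a stand-in “waveform”; real analyses use physical waveform models): we compute the likelihood that the residual is pure noise for each trial value, multiply by the prior, and normalise—Bayes’ theorem in action.

```python
import numpy as np

# Toy one-parameter inference: data = signal(mass) + Gaussian noise.
def waveform(mass, t):
    """Stand-in 'waveform': a sinusoid whose frequency depends on the mass."""
    return np.sin(2 * np.pi * mass * t)

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 500)
true_mass, sigma = 1.4, 0.5                      # invented values
data = waveform(true_mass, t) + rng.normal(0, sigma, t.size)

masses = np.linspace(1.0, 2.0, 200)              # grid of trial parameter values
# Gaussian log-likelihood that the residual (data minus template) is pure noise.
log_like = np.array([-0.5 * np.sum((data - waveform(m, t))**2) / sigma**2
                     for m in masses])
prior = np.ones_like(masses)                     # flat prior over the range
posterior = np.exp(log_like - log_like.max()) * prior
posterior /= np.trapz(posterior, masses)         # normalise the posterior
print(f"Most probable mass: {masses[np.argmax(posterior)]:.2f}")
```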

We now want to map out this probability distribution, to find the peaks of the distribution corresponding to the most probable parameter values and also chart how broad these peaks are (to indicate our uncertainty). Since we can have many parameters, the space is too big to cover with a grid: we can’t just systematically chart parameter space. Instead, we randomly sample the space and construct a map of its valleys, ridges and peaks. Doing this efficiently requires cunning tricks for picking how to jump between spots: exploring the landscape can take some time, and we may need to calculate millions of different waveforms!
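
One of the simplest of these cunning tricks is the Metropolis–Hastings algorithm: propose a random jump, and accept it with a probability set by the ratio of posterior values, so the samples end up tracing out the distribution. A minimal sketch (the posterior and step size here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(m):
    """Toy log-posterior: a Gaussian peaked at m = 1.4 (invented for illustration)."""
    return -0.5 * ((m - 1.4) / 0.1)**2

samples, m = [], 1.0                 # start away from the peak
for _ in range(10000):
    m_new = m + rng.normal(0, 0.05)  # propose a jump (step size is a tuning choice)
    # Accept with probability min(1, posterior_new / posterior_old).
    if np.log(rng.uniform()) < log_posterior(m_new) - log_posterior(m):
        m = m_new
    samples.append(m)                # rejections repeat the old point

samples = np.array(samples[1000:])   # discard burn-in spent finding the peak
print(f"Mean: {samples.mean():.3f}, standard deviation: {samples.std():.3f}")
```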

Having computed the probability distribution for our parameters, we can now tell an astronomer how much of the sky they need to observe to have a 90% chance of looking at the source, give the best estimate for the mass (plus uncertainty), or even figure something out about what neutron stars are made of (probably not marshmallow). This is the beginning of gravitational-wave astronomy!
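
Given a bag of posterior samples like those above, reporting a 90% credible interval is just a matter of percentiles. A quick sketch (using stand-in samples rather than real sampler output):

```python
import numpy as np

# Pretend these are posterior samples of the mass from a sampler like the one above.
samples = np.random.default_rng(1).normal(1.4, 0.1, 10000)
lower, upper = np.percentile(samples, [5, 95])
print(f"90% credible interval: {lower:.2f} to {upper:.2f} solar masses")
```

The 90% sky area works the same way in spirit: rank patches of sky by probability and add them up until you reach 90%.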

Monty and Carla map parameter space

Monty, Carla and the other samplers explore the probability landscape. Nutsinee Kijbunchoo drew the version for the LIGO Magazine.

Advanced LIGO (the paper)

Continuing with my New Year’s resolution to write a post on every published paper, the start of March sees another full author list LIGO publication. Appearing in Classical & Quantum Gravity, the minimalistically titled Advanced LIGO is an instrumental paper. It appears as part of a special focus issue on advanced gravitational-wave detectors, and is happily free to read (good work there). This is The Paper™ for describing how the advanced detectors operate. I think it’s fair to say that my contribution to this paper is 0%.

LIGO stands for Laser Interferometer Gravitational-wave Observatory. As you might imagine, LIGO tries to observe gravitational waves by measuring them with a laser interferometer. (It won’t protect your fencing). Gravitational waves are tiny, tiny stretches and squeezes of space. To detect them we need to measure changes in length extremely accurately. I had assumed that Advanced LIGO would achieve this supreme sensitivity through some dark magic invoked by sacrificing the blood, sweat, tears and even coffee of many hundreds of PhD students upon the altar of science. However, this paper actually shows it’s just really, really, REALLY careful engineering. And giant frickin’ laser beams.

The paper goes through each aspect of the design of the LIGO detectors. It starts with details of the interferometer. LIGO uses giant lasers to measure distances extremely accurately. Lasers are bounced along two 3994.5 m arms and interfered to measure a change in length between the two. In spirit, it is a giant Michelson interferometer, but it has some cunning extra features. Each arm is a Fabry–Pérot etalon, which means that the laser is bounced up and down the arms many times to build up extra sensitivity to any change in length. There are various extra components to make sure that the laser beam is as stable as possible; all in all, there are rather a lot of mirrors, each of which is specially tweaked to make sure that some acronym is absolutely perfect.
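
To get a feel for why all this care is needed, here’s the basic arithmetic (a sketch; the strain value is a typical order of magnitude for a detectable signal, not a number from the paper): a gravitational wave of strain h changes an arm of length L by ΔL = hL.

```python
# Order-of-magnitude arm-length change for a typical strain (illustrative numbers).
h = 1e-21     # dimensionless strain, a typical size for a detectable signal
L = 3994.5    # arm length in metres
delta_L = h * L
print(f"Arm length change: {delta_L:.1e} m")  # about 4e-18 m, much smaller than a proton
```

The Fabry–Pérot bounces effectively multiply this tiny change, which is one reason they are there.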

Advanced LIGO optical configuration. It’s a bit more complicated than a basic Michelson interferometer.

Fig. 1 from Aasi et al. (2015), the Advanced LIGO optical configuration. All the acronyms have to be carefully placed in order for things to work. The laser beam starts from the left, passing through subsystems to make sure it’s stable. It is split in two to pass into the interferometer arms at the top and right of the diagram. The laser is bounced many times between the mirrors to build up sensitivity. The interference pattern is read out at the bottom. Normally, the light should interfere destructively, so the output is dark. A change to this indicates a change in length between the arms. That could be because of a passing gravitational wave.

The next section deals with all the various types of noise that affect the detector. It’s this noise that makes it such fun to look for the signals. To be honest, pretty much everything I know about the different types of noise I learnt from Space-Time Quest. This is a lovely educational game developed by people here at the University of Birmingham. In the game, you have to design the best gravitational-wave detector that you can for a given budget. There’s a lot of science that goes into working out how sensitive the detector is. It takes a bit of practice to get into it (remember to switch on the laser first), but it’s very easy to get competitive. We often use the game as part of outreach workshops, and we’ve had some school groups get quite invested in the high-score tables. My tip is that going underground doesn’t seem to be worth the money. Of course, if you happen to be reviewing the proposal to build the Einstein Telescope, you should completely ignore that, and just concentrate on how cool the digging machine looks. Space-Time Quest shows how difficult it can be to optimise sensitivity. There are trade-offs between different types of noise, and these have been carefully studied. What Space-Time Quest doesn’t show is just how much work it takes to engineer a detector.

The fourth section is a massive shopping list of components needed to build Advanced LIGO. There are rather more options than in Space-Time Quest, but many are familiar, even if given less friendly names. If this section were the list of contents for some Ikea furniture, you would know that you’ve made a terrible life-choice; there’s no way you’re going to assemble this before Monday. Highlights include the 40 kg mirrors. I’m sure breaking one of those would incur more than seven years’ bad luck. For those of you playing along with Space-Time Quest at home, the mirrors are fused silica. Section 4.8.4 describes how to get the arms to lock, one of the key steps in commissioning the detectors. The section concludes with details of how to control such a complicated instrument; the key seems to be to have so many acronyms that there’s no space for any component to move in an unwanted way.

The paper closes with an outlook for the detector sensitivity. With such a complicated instrument it is impossible to be certain how things will go. However, things seem to have been going smoothly so far, so let’s hope that this continues. The current plan is:

  • 2015 3 months observing at a binary neutron star (BNS) range of 40–80 Mpc.
  • 2016–2017 6 months observing at a BNS range of 80–120 Mpc.
  • 2017–2018 9 months observing at a BNS range of 120–170 Mpc.
  • 2019 Achieve full sensitivity with a BNS range of 200 Mpc.

The BNS range is the distance at which a typical binary made up of two 1.4 solar mass neutron stars could be detected when averaging over all orientations. If you have a perfectly aligned binary, you can detect it out to a further distance, the BNS horizon, which is about 2.26 times the BNS range. There are a couple of things to note from the plan. First, the initial observing run (O1 to the cool kids) is this year! The second is how much the range will extend before hitting design sensitivity. This should significantly increase the number of possible detections, as each doubling of the range corresponds to an increase in volume by a factor of eight. Coupling this with the increasing length of the observing runs should mean that the chance of a detection increases every year. It will be an exciting few years for Advanced LIGO.
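
The scaling is easy to check for yourself (a quick sketch using the ranges from the plan above):

```python
# How BNS range translates into horizon distance and surveyed volume.
bns_range_early = 80.0              # Mpc, e.g. the low end of the 2016-2017 run
bns_range_design = 200.0            # Mpc, design sensitivity
horizon = 2.26 * bns_range_design   # optimally oriented binaries are visible further out
volume_gain = (bns_range_design / bns_range_early)**3
print(f"Horizon at design sensitivity: {horizon:.0f} Mpc")
print(f"Volume gain from 80 Mpc to 200 Mpc range: {volume_gain:.1f}x")  # about 15.6x
```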

arXiv: 1411.4547 [gr-qc]
Journal: Classical & Quantum Gravity; 32(7):074001(41); 2015
Science summary: Introduction to LIGO & Gravitational Waves
Space-Time Quest high score: 34.859 Mpc

Narrow-band search of continuous gravitational-wave signals from Crab and Vela pulsars in Virgo VSR4 data

Collaboration papers

I’ve been a member of the LIGO Scientific Collaboration for just over a year now. It turns out that designing, building and operating a network of gravitational-wave detectors is rather tricky, maybe even harder than completing Super Mario Bros. 3, so it takes a lot of work. There are over 900 collaboration members, all working on different aspects of the project. Since so much of the research is inter-related, certain papers (such as those that use data from the instruments) written by collaboration members have to include the name of everyone who works (at least half the time) on LIGO-related things. After a year in the collaboration, I have now levelled up to be included in the full author list (if there was an initiation ritual, I’ve suppressed the memory). This is weird: papers appear with my name on that I’ve not actually done any work for. It seems sort of like having to bring cake into your office on your birthday: you do have to share your (delicious) cupcakes with everyone else, but in return you get cake even when your birthday is nowhere near. Perhaps all those motivational posters were right about the value of teamwork? I do feel a little guilty about all the extra trees that will die because of people printing out these papers.

My New Year’s resolution was to write a post about every paper I have published. I am going to try to do the LIGO papers too. This should at least make sure that I actually read them all. There are official science summaries written by the people who did the work, which may be better if you want an accurate explanation. My first collaboration paper is a joint publication of the LIGO and Virgo collaborations (even more sharing).

Searching for gravitational waves from pulsars

Neutron stars are formed from the cores of dead stars. When a star’s nuclear fuel starts to run out, its core collapses. The most massive form black holes, the lightest (like our Sun) form white dwarfs, and the ones in the middle form neutron stars. These are really dense: they have about the same mass as our entire Sun (perhaps twice the Sun’s mass), but are just a few kilometres across. Pulsars are a type of neutron star; they emit a beam of radiation that sweeps across the sky as they rotate, sort of like a lighthouse. If one of these beams hits the Earth, we see a radio pulse. The pulses come regularly, so you can work out how fast the pulsar is spinning (and do some other cool things too).

A pulsar

The mandatory cartoon of a pulsar that everyone uses. The top part shows the pulsar and its beams rotating, and the bottom part shows the signal measured on Earth. We’re not really sure where the beams come from; it’ll be something to do with magnetic fields. Credit: M. Kramer

Because pulsars rotate really quickly, if they have a little bump on their surface, they can emit (potentially detectable) gravitational waves. This paper searches for these signals from the Crab and Vela pulsars. We know where these pulsars are, and how quickly they are rotating, so it’s possible to do a targeted search for gravitational waves (only checking the data for signals that are close to what we expect). Importantly, some wiggle room in the frequency is allowed, just in case different parts of the pulsar slosh around at slightly different rates and the gravitational-wave frequency doesn’t perfectly match what we’d expect from the frequency of pulses; the search is done in a narrow band of frequencies around the expected one. The data used are from Virgo’s fourth science run (VSR4). That was taken back in 2011 (around the time that Captain America was released). The search technique is new (Astone et al., 2014); it’s the first to incorporate this searching in a narrow band of frequencies. I think the point was to test the technique on real data before the advanced detectors start producing new data.
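
The gravitational-wave frequency for a rotating neutron star with a bump is twice the rotation frequency, so the band searched is a small window around that. A sketch with the Crab’s rough numbers (the fractional bandwidth here is an invented illustration, not the value used in the paper):

```python
# Narrow-band search window around twice the pulsar rotation frequency.
crab_rotation_frequency = 29.7        # Hz (a period of about 33 ms)
gw_frequency = 2 * crab_rotation_frequency
fractional_band = 1e-3                # hypothetical wiggle room for the search
half_width = fractional_band * gw_frequency
print(f"Search band: {gw_frequency - half_width:.2f} to "
      f"{gw_frequency + half_width:.2f} Hz around {gw_frequency:.1f} Hz")
```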

Composite Crab

Composite image of Hubble (red) optical observations and Chandra (blue) X-ray observations of the Crab pulsar. The pulsar has a mass of 1.4 solar masses and rotates every 33 ms. Credit: Hester et al.

The pulsars emit gravitational waves continuously; they just keep humming as they rotate. The frequency will slow gradually as the pulsar loses energy. As the Earth rotates, the humming gets louder and quieter because the sensitivity of gravitational-wave detectors depends upon where the source is in the sky. Putting this all together gives you a good template for what the signal should look like, and you can see how well it fits the data. It’s kind of like trying to find the right jigsaw piece by searching for the one that interlocks best with those around it. Of course, there is a lot of noise in our detectors, so it’s like if the jigsaw was actually made out of jelly: you could get many pieces to fit if you squeeze them the right way, but then people wouldn’t believe that you’ve actually found the right one. Some detection statistics (which I don’t particularly like, but probably give a sensible answer) are used to quantify how likely it is that they’ve found a piece that fits (that there is a signal). The whole pipeline is tested by analysing some injected signals (artificial signals, made to check that things work, introduced both by adding them digitally to the data and by actually jiggling the mirrors of the interferometer). It seems to do OK here.
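
The jigsaw-fitting is, at heart, cross-correlation: slide a template along the data and see where it lines up best. A toy sketch (signal, noise and numbers all invented; real searches use the carefully modelled templates described above):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
template = np.sin(2 * np.pi * 59.4 * t[:200])   # short template at the expected frequency

# Bury a scaled copy of the template in noise at a known offset, then try to recover it.
data = rng.normal(0, 1.0, t.size)
offset = 700
data[offset:offset + template.size] += 0.8 * template

# Slide the template along the data and record the correlation at each lag.
correlation = np.correlate(data, template, mode="valid")
print(f"Template injected at {offset}, recovered at {np.argmax(correlation)}")
```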

Turning to the actual data, they very carefully show that they don’t think they’ve detected anything for either Vela or Crab. Of course, all the cool kids don’t detect gravitational waves, so that’s not too surprising.

Zoidberg is an expert on crabs, pulsing or otherwise

This paper doesn’t claim a detection of gravitational waves, but it doesn’t stink like Zoidberg.

Having not detected anything, you can place an upper limit on the amplitude of any waves that are emitted (because if they were larger, you would’ve detected them). This amplitude can then be compared with what’s expected from the spin-down limit: the amplitude that would be required to explain the slowing of the pulsar. We know how the pulsars are slowing, but not why; it could be because of energy being lost to magnetic fields (the energy for the beams has to come from somewhere), it could be through energy lost as gravitational waves, it could be because of some internal damping, it could all be gnomes. The spin-down limit assumes that it’s all because of gravitational waves: you couldn’t have bigger amplitude waves than this unless something else (that would have to be gnomes) was pumping energy into the pulsar to keep it spinning. The upper limit for the Vela pulsar is about the same as the spin-down limit, so we’ve not learnt anything new. For the Crab pulsar, the upper limit is about half the spin-down limit, which is something, but not really exciting. Hopefully, doing the same sort of searches with data from the advanced detectors will be more interesting.
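
For the curious, the spin-down limit follows from assuming all the rotational energy being lost goes into gravitational waves. A sketch with rough textbook numbers for the Crab (the distance, moment of inertia and spin-down rate are approximate fiducial values, not taken from the paper):

```python
import math

# Spin-down limit: the strain if ALL the spin-down energy went into gravitational waves.
G, c = 6.674e-11, 2.998e8    # gravitational constant and speed of light, SI units
I = 1e38                     # moment of inertia in kg m^2 (fiducial value)
d = 2.0 * 3.086e19           # distance: about 2 kpc in metres
f_rot = 29.7                 # rotation frequency in Hz
f_dot = 3.7e-10              # magnitude of the spin-down rate in Hz/s (approximate)

h_sd = math.sqrt(2.5 * G * I * f_dot / (c**3 * f_rot)) / d
print(f"Crab spin-down limit: h = {h_sd:.1e}")  # about 1.4e-24
```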

In conclusion, the contents of this paper are well described by its title:

  • Narrow-band search: It uses a new search technique that is not restricted to the frequency assumed from timing pulses
  • of continuous gravitational-wave signals: It’s looking for signals from rotating neutron stars (which just keep going), so the signals are always in the data
  • from Crab and Vela pulsars: It considers two particular sources, so we know where in parameter space to look for signals
  • in Virgo VSR4 data: It uses real data, but from the first generation detectors, so it’s not surprising it doesn’t see anything

It’s probably less fun than eating a jigsaw-shaped jelly, but it might be more useful in the future.

arXiv: 1410.8310 [gr-qc]
Journal: Physical Review D; 91(2):022004(15); 2015
Science summary: An Extended Search for Gravitational Waves from the Crab and Vela Pulsars
Percentage of paper that is author list: ~30%