An introduction to LIGO–Virgo data analysis

LIGO and Virgo make their data open for anyone to try analysing [bonus note]. If you’re a student looking for a project, a teacher planning a class activity, or a scientist working on a paper, this data is waiting for you to use. Understanding how to analyse the data can be tricky. In this post, I’ll share some of the resources made by LIGO and Virgo to help introduce gravitational-wave analysis. These papers together should give you a good grounding in how to get started working with gravitational-wave data.

If you’d like a more in-depth understanding, I’d recommend visiting your local library for Michele Maggiore’s Gravitational Waves: Volume 1.

The Data Analysis Guide

Title: A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals
arXiv: 1908.11170 [gr-qc]
Journal: Classical & Quantum Gravity; 37(5):055002(54); 2020
Tutorial notebook: GitHub; Google Colab; Binder
Code repository: Data Guide
LIGO science summary: A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals

It took many decades to develop the technology necessary to build gravitational-wave detectors. Similarly, gravitational-wave data analysis has developed over many decades—I’d say LIGO analysis was really kicked off in the early 1990s by Kip Thorne’s group. There are now hundreds of papers on various aspects of gravitational-wave analysis. If you are new to the area, where should you start? Don’t panic! For the binary sources discovered so far, this Data Analysis Guide has you covered.

More details: The Data Analysis Guide

The GWOSC Paper

Title: Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo
arXiv: 1912.11716 [gr-qc]
Website: Gravitational Wave Open Science Center
LIGO science summary: Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo

Data from the LIGO and Virgo detectors is released by the Gravitational Wave Open Science Center (GWOSC, pronounced, unfortunately, as it is spelt). If you want to try analysing our delicious data yourself, either searching for signals or studying the signals we have found, GWOSC is the place to start. This paper outlines how these data are produced, going from our laser interferometers to your hard drive. The paper specifically looks at the data released for our first and second observing runs (O1 and O2); however, GWOSC also hosts data from the initial detectors’ fifth science run (S5) and sixth science run (S6), and will be updated with new data in the future.

If you do use data from GWOSC, please remember to say thank you.

More details: The GWOSC Paper

001100 010010 011110 100001 101101 110011

I thought I saw a 2! Credit: Fox

The Data Analysis Guide

Synopsis: Data Analysis Guide
Read this if: You want an introduction to signal analysis
Favourite part: This is a great resource for new students [bonus note]

Gravitational-wave detectors measure ripples in spacetime. They record a simple time series of the stretching and squeezing of space as a gravitational wave passes. Well, they measure that, plus a whole lot of noise. Most of the time it is just noise. How do we go from this time series to discoveries about the Universe’s black holes and neutron stars? This paper gives the outline; it covers (in order):

  1. An introduction to observations at the time of writing
  2. The basics of LIGO and Virgo data—what it is that we analyse
  3. The basics of detector noise—how we describe sources of noise in our data
  4. Fourier analysis—how we go from a time series to looking at the data as a function of frequency, which is the most natural way to analyse the data
  5. Time–frequency analysis and stationarity—how we check the stability of data from our detectors
  6. Detector calibration and data quality—how we make sure we have good quality data
  7. The noise model and likelihood—how we use our understanding of the noise, under the assumption of it being stationary, to work out the likelihood of different signals being in the data
  8. Signal detection—how we identify times in the data which have a transient signal present
  9. Inferring waveform and physical parameters—how we estimate the parameters of the source of a gravitational wave
  10. Residuals around GW150914—a consistency check that we have understood the noise surrounding our first detection

The paper works through things thoroughly, and I would encourage you to work through it if you are interested.

I won’t summarise everything here; I want to focus on the (roughly undergraduate-level) foundations of how we do our analysis in the frequency domain. My discussion of the GWOSC Paper goes into more detail on the basics of LIGO and Virgo data, and some details on calibration and data quality. I’ll leave talking about residuals to this bonus note, as it involves a long tangent and me needing to lie down for a while.

Fourier analysis

The signal our detectors measure is a time series d(t). This may just contain noise, d(t) = n(t), or it may also contain a signal, d(t) = n(t) + h(t).

There are many sources of noise for our detectors. The different sources can affect different frequencies. If we assume that the noise is stationary, so that its properties don’t change with time, we can simply describe the properties of the noise with the power spectral density S_n(f). On average we expect the noise at a given frequency to be zero, but with it fluctuating up and down with a variance given by the power spectral density. We typically approximate the noise as Gaussian, such that

n(f) \sim \mathcal{N}(0; S_n(f)/2),

where we use \mathcal{N}(\mu; \sigma^2) to represent a normal distribution with mean \mu and variance \sigma^2. The approximations of stationary and Gaussian noise are good most of the time. The noise does vary over time, but is usually effectively stationary over the durations we look at for a signal. The noise is also mostly Gaussian except for glitches. These are taken into account when we search for signals, but we’ll ignore them for now. The statistical description of the noise in terms of the power spectral density allows us to understand our data, but this understanding comes as a function of frequency: we must transform our time-domain data to frequency-domain data.
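To make this concrete, here is a minimal sketch (my own illustration, not LIGO code) of generating stationary Gaussian noise from a power spectral density: each frequency bin gets an independent Gaussian draw with variance set by S_n(f)/2. The toy PSD and the normalisation convention are my assumptions.

```python
import numpy as np

fs = 4096                  # sampling rate (Hz)
duration = 8               # length of data (s)
n = fs * duration
df = 1 / duration          # frequency resolution (Hz)
freqs = np.fft.rfftfreq(n, d=1 / fs)

def toy_psd(f):
    """A made-up PSD: loud at low frequencies, flat at high frequencies."""
    return 1e-44 * (1 + (20 / np.maximum(f, 1)) ** 4)

rng = np.random.default_rng()
# Real and imaginary parts are independent Gaussians; the variance of each
# bin is set by S_n(f)/2, as in the text (the 4 df is an FFT convention).
sigma = np.sqrt(toy_psd(freqs) / (4 * df))
n_tilde = sigma * (rng.standard_normal(freqs.size)
                   + 1j * rng.standard_normal(freqs.size))
noise = np.fft.irfft(n_tilde, n=n) * n * df  # back to the time domain
```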

To go from d(t) to d(f) we can use a Fourier transform. Fourier transforms are a way of converting a function of one variable into a function of a reciprocal variable—in the case of time you convert to frequency. Fourier transforms encode all the information of the original function, so it is possible to convert back and forth as you like. Really, a Fourier transform is just another way of looking at the same function.

The Fourier transform is defined as

d(f) = \mathcal{F}_f\left\{d(t)\right\} = \int_{-\infty}^{\infty} d(t) \exp(-2\pi i f t) \,\mathrm{d}t.

Now, from this you might notice a problem when it comes to real data analysis, namely that the integral is defined over an infinite amount of time. We don’t have that much data. Instead, we only have a short period.

We could recast the integral above over a shorter time if instead of taking the Fourier transform of d(t), we take the Fourier transform of d(t) \times w(t) where w(t) is some window function which goes to zero outside of the time interval we are looking at. What we end up with is a convolution of the function we want with the Fourier transform of the window function,

\mathcal{F}_f\left\{d(t)w(t)\right\} = d(f) \ast w(f).

It is important to pick a window function which minimises the distortion to the signal that we want. If we just take a tophat (also known as a boxcar or rectangular window, possibly on account of its infamous criminal background) function which abruptly cuts off the data at the ends of the time interval, we find that w(f) is a sinc function. This is not a good thing, as it leads to all sorts of unwanted correlations between different frequencies, commonly known as spectral leakage. A much better choice is a function which smoothly tapers to zero at the edges. Using a tapering window, we lose a little data at the edges (we need to be careful choosing the length of the data analysed), but we can avoid the significant nastiness of spectral leakage. A tapering window function should always be used. Our finite-time Fourier transform is then a good approximation to the exact d(f).
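To see the leakage for yourself, here is a little sketch (my own example, not from the paper) comparing a tophat window with a smoothly tapering Tukey window on a pure sinusoid:

```python
import numpy as np
from scipy.signal import windows

fs = 4096
t = np.arange(0, 4, 1 / fs)          # 4 s of data
d = np.sin(2 * np.pi * 61.3 * t)     # a tone not centred on a frequency bin

tophat = np.ones_like(d)                   # abruptly cuts off at the ends
tukey = windows.tukey(d.size, alpha=0.1)   # smoothly tapers to zero

freqs = np.fft.rfftfreq(d.size, 1 / fs)
for name, w in [("tophat", tophat), ("tukey", tukey)]:
    spectrum = np.abs(np.fft.rfft(d * w))
    # Any power far from 61.3 Hz has leaked there through the window.
    leakage = spectrum[freqs > 200].max() / spectrum.max()
    print(f"{name}: relative leakage above 200 Hz = {leakage:.1e}")
```

The tophat should show far more leakage at distant frequencies than the tapering window.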

Data treatment to highlight a signal

Data processing to reveal GW150914. The top panel shows raw Hanford data. The second panel shows a window function being applied. The third panel shows the data after being whitened. This cleans up the data, making it easier to pick out the signal from all the low frequency noise. The bottom panel shows the whitened data after a bandpass filter is applied to pick out the signal. We don’t use the bandpass filter in our analysis (it is just for illustration), but the other steps reflect how we treat our data. Figure 2 of the Data Analysis Guide.

Now we have our data in the frequency domain, it is simple enough to compare the data to the expected noise at a given frequency. If we measure something loud at a frequency with lots of noise we should be less surprised than if we measure something loud at a frequency which is usually quiet. This is kind of like how someone shouting is less startling at a rock concert than in a library. The appropriate way to weight the data is to divide by the square root of the power spectral density, d_\mathrm{w}(f) \propto d(f)/[S_n(f)]^{1/2}. This is known as whitening. Whitened data should have equal amplitude fluctuations at all frequencies, allowing for easy comparisons.
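As a rough sketch of whitening (my own illustration, not production code; real analyses estimate the PSD more carefully, for example from neighbouring stretches of data without the signal):

```python
import numpy as np
from scipy.signal import welch, windows

def whiten(data, fs):
    """Whiten a time series: d_w(f) = d(f) / sqrt(S_n(f) / 2)."""
    # Estimate the power spectral density with Welch's method.
    f_psd, psd = welch(data, fs=fs, nperseg=4 * fs)
    # Taper before Fourier transforming to limit spectral leakage.
    d_f = np.fft.rfft(data * windows.tukey(data.size, alpha=0.1))
    freqs = np.fft.rfftfreq(data.size, 1 / fs)
    # Down-weight each bin by the expected size of the noise fluctuations.
    d_white_f = d_f / np.sqrt(np.interp(freqs, f_psd, psd) / 2)
    d_white = np.fft.irfft(d_white_f, n=data.size)
    return d_white / np.std(d_white)  # unit variance, for easy comparison
```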

Now we understand the statistical properties of the noise, we can do some analysis! We can start by testing our assumption that the data are stationary and Gaussian by checking that after whitening we get the expected distribution. We can also define the likelihood of obtaining the data d(t) given a model of a gravitational-wave signal h(t), as the properties of the noise mean that d(f) - h(f) \sim \mathcal{N}(0; S_n(f)/2). Combining the likelihood for each individual frequency gives the overall likelihood

\displaystyle p(d|h) \propto \exp\left[-\int_{-\infty}^{\infty} \frac{|d(f) - h(f)|^2}{S_n(f)} \mathrm{d}f \right].

This likelihood is at the heart of parameter estimation, as we can work out the probability of there being a signal with a given set of parameters. The Data Analysis Guide goes through many different analyses (including parameter estimation) and demonstrates how to check that noise is nice and Gaussian.
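In practice the integral becomes a sum over discrete frequency bins. A minimal sketch (my own, using one-sided frequencies, which folds the two-sided integral into a factor of 2):

```python
import numpy as np

def log_likelihood(d_f, h_f, psd, df):
    """Gaussian log-likelihood (up to a constant) of template h in data d.

    d_f and h_f are one-sided frequency-domain arrays, psd holds S_n(f)
    in the same bins, and df is the bin width. The factor of 2 comes from
    folding the integral over negative frequencies onto positive ones.
    """
    return -2 * df * np.sum(np.abs(d_f - h_f) ** 2 / psd)
```

Parameter estimation then boils down to evaluating this for template waveforms h with different source parameters.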

Gaussian residuals for GW150914

Distribution of residuals for 4 seconds of data around GW150914 after subtracting the maximum likelihood waveform. The residuals are the whitened Fourier amplitudes, and they should be consistent with a unit Gaussian. The residuals follow the expected distribution and show no sign of non-Gaussianity. Figure 14 of the Data Analysis Guide.

Homework

The Data Analysis Guide contains much more material on gravitational-wave data analysis. If you want to delve further, there are many excellent papers cited. Favourites of mine include Finn (1992); Finn & Chernoff (1993); Cutler & Flanagan (1994); Flanagan & Hughes (1998); Allen (2005); and Allen et al. (2012). I would also recommend the tutorials available from GWOSC and the lectures from the Open Data Workshops.

The GWOSC Paper

Synopsis: GWOSC Paper
Read this if: You want to analyse our gravitational wave data
Favourite part: All the cool projects done with this data

You’re now up-to-speed with some ideas of how to analyse gravitational-wave data, you’ve made yourself a fresh cup of really hot tea, and you’re ready to get to work! All you need are the data—this paper explains where they come from.

Data production

The first step in getting gravitational-wave data is the easy one. You need to design a detector, convince science agencies to invest something like half a billion dollars in building one, then spend 40 years carefully researching the necessary technology and putting it all together as part of an international collaboration of hundreds of scientists, engineers and technicians, before painstakingly commissioning the instrument and operating it. For your convenience, we have done this step for you, but do feel free to try it yourself at home.

Gravitational-wave detectors like Advanced LIGO are built around an interferometer: they have two arms at right angles to each other, and we bounce lasers up and down them to measure their length. A passing gravitational wave will change the length of one arm relative to the other. This changes the time taken to travel along one arm compared to the other. Hence, when the two bits of light reach the output of the interferometer, they’ll have a different phase: where normally one light wave would have a peak, it’ll have a trough. This change in phase will change how light from the two arms combines together. When no gravitational wave is present, the light interferes destructively, almost cancelling out so that the output is dark. We measure the brightness of light at the output, which tells us about how the length of the arms changes.

We want our detector to measure the gravitational-wave strain, that is, the fractional change in length of the arms,

\displaystyle h(t) = \frac{\Delta L(t)}{L},

where \Delta L = L_x - L_y is the difference in length between the two arms, and L is the usual arm length. Since we love jargon in LIGO & Virgo, we’ll often refer to the strain as HOFT (as you would read h(t) as h of t; it took me years to realise this) or DARM (differential arm measurement).
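To get a sense of scale (my arithmetic, not the paper’s): a loud signal like GW150914 peaks at h \sim 10^{-21}, so with L = 4~\mathrm{km} arms

\displaystyle \Delta L = h L \sim 10^{-21} \times 4 \times 10^{3}~\mathrm{m} = 4 \times 10^{-18}~\mathrm{m},

several hundred times smaller than the diameter of a proton. No wonder calibration needs care.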

The actual output of the detector is the voltage from a photodiode measuring the intensity of the light. It is necessary to make careful calibration of the detectors. In theory this is simple: we change the position of the mirrors at the end of the arms and see how the output changes. In practice, it is very difficult. The GW150914 Calibration Paper goes into details for O1; more up-to-date descriptions are given in Cahillane et al. (2017) for LIGO and Acernese et al. (2018) for Virgo. The calibration of the detectors can drift over time; improving the calibration is one of the things we do between originally taking the data and releasing the final data.

The data are only calibrated between 10 Hz and 5 kHz, so don’t trust the data outside of that frequency range.

The next stage of our data’s journey is going through detector characterisation and data quality checks. In addition to measuring gravitational-wave strain, we record many other data channels: about 200,000 per detector. These measure all sorts of things, from the internal state of the instrument, to monitoring the physical environment around the detectors. These auxiliary channels are used to check the data quality. In some cases, an auxiliary channel will record a source of noise, like scattered light or the mains power frequency, allowing us to clean up our strain data by subtracting out this noise. In other cases, an auxiliary channel can act as a witness to a glitch in our detector, identifying when it is misbehaving so that we know not to trust that part of the data. The GW150914 Detector Characterisation Paper goes into details of how we check potential detections. In doing data quality checks we are careful to only use the auxiliary channels which record something which would be independent of a passing gravitational wave.

We have 4 flags for data quality:

  1. DATA: All clear. Certified fresh. Eat as much as you like.
  2. CAT1: A critical problem with the instrument. Data from these times are likely to be a dumpster fire of noise. We do not use them in our analyses, and they are currently excluded from our public releases. About 1.7% of Hanford data and 1.0% of time from Livingston was flagged with CAT1 in O1. In O2, we got this down to 0.001% for Hanford, 0.003% for Livingston and 0.05% for Virgo.
  3. CAT2: Some activity in an auxiliary channel (possibly the electric boogaloo monitor) which has a well understood correlation with the measured strain channel. You would therefore expect to find some form of glitchiness in the data.
  4. CAT3: There is some correlation in an auxiliary channel and the strain channel which is not understood. We’re not currently using this flag, but it’s kept as an option.

It’s important to verify the data quality before starting your analysis. You don’t want to get excited to discover a completely new form of gravitational wave only to realise that it’s actually some noise from nearby logging. Remember, if a tree falls in the forest and no-one is around, LIGO will still know.

To test our systems, we also occasionally perform a signal injection: we move the mirrors to simulate a signal. This is useful for calibration and for testing analysis algorithms. We don’t perform injections very often (they get in the way of looking for real signals), but these times are flagged. Just as for data quality flags, it is important to check for injections before analysing a stretch of data.

Once the data have passed through all these checks, they are ready to analyse!

Yes!

Excited Data. Credit: Paramount

Accessing the data

After our data have been lovingly prepared, they are served up in two data formats:

  • Hierarchical Data Format HDF, which is a popular data storage format, as it easily allows for metadata and multiple data sets (like the important data quality flags) to be packaged together.
  • Gravitational Wave Frame GWF, which is the standard format we use internally. Veteran gravitational-wave scientists often get a far-away haunted look when you bring up how the specifications for this file format were decided. It’s best not to mention it unless you are also buying them a stiff drink.

In these files, you will find h(t) sampled at either 4096 Hz or 16384 Hz (both are available). Pick the sampling rate you need depending upon the frequency range you are interested in: the 4096 Hz data are good up to 1.7 kHz, while the 16384 Hz data are good to the limit of the calibration range at 5 kHz.

Files can be downloaded from the GWOSC website. If you want to download a large amount, it is recommended to use the CernVM-FS distributed file system.
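If you work in Python, a quick way in is GWpy, one of the open-source packages GWOSC points to. A minimal sketch (the GPS times below are the standard 32 s window around GW150914 used in the GWOSC tutorials):

```python
from gwpy.timeseries import TimeSeries

# Fetch 32 s of open LIGO Hanford data around GW150914 from GWOSC.
data = TimeSeries.fetch_open_data("H1", 1126259446, 1126259478,
                                  sample_rate=4096)
print(data.sample_rate, data.times[0], data.times[-1])
```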

To check when the gravitational-wave detectors were observing, you can use the Timeline search.

GWOSC Timeline

Screenshot of the GWOSC Timeline showing observing from the fifth science run (S5) in the initial detector era through to the second observing run (O2) of the advanced detector era. Bars show observing of GEO 600 (G1), Hanford (H1 and H2), Livingston (L1) and Virgo (V1). Hanford initially had two detectors housed within its site; in the advanced detector era, the plan is to install that equipment as LIGO India instead.

Try this at home

Having gone through all these details, you should now know what our data are, over what ranges they can be analysed, and how to get access to them. Your cup of tea has also probably gone cold. Why not make yourself a new one, and have a couple of biscuits as a reward too. You deserve it!

To help you on your way in starting analysing the data, GWOSC has a set of tutorials (and don’t forget the Data Analysis Guide), and a collection of open source software. Have fun, and remember, it’s never aliens.

Bonus notes

Release schedule

The current policy is that data are released:

  1. In a chunk surrounding an event at the time of publication of that event. This enables the new detection to be analysed by anyone. We typically release about an hour of data around an event.
  2. 18 months after the end of the run. This gives us the chance to properly calibrate the data, check the data quality, and then run the analyses we are committed to. A lot of work goes into producing gravitational-wave data!

Start marking your calendars now for the release of O3 data.

Summer studenting

In summer 2019, while we were finishing up the Data Analysis Guide, I gave it to one of my summer students, Andrew Kim, as an introduction. Andrew was working on gravitational-wave data analysis, so I hoped that he’d find it useful. He ended up working through the draft notebook made to accompany the paper and making a number of useful suggestions! These contributions earned him a place on the paper’s author list, which was nice.

The conspiracy of residuals

The Data Analysis Guide is an extremely useful paper. It explains many details of gravitational-wave analysis. The detections made by LIGO and Virgo over the last few years have increased the interest in analysing gravitational waves, making it the perfect time to write such an article. However, that’s not really what motivated us to write it.

In 2017, a paper appeared on the arXiv making claims of suspicious correlations in our LIGO data around GW150914. Could this call into question the very nature of our detection? No. The paper has two serious flaws.

  1. The first argument in the paper was that there were suspicious phase correlations in the data. This is because the authors didn’t window their data before Fourier transforming.
  2. The second argument was that the residuals presented in Figure 1 of the GW150914 Discovery Paper contain a correlation. This is true, but these residuals aren’t actually the results of how we analyse the data. The point of Figure 1 was to show that you don’t need our fancy analysis to see the signal—you can spot it by eye. Unfortunately, doing things by eye isn’t perfect, and this imperfection was picked up on.

The first flaw is a rookie mistake—pretty much everyone does it at some point. I did it starting out as a first-year PhD student, and I’ve run into it with all the undergraduates I’ve worked with writing their own analyses. The authors of this paper are rookies in gravitational-wave analysis, so they shouldn’t be judged too harshly for falling into this trap, and it is something so simple I can’t blame the referee of the paper for not thinking to ask. Any physics undergraduate who has met Fourier transforms (the second year of my degree) should grasp the mistake—it’s not something esoteric you need to be an expert in quantum gravity to understand.

The second flaw is something which could have been easily avoided if we had been more careful in the GW150914 Discovery Paper. We could have easily aligned the waveforms properly, or more clearly explained that the treatment used for Figure 1 is not what we actually do. However, we did write many other papers explaining what we did do, so we were hardly being secretive. While Figure 1 was not perfect, it was not wrong—it might not be what you might hope for, but it is described correctly in the text, and none of the LIGO–Virgo results depend on the figure in any way.

Estimated waveforms from different models

Recovered gravitational waveforms from our analysis of GW150914. The grey line shows the data whitened by the noise spectrum. The dark band shows our estimate for the waveform without assuming a particular source. The light bands show results if we assume it is a binary black hole (BBH) as predicted by general relativity. This plot more accurately represents how we analyse gravitational-wave data. Figure 6 of the GW150914 Parameter Estimation Paper.

Both mistakes are easy to fix. They are at the level of “Oops, that’s embarrassing! Give me 10 minutes. OK, that looks better”. Unfortunately, that didn’t happen.

The paper regrettably got picked up by science blogs, and caused quite a flutter. There were demands that LIGO and Virgo publicly explain ourselves. This was difficult—the Collaboration is set up to do careful science, not handle a PR disaster. One of the problems was that we didn’t want to be seen to be policing the use of our data. We can’t check that every paper ever using our data does everything perfectly. We don’t have time, and it probably wouldn’t encourage people to use our data if they knew any mistake would be pulled up by this 1000-person collaboration. A second problem was that getting anything approved as an official Collaboration document takes ages—getting consensus amongst so many people isn’t always easy. What would you do—would you want to be the faceless Collaboration persecuting the helpless, plucky scientists trying to check results?

There were private communications between people in the Collaboration and the authors. It took us a while to isolate the sources of the problems. In the meantime, pressure was mounting for an official™ response. It’s hard to justify why your analysis is correct by gesturing to a stack of a dozen papers—people don’t have time to dig through all that (I actually sent links to 16 papers to a science journalist who contacted me back in July 2017). Our silence may have been perceived as arrogance or guilt.

It was decided that we would put out an unofficial response. Ian Harry had been communicating with the authors, and wrote up his notes, which Sean Carroll kindly shared on his blog. Unfortunately, this didn’t really make anyone too happy. The authors of the paper weren’t happy that something was shared via such an informal medium; the post was too technical for the general public to appreciate, and there was a minor typo in the accompanying code (since fixed) which was seized upon. It became necessary to write a formal paper.

Oh, won't somebody please think of the children?

Peer review will save the children! Credit: Fox

We did continue to try to explain the errors to the authors. I have colleagues who spent many hours in a room in Copenhagen trying to explain the mistakes. However, little progress was made, and it was not a fun time™. I can imagine at this point that the authors of the paper were sufficiently angry not to want to listen, which is a shame.

Now that the Data Analysis Guide is published, everyone will be satisfied, right? A refereed journal article should quash all fears, surely? Sadly, I doubt this will be the case. I expect these doubts will keep circulating for years. After all, there are those who still think vaccines cause autism. Fortunately, not believing in gravitational waves won’t kill any children. If anyone asks though, you can tell them that any doubts on LIGO’s analysis have been quashed, and that vaccines cause adults!

For a good account of the back and forth, Natalie Wolchover wrote a nice article in Quanta, and for a more acerbic view, try Mark Hannam’s blog.

 

Advanced LIGO: O1 is here!

The LIGO sites

Aerial views of LIGO Hanford (left) and LIGO Livingston (right). Both have 4 km long arms (arranged in an L shape) which house the interferometer beams. Credit: LIGO/Caltech/MIT.

The first observing run (O1) of Advanced LIGO began just over a week ago. We officially started at 4 pm British Summer Time, Friday 18 September. It was a little low key: you don’t want lots of fireworks and popping champagne corks next to instruments incredibly sensitive to vibrations. It was a smooth transition from our last engineering run (ER8), so I don’t even think there were any giant switches to throw. Of course, I’m not an instrumentalist, so I’m not qualified to say. In any case, it is an exciting time, and it is good to see some media attention for the Collaboration (with stories from Nature, the BBC and Science).

I would love to keep everyone up to date with the latest happenings from LIGO. However, like everyone in the Collaboration, I am bound by a confidentiality agreement. (You don’t want to cross people with giant lasers). We can’t have someone saying that we have detected a binary black hole (or that we haven’t) before we’ve properly analysed all the data, finalised calibration, reviewed all the code, double checked our results, and agreed amongst ourselves that we know what’s going on. When we are ready, announcements will come from the LIGO Spokesperson Gabriela González and the Virgo Spokesperson Fulvio Ricci. Event rates are uncertain and we’re not yet at final sensitivity, so don’t expect too much of O1.

There are a couple of things that I can share about our status. Whereas normally everything I write is completely unofficial, these are suggested replies to likely questions.

Have you started taking data?
We began collecting science quality data at the beginning of September, in preparation for the first Observing Run that started on Friday, September 18, and we are planning on collecting data for about 4 months.

We certainly do have data, but there’s nothing new about that (other than the improved sensitivity). Data from the fifth and sixth science runs of initial LIGO are now publicly available from the Gravitational Wave Open Science Center. You can go through it and try to find anything we missed (which is pretty cool).

Have you seen anything in the data yet?
We analyse the data “online” in an effort to provide fast information to astronomers for possible follow up of triggers using a relatively low statistical significance (a false alarm rate of ~1/month). We have been tuning the details of the communication procedures, and we have not yet automated all the steps that can be, but we will send alerts to astronomers above the threshold agreed as soon as we can after those triggers are identified. Since analysis to validate a candidate in gravitational-wave data can take months, we will not be able to say anything about results in the data on short time scales. We will share any and all results when ready, though probably not before the end of the Observing Run.

Analysing the data is tricky, and requires lots of computing time, as well as careful calibration of the instruments (including characterising how many glitches they produce which could look like a gravitational-wave trigger). It takes a while to get everything done.

We heard that you sent a gravitational-wave trigger to astronomers already—is that true?
During O1, we will send alerts to astronomers above a relatively low significance threshold; we have been practising communication with astronomers in ER8. We are following this policy with partners who have signed agreements with us and have observational capabilities ready to follow up triggers. Because we cannot validate gravitational-wave events until we have enough statistics and diagnostics, we have confidentiality agreements about any triggers that are shared, and we hope all involved abide by those rules.

I expect this is a pre-emptive question and answer. It would be amazing if we could see an electromagnetic (optical, gamma-ray, radio, etc.) counterpart to a gravitational wave. (I’ve done some work on how well we can localise gravitational-wave sources on the sky). It’s likely that any explosion or afterglow that is visible will fade quickly, so we want astronomers to be able to start looking straight away. This means candidate events are sent out before they’re fully vetted: they could just be noise, they could be real, or they could be a blind injection. A blind injection is when a fake signal is introduced to the data secretly; this is done to keep us honest and check that our analysis does work as expected (since we know what results we should get for the signal that was injected). There was a famous blind injection during the run of initial LIGO called Big Dog. (We take gravitational-wave detection seriously). We’ve learnt a lot from injections, even if they are disappointing. Alerts will be sent out for events with false alarm rates of about one per month, so we expect a few across O1 just because of random noise.

While I can’t write more about the science from O1, I will still be posting about astrophysics, theory and how we analyse data. Those who are impatient can be reassured that gravitational waves have been detected, just indirectly, from observations of binary pulsars.

Periastron shift of binary pulsar

The orbital decay of the Hulse-Taylor binary pulsar (PSR B1913+16). The points are measured values, while the curve is the theoretical prediction for gravitational waves. I love this plot. Credit: Weisberg & Taylor (2005).

Update: Advanced LIGO detects gravitational waves!

LIGO Magazine: Issue 7

It is an exciting time in LIGO. The start of the first observing run (O1) is imminent. I think they just need to sort out a button that is big enough and red enough (or maybe gather a little more calibration data… ), and then it’s all systems go. Making the first direct detection of gravitational waves with LIGO would be an enormous accomplishment, but that’s not all we can hope to achieve: what I’m really interested in is what we can learn from these gravitational waves.

The LIGO Magazine gives a glimpse inside the workings of the LIGO Scientific Collaboration, covering everything from the science of the detector to what collaboration members like to get up to in their spare time. The most recent issue was themed around how gravitational-wave science links in with the rest of astronomy. I enjoyed it, as I’ve been recently working on how to help astronomers look for electromagnetic counterparts to gravitational-wave signals. It also features a great interview with Joseph Taylor Jr., one of the discoverers of the famous Hulse–Taylor binary pulsar. The back cover features an article I wrote about parameter estimation: an expanded version is below.

How does parameter estimation work?

Detecting gravitational waves is one of the great challenges in experimental physics. A detection would be hugely exciting, but it is not the end of the story. Having observed a signal, we need to work out where it came from. This is a job for parameter estimation!

How we analyse the data depends upon the type of signal and what information we want to extract. I’ll use the example of a compact binary coalescence, that is, the inspiral (and merger) of two compact objects—neutron stars or black holes (not marshmallows). Parameters that we are interested in measuring are things like the mass and spin of the binary’s components, its orientation, and its position.

For a particular set of parameters, we can calculate what the waveform should look like. This is actually rather tricky; including all the relevant physics, like precession of the binary, can make for some complicated and expensive-to-calculate waveforms. The first part of the video below shows a simulation of the coalescence of a black-hole binary; you can see the gravitational waveform (with characteristic chirp) at the bottom.

We can compare our calculated waveform with what we measured to work out how well they fit together. If we take away the wave from what we measured with the interferometer, we should be left with just noise. We understand how our detectors work, so we can model how the noise should behave; this allows us to work out how likely it would be to get the precise noise we need to make everything match up.

To work out the probability that the system has a given parameter, we take the likelihood for our left-over noise and fold in what we already knew about the values of the parameters—for example, that any location on the sky is equally possible, that neutron-star masses are around 1.4 solar masses, or that the total mass must be larger than that of a marshmallow. For those who like details, this is done using Bayes’ theorem.

We now want to map out this probability distribution, to find the peaks of the distribution corresponding to the most probable parameter values and also chart how broad these peaks are (to indicate our uncertainty). Since we can have many parameters, the space is too big to cover with a grid: we can’t just systematically chart parameter space. Instead, we randomly sample the space and construct a map of its valleys, ridges and peaks. Doing this efficiently requires cunning tricks for picking how to jump between spots: exploring the landscape can take some time; we may need to calculate millions of different waveforms!
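To give a flavour of how the sampling works, here is a toy Metropolis–Hastings sampler (my own illustration; the real analyses use far more sophisticated samplers, such as parallel-tempered MCMC and nested sampling):

```python
import numpy as np

def metropolis(log_posterior, start, n_steps=10000, step_size=0.1):
    """Randomly explore parameter space, favouring high-probability spots."""
    rng = np.random.default_rng()
    samples = [np.asarray(start, dtype=float)]
    log_p = log_posterior(samples[-1])
    for _ in range(n_steps):
        proposal = samples[-1] + step_size * rng.standard_normal(len(start))
        log_p_new = log_posterior(proposal)
        # Always accept jumps uphill; accept downhill jumps with
        # probability p_new / p_old.
        if np.log(rng.random()) < log_p_new - log_p:
            samples.append(proposal)
            log_p = log_p_new
        else:
            samples.append(samples[-1])
    return np.array(samples)

# Example: map out a 2D Gaussian probability landscape.
samples = metropolis(lambda x: -0.5 * np.sum(x ** 2), start=[3.0, -3.0])
```

A histogram of the samples traces out the probability distribution, peaks, uncertainties and all.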

Having computed the probability distribution for our parameters, we can now tell an astronomer how much of the sky they need to observe to have a 90% chance of looking at the source, give the best estimate for the mass (plus uncertainty), or even figure something out about what neutron stars are made of (probably not marshmallow). This is the beginning of gravitational-wave astronomy!

Monty and Carla map parameter space

Monty, Carla and the other samplers explore the probability landscape. Nutsinee Kijbunchoo drew the version for the LIGO Magazine.

Advanced LIGO (the paper)

Continuing with my New Year’s resolution to write a post on every published paper, the start of March sees another full author list LIGO publication. Appearing in Classical & Quantum Gravity, the minimalistically titled Advanced LIGO is an instrumental paper. It appears as part of a special focus issue on advanced gravitational-wave detectors, and is happily free to read (good work there). This is The Paper™ for describing how the advanced detectors operate. I think it’s fair to say that my contribution to this paper is 0%.

LIGO stands for Laser Interferometer Gravitational-wave Observatory. As you might imagine, LIGO tries to observe gravitational waves by measuring them with a laser interferometer. (It won’t protect your fencing). Gravitational waves are tiny, tiny stretches and squeezes of space. To detect them we need to measure changes in length extremely accurately. I had assumed that Advanced LIGO would achieve this supreme sensitivity through some dark magic invoked by sacrificing the blood, sweat, tears and even coffee of many hundreds of PhD students upon the altar of science. However, this paper actually shows it’s just really, really, REALLY careful engineering. And giant frickin’ laser beams.

The paper goes through each aspect of the design of the LIGO detectors. It starts with details of the interferometer. LIGO uses giant lasers to measure distances extremely accurately. Lasers are bounced along two 3994.5 m arms and interfered to measure a change in length between the two. In spirit, it is a giant Michelson interferometer, but it has some cunning extra features. Each arm is a Fabry–Pérot etalon, which means that the laser is bounced up and down the arms many times to build up extra sensitivity to any change in length. There are various extra components to make sure that the laser beam is as stable as possible; all in all, there are rather a lot of mirrors, each of which is specially tweaked to make sure that some acronym is absolutely perfect.

Advanced LIGO optical configuration. It’s a bit more complicated than a basic Michelson interferometer.

Fig. 1 from Aasi et al. (2015), the Advanced LIGO optical configuration. All the acronyms have to be carefully placed in order for things to work. The laser beam starts from the left, passing through subsystems to make sure it’s stable. It is split in two to pass into the interferometer arms at the top and right of the diagram. The laser is bounced many times between the mirrors to build up sensitivity. The interference pattern is read out at the bottom. Normally, the light should interfere destructively, so the output is dark. A change to this indicates a change in length between the arms. That could be because of a passing gravitational wave.

The next section deals with all the various types of noise that affect the detector. It’s this noise that makes it such fun to look for the signals. To be honest, pretty much everything I know about the different types of noise I learnt from Space-Time Quest. This is a lovely educational game developed by people here at the University of Birmingham. In the game, you have to design the best gravitational-wave detector that you can for a given budget. There’s a lot of science that goes into working out how sensitive the detector is. It takes a bit of practice to get into it (remember to switch on the laser first), but it’s very easy to get competitive. We often use the game as part of outreach workshops, and we’ve had some school groups get quite invested in the high-score tables. My tip is that going underground doesn’t seem to be worth the money. Of course, if you happen to be reviewing the proposal to build the Einstein Telescope, you should completely ignore that, and just concentrate on how cool the digging machine looks. Space-Time Quest shows how difficult it can be optimising sensitivity. There are trade-offs between different types of noise, and these have been carefully studied. What Space-Time Quest doesn’t show is just how much work it takes to engineer a detector.

The fourth section is a massive shopping list of components needed to build Advanced LIGO. There are rather more options than in Space-Time Quest, but many are familiar, even if given less friendly names. If this section were the list of contents for some Ikea furniture, you would know that you’ve made a terrible life-choice; there’s no way you’re going to assemble this before Monday. Highlights include the 40 kg mirrors. I’m sure breaking one of those would incur more than seven years’ bad luck. For those of you playing along with Space-Time Quest at home, the mirrors are fused silica. Section 4.8.4 describes how to get the arms to lock, one of the key steps in commissioning the detectors. The section concludes with details of how to control such a complicated instrument; the key seems to be to have so many acronyms that there’s no space for any component to move in an unwanted way.

The paper closes with an outlook for the detector sensitivity. With such a complicated instrument it is impossible to be certain how things will go. However, things seem to have been going smoothly so far, so let’s hope that this continues. The current plan is:

  • 2015 3 months observing at a binary neutron star (BNS) range of 40–80 Mpc.
  • 2016–2017 6 months observing at a BNS range of 80–120 Mpc.
  • 2017–2018 9 months observing at a BNS range of 120–170 Mpc.
  • 2019 Achieve full sensitivity of a BNS range of 200 Mpc.

The BNS range is the distance at which a typical binary made up of two 1.4 solar mass neutron stars could be detected when averaging over all orientations. If you have a perfectly aligned binary, you can detect it out to a further distance, the BNS horizon, which is about 2.26 times the BNS range. There are a couple of things to note from the plan. First, the initial observing run (O1 to the cool kids) is this year! The second is how much the range will extend before hitting design sensitivity. This should significantly increase the number of possible detections, as each doubling of the range corresponds to a volume change of a factor of eight. Coupling this with the increasing length of the observing runs should mean that the chance of a detection increases every year. It will be an exciting few years for Advanced LIGO.
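The volume scaling is easy to check (my arithmetic): sources are spread through space, so the number we can potentially detect grows with the volume surveyed,

\displaystyle \frac{V_2}{V_1} = \left(\frac{R_2}{R_1}\right)^3 = 2^3 = 8

for a doubling of the range.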

arXiv: 1411.4547 [gr-qc]
Journal: Classical & Quantum Gravity; 32(7):074001(41); 2015
Science summary: Introduction to LIGO & Gravitational Waves
Space-Time Quest high score: 34.859 Mpc

Narrow-band search of continuous gravitational-wave signals from Crab and Vela pulsars in Virgo VSR4 data

Collaboration papers

I’ve been a member of the LIGO Scientific Collaboration for just over a year now. It turns out that designing, building and operating a network of gravitational-wave detectors is rather tricky, maybe even harder than completing Super Mario Bros. 3, so it takes a lot of work. There are over 900 collaboration members, all working on different aspects of the project. Since so much of the research is inter-related, certain papers (such as those that use data from the instruments) written by collaboration members have to include the name of everyone who works (at least half the time) on LIGO-related things. After a year in the collaboration, I have now levelled up to be included in the full author list (if there was an initiation ritual, I’ve suppressed the memory). This is weird: papers appear with my name on that I’ve not actually done any work for. It seems sort of like having to bring cake into your office on your birthday: you do have to share your (delicious) cupcakes with everyone else, but in return you get cake even when your birthday is nowhere near. Perhaps all those motivational posters were right about the value of teamwork? I do feel a little guilty about all the extra trees that will die because of people printing out these papers.

My New Year’s resolution was to write a post about every paper I have published. I am going to try to do the LIGO papers too. This should at least make sure that I actually read them all. There are official science summaries written by the people who actually did the work, which may be better if you want an accurate explanation. My first collaboration paper is a joint publication of the LIGO and Virgo collaborations (even more sharing).

Searching for gravitational waves from pulsars

Neutron stars are formed from the cores of dead stars. When a star’s nuclear fuel starts to run out, its core collapses. The most massive form black holes, the lightest (like our Sun) form white dwarfs, and the ones in the middle form neutron stars. These are really dense: they have about the same mass as our entire Sun (perhaps twice the Sun’s mass), but are just a few kilometres across. Pulsars are a type of neutron star; they emit a beam of radiation that sweeps across the sky as they rotate, sort of like a lighthouse. If one of these beams hits the Earth, we see a radio pulse. The pulses come regularly, so you can work out how fast the pulsar is spinning (and do some other cool things too).

A pulsar

The mandatory cartoon of a pulsar that everyone uses. The top part shows the pulsar and its beams rotating, and the bottom part shows the signal measured on Earth. We’re not really sure where the beams come from; it’ll be something to do with magnetic fields. Credit: M. Kramer

Because pulsars rotate really quickly, if they have a little bump on their surface, they can emit (potentially detectable) gravitational waves. This paper searches for these signals from the Crab and Vela pulsars. We know where these pulsars are, and how quickly they are rotating, so it’s possible to do a targeted search for gravitational waves (only checking the data for signals that are close to what we expect). Importantly, some wiggle room in the frequency is allowed just in case different parts of the pulsar slosh around at slightly different rates and so the gravitational-wave frequency doesn’t perfectly match what we’d expect from the frequency of pulses; the search is done in a narrow band of frequencies around the expected one. The data used are from Virgo’s fourth science run (VSR4). That was taken back in 2011 (around the time that Captain America was released). The search technique is new (Astone et al., 2014); it’s the first one that incorporates this searching in a narrow band of frequencies. I think the point was to test the search technique on real data before the advanced detectors start producing new data.

Composite Crab

Composite image of Hubble (red) optical observations and Chandra (blue) X-ray observations of the Crab pulsar. The pulsar has a mass of 1.4 solar masses and rotates about 30 times per second. Credit: Hester et al.

The pulsars emit gravitational waves continuously: they just keep humming as they rotate. The frequency will slow gradually as the pulsar loses energy. As the Earth rotates, the humming gets louder and quieter because the sensitivity of gravitational-wave detectors depends upon where the source is in the sky. Putting this all together gives you a good template for what the signal should look like, and you can see how well it fits the data. It’s kind of like trying to find the right jigsaw piece by searching for the one that interlocks best with those around it. Of course, there is a lot of noise in our detectors, so it’s like if the jigsaw was actually made out of jelly: you could get many pieces to fit if you squeeze them the right way, but then people wouldn’t believe that you’ve actually found the right one. Some detection statistics (which I don’t particularly like, but probably give a sensible answer) are used to quantify how likely it is that they’ve found a piece that fits (that there is a signal). The whole pipeline is tested by analysing some injected signals (artificial signals made to see if things work, added both digitally to the data and by actually jiggling the mirrors of the interferometer). It seems to do OK here.
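This template-fitting idea is usually formalised as a matched filter. A minimal frequency-domain sketch of my own (the paper’s narrow-band statistic is more involved):

```python
import numpy as np

def matched_filter_snr(d_f, h_f, psd, df):
    """Signal-to-noise ratio of template h in data d.

    Uses the noise-weighted inner product
    (a|b) = 4 df Re sum[conj(a) * b / S_n(f)] over one-sided frequencies.
    """
    def inner(a, b):
        return 4 * df * np.real(np.sum(np.conj(a) * b / psd))
    return inner(d_f, h_f) / np.sqrt(inner(h_f, h_f))
```

A template that interlocks well with the data gives a large SNR; noise alone should give values of order unity.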

Turning to the actual data, they very carefully show that they don’t think they’ve detected anything for either Vela or Crab. Of course, all the cool kids don’t detect gravitational waves, so that’s not too surprising.

Zoidberg is an expert on crabs, pulsing or otherwise

This paper doesn’t claim a detection of gravitational waves, but it doesn’t stink like Zoidberg.

Having not detected anything, you can place an upper limit on the amplitude of any waves that are emitted (because if they were larger, you would’ve detected them). This amplitude can then be compared with what’s expected from the spin-down limit: the amplitude that would be required to explain the slowing of the pulsar. We know how the pulsars are slowing, but not why; it could be because of energy being lost to magnetic fields (the energy for the beams has to come from somewhere), it could be through energy lost as gravitational waves, it could be because of some internal damping, it could all be gnomes. The spin-down limit assumes that it’s all because of gravitational waves; you couldn’t have bigger amplitude waves than this unless something else (that would have to be gnomes) was pumping energy into the pulsar to keep it spinning. The upper limit for the Vela pulsar is about the same as the spin-down limit, so we’ve not learnt anything new. For the Crab pulsar, the upper limit is about half the spin-down limit, which is something, but not really exciting. Hopefully, doing the same sort of searches with data from the advanced detectors will be more interesting.

In conclusion, the contents of this paper are well described by its title:

  • Narrow-band search: It uses a new search technique that is not restricted to the frequency assumed from timing pulses
  • of continuous gravitational-wave signals: It’s looking for signals from rotating neutron stars (that just keep going) and so are always in the data
  • from Crab and Vela pulsars: It considers two particular sources, so we know where in parameter space to look for signals
  • in Virgo VSR4 data: It uses real data, but from the first generation detectors, so it’s not surprising it doesn’t see anything

It’s probably less fun than eating a jigsaw-shaped jelly, but it might be more useful in the future.

arXiv: 1410.8310 [gr-qc]
Journal: Physical Review D; 91(2):022004(15); 2015
Science summary: An Extended Search for Gravitational Waves from the Crab and Vela Pulsars
Percentage of paper that is author list: ~30%