An introduction to LIGO–Virgo data analysis

LIGO and Virgo make their data open for anyone to try analysing [bonus note]. If you’re a student looking for a project, a teacher planning a class activity, or a scientist working on a paper, this data is waiting for you to use. Understanding how to analyse the data can be tricky. In this post, I’ll share some of the resources made by LIGO and Virgo to help introduce gravitational-wave analysis. These papers together should give you a good grounding in how to get started working with gravitational-wave data.

If you’d like a more in-depth understanding, I’d recommend visiting your local library for Michele Maggiore’s Gravitational Waves: Volume 1.

The Data Analysis Guide

Title: A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals
arXiv: 1908.11170 [gr-qc]
Journal: Classical & Quantum Gravity; 37(5):055002(54); 2020
Tutorial notebook: GitHub;  Google Colab; Binder
Code repository: Data Guide
LIGO science summary: A guide to LIGO-Virgo detector noise and extraction of transient gravitational-wave signals

It took many decades to develop the technology necessary to build gravitational-wave detectors. Similarly, gravitational-wave data analysis has developed over many decades—I’d say LIGO analysis was really kicked off in the early 1990s by Kip Thorne’s group. There are now hundreds of papers on various aspects of gravitational-wave analysis. If you are new to the area, where should you start? Don’t panic! For the binary sources discovered so far, this Data Analysis Guide has you covered.

More details: The Data Analysis Guide

The GWOSC Paper

Title: Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo
arXiv: 1912.11716 [gr-qc]
Journal: SoftwareX; 13:100658(20); 2021
Website: Gravitational Wave Open Science Center
LIGO science summary: Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo

Data from the LIGO and Virgo detectors is released by the Gravitational Wave Open Science Center (GWOSC, pronounced, unfortunately, as it is spelt). If you want to try analysing our delicious data yourself, either searching for signals or studying the signals we have found, GWOSC is the place to start. This paper outlines how these data are produced, going from our laser interferometers to your hard-drive. The paper specifically looks at the data released for our first and second observing runs (O1 and O2); however, GWOSC also hosts data from the initial detectors’ fifth science run (S5) and sixth science run (S6), and will be updated with new data in the future.

If you do use data from GWOSC, please remember to say thank you.

More details: The GWOSC Paper

001100 010010 011110 100001 101101 110011

I thought I saw a 2! Credit: Fox

The Data Analysis Guide

Synopsis: Data Analysis Guide
Read this if: You want an introduction to signal analysis
Favourite part: This is a great resource for new students [bonus note]

Gravitational-wave detectors measure ripples in spacetime. They record a simple time series of the stretching and squeezing of space as a gravitational wave passes. Well, they measure that, plus a whole lot of noise. Most of the time it is just noise. How do we go from this time series to discoveries about the Universe’s black holes and neutron stars? This paper gives the outline; it covers (in order):

  1. An introduction to observations at the time of writing
  2. The basics of LIGO and Virgo data—what it is that we analyse
  3. The basics of detector noise—how we describe sources of noise in our data
  4. Fourier analysis—how we go from a time series to looking at the data as a function of frequency, which is the most natural way to analyse the data.
  5. Time–frequency analysis and stationarity—how we check the stability of data from our detectors
  6. Detector calibration and data quality—how we make sure we have good quality data
  7. The noise model and likelihood—how we use our understanding of the noise, under the assumption of it being stationary, to work out the likelihood of different signals being in the data
  8. Signal detection—how we identify times in the data which have a transient signal present
  9. Inferring waveform and physical parameters—how we estimate the parameters of the source of a gravitational wave
  10. Residuals around GW150914—a consistency check that we have understood the noise surrounding our first detection

The paper works through things thoroughly, and I would encourage you to work through it if you are interested.

I won’t summarise everything here; I want to focus on the (roughly undergraduate-level) foundations of how we do our analysis in the frequency domain. My discussion of the GWOSC Paper goes into more detail on the basics of LIGO and Virgo data, and some details on calibration and data quality. I’ll leave talking about residuals to this bonus note, as it involves a long tangent and me needing to lie down for a while.

Fourier analysis

The signal our detectors measure is a time series d(t). This may just contain noise, d(t) = n(t), or it may also contain a signal, d(t) = n(t) + h(t).

There are many sources of noise for our detectors. The different sources can affect different frequencies. If we assume that the noise is stationary, so that its properties don’t change with time, we can simply describe the properties of the noise with the power spectral density S_n(f). On average we expect the noise at a given frequency to be zero, but with it fluctuating up and down with a variance given by the power spectral density. We typically approximate the noise as Gaussian, such that

n(f) \sim \mathcal{N}(0; S_n(f)/2),

where we use \mathcal{N}(\mu; \sigma^2) to represent a normal distribution with mean \mu and variance \sigma^2. The approximations of stationary and Gaussian noise are good most of the time. The noise does vary over time, but is usually effectively stationary over the durations we look at for a signal. The noise is also mostly Gaussian except for glitches. These are taken into account when we search for signals, but we’ll ignore them for now. The statistical description of the noise in terms of the power spectral density allows us to understand our data, but this understanding comes as a function of frequency: we must transform our time-domain data to frequency-domain data.
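To make this concrete, here is a minimal Python sketch of what stationary Gaussian noise with a chosen power spectral density looks like. The power-law S_n(f) used here is entirely made up for illustration (it is not a real detector noise curve), and the discrete normalisation follows one common FFT convention.

```python
import numpy as np

# A toy one-sided PSD: flat at high frequency, rising steeply below ~20 Hz.
# This is an illustrative stand-in, not the Advanced LIGO noise curve.
fs = 4096                      # sampling rate (Hz)
duration = 8                   # seconds of data
n_samples = fs * duration
freqs = np.fft.rfftfreq(n_samples, d=1/fs)
S_n = 1e-46 * (1 + (20 / np.maximum(freqs, 1))**8)

# Draw each frequency bin from a zero-mean Gaussian whose variance is set by
# S_n(f). With numpy's unnormalised FFT and a one-sided PSD, each of the real
# and imaginary parts has variance S_n(f) * n_samples * fs / 4 (DC and Nyquist
# subtleties are glossed over in this sketch).
sigma = np.sqrt(S_n * n_samples * fs / 4)
noise_f = sigma * (np.random.randn(len(freqs)) + 1j * np.random.randn(len(freqs)))
noise_t = np.fft.irfft(noise_f, n=n_samples)   # a time-domain noise realisation
```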

To go from d(t) to d(f) we use a Fourier transform. Fourier transforms are a way of converting a function of one variable into a function of a reciprocal variable—in the case of time you convert to frequency. Fourier transforms encode all the information of the original function, so it is possible to convert back and forth as you like. Really, a Fourier transform is just another way of looking at the same function.

The Fourier transform is defined as

d(f) = \mathcal{F}_f\left\{d(t)\right\} = \int_{-\infty}^{\infty} d(t) \exp(-2\pi i f t) \,\mathrm{d}t.

Now, from this you might notice a problem when it comes to real data analysis, namely that the integral is defined over an infinite amount of time. We don’t have that much data. Instead, we only have a short period.

We could recast the integral above over a shorter time if instead of taking the Fourier transform of d(t), we take the Fourier transform of d(t) \times w(t) where w(t) is some window function which goes to zero outside of the time interval we are looking at. What we end up with is a convolution of the function we want with the Fourier transform of the window function,

\mathcal{F}_f\left\{d(t)w(t)\right\} = d(f) \ast w(f).

It is important to pick a window function which minimises the distortion to the signal that we want. If we just take a tophat (also known as a boxcar or rectangular, possibly on account of its infamous criminal background) function which abruptly cuts off the data at the ends of the time interval, we find that w(f) is a sinc function. This is not a good thing, as it leads to all sorts of unwanted correlations between different frequencies, commonly known as spectral leakage. A much better choice is a function which smoothly tapers to zero at the edges. Using a tapering window, we lose a little data at the edges (we need to be careful choosing the length of the data analysed), but we can avoid the significant nastiness of spectral leakage. A tapering window function should always be used. Our finite-time Fourier transform is then a good approximation to the exact d(f).
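As a concrete illustration, here is a minimal Python sketch of applying a tapering window before the Fourier transform. The Tukey window, its taper fraction and the test tone are just example choices, not a LIGO–Virgo prescription.

```python
import numpy as np
from scipy.signal import windows

# Sketch: taper the data before Fourier transforming to avoid spectral leakage.
fs = 4096
t = np.arange(0, 8, 1/fs)                        # 8 s of time samples
d = np.sin(2 * np.pi * 50.3 * t)                 # stand-in data: a tone between frequency bins

window = windows.tukey(len(d), alpha=0.1)        # tapers smoothly to zero at the ends
d_tapered = d * window

freqs = np.fft.rfftfreq(len(d), d=1/fs)
d_f_abrupt = np.fft.rfft(d)                      # implicit tophat window: broad sinc sidelobes
d_f_tapered = np.fft.rfft(d_tapered)             # tapered: power stays localised near 50.3 Hz
```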

Data treatment to highlight a signal

Data processing to reveal GW150914. The top panel shows raw Hanford data. The second panel shows a window function being applied. The third panel shows the data after being whitened. This cleans up the data, making it easier to pick out the signal from all the low frequency noise. The bottom panel shows the whitened data after a bandpass filter is applied to pick out the signal. We don’t use the bandpass filter in our analysis (it is just for illustration), but the other steps reflect how we treat our data. Figure 2 of the Data Analysis Guide.

Now we have our data in the frequency domain, it is simple enough to compare the data to the expected noise at a given frequency. If we measure something loud at a frequency with lots of noise we should be less surprised than if we measure something loud at a frequency which is usually quiet. This is kind of like how someone shouting is less startling at a rock concert than in a library. The appropriate way to weight is to divide by the square root of the power spectral density d_\mathrm{w}(f) \propto d(f)/[S_n(f)]^{1/2}. This is known as whitening. Whitened data should have equal amplitude fluctuations at all frequencies, allowing for easy comparisons.
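Here is a minimal sketch of whitening in Python: estimate the power spectral density from the data, then divide the Fourier-transformed data by its square root. The segment length, window and interpolation below are illustrative choices rather than the exact settings used in LIGO–Virgo analyses.

```python
import numpy as np
from scipy.signal import welch, windows

fs = 4096
d = np.random.randn(8 * fs)                        # stand-in for strain data

# Estimate the one-sided PSD with Welch's method
f_psd, S_n = welch(d, fs=fs, nperseg=4 * fs)

# Fourier transform the tapered data and whiten
window = windows.tukey(len(d), alpha=0.1)
d_f = np.fft.rfft(d * window)
freqs = np.fft.rfftfreq(len(d), d=1/fs)
S_n_interp = np.interp(freqs, f_psd, S_n)          # match the PSD to the FFT bins
d_white_f = d_f / np.sqrt(S_n_interp)

# Back in the time domain, fluctuations should look similar at all frequencies
d_white_t = np.fft.irfft(d_white_f, n=len(d))
```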

Now we understand the statistical properties of the noise we can do some analysis! We can start by testing our assumption that the data are stationary and Gaussian by checking that after whitening we get the expected distribution. We can also define the likelihood of obtaining the data d(t) given a model of a gravitational-wave signal h(t), as the properties of the noise mean that d(f) - h(f) \sim \mathcal{N}(0; S_n(f)/2). Combining the likelihood for each individual frequency gives the overall likelihood

\displaystyle p(d|h) \propto \exp\left[-\int_{-\infty}^{\infty} \frac{|d(f) - h(f)|^2}{S_n(f)} \mathrm{d}f \right].

This likelihood is at the heart of parameter estimation, as we can work out the probability of there being a signal with a given set of parameters. The Data Analysis Guide goes through many different analyses (including parameter estimation) and demonstrates how to check that noise is nice and Gaussian.
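As a rough illustration, the likelihood above can be approximated numerically by a sum over positive-frequency bins. The overall prefactor depends on your Fourier-transform and PSD conventions, so treat the normalisation in this sketch (and the toy data it is applied to) as schematic.

```python
import numpy as np

def log_likelihood(d_f, h_f, S_n, df):
    """Gaussian log-likelihood (up to an additive constant) of frequency-domain
    data d_f given a template h_f and one-sided PSD S_n, approximating the
    integral in the text with a Riemann sum over positive frequencies."""
    return -2 * df * np.sum(np.abs(d_f - h_f)**2 / S_n)

# Toy example: a blob of signal power around 150 Hz in flat (white) noise.
df = 1 / 8                                        # frequency resolution for 8 s of data
freqs = np.arange(20, 1024, df)
S_n = np.full(len(freqs), 1e-46)                  # flat one-sided PSD
h_f = 1e-23 * np.exp(-((freqs - 150) / 30)**2) * np.exp(2j * np.pi * freqs * 0.1)
noise = np.sqrt(S_n / (4 * df)) * (np.random.randn(len(freqs)) + 1j * np.random.randn(len(freqs)))
d_f = h_f + noise

# The correct template should score a noticeably higher log-likelihood than no signal
print(log_likelihood(d_f, h_f, S_n, df) - log_likelihood(d_f, np.zeros_like(h_f), S_n, df))
```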

Gaussian residuals for GW150914

Distribution of residuals for 4 seconds of data around GW150914 after subtracting the maximum likelihood waveform. The residuals are the whitened Fourier amplitudes, and they should be consistent with a unit Gaussian. The residuals follow the expected distribution and show no sign of non-Gaussianity. Figure 14 of the Data Analysis Guide.

Homework

The Data Analysis Guide contains much more material on gravitational-wave data analysis. If you wanted to delve further, there are many excellent papers cited. Favourites of mine include Finn (1992); Finn & Chernoff (1993); Cutler & Flanagan (1994); Flanagan & Hughes (1998); Allen (2005), and Allen et al. (2012). I would also recommend the tutorials available from GWOSC and the lectures from the Open Data Workshops.

The GWOSC Paper

Synopsis: GWOSC Paper
Read this if: You want to analyse our gravitational wave data
Favourite part: All the cool projects done with this data

You’re now up-to-speed with some ideas of how to analyse gravitational-wave data, you’ve made yourself a fresh cup of really hot tea, you’re ready to get to work! All you need are the data—this paper explains where they come from.

Data production

The first step in getting gravitational-wave data is the easy one. You need to design a detector, convince science agencies to invest something like half a billion dollars in building one, then spend 40 years carefully researching the necessary technology and putting it all together as part of an international collaboration of hundreds of scientists, engineers and technicians, before painstakingly commissioning the instrument and operating it. For your convenience, we have done this step for you, but do feel free to try it yourself at home.

Gravitational-wave detectors like Advanced LIGO are built around an interferometer: they have two arms at right angles to each other, and we bounce lasers up and down them to measure their length. A passing gravitational wave will change the length of one arm relative to the other. This changes the time taken to travel along one arm compared to the other. Hence, when the two bits of light reach the output of the interferometer, they’ll have a different phase: where normally one light wave would have a peak, it’ll have a trough. This change in phase will change how light from the two arms combines together. When no gravitational wave is present, the light interferes destructively, almost cancelling out so that the output is dark. We measure the brightness of light at the output, which tells us about how the length of the arms changes.

We want our detector to measure the gravitational-wave strain, that is, the fractional change in the length of the arms,

\displaystyle h(t) = \frac{\Delta L(t)}{L},

where \Delta L = L_x - L_y is the difference in the length of the two arms, and L is the usual arm length. Since we love jargon in LIGO & Virgo, we’ll often refer to the strain as HOFT (as you would read h(t) as h of t; it took me years to realise this) or DARM (differential arm measurement).
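To get a feel for the numbers (this back-of-the-envelope estimate uses the 4 km LIGO arm length and a typical strain of order 10^{-21} for the signals detected so far),

\displaystyle \Delta L = h L \sim 10^{-21} \times 4~\mathrm{km} \approx 4 \times 10^{-18}~\mathrm{m},

a length hundreds of times smaller than the diameter of a proton, which is why measuring it takes such an exquisite instrument.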

The actual output of the detector is the voltage from a photodiode measuring the intensity of the light. It is necessary to make careful calibration of the detectors. In theory this is simple: we change the position of the mirrors at the end of the arms and see how the output changes. In practice, it is very difficult. The GW150914 Calibration Paper goes into details for O1; more up-to-date descriptions are given in Cahillane et al. (2017) for LIGO and Acernese et al. (2018) for Virgo. The calibration of the detectors can drift over time; improving the calibration is one of the things we do between originally taking the data and releasing the final data.

The data are only calibrated between 10 Hz and 5 kHz, so don’t trust the data outside of that frequency range.

The next stage of our data’s journey is going through detector characterisation and data quality checks. In addition to measuring gravitational-wave strain, we record many other data channels: about 200,000 per detector. These measure all sorts of things, from the internal state of the instrument, to monitoring the physical environment around the detectors. These auxiliary channels are used to check the data quality. In some cases, an auxiliary channel will record a source of noise, like scattered light or the mains power frequency, allowing us to clean up our strain data by subtracting out this noise. In other cases, an auxiliary channel can act as a witness to a glitch in our detector, identifying when it is misbehaving so that we know not to trust that part of the data. The GW150914 Detector Characterisation Paper goes into details of how we check potential detections. In doing data quality checks we are careful to only use the auxiliary channels which record something which would be independent of a passing gravitational wave.

We have 4 flags for data quality:

  1. DATA: All clear. Certified fresh. Eat as much as you like.
  2. CAT1: A critical problem with the instrument. Data from these times are likely to be a dumpster fire of noise. We do not use them in our analyses, and they are currently excluded from our public releases. About 1.7% of Hanford data and 1.0% of Livingston data were flagged with CAT1 in O1. In O2, we got this down to 0.001% for Hanford, 0.003% for Livingston and 0.05% for Virgo.
  3. CAT2: Some activity in an auxiliary channel (possibly the electric boogaloo monitor) which has a well understood correlation with the measured strain channel. You would therefore expect to find some form of glitchiness in the data.
  4. CAT3: There is some correlation in an auxiliary channel and the strain channel which is not understood. We’re not currently using this flag, but it’s kept as an option.

It’s important to verify the data quality before starting your analysis. You don’t want to get excited to discover a completely new form of gravitational wave only to realise that it’s actually some noise from nearby logging. Remember, if a tree falls in the forest and no-one is around, LIGO will still know.

To test our systems, we also occasionally perform a signal injection: we move the mirrors to simulate a signal. This is useful for calibration and for testing analysis algorithms. We don’t perform injections very often (they get in the way of looking for real signals), but these times are flagged. Just as for data quality flags, it is important to check for injections before analysing a stretch of data.

Having passed all these checks, the data are ready to analyse!

Yes!

Excited Data. Credit: Paramount

Accessing the data

After our data have been lovingly prepared, they are served up in two data formats:

  • Hierarchical Data Format HDF, which is a popular data storage format, as it easily allows for metadata and multiple data sets (like the important data quality flags) to be packaged together.
  • Gravitational Wave Frame GWF, which is the standard format we use internally. Veteran gravitational-wave scientists often get a far-away haunted look when you bring up how the specifications for this file format were decided. It’s best not to mention it unless you are also buying them a stiff drink.

In these files, you will find h(t) sampled at either 4096 Hz or 16384 Hz (both are available). Pick the sampling rate you need depending upon the frequency range you are interested in: the 4096 Hz data are good up to 1.7 kHz, while the 16384 Hz data are good to the limit of the calibration range at 5 kHz.

Files can be downloaded from the GWOSC website. If you want to download a large amount, it is recommended to use the CernVM-FS distributed file system.
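If you just want a quick look at a stretch of public data, the gwpy Python package can fetch it from GWOSC directly. The snippet below is a sketch; check the gwpy and GWOSC documentation for the current keyword arguments and available data sets.

```python
from gwpy.timeseries import TimeSeries

# Sketch: fetch public Hanford strain data around GW150914 from GWOSC.
# The GPS interval and keyword arguments here are illustrative; consult the
# gwpy documentation for the options in your installed version.
start = 1126259446          # roughly 16 s before GW150914
end = 1126259478            # roughly 16 s after

# 4096 Hz data are fine up to ~1.7 kHz; request 16384 Hz if you need
# frequencies closer to the 5 kHz calibration limit.
data = TimeSeries.fetch_open_data('H1', start, end, sample_rate=4096)

print(data.sample_rate, data.duration)
```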

To check when the gravitational-wave detectors were observing, you can use the Timeline search.

GWOSC Timeline

Screenshot of the GWOSC Timeline showing observing from the fifth science run (S5) in the initial detector era through to the second observing run (O2) of the advanced detector era. Bars show observing of GEO 600 (G1), Hanford (H1 and H2), Livingston (L1) and Virgo (V1). Hanford initially had two detectors housed within its site; the plan in the advanced detector era is to install the equipment as LIGO India instead.

Try this at home

Having gone through all these details, you should now know what our data are, over what ranges they can be analysed, and how to get access to them. Your cup of tea has also probably gone cold. Why not make yourself a new one, and have a couple of biscuits as a reward too. You deserve it!

To help you on your way in starting analysing the data, GWOSC has a set of tutorials (and don’t forget the Data Analysis Guide), and a collection of open source software. Have fun, and remember, it’s never aliens.

Bonus notes

Release schedule

The current policy is that data are released:

  1. In a chunk surrounding an event at time of publication of that event. This enables the new detection to be analysed by anyone. We typically release about an hour of data around an event.
  2. 18 months after the end of the run. This time gives us a chance to properly calibrate the data, check the data quality, and then run the analyses we are committed to. A lot of work goes into producing gravitational wave data!

Start marking your calendars now for the release of O3 data.

Summer studenting

In summer 2019, while we were finishing up the Data Analysis Guide, I gave it to one of my summer students, Andrew Kim, as an introduction. Andrew was working on gravitational-wave data analysis, so I hoped that he’d find it useful. He ended up working through the draft notebook made to accompany the paper and making a number of useful suggestions—he became an author on the paper because of these contributions, which was nice.

The conspiracy of residuals

The Data Analysis Guide is an extremely useful paper. It explains many details of gravitational-wave analysis. The detections made by LIGO and Virgo over the last few years have increased the interest in analysing gravitational waves, making it the perfect time to write such an article. However, that’s not really what motivated us to write it.

In 2017, a paper appeared on the arXiv making claims of suspicious correlations in our LIGO data around GW150914. Could this call into question the very nature of our detection? No. The paper has two serious flaws.

  1. The first argument in the paper was that there were suspicious phase correlations in the data. This is because the authors didn’t window their data before Fourier transforming.
  2. The second argument was that the residuals presented in Figure 1 of the GW150914 Discovery Paper contain a correlation. This is true, but these residuals aren’t actually the results of how we analyse the data. The point of Figure 1 was to show that you don’t need our fancy analysis to see the signal—you can spot it by eye. Unfortunately, doing things by eye isn’t perfect, and this imperfection was picked up on.

The first flaw is a rookie mistake—pretty much everyone does it at some point. I did it starting out as a first-year PhD student, and I’ve run into it with all the undergraduates I’ve worked with writing their own analyses. The authors of this paper are rookies in gravitational-wave analysis, so they shouldn’t be judged too harshly for falling into this trap, and it is something so simple I can’t blame the referee of the paper for not thinking to ask. Any physics undergraduate who has met Fourier transforms (the second year of my degree) should grasp the mistake—it’s not something esoteric you need to be an expert in quantum gravity to understand.

The second flaw is something which could have been easily avoided if we had been more careful in the GW150914 Discovery Paper. We could have easily aligned the waveforms properly, or more clearly explained that the treatment used for Figure 1 is not what we actually do. However, we did write many other papers explaining what we did do, so we were hardly being secretive. While Figure 1 was not perfect, it was not wrong—it might not be what you might hope for, but it is described correctly in the text, and none of the LIGO–Virgo results depend on the figure in any way.

Estimated waveforms from different models

Recovered gravitational waveforms from our analysis of GW150914. The grey line shows the data whitened by the noise spectrum. The dark band shows our estimate for the waveform without assuming a particular source. The light bands show results if we assume it is a binary black hole (BBH) as predicted by general relativity. This plot more accurately represents how we analyse gravitational-wave data. Figure 6 of the GW150914 Parameter Estimation Paper.

Both mistakes are easy to fix. They are at the level of “Oops, that’s embarrassing! Give me 10 minutes. OK, that looks better”. Unfortunately, that didn’t happen.

The paper regrettably got picked up by science blogs, and caused quite a flutter. There were demands that LIGO and Virgo publicly explain ourselves. This was difficult—the Collaboration is set up to do careful science, not handle a PR disaster. One of the problems was that we didn’t want to be seen to be policing the use of our data. We can’t check that every paper ever using our data does everything perfectly. We don’t have time, and that probably wouldn’t encourage people to use our data if they knew any mistake would be pulled up by this 1000-person collaboration. A second problem was that getting anything approved as an official Collaboration document takes ages—getting consensus amongst so many people isn’t always easy. What would you do—would you want to be the faceless Collaboration persecuting the helpless, plucky scientists trying to check results?

There were private communications between people in the Collaboration and the authors. It took us a while to isolate the sources of the problems. In the meantime, pressure was mounting for an official™ response. It’s hard to justify why your analysis is correct by gesturing to a stack of a dozen papers—people don’t have time to dig through all that (I actually sent links to 16 papers to a science journalist who contacted me back in July 2017). Our silence may have been perceived as arrogance or guilt.

It was decided that we would put out an unofficial response. Ian Harry had been communicating with the authors, and wrote up his notes which Sean Carroll kindly shared on his blog. Unfortunately, this didn’t really make anyone too happy. The authors of the paper weren’t happy that something was shared via such an informal medium; the post is too technical for the general public to appreciate, and there was a minor typo in the accompanying code which (since fixed) was seized upon. It became necessary to write a formal paper.

Oh, won't somebody please think of the children?

Peer review will save the children! Credit: Fox

We did continue to try to explain the errors to the authors. I have colleagues who spent many hours in a room in Copenhagen trying to explain the mistakes. However, little progress was made, and it was not a fun time™. I can imagine at this point that the authors of the paper were sufficiently angry not to want to listen, which is a shame.

Now that the Data Analysis Guide is published, everyone will be satisfied, right? A refereed journal article should quash all fears, surely? Sadly, I doubt this will be the case. I expect these doubts will keep circulating for years. After all, there are those who still think vaccines cause autism. Fortunately, not believing in gravitational waves won’t kill any children. If anyone asks though, you can tell them that any doubts on LIGO’s analysis have been quashed, and that vaccines cause adults!

For a good account of the back and forth, Natalie Wolchover wrote a nice article in Quanta, and for a more acerbic view, try Mark Hannam’s blog.

Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data

The most recent, and most sensitive, all-sky search for continuous gravitational waves shows no signs of a detection. These signals from rotating neutron stars remain elusive. New data from the advanced detectors may change this, but we will have to wait a while to find out. This at least gives us time to try to figure out what to do with a detection, should one be made.

New years and new limits

The start of the new academic year is a good time to make resolutions—much better than wet and windy January. I’m trying to be tidier and neater in my organisation. Amid cleaning up my desk, which is covered in about an inch of papers, I uncovered this recent Collaboration paper, which I had lost track of.

The paper is the latest in the continuous stream of non-detections of continuous gravitational waves. These signals could come from rotating neutron stars which are deformed or excited in some way, and the hope is that from such an observation we could learn something about the structure of neutron stars.

The search uses old data from initial LIGO’s sixth science run. Searches for continuous waves require lots of computational power, so they can take longer than even our analyses of binary neutron star coalescences. This is a semi-coherent search, like the recent search of the Orion spur—somewhere between an incoherent search, which looks for signal power of any form in the detectors, and a fully coherent search, which looks for signals which exactly match the way a template wave evolves [bonus note]. The big difference compared to the Orion spur search is that this one looks at the entire sky. This makes it less sensitive than the spotlight search was in those narrow directions, but means we are not excluding the possibility of sources from other locations.

Part of the Galaxy searched

Artist’s impression of the local part of the Milky Way. The yellow cones mark the extent of the Orion Spur spotlight search, and the pink circle shows the equivalent sensitivity of this all-sky search. Green stars indicate known pulsars. Original image: NASA/JPL-Caltech/ESO/R. Hurt.

The search identified 16 outliers, but an examination of all of these showed they could be explained either as an injected signal or as detector noise. Since no signals were found, we can instead place some upper limits on the strength of signals.

The plot below translates the calculated upper limits (above which there would have been a ~75%–95% chance of us detecting the signal) into the size of neutron star deformations. Each curve shows the limits on detectable signals at different distances, depending upon their frequency and the rate of change of their frequency. The dashed lines show limits on ellipticity \varepsilon, a measure of how bumpy the neutron star is. Larger deformations mean quicker changes of frequency and produce louder signals, therefore they can be detected further away.
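For reference, the usual back-of-the-envelope relation behind curves like these links the gravitational-wave amplitude h_0 of a rotating neutron star to its ellipticity \varepsilon, gravitational-wave frequency f_\mathrm{gw} and distance d (taking the conventional fiducial moment of inertia I \approx 10^{38}~\mathrm{kg\,m^2}),

\displaystyle h_0 = \frac{4 \pi^2 G}{c^4} \frac{I \varepsilon f_\mathrm{gw}^2}{d},

so, for a given sensitivity, bumpier and faster-spinning stars can be seen out to larger distances.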

Limits on detectable signals and ellipticities

Range of the PowerFlux search for rotating neutron stars assuming that spin-down is entirely due to gravitational waves. The solid lines show the upper limits as a function of the gravitational-wave frequency and its rate of change; the dashed lines are the corresponding limits on ellipticity, and the dotted line marks the maximum searched spin-down. Figure 6 of Abbott et al. (2016).

Neutron stars are something like giant atomic nuclei. Figuring out the properties of the strange matter that makes up neutron stars is an extremely difficult problem. We’ll never be able to recreate such exotic matter in the laboratory. Gravitational waves give us a rare means of gathering experimental data on how this matter behaves. However, exactly how we convert a measurement of a signal into constraints on the behaviour of the matter is still uncertain. I think that making a detection might only be the first step in understanding the sources of continuous gravitational waves.

arXiv: 1605.03233 [gr-qc]
Journal: Physical Review D; 94(4):042002(14); 2016
Other new academic year resolution: To attempt to grow a beard. Beard stroking helps you think, right?

Bonus note

The semi-coherent search

As the first step of this search, the PowerFlux algorithm looks for power that changes in frequency as expected for a rotating neutron star: it factors in Doppler shifting due to the motion of the Earth and a plausible spin down (slowing of the rotation) of the neutron star. As a follow up, the Loosely Coherent algorithm is used, which checks for signals which match short stretches of similar templates. Any candidates that make it through all stages of refinement are then examined in more detail. This search strategy is described in detail for the S5 all-sky search.

Search for transient gravitational waves in coincidence with short-duration radio transients during 2007–2013

Gravitational waves give us a new way of observing the Universe. This raises the possibility of multimessenger astronomy, where we study the same system using different methods: gravitational waves, light or neutrinos. Each messenger carries different information, so by using them together we can build up a more complete picture of what’s going on. This paper looks for gravitational waves that coincide with radio bursts. None are found, but we now have a template for how to search in the future.

On a dark night, there are two things which almost everyone will have done: wondered at the beauty of the starry sky and wondered exactly what it was that just went bump… Astronomers do both. Transient astronomy is about figuring out what are the things which go bang in the night—not the things which make suspicious noises, but objects which appear (and usually disappear) suddenly in the sky.

Most processes in astrophysics take a looooong time (our Sun is four-and-a-half billion years old and is just approaching middle age). Therefore, when something happens suddenly, flaring perhaps over just a few seconds, you know that something drastic must be happening! We think that most transients must be tied up with a violent event such as an explosion. However, because transients are so short, it can be difficult to figure out exactly where they come from (both because they might have faded by the time you look, and because there’s little information to learn from a blip in the first place).

Radio transients are bursts of radio emission of uncertain origin. We’ve managed to figure out that some come from microwave ovens, but the rest do seem to come from space. This paper looks at two types: rotating radio transients (RRATs) and fast radio bursts (FRBs). RRATs look like the signals from pulsars, except that they don’t have the characteristic period pattern of pulsars. It may be that RRATs come from dying pulsars, flickering before they finally switch off, or it may be that they come from neutron stars which are not normally pulsars, but have been excited by a fracturing of their crust (a starquake). FRBs last a few milliseconds, they could be generated when two neutron stars merge and collapse to form a black hole, or perhaps from a highly-magnetised neutron star. Normally, when astronomers start talking about magnetic fields, it means that we really don’t know what’s going on [bonus note]. That is the case here. We don’t know what causes radio transients, but we are excited to try figuring it out.

This paper searches old LIGO, Virgo and GEO data for any gravitational-wave signals that coincide with observed radio transients. We use a catalogue of RRATs and FRBs from the Green Bank Telescope and the Parkes Observatory, and search around these times. We use a burst search, which doesn’t restrict itself to any particular form of gravitational-wave; however, the search was tuned for damped sinusoids and sine–Gaussians (generic wibbles), cosmic strings (which may give an indication of how uncertain we are of where radio transients could come from), and coalescences of binary neutron stars or neutron star–black hole binaries. Hopefully the search covers all plausible options. Discovering a gravitational wave coincident with a radio transient would give us much welcomed information about the source, and perhaps pin down their origin.

Results from search for gravitational waves coincident with radio transients

Search results for gravitational waves coincident with radio transients. The probabilities for each time containing just noise (blue) match the expected background distribution (dashed). This is consistent with a non-detection.

The search discovered nothing. Results match what we would expect from just noise in the detectors. This is not too surprising since we are using data from the first-generation detectors. We’ll be repeating the analysis with the upgraded detectors, which can find signals from larger distances. If we are lucky, multimessenger astronomy will allow us to figure out exactly what needs to go bump to create a radio transient.

arXiv: 1605.01707 [astro-ph.HE]
Journal: Physical Review D; 93(12):122008(14); 2016
Science summary: Searching for gravitational wave bursts in coincidence with short duration radio bursts
Favourite thing that goes bump in the night: Heffalumps and Woozles [probably not the cause of radio transients]

Bonus note

Magnetism and astrophysics

Magnetic fields complicate calculations. They make things more difficult to model and are therefore often left out. However, we know that magnetic fields are everywhere and that they do play important roles in many situations. Therefore, they are often invoked as an explanation of why models can’t explain what’s going on. I learnt early in my PhD that you could ask “What about magnetic fields?” at the end of almost any astrophysics seminar (it might not work for some observational talks, but then you could usually ask “What about dust?” instead). Handy if ever you fall asleep…

Search of the Orion spur for continuous gravitational waves using a loosely coherent algorithm on data from LIGO interferometers

A cloudy bank holiday Monday is a good time to catch up on blogging. Following the splurge of GW150914 papers, I’ve rather fallen behind. Published back in February, this paper is a search for continuous-wave signals: the almost-constant hum produced by rapidly rotating neutron stars.

Continuous-wave searches are extremely computationally expensive. The searches take a while to do, which can lead to a delay before results are published [bonus note]. This is the result of a search using data from LIGO’s sixth science run (March–October 2010).

To detect a continuous wave, you need to sift the data to find a signal that is present throughout all the data. Rotating neutron stars produce a gravitational-wave signal with a frequency twice their rotational frequency. This frequency is almost constant, but could change as the observation goes on because (i) the neutron star slows down as energy is lost (from gravitational waves, magnetic fields or some form of internal sloshing around); (ii) there is some Doppler shifting because of the Earth’s orbit around the Sun, and, possibly, (iii) there could be some Doppler shifting because the neutron star is orbiting another object. How do you check for something that is always there?
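To see the sort of frequency evolution a search has to track, here is a minimal Python sketch with made-up numbers; a real search folds in the full ephemeris of the Earth’s motion and the source’s sky position.

```python
import numpy as np

# Illustrative frequency model for a continuous wave (not a real search set-up).
f0 = 100.0            # gravitational-wave frequency at the start (Hz)
fdot = -1e-9          # spin-down rate (Hz/s): the star slows as it loses energy
v_orb = 3e4           # Earth's orbital speed (m/s)
c = 3e8               # speed of light (m/s)
year = 3.15e7         # seconds in a year

t = np.linspace(0, year, 1000)
intrinsic = f0 + fdot * t                                  # (i) slow spin-down
doppler = 1 + (v_orb / c) * np.cos(2 * np.pi * t / year)   # (ii) annual Doppler modulation
f_observed = intrinsic * doppler

# The Doppler term shifts the frequency by a fraction of order v/c ~ 1e-4,
# i.e. ~10 mHz at 100 Hz: many frequency bins over a long observation.
```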

There are two basic strategies for spotting continuous waves. First, we could look for excess power in a particular frequency bin. If we measure something in addition to what we expect from the detector noise, this could be a signal. Looking at the power is simple, and so not too expensive. However, we’re not using any information about what a real signal should look like, and so it must be really loud for us to be sure that it’s not just noise. Second, we could coherently search for signals using templates for the expected signals. This is much more work, but gives much better sensitivity. Is there a way to compromise between the two strategies to balance cost and sensitivity?

This paper reports results of a loosely coherent search. Instead of checking how well the data match particular frequencies and frequency evolutions, we average over a family of similar signals. This is less sensitive, as we get a bit more wiggle room in what would be identified as a candidate, but it is also less expensive than checking against a huge number of templates.

We could only detect continuous waves from nearby sources: neutron stars in our own Galaxy. (Perhaps 0.01% of the distance of GW150914). It therefore makes sense to check nearby locations which could be home to neutron stars. This search narrows its range to two directions in the Orion spur, our local band with a high concentration of stars. By focussing in on these spotlight regions, we increase the sensitivity of the search for a given computational cost. This search could possibly dig out signals from twice as far away as if we were considering all possible directions.

Part of the Galaxy searched

Artist’s impression of the local part of the Milky Way. The Orion spur connects the Perseus and Sagittarius arms. The yellow cones mark the extent of the search (the pink circle shows the equivalent all-sky sensitivity). Green stars indicate known pulsars. Original image: NASA/JPL-Caltech/ESO/R. Hurt.

The search found 70 interesting candidates. Follow-up study showed that most were due to instrumental effects. There were three interesting candidates left after these checks, none significant enough to be a detection, but still worth looking at in detail. A full coherent analysis was done for these three candidates. This showed that they were probably caused by noise. We have no detections.

arXiv: 1510.03474 [gr-qc]
Journal: Physical Review D; 93(4):042006(14); 2016
Science summary: Scouting our Galactic neighborhood
Other bank holiday activities: Scrabble

Scrabble board

Bank holiday family Scrabble game. When thinking about your next turn, you could try seeing if your letters match a particular word (a coherent search which would get you the best score, but take ages), or just if your letters jumble together to make something word-like (an incoherent search, that is quick, but may result in lots of things that aren’t really words).

Bonus note

Niceness

The Continuous Wave teams are polite enough to wait until we’re finished searching for transient gravitational-wave signals (which are more time sensitive) before taking up the LIGO computing clusters. They won’t have any proper results from O1 just yet.

All-sky search for long-duration gravitational wave transients with LIGO

It’s now about 7 weeks since the announcement, and the madness is starting to subside. Although, that doesn’t mean things aren’t busy—we’re now enjoying completely new forms of craziness. In mid March we had our LIGO–Virgo Collaboration Meeting. This was part celebration, part talking about finishing our O1 analysis and part thinking ahead to O2, which is shockingly close. It was fun, there was cake.

Gravitational wave detection cake

Celebratory cake from the March LIGO–Virgo Meeting. It was delicious and had a fruity (strawberry?) filling. The image is February 11th’s Astronomy Picture of the Day. There was a second cake without a picture, that was equally delicious, but the queue was shorter.

All this busyness means that I’ve fallen behind with my posts, and I’ve rather neglected the final paper published the week starting 8 February. This is perhaps rather apt, as this paper has the misfortune to be the first non-detection published in the post-detection world. It is also about a neglected class of signals.

Long-duration transients

We look for several types of signals with LIGO (and hopefully soon Virgo and KAGRA):

  • Compact binary coalescences (like two merging black holes), for which we have templates for the signal. High mass systems might only last a fraction of a second within the detector’s frequency range, but low mass systems could last for a minute (which is a huge pain for us to analyse).
  • Continuous waves from rotating neutron stars which are almost constant throughout our observations.
  • Bursts, which are transient signals where we don’t have a good model. The classic burst source is from a supernova explosion.

We have some effective search pipelines for finding short bursts—signals of about a second or less. Coherent Waveburst, which was the first code to spot GW150914, is perhaps the best known example. This paper looks at finding longer burst signals, a few seconds to a few hundred seconds in length.

There aren’t too many well studied models for these long bursts. Most of the potential sources are related to the collapse of massive stars. There can be a large amount of matter moving around quickly in these situations, which is what you want for gravitational waves.

Massive stars may end their life in a core collapse supernova. Having used up its nuclear fuel, the star no longer has the energy to keep itself fluffy, and its core collapses under its own gravity. The collapse leads to an explosion as material condenses to form a neutron star, blasting off the outer layers of the star. Gravitational waves could be generated by the sloshing of the outer layers as some material is shot outwards and some falls back, hitting the surface of the new neutron star. The new neutron star itself will start life puffed up and perhaps rapidly spinning, and can generate gravitational waves as it settles down to a stable state—a similar thing could happen if an older neutron star is disturbed by a glitch (where we think the crust readjusts itself in something like an earthquake, but more cataclysmic), or if a neutron star accretes a large blob of material.

For the most massive stars, the core continues to collapse through being a neutron star to become a black hole. The collapse would just produce a short burst, so it’s not what we’re looking for here. However, once we have a black hole, we might build a disc out of material swirling into the black hole (perhaps remnants of the outer parts of the star, or maybe from a companion star). The disc may be clumpy, perhaps because of eddies or magnetic fields (the usual suspects when astrophysicists don’t know exactly what’s going on), and these rapidly inspiralling blobs could emit a gravitational-wave signal.

The potential sources don’t involve as much mass as a compact binary coalescence, so these signals wouldn’t be as loud. Therefore we couldn’t see them quite as far away, but they could give us some insight into these messy processes.

The search

The paper looks at results using old LIGO data from the fifth and sixth science runs (S5 and S6). Virgo was running at this time, but the data wasn’t included as it vastly increases the computational cost while only increasing the search sensitivity by a few percent (although it would have helped with locating a source if there were one). The data is analysed with the Stochastic Transient Analysis Multi-detector Pipeline (STAMP); we’ll be doing a similar thing with O1 data too.

STAMP searches for signals by building a spectrogram: a plot of how much power there is at a particular gravitational-wave frequency at a particular time. If there is just noise, you wouldn’t expect the power at one frequency and time to be correlated with that at another frequency and time. Therefore, the search looks for clusters, grouping together times and frequencies close to one another where there is more power than you might expect.
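The excess-power idea is easy to sketch in Python. The thresholding below is purely illustrative (it is not how STAMP actually normalises or ranks pixels), but it shows the basic spectrogram-and-clusters logic.

```python
import numpy as np
from scipy.signal import spectrogram

# Sketch: build a time-frequency map and flag pixels much louder than is
# typical for their frequency. Threshold and normalisation are illustrative.
fs = 4096
t = np.arange(0, 64, 1/fs)
data = np.random.randn(len(t))                                     # stand-in for whitened data
data += 0.1 * np.sin(2 * np.pi * 200 * t) * ((t > 30) & (t < 40))  # a weak, long-lived transient

f, seg_times, power = spectrogram(data, fs=fs, nperseg=fs)         # ~1 s x 1 Hz pixels
typical = np.median(power, axis=1, keepdims=True)                  # typical power per frequency
hot = power > 5 * typical                                          # candidate "hot" pixels

# Many hot pixels are chance fluctuations; a real pipeline groups neighbouring
# hot pixels into clusters and ranks the clusters with a detection statistic.
```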

The analysis is cunning, as it coherently analyses data from both detectors together when constructing the spectrogram, folding in the extra distance a gravitational wave must travel between the detectors for a given sky position.

The significance of events is calculated in a similar way to how we search for binary black holes. The pipeline ranks candidates using a detection statistic, a signal-to-noise ratio for the cluster of interesting time–frequency pixels \mathrm{SNR}_\Gamma (something like the amount of power measured divided by the amount you’d expect randomly). We work out how frequently you’d expect a particular value of \mathrm{SNR}_\Gamma by analysing time-shifted data: where we’ve shifted the data from one of the detectors in time relative to data from the other so that we know there can’t be the same signal found in both.
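Here is a minimal sketch of the time-shift trick for estimating a background, using a made-up coincident statistic on fake data; the real analysis uses its own statistic and much longer stretches of data.

```python
import numpy as np

# Toy background estimation by time-sliding one detector against the other.
rng = np.random.default_rng(0)
h1 = rng.standard_normal(10_000)   # stand-in statistic time series, detector 1
l1 = rng.standard_normal(10_000)   # stand-in statistic time series, detector 2

def loudest_coincident(a, b):
    """Toy coincident statistic: the loudest product of simultaneous samples."""
    return np.max(a * b)

observed = loudest_coincident(h1, l1)

# Repeat with many relative shifts (in a real analysis, longer than the light
# travel time between sites) to see how loud pure noise alone gets.
background = [loudest_coincident(h1, np.roll(l1, shift))
              for shift in range(100, 10_000, 100)]
false_alarm_probability = np.mean(np.array(background) >= observed)
print(false_alarm_probability)
```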

The distribution of \mathrm{SNR}_\Gamma is shown below from the search (dots) and from the noise background (lines). You can see that things are entirely consistent with our expectations for just noise. The most significant event has a false alarm probability of 54%, so you’re better off betting it’s just noise. There are no detections here.

False alarm rate distribution

False alarm rate (FAR) distribution of triggers from S5 (black circles) and S6 (red triangles) as a function of the signal-to-noise ratio. The background S5 and S6 noise distributions are shown by the solid black and dashed red lines respectively. An idealised Gaussian noise background is shown in cyan. There are no triggers significantly above the expected background level. Fig. 5 from Abbott et al. (2016).

Since the detectors are now much more sensitive, perhaps there’s something lurking in our new data. I still think this is unlikely since we can’t see sources from a significant distance, but I guess we’ll have to wait for the results of the analysis.

arXiv: 1511.04398 [gr-qc]
Journal: Physical Review D; 93(4):042005(19); 2016
Science summary: Stuck in the middle: an all-sky search for gravitational waves of intermediate duration
Favourite (neglected) middle child: Lisa Simpson

View from Guano Point

Sunset over the Grand Canyon. One of the perks of academia is the travel. A group of us from Birmingham went on a small adventure after the LIGO–Virgo Meeting. This is another reason why I’ve not been updating my blog.

Searches for continuous gravitational waves from nine young supernova remnants

The LIGO Scientific Collaboration is busy analysing the data we’re currently taking with Advanced LIGO at the moment. However, the Collaboration is still publishing results from initial LIGO too. The most recent paper is a search for continuous waves—signals that are an almost constant hum throughout the observations. (I expect they’d be quite annoying for the detectors). Searching for continuous waves takes a lot of computing power (you can help by signing up for Einstein@Home), and is not particularly urgent since the sources don’t do much, hence it can take a while for results to appear.

Supernova remnants

Massive stars end their lives with an explosion, a supernova. Their core collapses down and their outer layers are blasted off. The aftermath of the explosion can be beautiful, with the thrown-off debris forming a bubble expanding out into the interstellar medium (the diffuse gas, plasma and dust between stars). This structure is known as a supernova remnant.

The bubble of a supernova remnant

The youngest known supernova remnant, G1.9+0.3 (it’s just 150 years old), observed in X-ray and optical light. The ejected material forms a shock wave as it pushes the interstellar material out of the way. Credit: NASA/CXC/NCSU/DSS/Borkowski et al.

At the centre of the supernova remnant may be what is left following the collapse of the core of the star. Depending upon the mass of the star, this could be a black hole or a neutron star (or it could be nothing). We’re interested in the case it is a neutron star.

Neutron stars

Neutron stars are incredibly dense. One teaspoon’s worth would have about as much mass as 300 million elephants. Neutron stars are like giant atomic nuclei. We’re not sure how matter behaves in such extreme conditions as they are impossible to replicate here on Earth.

If a neutron star rotates rapidly (we know many do) and has an uneven surface, or if there are waves in the neutron star that move lots of material around (like Rossby waves on Earth), then it can emit continuous gravitational waves. Measuring these gravitational waves would tell you about how bumpy the neutron star is or how big the waves are, and therefore something about what the neutron star is made from.

Neutron stars are most likely to emit loud gravitational waves when they are young. This is for two reasons. First, the supernova explosion is likely to give the neutron star a big whack; this could ruffle up its surface and set off lots of waves, giving rise to the sort of bumps and wobbles that emit gravitational waves. As the neutron star ages, things can quiet down, the neutron star relaxes, bumps smooth out and waves dissipate. This leaves us with smaller gravitational waves. Second, gravitational waves carry away energy, slowing the rotation of the neutron star. This also means that the signal gets quieter (and harder to detect) as the neutron star ages.

Since young neutron stars are the best potential sources, this study looked at nine young supernova remnants in the hopes of finding continuous gravitational waves. Searching for gravitational waves from particular sources is less computationally expensive than searching the entire sky. The search included Cassiopeia A, which had been previously searched in LIGO’s fifth science run, and G1.9+0.3, which is only 150 years old, as discovered by Dave Green. The positions of the searched supernova remnants are shown in the map of the Galaxy below.

Galactic map of supernova remnants

The nine young supernova remnants searched for continuous gravitational waves. The yellow dot marks the position of the Solar System. The green markers show the supernova remnants, which are close to the Galactic plane. Two possible positions for Vela Jr (G266.2−1.2) were used, since we are uncertain of its distance. Original image: NASA/JPL-Caltech/ESO/R. Hurt.

Gravitational-wave limits

No gravitational waves were found. The search checks how well template waveforms match up with the data. We tested that this works by injecting some fake signals into the data. Since we didn’t detect anything, we can place upper limits on how loud any gravitational waves could be. These limits were double-checked by injecting some more fake signals at the limit, to see if we could detect them. We quoted 95% upper limits; that is, the level where we expect that if a signal were present we would see it 95% of the time. The results actually have a small safety margin built in, so the injected signals were typically found 96%–97% of the time. In any case, we are fairly sure that there aren’t gravitational waves at or above the upper limits.

These upper limits are starting to tell us interesting things about the size of neutron-star bumps and waves. Hopefully, with data from Advanced LIGO and Advanced Virgo, we’ll actually be able to make a detection. Then we’ll not only be able to say that these bumps and waves are smaller than a particular size, but they are this size. Then we might be able to figure out the recipe for making the stuff of neutron stars (I think it might be more interesting than just flour and water).

arXiv: 1412.5942 [astro-ph.HE]
Journal: Astrophysical Journal; 813(1):39(16); 2015
Science summary: Searching for the youngest neutron stars in the Galaxy
Favourite supernova remnant: Cassiopeia A