Parameter estimation for binary neutron-star coalescences with realistic noise during the Advanced LIGO era

The first observing run (O1) of Advanced LIGO is nearly here, and with it the prospect of the first direct detection of gravitational waves. That’s all wonderful and exciting (far more exciting than a custard cream or even a chocolate digestive), but there’s a lot to be done to get everything ready. Aside from remembering to vacuum the interferometer tubes and polish the mirrors, we need to see how the data analysis will work out. After all, having put so much effort into the detector, it would be a shame if we couldn’t do any science with it!

Parameter estimation

Since joining the University of Birmingham team, I’ve been busy trying to figure out how well we can measure things using gravitational waves. I’ve been looking at binary neutron star systems, which we expect to be the main source of signals for Advanced LIGO. We’d like to estimate how massive the neutron stars are, how fast they’re spinning, how far away they are, and where in the sky they are. My first paper on how well we should be able to do this has just been published. It took a lot of hard work from a lot of people, so I’m pleased it’s all done. I think I’ve earnt a celebratory biscuit. Or two.

When we see something that looks like it could be a gravitational wave, we run code to analyse the data and try to work out the properties of the signal. Some properties are a bit trickier to work out than others. Sadly, we don’t have an infinite number of computers, which means it can take a while to get results. Much longer than the time to eat a packet of Jaffa Cakes…

The fastest algorithm we have for binary neutron stars is BAYESTAR. This takes about the same time as eating one chocolate finger. Perhaps two, if you’re not worried about the possibility of choking. BAYESTAR is fast as it only estimates where the source is coming from. It doesn’t try to calculate a gravitational-wave signal and match it to the detector measurements; instead, it just looks at numbers produced by the detection pipeline—the code that monitors the detectors and automatically flags whenever something interesting appears. As far as I can tell, you give BAYESTAR this information and a fresh cup of really hot tea, and it uses Bayes’ theorem to work out how likely it is that the signal came from each patch of the sky.

To work out further details, we need to know what a gravitational-wave signal looks like and then match this to the data. This is done using a different algorithm, which I’ll refer to as LALInference. (As names go, this isn’t as cool as SKYNET). This explores parameter space (hopping between different masses, distances, orientations, etc.), calculating waveforms and then working out how well they match the data, or rather how likely it is that we’d get just the right noise in the detector to make the waveform fit what we observed. We then use another liberal helping of Bayes’ theorem to work out how probable those particular parameter values are.

It’s rather difficult to work out the waveforms, but some are easier than others. One of the things that makes them trickier is adding in the spins of the neutron stars. If you made a batch of biscuits at the same time you started a LALInference run, they’d still be good by the time a non-spinning run finished. With a spinning run, the biscuits might not be quite so appetising—I generally prefer more chocolate than penicillin on my biscuits. We’re working on speeding things up (if only to prevent increased antibiotic resistance).

In this paper, we were interested in what you could work out quickly, while there’s still a chance to catch any explosion that might accompany the merging of the neutron stars. We think that short gamma-ray bursts and kilonovae might be caused when neutron stars merge and collapse down to a black hole. (I find it mildly worrying that we don’t know what causes these massive explosions). To follow up on a gravitational-wave detection, you need to be able to tell telescopes where to point, and manage this while there’s still something worth seeing. This means that using spinning waveforms in LALInference is right out; we just use BAYESTAR and the non-spinning LALInference analysis.

What we did

To figure out what we could learn from binary neutron stars, we generated a large catalogue of fake signals, and then ran the detection and parameter-estimation codes on it to see how they performed. This has been done before in The First Two Years of Electromagnetic Follow-Up with Advanced LIGO and Virgo, which has a rather delicious astrobites write-up. Our paper is the sequel to this (and features most of the same cast). One of the differences is that The First Two Years assumed that the detectors were perfectly behaved and had lovely Gaussian noise. In this paper, we added in some glitches. We took some real data™ from initial LIGO’s sixth science run and stretched it so that it matches the sensitivity Advanced LIGO is expected to have in O1. This process is called recolouring [bonus note].

We now have fake signals hidden inside noise with realistic imperfections, and can treat it exactly as we would real data. We ran it through the detection pipeline, and anything which was flagged as probably being a signal (we used a false alarm rate of once per century) was analysed with the parameter-estimation codes. We looked at how well we could measure the sky location and distance of the source, and the masses of the neutron stars. It’s all good practice for O1, when we’ll be running this analysis on any detections.

What we found

1. The flavour of noise (recoloured or Gaussian) makes no difference to how well we can measure things on average.
2. Sky-localization in O1 isn’t great, typically hundreds of square degrees (the median 90% credible region is 632 deg²); for comparison, the Moon covers about a fifth of a square degree. This’ll make things interesting for the people with telescopes.

Probability of a gravitational-wave signal coming from different points on the sky. The darker the red, the higher the probability. The star indicates the true location. This is one of the worst localized events from our study for O1. You can find more maps in the data release (including 3D versions); this is Figure 6 of Berry et al. (2015).

3. BAYESTAR does just as well as LALInference, despite being about 2000 times faster.

Sky localization (the size of the patch of the sky that we’re 90% sure contains the source location) varies with the signal-to-noise ratio (how loud the signal is). The approximate best fit is $\log_{10}(\mathrm{CR}_{0.9}/\mathrm{deg^2}) \approx -2 \log_{10}(\varrho) +5.06$, where $\mathrm{CR}_{0.9}$ is the 90% sky area and $\varrho$ is the signal-to-noise ratio. The results for BAYESTAR and LALInference agree, as do the results with Gaussian and recoloured noise. This is Figure 9 of Berry et al. (2015).

4. We can’t measure the distance too well: the median 90% credible interval divided by the true distance (which gives something like twice the fractional error) is 0.85.
5. Because we don’t include the spins of the neutron stars, we introduce some error into our mass measurements. The chirp mass, a combination of the individual masses that we’re most sensitive to [bonus note], is still reliably measured (the median offset is 0.0026 of the mass of the Sun, which is tiny), but we’ll have to wait for the full spinning analysis for individual masses.

Fraction of events with difference between the mean estimated and true chirp mass smaller than a given value. There is an error because we are not including the effects of spin, but this is small. Again, the type of noise makes little difference. This is Figure 15 of Berry et al. (2015).
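The approximate best fit quoted with Figure 9 above is easy to turn into a rule of thumb for how big a sky patch to expect at a given loudness. A minimal sketch (the function name is mine; the constants are those of the fit):

```python
import math

def approx_sky_area(snr):
    """Approximate 90% credible sky area in deg^2 from the fitted scaling
    log10(CR_0.9 / deg^2) ~ -2 * log10(snr) + 5.06.
    The inverse-square dependence means louder signals are localized
    proportionally better."""
    return 10 ** (-2 * math.log10(snr) + 5.06)

# A signal around a network signal-to-noise ratio of 12 (roughly the
# quietest things we would detect) gives an area of several hundred deg^2.
area = approx_sky_area(12.0)
```

Plugging in larger signal-to-noise ratios shows why the loudest events are the ones telescopes have a fighting chance of following up.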

There’s still some work to be done before O1, as we need to finish up the analysis with waveforms that include spin. In the meantime, our results are all available online for anyone to play with.

arXiv: 1411.6934 [astro-ph.HE]
Journal: Astrophysical Journal; 804(2):114(24); 2015
Data release: The First Two Years of Electromagnetic Follow-Up with Advanced LIGO and Virgo
Favourite colour: Blue. No, yellow…

Notes

The colour of noise: Noise is called white if it doesn’t have any frequency dependence. We made ours by taking some noise with initial LIGO’s frequency dependence (coloured noise), removing the frequency dependence (making it white), and then adding in the frequency dependence of Advanced LIGO (recolouring it).
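For the curious, the whiten-then-recolour recipe can be sketched in the frequency domain. This is an illustrative toy, not the code we actually used: a real analysis would window the data and deal carefully with frequency bins where the spectral density is effectively zero.

```python
import numpy as np

def recolour(strain, asd_old, asd_new):
    """Illustrative frequency-domain recolouring.

    Divide out the data's own amplitude spectral density (making it
    white), then multiply in the target detector's amplitude spectral
    density (recolouring it). The ASD arrays must match the rFFT
    frequency bins of the input strain."""
    spectrum = np.fft.rfft(strain)
    white = spectrum / asd_old          # remove the old frequency dependence
    return np.fft.irfft(white * asd_new, n=len(strain))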

The chirp mass: Gravitational waves from a binary system depend upon the masses of the components, which we’ll call $m_1$ and $m_2$. The chirp mass is a combination of these that we can measure really well, as it determines the most significant parts of the shape of the gravitational wave. It’s given by

$\displaystyle \mathcal{M} = \frac{m_1^{3/5} m_2^{3/5}}{(m_1 + m_2)^{1/5}}$.

We get lots of good information on the chirp mass; unfortunately, this isn’t too useful for turning back into the individual masses. For that we need extra information, for example the mass ratio $m_2/m_1$. We can get this from less dominant parts of the waveform, but it’s not typically measured as precisely as the chirp mass, so we’re often left with big uncertainties.
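As a small worked example of the formula above and why the mass ratio matters, here is the chirp mass and its inversion (the function names are mine; masses in solar masses):

```python
def chirp_mass(m1, m2):
    """Chirp mass: M = (m1 * m2)**(3/5) / (m1 + m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def component_masses(mchirp, q):
    """Recover the individual masses from the chirp mass plus the mass
    ratio q = m2/m1 (with q <= 1). Substituting m2 = q * m1 into the
    chirp-mass formula and rearranging gives m1 directly."""
    m1 = mchirp * (1 + q) ** 0.2 / q ** 0.6
    return m1, q * m1
```

For a pair of 1.4 solar mass neutron stars the chirp mass is about 1.22 solar masses; measure that precisely but q poorly, and the individual masses stay uncertain.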

Continuing with my New Year’s resolution to write a post on every published paper, the start of March sees another full author list LIGO publication. Appearing in Classical & Quantum Gravity, the minimalistically titled Advanced LIGO is an instrumental paper. It appears as part of a special focus issue on advanced gravitational-wave detectors, and is happily free to read (good work there). This is The Paper™ for describing how the advanced detectors operate. I think it’s fair to say that my contribution to this paper is 0%.

LIGO stands for Laser Interferometer Gravitational-wave Observatory. As you might imagine, LIGO tries to observe gravitational waves by measuring them with a laser interferometer. (It won’t protect your fencing). Gravitational waves are tiny, tiny stretches and squeezes of space. To detect them we need to measure changes in length extremely accurately. I had assumed that Advanced LIGO will achieve this supreme sensitivity through some dark magic invoked by sacrificing the blood, sweat, tears and even coffee of many hundreds of PhD students upon the altar of science. However, this paper actually shows it’s just really, really, REALLY careful engineering. And giant frickin’ laser beams.

The paper goes through each aspect of the design of the LIGO detectors. It starts with details of the interferometer. LIGO uses giant lasers to measure distances extremely accurately. Lasers are bounced along two 3994.5 m arms and interfered to measure a change in length between the two. In spirit, it is a giant Michelson interferometer, but it has some cunning extra features. Each arm is a Fabry–Pérot etalon, which means that the laser is bounced up and down the arms many times to build up extra sensitivity to any change in length. There are various extra components to make sure that the laser beam is as stable as possible; all in all, there are rather a lot of mirrors, each of which is specially tweaked to make sure that some acronym is absolutely perfect.
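To get a feel for how accurately "extremely accurately" is, here is a back-of-the-envelope calculation (the strain value is an illustrative typical amplitude, not a number from the paper):

```python
# A gravitational wave of strain h changes an arm of length L by
# roughly dL = h * L. For a typical detectable strain of ~1e-21 acting
# on LIGO's 3994.5 m arms:
arm_length = 3994.5   # m, the arm length quoted in the paper
strain = 1e-21        # dimensionless, an illustrative signal amplitude
dl = strain * arm_length   # ~4e-18 m: about a thousandth of a proton's width
```

Which goes some way to explaining all the careful engineering (and the giant frickin’ laser beams).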

Fig. 1 from Aasi et al. (2015), the Advanced LIGO optical configuration. All the acronyms have to be carefully placed in order for things to work. The laser beam starts from the left, passing through subsystems to make sure it’s stable. It is split in two to pass into the interferometer arms at the top and right of the diagram. The laser is bounced many times between the mirrors to build up sensitivity. The interference pattern is read out at the bottom. Normally, the light should interfere destructively, so the output is dark. A change to this indicates a change in length between the arms. That could be because of a passing gravitational wave.

The next section deals with all the various types of noise that affect the detector. It’s this noise that makes it such fun to look for the signals. To be honest, pretty much everything I know about the different types of noise I learnt from Space-Time Quest. This is a lovely educational game developed by people here at the University of Birmingham. In the game, you have to design the best gravitational-wave detector that you can for a given budget. There’s a lot of science that goes into working out how sensitive the detector is. It takes a bit of practice to get into it (remember to switch on the laser first), but it’s very easy to get competitive. We often use the game as part of outreach workshops, and we’ve had some school groups get quite invested in the high-score tables. My tip is that going underground doesn’t seem to be worth the money. Of course, if you happen to be reviewing the proposal to build the Einstein Telescope, you should completely ignore that, and just concentrate on how cool the digging machine looks. Space-Time Quest shows how difficult it can be optimising sensitivity. There are trade-offs between different types of noise, and these have been carefully studied. What Space-Time Quest doesn’t show is just how much work it takes to engineer a detector.

The fourth section is a massive shopping list of components needed to build Advanced LIGO. There are rather more options than in Space-Time Quest, but many are familiar, even if given less friendly names. If this section were the list of contents for some Ikea furniture, you would know that you’ve made a terrible life-choice; there’s no way you’re going to assemble this before Monday. Highlights include the 40 kg mirrors. I’m sure breaking one of those would incur more than seven years’ bad luck. For those of you playing along with Space-Time Quest at home, the mirrors are fused silica. Section 4.8.4 describes how to get the arms to lock, one of the key steps in commissioning the detectors. The section concludes with details of how to control such a complicated instrument; the key seems to be to have so many acronyms that there’s no space for any component to move in an unwanted way.

The paper closes with an outlook for the detector sensitivity. With such a complicated instrument it is impossible to be certain how things will go. However, things seem to have been going smoothly so far, so let’s hope that this continues. The current plan is:

• 2015 3 months observing at a binary neutron star (BNS) range of 40–80 Mpc.
• 2016–2017 6 months observing at a BNS range of 80–120 Mpc.
• 2017–2018 9 months observing at a BNS range of 120–170 Mpc.
• 2019 Achieve full sensitivity of a BNS range of 200 Mpc.

The BNS range is the distance at which a typical binary made up of two 1.4 solar mass neutron stars could be detected when averaging over all orientations. If you have a perfectly aligned binary, you can detect it out to a further distance, the BNS horizon, which is about 2.26 times the BNS range. There are a couple of things to note from the plan. First, the initial observing run (O1 to the cool kids) is this year! The second is how much the range will extend before hitting design sensitivity. This should significantly increase the number of possible detections, as each doubling of the range corresponds to a volume change of a factor of eight. Coupling this with the increasing length of the observing runs should mean that the chance of a detection increases every year. It will be an exciting few years for Advanced LIGO.
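The range, horizon and volume bookkeeping above boils down to two one-liners (function names are mine; the 2.26 factor and the range-cubed scaling are as described in the text):

```python
def horizon_from_range(bns_range):
    """BNS horizon: the distance to a perfectly aligned (face-on,
    directly overhead) binary, about 2.26 times the
    orientation-averaged BNS range."""
    return 2.26 * bns_range

def detection_volume_ratio(range_new, range_old):
    """The number of detectable sources scales roughly with the
    surveyed volume, i.e. with the range cubed, so doubling the range
    buys a factor of eight in volume."""
    return (range_new / range_old) ** 3
```

Going from an 80 Mpc range in O1 to the 200 Mpc design sensitivity is a factor of about 15 in volume, which is why the detection prospects improve so quickly year on year.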

arXiv: 1411.4547 [gr-qc]
Journal: Classical & Quantum Gravity; 32(7):074001(41); 2015
Science summary: Introduction to LIGO & Gravitational Waves
Space-Time Quest high score: 34.859 Mpc