Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

Where do gravitational waves like GW170817 come from? Using our network of detectors, we cannot pinpoint a source, but we can make a good estimate—the amplitude of the signal tells us about the distance; the time delay between the signal arriving at different detectors, and the relative amplitudes of the signal in different detectors, tell us about the sky position (see the excellent video by Leo Singer below).

In this paper we look at full three-dimensional localization of gravitational-wave sources; we import a (rather cunning) technique from computer vision to construct a probability distribution for the source’s location, and then explore how well we could localise a set of simulated binary neutron stars. Knowing the source location enables lots of cool science. First, it aids direct follow-up observations with non-gravitational-wave observatories, searching for electromagnetic or neutrino counterparts. It’s especially helpful if you can cross-reference with galaxy catalogues, to find the most probable source locations (this technique was used to find the kilonova associated with GW170817). Even without finding a counterpart, knowing the most probable host galaxy helps us figure out how the source formed (have lots of stars been born recently, or are all the stars old?), and allows us to measure the expansion of the Universe. Having a reliable technique to reconstruct source locations is useful!

This was a fun paper to write [bonus note]. I’m sure it will be valuable, both for showing how to perform this type of reconstruction of a multi-dimensional probability density, and for its implications for source localization and follow-up of gravitational-wave signals. I go into details of both below, first discussing our statistical model (this is a bit technical), then looking at our results for a set of binary neutron stars (which have implications for hunting for counterparts).

Dirichlet process Gaussian mixture model

When we analyse gravitational-wave data to infer the source properties (location, masses, etc.), we map out parameter space with a set of samples: a list of points in the parameter space, with more samples around more probable locations and fewer in less probable locations. These samples encode everything about the probability distribution for the different parameters; we just need to extract it…

For our application, we want a nice smooth probability density. How do we convert a bunch of discrete samples to a smooth distribution? The simplest thing is to bin the samples. However, picking the right bin size is difficult, and becomes much harder in higher dimensions. Another popular option is to use kernel density estimation. This is better at ensuring smooth results, but you now have to worry about the size of your kernels.
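To make this concrete, here is a minimal sketch (Python with numpy and scipy; my own illustration, not code from the paper) of the two standard approaches. Note how each needs a tuning choice made by hand:

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
samples = rng.normal(loc=0.0, scale=1.0, size=1000)  # stand-in posterior samples

# Option 1: binning; the answer depends on the bin width we pick
density_hist, edges = np.histogram(samples, bins=30, density=True)

# Option 2: kernel density estimation; the answer depends on the bandwidth
kde = gaussian_kde(samples, bw_method=0.2)  # omit bw_method for a rule of thumb
grid = np.linspace(-4.0, 4.0, 200)
density_kde = kde(grid)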

Our approach is in essence to use a kernel density estimate, but to learn the size and position of the kernels (as well as the number) from the data as an extra layer of inference. The “Gaussian mixture model” part of the name refers to the kernels—we use several different Gaussians. The “Dirichlet process” part refers to how we assign their properties (their means and standard deviations). What I really like about this technique, as opposed to the usual rule-of-thumb approaches used for kernel density estimation, is that it is well justified from a theoretical point of view.

I hadn’t come across a Dirichlet process before. Section 2 of the paper is a walkthrough of how I built up an understanding of this mathematical object, and it contains lots of helpful references if you’d like to dig deeper.

In our application, you can think of the Dirichlet process as being a probability distribution for probability distributions. We want a probability distribution describing the source location. Given our samples, we infer what this looks like. We could put all the probability into one big Gaussian, or we could put it into lots of little Gaussians. The Gaussians could be wide or narrow or a mix. The Dirichlet distribution allows us to assign probabilities to each configuration of Gaussians; for example, if our samples are all in the northern hemisphere, we probably want Gaussians centred around there, rather than in the southern hemisphere.

With the resulting probability distribution for the source location, we can quickly evaluate it at a single point. This means we can rapidly produce a list of most probable source galaxies—extremely handy if you need to know where to point a telescope before a kilonova fades away (or someone else finds it).
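To give a rough flavour of how this works in practice, here is a sketch using scikit-learn’s BayesianGaussianMixture, which implements a variational approximation to a Dirichlet process mixture (the paper uses its own inference code; every number below is made up for illustration):

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Stand-in for posterior samples of the source's position (x, y, z in Mpc)
xyz = rng.normal(loc=[100.0, -50.0, 20.0], scale=15.0, size=(5000, 3))

# The Dirichlet process lets the data decide how many Gaussians to use,
# and where and how wide they are (up to the n_components cap)
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(xyz)

# The result is a smooth density that is cheap to evaluate at a point,
# for example at the position of a candidate host galaxy
galaxy_position = np.array([[105.0, -48.0, 22.0]])
log_density = dpgmm.score_samples(galaxy_position)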

Gravitational-wave localization

To verify our technique works, and to develop an intuition for three-dimensional localizations, we studied a set of simulated binary neutron star signals created for the First 2 Years trilogy of papers. This data set is now well studied; it illustrates performance in what we anticipated to be the first two observing runs of the advanced detectors, which turned out to be not too far from the truth. We have previously looked at three-dimensional localizations for these signals using a super rapid approximation.

The plots below show how well we could localise our simulated binary neutron star sources. Specifically, the plots show the size of the volume which has a 90% probability of containing the source versus the signal-to-noise ratio (the loudness) of the signal. Typically, volumes are 10^4–10^5~\mathrm{Mpc}^3, which is about 10^{68}–10^{69} Olympic swimming pools. Such a volume would contain something like 100–1000 galaxies.

Volume versus signal-to-noise ratio

Localization volume as a function of signal-to-noise ratio. The top panel shows results for two-detector observations: the LIGO-Hanford and LIGO-Livingston (HL) network similar to that in the first observing run, and the LIGO and Virgo (HLV) network similar to that in the second observing run. The bottom panel shows all observations for the HLV network, including those with all three detectors, which are colour coded by the fraction of the total signal-to-noise ratio from Virgo. In both panels, there are fiducial lines scaling inversely with the sixth power of the signal-to-noise ratio. Adapted from Fig. 4 of Del Pozzo et al. (2018).

Looking at the results in detail, we can learn a number of things:

  1. The localization volume is roughly inversely proportional to the sixth power of the signal-to-noise ratio [bonus note]. Loud signals are localized much better than quieter ones!
  2. The localization dramatically improves when we have three-detector observations. The extra detector improves the sky localization, which reduces the localization volume.
  3. To get the benefit of the extra detector, the source needs to be close enough that all the detectors could get a decent amount of the signal-to-noise ratio. In our case, Virgo is the least sensitive, and we see the best localizations are when it has a fair share of the signal-to-noise ratio.
  4. Considering the cases where we only have two detectors, localization volumes get bigger at a given signal-to-noise ratio as the detectors get more sensitive. This is because we can detect sources at greater distances.

Putting all these bits together, I think in the future, when we have lots of detections, it would make most sense to prioritise following up the loudest signals. These are the best localised, and will also be the brightest since they are the closest, meaning there’s the greatest potential for actually finding a counterpart. As the sensitivity of the detectors improves, it’s only going to get more difficult to find a counterpart to a typical gravitational-wave signal, as sources will be further away and less well localized. However, having more sensitive detectors also means that we are more likely to have a really loud signal, which should be really well localized.

Banana vs cucumber

Left: Localization (yellow) with a network of two low-sensitivity detectors. The sky location is uncertain, but we know the source must be nearby. Right: Localization (green) with a network of three high-sensitivity detectors. We have good constraints on the source location, but it could now be at a much greater range of distances. Not to scale.

Using our localization volumes as a guide, you would only need to search one galaxy to find the true source in about 7% of cases with a three-detector network similar to that at the end of our second observing run. Similarly, only ten galaxies would need to be searched in 23% of cases. It might be possible to get even better performance by considering which galaxies are most probable because they are the biggest or the most likely to produce merging binary neutron stars. This is definitely a good approach to follow.
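Continuing the sketch from the Dirichlet process section above (reusing the same rng and fitted dpgmm, with a purely hypothetical catalogue), ranking galaxies only takes a few lines:

# Toy catalogue of galaxy positions (Mpc); illustrative stand-ins only
galaxy_xyz = rng.normal(loc=[100.0, -50.0, 20.0], scale=30.0, size=(200, 3))

# Evaluate the fitted density at every galaxy and sort, most probable first
log_p = dpgmm.score_samples(galaxy_xyz)
ranking = np.argsort(log_p)[::-1]
top_ten = ranking[:10]  # the first places to point a telescope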

Three-dimensional localization with galaxy catalogue

Galaxies within the 90% credible volume of an example simulated source, colour coded by probability. The galaxies are from the GLADE Catalog; incompleteness in the plane of the Milky Way causes the missing wedge of galaxies. The true source location is marked by a cross [bonus note]. Part of Figure 5 of Del Pozzo et al. (2018).

arXiv: 1801.08009 [astro-ph.IM]
Journal: Monthly Notices of the Royal Astronomical Society; 479(1):601–614; 2018
Code: 3d_volume
Buzzword bingo: Interdisciplinary (we worked with computer scientist Tom Haines); machine learning (the inference involving our Dirichlet process Gaussian mixture model); multimessenger astronomy (as our results are useful for following up gravitational-wave signals in the search for counterparts)

Bonus notes

Writing

We started writing this paper back before the first observing run of Advanced LIGO. We had a pretty complete draft on Friday 11 September 2015. We just needed to gather together a few extra numbers and polish up the figures and we’d be done! At 10:50 am on Monday 14 September 2015, we made our first detection of gravitational waves. The paper was put on hold. The pace of discoveries over the coming years meant we never quite found enough time to get it together—I’ve rewritten the introduction a dozen times. This is a shame, as it meant that this study came out much later than our other three-dimensional localization study, so it’s extremely satisfying to finally have it done. The delay has the advantage of justifying one of my favourite acknowledgement sections.

Sixth power

We find that the localization volume \Delta V is inversely proportional to the sixth power of the signal-to-noise ratio \varrho. This is what you would expect. The localization volume depends upon the angular uncertainty on the sky \Delta \Omega, the distance to the source D, and the distance uncertainty \Delta D,

\Delta V \sim D^2 \Delta \Omega \Delta D.

Typically, the uncertainty on a parameter (like the masses) scales inversely with the signal-to-noise ratio. This is the case for the logarithm of the distance, which means

\displaystyle \frac{\Delta D}{D} \propto \varrho^{-1}.

The uncertainty in the sky location (being two dimensional) scales inversely with the square of the signal-to-noise ratio,

\Delta \Omega \propto \varrho^{-2}.

The signal-to-noise ratio itself is inversely proportional to the distance to the source (sources further away are quieter), so D \propto \varrho^{-1}. Putting everything together, D^2 \propto \varrho^{-2}, \Delta \Omega \propto \varrho^{-2} and \Delta D = D (\Delta D/D) \propto \varrho^{-2}, which gives

\Delta V \propto \varrho^{-6}.

Treasure

We all know that treasure is marked by a cross. In the case of a binary neutron star merger, dense material ejected from the neutron stars will decay to heavy elements like gold and platinum, so there is definitely a lot of treasure at the source location.

Going the distance: Mapping host galaxies of LIGO and Virgo sources in three dimensions using local cosmography and targeted follow-up

GW150914 claimed the title of many firsts—it was the first direct observation of gravitational waves, the first observation of a binary black hole system, the first observation of two black holes merging, the first time we’ve tested general relativity in such extreme conditions… However, there are still many firsts for gravitational-wave astronomy yet to come (hopefully, some to be accompanied by cake). One of the most sought after is the first signal to have a clear electromagnetic counterpart—a glow in some part of the spectrum of light (from radio to gamma-rays) that we can observe with telescopes.

Identifying a counterpart is challenging, as it is difficult to accurately localise a gravitational-wave source; electromagnetic observers must cover a large area of sky before any counterparts fade. Then, if something is found, it can be hard to determine if that is from the same source as the gravitational waves, or something else…

To help the search, it helps to have as much information as possible about the source. Especially useful is the distance to the source. This can help you plan where to look. For nearby sources, you can cross-reference with galaxy catalogues, and perhaps pick out the biggest galaxies as the most likely locations for the source [bonus note]. Distance can also help plan your observations: you might want to start with regions of the sky where the source would be closer and so easiest to spot, or you may want to prioritise points where it is further and so you’d need to observe longer to detect it (I’m not sure there’s a best strategy, it depends on the telescope and the amount of observing time available). In this paper we describe a method to provide easy-to-use distance information, which could be supplied to observers to help their search for a counterpart.

Going the distance

This work is the first spin-off from the First 2 Years trilogy of papers, which looked at sky localization and parameter estimation for binary neutron stars in the first two observing runs of the advanced-detector era. Binary neutron star coalescences are prime candidates for electromagnetic counterparts as we think there should be an explosion as they merge. I was heavily involved in the last two papers of the trilogy, but this study was led by Leo Singer: I think I mostly annoyed Leo by being a stickler when it came to writing up the results.

3D localization with the two LIGO detectors

Three-dimensional localization showing the 20%, 50%, and 90% credible levels for a typical two-detector early Advanced LIGO event. The Earth is shown at the centre, marked by \oplus. The true location is marked by the cross. Leo poetically described this as looking like the seeds of the jacaranda tree, and less poetically as potato chips. Figure 1 of Singer et al. (2016).

The idea is to provide a convenient means of sharing a 3D localization for a gravitational-wave source. The full probability distribution is rather complicated, but it can be made more manageable if you break it up into pixels on the sky. Since astronomers need to decide where to point their telescopes, breaking up the 3D information along different lines of sight should be useful for them.

Each pixel covers a small region of the sky, and along each line of sight, the probability distribution for distance D can be approximated using an ansatz

\displaystyle p(D|\mathrm{data}) \propto D^2\exp\left[-\frac{(D - \mu)^2}{2\sigma^2}\right],

where \mu and \sigma are calculated for each pixel individually. The form of this ansatz can be understood from the posterior probability distribution being proportional to the product of the prior and the likelihood. Our prior is that sources are uniformly distributed in volume, which gives the factor of D^2, and the likelihood can often be well approximated as a Gaussian distribution, which gives the other piece [bonus note].
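As a small numerical sketch (plain numpy, not the Collaboration’s implementation; the \mu and \sigma values are invented), the ansatz along one line of sight can be evaluated and normalized like this:

import numpy as np

def distance_ansatz(D, mu, sigma):
    # Unnormalized ansatz: p(D|data) is proportional to
    # D^2 exp[-(D - mu)^2 / (2 sigma^2)]
    return D**2 * np.exp(-((D - mu) ** 2) / (2.0 * sigma**2))

mu, sigma = 180.0, 40.0  # per-pixel parameters in Mpc (illustrative values)
D = np.linspace(0.0, 500.0, 2000)
p = distance_ansatz(D, mu, sigma)
p /= np.trapz(p, D)  # normalize numerically along this line of sight

# For example, the probability the source lies between 150 and 250 Mpc
mask = (D >= 150.0) & (D <= 250.0)
prob = np.trapz(p[mask], D[mask])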

The ansatz doesn’t always fit perfectly, but it performs well on average. Considering the catalogue of binary neutron star signals used in the earlier papers, we find that roughly 50% of the time sources are found within the 50% credible volume, 90% are found in the 90% volume, etc. We looked at a more sophisticated means of constructing the localization volume in a companion paper.
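That 50%-within-50% check is the standard probability–probability (P–P) test; schematically (assuming you can compute, per event, the credible level at which the true source is found), it looks like this:

import numpy as np

# One number per simulated event: the credible level at which the true
# source location is found (these five values are made up for illustration)
levels = np.array([0.12, 0.47, 0.83, 0.31, 0.95])

# Self-consistency: ~50% of true sources should lie inside the 50% credible
# volume, ~90% inside the 90% volume, and so on
for p in (0.5, 0.9):
    print(f"within the {p:.0%} credible volume: {np.mean(levels <= p):.0%} of sources")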

The 3D localization is easy to calculate, and Leo has worked out a cunning way to evaluate the ansatz with BAYESTAR, our rapid sky-localization code, meaning that we can produce it on minute time-scales. This means that observers should have something to work with straight away, even if we’ll need to wait a while for the full, final results. We hope that this will improve prospects for finding counterparts—some potential examples are sketched out in the penultimate section of the paper.

If you are interested in trying out the 3D information, there is a data release and the supplement contains a handy Python tutorial. We are hoping that the Collaboration will use the format for alerts for LIGO and Virgo’s upcoming observing run (O2).

arXiv: 1603.07333 [astro-ph.HE]; 1605.04242 [astro-ph.IM]
Journal: Astrophysical Journal Letters; 829(1):L15(7); 2016; Astrophysical Journal Supplement Series; 226(1):10(8); 2016
Data release: Going the distance
Favourite crisp flavour: Salt & vinegar
Favourite jacaranda: Jacaranda mimosifolia

Bonus notes

Catalogue shopping

The Event’s source has a luminosity distance of around 250–570 Mpc. This is sufficiently distant that galaxy catalogues are incomplete and not much use when it comes to searching. GW151226 and LVT151012 have similar problems, being at around the same distance or even further.

The gravitational-wave likelihood

For the professionals interested in understanding more about the shape of the likelihood, I’d recommend Cutler & Flanagan (1994). This is a fantastic paper which contains many clever things [bonus bonus note]. This work is really the foundation of gravitational-wave parameter estimation. From it, you can see how the likelihood can be approximated as a Gaussian. The uncertainty can then be evaluated using Fisher matrices. Many studies have been done using Fisher matrices, but it is important to check that this is a valid approximation, as nicely explained in Vallisneri (2008). I ran into a case where it wasn’t during my PhD.

Mergin’

As a reminder that smart people make mistakes, Cutler & Flanagan have a typo in the title of the arXiv posting of their paper. This is probably the most important thing to take away from this paper.

Parameter estimation on gravitational waves from neutron-star binaries with spinning components

In gravitational-wave astronomy, some parameters are easier to measure than others. We are sensitive to properties which change the form of the wave, but sometimes the effect of changing one parameter can be compensated by changing another. We call this a degeneracy. In signals for coalescing binaries (two black holes or neutron stars inspiralling together), there is a degeneracy between the masses and spins. In this recently published paper, we look at what this means for observing binary neutron star systems.

History

This paper has been something of an albatross, and I’m extremely pleased that we finally got it published. I started working on it when I began my post-doc at Birmingham in 2013. Back then I was sharing an office with Ben Farr, and together with others in the Parameter Estimation Group, we were thinking about the prospect of observing binary neutron star signals (which we naively thought were the most likely) in LIGO’s first observing run.

One reason that this work took so long is that binary neutron star signals can be computationally expensive to analyse [bonus note]. The signal slowly chirps up in frequency, and can take up to a minute to sweep through the range of frequencies LIGO is sensitive to. That gives us a lot of gravitational wave to analyse. (For comparison, GW150914 lasted 0.2 seconds). We need to calculate waveforms to match to the observed signals, and these can be especially complicated when accounting for the effects of spin.

A second reason is that shortly after submitting the paper in August 2015, we got a little distracted…

This paper was the third of a trilogy looking at measuring the properties of binary neutron stars. I’ve written about the previous instalment before. We knew that getting the final results for binary neutron stars, including all the important effects like spin, would take a long time, so we planned to follow up any detections in stages. A probable sky location can be computed quickly, then we can have a first try at estimating other parameters like masses using waveforms that don’t include spin, then we go for the full results with spin. The quicker results would be useful for astronomers trying to find any explosions that coincided with the merger of the two neutron stars. The first two papers looked at results from the quicker analyses (especially at sky localization); in this one we check what effect neglecting spin has on measurements.

What we did

We analysed a population of 250 binary neutron star signals (these are the same as the ones used in the first paper of the trilogy). We used what was our best guess for the sensitivity of the two LIGO detectors in the first observing run (which was about right).

The simulated neutron stars all have small spins of less than 0.05 (where 0 is no spin, and 1 would be the maximum spin of a black hole). We expect neutron stars in these binaries to have spins in about this range. The maximum observed spin (for a neutron star not in a binary neutron star system) is around 0.4, and we think neutron stars should break apart for spins of 0.7. However, since we want to keep an open mind regarding neutron stars, when measuring spins we considered spins all the way up to 1.

What we found

Our results clearly showed the effect of the mass–spin degeneracy. The degeneracy increases the uncertainty for both the spins and the masses.

Even though the true spins are low, we find that across the 250 events, the median 90% upper limit on the spin of the more massive (primary) neutron star is 0.70, and the 90% limit on the less massive (secondary) neutron star is 0.86. We learn practically nothing about the spin of the secondary, but a little more about the spin of the primary, which is more important for the inspiral. Measuring spins is hard.

The effect of the mass–spin degeneracy for mass measurements is shown in the plot below. Here we show a random selection of events. The banana-shaped curves are the 90% probability intervals. They are narrow because we can measure a particular combination of masses, the chirp mass, really well. The mass–spin degeneracy determines how long the banana is. If we restrict the range of spins, we explore less of the banana (and potentially introduce an offset in our results).

Neutron star mass distributions

Rough outlines of 90% credible regions for component masses for a random assortment of signals. The circles show the true values. The coloured lines indicate the extent of the distribution with different limits on the spins. The grey area is excluded by our convention on masses m_1 \geq m_2. Figure 5 from Farr et al. (2016).

Although you can’t see it in the plot above, including spin also increases the uncertainty in the chirp mass. The plots below show the standard deviation (a measure of the width of the posterior probability distribution), divided by the mean for several mass parameters. This gives a measure of the fractional uncertainty in our measurements. We show the chirp mass \mathcal{M}_\mathrm{c}, the mass ratio q = m_2/m_1 and the total mass M = m_1 + m_2, where m_1 and m_2 are the masses of the primary and secondary neutron stars respectively. The uncertainties are small for louder signals (higher signal-to-noise ratio). If we neglect the spin, the true chirp mass can lie outside the posterior distribution: on average it is about 5 standard deviations from the mean. If we include spin, the offset is just 0.7 standard deviations (there’s still some offset as we’re allowing for spins all the way up to 1).

Mass measurements for binary neutron stars with and without spin

Fractional statistical uncertainties in chirp mass (top), mass ratio (middle) and total mass (bottom) estimates as a function of network signal-to-noise ratio for both the fully spinning analysis and the quicker non-spinning analysis. The lines indicate approximate power-law trends to guide the eye. Figure 2 of Farr et al. (2016).

We need to allow for spins when measuring binary neutron star masses in order to explore the possible range of masses.

Sky localization and distance, however, are not affected by the spins here. This might not be the case for sources which are more rapidly spinning, but assuming that binary neutron stars do have low spin, we are safe using the easier-to-calculate results. This is good news for astronomers who need to know promptly where to look for explosions.

arXiv: 1508.05336 [astro-ph.HE]
Journal: Astrophysical Journal; 825(2):116(10); 2016
Authorea [bonus note]: Parameter estimation on gravitational waves from neutron-star binaries with spinning components
Conference proceedings: Early Advanced LIGO binary neutron-star sky localization and parameter estimation
Favourite albatross: Wilbur

Bonus notes

How long?

The plot below shows how long it took to analyse each of the binary neutron star signals.

Run time for different analyses of binary neutron stars

Distribution of run times for binary neutron star signals. Low-latency sky localization is done with BAYESTAR; medium-latency non-spinning parameter estimation is done with LALInference and TaylorF2 waveforms, and high-latency fully spinning parameter estimation is done with LALInference and SpinTaylorT4 waveforms. The LALInference results are for 2000 posterior samples. Figure 9 from Farr et al. (2016).

BAYESTAR provides a rapid sky localization, taking less than ten seconds. This is handy for astronomers who want to catch a flash caused by the merger before it fades.

Estimates for the other parameters are computed with LALInference. How long this takes to run depends on which waveform you are using and how many samples from the posterior probability distribution you want (the more you have, the better you can map out the shape of the distribution). Here we show times for 2000 samples, which is enough to get a rough idea (we collected ten times more for GW150914 and friends). Collecting twice as many samples takes (roughly) twice as long. Prompt results can be obtained with a waveform that doesn’t include spin (TaylorF2); these take about a day at most.

For this work, we considered results using a waveform which included the full effects of spin (SpinTaylorT4). These take about twenty times longer than the non-spinning analyses. The maximum time was 172 days. I have a strong suspicion that the computing time cost more than my salary.

Gravitational-wave arts and crafts

Waiting for LALInference runs to finish gives you some time to practise hobbies. This is a globe knitted by Hannah. The two LIGO sites are marked in red, and a typical gravitational-wave sky localization is stitched on.

In order to get these results, we had to add check-pointing to our code, so we could stop it and restart it; we encountered a new type of error in the software which manages jobs running on our clusters, and Hannah Middleton and I got several angry emails from cluster admins (who are wonderful people) for having too many jobs running.

In comparison, analysing GW150914, LVT151012 and GW151226 was a breeze. Grudgingly, I have to admit that getting everything sorted out for this study made us reasonably well prepared for the real thing. Although, I’m not looking forward to that first binary neutron star signal…

Authorea

Authorea is an online collaborative writing service. It allows people to work together on documents, editing text, adding comments, and chatting with each other. By the time we came to write up the paper, Ben was no longer in Birmingham, and many of our coauthors are scattered across the globe. Ben thought Authorea might be useful for putting together the paper.

Writing was easy, and the ability to add comments on the text was handy for getting feedback from coauthors. The chat was good for quickly sorting out issues like plots. Overall, I was quite pleased, up to the point we wanted to get the final document. Extracting a nicely formatted PDF was awkward. For this I switched to using the GitHub back-end. On reflection, a simple git repo, plus a couple of Skype calls, might have been a smoother way of writing, at least for a standard journal article.

Authorea promises to be an open way of producing documents, and allows for others to comment on papers. I don’t know if anyone’s looked at our Authorea article. For astrophysics, most people use the arXiv, which is free to everyone, and I’m not sure if there’s enough appetite for interaction (beyond the occasional email to authors) to motivate people to look elsewhere. At least, not yet.

In conclusion, I think Authorea is a nice idea, and I would try out similar collaborative online writing tools again, but I don’t think I can give it a strong recommendation for your next paper unless you have a particular idea in mind of how to make the most of it.

Parameter estimation for binary neutron-star coalescences with realistic noise during the Advanced LIGO era

The first observing run (O1) of Advanced LIGO is nearly here, and with it the prospect of the first direct detection of gravitational waves. That’s all wonderful and exciting (far more exciting than a custard cream or even a chocolate digestive), but there’s a lot to be done to get everything ready. Aside from remembering to vacuum the interferometer tubes and polish the mirrors, we need to see how the data analysis will work out. After all, having put so much effort into the detector, it would be a shame if we couldn’t do any science with it!

Parameter estimation

Since joining the University of Birmingham team, I’ve been busy working on trying to figure out how well we can measure things using gravitational waves. I’ve been looking at binary neutron star systems. We expect binary neutron star mergers to be the main source of signals for Advanced LIGO. We’d like to estimate how massive the neutron stars are, how fast they’re spinning, how far away they are, and where in the sky they are. Just published is my first paper on how well we should be able to measure things. This took a lot of hard work from a lot of people, so I’m pleased it’s all done. I think I’ve earnt a celebratory biscuit. Or two.

When we see something that looks like it could be a gravitational wave, we run code to analyse the data and try to work out the properties of the signal. Working out some properties is a bit trickier than others. Sadly, we don’t have an infinite number of computers, so it means it can take a while to get results. Much longer than the time to eat a packet of Jaffa Cakes…

The fastest algorithm we have for binary neutron stars is BAYESTAR. This takes the same time as maybe eating one chocolate finger. Perhaps two, if you’re not worried about the possibility of choking. BAYESTAR is fast as it only estimates where the source is coming from. It doesn’t try to calculate a gravitational-wave signal and match it to the detector measurements; instead it just looks at numbers produced by the detection pipeline—the code that monitors the detectors and automatically flags whenever something interesting appears. As far as I can tell, you give BAYESTAR this information and a fresh cup of really hot tea, and it uses Bayes’ theorem to work out how likely it is that the signal came from each patch of the sky.

To work out further details, we need to know what a gravitational-wave signal looks like and then match this to the data. This is done using a different algorithm, which I’ll refer to as LALInference. (As names go, this isn’t as cool as SKYNET). This explores parameter space (hopping between different masses, distances, orientations, etc.), calculating waveforms and then working out how well they match the data, or rather how likely it is that we’d get just the right noise in the detector to make the waveform fit what we observed. We then use another liberal helping of Bayes’ theorem to work out how probable those particular parameter values are.
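LALInference uses far more sophisticated samplers, but the core loop is the same as in this toy Metropolis–Hastings sketch, where the one-parameter “waveform” and all the numbers are invented purely for illustration:

import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta, data):
    # Toy stand-in: how likely is the noise needed to make this "waveform" fit?
    model = np.sin(theta * np.linspace(0.0, 2.0 * np.pi, data.size))
    return -0.5 * np.sum((data - model) ** 2)

# Fake "detector data": a signal with theta = 3 plus Gaussian noise
data = np.sin(3.0 * np.linspace(0.0, 2.0 * np.pi, 100)) + rng.normal(0.0, 0.5, 100)

theta, samples = 1.0, []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, 0.1)  # hop to a nearby point
    if np.log(rng.uniform()) < log_likelihood(proposal, data) - log_likelihood(theta, data):
        theta = proposal  # accept (a flat prior is assumed here)
    samples.append(theta)
# More samples pile up where the waveform matches the data well: the posterior.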

It’s rather difficult to work out the waveforms, but some are easier than others. One of the things that makes things trickier is adding in the spins of the neutron stars. If you made a batch of biscuits at the same time you started a LALInference run, they’d still be good by the time a non-spinning run finished. With a spinning run, the biscuits might not be quite so appetising—I generally prefer more chocolate than penicillin on my biscuits. We’re working on speeding things up (if only to prevent increased antibiotic resistance).

In this paper, we were interested in what you could work out quickly, while there’s still chance to catch any explosion that might accompany the merging of the neutron stars. We think that short gamma-ray bursts and kilonovae might be caused when neutron stars merge and collapse down to a black hole. (I find it mildly worrying that we don’t know what causes these massive explosions). To follow-up on a gravitational-wave detection, you need to be able to tell telescopes where to point to see something and manage this while there’s still something that’s worth seeing. This means that using spinning waveforms in LALInference is right out; we just use BAYESTAR and the non-spinning LALInference analysis.

What we did

To figure out what we could learn from binary neutron stars, we generated a large catalogue of fake signals, and then ran the detection and parameter-estimation codes on this to see how they worked. This has been done before in The First Two Years of Electromagnetic Follow-Up with Advanced LIGO and Virgo, which has a rather delicious astrobites write-up. Our paper is the sequel to this (and features most of the same cast). One of the differences is that The First Two Years assumed that the detectors were perfectly behaved and had lovely Gaussian noise. In this paper, we added in some glitches. We took some real data™ from initial LIGO’s sixth science run and stretched this so that it matches the sensitivity Advanced LIGO is expected to have in O1. This process is called recolouring [bonus note]. We now have fake signals hidden inside noise with realistic imperfections, and can treat it exactly as we would real data. We ran it through the detection pipeline, and anything which was flagged as probably being a signal (we used a false alarm rate of once per century) was analysed with the parameter-estimation codes. We looked at how well we could measure the sky location and distance of the source, and the masses of the neutron stars. It’s all good practice for O1, when we’ll be running this analysis on any detections.

What we found

  1. The flavour of noise (recoloured or Gaussian) makes no difference to how well we can measure things on average.
  2. Sky-localization in O1 isn’t great, typically hundreds of square degrees (the median 90% credible region is 632~\mathrm{deg^2}); for comparison, the Moon is about a fifth of a square degree. This’ll make things interesting for the people with telescopes.

    Sky localization map for O1.

    Probability of a gravitational-wave signal coming from different points on the sky. The darker the red, the higher the probability. The star indicates the true location. This is one of the worst localized events from our study for O1. You can find more maps in the data release (including 3D versions); this is Figure 6 of Berry et al. (2015).

  3. BAYESTAR does just as well as LALInference, despite being about 2000 times faster.

    Sky localization for binary neutron stars during O1.

    Sky localization (the size of the patch of the sky that we’re 90% sure contains the source location) varies with the signal-to-noise ratio (how loud the signal is). The approximate best fit is \log_{10}(\mathrm{CR}_{0.9}/\mathrm{deg^2}) \approx -2 \log_{10}(\varrho) +5.06, where \mathrm{CR}_{0.9} is the 90% sky area and \varrho is the signal-to-noise ratio. The results for BAYESTAR and LALInference agree, as do the results with Gaussian and recoloured noise. This is Figure 9 of Berry et al. (2015).

  4. We can’t measure the distance too well: the median 90% credible interval divided by the true distance (which gives something like twice the fractional error) is 0.85.
  5. Because we don’t include the spins of the neutron stars, we introduce some error into our mass measurements. The chirp mass, a combination of the individual masses that we’re most sensitive to [bonus note], is still reliably measured (the median offset is 0.0026 of the mass of the Sun, which is tiny), but we’ll have to wait for the full spinning analysis for individual masses.

    Mean offset in chirp-mass estimates when not including the effects of spin.

    Fraction of events with difference between the mean estimated and true chirp mass smaller than a given value. There is an error because we are not including the effects of spin, but this is small. Again, the type of noise makes little difference. This is Figure 15 of Berry et al. (2015).

There’s still some work to be done before O1, as we need to finish up the analysis with waveforms that include spin. In the mean time, our results are all available online for anyone to play with.

arXiv: 1411.6934 [astro-ph.HE]
Journal: Astrophysical Journal; 804(2):114(24); 2015
Data release: The First Two Years of Electromagnetic Follow-Up with Advanced LIGO and Virgo
Favourite colour: Blue. No, yellow…

Notes

The colour of noise: Noise is called white if it doesn’t have any frequency dependence. We made ours by taking some noise with initial LIGO’s frequency dependence (coloured noise), removing the frequency dependence (making it white), and then adding in the frequency dependence of Advanced LIGO (recolouring it).
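Schematically, recolouring is a whiten-then-colour operation in the frequency domain. Here is a bare-bones numpy sketch (real recolouring is much more careful about windowing and spectral estimation; asd_old and asd_new stand in for the initial and Advanced LIGO amplitude spectral densities, which you would need to supply):

import numpy as np

def recolour(strain, asd_old, asd_new, dt):
    # asd_old and asd_new are callables returning (positive) amplitude
    # spectral densities as a function of frequency
    freqs = np.fft.rfftfreq(strain.size, dt)
    strain_f = np.fft.rfft(strain)
    white_f = strain_f / asd_old(freqs)      # remove the old frequency dependence
    recoloured_f = white_f * asd_new(freqs)  # impose the new one
    return np.fft.irfft(recoloured_f, n=strain.size)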

The chirp mass: Gravitational waves from a binary system depend upon the masses of the components, which we’ll call m_1 and m_2. The chirp mass is a combination of these that we can measure really well, as it determines the most significant parts of the shape of the gravitational wave. It’s given by

\displaystyle \mathcal{M} = \frac{m_1^{3/5} m_2^{3/5}}{(m_1 + m_2)^{1/5}}.

We get lots of good information on the chirp mass; unfortunately, this isn’t too useful for turning back into the individual masses. For that we need extra information, for example the mass ratio m_2/m_1. We can get this from less dominant parts of the waveform, but it’s not typically measured as precisely as the chirp mass, so we’re often left with big uncertainties.
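To see how the pieces combine, here is a quick sketch inverting the chirp mass and mass ratio to recover the component masses (the chirp-mass value is just an example):

import numpy as np

def component_masses(chirp_mass, q):
    # Invert M_c and q = m2/m1 (with q <= 1) for the component masses
    m1 = chirp_mass * (1.0 + q) ** 0.2 / q**0.6
    return m1, q * m1

# A precisely measured chirp mass with an uncertain mass ratio still leaves
# a range of possible component masses (all in solar masses)
for q in (1.0, 0.7):
    m1, m2 = component_masses(1.2, q)
    print(f"q = {q}: m1 = {m1:.2f}, m2 = {m2:.2f}")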