Observing run 1—The papers

The second observing run (O2) of the advanced gravitational wave detectors is now over, which has reminded me how dreadfully behind I am in writing about papers. In this post I’ll summarise results from our first observing run (O1), which ran from September 2015 to January 2016.

I’ll add to this post as I get time, and as papers are published. I’ve started off with just papers searching for compact binary coalescences (as these are closest to my own research). There are separate posts on our detections GW150914 (and its follow-up papers: set I, set II) and GW151226 (this post includes our end-of-run summary of the search for binary black holes, including details of LVT151012).

Transient searches

The O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Title: Upper limits on the rates of binary neutron star and neutron-star–black-hole mergers from Advanced LIGO’s first observing run
arXiv: 1607.07456 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 832(2):L21(15); 2016

Our main search for compact binary coalescences targets binary black holes (binaries of two black holes), binary neutron stars (two neutron stars) and neutron-star–black-hole binaries (one of each). Having announced the results of our search for binary black holes, this paper gives the details of the rest. Since we didn’t make any detections, we set some new, stricter upper limits on their merger rates. For binary neutron stars, this is 12,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}.

More details: O1 Binary Neutron Star/Neutron Star–Black Hole Paper summary

The O1 Gamma-Ray Burst Paper

Title: Search for gravitational waves associated with gamma-ray bursts during the first Advanced LIGO observing run and implications for the origin of GRB 150906B
arXiv: 1611.07947 [astro-ph.HE]
Journal: Astrophysical Journal; 841(2):89(18); 2017
LIGO science summary: What’s behind the mysterious gamma-ray bursts? LIGO’s search for clues to their origins

Some binary neutron star or neutron-star–black-hole mergers may be accompanied by a gamma-ray burst. This paper describes our search for signals coinciding with observations of gamma-ray bursts (including GRB 150906B, which was potentially especially close by). Knowing when to look makes it easier to distinguish a signal from noise. We don’t find anything, so we can exclude any close binary mergers as sources of these gamma-ray bursts.

More details: O1 Gamma-Ray Burst Paper summary

The O1 Intermediate Mass Black Hole Binary Paper

Title: Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO
arXiv: 1704.04628 [gr-qc]
Journal: Physical Review D; 96(2):022001(14); 2017
LIGO science summary: Search for mergers of intermediate-mass black holes

Our main search for binary black holes in O1 targeted systems with masses less than about 100 solar masses. There could be more massive black holes out there. Our detectors are sensitive to signals from binaries up to a few hundred solar masses, but these are difficult to detect because they are so short. This paper describes our search specially designed for such systems. It combines techniques which use waveform templates and those which look for unmodelled transients (bursts). Since we don’t find anything, we set some new upper limits on merger rates.

More details: O1 Intermediate Mass Black Hole Binary Paper summary

The O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Synopsis: O1 Binary Neutron Star/Neutron Star–Black Hole Paper
Read this if: You want a change from black holes
Favourite part: We’re getting closer to detection (and it’ll still be interesting if we don’t find anything)

The Compact Binary Coalescence (CBC) group target gravitational waves from three different flavours of binary in our main search: binary neutron stars, neutron star–black hole binaries and binary black holes. Before O1, I would have put my money on us detecting a binary neutron star first, around-about O3. Reality had other ideas, and we discovered binary black holes. Those results were reported in the O1 Binary Black Hole Paper; this paper goes into our results for the others (which we didn’t detect).

To search for signals from compact binaries, we use a bank of gravitational wave signals to match against the data. This bank goes up to total masses of 100 solar masses. We split the bank up, so that objects below 2 solar masses are considered neutron stars. This doesn’t make too much difference to the waveforms we use to search (neutron stars, being made of stuff, can be tidally deformed by their companion, which adds some extra features to the waveform, but we don’t include these in the search). However, we do limit the spins for neutron stars to less than 0.05, as this encloses the range of spins estimated for binary neutron stars from observations of binary pulsars. This choice shouldn’t impact our ability to detect neutron stars with moderate spins too much.

We didn’t find any interesting events: the results were consistent with there just being background noise. If you read really carefully, you might have deduced this already from the O1 Binary Black Hole Paper, as the results from the different types of binaries are completely decoupled. Since we didn’t find anything, we can set some upper limits on the merger rates for binary neutron stars and neutron star–black hole binaries.

The expected number of events found in the search is given by

\Lambda = R \langle VT \rangle

where R is the merger rate, and \langle VT \rangle is the surveyed time–volume (you expect more detections if your detectors are more sensitive, so that they can find signals from further away, or if you leave them on for longer). We can estimate \langle VT \rangle by performing a set of injections and seeing how many are found/missed at a given threshold. Here, we use a false alarm rate of one per century. Given our estimate for \langle VT \rangle and our observation of zero detections, we can calculate a probability distribution for R using Bayes’ theorem. This requires a choice for a prior distribution of \Lambda. We use a uniform prior, for consistency with what we’ve done in the past.
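As a rough illustration of how an injection campaign turns into a \langle VT \rangle estimate, here is a minimal Monte Carlo sketch. All of the numbers (the maximum injection distance, the analysed time, the toy signal-to-noise cut standing in for the once-per-century false alarm rate, and the assumed range) are made up for illustration and are not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical injection campaign: place sources uniformly in volume out to
# some maximum distance, and count how many are recovered above threshold.
# A toy SNR cut stands in for the false-alarm-rate threshold used in the paper.
T_analysed = 48.0 / 365.25   # yr, roughly the coincident analysis time in O1 (an assumption)
d_max = 400.0                # Mpc, maximum injection distance (made up)
n_inj = 100000

# Uniform in volume => distance cubed is uniform.
distances = d_max * rng.random(n_inj) ** (1.0 / 3.0)

# Toy detectability model: SNR falls off as 1/distance; a binary neutron star
# at ~70 Mpc giving SNR 8 is a rough O1-like figure, not the paper's value.
snr = 8.0 * 70.0 / distances
found = snr > 8.0

# Surveyed time-volume in Gpc^3 yr.
V_max = 4.0 / 3.0 * np.pi * (d_max / 1000.0) ** 3   # Gpc^3
VT = V_max * T_analysed * found.mean()
print(f"<VT> ~ {VT:.2e} Gpc^3 yr")
```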

With a uniform prior, the upper limit on the rate at confidence level c is

\displaystyle R_c = \frac{-\ln(1-c)}{\langle VT \rangle},

so the 90% confidence upper limit is R_{90\%} = 2.30/\langle VT \rangle. This is quite commonly used; for example, we make use of it in the O1 Intermediate Mass Black Hole Binary Search. For comparison, if we had used a Jeffreys prior of 1/\sqrt{\Lambda}, the equivalent result is

\displaystyle R_c = \frac{\left[\mathrm{erf}^{-1}(c)\right]^2}{\langle VT \rangle},

and hence R_{90\%} = 1.35/\langle VT \rangle, so results would be the same to within a factor of 2, but the results with the uniform prior are more conservative.
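In code, converting a surveyed time–volume into these limits is a one-liner for each prior. A minimal sketch (the \langle VT \rangle value is an illustrative O1-like number, not one from the paper):

```python
import numpy as np
from scipy.special import erfinv

def rate_upper_limit(vt, c=0.9, prior="uniform"):
    """Upper limit on the merger rate at confidence c, given a surveyed
    time-volume vt (Gpc^3 yr) and zero detections."""
    if prior == "uniform":
        return -np.log(1.0 - c) / vt          # R_c = -ln(1 - c) / <VT>
    elif prior == "jeffreys":
        return erfinv(c) ** 2 / vt            # R_c = [erf^-1(c)]^2 / <VT>
    raise ValueError(prior)

vt = 1.9e-4  # Gpc^3 yr, illustrative value (not from the paper)
print(rate_upper_limit(vt))                    # ~12,000 Gpc^-3 yr^-1 (uniform prior)
print(rate_upper_limit(vt, prior="jeffreys"))  # ~7,100 Gpc^-3 yr^-1
```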

The plot below shows upper limits for different neutron star masses, assuming that neutron star spins are (uniformly distributed) between 0 and 0.05 and isotropically orientated. From our observations of binary pulsars, we have seen that most of these neutron stars have masses of ~1.35 solar masses, so we can also put a limit on the binary neutron star merger rate assuming that their masses are normally distributed with mean of 1.35 solar masses and standard deviation of 0.13 solar masses. This gives an upper limit of R_{90\%} = 12,100~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} for isotropic spins up to 0.05, and R_{90\%} = 12,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} if you allow spins up to 0.4.

Upper merger rate limits for binary neutron stars

90% confidence upper limits on the binary neutron star merger rate. These rates assume randomly orientated spins up to 0.05. Results are calculated using PyCBC, one of our search algorithms; GstLAL gives similar results. Figure 4 of the O1 Binary Neutron Star/Neutron Star–Black Hole Paper

For neutron star–black hole binaries there’s a greater variation in possible merger rates because the black holes can have a greater range of masses and spins. The upper limits range from about R_{90\%} = 1,200~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} to 3,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} for a 1.4 solar mass neutron star and a black hole of between 5 and 30 solar masses, across a range of different spins (Table II of the paper).

It’s not surprising that we didn’t see anything in O1, but what about in future runs? The plots below compare projections for our future sensitivity with various predictions for the merger rates of binary neutron stars and neutron star–black hole binaries. A few things have changed since we made these projections, for example O2 ended up being 9 months instead of 6 months, but I think we’re still somewhere in the O2 band. We’ll have to see for O3. From these, it’s clear that expecting a detection in O1 was overly optimistic. In O2 and O3 it becomes more plausible. This means even if we don’t see anything, we’ll still be doing some interesting astrophysics as we can start ruling out some models.

Comparison of merger rates

Comparison of upper limits for binary neutron star (BNS; top) and neutron star–black hole binary (NSBH; bottom) merger rates with theoretical and observational limits. The blue bars show O1 limits, and the green and orange bars show projections for future observing runs. Figures 6 and 7 from the O1 Binary Neutron Star/Neutron Star–Black Hole Paper.

Binary neutron star or neutron star–black hole mergers may be the sources of gamma-ray bursts. These are some of the most energetic explosions in the Universe, but we’re not sure where they come from (I actually find that kind of worrying). We look at this connection a bit more in the O1 Gamma-Ray Burst Paper. The theory is that during the merger, neutron star matter gets ripped apart, squeezed and heated, and as part of this we get jets blasted outwards from the swirling material. There are always jets in these types of things. We see the gamma-ray burst if we are looking down the jet: the wider the jet, the larger the fraction of gamma-ray bursts we see. By comparing our estimated merger rates with the estimated rate of gamma-ray bursts, we can place some lower limits on the opening angle of the jet. If all gamma-ray bursts come from binary neutron stars, the opening angle needs to be bigger than 2.3_{-1.7}^{+1.7}~\mathrm{deg}, and if they all come from neutron star–black hole mergers the angle needs to be bigger than 4.3_{-1.9}^{+3.1}~\mathrm{deg}.
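The geometric argument can be sketched in a few lines: a double-sided jet with half-opening angle \theta points at us for a fraction 1 - \cos\theta of mergers, so the observed gamma-ray burst rate can’t exceed this fraction of the merger rate. This toy version ignores the uncertainties on both rates (which the paper propagates), and the short gamma-ray burst rate used here is a ballpark figure I’ve assumed, not necessarily the one in the paper.

```python
import numpy as np

# Schematic jet-angle argument: R_GRB = (1 - cos theta) * R_merger, so the
# beaming fraction (and hence the angle) has a minimum if R_merger is capped.
R_grb = 10.0            # Gpc^-3 yr^-1, ballpark observed short GRB rate (assumption)
R_merger_max = 12600.0  # Gpc^-3 yr^-1, the O1 binary neutron star upper limit

beaming_fraction = R_grb / R_merger_max
theta_min = np.degrees(np.arccos(1.0 - beaming_fraction))
print(f"jet half-opening angle > {theta_min:.1f} deg")  # ~2.3 deg
```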

The O1 Gamma-Ray Burst Paper

Synopsis: O1 Gamma-Ray Burst Paper
Read this if: You like explosions. But from a safe distance
Favourite part: We exclude GRB 150906B from being associated with galaxy NGC 3313

Gamma-ray bursts are extremely violent explosions. They come in two (overlapping) classes: short and long. Short gamma-ray bursts are typically shorter than ~2 seconds and have a harder spectrum (more high energy emission). We think that these may come from the coalescence of neutron star binaries. Long gamma-ray bursts are (shockingly) typically longer than ~2 seconds, and have a softer spectrum (less high energy emission). We think that these could originate from the collapse of massive stars (like a supernova explosion). The introduction of the paper contains a neat review of the physics of both these types of sources. Both types of progenitors would emit gravitational waves that could be detected if the source was close enough.

The binary mergers could be picked up by our templated search (as reported in the O1 Binary Neutron Star/Neutron Star–Black Hole Paper): we have good models for what these signals look like, which allows us to efficiently search for them. We don’t have good models for the collapse of stars, but our unmodelled searches could pick these up. These look for the same signal in multiple detectors, but since they don’t know what they are looking for, it is harder to distinguish a signal from noise than for the templated search. Cross-referencing our usual searches with the times of gamma-ray bursts could help us boost the significance of a trigger: it might not be noteworthy as just a weak gravitational-wave (or gamma-ray) candidate, but considering the two together makes it much more unlikely that a coincidence would happen by chance. The on-line RAVEN pipeline monitors for alerts to minimise the chance that we miss a coincidence. As well as relying on our standard searches, we also do targeted searches following up on gamma-ray bursts, using the information from these external triggers.
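As a toy illustration of this kind of cross-referencing, here is a sketch that flags gravitational-wave candidates falling in a window around each gamma-ray burst. The window and all the times are hypothetical, not the values used in the actual searches.

```python
import numpy as np

def coincident_candidates(gw_times, grb_times, window=(-600.0, 60.0)):
    """Flag gravitational-wave candidate times falling in an on-source window
    around each gamma-ray burst. The default window (600 s before to 60 s
    after the burst) is an illustrative choice, not the paper's."""
    matches = []
    for t_grb in grb_times:
        lo, hi = t_grb + window[0], t_grb + window[1]
        for t_gw in gw_times:
            if lo <= t_gw <= hi:
                matches.append((t_gw, t_grb))
    return matches

# Made-up GPS times (seconds).
gw_times = np.array([1000000000.0, 1000512345.0])
grb_times = np.array([1000512400.0])
print(coincident_candidates(gw_times, grb_times))
```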

We used two search algorithms:

  • X-Pipeline is an unmodelled search (similar to cWB) which looks for a coherent signal, consistent with the sky position of the gamma-ray burst. This was run for all the gamma-ray bursts (long and short) for which we have good data from both LIGO detectors and a good sky location.
  • PyGRB is a modelled search which looks for binary signals using templates. Our main binary search algorithms check for coincident signals: a signal matching the same template in both detectors with compatible times. This search looks for coherent signals, factoring in the source direction. This gives extra sensitivity (~20%–25% in terms of distance). Since we know what the signal looks like, we can also use this algorithm to look for signals when only one detector is taking data. We used this algorithm on all short (or ambiguously classified) gamma-ray bursts for which we have data from at least one detector.

In total we analysed times corresponding to 42 gamma-ray bursts: 41 which occurred during O1 plus GRB 150906B. This happened during the engineering run before the start of O1, and luckily Hanford was in a stable observing state at the time. GRB 150906B was localised to come from a part of the sky close to the galaxy NGC 3313, which is only 54 megaparsec away. This is within the regime where we could have detected a binary merger. This caused much excitement at the time—people thought that this could be the most interesting result of O1—but this dampened down a week later with the detection of GW150914.

GRB 150906B sky location

Interplanetary Network (IPN) localization for GRB 150906B and nearby galaxies. Figure 1 from the O1 Gamma-Ray Burst Paper.

We didn’t find any gravitational-wave counterparts. This means that we could place some lower limits on how far away their sources could be. We performed injections of signals—using waveforms from binaries, collapsing stars (approximated with circular sine–Gaussian waveforms), and unstable discs (using an accretion disc instability model)—to see how far away we could have detected a signal, and set 90% probability limits on the distances (see Table 3 of the paper). The best of these are ~100–200 megaparsec (the worst is just 4 megaparsec, which is basically next door). These results aren’t too interesting yet; they will become more so in the future, and around the time we hit design sensitivity we will start overlapping with electromagnetic measurements of distances for short gamma-ray bursts. However, we can rule out GRB 150906B coming from NGC 3313 at high probability!

The O1 Intermediate Mass Black Hole Binary Paper

Synopsis: O1 Intermediate Mass Black Hole Binary Paper
Read this if: You like intermediate mass black holes (black holes of ~100 solar masses)
Favourite part: The teamwork between different searches

Black holes could come in many sizes. We know of stellar-mass black holes, the collapsed remains of dead stars, which are a few to a few tens of times the mass of our Sun, and we know of (super)massive black holes, lurking in the centres of galaxies, which are tens of thousands to billions of times the mass of our Sun. Between the two lie the elusive intermediate mass black holes. There have been repeated claims of observational evidence for their existence, but these are notoriously difficult to confirm. Gravitational waves provide a means of confirming the reality of intermediate mass black holes, if they do exist.

The gravitational wave signal emitted by a binary depends upon the mass of its components. More massive objects produce louder signals, but these signals also end at lower frequencies. The merger frequency of a binary is inversely proportional to the total mass. Ground-based detectors can’t detect massive black hole binaries as they are too low frequency, but they can detect binaries of a few hundred solar masses. We look for these in this search.
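To get a feel for the numbers, the standard innermost-stable-circular-orbit estimate gives a gravitational-wave frequency that scales inversely with the total mass. This back-of-the-envelope calculation (not the paper’s method) shows why a binary of a few hundred solar masses only just creeps into the detectors’ sensitive band:

```python
import numpy as np
from scipy import constants as const

M_SUN = 1.98892e30  # kg

def isco_frequency(total_mass_solar):
    """Gravitational-wave frequency at the innermost stable circular orbit of
    a (non-spinning, test-mass) system: f = c^3 / (6^(3/2) pi G M). A standard
    rough proxy for where the inspiral ends, not the search's actual cut-off."""
    M = total_mass_solar * M_SUN
    return const.c ** 3 / (6.0 ** 1.5 * np.pi * const.G * M)

for m in [10, 100, 600]:
    print(f"M = {m:4d} M_sun: f_ISCO ~ {isco_frequency(m):6.1f} Hz")
```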

Our flagship search for binary black holes looks for signals using matched filtering: we compare the data to a bank of template waveforms. The bank extends up to a total mass of 100 solar masses. The search described here continues above this (there’s actually some overlap, as we didn’t want to miss anything, but we shouldn’t have worried). Higher mass binaries are hard to detect as they are shorter, and so more difficult to distinguish from a little blip of noise, which is why this search was treated differently.

As well as using templates, we can do an unmodelled (burst) search for signals by looking for coherent signals in both detectors. This type of search isn’t as sensitive, as you don’t know what you are looking for, but can pick up short signals (like GW150914).

Our search for intermediate mass black holes uses both a modelled search (with templates spanning total masses of 50 to 600 solar masses) and a specially tuned burst search. Both make sure to include low frequency data in their analysis. This work is one of the few cross-working group (CBC for the templated search, and Burst for the unmodelled) projects, and I was pleased with the results.

This is probably where you expect me to say that we didn’t detect anything so we set upper limits. That is actually not the case here: we did detect something! Unfortunately, it wasn’t what we were looking for. We detected GW150914, which was a relief as it did lie within the range we were searching, as well as LVT151012 and GW151226. These were more of a surprise. GW151226 has a total mass of just ~24 solar masses (as measured including cosmological redshift), and so is well outside our bank. It was actually picked up just on the edge, but still, it’s impressive that the searches can find things beyond what they are aiming to pick up. Having found no intermediate mass black holes, we went and set some upper limits. (Yay!)

To set our upper limits, we injected some signals from binaries with specific masses and spins, and then saw how many would have been found with greater significance than our most significant trigger (after excluding GW150914, LVT151012 and GW151226). This is effectively asking how often we would expect to see something as significant as this trigger, which we think is just noise. This gives us a sensitive time–volume \langle VT \rangle which we have surveyed and found no mergers. We use this to set 90% upper limits on the merger rates R_{90\%} = 2.3/\langle VT \rangle, and define an effective distance D_{\langle VT \rangle} such that \langle VT \rangle = T_a (4\pi D_{\langle VT \rangle}^3/3), where T_a is the analysed amount of time. The plot below shows our limits on rate and effective distance for our different injections.

Intermediate mass black hole binary search results

Results from the O1 search for intermediate mass black hole binaries. The left panel shows the 90% confidence upper limit on the merger rate. The right panel shows the effective search distance. Each circle is a different injection. All have zero spin, except two 100+100 solar mass sets, where \chi indicates the spin aligned with the orbital angular momentum. Figure 2 of the O1 Intermediate Mass Black Hole Binary Paper.
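Translating a surveyed time–volume into the two quantities plotted above is straightforward. A minimal sketch with made-up numbers (not values from the paper):

```python
import numpy as np

def rate_limit_and_distance(vt, t_analysed):
    """Convert a surveyed time-volume <VT> (Gpc^3 yr) and an analysed time
    (yr) into the 90% rate upper limit and the effective search distance
    defined by <VT> = T_a * (4 pi D^3 / 3)."""
    r90 = 2.3 / vt                                         # Gpc^-3 yr^-1
    d_eff = (3.0 * vt / (4.0 * np.pi * t_analysed)) ** (1.0 / 3.0)
    return r90, d_eff * 1000.0                             # distance in Mpc

# Illustrative inputs, not the paper's:
r90, d_eff = rate_limit_and_distance(vt=0.1, t_analysed=48.0 / 365.25)
print(f"R_90 ~ {r90:.1f} Gpc^-3 yr^-1, D_<VT> ~ {d_eff:.0f} Mpc")
```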

There are a couple of caveats associated with our limits. The waveforms we use don’t include all the relevant physics (like orbital eccentricity and spin precession). Including everything is hard: we may use some numerical relativity waveforms in the future. However, they should give a good impression of our sensitivity. There’s quite a big improvement compared to previous searches (S6 Burst Search; S6 Templated Search). This comes from the improvement in Advanced LIGO’s sensitivity at low frequencies compared to initial LIGO. Future improvements to the low frequency sensitivity should increase our probability of making a detection.

I spent a lot of time working on this search as I was the review chair. As a reviewer, I had to make sure everything was done properly, and then reported accurately. I think our review team did a thorough job. I was glad when we were done, as I dislike being the bad cop.

 


Parameter estimation on gravitational waves from neutron-star binaries with spinning components

In gravitational-wave astronomy, some parameters are easier to measure than others. We are sensitive to properties which change the form of the wave, but sometimes the effect of changing one parameter can be compensated by changing another. We call this a degeneracy. In signals from coalescing binaries (two black holes or neutron stars inspiralling together), there is a degeneracy between the masses and spins. In this recently published paper, we look at what this means for observing binary neutron star systems.

History

This paper has been something of an albatross, and I’m extremely pleased that we finally got it published. I started working on it when I began my post-doc at Birmingham in 2013. Back then I was sharing an office with Ben Farr, and together with others in the Parameter Estimation Group, we were thinking about the prospect of observing binary neutron star signals (which we naively thought were the most likely) in LIGO’s first observing run.

One reason that this work took so long is that binary neutron star signals can be computationally expensive to analyse [bonus note]. The signal slowly chirps up in frequency, and can take up to a minute to sweep through the range of frequencies LIGO is sensitive to. That gives us a lot of gravitational wave to analyse. (For comparison, GW150914 lasted 0.2 seconds). We need to calculate waveforms to match to the observed signals, and these can be especially complicated when accounting for the effects of spin.

A second reason is that, shortly after submitting the paper in August 2015, we got a little distracted…

This paper was the third of a trilogy looking at measuring the properties of binary neutron stars. I’ve written about the previous instalment before. We knew that getting the final results for binary neutron stars, including all the important effects like spin, would take a long time, so we planned to follow up any detections in stages. A probable sky location can be computed quickly, then we can have a first try at estimating other parameters like masses using waveforms that don’t include spin, and then we go for the full results with spin. The quicker results would be useful for astronomers trying to find any explosions that coincided with the merger of the two neutron stars. The first two papers looked at results from the quicker analyses (especially at sky localization); in this one we check what effect neglecting spin has on measurements.

What we did

We analysed a population of 250 binary neutron star signals (these are the same as the ones used in the first paper of the trilogy). We used what was our best guess for the sensitivity of the two LIGO detectors in the first observing run (which was about right).

The simulated neutron stars all have small spins of less than 0.05 (where 0 is no spin, and 1 would be the maximum spin of a black hole). We expect neutron stars in these binaries to have spins of about this range. The maximum observed spin (for a neutron star not in a binary neutron star system) is around 0.4, and we think neutron stars should break apart for spins of 0.7. However, since we want to keep an open mind regarding neutron stars, when measuring spins we considered spins all the way up to 1.

What we found

Our results clearly showed the effect of the mass–spin degeneracy. The degeneracy increases the uncertainty for both the spins and the masses.

Even though the true spins are low, we find that across the 250 events, the median 90% upper limit on the spin of the more massive (primary) neutron star is 0.70, and the 90% limit on the less massive (secondary) neutron star is 0.86. We learn practically nothing about the spin of the secondary, but a little more about the spin of the primary, which is more important for the inspiral. Measuring spins is hard.

The effect of the mass–spin degeneracy for mass measurements is shown in the plot below. Here we show a random selection of events. The banana-shaped curves are the 90% probability intervals. They are narrow because we can measure a particular combination of masses, the chirp mass, really well. The mass–spin degeneracy determines how long the banana is. If we restrict the range of spins, we explore less of the banana (and potentially introduce an offset in our results).

Neutron star mass distributions

Rough outlines of 90% credible regions for component masses for a random assortment of signals. The circles show the true values. The coloured lines indicate the extent of the distribution with different limits on the spins. The grey area is excluded by our convention on masses m_1 \geq m_2. Figure 5 from Farr et al. (2016).

Although you can’t see it in the plot above, including spin also increases the uncertainty in the chirp mass. The plots below show the standard deviation (a measure of the width of the posterior probability distribution), divided by the mean, for several mass parameters. This gives a measure of the fractional uncertainty in our measurements. We show the chirp mass \mathcal{M}_\mathrm{c}, the mass ratio q = m_2/m_1 and the total mass M = m_1 + m_2, where m_1 and m_2 are the masses of the primary and secondary neutron stars respectively. The uncertainties are smaller for louder signals (higher signal-to-noise ratio). If we neglect the spin, the true chirp mass can lie outside the posterior distribution: on average it is about 5 standard deviations from the mean. If we include spin, the offset is just 0.7 standard deviations from the mean (there’s still some offset as we’re allowing for spins all the way up to 1).

Mass measurements for binary neutron stars with and without spin

Fractional statistical uncertainties in chirp mass (top), mass ratio (middle) and total mass (bottom) estimates as a function of network signal-to-noise ratio for both the fully spinning analysis and the quicker non-spinning analysis. The lines indicate approximate power-law trends to guide the eye. Figure 2 of Farr et al. (2016).
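For reference, the statistics described above are easy to compute from a set of posterior samples. A small sketch, using a made-up Gaussian posterior rather than real LALInference output:

```python
import numpy as np

def fractional_uncertainty(samples):
    """Standard deviation over mean of a set of posterior samples."""
    return np.std(samples) / np.mean(samples)

def offset_in_sigma(samples, true_value):
    """How many posterior standard deviations the true value sits from the
    posterior mean; large values signal a biased (offset) measurement."""
    return abs(np.mean(samples) - true_value) / np.std(samples)

# Toy posterior: chirp-mass samples (solar masses) from a made-up analysis.
rng = np.random.default_rng(1)
chirp_mass_samples = rng.normal(loc=1.188, scale=0.0002, size=5000)
print(fractional_uncertainty(chirp_mass_samples))
print(offset_in_sigma(chirp_mass_samples, true_value=1.1875))
```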

We need to allow for spins when measuring binary neutron star masses in order to explore the full range of possible masses.

Sky localization and distance, however, are not affected by the spins here. This might not be the case for sources which are more rapidly spinning, but assuming that binary neutron stars do have low spin, we are safe using the easier-to-calculate results. This is good news for astronomers who need to know promptly where to look for explosions.

arXiv: 1508.05336 [astro-ph.HE]
Journal: Astrophysical Journal; 825(2):116(10); 2016
Authorea [bonus note]: Parameter estimation on gravitational waves from neutron-star binaries with spinning components
Conference proceedings:
 Early Advanced LIGO binary neutron-star sky localization and parameter estimation
Favourite albatross:
 Wilbur

Bonus notes

How long?

The plot below shows how long it took to analyse each of the binary neutron star signals.

Run time for different analyses of binary neutron stars

Distribution of run times for binary neutron star signals. Low-latency sky localization is done with BAYESTAR; medium-latency non-spinning parameter estimation is done with LALInference and TaylorF2 waveforms, and high-latency fully spinning parameter estimation is done with LALInference and SpinTaylorT4 waveforms. The LALInference results are for 2000 posterior samples. Figure 9 from Farr et al. (2016).

BAYESTAR provides a rapid sky localization, taking less than ten seconds. This is handy for astronomers who want to catch a flash caused by the merger before it fades.

Estimates for the other parameters are computed with LALInference. How long this takes to run depends on which waveform you are using and how many samples from the posterior probability distribution you want (the more you have, the better you can map out the shape of the distribution). Here we show times for 2000 samples, which is enough to get a rough idea (we collected ten times more for GW150914 and friends). Collecting twice as many samples takes (roughly) twice as long. Prompt results can be obtained with a waveform that doesn’t include spin (TaylorF2); these take about a day at most.

For this work, we considered results using a waveform which included the full effects of spin (SpinTaylorT4). These take about twenty times longer than the non-spinning analyses. The maximum time was 172 days. I have a strong suspicion that the computing time cost more than my salary.

Gravitational-wave arts and crafts

Waiting for LALInference runs to finish gives you some time to practise hobbies. This is a globe knitted by Hannah. The two LIGO sites are marked in red, and a typical gravitational-wave sky localization is stitched on.

In order to get these results, we had to add check-pointing to our code, so we could stop it and restart it; we encountered a new type of error in the software which manages jobs running on our clusters, and Hannah Middleton and I got several angry emails from cluster admins (who are wonderful people) for having too many jobs running.

In comparison, analysing GW150914, LVT151012 and GW151226 was a breeze. Grudgingly, I have to admit that getting everything sorted out for this study made us reasonably well prepared for the real thing. Although, I’m not looking forward to that first binary neutron star signal…

Authorea

Authorea is an online collaborative writing service. It allows people to work together on documents, editing text, adding comments, and chatting with each other. By the time we came to write up the paper, Ben was no longer in Birmingham, and many of our coauthors are scattered across the globe. Ben thought Authorea might be useful for putting together the paper.

Writing was easy, and the ability to add comments on the text was handy for getting feedback from coauthors. The chat was good for quickly sorting out issues like plots. Overall, I was quite pleased, up to the point we wanted to get the final document. Extracting a nicely formatted PDF was awkward. For this I switched to using the GitHub back-end. On reflection, a simple git repo, plus a couple of Skype calls, might have been a smoother way of writing, at least for a standard journal article.

Authorea promises to be an open way of producing documents, and allows for others to comment on papers. I don’t know if anyone’s looked at our Authorea article. For astrophysics, most people use the arXiv, which is free to everyone, and I’m not sure if there’s enough appetite for interaction (beyond the occasional email to authors) to motivate people to look elsewhere. At least, not yet.

In conclusion, I think Authorea is a nice idea, and I would try out similar collaborative online writing tools again, but I don’t think I can give it a strong recommendation for your next paper unless you have a particular idea in mind of how to make the most of it.

Parameter estimation for binary neutron-star coalescences with realistic noise during the Advanced LIGO era

The first observing run (O1) of Advanced LIGO is nearly here, and with it the prospect of the first direct detection of gravitational waves. That’s all wonderful and exciting (far more exciting than a custard cream or even a chocolate digestive), but there’s a lot to be done to get everything ready. Aside from remembering to vacuum the interferometer tubes and polish the mirrors, we need to see how the data analysis will work out. After all, having put so much effort into the detector, it would be a shame if we couldn’t do any science with it!

Parameter estimation

Since joining the University of Birmingham team, I’ve been busy working on trying to figure out how well we can measure things using gravitational waves. I’ve been looking at binary neutron star systems. We expect binary neutron star mergers to be the main source of signals for Advanced LIGO. We’d like to estimate how massive the neutron stars are, how fast they’re spinning, how far away they are, and where in the sky they are. Just published is my first paper on how well we should be able to measure things. This took a lot of hard work from a lot of people, so I’m pleased it’s all done. I think I’ve earnt a celebratory biscuit. Or two.

When we see something that looks like it could be a gravitational wave, we run code to analyse the data and try to work out the properties of the signal. Working out some properties is a bit trickier than others. Sadly, we don’t have an infinite number of computers, so it means it can take a while to get results. Much longer than the time to eat a packet of Jaffa Cakes…

The fastest algorithm we have for binary neutron stars is BAYESTAR. This takes the same time as maybe eating one chocolate finger. Perhaps two, if you’re not worried about the possibility of choking. BAYESTAR is fast as it only estimates where the source is coming from. It doesn’t try to calculate a gravitational-wave signal and match it to the detector measurements, instead it just looks at numbers produced by the detection pipeline—the code that monitors the detectors and automatically flags whenever something interesting appears. As far as I can tell, you give BAYESTAR this information and a fresh cup of really hot tea, and it uses Bayes’ theorem to work out how likely it is that the signal came from each patch of the sky.

To work out further details, we need to know what a gravitational-wave signal looks like and then match this to the data. This is done using a different algorithm, which I’ll refer to as LALInference. (As names go, this isn’t as cool as SKYNET). This explores parameter space (hopping between different masses, distances, orientations, etc.), calculating waveforms and then working out how well they match the data, or rather how likely it is that we’d get just the right noise in the detector to make the waveform fit what we observed. We then use another liberal helping of Bayes’ theorem to work out how probable those particular parameter values are.
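To give a flavour of how this stochastic exploration works, here is a toy one-parameter Metropolis–Hastings sampler. The Gaussian log-likelihood stands in for the real match between waveform and data, and everything about it (the parameter values, the prior range, the proposal scale) is made up; LALInference itself uses full waveforms, many parameters and more sophisticated samplers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-parameter version of the idea: propose a chirp mass, score how well
# a waveform with that value would match the data (here a stand-in Gaussian
# log-likelihood), and accept/reject to build up posterior samples.
true_mc, sigma = 1.188, 0.001   # made-up "measured" chirp mass and its width

def log_likelihood(mc):
    return -0.5 * ((mc - true_mc) / sigma) ** 2

def log_prior(mc):
    return 0.0 if 1.0 < mc < 2.0 else -np.inf   # flat prior over a broad range

samples, mc = [], 1.3
for _ in range(20000):
    proposal = mc + rng.normal(scale=0.002)
    log_ratio = (log_likelihood(proposal) + log_prior(proposal)
                 - log_likelihood(mc) - log_prior(mc))
    if np.log(rng.random()) < log_ratio:   # Metropolis acceptance rule
        mc = proposal
    samples.append(mc)

posterior = np.array(samples[5000:])   # discard burn-in
print(posterior.mean(), posterior.std())
```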

It’s rather difficult to work out the waveforms, but some are easier than others. One of the things that makes things trickier is adding in the spins of the neutron stars. If you made a batch of biscuits at the same time you started a LALInference run, they’d still be good by the time a non-spinning run finished. With a spinning run, the biscuits might not be quite so appetising—I generally prefer more chocolate than penicillin on my biscuits. We’re working on speeding things up (if only to prevent increased antibiotic resistance).

In this paper, we were interested in what you could work out quickly, while there’s still a chance to catch any explosion that might accompany the merging of the neutron stars. We think that short gamma-ray bursts and kilonovae might be caused when neutron stars merge and collapse down to a black hole. (I find it mildly worrying that we don’t know what causes these massive explosions). To follow up on a gravitational-wave detection, you need to be able to tell telescopes where to point to see something, and manage this while there’s still something that’s worth seeing. This means that using spinning waveforms in LALInference is right out; we just use BAYESTAR and the non-spinning LALInference analysis.

What we did

To figure out what we could learn from binary neutron stars, we generated a large catalogue of fake signals, and then ran the detection and parameter-estimation codes on this to see how they worked. This has been done before in The First Two Years of Electromagnetic Follow-Up with Advanced LIGO and Virgo, which has a rather delicious astrobites write-up. Our paper is the sequel to this (and features most of the same cast). One of the differences is that The First Two Years assumed that the detectors were perfectly behaved and had lovely Gaussian noise. In this paper, we added in some glitches. We took some real data™ from initial LIGO’s sixth science run and stretched this so that it matches the sensitivity Advanced LIGO is expected to have in O1. This process is called recolouring [bonus note]. We now have fake signals hidden inside noise with realistic imperfections, and can treat it exactly as we would real data. We ran it through the detection pipeline, and anything which was flagged as probably being a signal (we used a false alarm rate of once per century), was analysed with the parameter-estimation codes. We looked at how well we could measure the sky location and distance of the source, and the masses of the neutron stars. It’s all good practice for O1, when we’ll be running this analysis on any detections.

What we found

  1. The flavour of noise (recoloured or Gaussian) makes no difference to how well we can measure things on average.
  2. Sky-localization in O1 isn’t great, typically hundreds of square degrees (the median 90% credible region is 632 deg²); for comparison, the Moon is about a fifth of a square degree. This’ll make things interesting for the people with telescopes.

    Sky localization map for O1.

    Probability of a gravitational-wave signal coming from different points on the sky. The darker the red, the higher the probability. The star indicates the true location. This is one of the worst localized events from our study for O1. You can find more maps in the data release (including 3D versions); this is Figure 6 of Berry et al. (2015).

  3. BAYESTAR does just as well as LALInference, despite being about 2000 times faster.

    Sky localization for binary neutron stars during O1.

    Sky localization (the size of the patch of the sky that we’re 90% sure contains the source location) varies with the signal-to-noise ratio (how loud the signal is). The approximate best fit is \log_{10}(\mathrm{CR}_{0.9}/\mathrm{deg^2}) \approx -2 \log_{10}(\varrho) +5.06, where \mathrm{CR}_{0.9} is the 90% sky area and \varrho is the signal-to-noise ratio. The results for BAYESTAR and LALInference agree, as do the results with Gaussian and recoloured noise. This is Figure 9 of Berry et al. (2015).

  4. We can’t measure the distance too well: the median 90% credible interval divided by the true distance (which gives something like twice the fractional error) is 0.85.
  5. Because we don’t include the spins of the neutron stars, we introduce some error into our mass measurements. The chirp mass, a combination of the individual masses that we’re most sensitive to [bonus note], is still reliably measured (the median offset is 0.0026 of the mass of the Sun, which is tiny), but we’ll have to wait for the full spinning analysis for individual masses.

    Mean offset in chirp-mass estimates when not including the effects of spin.

    Fraction of events with difference between the mean estimated and true chirp mass smaller than a given value. There is an error because we are not including the effects of spin, but this is small. Again, the type of noise makes little difference. This is Figure 15 of Berry et al. (2015).

There’s still some work to be done before O1, as we need to finish up the analysis with waveforms that include spin. In the mean time, our results are all available online for anyone to play with.

arXiv: 1411.6934 [astro-ph.HE]
Journal: Astrophysical Journal; 804(2):114(24); 2015
Data release: The First Two Years of Electromagnetic Follow-Up with Advanced LIGO and Virgo
Favourite colour: Blue. No, yellow…

Notes

The colour of noise: Noise is called white if it doesn’t have any frequency dependence. We made ours by taking some noise with initial LIGO’s frequency dependence (coloured noise), removing the frequency dependence (making it white), and then adding in the frequency dependence of Advanced LIGO (recolouring it).
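A bare-bones sketch of the recolouring idea in the frequency domain (the toy power-law PSDs stand in for the real initial and Advanced LIGO noise curves, and the actual procedure takes more care over windowing and filtering):

```python
import numpy as np

def recolour(strain, old_psd, new_psd, sample_rate):
    """Whiten a stretch of noise using the PSD it came with, then reshape it
    to a different PSD. A minimal sketch of the recolouring idea."""
    freqs = np.fft.rfftfreq(len(strain), d=1.0 / sample_rate)
    strain_f = np.fft.rfft(strain)
    whitened = strain_f / np.sqrt(old_psd(freqs))       # remove the old colour
    recoloured = whitened * np.sqrt(new_psd(freqs))     # apply the new colour
    return np.fft.irfft(recoloured, n=len(strain))

# Toy PSDs (functions of frequency); the real curves are tabulated, not
# simple power laws like these made-up ones.
old_psd = lambda f: 1e-46 * (1.0 + (50.0 / np.maximum(f, 1.0)) ** 4)
new_psd = lambda f: 1e-47 * (1.0 + (20.0 / np.maximum(f, 1.0)) ** 4)

noise = np.random.default_rng(3).normal(size=16384)
print(recolour(noise, old_psd, new_psd, sample_rate=4096).std())
```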

The chirp mass: Gravitational waves from a binary system depend upon the masses of the components; we’ll call these m_1 and m_2. The chirp mass is a combination of these that we can measure really well, as it determines the most significant parts of the shape of the gravitational wave. It’s given by

\displaystyle \mathcal{M} = \frac{m_1^{3/5} m_2^{3/5}}{(m_1 + m_2)^{1/5}}.

We get lots of good information on the chirp mass; unfortunately, this isn’t too useful for turning back into the individual masses. For that we need extra information, for example the mass ratio m_2/m_1. We can get this from less dominant parts of the waveform, but it’s not typically measured as precisely as the chirp mass, so we’re often left with big uncertainties.
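For completeness, here’s a small sketch of going from component masses to the chirp mass and back again using the mass ratio:

```python
def chirp_mass(m1, m2):
    """Chirp mass from the component masses (any consistent units)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def component_masses(mc, q):
    """Invert the chirp mass and mass ratio q = m2/m1 (with m1 >= m2, so
    q <= 1) back into the component masses."""
    m1 = mc * (1.0 + q) ** 0.2 / q ** 0.6
    return m1, q * m1

print(chirp_mass(1.4, 1.4))           # ~1.22 solar masses
print(component_masses(1.2188, 1.0))  # recovers roughly (1.4, 1.4)
```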