GW170814—Enter Virgo

On 14 August 2017 a gravitational wave signal (GW170814), originating from the coalescence of a binary black hole system, was observed by the global gravitational-wave observatory network of the two Advanced LIGO detectors and Advanced Virgo.  That’s right, Virgo is in the game!

A new foe appeared

Very few things excite me like unlocking a new character in Smash Bros. A new gravitational wave observatory might come close.

Advanced Virgo joined O2, the second observing run of the advanced detector era, on 1 August. This was a huge achievement. It has not been an easy route commissioning the new detector—it never ceases to amaze me how sensitive these machines are. Together, Advanced Virgo (near Pisa) and the two Advanced LIGO detectors (in Livingston and Hanford in the US) would take data until the end of O2 on 25 August.

On 14 August, we found a signal. A signal that was observable in all three detectors [bonus note]. Virgo is less sensitive than the LIGO instruments, so there is no impressive plot that shows something clearly popping out, but the Virgo data do complement the LIGO observations, indicating a consistent signal in all three detectors [bonus note].

Three different ways of visualising GW170814: an SNR time series, a spectrogram and a waveform reconstruction

A cartoon of three different ways to visualise GW170814 in the three detectors. These take a bit of explaining. The top panels show the signal-to-noise ratio for the search template that matched GW170814. They peak at the time corresponding to the merger. The peaks are clear in Hanford and Livingston. The peak in Virgo is less exceptional, but it matches the expected time delay and amplitude for the signal. The middle panels show time–frequency plots. The upward-sweeping chirp is visible in Hanford and Livingston, but less so in Virgo as it is less sensitive. The plot is zoomed in so that it's possible to pick out the detail in Virgo, but the chirp is visible for a longer stretch of time than plotted in Livingston. The bottom panels show whitened and band-passed strain data, together with the 90% region of the binary black hole templates used to infer the parameters of the source (the narrow dark band), and an unmodelled, coherent reconstruction of the signal (the wider light band). The agreement between the templates and the reconstruction is a check that the gravitational waves match our expectations for binary black holes. The whitening of the data mirrors how we do the analysis, weighting the noise at different frequencies by an estimate of its typical fluctuations. The signal certainly does look like the inspiral, merger and ringdown of a binary black hole. Figure 1 of the GW170814 Paper.

The signal originated from the coalescence of two black holes. GW170814 is thus added to the growing family of GW150914, LVT151012, GW151226 and GW170104.

GW170814 most closely resembles GW150914 and GW170104 (perhaps there’s something about ending with a 4). If we compare the masses of the two component black holes of the binary (m_1 and m_2), and the black hole they merge to form (M_\mathrm{f}), they are all quite similar

  • GW150914: m_1 = 36.2^{+5.2}_{-3.8} M_\odot, m_2 = 29.1^{+3.7}_{-4.4} M_\odot, M_\mathrm{f} = 62.3^{+3.7}_{-3.1} M_\odot;
  • GW170104: m_1 = 31.2^{+5.4}_{-6.0} M_\odot, m_2 = 19.4^{+5.3}_{-5.9} M_\odot, M_\mathrm{f} = 48.7^{+5.7}_{-4.6} M_\odot;
  • GW170814: m_1 = 30.5^{+5.7}_{-3.0} M_\odot, m_2 = 25.3^{+2.8}_{-4.2} M_\odot, M_\mathrm{f} = 53.2^{+3.2}_{-2.5} M_\odot.

GW170814’s source is another high-mass black hole system. It’s not too surprising (now we know that these systems exist) that we observe lots of these, as more massive black holes produce louder gravitational wave signals.
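
As a back-of-the-envelope aside of my own (not a result quoted from the paper), the difference between the total initial mass and the final mass is the mass radiated away as gravitational waves:

# Rough estimate of the energy radiated by GW170814, using the median mass
# values quoted above (this ignores the uncertainties and the correlations
# between parameters, so it is only illustrative).
M_SUN_KG = 1.989e30   # solar mass in kilograms
C = 2.998e8           # speed of light in metres per second

m1, m2, m_final = 30.5, 25.3, 53.2            # solar masses
m_radiated = m1 + m2 - m_final                # ~2.6 solar masses
energy_joules = m_radiated * M_SUN_KG * C**2  # E = mc^2
print(m_radiated, "solar masses, or about", energy_joules, "J")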

GW170814 is also comparable in terms of black hole spins. Spins are more difficult to measure than masses, so we'll just look at the effective inspiral spin \chi_\mathrm{eff}, a particular combination of the two component spins that influences how they inspiral together, and the spin of the final black hole a_\mathrm{f}

  • GW150914: \chi_\mathrm{eff} = -0.06^{+0.14}_{-0.14}, a_\mathrm{f} = 0.70^{+0.07}_{-0.05};
  • GW170104: \chi_\mathrm{eff} = -0.12^{+0.21}_{-0.30}, a_\mathrm{f} = 0.64^{+0.09}_{-0.20};
  • GW170814: \chi_\mathrm{eff} = 0.06^{+0.12}_{-0.12}, a_\mathrm{f} = 0.70^{+0.07}_{-0.05}.
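
For reference, the effective inspiral spin is the mass-weighted combination of the components of the two spins along the orbital angular momentum,

\displaystyle \chi_\mathrm{eff} = \frac{m_1 \chi_{1,z} + m_2 \chi_{2,z}}{m_1 + m_2},

where \chi_{1,z} and \chi_{2,z} are the aligned components of the two dimensionless spins, so it lies between -1 and +1.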

There's some spread, but the effective inspiral spins are all consistent with being close to zero. Small values occur when the individual spins are small, when the spins are tilted away from the orbital angular momentum (or point in opposite directions so that their contributions cancel), or some combination of the two. I'm starting to ponder if high-mass black holes might have small spins. We don't have enough information to tease these possibilities apart yet, but this new system is consistent with the story so far.

One of the things Virgo helps a lot with is localizing the source on the sky. Most of the information about the source location comes from the difference in arrival times at the detectors (since we know that gravitational waves should travel at the speed of light). With two detectors, the time delay constrains the source to a ring on the sky; with three detectors, time delays can narrow the possible locations down to a couple of blobs. Folding in the amplitude of the signal as measured by the different detectors adds extra information, since detectors are not equally sensitive to all points on the sky (they are most sensitive to sources overhead or underneath). This can even help when you don't observe the signal in all detectors, as you know the source must be in a direction that detector isn't too sensitive to. GW170814 arrived at LIGO Livingston first (although it's not a competition), then ~8 ms later at LIGO Hanford, and ~14 ms after Livingston at Virgo. If we only had the two LIGO detectors, we'd have an uncertainty on the source's sky position of over 1000 square degrees, but adding in Virgo, we get this down to 60 square degrees. That's still pretty large by astronomical standards (the full Moon is about a quarter of a square degree), but a fantastic improvement [bonus note]!
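
To get a feel for how the timing pins down the direction, here is a toy calculation of my own (not from the paper): the arrival-time difference between two detectors separated by a baseline d confines the source to a ring at angle \theta from the line joining them, with \cos\theta = c\Delta t/d. The baselines below are round numbers for illustration only.

import numpy as np

# Toy triangulation: a time delay between two detectors constrains the
# source to a ring on the sky via cos(theta) = c * dt / baseline.
C = 2.998e8   # speed of light in m/s

def ring_angle_deg(dt, baseline):
    """Angle between the source direction and the detector baseline, in degrees."""
    return np.degrees(np.arccos(np.clip(C * dt / baseline, -1.0, 1.0)))

baseline_hl = 3.0e6   # Hanford--Livingston separation, roughly 3000 km
baseline_lv = 8.0e6   # Livingston--Virgo separation, roughly 8000 km

print(ring_angle_deg(8e-3, baseline_hl))    # ring from the ~8 ms Hanford delay
print(ring_angle_deg(14e-3, baseline_lv))   # ring from the ~14 ms Virgo delay

Combining the rings from the different detector pairs leaves only a couple of intersection regions, which is why the third detector shrinks the sky map so dramatically.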

Sky localization of GW170814

90% probability localizations for GW170814. The large banana-shaped (and banana-coloured, but not banana-flavoured) curve uses just the two LIGO detectors; its area is 1160 square degrees. The green region shows the improvement from adding Virgo; its area is just 100 square degrees. Both of these are calculated using BAYESTAR, a rapid localization algorithm. The purple map is the final localization from our full parameter estimation analysis (LALInference); its area is just 60 square degrees! Whereas BAYESTAR only uses the best matching template from the search, the full parameter estimation analysis is free to explore a range of different templates. Part of Figure 3 of the GW170814 Paper.

Having additional detectors can help improve gravitational wave measurements in other ways too. One of the predictions of general relativity is that gravitational waves come in two polarizations. These polarizations describe the pattern of stretching and squashing as the wave passes, and are illustrated below.

Plus and cross polarizations

The two polarizations of gravitational waves: plus (left) and cross (right). Here, the wave is travelling into or out of the screen. Animations adapted from those by MOBle on Wikipedia.

These two polarizations are the two tensor polarizations, but other patterns of squeezing could be present in modified theories of gravity. If we could detect any of these we would immediately know that general relativity is wrong. The two LIGO detectors are almost exactly aligned, so it's difficult to get any information on other polarizations. (We tried with GW150914 and couldn't say anything either way). With Virgo, we get a little more information. As a first illustration of what we may be able to do, we compared how well the observed pattern of radiation at the detectors matched different polarizations, to see how general relativity's tensor polarizations compared to a signal of entirely vector or scalar radiation. The tensor polarizations are clearly preferred, so general relativity lives another day. This isn't too surprising, as most modified theories of gravity with other polarizations predict mixtures of the different polarizations (rather than all of one). To be able to constrain all the mixtures with these short signals we really need a network of five detectors, so we'll have to wait for KAGRA and LIGO-India to come on-line.

The six gravitational wave polarizations

The six polarizations of a metric theory of gravity. The wave is travelling in the z direction. (a) and (b) are the plus and cross tensor polarizations of general relativity. (c) and (d) are the scalar breathing and longitudinal modes, and (e) and (f) are the vector x and y polarizations. The tensor polarizations (in red) are transverse, the vector and longitudinal scalar mode (in green) are longitudinal. The scalar breathing mode (in blue) is an isotropic expansion and contraction, so it's a bit of a mix of transverse and longitudinal. Figure 10 from (the excellent) Will (2014).

We’ll be presenting a more detailed analysis of GW170814 later, in papers summarising our O2 results, so stay tuned for more.

Title: GW170814: A three-detector observation of gravitational waves from a binary black hole coalescence
arXiv: 1709.09660 [gr-qc]
Journal: Physical Review Letters; 119(14):141101(16) [bonus note]
Data release: LIGO Open Science Center
Science summary: GW170814: A three-detector observation of gravitational waves from a binary black hole coalescence

If you’re looking for the most up-to-date results regarding GW170814, check out the O2 Catalogue Paper.

Bonus notes

Signs of paranoia

Those of you who have been following the story of gravitational waves for a while may remember the case of the Big Dog. This was a blind injection of a signal during the initial detector era. One of the things that made it an interesting signal to analyse was that it had been injected with an inconsistent sign in Virgo compared to the two LIGO instruments (basically it was upside down). Making this type of sign error is easy, and we were a little worried that we might make this sort of mistake when analysing the real data. The Virgo calibration team were extremely careful about this, and confident in their results. Of course, we're quite paranoid, so during the preliminary analysis of GW170814, we tried some parameter estimation runs with the data from Virgo flipped. This was clearly disfavoured compared to the right sign, so we all breathed easily.

I am starting to believe that God may be a detector commissioner. At the start of O1, we didn’t have the hardware injection systems operational, but GW150914 showed that things were working properly. Now, with a third detector on-line, GW170814 shows that the network is functioning properly. Astrophysical injections are definitely the best way to confirm things are working!

Signal hunting

Our usual way to search for binary black hole signals is to compare the data to a bank of waveform templates. Since Virgo is less sensitive than the two LIGO detectors, and would only be running for a short amount of time, these main searches weren't extended to use data from all three detectors. This seemed like a sensible plan: we were confident that this wouldn't cause us to miss anything, and we can detect GW170814 with high significance using just data from Livingston and Hanford—the false alarm rate is estimated to be less than 1 in 27000 years (meaning that if the detectors were left running in the same state, we'd expect random noise to make something this signal-like less than once every 27000 years). However, we realised that we wanted to be able to show that Virgo had indeed seen something, and the search wasn't set up for this.

Therefore, for the paper, we list three different checks to show that Virgo did indeed see the signal.

  1. In a similar spirit to the main searches, we took the best fitting template (it doesn't matter in terms of results if this is the best matching template found by the search algorithms, or the maximum likelihood waveform from parameter estimation), and compared this to a stretch of data. We then calculated the probability of seeing a peak in the signal-to-noise ratio (as shown in the top row of Figure 1) at least as large as identified for GW170814, within the time window expected for a real signal. Little blips of noise can cause peaks in the signal-to-noise ratio; for example, there's a glitch about 50 ms after GW170814 which shows up. We find that there's a 0.3% probability of getting a signal-to-noise ratio peak as large as GW170814. That's pretty solid evidence for Virgo having seen the signal, but perhaps not overwhelming.
  2. Binary black hole coalescences can also be detected (if the signals are short) by our searches for unmodelled signals. This was the case for GW170814. These searches were using data from all three detectors, so we can compare results with and without Virgo. Using just the two LIGO detectors, we calculate a false alarm rate of 1 per 300 years. This is good enough to claim a detection. Adding in Virgo, the false alarm rate drops to 1 per 5900 years! We see adding in Virgo improves the significance by almost a factor of 20.
  3. Using our parameter estimation analysis, we calculate the evidence (marginal likelihood) for (i) there being a coherent signal in Livingston and Hanford, and Gaussian noise in Virgo, and (ii) there being a coherent signal in all three detectors. We then take the ratio to calculate the Bayes factor. We find that a coherent signal in all three detectors is preferred by a factor of over 1600. This is a variant of a test proposed in Veitch & Vecchio (2010); it could be fooled if the noise in Virgo is non-Gaussian (if there is a glitch), but together with the above we think that the simplest explanation for Virgo’s data is that there is a signal.
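
As a minimal sketch of that last check (with invented log-evidence values, not the numbers from the real analysis), the Bayes factor is just the ratio of the evidences for the two hypotheses:

import numpy as np

# Toy version of the coherence test: compare the evidence for a coherent
# signal in all three detectors against a coherent signal in the two LIGO
# detectors plus Gaussian noise in Virgo. These log evidences are invented.
log_z_coherent_hlv = -1000.0        # hypothetical: signal coherent across H, L and V
log_z_hl_plus_noise_v = -1007.4     # hypothetical: signal in H and L, noise in V

log_bayes_factor = log_z_coherent_hlv - log_z_hl_plus_noise_v
print(np.exp(log_bayes_factor))     # a factor of ~1600 corresponds to a log Bayes factor of ~7.4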

In conclusion: Virgo works. Probably.

Follow-up observations

Adding Virgo to the network greatly improves localization of the source, which is a huge advantage when searching for counterparts. For a binary black hole, as we have here, we don’t expect a counterpart (which would make finding one even more exciting). So far, no counterpart has been reported.

Announcement

This is the first observation we've announced before being published. The draft made public at the time of the announcement was accepted, pending fixing up some minor points raised by the referees (who were fantastically quick in reporting back). I guess that binary black holes are now familiar enough that we are on solid ground claiming them. I'd be interested to know if people think it would be good if we didn't always wait for the rubber stamp of peer review, or whether they would prefer detections to be externally vetted first? Sharing papers before publication would mean that we get more chance for feedback from the community, which would be good, but perhaps the Collaboration should be seen to do things properly?

One reason that the draft paper is being shared early is because of an opportunity to present to the G7 Science Ministers Meeting in Italy. I think any excuse to remind politicians that international collaboration is a good thing™ is worth taking. Although I would have liked the paper to be a little more polished [bonus advice]. The opportunity to present here only popped up recently, which is one reason why things aren’t as perfect as usual.

I also suspect that Virgo were keen to demonstrate that they had detected something prior to any Nobel Prize announcement. There’s a big difference between stories being written about LIGO and Virgo’s discoveries, and having as an afterthought that Virgo also ran in August.

The main reason, however, was to get this paper out before the announcement of GW170817. The identification of GW170817's counterpart relied on us being able to localize the source. In that case, there wasn't a clear signal in Virgo (the lack of a signal tells us the source was in a direction Virgo wasn't particularly sensitive to). People agreed that we really needed to demonstrate that Virgo can detect gravitational waves in order to be convincing that not seeing a signal is useful information. We needed to demonstrate that Virgo does work so that our case for GW170817 was watertight and bulletproof (it's important to be prepared).

Perfect advice

Some useful advice I was given as a PhD student was that done is better than perfect. Having something finished is often more valuable than having lots of really polished bits that don't fit together to make a cohesive whole, and having everything absolutely perfect takes forever. This is useful to remember when writing up a thesis. I think it might apply here too: the Paper Writing Team have done a truly heroic job in getting something this advanced in little over a month. There's always one more thing to do… [one more bonus note]

One more thing

One point I was hoping that the Paper Writing Team would clarify is our choice of prior probability distribution for the black hole spins. We don’t get a lot of information about the spins from the signal, so our choice of prior has an impact on the results.

The paper says that we assume “no restrictions on the spin orientations”, which doesn't make much sense, as one of the two waveforms we use to analyse the signal only includes spins aligned with the orbital angular momentum! What the paper meant was that we assume a prior probability distribution which has an isotropic distribution of spins, and for the aligned-spin (no precession) waveform, we assume a prior probability distribution on the aligned components of the spins which matches what you would have for an isotropic distribution of spins (in effect, assuming that we can only measure the aligned components of the spins, which is a good approximation).
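
As a quick numerical illustration of that last point, here's a little sketch of my own (assuming, purely for illustration, spin magnitudes uniform between 0 and 1): draw isotropic spins and project them onto the orbital angular momentum to see the implied prior on the aligned components.

import numpy as np

# Draw spins with isotropic directions and (as an assumption for this sketch)
# uniform magnitudes, then histogram the component along the orbital angular
# momentum. The result is sharply peaked at zero.
rng = np.random.default_rng(0)
n = 1_000_000
magnitude = rng.uniform(0.0, 1.0, n)   # dimensionless spin magnitude
cos_tilt = rng.uniform(-1.0, 1.0, n)   # isotropic directions: uniform in cos(tilt)
chi_z = magnitude * cos_tilt           # aligned component of the spin

hist, edges = np.histogram(chi_z, bins=50, density=True)
print(edges[np.argmax(hist)], hist.max())   # the peak sits at chi_z ~ 0

This is the sort of peaked-at-zero prior on the aligned components that mimics an isotropic spin distribution.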

Observing run 1—The papers

The second observing run (O2) of the advanced gravitational wave detectors is now over, which has reminded me how dreadfully behind I am in writing about papers. In this post I’ll summarise results from our first observing run (O1), which ran from September 2015 to January 2016.

I’ll add to this post as I get time, and as papers are published. I’ve started off with papers searching for compact binary coalescences (as these are closest to my own research). There are separate posts on our detections GW150914 (and its follow-up papers: set I, set II) and GW151226 (this post includes our end-of-run summary of the search for binary black holes, including details of LVT151012).

Transient searches

The O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Title: Upper limits on the rates of binary neutron star and neutron-star–black-hole mergers from Advanced LIGO’s first observing run
arXiv: 1607.07456 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 832(2):L21(15); 2016

Our main search for compact binary coalescences targets binary black holes (binaries of two black holes), binary neutron stars (two neutron stars) and neutron-star–black-hole binaries (one of each). Having announced the results of our search for binary black holes, this paper gives the detail of the rest. Since we didn't make any detections, we set some new, stricter upper limits on their merger rates. For binary neutron stars, this is 12,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}.

More details: O1 Binary Neutron Star/Neutron Star–Black Hole Paper summary

The O1 Gamma-Ray Burst Paper

Title: Search for gravitational waves associated with gamma-ray bursts during the first Advanced LIGO observing run and implications for the origin of GRB 150906B
arXiv: 1611.07947 [astro-ph.HE]
Journal: Astrophysical Journal; 841(2):89(18); 2017
LIGO science summary: What’s behind the mysterious gamma-ray bursts? LIGO’s search for clues to their origins

Some binary neutron star or neutron-star–black-hole mergers may be accompanied by a gamma-ray burst. This paper describes our search for signals coinciding with observations of gamma-ray bursts (including GRB 150906B, which was potentially especially close by). Knowing when to look makes it easy to distinguish a signal from noise. We don't find anything, so we can exclude any close binary mergers as sources of these gamma-ray bursts.

More details: O1 Gamma-Ray Burst Paper summary

The O1 Intermediate Mass Black Hole Binary Paper

Title: Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO
arXiv: 1704.04628 [gr-qc]
Journal: Physical Review D; 96(2):022001(14); 2017
LIGO science summary: Search for mergers of intermediate-mass black holes

Our main search for binary black holes in O1 targeted systems with masses less than about 100 solar masses. There could be more massive black holes out there. Our detectors are sensitive to signals from binaries up to a few hundred solar masses, but these are difficult to detect because they are so short. This paper describes our search specially designed for such systems. This combines techniques which use waveform templates and those which look for unmodelled transients (bursts). Since we don't find anything, we set some new upper limits on merger rates.

More details: O1 Intermediate Mass Black Hole Binary Paper summary

The O1 Burst Paper

Title: All-sky search for short gravitational-wave bursts in the first Advanced LIGO run
arXiv: 1611.02972 [gr-qc]
Journal: Physical Review D; 95(4):042003(14); 2017

If we only search for signals for which we have models, we'll never discover something new. Unmodelled (burst) searches are more flexible and don't assume a particular form for the signal. This paper describes our search for short bursts. We successfully find GW150914, as it is short and loud, and burst searches are good for this type of signal, but we don't find anything else. (It's not too surprising that GW151226 and LVT151012 are below the threshold for detection, because they are longer and quieter than GW150914.)

More details: O1 Burst Paper summary

The O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Synopsis: O1 Binary Neutron Star/Neutron Star–Black Hole Paper
Read this if: You want a change from black holes
Favourite part: We’re getting closer to detection (and it’ll still be interesting if we don’t find anything)

The Compact Binary Coalescence (CBC) group target gravitational waves from three different flavours of binary in our main search: binary neutron stars, neutron star–black hole binaries and binary black holes. Before O1, I would have put my money on us detecting a binary neutron star first, around-about O3. Reality had other ideas, and we discovered binary black holes. Those results were reported in the O1 Binary Black Hole Paper; this paper goes into our results for the others (which we didn’t detect).

To search for signals from compact binaries, we use a bank of gravitational wave signals to match against the data. This bank goes up to total masses of 100 solar masses. We split the bank up, so that objects below 2 solar masses are considered neutron stars. This doesn't make too much difference to the waveforms we use to search (neutron stars, being made of stuff, can be tidally deformed by their companion, which adds some extra features to the waveform, but we don't include these in the search). However, we do limit the spins for neutron stars to less than 0.05, as this encloses the range of spins estimated for neutron star binaries from binary pulsars. This choice shouldn't impact our ability to detect neutron stars with moderate spins too much.

We didn’t find any interesting events: the results were consistent with there just being background noise. If you read really carefully, you might have deduced this already from the O1 Binary Black Hole Paper, as the results from the different types of binaries are completely decoupled. Since we didn’t find anything, we can set some upper limits on the merger rates for binary neutron stars and neutron star–black hole binaries.

The expected number of events found in the search is given by

\Lambda = R \langle VT \rangle

where R is the merger rate, and \langle VT \rangle is the surveyed time–volume (you expect more detections if your detectors are more sensitive, so that they can find signals from further away, or if you leave them on for longer). We can estimate \langle VT \rangle by performing a set of injections and seeing how many are found/missed at a given threshold. Here, we use a false alarm rate of one per century. Given our estimate for \langle VT \rangle and our observation of zero detections, we can calculate a probability distribution for R using Bayes' theorem. This requires a choice for a prior distribution of \Lambda. We use a uniform prior, for consistency with what we've done in the past.

With a uniform prior, the upper limit on the rate at confidence level c is

\displaystyle R_c = \frac{-\ln(1-c)}{\langle VT \rangle},

so the 90% confidence upper limit is R_{90\%} = 2.30/\langle VT \rangle. This is quite commonly used; for example, we make use of it in the O1 Intermediate Mass Black Hole Binary Search. For comparison, if we had used a Jeffreys prior of 1/\sqrt{\Lambda}, the equivalent result is

\displaystyle R_c = \frac{\left[\mathrm{erf}^{-1}(c)\right]^2}{\langle VT \rangle},

and hence R_{90\%} = 1.35/\langle VT \rangle, so results would be the same to within a factor of 2, but the results with the uniform prior are more conservative.
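
As a quick sketch of my own (just plugging numbers into the formulas above, with a made-up \langle VT \rangle rather than the value from the paper), the two priors give:

import numpy as np
from scipy.special import erfinv

def rate_limit_uniform(confidence, vt):
    """Upper limit on the merger rate with a uniform prior on the expected count."""
    return -np.log(1.0 - confidence) / vt

def rate_limit_jeffreys(confidence, vt):
    """Upper limit on the merger rate with a Jeffreys (1/sqrt) prior."""
    return erfinv(confidence) ** 2 / vt

vt = 2.0e-4   # hypothetical surveyed time-volume in Gpc^3 yr
print(rate_limit_uniform(0.9, vt))    # = 2.30 / <VT>
print(rate_limit_jeffreys(0.9, vt))   # = 1.35 / <VT>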

The plot below shows upper limits for different neutron star masses, assuming that the neutron star spins are (uniformly distributed) between 0 and 0.05 and isotropically orientated. From our observations of binary pulsars, we have seen that most of these neutron stars have masses of ~1.35 solar masses, so we can also put a limit on the binary neutron star merger rate assuming that their masses are normally distributed with mean of 1.35 solar masses and standard deviation of 0.13 solar masses. This gives an upper limit of R_{90\%} = 12,100~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} for isotropic spins up to 0.05, and R_{90\%} = 12,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} if you allow the spins up to 0.4.

Upper merger rate limits for binary neutron stars

90% confidence upper limits on the binary neutron star merger rate. These rates assume randomly orientated spins up to 0.05. Results are calculated using PyCBC, one of our search algorithms; GstLAL gives similar results. Figure 4 of the O1 Binary Neutron Star/Neutron Star–Black Hole Paper.

For neutron star–black hole binaries there's a greater variation in possible merger rates because the black holes can have a greater range of masses and spins. The upper limits range from about R_{90\%} = 1,200~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} to 3,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} for a 1.4 solar mass neutron star and a black hole between 30 and 5 solar masses and a range of different spins (Table II of the paper).

It's not surprising that we didn't see anything in O1, but what about in future runs? The plots below compare projections for our future sensitivity with various predictions for the merger rates of binary neutron stars and neutron star–black hole binaries. A few things have changed since we made these projections, for example O2 ended up being 9 months instead of 6 months, but I think we're still somewhere in the O2 band. We'll have to see for O3. From these, it's clear that a detection in O1 was overly optimistic. In O2 and O3 it becomes more plausible. This means even if we don't see anything, we'll still be doing some interesting astrophysics as we can start ruling out some models.

Comparison of merger rates

Comparison of upper limits for binary neutron star (BNS; top) and neutron star–black hole binaries (NSBH; bottom) merger rates with theoretical and observational limits. The blue bars show O1 limits, the green and orange bars show projections for future observing runs. Figures 6 and 7 from the O1 Binary Neutron Star/Neutron Star–Black Hole Paper.

Binary neutron star or neutron star–black hole mergers may be the sources of gamma-ray bursts. These are some of the most energetic explosions in the Universe, but we're not sure where they come from (I actually find that kind of worrying). We look at this connection a bit more in the O1 Gamma-Ray Burst Paper. The theory is that during the merger, neutron star matter gets ripped apart, squeezed and heated, and as part of this we get jets blasted outwards from the swirling material. There are always jets in these types of things. We see the gamma-ray burst if we are looking down the jet: the wider the jet, the larger the fraction of gamma-ray bursts we see. By comparing our estimated merger rates with the estimated rate of gamma-ray bursts, we can place some lower limits on the opening angle of the jet. If all gamma-ray bursts come from binary neutron stars, the opening angle needs to be bigger than 2.3_{-1.7}^{+1.7}~\mathrm{deg}, and if they all come from neutron star–black hole mergers the angle needs to be bigger than 4.3_{-1.9}^{+3.1}~\mathrm{deg}.
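
To see the logic of turning a rate limit into an angle, here is a rough sketch of my own (the gamma-ray burst rate below is a placeholder, not the value used in the paper): if jets have half-opening angle \theta, we only see the fraction of mergers whose jets point towards us, so the observed burst rate is roughly the merger rate times the beaming fraction 1 - \cos\theta, and an upper limit on the merger rate becomes a lower limit on \theta.

import numpy as np

# Rough jet-angle lower limit from a merger-rate upper limit.
def min_jet_angle_deg(grb_rate, merger_rate_limit):
    """Lower limit on the jet half-opening angle, in degrees."""
    beaming_fraction = grb_rate / merger_rate_limit   # = 1 - cos(theta)
    return np.degrees(np.arccos(1.0 - beaming_fraction))

grb_rate = 10.0          # hypothetical short gamma-ray burst rate, Gpc^-3 yr^-1
merger_limit = 12600.0   # binary neutron star upper limit quoted above
print(min_jet_angle_deg(grb_rate, merger_limit))   # a couple of degrees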

The O1 Gamma-Ray Burst Paper

Synopsis: O1 Gamma-Ray Burst Paper
Read this if: You like explosions. But from a safe distance
Favourite part: We exclude GRB 150906B from being associated with galaxy NGC 3313

Gamma-ray bursts are extremely violent explosions. They come in two (overlapping) classes: short and long. Short gamma-ray bursts are typically shorter than ~2 seconds and have a harder spectrum (more high energy emission). We think that these may come from the coalescence of neutron star binaries. Long gamma-ray bursts are (shockingly) typically longer than ~2 seconds, and have a softer spectrum (less high energy emission). We think that these could originate from the collapse of massive stars (like a supernova explosion). The introduction of the paper contains a neat review of the physics of both these types of sources. Both types of progenitors would emit gravitational waves that could be detected if the source was close enough.

The binary mergers could be picked up by our templated search (as reported in the O1 Binary Neutron Star/Neutron Star–Black Hole Paper): we have good models for what these signals look like, which allows us to efficiently search for them. We don't have good models for the collapse of stars, but our unmodelled searches could pick these up. These look for the same signal in multiple detectors, but since they don't know what they are looking for, it is harder to distinguish a signal from noise than for the templated search. Cross-referencing our usual searches with the times of gamma-ray bursts could help us boost the significance of a trigger: it might not be noteworthy as just a weak gravitational-wave (or gamma-ray) candidate, but considering them together makes it much more unlikely that a coincidence would happen by chance. The on-line RAVEN pipeline monitors for alerts to minimise the chance that we miss a coincidence. As well as relying on our standard searches, we also do targeted searches following up on gamma-ray bursts, using the information from these external triggers.

We used two search algorithms:

  • X-Pipeline is an unmodelled search (similar to cWB) which looks for a coherent signal, consistent with the sky position of the gamma-ray burst. This was run for all the gamma-ray bursts (long and short) for which we have good data from both LIGO detectors and a good sky location.
  • PyGRB is a modelled search which looks for binary signals using templates. Our main binary search algorithms check for coincident signals: a signal matching the same template in both detectors with compatible times. This search instead looks for coherent signals, factoring in the source direction. This gives extra sensitivity (~20%–25% in terms of distance). Since we know what the signal looks like, we can also use this algorithm to look for signals when only one detector is taking data. We used this algorithm on all short (or ambiguously classified) gamma-ray bursts for which we have data from at least one detector.

In total we analysed times corresponding to 42 gamma-ray bursts: 41 which occurred during O1 plus GRB 150906B. This happened in the engineering run before the start of O1, and luckily Hanford was in a stable observing state at the time. GRB 150906B was localised to come from a part of the sky close to the galaxy NGC 3313, which is only 54 megaparsecs away. This is within the regime where we could have detected a binary merger. This caused much excitement at the time—people thought that this could be the most interesting result of O1—but this dampened down a week later with the detection of GW150914.

GRB 150906B sky location

Interplanetary Network (IPN) localization for GRB 150906B and nearby galaxies. Figure 1 from the O1 Gamma-Ray Burst Paper.

We didn't find any gravitational-wave counterparts. This means that we could place some lower limits on how far away their sources could be. We performed injections of signals—using waveforms from binaries, collapsing stars (approximated with circular sine–Gaussian waveforms), and unstable discs (using an accretion disc instability model)—to see how far away we could have detected a signal, and set 90% probability limits on the distances (see Table 3 of the paper). The best of these are ~100–200 megaparsecs (the worst is just 4 megaparsecs, which is basically next door). These results aren't too interesting yet; they will become more so in the future, and around the time we hit design sensitivity we will start overlapping with electromagnetic measurements of distances for short gamma-ray bursts. However, we can rule out GRB 150906B coming from NGC 3313 at high probability!

The O1 Intermediate Mass Black Hole Binary Paper

Synopsis: O1 Intermediate Mass Black Hole Binary Paper
Read this if: You like intermediate mass black holes (black holes of ~100 solar masses)
Favourite part: The teamwork between different searches

Black holes could come in many sizes. We know of stellar-mass black holes, the collapsed remains of dead stars, which are a few to a few tens of times the mass of our Sun, and we know of (super)massive black holes, lurking in the centres of galaxies, which are tens of thousands to billions of times the mass of our Sun. Between the two lie the elusive intermediate mass black holes. There have been repeated claims of observational evidence for their existence, but these are notoriously difficult to confirm. Gravitational waves provide a means of confirming the reality of intermediate mass black holes, if they do exist.

The gravitational wave signal emitted by a binary depends upon the mass of its components. More massive objects produce louder signals, but these signals also end at lower frequencies. The merger frequency of a binary is inversely proportional to the total mass. Ground-based detectors can’t detect massive black hole binaries as they are too low frequency, but they can detect binaries of a few hundred solar masses. We look for these in this search.
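
To put rough numbers on that, here's a back-of-the-envelope estimate of my own (using the innermost stable circular orbit as a stand-in for the merger, so only an order-of-magnitude guide):

# Rough merger frequency: the gravitational-wave frequency at the innermost
# stable circular orbit scales inversely with the total mass,
# f ~ 4400 Hz / (M / M_sun).
def merger_frequency_hz(total_mass_solar):
    return 4400.0 / total_mass_solar

for mass in [65, 100, 300, 600]:
    print(mass, "solar masses:", round(merger_frequency_hz(mass)), "Hz")

For the heaviest systems this drops towards 10 Hz and below, right at the bottom edge of the detectors' sensitive band, which is why the low-frequency sensitivity mentioned later matters so much.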

Our flagship search for binary black holes looks for signals using matched filtering: we compare the data to a bank of template waveforms. The bank extends up to a total mass of 100 solar masses. This search continues above this (there's actually some overlap as we didn't want to miss anything, but we shouldn't have worried). Higher mass binaries are harder to detect as they are shorter, and so more difficult to distinguish from a little blip of noise, which is why this search was treated differently.

As well as using templates, we can do an unmodelled (burst) search for signals by looking for coherent signals in both detectors. This type of search isn’t as sensitive, as you don’t know what you are looking for, but can pick up short signals (like GW150914).

Our search for intermediate mass black holes uses both a modelled search (with templates spanning total masses of 50 to 600 solar masses) and a specially tuned burst search. Both make sure to include low frequency data in their analysis. This work is one of the few cross-working group (CBC for the templated search, and Burst for the unmodelled) projects, and I was pleased with the results.

This is probably where you expect me to say that we didn't detect anything so we set upper limits. That is actually not the case here: we did detect something! Unfortunately, it wasn't what we were looking for. We detected GW150914, which was a relief as it did lie within the range we were searching, as well as LVT151012 and GW151226. These were more of a surprise. GW151226 has a total mass of just ~24 solar masses (as measured with cosmological redshift), and so is well outside our bank. It was actually picked up just on the edge, but still, it's impressive that the searches can find things beyond what they are aiming to pick up. Having found no intermediate mass black holes, we went and set some upper limits. (Yay!)

To set our upper limits, we injected some signals from binaries with specific masses and spins, and then saw how many would have been found with greater significance than our most significant trigger (after excluding GW150914, LVT151012 and GW151226). This is effectively asking the question of when we would see something as significant as this trigger which we think is just noise. This gives us a sensitive time–volume \langle VT \rangle which we have surveyed and found no mergers. We use this to set 90% upper limits on the merger rates R_{90\%} = 2.3/\langle VT \rangle, and define an effective distance D_{\langle VT \rangle} so that \langle VT \rangle = T_a (4\pi D_{\langle VT \rangle}^3/3), where T_a is the analysed amount of time. The plot below shows our limits on rate and effective distance for our different injections.
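
In other words (a small sketch of my own using the definition above, with a made-up \langle VT \rangle rather than a value from the paper), the effective distance is just the radius of the sphere which, multiplied by the analysed time, gives the surveyed time–volume:

import numpy as np

# Effective search distance from a surveyed time-volume:
# <VT> = T_a * (4/3) * pi * D^3, so D = (3 <VT> / (4 pi T_a))^(1/3).
def effective_distance_gpc(vt_gpc3_yr, analysed_time_yr):
    return (3.0 * vt_gpc3_yr / (4.0 * np.pi * analysed_time_yr)) ** (1.0 / 3.0)

vt = 1.0e-2          # hypothetical surveyed time-volume in Gpc^3 yr
t_analysed = 0.13    # hypothetical analysed time in years (roughly 48 days)
print(effective_distance_gpc(vt, t_analysed), "Gpc")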

Intermediate mass black hole binary search results

Results from the O1 search for intermediate mass black hole binaries. The left panel shows the 90% confidence upper limit on the merger rate. The right panel shows the effective search distance. Each circle is a different injection. All have zero spin, except two 100+100 solar mass sets, where \chi indicates the spin aligned with the orbital angular momentum. Figure 2 of the O1 Intermediate Mass Black Hole Binary Paper.

There are a couple of caveats associated with our limits. The waveforms we use don't include all the relevant physics (like orbital eccentricity and spin precession). Including everything is hard: we may use some numerical relativity waveforms in the future. However, they should give a good impression of our sensitivity. There's quite a big improvement compared to previous searches (S6 Burst Search; S6 Templated Search). This comes from the improvement of Advanced LIGO's sensitivity at low frequencies compared to initial LIGO. Future improvements to the low frequency sensitivity should increase our probability of making a detection.

I spent a lot of time working on this search as I was the review chair. As a reviewer, I had to make sure everything was done properly, and then reported accurately. I think our review team did a thorough job. I was glad when we were done, as I dislike being the bad cop.

The O1 Burst Paper

Synopsis: O1 Burst Paper
Read this if: You like to keep an open mind about what sources could be out there
Favourite part: GW150914 (of course)

The best way to find a signal is to know what you are looking for. This makes it much easier to distinguish a signal from random noise. However, what about the sources for which we don't have good models? Burst searches aim to find signals regardless of their shape. To do this, they look for coherent signals in multiple detectors. Their flexibility means that they are less sensitive than searches targeting a specific signal—the signal needs to be louder before we can be confident in distinguishing it from noise—but they could potentially detect a wider range of sources, and crucially catch signals missed by other searches.

This paper presents our main results looking for short burst signals (up to a few seconds in length). Complementary burst searches were done as part of the search for intermediate mass black hole binaries (whose signals can be so short that it doesn't matter too much if you have a model or not) and for counterparts to gamma-ray bursts.

There are two-and-a-half burst search pipelines: coherent WaveBurst (cWB), Omicron–LALInferenceBurst (oLIB), and a BayesWave follow-up to cWB. More details of each are found in the GW150914 Burst Companion Paper.

cWB looks for coherent power in the detectors—it looks for clusters of excess power in time and frequency. The search in O1 was split into a low-frequency component (signals below 1024 Hz) and a high-frequency component (signals above 1024 Hz). The low-frequency search was further divided into three classes:

  • C1 for signals which have a small range of frequencies (80% of the power in just a 5 Hz range). This is designed to catch blip glitches, short bursts of transient noise in our detectors. We’re not sure what causes blip glitches yet, but we know they are not real signals as they are seen independently in both detectors.
  • C3 looks for signals which increase in frequency with time—chirps. I suspect that this was (cheekily) designed to find binary black hole coalescences.
  • C2 (no, I don’t understand the ordering either) is everything else.

The false alarm rate is calculated independently for each division using time slides. We analyse data from the two detectors which has been shifted in time, so that there can be no real coincident signals between the two, and compare this background of noise-only triggers to the data without any shift.
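
Here's a cartoon of the time-slide idea, a toy sketch of my own rather than any of the real pipelines: shift one detector's triggers by offsets much larger than the light travel time between the sites, count how often the shifted triggers still line up, and repeat for many offsets to build up the background of accidental coincidences.

import numpy as np

# Toy time-slide background estimate: count accidental coincidences between
# two lists of (pretend) noise triggers after sliding one list in time.
rng = np.random.default_rng(1)
duration = 1.0e6    # seconds of pretend data
window = 0.015      # coincidence window in seconds
triggers_h = np.sort(rng.uniform(0, duration, 5000))
triggers_l = np.sort(rng.uniform(0, duration, 5000))

def count_coincidences(a, b):
    """Count triggers in a with a partner in (sorted) b within the window."""
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, len(b) - 1)]) < window
    right = np.abs(a - b[np.clip(idx, 0, len(b) - 1)]) < window
    return np.count_nonzero(left | right)

# Each slide is far longer than the ~10 ms light travel time, so none of
# these coincidences can be real; their average estimates the background.
offsets = np.arange(1, 101) * 100.0
background = [count_coincidences(triggers_h, np.sort((triggers_l + dt) % duration))
              for dt in offsets]
print("mean accidental coincidences per slide:", np.mean(background))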

oLIB works in two stages. First (the Omicron bit), data from the individual detectors are searched for excess power. If there is anything interesting, the data from both detectors are analysed coherently. We use a sine–Gaussian template, and compare the probability that the same signal is in both detectors, to there being independent noise (potentially a glitch) in the two. This analysis is split too: there is a high quality-factor vs low quality-factor split, which is similar to cWB's splitting off C1 to catch narrow-band features (the low quality-factor group catches the blip glitches). The false alarm rate is computed with time slides.

BayesWave is run as follow-up to triggers produced by cWB: it is too computationally expensive to run on all the data. BayesWave’s approach is similar to oLIB’s. It compares three hypotheses: just Gaussian noise, Gaussian noise and a glitch, and Gaussian noise and a signal. It constructs its signal using a variable number of sine–Gaussian wavelets. There are no cuts on its data. Again, time slides are used to estimate the false alarm rate.

The search does find a signal: GW150914. It is clearly found by all three algorithms. It is in cWB's C3, with a false alarm rate of less than 1 per 350 years; it is in oLIB's high quality-factor bin with a false alarm rate of less than 1 per 230 years, and is found by BayesWave with a false alarm rate of less than 1 per 1000 years. You might notice that these results are less stringent than in the initial search results presented at the time of the detection. This is because only a limited number of time slides were done: we could get higher significance if we did more, but it was decided that it wasn't worth the extra computing time, as we're already convinced that GW150914 is a real signal. I'm a little sad they took GW150914 out of their plots (I guess it distorted the scale since it's such an outlier from the background). Aside from GW150914, there are no detections.

Given the lack of detections, we can set some upper limits. I'll skip over the limits for binary black holes, since our templated search is more sensitive here. The plot below shows limits on the amount of gravitational-wave energy emitted by a burst source at 10 kpc, which could be detected with a false alarm rate of 1 per century 50% of the time. We use some simple waveforms for this calculation. The detectable energy scales with the distance squared, so at a distance of 20 kpc you would need to increase the energy by a factor of 4.

Upper limits on energy at different frequencies

Gravitational-wave energy at 50% detection efficiency for standard sources at a distance of 10 kpc. Results are shown for the three different algorithms. Figure 2 of the O1 Burst Paper.

Maybe next time we'll find something unexpected, but it will either need to be really energetic (like a binary black hole merger) or really close by (like a supernova in our own Galaxy).

Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignments

Gravitational waves allow us to infer the properties of binary black holes (two black holes in orbit about each other), but can we use this information to figure out how the black holes and the binary form? In this paper, we show that measurements of the black holes' spins can help us work this out, but probably not until we have at least 100 detections.

Black hole spins

Black holes are described by their masses (how much they bend spacetime) and their spins (how much they drag spacetime to rotate about them). The orientation of the spins relative to the orbit of the binary could tell us something about the history of the binary [bonus note].

We considered four different populations of spin–orbit alignments to see if we could tell them apart with gravitational-wave observations:

  1. Aligned—matching the idealised example of isolated binary evolution. This stands in for the case where misalignments are small, which might be the case if material blown off during a supernova ends up falling back and being swallowed by the black hole.
  2. Isotropic—matching the expectations for dynamically formed binaries.
  3. Equal misalignments at birth—this would be the case if the spins and orbit were aligned before the second supernova, which then tilted the plane of the orbit. (As the binary inspirals, the spins wobble around, so the two misalignment angles won’t always be the same).
  4. Both spins misaligned by supernova kicks, assuming that the stars were aligned with the orbit before exploding. This gives a more general scatter of unequal misalignments, but typically the primary (bigger and first forming) black hole is more misaligned.

These give a selection of possible spin alignments. For each, we assumed that the spin magnitude was the same and had a value of 0.7. This seemed like a sensible idea when we started this study [bonus note], but is now towards the upper end of what we expect for binary black holes.

Hierarchical analysis

To measure the properties of the population we need to perform a hierarchical analysis: there are two layers of inference, one for the individual binaries, and one for the population.

From a gravitational wave signal, we infer the properties of the source using Bayes’ theorem. Given the data d_\alpha, we want to know the probability that the parameters \mathbf{\Theta}_\alpha have different values, which is written as p(\mathbf{\Theta}_\alpha|d_\alpha). This is calculated using

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha) p(\mathbf{\Theta}_\alpha)}{p(d_\alpha)},

where p(d_\alpha | \mathbf{\Theta}_\alpha) is the likelihood, which we can calculate from our knowledge of the noise in our gravitational wave detectors, p(\mathbf{\Theta}_\alpha) is the prior on the parameters (what we would have guessed before we had the data), and the normalisation constant p(d_\alpha) is called the evidence. We’ll use the evidence again in the next layer of inference.

Our prior on the parameters should actually depend upon what we believe about the astrophysical population. It is different if we believed that Model 1 were true (when we’d only consider aligned spins) than for Model 2. Therefore, we should really write

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha, \lambda) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha,\lambda) p(\mathbf{\Theta}_\alpha|\lambda)}{p(d_\alpha|\lambda)},

where \lambda denotes which model we are considering.

This is an important point to remember: if you are using our LIGO results to test your theory of binary formation, you need to remember to correct for our choice of prior. We try to pick non-informative priors—priors that don't make strong assumptions about the physics of the source—but this doesn't mean that they match what would be expected from your model.

We are interested in the probability distribution for the different models: how many binaries come from each. Given a set of different observations \{d_\alpha\}, we can work this out using another application of Bayes’ theorem (yay)

\displaystyle p(\mathbf{\lambda}|\{d_\alpha\}) = \frac{p(\{d_\alpha\} | \mathbf{\lambda}) p(\mathbf{\lambda})}{p(\{d_\alpha\})},

where p(\{d_\alpha\} | \mathbf{\lambda}) is just all the evidences for the individual events (given that model) multiplied together, p(\mathbf{\lambda}) is our prior for the different models, and p(\{d_\alpha\}) is another normalisation constant.

Now knowing how to go from a set of observations to the probability distribution on the different channels, let’s give it a go!
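
To make the fractions discussed below concrete, here is a toy sketch of my own with invented numbers (not the real analysis): if f_k is the fraction of binaries from model k and Z_{\alpha k} is the evidence for event \alpha under model k, the likelihood for the fractions is the product over events of the fraction-weighted evidences, and we can explore the posterior on the fractions with a simple Monte Carlo.

import numpy as np

# Toy hierarchical inference for the mixture fractions of four populations.
# evidences[a, k] is a made-up evidence for event a under model k.
rng = np.random.default_rng(2)
n_events, n_models = 40, 4
origins = rng.choice(n_models, size=n_events)   # each event's true model
evidences = rng.uniform(0.5, 1.0, size=(n_events, n_models))
evidences[np.arange(n_events), origins] *= 3.0  # events mildly favour their true model

def log_likelihood(fractions):
    """log of prod_a sum_k f_k Z_ak for mixture fractions f."""
    return np.sum(np.log(evidences @ fractions))

# Importance-sample the posterior using a flat Dirichlet prior on the fractions.
samples = rng.dirichlet(np.ones(n_models), size=20000)
log_weights = np.array([log_likelihood(f) for f in samples])
weights = np.exp(log_weights - log_weights.max())
posterior_mean = weights @ samples / weights.sum()
print("posterior mean fractions:", np.round(posterior_mean, 2))

With more events (and more sharply peaked evidences) the posterior tightens around the true fractions, which is the behaviour shown in the GIF below.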

Results

To test our approach, we made a set of mock gravitational wave measurements. We generated signals from binaries for each of our four models, and analysed these as we would for real signals (using LALInference). This is rather computationally expensive, and we wanted a large set of events to analyse, so using these results as a guide, we created a larger catalogue of approximate distributions for the inferred source parameters p(\mathbf{\Theta}_\alpha|d_\alpha). We then fed these through our hierarchical analysis. The GIF below shows how measurements of the fraction of binaries from each population tighten up as we get more detections: the true fraction is marked in blue.

Fraction of binaries from each of the four models

Probability distribution for the fraction of binaries from each of our four spin misalignment populations for different numbers of observations. The blue dot marks the true fraction: an equal fraction from all four channels.

The plot shows that we do zoom in towards the true fraction of events from each model as the number of events increases, but there are significant degeneracies between the different models. Notably, it is difficult to tell apart Models 1 and 3, as both have strong support for both spins being nearly aligned. Similarly, there is a degeneracy between Models 2 and 4 as both allow for the two spins to have very different misalignments (and for the primary spin, which is the better measured one, to be quite significantly misaligned).

This means that we should be able to distinguish aligned from misaligned populations (we estimated that as few as 5 events would be needed to distinguish the case that all events came from either Model 1 or Model 2 if those were the only two allowed possibilities). However, it will be more difficult to distinguish different scenarios which only lead to small misalignments from each other, or disentangle whether there is significant misalignment due to big supernova kicks or because binaries are formed dynamically.

The uncertainty of the fraction of events from each model scales roughly with the square root of the number of observations, so it may be slow progress making these measurements. I'm not sure whether we'll know the answer to how binary black holes form, or who will sit on the Iron Throne first.

arXiv: 1703.06873 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 471(3):2801–2811; 2017
Birmingham science summary: Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignment (by Simon)
If you like this you might like: Farr et al. (2017), Talbot & Thrane (2017), Vitale et al. (2017), Trifirò et al. (2016), Minogue (2000)

Bonus notes

Spin misalignments and formation histories

If you have two stars forming in a binary together, you'd expect them to be spinning in roughly the same direction, rotating the same way as they go round in their orbit (like our Solar System). This is because they all formed from the same cloud of swirling gas and dust. Furthermore, if two stars are to form a black hole binary that we can detect gravitational waves from, they need to be close together. This means that there can be tidal forces which gently tug the stars to align their rotation with the orbit. As they get older, stars puff up, meaning that if you have a close-by neighbour, you can share outer layers. This transfer of material will tend to align the rotation too. Adding this all together, if you have an isolated binary of stars, you might expect that when they collapse down to become black holes, their spins are aligned with each other and the orbit.

Unfortunately, real astrophysics is rarely so clean. Even if the stars were initially rotating the same way as each other, that doesn't mean that their black hole remnants will do the same. This depends upon how the star collapses. Massive stars explode as supernovae, blasting off their outer layers while their cores collapse down to form black holes. Escaping material could carry away angular momentum, meaning that the black hole is spinning in a different direction to its parent star, or material could be blasted off asymmetrically, giving the new black hole a kick. This would change the plane of the binary's orbit, misaligning the spins.

Alternatively, the binary could be formed dynamically. Instead of two stars living their lives together, we could have two stars (or black holes) come close enough together to form a binary. This is likely to happen in regions where there’s a high density of stars, such as a globular cluster. In this case, since the binary has been randomly assembled, there’s no reason for the spins to be aligned with each other or the orbit. For dynamically assembled binaries, all spin–orbit misalignments are equally probable.

Slow and steady

This project was led by Simon Stevenson. It was one of the first things we started working on at the beginning of his PhD. He has now graduated, and is off to start a new exciting life as a postdoc in Australia. We got a little distracted by other projects, most notably analysing the first detections of gravitational waves. Simon spent a lot of time developing the COMPAS population code, a code to simulate the evolution of binaries. Looking back, it's impressive how far he's come. This paper used a simple approximation to estimate the masses of our black holes: we called it the Post-it note model, as we wrote it down on a single Post-it. Now Simon's writing papers including the complexities of common-envelope evolution in order to explain LIGO's actual observations.