# GW150914—The papers II

GW150914, The Event to its friends, was our first direct observation of gravitational waves. To accompany the detection announcement, the LIGO Scientific and Virgo Collaborations put together a suite of companion papers, each looking at a different aspect of the detection and its implications. Some of the work we wanted to do was not finished at the time of the announcement; in this post I’ll go through the papers we have produced since the announcement.

### The papers

I’ve listed the papers below in an order that makes sense to me when considering them together. Each started off as an investigation to check that we really understood the signal and were confident that the inferences made about the source were correct. We had preliminary results for each at the time of the announcement. Since then, the papers have evolved to fill different niches [bonus points note].

#### 13. The Basic Physics Paper

Title: The basic physics of the binary black hole merger GW150914
arXiv:
1608.01940 [gr-qc]
Journal:
Annalen der Physik; 529(1–2):1600209(17); 2017

The Event was loud enough to spot by eye after some simple filtering (provided that you knew where to look). You can therefore figure out some things about the source with back-of-the-envelope calculations. In particular, you can convince yourself that the source must be two black holes. This paper explains these calculations at a level suitable for a keen high-school or undergraduate physics student.

More details: The Basic Physics Paper summary

#### 14. The Precession Paper

Title: Improved analysis of GW150914 using a fully spin-precessing waveform model
arXiv:
1606.01210 [gr-qc]
Journal:
Physical Review X; 6(4):041014(19); 2016

To properly measure the properties of GW150914’s source, you need to compare the data to predicted gravitational-wave signals. In the Parameter Estimation Paper, we did this using two different waveform models. These models include lots of the features of binary black hole mergers, but not quite everything. In particular, they don’t include all the effects of precession (the wibbling of the orbit because of the black holes’ spins). In this paper, we analyse the signal using a model that includes all the precession effects. We find results which are consistent with our initial ones.

More details: The Precession Paper summary

#### 15. The Systematics Paper

Title: Effects of waveform model systematics on the interpretation of GW150914
arXiv:
1611.07531 [gr-qc]
Journal:
Classical & Quantum Gravity; 34(10):104002(48); 2017

To check how well our waveform models can measure the properties of the source, we repeat the parameter-estimation analysis on some synthetic signals. These fake signals are calculated using numerical relativity, and so should include all the relevant pieces of physics (even those missing from our models). This paper checks to see if there are any systematic errors in results for a signal like GW150914. It looks like we’re OK, but this won’t always be the case.

More details: The Systematics Paper summary

#### 16. The Numerical Relativity Comparison Paper

Title: Directly comparing GW150914 with numerical solutions of Einstein’s equations for binary black hole coalescence
arXiv:
1606.01262 [gr-qc]
Journal:
Physical Review D; 94(6):064035(30); 2016

Since GW150914 was so short, we can actually compare the data directly to waveforms calculated using numerical relativity. We only have a handful of numerical relativity simulations, but these are enough to give an estimate of the properties of the source. This paper reports the results of this investigation. Unsurprisingly, given all the other checks we’ve done, we find that the results are consistent with our earlier analysis.

If you’re interested in numerical relativity, this paper also gives a nice brief introduction to the field.

More details: The Numerical Relativity Comparison Paper summary

### The Basic Physics Paper

Synopsis: Basic Physics Paper
Read this if: You are teaching a class on gravitational waves
Favourite part: This is published in Annalen der Physik, the same journal that Einstein published some of his monumental work on both special and general relativity

It’s fun to play with LIGO data. The LIGO Open Science Center (LOSC) has put together a selection of tutorials to show you some of the basics of analysing signals. I wouldn’t blame you if you went off to try them now, instead of reading the rest of this post. Even though it would mean that no-one read this sentence. Purple monkey dishwasher.

The LOSC tutorials show you how to make your own version of some of the famous plots from the detection announcement. This paper explains how to go from these, using the minimum of theory, to some inferences about the signal’s source: most significantly that it must be the merger of two black holes.

GW150914 is a chirp. It sweeps up from low frequency to high. This is what you would expect of a binary system emitting gravitational waves. The gravitational waves carry away energy and angular momentum, causing the binary’s orbit to shrink. This means that the orbital period gets shorter, and the orbital frequency higher. The gravitational wave frequency is twice the orbital frequency (for circular orbits), so this goes up too.

The rate of change of the frequency depends upon the system’s mass. To first approximation, it is determined by the chirp mass,

$\displaystyle \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}$,

where $m_1$ and $m_2$ are the masses of the two components of the binary. By looking at the signal (go on, try the LOSC tutorials), we can estimate the gravitational wave frequency $f_\mathrm{GW}$ at different times, and so track how it changes. You can rewrite the equation for the rate of change of the gravitational wave frequency $\dot{f}_\mathrm{GW}$, to give an expression for the chirp mass

$\displaystyle \mathcal{M} = \frac{c^3}{G}\left(\frac{5}{96} \pi^{-8/3} f_\mathrm{GW}^{-11/3} \dot{f}_\mathrm{GW}\right)^{3/5}$.

Here $c$ and $G$ are the speed of light and the gravitational constant, which usually pop up in general relativity equations. If you use this formula (perhaps by fitting for the trend in $f_\mathrm{GW}$) you can get an estimate for the chirp mass. By fiddling with your fit, you’ll see there is some uncertainty, but you should end up with a value around $30 M_\odot$ [bonus note].
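If you want to try this yourself, here is a minimal sketch of the calculation in Python. The frequency and its rate of change are illustrative numbers of roughly the right size for the late inspiral of GW150914, not values read off the real data:

```python
# Back-of-the-envelope chirp-mass estimate. The inputs are illustrative
# values of about the right size for GW150914's late inspiral, not real
# measurements from the LIGO data.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def chirp_mass(f_gw, f_gw_dot):
    """Chirp mass (kg) from the GW frequency (Hz) and its rate of change (Hz/s)."""
    return (c**3 / G) * (5.0 / 96.0 * math.pi**(-8.0 / 3.0)
                         * f_gw**(-11.0 / 3.0) * f_gw_dot)**(3.0 / 5.0)

# Near the end of the inspiral, the signal sweeps up through ~100 Hz at a
# few thousand hertz per second.
m_chirp = chirp_mass(100.0, 3600.0)
print(m_chirp / M_SUN)  # roughly 30 solar masses
```

Fiddling with the inputs shows how robust the estimate is: the chirp mass only scales as $\dot{f}_\mathrm{GW}^{3/5}$, so even a factor-of-two error in the measured sweep rate changes the answer by only about 50%.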

Next, let’s look at the peak gravitational wave frequency (where the signal is loudest). This should be when the binary finally merges. The peak is at about $150~\mathrm{Hz}$. The orbital frequency is half this, so $f_\mathrm{orb} \approx 75~\mathrm{Hz}$. The orbital separation $R$ is related to the frequency by

$\displaystyle R = \left[\frac{GM}{(2\pi f_\mathrm{orb})^2}\right]^{1/3}$,

where $M = m_1 + m_2$ is the binary’s total mass. This formula is only strictly true in Newtonian gravity, and not in full general relativity, but it’s still a reasonable approximation. We can estimate a value for the total mass from our chirp mass; if we assume the two components are about the same mass, then $M = 2^{6/5} \mathcal{M} \approx 70 M_\odot$. We now want to compare the binary’s separation to the size of a black hole with the same mass. A typical size for a black hole is given by the Schwarzschild radius

$\displaystyle R_\mathrm{S} = \frac{2GM}{c^2}$.

If we divide the binary separation by the Schwarzschild radius we get the compactness $\mathcal{R} = R/R_\mathrm{S} \approx 1.7$. A compactness of $\sim 1$ could only happen for black holes. We could maybe get a binary made of two neutron stars to have a compactness of $\sim2$, but the system is too heavy to contain two neutron stars (which have a maximum mass of about $3 M_\odot$). The system is so compact, it must contain black holes!
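The whole compactness argument fits in a few lines of Python. The numbers below are the rough estimates from the text ($f_\mathrm{GW} \approx 150~\mathrm{Hz}$ at peak, $M \approx 70 M_\odot$), so treat this as a back-of-the-envelope sketch rather than a proper analysis:

```python
# Compactness check: compare the binary separation at peak frequency with
# the Schwarzschild radius of a black hole of the same total mass.
# Input values are the rough estimates from the text, not fitted results.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

f_gw_peak = 150.0            # Hz, peak gravitational-wave frequency
f_orb = f_gw_peak / 2.0      # Hz, orbital frequency (circular orbit)
M = 70.0 * M_SUN             # total mass, from M = 2^(6/5) * chirp mass

# Newtonian (Keplerian) orbital separation at the peak frequency.
R = (G * M / (2.0 * math.pi * f_orb)**2)**(1.0 / 3.0)

# Schwarzschild radius for the total mass.
R_S = 2.0 * G * M / c**2

print(R / R_S)  # compactness ~ 1.7
```

Since the result is of order unity, the two objects must be orbiting within a couple of Schwarzschild radii of each other just before merger.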

What I especially like about the compactness is that it is unaffected by cosmological redshifting. The expansion of the Universe will stretch the gravitational wave, such that the frequency gets lower. This impacts our estimates for the true orbital frequency and the masses, but these cancel out in the compactness. There’s no arguing that we have a highly relativistic system.
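To see this cancellation explicitly: the expansion lowers all observed frequencies by a factor of $(1+z)$, which in turn means the masses we infer from those frequencies are too big by the same factor,

$\displaystyle f_\mathrm{obs} = \frac{f_\mathrm{source}}{1+z}, \qquad M_\mathrm{det} = (1+z)M_\mathrm{source}$.

Feeding these into the expressions for the separation and the Schwarzschild radius,

$\displaystyle R_\mathrm{det} = \left[\frac{G(1+z)M_\mathrm{source}}{(2\pi f_\mathrm{orb,source}/(1+z))^2}\right]^{1/3} = (1+z)R, \qquad R_\mathrm{S,det} = \frac{2G(1+z)M_\mathrm{source}}{c^2} = (1+z)R_\mathrm{S}$,

both lengths are stretched by the same factor of $(1+z)$, so the compactness $\mathcal{R} = R/R_\mathrm{S}$ comes out the same whatever the redshift.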

You might now be wondering what if we don’t assume the binary is equal mass (you’ll find it becomes even more compact), or if we factor in black hole spin, or orbital eccentricity, or that the binary will lose mass as the gravitational waves carry away energy? The paper looks at these and shows that there is some wiggle room, but the signal really constrains you to have black holes. This conclusion is almost as inescapable as a black hole itself.

There are a few things which annoy me about this paper—I think it could have been more polished; “Virgo” is improperly capitalised on the author line, and some of the figures are needlessly shabby. However, I think it is a fantastic idea to put together an introductory paper like this which can be used to show students how you can deduce some properties of GW150914’s source with some simple data analysis. I’m happy to be part of a Collaboration that values communicating our science to all levels of expertise, not just writing papers for specialists!

During my undergraduate degree, there was only a single lecture on gravitational waves [bonus note]. I expect the topic will become more popular now. If you’re putting together such a course and are looking for some simple exercises, this paper might come in handy! Or if you’re a student looking for some project work this might be a good starting reference—bonus points if you put together some better looking graphs for your write-up.

If this paper has whetted your appetite for understanding how different properties of the source system leave an imprint in the gravitational wave signal, I’d recommend looking at the Parameter Estimation Paper for more.

### The Precession Paper

Synopsis: Precession Paper
Read this if: You want our most detailed analysis of the spins of GW150914’s black holes
Favourite part: We might have previously over-estimated our systematic error

The Basic Physics Paper explained how you could work out some properties of GW150914’s source with simple calculations. These calculations are rather rough, and lead to estimates with large uncertainties. To do things properly, you need templates for the gravitational wave signal. This is what we did in the Parameter Estimation Paper.

In our original analysis, we used two different waveforms:

• The first we referred to as EOBNR, short for the lengthy technical name SEOBNRv2_ROM_DoubleSpin. In short: This includes the spins of the two black holes, but assumes they are aligned such that there’s no precession. In detail: The waveform is calculated by using effective-one-body dynamics (EOB), an approximation for the binary’s motion calculated by transforming the relevant equations into those for a single object. The S at the start stands for spin: the waveform includes the effects of both black holes having spins which are aligned (or antialigned) with the orbital angular momentum. Since the spins are aligned, there’s no precession. The EOB waveforms are tweaked (or calibrated, if you prefer) by comparing them to numerical relativity (NR) waveforms, in particular to get the merger and ringdown portions of the waveform right. While it is easier to solve the EOB equations than full NR simulations, they still take a while. To speed things up, we use a reduced-order model (ROM), a surrogate model constructed to match the waveforms, so we can go straight from system parameters to the waveform, skipping calculating the dynamics of the binary.
• The second we refer to as IMRPhenom, short for the technical IMRPhenomPv2. In short: This waveform includes the effects of precession using a simple approximation that captures the most important effects. In detail: The IMR stands for inspiral–merger–ringdown, the three phases of the waveform (which are included in the EOBNR model too). Phenom is short for phenomenological: the waveform model is constructed by tuning some (arbitrary, but cunningly chosen) functions to match waveforms calculated using a mix of EOB, NR and post-Newtonian theory. This is done for black holes with (anti)aligned spins to first produce the IMRPhenomD model. This is then twisted up, to include the dominant effects of precession to make IMRPhenomPv2. This bit is done by combining the two spins together to create a single parameter, which we call $\chi_\mathrm{p}$, which determines the amount of precession. Since we are combining the two spins into one number, we lose a bit of the richness of the full dynamics, but we get the main part.
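For concreteness, here is a sketch of how the two effective spin parameters mentioned in this post are built. I am using the standard definitions (the mass-weighted aligned combination for $\chi_\mathrm{eff}$, and the IMRPhenomPv2-style in-plane combination for $\chi_\mathrm{p}$); the input numbers are made up for illustration, so check the papers for the precise conventions:

```python
# Sketch of the effective spin parameters: chi_eff (aligned-spin
# combination) and chi_p (precession combination, in the usual
# IMRPhenomPv2-style form). Input values are illustrative only.
import math

def chi_eff(m1, m2, a1, a2, tilt1, tilt2):
    """Effective inspiral spin: mass-weighted aligned spin components."""
    return (m1 * a1 * math.cos(tilt1) + m2 * a2 * math.cos(tilt2)) / (m1 + m2)

def chi_p(m1, m2, a1, a2, tilt1, tilt2):
    """Effective precession spin (assumes m1 >= m2)."""
    q = m2 / m1                      # mass ratio <= 1
    b1 = 2.0 + 1.5 * q               # weighting for the primary
    b2 = 2.0 + 1.5 / q               # weighting for the secondary
    s1_perp = a1 * m1**2 * math.sin(tilt1)   # in-plane spin, primary
    s2_perp = a2 * m2**2 * math.sin(tilt2)   # in-plane spin, secondary
    return max(b1 * s1_perp, b2 * s2_perp) / (b1 * m1**2)

# Two perfectly aligned spins give chi_p = 0 (no precession)...
print(chi_p(36.0, 29.0, 0.3, 0.5, 0.0, 0.0))    # 0.0
# ...while chi_eff is just the mass-weighted average of the aligned spins.
print(chi_eff(36.0, 29.0, 0.3, 0.5, 0.0, 0.0))  # about 0.39
```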

The EOBNR and IMRPhenom models are created by different groups using different methods, so they are useful checks of each other. If there is an error in our waveforms, it would lead to systematic errors in our estimated parameters.

In this paper, we use another waveform model, a precessing EOBNR waveform, technically known as SEOBNRv3. This model includes all the effects of precession, not just the simplified treatment used in the IMRPhenom model. However, it is also computationally expensive, meaning that the analysis takes a long time (we don’t have a ROM to speed things up, as we do for the other EOBNR waveform)—each waveform takes over 20 times as long to calculate as the IMRPhenom model [bonus note].

Our results show that all three waveforms give similar results. The precessing EOBNR results are generally more like the IMRPhenom results than the non-precessing EOBNR results are. The plot below compares results from the different waveforms [bonus note].

Comparison of parameter estimates for GW150914 using different waveform models. The bars show the 90% credible intervals, the dark bars show the uncertainty on the 5%, 50% and 95% quantiles from the finite number of posterior samples. The top bar is for the non-precessing EOBNR model, the middle is for the precessing IMRPhenom model, and the bottom is for the fully precessing EOBNR model. Figure 1 of the Precession Paper; see Figure 9 for a comparison of averaged EOBNR and IMRPhenom results, which we have used for our overall results.

We had used the difference between the EOBNR and IMRPhenom results to estimate potential systematic error from waveform modelling. Since the two precessing models are generally in better agreement, we may have been too pessimistic here.

The main difference in results is that our new refined analysis gives tighter constraints on the spins. From the plot above you can see that the uncertainties for the spin magnitudes of the heavier black hole $a_1$, the lighter black hole $a_2$ and the final black hole (resulting from the coalescence) $a_\mathrm{f}$ are slightly narrower. This makes sense, as including the extra imprint from the full effects of precession gives us a bit more information about the spins. The plots below show the constraints on the spins from the two precessing waveforms: the distributions are more condensed with the new results.

Comparison of orientations and magnitudes of the two component spins. The spin is perfectly aligned with the orbital angular momentum if the angle is 0. The left disk shows results using the precessing IMRPhenom model, the right using the precessing EOBNR model. In each, the distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Adapted from Figure 5 of the Parameter Estimation Paper and Figure 4 of the Precession Paper.

In conclusion, this analysis has shown that including the full effects of precession does give slightly better estimates of the black hole spins. However, it is safe to trust the IMRPhenom results.

If you are looking for the best parameter estimates for GW150914, these results are better than the original results in the Parameter Estimation Paper. However, I would prefer the results in the O1 Binary Black Hole Paper, even though this doesn’t use the fully precessing EOBNR waveform, because we do use an updated calibration of the detector data. Neither the choice of waveform nor the calibration makes much of an impact on the results, so for most uses it shouldn’t matter too much.

### The Systematics Paper

Synopsis: Systematics Paper
Read this if: You want to know how parameter estimation could fare for future detections
Favourite part: There’s no need to panic yet

The Precession Paper highlighted how important it is to have good waveform templates. If there is an error in our templates, either because of modelling or because we are missing some physics, then our estimated parameters could be wrong—we would have a source of systematic error.

We know our waveform models aren’t perfect, so there must be some systematic error, the question is how much? From our analysis so far (such as the good agreement between different waveforms in the Precession Paper), we think that systematic error is less significant than the statistical uncertainty which is a consequence of noise in the detectors. In this paper, we try to quantify systematic error for GW150914-like systems.

To assess systematic errors, we inject waveforms calculated by numerical relativity simulations into data around the time of GW150914. Numerical relativity exactly solves Einstein’s field equations (which govern general relativity), so the results of these simulations give the most accurate predictions for the form of gravitational waves. As we know the true parameters for the injected waveforms, we can compare these to the results of our parameter estimation analysis to check for biases.

We use waveforms computed by two different codes: the Spectral Einstein Code (SpEC) and the Bifunctional Adaptive Mesh (BAM) code. (Don’t the names make them sound like such fun?) Most waveforms are injected into noise-free data, so that we know that any offset in estimated parameters is due to the waveforms and not detector noise; however, we also tried a few injections into real data from around the time of GW150914. The signals are analysed using our standard set-up as used in the Parameter Estimation Paper (a couple of injections are also included in the Precession Paper, where they are analysed with the fully precessing EOBNR waveform to illustrate its accuracy).

The results show that in most cases, systematic errors from our waveform models are small. However, systematic errors can be significant for some orientations of precessing binaries. If we are looking at the orbital plane edge on, then there can be errors in the distance, the mass ratio and the spins, as illustrated below [bonus note]. Thankfully, edge-on binaries are quieter than face-on binaries, and so should make up only a small fraction of detected sources (GW150914 is most probably face off). Furthermore, biases are only significant for some polarization angles (an angle which describes the orientation of the detectors relative to the stretch/squash of the gravitational wave polarizations). Factoring this in, a rough estimate is that about 0.3% of detected signals would fall into the unlucky region where waveform biases are important.

Parameter estimation results for two different GW150914-like numerical relativity waveforms for different inclinations and polarization angles. An inclination of $0^\circ$ means the binary is face on, $180^\circ$ means it face off, and an inclination around $90^\circ$ is edge on. The bands show the recovered 90% credible interval; the dark lines the median values, and the dotted lines show the true values. The (grey) polarization angle $\psi = 82^\circ$ was chosen so that the detectors are approximately insensitive to the $h_+$ polarization. Figure 4 of the Systematics Paper.

While it seems that we don’t have to worry about waveform error for GW150914, this doesn’t mean we can relax. Other systems may show up different aspects of waveform models. For example, our approximants only include the dominant modes (spherical harmonic decompositions of the gravitational waves). Higher-order modes have more of an impact in systems where the two black holes are unequal masses, or where the binary has a higher total mass, so that the merger and ringdown parts of the waveform are more important. We need to continue work on developing improved waveform models (or at least, including our uncertainty about them in our analysis), and remember to check for biases in our results!

### The Numerical Relativity Comparison Paper

Synopsis: Numerical Relativity Comparison Paper
Read this if: You are really suspicious of our waveform models, or really like long tables of numerical data
Favourite part: We might one day have enough numerical relativity waveforms to do full parameter estimation with them

In the Precession Paper we discussed how important it was to have accurate waveforms; in the Systematics Paper we analysed numerical relativity waveforms to check the accuracy of our results. Since we do have numerical relativity waveforms, you might be wondering why we don’t just use these in our analysis? In this paper, we give it a go.

Our standard parameter-estimation code (LALInference) randomly hops around parameter space; for each set of parameters, we generate a new waveform and see how well it matches the data. This is an efficient way of exploring the parameter space. Numerical relativity waveforms are too computationally expensive to generate one each time we hop. We need a different approach.
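The hopping idea can be illustrated with a toy Metropolis sampler. This is emphatically not LALInference: the likelihood below is a made-up one-parameter stand-in, just to show the propose–accept–hop loop:

```python
# Toy illustration of "hopping around parameter space": a Metropolis
# random walk over a single parameter. The likelihood is a stand-in
# (NOT a real gravitational-wave likelihood).
import math
import random

def log_likelihood(chirp_mass):
    # Stand-in: pretend the data prefer a chirp mass near 30, width 2.
    return -0.5 * ((chirp_mass - 30.0) / 2.0)**2

random.seed(42)
current = 20.0                     # starting guess, deliberately off
samples = []
for _ in range(20000):
    proposal = current + random.gauss(0.0, 1.0)   # random hop
    # Accept with probability min(1, L_new / L_old) (Metropolis rule).
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(current):
        current = proposal
    samples.append(current)

mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
print(round(mean, 1))                             # near 30
```

In the real analysis, each call to the likelihood requires generating a full waveform template, which is why expensive waveform models (or ones that cannot be generated on demand, like numerical relativity) are such a problem.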

The alternative is to use existing waveforms, and see how well each of them matches. Each simulation gives the gravitational waves for a particular mass ratio and combination of spins; we can scale the waves to examine different total masses, and it is easy to consider what the waves would look like if measured at a different position (distance, inclination or sky location). Therefore, we can actually cover a fair range of possible parameters with a given set of simulations.

To keep things quick, the code averages over positions; this means we don’t currently get an estimate of the redshift, and so all the masses are given as measured in the detector frame and not as the intrinsic masses of the source.

The number of numerical relativity simulations is still quite sparse, so to get nice credible regions, a simple Gaussian fit is used for the likelihood. I’m not convinced that this captures all the detail of the true likelihood, but it should suffice for a broad estimate of the width of the distributions.

The results of this analysis generally agree with those from our standard analysis. This is a relief, but not surprising given all the other checks that we have done! It hints that we might be able to get slightly better measurements of the spins and mass ratios if we used more accurate waveforms in our standard analysis, but the overall conclusions are sound.

I’ve been asked whether, since these results use numerical relativity waveforms, they are the best to use. My answer is no. As well as potential error from the sparse sampling of simulations, there are several small things to be wary of.

• We only have short numerical relativity waveforms. This means that the analysis only goes down to a frequency of $30~\mathrm{Hz}$ and ignores earlier cycles. The standard analysis includes data down to $20~\mathrm{Hz}$, and this extra data does give you a little information about precession. (The limit of the simulation length also means you shouldn’t expect this type of analysis for the longer LVT151012 or GW151226 any time soon).
• This analysis doesn’t include the effects of calibration uncertainty. There is some uncertainty in how to convert from the measured signal at the detectors’ output to the physical strain of the gravitational wave. Our standard analysis folds this in, but that isn’t done here. The estimates of the spin can be affected by miscalibration. (This paper also uses the earlier calibration, rather than the improved calibration of the O1 Binary Black Hole Paper).
• Despite numerical relativity simulations producing waveforms which include all the higher modes, not all of them are actually used in the analysis. More are included than in the standard analysis, so this should make a negligible difference.

Finally, I wanted to mention one more detail, as I think it is not widely appreciated. The gravitational wave likelihood is given by an inner product

$\displaystyle L \propto \exp \left[- \int_{-\infty}^{\infty} \mathrm{d}f \frac{|s(f) - h(f)|^2}{S_n(f)} \right]$,

where $s(f)$ is the signal, $h(f)$ is our waveform template and $S_n(f)$ is the noise power spectral density (PSD). These are the three things we need to know to get the right answer. This paper, together with the Precession Paper and the Systematics Paper, has been looking at error from our waveform models $h(f)$. Uncertainty from the calibration of $s(f)$ is included in the standard analysis, so we know how to factor this in (and people are currently working on more sophisticated models for calibration error). This leaves the noise PSD $S_n(f)$.
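In practice, the integral is evaluated as a sum over discrete frequency bins. A toy discretised version (with made-up numbers, purely to show the shape of the calculation) looks like:

```python
# Discrete version of the likelihood inner product: the integral over
# frequency becomes a sum over frequency bins. All values here are toy
# numbers, just to show the structure of the calculation.
df = 0.25                          # frequency resolution, Hz
s = [1.0, 2.0, 1.5, 0.5]           # toy |signal| values per bin
h = [0.9, 2.1, 1.4, 0.6]           # toy |template| values per bin
psd = [0.1, 0.1, 0.2, 0.2]         # toy noise PSD per bin

# log L = - sum_k |s_k - h_k|^2 / S_n(f_k) * df   (up to a constant)
log_l = -sum((sk - hk)**2 / pk for sk, hk, pk in zip(s, h, psd)) * df
print(log_l)
```

The structure makes the point of this section clear: the template $h$, the calibrated data $s$ and the PSD $S_n$ all enter on an equal footing, so an error in any one of them shifts the likelihood.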

The noise PSD varies all the time, so it needs to be estimated from the data. If you use a different stretch of data, you’ll get a different estimate, and this will impact your results. Ideally, you would want to estimate from the time span that includes the signal itself, but that’s tricky as there’s a signal in the way. The analysis in this paper calculates the noise power spectral density using a different time span and a different method than our standard analysis; therefore, we expect some small difference in the estimated parameters. This might be comparable to (or even bigger than) the difference from switching waveforms! We see from the similarity of results that this cannot be a big effect, but it means that you shouldn’t obsess over small differences, thinking that they could be due to waveform differences, when they could just come from estimation of the noise PSD.
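As a concrete illustration of why different data stretches give different $S_n(f)$ estimates, here is a minimal Welch-style PSD estimate (averaging periodograms of overlapping segments). Real analyses use carefully chosen off-source data and more sophisticated methods, so treat this as a sketch:

```python
# Minimal Welch-style PSD estimate: average windowed periodograms of
# overlapping segments. White Gaussian noise stands in for detector data.
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                          # sample rate, Hz
x = rng.standard_normal(fs * 8)    # 8 s of unit-variance white noise

nseg = 4096                        # 1 s segments
window = np.hanning(nseg)
norm = fs * np.sum(window**2)      # PSD normalisation for this window

periodograms = []
for start in range(0, len(x) - nseg + 1, nseg // 2):  # 50% overlap
    seg = x[start:start + nseg] * window
    periodograms.append(np.abs(np.fft.rfft(seg))**2 / norm)

psd = 2 * np.mean(periodograms, axis=0)          # one-sided PSD
freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)

# For unit-variance white noise, the one-sided PSD should be flat at 2/fs.
flat = np.mean(psd[1:-1]) * fs / 2
print(flat)  # close to 1
```

Rerunning this with a different seed (a different stretch of "data") gives a slightly different estimate for each bin of `psd`, and that scatter feeds directly into the likelihood above.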

Lots of work is currently going into making sure that the numerator term $|s(f) - h(f)|^2$ is accurate. I think that the denominator $S_n(f)$ needs attention too. Since we have been kept rather busy, including uncertainty in PSD estimation will have to wait for a future set of papers.

### Bonus notes

#### Finches

100 bonus points to anyone who folds up the papers to make beaks suitable for eating different foods.

#### The right answer

Our current best estimate for the chirp mass (from the O1 Binary Black Hole Paper) would be $30.6^{+1.9}_{-1.6} M_\odot$. You need proper templates for the gravitational wave signal to calculate this. If you factor in that the gravitational wave gets redshifted (shifted to lower frequency by the expansion of the Universe), then the true chirp mass of the source system is $28.1^{+1.8}_{-1.5} M_\odot$.

#### Formative experiences

My one undergraduate lecture on gravitational waves was the penultimate lecture of the fourth-year general relativity course. I missed this lecture, as I had a PhD interview (at the University of Birmingham). Perhaps if I had sat through it, my research career would have been different?

#### Good things come…

The computational expense of a waveform is important, as when we are doing parameter estimation, we calculate lots (tens of millions) of waveforms for different parameters to see how they match the data. Before O1, the task of using SEOBNRv3 for parameter estimation seemed quixotic. The first detection, however, was enticing enough to give it a try. It was a truly heroic effort by Vivien Raymond and team that produced these results—I am slightly suspicious that Vivien might actually be a wizard.

GW150914 is a short signal, meaning it is relatively quick to analyse. Still, it required us to use all the tricks at our disposal to get results in a reasonable time. When it came time to submit final results for the Discovery Paper, we had just about 1,000 samples from the posterior probability distribution for the precessing EOBNR waveform. For comparison, we had over 45,000 samples for the non-precessing EOBNR waveform. 1,000 samples isn’t enough to accurately map out the probability distributions, so we decided to wait and collect more samples. The preliminary results showed that things looked similar, so there wouldn’t be a big difference in the science we could do. For the Precession Paper, we finally collected 2,700 samples. This is still a relatively small number, so we carefully checked the uncertainty in our results due to the finite number of samples.

The Precession Paper has shown that it is possible to use the precessing EOBNR for parameter estimation, but don’t expect it to become the norm, at least until we have a faster implementation of it. Vivien is only human, and I’m sure his family would like to see him occasionally.

#### Parameter key

In case you are wondering what all the symbols in the results plots stand for, here are their usual definitions. First up, the various masses

• $m_1$—the mass of the heavier black hole, sometimes called the primary black hole;
• $m_2$—the mass of the lighter black hole, sometimes called the secondary black hole;
• $M$—the total mass of the binary, $M = m_1 + m_2$;
• $M_\mathrm{f}$—the mass of the final black hole (after merger);
• $\mathcal{M}$—the chirp mass, the combination of the two component masses which sets how the binary inspirals together;
• $q$—the mass ratio, $q = m_2/m_1 \leq 1$. Confusingly, numerical relativists often use the opposite convention $q = m_1/m_2 \geq 1$ (which is why the Numerical Relativity Comparison Paper discusses results in terms of $1/q$: we can keep the standard definition, but all the numbers are numerical relativist friendly).

A superscript “source” is sometimes used to distinguish the actual physical masses of the source from those measured by the detector which have been affected by cosmological redshift. The measured detector-frame mass is $m = (1 + z) m^\mathrm{source}$, where $m^\mathrm{source}$ is the true, redshift-corrected source-frame mass and $z$ is the redshift. The mass ratio $q$ is independent of the redshift. On the topic of redshift, we have

• $z$—the cosmological redshift ($z = 0$ would be now);
• $D_\mathrm{L}$—the luminosity distance.
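As a quick illustration of the mass bookkeeping, here is the conversion using the GW150914-like chirp-mass values quoted in the bonus notes above (detector frame $30.6 M_\odot$; the redshift of roughly $0.09$ is inferred from the quoted detector- and source-frame values, not a number stated in this post):

```python
# Converting a detector-frame mass to a source-frame mass by undoing the
# cosmological redshift. The redshift here is approximate, inferred from
# the detector- and source-frame chirp masses quoted in this post.
def source_frame_mass(m_detector, z):
    """m_source = m_detector / (1 + z)."""
    return m_detector / (1.0 + z)

print(round(source_frame_mass(30.6, 0.09), 1))  # about 28.1 solar masses
```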

The luminosity distance sets the amplitude of the signal, as does the orientation which we often describe using

• $\iota$—the inclination, the angle between the line of sight and the orbital angular momentum ($\boldsymbol{L}$). This is zero for a face-on binary.
• $\theta_{JN}$—the angle between the line of sight ($\boldsymbol{N}$) and the total angular momentum of the binary ($\boldsymbol{J}$); this is approximately equal to the inclination, but is easier to use for precessing binaries.

As well as masses, black holes have spins

• $a_1$—the (dimensionless) spin magnitude of the heavier black hole, which is between $0$ (no spin) and $1$ (maximum spin);
• $a_2$—the (dimensionless) spin magnitude of the lighter black hole;
• $a_\mathrm{f}$—the (dimensionless) spin magnitude of the final black hole;
• $\chi_\mathrm{eff}$—the effective inspiral spin parameter, a combination of the two component spins which has the largest impact on the rate of inspiral (think of it as the spin equivalent of the chirp mass);
• $\chi_\mathrm{p}$—the effective precession spin parameter, a combination of spins which indicates the dominant effects of precession; it’s $0$ for no precession and $1$ for maximal precession;
• $\theta_{LS_1}$—the primary tilt angle, the angle between the orbital angular momentum and the heavier black hole’s spin ($\boldsymbol{S_1}$). This is zero for aligned spin.
• $\theta_{LS_2}$—the secondary tilt angle, the angle between the orbital angular momentum and the lighter black hole’s spin ($\boldsymbol{S_2}$).
• $\phi_{12}$—the angle between the projections of the two spins on the orbital plane.

The orientation angles change in precessing binaries (when the spins are not perfectly aligned or antialigned with the orbital angular momentum), so we quote values at a reference time corresponding to when the gravitational wave frequency is $20~\mathrm{Hz}$. Finally (for the plots shown here)

• $\psi$—the polarization angle; this is zero when the detector arms are parallel to the $h_+$ polarization’s stretch/squash axis.

For more detailed definitions, check out the Parameter Estimation Paper or the LALInference Paper.
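To make the two mass–spin combinations above concrete, here is a minimal sketch of how the chirp mass and the effective inspiral spin are built from the component masses and the aligned components of the spins. The numerical values are purely illustrative, not measured parameters from any paper.

```python
def chirp_mass(m1, m2):
    """Chirp mass, in the same units as the component masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def chi_eff(m1, m2, chi1z, chi2z):
    """Effective inspiral spin: mass-weighted sum of the aligned spin components."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

# A roughly GW150914-like pair of masses (solar masses), with made-up spins:
print(chirp_mass(36.0, 29.0))            # about 28 solar masses
print(chi_eff(36.0, 29.0, 0.3, -0.4))    # small and negative
```

The chirp mass weights the masses in the particular combination that sets the rate of inspiral, which is why it is measured so much better than the individual masses.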

# First low frequency all-sky search for continuous gravitational wave signals

It is the time of year for applying for academic jobs and so I have been polishing up my CV. In doing so I spotted that I had missed the publication of one of the LIGO Scientific–Virgo Collaboration papers. In my defence, it was published the week of 8–14 February, which saw the publication of one or two other papers [bonus note]. The paper I was missing is on a search for continuous gravitational waves.

Continuous gravitational waves are near-constant hums. Unlike the chirps of coalescing binaries, continuous signals are always on. We think that they could be generated by rotating neutron stars, assuming that they are not perfectly smooth. This is the first search to look for continuous waves from anywhere on the sky with frequencies below 50 Hz. The gravitational-wave frequency is twice the rotational frequency of the neutron star, so this is the first time we’ve looked for neutron stars spinning slower than 25 times per second (which is still pretty fast; I’d certainly feel more than a little queasy). The search uses data from the second and fourth Virgo Science Runs (VSR2 and VSR4): the detector didn’t behave as well in VSR3, which is why that data isn’t used.

The frequency of a rotating neutron star isn’t quite constant for two reasons. First, as the Earth orbits around the Sun, it moves towards and away from the source, so the signal is Doppler shifted. For a given position on the sky, this can be corrected for, and this is done in the search. Second, the neutron star will slow down (a process known as spin-down) because it loses energy and angular momentum. There are various processes that could slow a neutron star: emitting gravitational waves is one; some form of internal sloshing around is another (which could also cause things to speed up); braking from its magnetic field is a third. We’re not too sure exactly how quickly spin-down will happen, so we search over a range of possible values from $-1.0\times10^{-10}~\mathrm{Hz\,s^{-1}}$ to $+1.5\times10^{-11}~\mathrm{Hz\,s^{-1}}$.
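As a toy illustration of what the search must track, here is a rough sketch of the observed frequency of a continuous wave, combining a linear spin-down with the annual Doppler modulation. The function and all the numbers are my own simplification (a circular Earth orbit, with the source's ecliptic latitude folded into a single `cos_beta` factor), not anything from the paper.

```python
import math

V_ORBIT = 2.978e4   # Earth's mean orbital speed (m/s)
C_LIGHT = 2.998e8   # speed of light (m/s)
YEAR = 3.156e7      # seconds in a year

def observed_frequency(t, f0, fdot, cos_beta=1.0):
    """Toy observed frequency (Hz) at time t (s): intrinsic spin-down fdot
    (Hz/s) plus the annual Doppler modulation for a given sky position."""
    f_source = f0 + fdot * t                        # intrinsic spin-down
    doppler = 1.0 + (V_ORBIT / C_LIGHT) * cos_beta * math.cos(2 * math.pi * t / YEAR)
    return f_source * doppler

# For a 40 Hz source, the Doppler term moves the signal by up to ~4 mHz over a
# year, while a spin-down of -1e-10 Hz/s drifts it by ~3 mHz: both must be
# tracked if the signal is to stay in a narrow frequency bin.
```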

The particular search technique used is called FrequencyHough. This chops the detector output into chunks of time. In each, we calculate how much power is at each frequency. We then look for a pattern where we can spot a signal across different times, allowing for some change from spin-down. Recognising the track of a signal with a consistent frequency evolution is done using a Hough transform, a technique from image processing that is good at spotting lines.
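To give a flavour of the Hough idea, here is a toy sketch (my own construction, not the FrequencyHough code) that recovers a drifting line from a noisy time–frequency map of threshold-crossing peaks: each candidate $(f_0, \dot{f})$ cell collects a vote for every peak lying on its track, and the true track stands out as the cell with the most votes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-frequency "peakmap": True where the power passed a threshold.
n_times, n_freqs = 64, 128
peakmap = rng.random((n_times, n_freqs)) < 0.05    # background noise peaks
true_f0, true_fdot = 40.0, -0.07                   # bins, and bins per chunk
for t in range(n_times):
    peakmap[t, int(round(true_f0 + true_fdot * t))] = True   # injected track

# Hough transform: each (f0, fdot) cell counts the peaks lying on its track.
f0_grid = np.arange(n_freqs)
fdot_grid = np.linspace(-0.2, 0.2, 41)
votes = np.zeros((n_freqs, fdot_grid.size))
for t in range(n_times):
    for j, fdot in enumerate(fdot_grid):
        f = np.round(f0_grid + fdot * t).astype(int)
        ok = (f >= 0) & (f < n_freqs)
        votes[ok, j] += peakmap[t, f[ok]]

i, j = np.unravel_index(votes.argmax(), votes.shape)
print(f0_grid[i], fdot_grid[j])   # recovers the injected track near (40, -0.07)
```

The real search works on actual detector spectra, weights the peaks, and covers sky position as well, but the vote-accumulation idea is the same.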

The search didn’t find any signals. This is not too surprising. Therefore, we did the usual thing of setting some upper limits. The plot below shows 90% confidence limits (that is, where we’d expect to detect 9/10 signals) on the signal amplitude at different frequencies.

90% confidence upper limits on the gravitational-wave strain at different frequencies. Each dot is for a different 1 Hz band. Some bands are noisy and feature instrumental artefacts which have to be excluded from the analysis; these are noted as the filled (magenta) circles. In this case, the upper limit only applies to the part of the band away from the disturbance. Figure 12 of Aasi et al. (2016).

Given that the paper only reports a non-detection, it is rather lengthy. The opening sections do give a nice introduction to continuous waves and how we hunt for them, so this might be a good paper if you’re new to the area but want to learn some of the details. Be warned that it does use $\jmath = \sqrt{-1}$ for some reason. After the introduction, it does get technical, so it’s probably only for insomniacs. However, if you like a good conspiracy and think we might be hiding something, the appendices go through all the details of removing instrumental noise and checking outliers found by the search.

In summary, this was the first low-frequency search for continuous gravitational waves. We didn’t find anything in the best data from the initial detector era, but the advanced detectors will be much more sensitive to this frequency range. Slowly rotating neutron stars can’t hide forever.

arXiv: 1510.03621 [astro-ph.IM]
Journal: Physical Review D; 93(4):042007(25); 2016
Science summary: First search for low frequency continuous gravitational waves emitted by unseen neutron stars
Greatest regret:
I didn’t convince the authors to avoid using “air quotes” around jargon.

### Bonus note

#### Better late than never

I feel less guilty about writing a late blog post about this paper as I know that it has been a long time in the making. As a collaboration, we are careful in reviewing our results; this can sometimes lead to delays in announcing results, but hopefully means that we get the right answer. This paper took over three years to review, a process which included over 85 telecons!

# Going the distance: Mapping host galaxies of LIGO and Virgo sources in three dimensions using local cosmography and targeted follow-up

GW150914 claimed the title of many firsts—it was the first direct observation of gravitational waves, the first observation of a binary black hole system, the first observation of two black holes merging, the first time we’ve tested general relativity in such extreme conditions… However, there are still many firsts for gravitational-wave astronomy yet to come (hopefully, some to be accompanied by cake). One of the most sought after is the first signal to have a clear electromagnetic counterpart—a glow in some part of the spectrum of light (from radio to gamma-rays) that we can observe with telescopes.

Identifying a counterpart is challenging, as it is difficult to accurately localise a gravitational-wave source. Electromagnetic observers must cover a large area of sky before any counterparts fade. Then, if something is found, it can be hard to determine if it is from the same source as the gravitational waves, or something else…

To guide the search, it helps to have as much information as possible about the source. Especially useful is the distance to the source. This can help you plan where to look. For nearby sources, you can cross-reference with galaxy catalogues, and perhaps pick out the biggest galaxies as the most likely locations for the source [bonus note]. Distance can also help plan your observations: you might want to start with regions of the sky where the source would be closer and so easier to spot, or you may want to prioritise points where it is further and so you’d need to observe longer to detect it (I’m not sure there’s a best strategy; it depends on the telescope and the amount of observing time available). In this paper we describe a method to provide easy-to-use distance information, which could be supplied to observers to help their search for a counterpart.

### Going the distance

This work is the first spin-off from the First 2 Years trilogy of papers, which looked at sky localization and parameter estimation for binary neutron stars in the first two observing runs of the advanced-detector era. Binary neutron star coalescences are prime candidates for electromagnetic counterparts as we think there should be a big explosion as they merge. I was heavily involved in the last two papers of the trilogy, but this study was led by Leo Singer: I think I mostly annoyed Leo by being a stickler when it came to writing up the results.

Three-dimensional localization showing the 20%, 50%, and 90% credible levels for a typical two-detector early Advanced LIGO event. The Earth is shown at the centre, marked by $\oplus$. The true location is marked by the cross. Leo poetically described this as looking like the seeds of the jacaranda tree, and less poetically as potato chips. Figure 1 of Singer et al. (2016).

The idea is to provide a convenient means of sharing a 3D localization for a gravitational wave source. The full probability distribution is rather complicated, but it can be made more manageable if you break it up into pixels on the sky. Since astronomers need to decide where to point their telescopes, breaking up the 3D information along different lines of sight should be useful for them.

Each pixel covers a small region of the sky, and along each line of sight, the probability distribution for distance $D$ can be approximated using an ansatz

$\displaystyle p(D|\mathrm{data}) \propto D^2\exp\left[-\frac{(D - \mu)^2}{2\sigma^2}\right]$,

where $\mu$ and $\sigma$ are calculated for each pixel individually. The form of this ansatz can be understood as follows: the posterior probability distribution is proportional to the product of the prior and the likelihood. Our prior is that sources are uniformly distributed in volume, which gives the $D^2$ factor, and the likelihood can often be well approximated as a Gaussian distribution, which gives the other piece [bonus note].
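As a sketch of how an observer might use the per-pixel numbers, here is a minimal evaluation of the ansatz along one line of sight. The $\mu$ and $\sigma$ values are invented for illustration, and I normalise numerically on a grid rather than using the analytic normalisation the actual format carries.

```python
import numpy as np

def distance_posterior(D, mu, sigma):
    """Ansatz p(D) proportional to D^2 exp[-(D - mu)^2 / (2 sigma^2)],
    normalised numerically on the grid D."""
    p = D**2 * np.exp(-0.5 * ((D - mu) / sigma) ** 2)
    return p / (p.sum() * (D[1] - D[0]))   # simple Riemann-sum normalisation

# Hypothetical pixel with mu = 180 Mpc, sigma = 50 Mpc:
D = np.linspace(1.0, 500.0, 2000)
p = distance_posterior(D, mu=180.0, sigma=50.0)

# The D^2 volume prior pushes the peak beyond mu:
print(D[np.argmax(p)])   # a little over 200 Mpc
```

Setting the derivative of $D^2\exp[-(D-\mu)^2/2\sigma^2]$ to zero gives the peak at $(\mu + \sqrt{\mu^2 + 8\sigma^2})/2$, which is why the most probable distance sits beyond $\mu$.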

The ansatz doesn’t always fit perfectly, but it performs well on average. Considering the catalogue of binary neutron star signals used in the earlier papers, we find that roughly 50% of the time sources are found within the 50% credible volume, 90% are found in the 90% volume, etc.
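The calibration statement above can be checked with a simplified, one-dimensional analogue of the self-consistency test: if the analysis is behaving, the credible level at which the true value sits should be uniformly distributed across events. This toy sketch uses idealised, made-up distributions, not the actual binary neutron star catalogue.

```python
import numpy as np

rng = np.random.default_rng(1)

# For each fake event, draw a true value and posterior samples from the same
# distribution (the ideal, perfectly calibrated case), and record the
# credible level at which the truth sits.
levels = []
for _ in range(500):
    truth = rng.normal()
    samples = rng.normal(size=2000)
    levels.append(np.mean(samples < truth))
levels = np.array(levels)

# If everything is consistent, ~50% of truths lie below the 50% level,
# ~90% below the 90% level, and so on.
print(np.mean(levels < 0.5), np.mean(levels < 0.9))
```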

The 3D localization is easy to calculate, and Leo has worked out a cunning way to evaluate the ansatz with BAYESTAR, our rapid sky localization code, meaning that we can produce it on minute time-scales. This means that observers should have something to work with straight away, even if we’ll need to wait a while for the full, final results. We hope that this will improve prospects for finding counterparts—some potential examples are sketched out in the penultimate section of the paper.

If you are interested in trying out the 3D information, there is a data release and the supplement contains a handy Python tutorial. We are hoping that the Collaboration will use the format for alerts for LIGO and Virgo’s upcoming observing run (O2).

arXiv: 1603.07333 [astro-ph.HE]; 1605.04242 [astro-ph.IM]
Journal: Astrophysical Journal Letters; 829(1):L15(7); 2016; Astrophysical Journal Supplement Series; 226(1):10(8); 2016
Data release: Going the distance
Favourite crisp flavour: Salt & vinegar
Favourite jacaranda: Jacaranda mimosifolia

### Bonus notes

#### Catalogue shopping

The Event’s source has a luminosity distance of around 250–570 Mpc. This is sufficiently distant that galaxy catalogues are incomplete and not much use when it comes to searching. GW151226 and LVT151012 have similar problems, being at around the same distance or even further.

#### The gravitational-wave likelihood

For the professionals interested in understanding more about the shape of the likelihood, I’d recommend Cutler & Flanagan (1994). This is a fantastic paper which contains many clever things [bonus bonus note]. This work is really the foundation of gravitational-wave parameter estimation. From it, you can see how the likelihood can be approximated as a Gaussian. The uncertainty can then be evaluated using Fisher matrices. Many studies have been done using Fisher matrices, but it is important to check that this is a valid approximation, as nicely explained in Vallisneri (2008). I ran into a case where it wasn’t valid during my PhD.

#### Mergin’

As a reminder that smart people make mistakes, Cutler & Flanagan have a typo in the title of the arXiv posting of their paper. This is probably the most important thing to take away from this paper.

# Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data

The most recent, and most sensitive, all-sky search for continuous gravitational waves shows no signs of a detection. These signals from rotating neutron stars remain elusive. New data from the advanced detectors may change this, but we will have to wait a while to find out. This at least gives us time to try to figure out what to do with a detection, should one be made.

### New years and new limits

The start of the new academic year is a good time to make resolutions—much better than wet and windy January. I’m trying to be tidier and neater in my organisation. Amid cleaning up my desk, which is covered in about an inch of papers, I uncovered this recent Collaboration paper, which I had lost track of.

The paper is the latest in the continuous stream of non-detections of continuous gravitational waves. These signals could come from rotating neutron stars which are deformed or excited in some way, and the hope is that from such an observation we could learn something about the structure of neutron stars.

The search uses old data from initial LIGO’s sixth science run. Searches for continuous waves require lots of computational power, so they can take longer than even our analyses of binary neutron star coalescences. This is a semi-coherent search, like the recent search of the Orion spur—somewhere between an incoherent search, which looks for signal power of any form in the detectors, and a fully coherent search, which looks for signals which exactly match the way a template wave evolves [bonus note]. The big difference compared to the Orion spur search is that this one looks at the entire sky. This makes it less sensitive than a targeted search in any particular direction, but means we are not excluding the possibility of sources at other locations.

Artist’s impression of the local part of the Milky Way. The yellow cones mark the extent of the Orion Spur spotlight search, and the pink circle shows the equivalent sensitivity of this all-sky search. Green stars indicate known pulsars. Original image: NASA/JPL-Caltech/ESO/R. Hurt.

The search identified 16 outliers, but an examination of all of these showed they could be explained either as an injected signal or as detector noise. Since no signals were found, we can instead place some upper limits on the strength of signals.

The plot below translates the calculated upper limits (above which there would have been a ~75%–95% chance of us detecting the signal) into the size of neutron star deformations. Each curve shows the limits on detectable signals at different distances, depending upon their frequency and the rate of change of their frequency. The dotted lines show limits on ellipticity $\varepsilon$, a measure of how bumpy the neutron star is. Larger deformations mean quicker changes of frequency and produce louder signals, therefore they can be detected further away.

Range of the PowerFlux search for rotating neutron stars assuming that spin-down is entirely due to gravitational waves. The solid lines show the upper limits as a function of the gravitational-wave frequency and its rate of change; the dashed lines are the corresponding limits on ellipticity, and the dotted line marks the maximum searched spin-down. Figure 6 of Abbott et al. (2016).
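The translation between a strain limit and an ellipticity limit uses the standard quadrupole formula for a rotating triaxial star, $h_0 = 4\pi^2 G I \varepsilon f^2 / (c^4 d)$. Here is a rough sketch of that conversion with a fiducial moment of inertia; all the numbers are illustrative, not values from the paper.

```python
import math

G = 6.674e-11    # gravitational constant (SI)
C = 2.998e8      # speed of light (m/s)
I_NS = 1e38      # fiducial neutron-star moment of inertia (kg m^2)
KPC = 3.086e19   # metres per kiloparsec

def ellipticity(h0, f_gw, d_kpc):
    """Ellipticity implied by strain amplitude h0 at gravitational-wave
    frequency f_gw (Hz) for a source at distance d_kpc, inverting
    h0 = 4 pi^2 G I eps f_gw^2 / (c^4 d)."""
    return h0 * C**4 * d_kpc * KPC / (4 * math.pi**2 * G * I_NS * f_gw**2)

# A strain limit of 1e-24 at 100 Hz for a source at 1 kpc corresponds to an
# ellipticity of order 1e-4; very roughly, a bump of order a metre on a
# 10 km star.
print(ellipticity(1e-24, 100.0, 1.0))
```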

Neutron stars are something like giant atomic nuclei. Figuring out the properties of the strange matter that makes up neutron stars is an extremely difficult problem. We’ll never be able to recreate such exotic matter in the laboratory. Gravitational waves give us a rare means of gathering experimental data on how this matter behaves. However, exactly how we convert a measurement of a signal into constraints on the behaviour of the matter is still uncertain. I think that making a detection might only be the first step in understanding the sources of continuous gravitational waves.

arXiv: 1605.03233 [gr-qc]
Journal: Physical Review D; 94(4):042002(14); 2016
Other new academic year resolution:
To attempt to grow a beard. Beard stroking helps you think, right?

### Bonus note

#### The semi-coherent search

As the first step of this search, the PowerFlux algorithm looks for power that changes in frequency as expected for a rotating neutron star: it factors in Doppler shifting due to the motion of the Earth and a plausible spin-down (slowing of the rotation) of the neutron star. As a follow-up, the Loosely Coherent algorithm is used, which checks for signals which match short stretches of similar templates. Any candidates that make it through all stages of refinement are then examined in more detail. This search strategy is described in detail for the S5 all-sky search.
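The coherent/semi-coherent trade-off can be illustrated with a toy example: chop data containing a weak sinusoid into chunks and sum the per-chunk power, instead of taking one long Fourier transform. This is my own simplified sketch, not PowerFlux itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# A weak sinusoid buried in white noise, analysed two ways.
n_chunks, chunk_len = 32, 1024
t = np.arange(n_chunks * chunk_len)
data = 0.15 * np.sin(2 * np.pi * (100 / chunk_len) * t) + rng.normal(size=t.size)

# Fully coherent: one long Fourier transform (needs perfect phase tracking
# over the whole stretch of data).
coherent_power = np.abs(np.fft.rfft(data)) ** 2

# Semi-coherent: transform each chunk separately and add the powers,
# discarding the relative phase between chunks.
chunks = data.reshape(n_chunks, chunk_len)
semi_power = (np.abs(np.fft.rfft(chunks, axis=1)) ** 2).sum(axis=0)

# Both find the signal; the coherent statistic stands out more sharply, but
# the semi-coherent one is far cheaper to compute over many candidate tracks.
print(np.argmax(coherent_power), np.argmax(semi_power))
```

The coherent statistic grows faster with observation time, but only if the template phase is tracked exactly, which is what makes fully coherent all-sky searches so computationally expensive.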

# Parameter estimation on gravitational waves from neutron-star binaries with spinning components

In gravitational-wave astronomy, some parameters are easier to measure than others. We are sensitive to properties which change the form of the wave, but sometimes the effect of changing one parameter can be compensated by changing another. We call this a degeneracy. In signals from coalescing binaries (two black holes or neutron stars inspiralling together), there is a degeneracy between the masses and spins. In this recently published paper, we look at what this means for observing binary neutron star systems.

### History

This paper has been something of an albatross, and I’m extremely pleased that we finally got it published. I started working on it when I began my post-doc at Birmingham in 2013. Back then I was sharing an office with Ben Farr, and together with others in the Parameter Estimation Group, we were thinking about the prospect of observing binary neutron star signals (which we naively thought were the most likely) in LIGO’s first observing run.

One reason that this work took so long is that binary neutron star signals can be computationally expensive to analyse [bonus note]. The signal slowly chirps up in frequency, and can take up to a minute to sweep through the range of frequencies LIGO is sensitive to. That gives us a lot of gravitational wave to analyse. (For comparison, GW150914 lasted 0.2 seconds). We need to calculate waveforms to match to the observed signals, and these can be especially complicated when accounting for the effects of spin.
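The quoted signal durations follow from the leading-order chirp time; here is a back-of-the-envelope sketch. This is Newtonian order only, so the numbers are indicative rather than precise, and the starting frequencies are my own choices.

```python
import math

G_MSUN_OVER_C3 = 4.925e-6   # G * (one solar mass) / c^3, in seconds

def chirp_time(mchirp_msun, f_low):
    """Leading-order (Newtonian) time in seconds a binary spends above
    gravitational-wave frequency f_low (Hz), given its chirp mass."""
    return (5.0 / 256.0) * (G_MSUN_OVER_C3 * mchirp_msun) ** (-5.0 / 3.0) \
        * (math.pi * f_low) ** (-8.0 / 3.0)

# A binary neutron star (chirp mass ~1.2 solar masses) from 30 Hz:
print(chirp_time(1.2, 30.0))    # roughly a minute
# A GW150914-like binary (chirp mass ~28 solar masses) from 35 Hz:
print(chirp_time(28.0, 35.0))   # a fraction of a second
```

The strong inverse dependence on chirp mass is why light binary neutron stars give such long signals compared with heavy binary black holes.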

A second reason is that shortly after submitting the paper in August 2015, we got a little distracted…

This paper was the third of a trilogy looking at measuring the properties of binary neutron stars. I’ve written about the previous instalment before. We knew that getting the final results for binary neutron stars, including all the important effects like spin, would take a long time, so we planned to follow up any detections in stages. A probable sky location can be computed quickly, then we can have a first try at estimating other parameters like masses using waveforms that don’t include spin, then we go for the full results with spin. The quicker results would be useful for astronomers trying to find any explosions that coincided with the merger of the two neutron stars. The first two papers looked at results from the quicker analyses (especially at sky localization); in this one we check what effect neglecting spin has on measurements.

### What we did

We analysed a population of 250 binary neutron star signals (these are the same as the ones used in the first paper of the trilogy). We used what was our best guess for the sensitivity of the two LIGO detectors in the first observing run (which was about right).

The simulated neutron stars all have small spins of less than 0.05 (where 0 is no spin, and 1 would be the maximum spin of a black hole). We expect neutron stars in these binaries to have spins of about this range. The maximum observed spin (for a neutron star not in a binary neutron star system) is around 0.4, and we think neutron stars should break apart for spins of around 0.7. However, since we want to keep an open mind regarding neutron stars, when measuring spins we considered spins all the way up to 1.

### What we found

Our results clearly showed the effect of the mass–spin degeneracy. The degeneracy increases the uncertainty for both the spins and the masses.

Even though the true spins are low, we find that across the 250 events, the median 90% upper limit on the spin of the more massive (primary) neutron star is 0.70, and the 90% limit on the spin of the less massive (secondary) neutron star is 0.86. We learn practically nothing about the spin of the secondary, but a little more about the spin of the primary, which is more important for the inspiral. Measuring spins is hard.
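For reference, quoting a 90% upper limit from posterior samples is just a percentile calculation. A sketch with fake samples (the distribution and numbers are invented, not the paper's results):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fake posterior samples for a poorly constrained spin, railed into [0, 1):
spin_samples = np.abs(rng.normal(0.0, 0.4, size=5000))
spin_samples = spin_samples[spin_samples < 1.0]

# The 90% upper limit is simply the 90th percentile of the samples.
upper_limit_90 = np.percentile(spin_samples, 90)
print(upper_limit_90)
```

A broad posterior like this gives an upper limit well above the true value, which is essentially what happens for the secondary's spin.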

The effect of the mass–spin degeneracy for mass measurements is shown in the plot below. Here we show a random selection of events. The banana-shaped curves are the 90% probability intervals. They are narrow because we can measure a particular combination of masses, the chirp mass, really well. The mass–spin degeneracy determines how long the banana is. If we restrict the range of spins, we explore less of the banana (and potentially introduce an offset in our results).

Rough outlines for 90% credible regions for component masses for a random assortment of signals. The circles show the true values. The coloured lines indicate the extent of the distribution with different limits on the spins. The grey area is excluded from our convention on masses $m_1 \geq m_2$. Figure 5 from Farr et al. (2016).

Although you can’t see it in the plot above, including spin also increases the uncertainty in the chirp mass. The plots below show the standard deviation (a measure of the width of the posterior probability distribution) divided by the mean for several mass parameters. This gives a measure of the fractional uncertainty in our measurements. We show the chirp mass $\mathcal{M}_\mathrm{c}$, the mass ratio $q = m_2/m_1$ and the total mass $M = m_1 + m_2$, where $m_1$ and $m_2$ are the masses of the primary and secondary neutron stars respectively. The uncertainties are small for louder signals (higher signal-to-noise ratio). If we neglect the spin, the true chirp mass can lie outside the posterior distribution: on average it is about 5 standard deviations from the mean. If we include spin, the offset is just 0.7 standard deviations from the mean (there’s still some offset as we’re allowing for spins all the way up to 1).

Fractional statistical uncertainties in chirp mass (top), mass ratio (middle) and total mass (bottom) estimates as a function of network signal-to-noise ratio for both the fully spinning analysis and the quicker non-spinning analysis. The lines indicate approximate power-law trends to guide the eye. Figure 2 of Farr et al. (2016).

We need to allow for spins when measuring binary neutron star masses in order to explore the possible range of masses.

Sky localization and distance, however, are not affected by the spins here. This might not be the case for sources which are more rapidly spinning, but assuming that binary neutron stars do have low spin, we are safe using the easier-to-calculate results. This is good news for astronomers who need to know promptly where to look for explosions.

arXiv: 1508.05336 [astro-ph.HE]
Journal: Astrophysical Journal; 825(2):116(10); 2016
Authorea [bonus note]: Parameter estimation on gravitational waves from neutron-star binaries with spinning components
Conference proceedings:
Early Advanced LIGO binary neutron-star sky localization and parameter estimation
Favourite albatross:
Wilbur

### Bonus notes

#### How long?

The plot below shows how long it took to analyse each of the binary neutron star signals.

Distribution of run times for binary neutron star signals. Low-latency sky localization is done with BAYESTAR; medium-latency non-spinning parameter estimation is done with LALInference and TaylorF2 waveforms, and high-latency fully spinning parameter estimation is done with LALInference and SpinTaylorT4 waveforms. The LALInference results are for 2000 posterior samples. Figure 9 from Farr et al. (2016).

BAYESTAR provides a rapid sky localization, taking less than ten seconds. This is handy for astronomers who want to catch a flash caused by the merger before it fades.

Estimates for the other parameters are computed with LALInference. How long this takes to run depends on which waveform you are using and how many samples from the posterior probability distribution you want (the more you have, the better you can map out the shape of the distribution). Here we show times for 2000 samples, which is enough to get a rough idea (we collected ten times more for GW150914 and friends). Collecting twice as many samples takes (roughly) twice as long. Prompt results can be obtained with a waveform that doesn’t include spin (TaylorF2), these take about a day at most.

For this work, we considered results using a waveform which included the full effects of spin (SpinTaylorT4). These take about twenty times longer than the non-spinning analyses. The maximum time was 172 days. I have a strong suspicion that the computing time cost more than my salary.

Waiting for LALInference runs to finish gives you some time to practise hobbies. This is a globe knitted by Hannah. The two LIGO sites are marked in red, and a typical gravitational-wave sky localization is stitched on.

In order to get these results, we had to add check-pointing to our code, so we could stop it and restart it; we encountered a new type of error in the software which manages jobs running on our clusters, and Hannah Middleton and I got several angry emails from cluster admins (who are wonderful people) for having too many jobs running.

In comparison, analysing GW150914, LVT151012 and GW151226 was a breeze. Grudgingly, I have to admit that getting everything sorted out for this study made us reasonably well prepared for the real thing. Although, I’m not looking forward to that first binary neutron star signal…

#### Authorea

Authorea is an online collaborative writing service. It allows people to work together on documents, editing text, adding comments, and chatting with each other. By the time we came to write up the paper, Ben was no longer in Birmingham, and many of our coauthors are scattered across the globe. Ben thought Authorea might be useful for putting together the paper.

Writing was easy, and the ability to add comments on the text was handy for getting feedback from coauthors. The chat was good for quickly sorting out issues like plots. Overall, I was quite pleased, up to the point we wanted to get the final document. Extracting a nicely formatted PDF was awkward. For this I switched to using the GitHub back-end. On reflection, a simple git repo, plus a couple of Skype calls, might have been a smoother way of writing, at least for a standard journal article.

Authorea promises to be an open way of producing documents, and allows for others to comment on papers. I don’t know if anyone’s looked at our Authorea article. For astrophysics, most people use the arXiv, which is free to everyone, and I’m not sure if there’s enough appetite for interaction (beyond the occasional email to authors) to motivate people to look elsewhere. At least, not yet.

In conclusion, I think Authorea is a nice idea, and I would try out similar collaborative online writing tools again, but I don’t think I can give it a strong recommendation for your next paper unless you have a particular idea in mind of how to make the most of it.

# Testing general relativity using golden black-hole binaries

Binary black hole mergers are the ultimate laboratory for testing gravity. The gravitational fields are strong, and things are moving at close to the speed of light. These extreme conditions are exactly where we expect our theories could break down, which is why we were so excited by detecting gravitational waves from black hole coalescences. To accompany the first detection of gravitational waves, we performed several tests of Einstein’s theory of general relativity (it passed). This paper outlines the details of one of the tests, one that can be extended to include future detections to put Einstein’s theory to the toughest scrutiny.

One of the difficulties of testing general relativity is: what do you compare it to? There are many alternative theories of gravity, but only a few of these have been studied thoroughly enough to give a concrete idea of what a binary black hole merger should look like. Even if general relativity comes out on top when compared to one alternative model, it doesn’t mean that another (perhaps one we’ve not thought of yet) can be ruled out. We need ways of looking for something odd, something which hints that general relativity is wrong, but doesn’t rely on any particular alternative theory of gravity.

The test suggested here is a consistency test. We split the gravitational-wave signal into two pieces, a low-frequency part and a high-frequency part, and then try to measure the properties of the source from the two parts. If general relativity is correct, we should get answers that agree; if it’s not, and there’s some deviation in the exact shape of the signal at different frequencies, we can get different answers. One way of thinking about this test is imagining that we have two experiments, one where we measure lower frequency gravitational waves and one where we measure higher frequencies, and we are checking to see if their results agree.

To split the waveform, we use a frequency around that of the last stable circular orbit: about the point that the black holes stop orbiting about each other and plunge together and merge [bonus note]. For GW150914, we used 132 Hz, which is about the same as the C an octave below middle C (a little before time zero in the simulation below). This cut roughly splits the waveform into the low-frequency inspiral (where the two black holes are orbiting each other), and the higher-frequency merger (where the two black holes become one) and ringdown (where the final black hole settles down).

We are fairly confident that we understand what goes on during the inspiral. This is similar physics to where we’ve been testing gravity before, for example by studying the orbits of the planets in the Solar System. The merger and ringdown are more uncertain, as we’ve never before probed these strong and rapidly changing gravitational fields. It therefore seems like a good idea to check the two independently [bonus note].

We use our parameter estimation codes on the two pieces to infer the properties of the source, and we compare the values for the mass $M_f$ and spin $\chi_f$ of the final black hole. We could use other sets of parameters, but this pair compactly sums up the properties of the final black hole and is easy to explain. We look at the differences between the estimated values for the mass and spin, $\Delta M_f$ and $\Delta \chi_f$. If general relativity is a good match to the observations, then we expect everything to match up, and $\Delta M_f$ and $\Delta \chi_f$ to be consistent with zero. They won’t be exactly zero because we have noise in the detector, but hopefully zero will be within the uncertainty region [bonus note]. An illustration of the test is shown below, including one of the tests we did to show that it does spot when general relativity is not correct.

Results from the consistency test. The top panels show the outlines of the 50% and 90% credible levels for the low frequency (inspiral) part of the waveform, the high frequency (merger–ringdown) part, and the entire (inspiral–merger–ringdown, IMR) waveform. The bottom panel shows the fractional difference between the high and low frequency results. If general relativity is correct, we expect the distribution to be consistent with $(0,0)$, indicated by the cross (+). The left panels show a general relativity simulation, and the right panel shows a waveform from a modified theory of gravity. Figure 1 of Ghosh et al. (2016).
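To make the comparison concrete, here is a toy numerical version with invented one-dimensional Gaussian posteriors for the final mass from the two analyses. Pairing random samples like this is only a sketch; the real test works with the full two-dimensional mass–spin posteriors.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented posteriors for the final mass (solar masses) from the
# low-frequency (inspiral) and high-frequency (merger-ringdown) analyses:
mf_inspiral = rng.normal(65.0, 4.0, size=5000)
mf_merger = rng.normal(64.0, 5.0, size=5000)

# Fractional difference, normalised by the average of the two estimates:
delta_mf = 2 * (mf_merger - mf_inspiral) / (mf_merger + mf_inspiral)

# Two-sided credible level at which zero sits within the difference
# distribution: small means the two measurements agree.
level = 2 * abs(np.mean(delta_mf < 0.0) - 0.5)
print(level)
```

If general relativity were wrong in a way that distorts the merger relative to the inspiral, the `delta_mf` distribution would shift away from zero and the credible level of zero would climb towards one.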

A convenient feature of using $\Delta M_f$ and $\Delta \chi_f$ to test agreement with relativity, is that you can combine results from multiple observations. By averaging over lots of signals, you can reduce the uncertainty from noise. This allows you to pin down whether or not things really are consistent, and spot smaller deviations (we could get precision of a few percent after about 100 suitable detections). I look forward to seeing how this test performs in the future!

arXiv: 1602.02453 [gr-qc]
Journal: Physical Review D; 94(2):021101(6); 2016
Favourite golden thing: Golden syrup sponge pudding

### Bonus notes

#### Review

I became involved in this work as a reviewer. The LIGO Scientific Collaboration is a bit of a stickler when it comes to checking its science. We had to check that the test was coded up correctly, that the results made sense, and that calculations done and written up for GW150914 were all correct. Since most of the team are based in India [bonus note], this involved some early morning telecons, but it all went smoothly.

One of our checks was that the test wasn’t sensitive to the exact frequency used to split the signal. If you change the frequency cut, the results from the two sections do change. If you lower the frequency, then there’s less of the low frequency signal and the measurement uncertainties from this piece get bigger. Conversely, there’ll be more signal in the high frequency part and so we’ll make a more precise measurement of the parameters from this piece. However, the overall results where you combine the two pieces stay about the same. You get the best results when there’s a roughly equal balance between the two pieces, but you don’t have to worry about getting the cut exactly at the innermost stable orbit.

#### Golden binaries

In order for the test to work, we need the two pieces of the waveform to both be loud enough to allow us to measure parameters using them. Signals of this type are referred to as golden. Earlier work on tests of general relativity using golden binaries was done by Hughes & Menou (2005), and Nakano, Tanaka & Nakamura (2015). GW150914 was a golden binary, but GW151226 and LVT151012 were not, which is why we didn’t repeat this test for them.

#### GW150914 results

For The Event, we ran this test, and the results are consistent with general relativity being correct. The plots below show the estimates for the final mass and spin (here denoted $a_f$ rather than $\chi_f$), and the fractional difference between the two measurements. The point $(0,0)$ is at the 28% credible level. This means that if general relativity is correct, we’d expect a deviation at least this large to occur around 72% of the time due to noise fluctuations. It wouldn’t take a particularly rare realisation of noise for the assumed true value of $(0,0)$ to be found at this probability level, so we’re not too suspicious that something is amiss with general relativity.

Results from the consistency test for The Event. The top panels show the final mass and spin measurements from the low frequency (inspiral) part of the waveform, the high frequency (post-inspiral) part, and the entire (IMR) waveform. The bottom panel shows the fractional difference between the high and low frequency results. If general relativity is correct, we expect the distribution to be consistent with $(0,0)$, indicated by the cross. Figure 3 of the Testing General Relativity Paper.

### The authors

Abhirup Ghosh and Archisman Ghosh were two of the leads of this study. They are both A. Ghosh at the same institution, which caused some confusion when compiling the LIGO Scientific Collaboration author list. I think at one point one of them (they can argue over which) was removed as someone thought there was a mistaken duplication. To avoid confusion, they now have their full names used. This is a rare distinction on the Discovery Paper (I’ve spotted just two others). The academic tradition of using first initials plus second name is poorly adapted to names which don’t fit the typical western template, so we should be more flexible.

# Search for transient gravitational waves in coincidence with short-duration radio transients during 2007–2013

Gravitational waves give us a new way of observing the Universe. This raises the possibility of multimessenger astronomy, where we study the same system using different methods: gravitational waves, light or neutrinos. Each messenger carries different information, so by using them together we can build up a more complete picture of what’s going on. This paper looks for gravitational waves that coincide with radio bursts. None are found, but we now have a template for how to search in the future.

On a dark night, there are two things which almost everyone will have done: wondered at the beauty of the starry sky and wondered exactly what was it that just went bump… Astronomers do both. Transient astronomy is about figuring out what are the things which go bang in the night—not the things which make suspicious noises, but objects which appear (and usually disappear) suddenly in the sky.

Most processes in astrophysics take a looooong time (our Sun is four-and-a-half billion years old and is just approaching middle age). Therefore, when something happens suddenly, flaring perhaps over just a few seconds, you know that something drastic must be happening! We think that most transients must be tied up with a violent event such as an explosion. However, because transients are so short, it can be difficult to figure out exactly where they come from (both because they might have faded by the time you look, and because there’s little information to learn from a blip in the first place).

Radio transients are bursts of radio emission of uncertain origin. We’ve managed to figure out that some come from microwave ovens, but the rest do seem to come from space. This paper looks at two types: rotating radio transients (RRATs) and fast radio bursts (FRBs). RRATs look like the signals from pulsars, except that they don’t have the characteristic period pattern of pulsars. It may be that RRATs come from dying pulsars, flickering before they finally switch off, or it may be that they come from neutron stars which are not normally pulsars, but have been excited by a fracturing of their crust (a starquake). FRBs last a few milliseconds; they could be generated when two neutron stars merge and collapse to form a black hole, or perhaps by a highly-magnetised neutron star. Normally, when astronomers start talking about magnetic fields, it means that we really don’t know what’s going on [bonus note]. That is the case here. We don’t know what causes radio transients, but we are excited to try figuring it out.

This paper searches old LIGO, Virgo and GEO data for any gravitational-wave signals that coincide with observed radio transients. We use a catalogue of RRATs and FRBs from the Green Bank Telescope and the Parkes Observatory, and search around these times. We use a burst search, which doesn’t restrict itself to any particular form of gravitational wave; however, the search was tuned for damped sinusoids and sine–Gaussians (generic wibbles), cosmic strings (which may give an indication of how uncertain we are about where radio transients could come from), and coalescences of binary neutron stars or neutron star–black hole binaries. Hopefully the search covers all plausible options. Discovering a gravitational wave coincident with a radio transient would give us much-welcomed information about the source, and perhaps pin down its origin.
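For a flavour of what the burst search is tuned for, here’s a toy sine–Gaussian, one of the generic wibbles mentioned above. The parameters are arbitrary choices for illustration:

```python
import numpy as np

def sine_gaussian(t, f0, q, amplitude=1.0):
    """A sine-Gaussian burst: a sinusoid at frequency f0 inside a Gaussian
    envelope whose duration is set by the quality factor q."""
    tau = q / (2 * np.pi * f0)  # envelope decay time
    return amplitude * np.exp(-(t / tau) ** 2) * np.sin(2 * np.pi * f0 * t)

# A generic wibble: a burst around 150 Hz lasting only a few cycles.
t = np.linspace(-0.05, 0.05, 4096)
h = sine_gaussian(t, f0=150.0, q=9.0)
```

Lower quality factors give shorter, broader-band bursts; higher ones look more like a slice of a continuous sinusoid.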

Search results for gravitational waves coincident with radio transients. The probabilities for each time containing just noise (blue) match the expected background distribution (dashed). This is consistent with a non-detection.

The search discovered nothing. Results match what we would expect from just noise in the detectors. This is not too surprising since we are using data from the first-generation detectors. We’ll be repeating the analysis with the upgraded detectors, which can find signals from larger distances. If we are lucky, multimessenger astronomy will allow us to figure out exactly what needs to go bump to create a radio transient.

arXiv: 1605.01707 [astro-ph.HE]
Journal: Physical Review D; 93(12):122008(14); 2016
Science summary: Searching for gravitational wave bursts in coincidence with short duration radio bursts
Favourite thing that goes bump in the night: Heffalumps and Woozles [probably not the cause of radio transients]

### Bonus note

#### Magnetism and astrophysics

Magnetic fields complicate calculations. They make things more difficult to model and are therefore often left out. However, we know that magnetic fields are everywhere and that they do play important roles in many situations. Therefore, they are often invoked as an explanation of why models can’t explain what’s going on. I learnt early in my PhD that you could ask “What about magnetic fields?” at the end of almost any astrophysics seminar (it might not work for some observational talks, but then you could usually ask “What about dust?” instead). Handy if ever you fall asleep…

# The Boxing Day Event

Advanced LIGO’s first observing run (O1) got off to an auspicious start with the detection of GW150914 (The Event to its friends). O1 was originally planned to be three months long (September to December), but after the first discovery, there were discussions about extending the run. No major upgrades to the detectors were going to be done over the holidays anyway, so it was decided that we might as well leave them running until January.

By the time the Christmas holidays came around, I was looking forward to some time off. And, of course, lots of good food and the Doctor Who Christmas Special. The work on the first detection had been exhausting, and the Collaboration reached the collective decision that we should all take some time off [bonus note]. Not a creature was stirring, not even a mouse.

On Boxing Day, there was a sudden flurry of emails. This could only mean one thing. We had another detection! Merry GW151226 [bonus note]!

I assume someone left out milk and cookies at the observatories. A not too subtle hint from Nutsinee Kijbunchoo’s comic in the LIGO Magazine.

I will always be amazed at how lucky we were to detect GW150914. It could easily have been missed if we had started observing just a little later. If that had happened, we might not have considered extending O1, and would have missed GW151226 too!

GW151226 is another signal from a binary black hole coalescence. This wasn’t too surprising at the time, as we had estimated such signals should be pretty common. It did, however, cause a slight wrinkle in discussions of what to do in the papers about the discovery of GW150914. Should we mention that we had another potential candidate? Should we wait until we had analysed the whole of O1 fully? Should we pack it all in and have another slice of cake? In the end we decided that we shouldn’t delay the first announcement, and we definitely shouldn’t rush the analysis of the full data set. Therefore, we went ahead with the original plan of just writing about the first month of observations and giving slightly awkward answers, mumbling about still having data to analyse, when asked if we had seen anything else [bonus note]. I’m not sure how many people outside the Collaboration suspected.

### The science

What have we learnt from analysing GW151226, and what have we learnt from the whole of O1? We’ve split our results into two papers.

#### 0. The Boxing Day Discovery Paper

Title: GW151226: Observation of gravitational waves from a 22-solar-mass binary black hole
arXiv: 1606.04855 [gr-qc]
Journal: Physical Review Letters; 116(24):241103(14); 2016
LIGO science summary: GW151226: Observation of gravitational waves from a 22 solar-mass binary black hole (by Hannah Middleton and Carl-Johan Haster)

This paper presents the discovery of GW151226 and some of the key information about it. GW151226 is not as loud as GW150914, you can’t spot it by eye in the data, but it still stands out in our search. This is a clear detection! It is another binary black hole system, but it is a lower mass system than GW150914 (hence the paper’s title—it’s a shame they couldn’t put in the error bars though).

This paper summarises the highlights of the discovery, so below, I’ll explain these without going into too much technical detail.

More details: The Boxing Day Discovery Paper summary

#### 1. The O1 Binary Black Hole Paper

Title: Binary black hole mergers in the first Advanced LIGO observing run
arXiv: 1606.04856 [gr-qc]
Journal: Physical Review X; 6(4):041015(36); 2016

This paper brings together (almost) everything we’ve learnt about binary black holes from O1. It discusses GW150914, LVT151012 and GW151226, and what we are starting to piece together about stellar-mass binary black holes from this small family of gravitational-wave events.

For the announcement of GW150914, we put together 12 companion papers to go out with the detection announcement. This paper takes on that role. It is Robin, Dr Watson, Hermione and Samwise Gamgee combined. There’s a lot of delicious science packed into this paper (searches, parameter estimation, tests of general relativity, merger rate estimation, and astrophysical implications). In my summary below, I’ll delve into what we have done and what our results mean.

More details: The O1 Binary Black Hole Paper summary

If you are interested in our science results, you can find data releases accompanying the events at the LIGO Open Science Center. These pages also include some wonderful tutorials to play with.

### The Boxing Day Discovery Paper

Synopsis: Boxing Day Discovery Paper
Read this if: You are excited about the discovery of GW151226
Favourite part: We’ve done it again!

#### The signal

GW151226 is not as loud as GW150914, and you can’t spot it by eye in the data. Therefore, this paper spends a little more time than GW150914’s Discovery Paper talking about the ingredients of our searches.

GW151226 was found by two pipelines which specifically look for compact binary coalescences: the inspiral and merger of neutron stars or black holes. We have templates for what we think these signals should look like, and we filter the data against a large bank of these to see what matches [bonus note].

For the search to work, we do need accurate templates. Figuring out what the waveforms for binary black hole coalescence should look like is a difficult job, and has taken almost as long as figuring out how to build the detectors!

The signal arrived at Earth at 03:38:53 GMT on 26 December 2015 and was first identified by a search pipeline within 70 seconds. We didn’t have a rapid templated search online at the time of GW150914, but decided it would be a good idea afterwards. This allowed us to send out an alert to our astronomer partners so they could look for any counterparts (I don’t think any have been found [bonus note]).

The unmodelled searches (those which don’t use templates, but just look for coherent signals in both detectors) which first found GW150914 didn’t find GW151226. This isn’t too surprising, as they are less sensitive. You can think of the templated searches as looking for Wally (or Waldo if you’re North American) using the knowledge that he’s wearing glasses and a red-and-white striped bobble hat, whereas the unmodelled searches are looking for him just knowing that he’s the person who’s on every page.

GW151226 is the second most significant event in the search for binary black holes after The Event. Its significance is not quite off the charts, but is great enough that we have a hard time calculating exactly how significant it is. Our two search pipelines give estimates of the p-value (the probability you’d see something at least this signal-like if you only had noise in your detectors) of $< 10^{-7}$ and $3.5 \times 10^{-6}$, which are pretty good!
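If you’re curious how a p-value like this can be estimated empirically, here’s a toy version. The detection statistics below are invented; the real pipelines build their backgrounds far more carefully (for example, using time slides).

```python
# Estimate a p-value empirically: count how often the noise background
# produces something at least as loud as the candidate.
def empirical_p_value(candidate_stat, background_stats):
    louder = sum(1 for s in background_stats if s >= candidate_stat)
    # The +1 means we never claim exactly zero: with a finite number of
    # background trials we can only place an upper bound on the p-value.
    return (louder + 1) / (len(background_stats) + 1)

background = [8.0, 9.1, 7.5, 8.8, 9.5, 8.2, 7.9, 9.0, 8.5, 8.1]
print(empirical_p_value(13.0, background))  # nothing this loud: p bounded by 1/11
```

This is why a very loud event’s significance is hard to pin down exactly: once the candidate is louder than everything in the background, you can only bound the p-value by how much background you have accumulated.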

#### The source

To figure out the properties of the source, we ran our parameter-estimation analysis.

GW151226 comes from a black hole binary with masses of $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$ [bonus note], where $M_\odot$ is the mass of our Sun (about 330,000 times the mass of the Earth). The error bars indicate our 90% probability ranges on the parameters. These black holes are less massive than the source of GW150914 (the more massive black hole is similar to the less massive black hole of LVT151012). However, the masses are still above what we believe is the maximum possible mass of a neutron star (around $3 M_\odot$). The masses are similar to those observed for black holes in X-ray binaries, so perhaps these black holes are all part of the same extended family.

A plot showing the probability distributions for the masses is shown below. It makes me happy. Since GW151226 is lower mass than GW150914, we see more of the inspiral, the portion of the signal where the two black holes are spiralling towards each other. This means that we measure the chirp mass, a particular combination of the two masses, really well. It is this which gives the lovely banana shape to the distribution. Even though I don’t really like bananas, it’s satisfying to see this behaviour, as this is what we were expecting to see!

Estimated masses for the two black holes in the binary of the Boxing Day Event. The dotted lines mark the edge of our 90% probability intervals. The different coloured curves show different models: they agree which again made me happy! The two-dimensional distribution follows a curve of constant chirp mass. The sharp cut-off at the top-left is because $m_1^\mathrm{source}$ is defined to be bigger than $m_2^\mathrm{source}$. Figure 3 of The Boxing Day Discovery Paper.
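The chirp mass is the combination $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}$, and the banana traces a curve of constant $\mathcal{M}$. Plugging in the median masses quoted above:

```python
def chirp_mass(m1, m2):
    """Chirp mass: the combination of masses best measured from the inspiral."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Median component masses for GW151226 (in solar masses):
print(round(chirp_mass(14.2, 7.5), 1))  # about 8.9 solar masses
```

Any pair of masses along the banana gives (nearly) this same chirp mass, which is why the individual masses are so much more uncertain than their combination.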

The two black holes merge to form a final black hole of $20.8^{+6.1}_{-1.7} M_\odot$ [bonus note].

If you add up the initial binary masses and compare this to the final mass, you’ll notice that something is missing. Across the entire coalescence, gravitational waves carry away $1.0^{+0.1}_{-0.2} M_\odot c^2 \simeq 1.8^{+0.2}_{-0.4} \times 10^{47}~\mathrm{J}$ of energy (where $c$ is the speed of light, which is used to convert masses to energies). This isn’t quite as impressive as the energy of GW150914, but it would take the Sun 1000 times the age of the Universe to output that much energy.
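You can check the energy bookkeeping yourself with a quick back-of-the-envelope calculation:

```python
# Order-of-magnitude check of the radiated energy: about one solar mass
# converted to energy via E = M c^2.
M_sun = 1.989e30   # kg
c = 2.998e8        # m/s
E = 1.0 * M_sun * c**2
print(f"{E:.1e} J")  # about 1.8e+47 J

# How long would the Sun need to shine to emit this much?
L_sun = 3.828e26                  # W, the solar luminosity
age_universe = 13.8e9 * 3.156e7   # s
ratio = E / (L_sun * age_universe)
print(round(ratio))  # roughly a thousand times the age of the Universe
```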

The mass measurements from GW151226 are cool, but what’re really exciting are the spin measurements. Spin, as you might guess, is a measure of how much angular momentum a black hole has. We define it to go from zero (not spinning) to one (spinning as much as is possible). A black hole is fully described by its mass and spin. The black hole masses are most important in defining what a gravitational wave looks like, but the imprint of spin is more subtle. Therefore it’s more difficult to get a good measurement of the spins than the masses.

For GW150914 and LVT151012, we get a little bit of information on the spins. We can conclude that the spins are probably not large, or at least they are not large and aligned with the orbit of the binary. However, we can’t say for certain that we’ve seen any evidence that the black holes are spinning. For GW151226, at least one of the black holes (although we can’t say which) has to be spinning [bonus note].

The plot below shows the probability distribution for the two spins of the binary black holes. This shows both the magnitude of the spin and its direction (if the tilt is zero, the black hole and the binary’s orbit both go around the same way). You can see we can’t say much about the spin of the lower mass black hole, but we have a good idea about the spin of the more massive one (the more extreme the mass ratio, the less important the spin of the lower mass black hole is, making it more difficult to measure). Hopefully we’ll learn more about spins in future detections, as these could tell us something about how these black holes formed.

Estimated orientation and magnitude of the two component spins. Calculated with our precessing waveform model. The distribution for the more massive black hole is on the left, and for the smaller black hole on the right. Part of Figure 4 of The Boxing Day Discovery Paper.

There’s still a lot to learn about binary black holes, and future detections will help with this. More information about what we can squeeze out of our current results are given in the O1 Binary Black Hole Paper.

### The O1 Binary Black Hole Paper

Synopsis: O1 Binary Black Hole Paper
Read this if: You want to know everything we’ve learnt about binary black holes
Favourite part: The awesome table of parameters at the end

This paper contains too much science to tackle all at once, so I’ve split it up into more bite-sized pieces, roughly following the flow of the paper. First we discuss how we find signals. Then we discuss the parameters inferred from the signals. This is done assuming that general relativity is correct, so we check for any deviations from predictions in the next section. After that, we consider the rate of mergers and what we expect for the population of binary black holes from our detections. Finally, we discuss our results in the context of wider astrophysics.

#### Searches

Looking for signals hidden amongst the data is the first thing to do. This paper only talks about the template search for binary black holes: other search results (including those for binaries including neutron stars) will be reported elsewhere.

The binary black hole search was previously described in the Compact Binary Coalescence Paper. We have two pipelines which look for binary black holes using templates: PyCBC and GstLAL. These look for signals which are found in both detectors (within 15 ms of each other) and which match waveforms in the template bank. A few specifics have been tweaked since the start of O1, but these don’t really change any of the results. An overview of the details of both pipelines is given in Appendix A of the paper.

The big difference from the Compact Binary Coalescence Paper is the data. We are now analysing the whole of O1, and we are using an improved version of the calibration (although this really doesn’t affect the search). Search results are given in Section II. We have one new detection: GW151226.

Search results for PyCBC (left) and GstLAL (right). The histograms show the number of candidate events (orange squares) compared to the background. The further an orange square is to the right of the lines, the more significant it is. Different backgrounds are shown including and excluding GW150914 (top row) and GW151226 (bottom row). Figure 3 from the O1 Binary Black Hole Paper.

The plots above show the search results. Candidates are ranked by a detection statistic (a signal-to-noise ratio modified by a self-consistency check, $\hat{\rho}_c$, for PyCBC, and a ratio of the likelihoods for the signal and noise hypotheses, $\ln \mathcal{L}$, for GstLAL). A larger detection statistic means something is more signal-like, and we assess the significance by comparing with the background of noise events. The further above the background curve an event is, the more significant it is. We have three events that stand out.

Number 1 is GW150914. Its significance has increased a little from the first analysis, as we can now compare it against more background data. If we accept that GW150914 is real, we should remove it from the estimation of the background: this gives us the purple background in the top row, and the black curve in the bottom row.

GW151226 is the second event. It clearly stands out when zooming in for the second row of plots. Identifying GW150914 as a signal greatly improves GW151226’s significance.

The final event is LVT151012. Its significance hasn’t changed much since the initial analysis, and is still below our threshold for detection. I’m rather fond of it, as I do love an underdog.

#### Parameter estimation

To figure out the properties of all three events, we do parameter estimation. This was previously described in the Parameter Estimation Paper. Our results for GW150914 and LVT151012 have been updated as we have rerun with the newer calibration of the data. The new calibration has less uncertainty, which improves the precision of our results, although this is really only significant for the sky localization. Technical details of the analysis are given in Appendix B and results are discussed in Section IV. You may recognise the writing style of these sections.

The probability distributions for the masses are shown below. There is quite a spectrum, from the low mass GW151226, which is consistent with measurements of black holes in X-ray binaries, up to GW150914, which contains the biggest stellar-mass black holes ever observed.

Estimated masses for the two binary black holes for each of the events in O1. The contours mark the 50% and 90% credible regions. The grey area is excluded from our convention that $m_1^\mathrm{source} \geq m_2^\mathrm{source}$. Part of Figure 4 of the O1 Binary Black Hole Paper.

The distributions for the lower mass GW151226 and LVT151012 follow the curves of constant chirp mass. The uncertainty is greater for LVT151012 as it is a quieter (lower SNR) signal. GW150914 looks a little different, as the merger and ringdown portions of the waveform are more important. These place tighter constraints on the total mass, explaining the shape of the distribution.

Another difference between the lower mass inspiral-dominated signals and the higher mass GW150914 can be seen in the plot below. This shows the probability distributions for the mass ratio $q = m_2^\mathrm{source}/m_1^\mathrm{source}$ and the effective spin parameter $\chi_\mathrm{eff}$, which is a mass-weighted combination of the spins aligned with the orbital angular momentum. Both play similar parts in determining the evolution of the inspiral, so there are stretching degeneracies for GW151226 and LVT151012, but this isn’t the case for GW150914.

Estimated mass ratios $q$ and effective spins $\chi_\mathrm{eff}$ for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 4 of the O1 Binary Black Hole Paper.

If you look carefully at the distribution of $\chi_\mathrm{eff}$ for GW151226, you can see that it doesn’t extend down to zero. You cannot have a non-zero $\chi_\mathrm{eff}$ unless at least one of the black holes is spinning, so this clearly shows the evidence for spin.
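Writing out the mass weighting makes this clear: $\chi_\mathrm{eff} = (m_1 \chi_1 \cos\theta_1 + m_2 \chi_2 \cos\theta_2)/(m_1 + m_2)$, where the $\theta$ are the tilt angles. A quick evaluation with made-up spin values:

```python
def chi_eff(m1, a1_cos_tilt1, m2, a2_cos_tilt2):
    """Mass-weighted combination of the spin components aligned with the
    orbital angular momentum."""
    return (m1 * a1_cos_tilt1 + m2 * a2_cos_tilt2) / (m1 + m2)

# Made-up values: the 14.2 solar-mass black hole with an aligned spin of 0.3,
# the 7.5 solar-mass black hole non-spinning.
print(round(chi_eff(14.2, 0.3, 7.5, 0.0), 2))  # a non-zero chi_eff of 0.2
```

If both spins were zero (or both exactly in the orbital plane), $\chi_\mathrm{eff}$ would be exactly zero, which is why a measurement excluding zero implies at least one spinning black hole.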

The final masses of the remnant black holes are shown below. Each is around 5% less than the total mass of the binary which merged to form it, with the rest radiated away as gravitational waves.

Estimated masses $M_\mathrm{f}^\mathrm{source}$ and spins $a_\mathrm{f}$ of the remnant black holes for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 4 of the O1 Binary Black Hole Paper.

The plot also shows the final spins. These are much better constrained than the component spins as they are largely determined by the angular momentum of the binary as it merged. This is why the spins are all quite similar. To calculate the final spin, we use an updated formula compared to the one in the Parameter Estimation Paper. This now includes the effect of the components’ spin which isn’t aligned with the angular momentum. This doesn’t make much difference for GW150914 or LVT151012, but the change is slightly more for GW151226, as it seems to have more significant component spins.

The luminosity distance for the sources is shown below. We have large uncertainties because the luminosity distance is degenerate with the inclination. For GW151226 and LVT151012 this does result in some beautiful butterfly-like distance–inclination plots. For GW150914, the butterfly only has the face-off inclination wing (probably as a consequence of the signal being louder and the location of the source on the sky). The luminosity distances for GW150914 and GW151226 are similar. This may seem odd, because GW151226 is a quieter signal, but that is because it is also lower mass (and so intrinsically quieter).

Probability distributions for the luminosity distance of the source of each of the three events in O1. Part of Figure 4 of the O1 Binary Black Hole Paper.

Sky localization is largely determined by the time delay between the two observatories. This is one of the reasons that having a third detector, like Virgo, is an awesome idea. The plot below shows the localization relative to the Earth. You can see that each event has a localization that is part of a ring which is set by the time delay. GW150914 and GW151226 were seen by Livingston first (apparently there is some gloating about this), and LVT151012 was seen by Hanford first.

Estimated sky localization relative to the Earth for each of the events in O1. The contours mark the 50% and 90% credible regions. H+ and L+ mark the locations of the two observatories. Part of Figure 5 of the O1 Binary Black Hole Paper.

Both GW151226 and LVT151012 are nearly overhead. This isn’t too surprising, as this is where the detectors are most sensitive, and so where we expect to make the most detections.
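The geometry behind the rings is simple: a time delay $\Delta t$ confines the source to a cone at angle $\theta = \arccos(c\,\Delta t/d)$ from the line joining the detectors, where $d$ is the baseline. A sketch with approximate numbers (the baseline value is my rough figure for Hanford–Livingston):

```python
import numpy as np

c = 2.998e8               # speed of light, m/s
baseline = 3.0e6          # m, roughly the Hanford-Livingston separation
max_delay = baseline / c  # the largest possible time delay, ~10 ms

def ring_angle(delay):
    """Angle (degrees) between the source direction and the baseline,
    from the measured arrival-time difference."""
    return np.degrees(np.arccos(np.clip(delay / max_delay, -1.0, 1.0)))

print(round(max_delay * 1e3, 1))           # maximum delay, in milliseconds
print(round(float(ring_angle(0.007)), 1))  # a 7 ms delay gives a ring near 46 degrees
```

With only two detectors, every direction on that cone is equally consistent with the timing, which is why the localizations are rings rather than patches.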

The improvement in the calibration of the data is most evident in the sky localization. For GW150914, the reduction in calibration uncertainty improves the localization by a factor of ~2–3! For LVT151012 it doesn’t make much difference because of its location and because it is a much quieter signal.

The map below shows the localization on the sky (actually where in the Universe the signal came from). The maps have rearranged themselves because of the Earth’s rotation (each event was observed at a different sidereal time).

Estimated sky localization (in right ascension and declination) for each of the events in O1. The contours mark the 50% and 90% credible regions. Part of Figure 5 of the O1 Binary Black Hole Paper.

We’re nowhere near localising sources to single galaxies, so we may never know exactly where these signals originated from.

#### Tests of general relativity

The Testing General Relativity Paper reported several results which compared GW150914 with the predictions of general relativity. Either happily or sadly, depending upon your point of view, it passed them all. In Section V of the paper, we now add GW151226 into the mix. (We don’t add LVT151012 as it’s too quiet to be much use).

A couple of the tests for GW150914 looked at the post-inspiral part of the waveform, checking the consistency of mass and spin estimates, and trying to match the ringdown frequency. Since GW151226 is lower mass, we can’t extract any meaningful information from the post-inspiral portion of the waveform, and so it’s not worth repeating these tests.

However, the fact that GW151226 has such a lovely inspiral means that we can place some constraints on post-Newtonian parameters. We have lots and lots of cycles, so we are sensitive to any small deviations that arise during inspiral.

The plot below shows constraints on deviations for a set of different waveform parameters. A deviation of zero indicates the value in general relativity. The first four boxes (for parameters referred to as $\varphi_i$ in the Testing General Relativity Paper) are parameters that affect the inspiral. The final box on the right is for parameters which impact the merger and ringdown. The top row shows results for GW150914, these are updated results using the improved calibrated data. The second row shows results for GW151226, and the bottom row shows what happens when you combine the two.

Probability distributions for waveform parameters. The top row shows bounds from just GW150914, the second from just GW151226, and the third from combining the two. A deviation of zero is consistent with general relativity. Figure 6 from the O1 Binary Black Hole Paper.

All the results are happily about zero. There were a few outliers for GW150914, but these are pulled back in by GW151226. We see that GW151226 dominates the constraints on the inspiral parameters, but GW150914 is more important for the merger–ringdown $\alpha_i$ parameters.

Again, Einstein’s theory passes the test. There is no sign of inconsistency (yet). It’s clear that adding more results greatly improves our sensitivity to these parameters, so we will continue to put general relativity through tougher and tougher tests.

#### Rates

We have a small number of events, around 2.9 in total, so any estimates of how often binary black holes merge will be uncertain. Of course, just because something is tricky, it doesn’t mean we won’t give it a go! The Rates Paper discussed estimates after the first 16 days of coincident data, when we had just 1.9 events. Appendix C gives technical details and Section VI discusses results.

The whole of O1 is about 52 days’ worth of coincident data, so it’s about 3 times as long as the initial stretch. In that time we’ve observed about 3/2 times as many events. Therefore, you might expect that the event rate is about 1/2 of our original estimates. If you did, get yourself a cookie, as you are indeed about right!
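This back-of-the-envelope scaling can be checked in a couple of lines (using the approximate event counts and observation times quoted above; the real calculation in the Rates Paper properly accounts for Poisson statistics and search sensitivity):

```python
# Rough scaling of the rate estimate: rate ∝ (number of events) / (time).
# Numbers are the approximate values quoted in the text.
initial_days = 16      # coincident days used for the first rate estimates
initial_events = 1.9   # effective number of events in that stretch
o1_days = 52           # coincident days in the whole of O1
o1_events = 2.9        # effective number of events in all of O1

scaling = (o1_events / o1_days) / (initial_events / initial_days)
print(f"New rates should be about {scaling:.2f} times the old ones")
```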

To calculate the rates we need to assume something about the population of binary black holes. We use three fiducial distributions:

1. We assume that binary black holes are either like GW150914, LVT151012 or GW151226. This event-based rate is different from the previous one as it now includes an extra class for GW151226.
2. A flat-in-the-logarithm-of-masses distribution, which we expect gives a sensible lower bound on the rate.
3. A power law slope for the larger black hole of $-2.35$, which we expect gives a sensible upper bound on the rate.

We find that the rates are 1. $54^{+111}_{-40}~\mathrm{Gpc^{-3}\,yr^{-1}}$, 2. $30^{+46}_{-21}~\mathrm{Gpc^{-3}\,yr^{-1}}$, and 3. $97^{+149}_{-68}~\mathrm{Gpc^{-3}\,yr^{-1}}$. As expected, the first rate is nestled between the other two.

Despite the rates being lower, there’s still a good chance we could see 10 events by the end of O2 (although that will depend on the sensitivity of the detectors).

A new result included with the rates is a simple fit for the distribution of black hole masses [bonus note]. The method is described in Appendix D. It’s just a repeated application of Bayes’ theorem to go from the masses we measured for the detected sources to the distribution of masses of the entire population.

We assume that the mass of the larger black hole is distributed according to a power law with index $\alpha$, and that the less massive black hole has a mass uniformly distributed in mass ratio, down to a minimum black hole mass of $5 M_\odot$. The cut-off is the edge of a speculated mass gap between neutron stars and black holes.
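As a sketch of this population model, here is one way to draw component masses from it. The upper cut-off of $100 M_\odot$ is my assumption for illustration; the text only specifies the power-law slope and the $5 M_\odot$ minimum.

```python
import random

def sample_binary(alpha=2.5, m_min=5.0, m_max=100.0):
    """Draw (m1, m2): p(m1) ∝ m1**(-alpha) between m_min and m_max,
    and m2 uniform in mass ratio q = m2/m1 subject to m2 >= m_min."""
    # Inverse-transform sampling for the power law in m1.
    a = 1.0 - alpha
    u = random.random()
    m1 = (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)
    # m2 uniform between m_min and m1 (i.e. uniform in q down to q_min).
    m2 = random.uniform(m_min / m1, 1.0) * m1
    return m1, m2

random.seed(0)
samples = [sample_binary() for _ in range(10_000)]
mean_m1 = sum(m1 for m1, _ in samples) / len(samples)
print(f"Mean primary mass: {mean_m1:.1f} Msun")
```

A steeper slope (larger $\alpha$) piles more of the population down near the $5 M_\odot$ cut-off, which is why it would nudge the inferred rates a little lower.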

We find that $\alpha = 2.5^{+1.5}_{-1.6}$. This has significant uncertainty, so we can’t say too much yet. This is a slightly steeper slope than used for the power-law rate (although entirely consistent with it), which would nudge the rates a little lower. The slope is also consistent with fits to the distribution of masses in X-ray binaries. I’m excited to see how O2 will change our understanding of the distribution.

#### Astrophysical implications

With the announcement of GW150914, the Astrophysics Paper reviewed predictions for binary black holes in light of the discovery. The high masses of GW150914 indicated a low metallicity environment, perhaps no more than half of solar metallicity. However, we couldn’t tell if GW150914 came from isolated binary evolution (two stars which have lived and died together) or a dynamical interaction (probably in a globular cluster).

Since then, various studies have been performed looking at both binary evolution (Eldridge & Stanway 2016; Belczynski et al. 2016; de Mink & Mandel 2016; Hartwig et al. 2016; Inayoshi et al. 2016; Lipunov et al. 2016) and dynamical interactions (O’Leary, Meiron & Kocsis 2016; Mapelli 2016; Rodriguez et al. 2016), even considering binaries around supermassive black holes (Bartos et al. 2016; Stone, Metzger & Haiman 2016). We don’t have enough information to tell the two pathways apart. GW151226 gives some new information. Everything is reviewed briefly in Section VII.

GW151226 and LVT151012 are lower mass systems, and so don’t need to come from as low a metallicity environment as GW150914 (although they still could). Both are also consistent with either binary evolution or dynamical interactions. However, the low masses of GW151226 mean that it probably does not come from one particular binary formation scenario, chemically homogeneous evolution, and it is less likely to come from dynamical interactions.

Building up a population of sources, and getting better measurements of spins and mass ratios will help tease formation mechanisms apart. That will take a while, but perhaps it will be helped if we can do multi-band gravitational-wave astronomy with eLISA.

This section also updates predictions from the Stochastic Paper for the gravitational-wave background from binary black holes. There’s a small change from an energy density of $\Omega_\mathrm{GW} = 1.1^{+2.7}_{-0.9} \times 10^{-9}$ at a frequency of 25 Hz to $\Omega_\mathrm{GW} = 1.2^{+1.9}_{-0.9} \times 10^{-9}$. This might be measurable after a few years at design sensitivity.

#### Conclusion

We are living in the future. We may not have hoverboards, but the era of gravitational-wave astronomy is here. Not in 20 years, not in the next decade, not in five more years, now. LIGO has not just opened a new window, it’s smashed the window and jumped through it just before the explosion blasts the side off the building. It’s so exciting that I can’t even get my metaphors straight. The introductory paragraphs of papers on gravitational-wave astronomy will never be the same again.

Although we were lucky to discover GW150914, it wasn’t just a fluke. Binary black hole coalescences aren’t that rare, and we should be detecting more. Lots more. You know that scene in a movie where the heroes have defeated a wave of enemies and then the camera pans back to show the approaching horde that stretches to the horizon? That’s where we are now. O2 is coming. The second observing run will start later this year, and we expect we’ll be adding many entries to our list of binary black holes.

We’re just getting started with LIGO and Virgo. There’ll be lots more science to come.

If you made it this far, you deserve a biscuit. A fancy one too, not just a digestive.

Or, if you’re hungry for more, here are some blogs from my LIGO colleagues

• Daniel Williams (a PhD student at University of Glasgow)
• Matt Pitkin (who is hunting for continuous gravitational waves)
• Shane Larson (who is also investigating multi-band gravitational-wave astronomy)
• Amber Stuver (who works at the Livingston Observatory)

My group at Birmingham also made some short reaction videos (I’m too embarrassed to watch mine).

### Bonus notes

#### Christmas cease-fire

In the run-up to the holidays, there were lots of emails that contained phrases like “will have to wait until people get back from holidays” or “can’t reply as the group are travelling and have family commitments”. No-one ever said that they were taking a holiday, but just that it was happening in general, so we’d all have to wait for a couple of weeks. No-one ever argued with this, because, of course, while you were waiting for other people to do things, there was nothing you could do, and so you might as well take some time off. And you had been working really hard, so perhaps an evening off and an extra slice of cake was deserved…

Rather guiltily, I must confess to ignoring the first few emails on Boxing Day. (Although I saw them, I didn’t read them for reasons of plausible deniability). I thought it was important that my laptop could have Boxing Day off. Thankfully, others in the Collaboration were more energetic and got things going straight-away.

#### Naming

Gravitational-wave candidates (or at least the short ones from merging binary black holes, which we have detected so far) start off life named by a number in our database. This event started out life as G211117. After checks and further analysis, to make sure we can’t identify any environmental effects which could have caused the detector to misbehave, candidates are renamed. Those which are significant enough to be claimed as a detection get the Gravitational Wave (GW) prefix. Those we are less certain of get the LIGO–Virgo Trigger (LVT) prefix. The rest of the name is the date in Coordinated Universal Time (UTC). The new detection is GW151226.

Informally though, it is the Boxing Day Event. I’m rather impressed that this stuck as the Collaboration is largely US based: it was still Christmas Day in the US when the detection was made, and Americans don’t celebrate Boxing Day anyway.

#### Other searches

We are now publishing the results of the O1 search for binary black holes with a template bank which goes up to total observed binary masses of $100 M_\odot$. We still have to do the same for searches for everything else. The results from searches for other compact binaries should appear soon (binary neutron star and neutron star–black hole upper limits). It may be a while before we have results from searches for continuous waves.

#### Matched filtering

The compact binary coalescence search uses matched filtering to hunt for gravitational waves. This is a well-established technique in signal processing. You have a template signal, and you see how well it correlates with the data. We weight the correlation using the detectors’ sensitivity, so that frequencies where the detectors are sensitive count for more, and frequencies where they are not count for little.

I imagine matched filtering as similar to how I identify a piece of music: I hear a pattern of notes and try to compare to things I know. Dum-dum-dum-daah? Beethoven’s Fifth.

Filtering against a large number of templates takes a lot of computational power, so we need to be cunning as to which templates we include. We don’t want to miss anything, so we need enough templates to cover all possibilities, but signals from similar systems can look almost identical, so we just need one representative template included in the bank. Think of trying to pick out Under Pressure, you could easily do this with a template for Ice Ice Baby, and you don’t need both Mr Brightside and Ode to Joy.

It doesn’t matter if the search doesn’t pick out a template that perfectly fits the properties of the source, as this is what parameter estimation is for.
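Here is a toy version of the idea, with a made-up chirp instead of a real waveform and white noise instead of real detector noise (so no weighting by the noise spectrum is needed):

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "chirp" template: a sinusoid sweeping upwards in frequency,
# vaguely like an inspiral (purely illustrative, not a real waveform).
n = 4096
t = np.linspace(0.0, 1.0, n)
template = np.sin(2 * np.pi * (20 * t + 60 * t**2))

# Hide a weak copy of the template in much louder white noise.
data = rng.normal(0.0, 1.0, 3 * n)
start = 1000
data[start:start + n] += 0.2 * template

# Slide the template along the data: the normalised correlation at each
# offset is the matched-filter signal-to-noise ratio for white noise.
snr = np.correlate(data, template, mode="valid") / np.sqrt(template @ template)

peak = int(np.argmax(np.abs(snr)))
print(f"Loudest offset: {peak} (injected at {start}), SNR {abs(snr[peak]):.1f}")
```

Even though the injected signal has a fifth of the noise’s amplitude and is invisible by eye, the correlation picks out the injection time with a signal-to-noise ratio well above the background wibbles.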

The figure below shows how effective matched filtering can be.

• The top row shows the data from the two interferometers. It’s been cleaned up a little bit for the plot (to keep the experimentalists happy), but you can see that the noise in the detectors is seemingly much bigger than the best match template (shown in black, the same for both detectors).
• The second row shows the accumulation of signal-to-noise ratio (SNR). If you correlate the data with the template, you see that it matches the template, and keeps matching the template. This is the important part: although at any moment it looks like there are just random wibbles in the detector, when you compare with a template you find that there is actually a signal which evolves in a particular way. The SNR increases until the signal stops (because the black holes have merged). It is a little lower in the Livingston detector, as this was slightly less sensitive around the time of the Boxing Day Event.
• The third row shows how much total SNR you would get if you moved the best match template around in time. There’s a clear peak. This is trying to show that the way the signal changes is important, and you wouldn’t get a high SNR when the signal isn’t there (you would normally expect it to be about 1).
• The final row shows the amount of energy at a particular frequency at a particular time. Compact binary coalescences have a characteristic chirp, so you would expect a sweep from lower frequencies up to higher frequencies. You can just about make it out in these plots, but it’s not as obvious as for GW150914. This again shows the value of matched filtering, but it also shows that there’s no other weird glitchy stuff going on in the detectors at the time.

Observation of The Boxing Day Event in LIGO Hanford and LIGO Livingston. The top row shows filtered data and best match template. The second row shows how this template accumulates signal-to-noise ratio. The third row shows signal-to-noise ratio of this template at different end times. The fourth row shows a spectrogram of the data. Figure 1 of the Boxing Day Discovery Paper.

#### Electromagnetic follow-up

Electromagnetic astronomers have also reported on their searches for counterparts. No counterparts have been claimed, which isn’t surprising for a binary black hole coalescence.

#### Rounding

In various places, the mass of the smaller black hole is given as $8 M_\odot$. The median should really round to $7 M_\odot$, as to three significant figures it is $7.48 M_\odot$. That really confused everyone though, as with that rounding you’d have a binary with components of masses $14 M_\odot$ and $7 M_\odot$ and total mass $22 M_\odot$. Rounding is a pain! Fortunately, $8 M_\odot$ lies well within the uncertainty: the 90% range is $5.2\text{--}9.8 M_\odot$.

#### Black holes are massive

I tried to find a way to convert the mass of the final black hole into everyday scales. Unfortunately, the thing is so unbelievably massive, it just doesn’t work: it’s no use relating it to elephants or bowling balls. However, I did have some fun looking up numbers. Currently, it costs about £2 to buy a 180 gram bar of Cadbury’s Bourneville. Therefore, to buy an equivalent amount of dark chocolate would require everyone on Earth to save up for about 600 million times the age of the Universe (assuming GDP stays constant). By this point, I’m sure the chocolate will be past its best, so it’s almost certainly a big waste of time.
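If you want to check my sums, here they are (every input is a rough assumption: the final mass is about right for the Boxing Day Event, and world GDP is a round number, so expect only order-of-magnitude agreement):

```python
M_sun = 1.989e30            # kg
final_mass = 20.8 * M_sun   # final black hole mass, roughly (assumed)
bar_mass = 0.180            # kg of chocolate per bar
bar_price = 2.0             # £ per bar
world_gdp = 8e13            # £ per year, very roughly (assumed)
age_universe = 1.38e10      # years

cost = final_mass / bar_mass * bar_price   # £ for a black hole of chocolate
saving_time = cost / world_gdp             # years of everyone saving up
print(f"{saving_time / age_universe:.1e} times the age of the Universe")
```

Depending on what you assume for GDP you land somewhere in the hundreds of millions of Universe-ages; either way, the chocolate is not going to stay fresh.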

#### Maximum minimum spin

One of the statistics people really seemed to latch on to for the Boxing Day Event was that at least one of the binary black holes had to have a spin of greater than 0.2 with 99% probability. It’s a nice number for showing that we have a preference for some spin, but it can be a bit tricky to interpret. If we knew absolutely nothing about the spins, then we would have a uniform distribution on both spins. There’d be a 10% chance that the spin of the more massive black hole is less than 0.1, and a 10% chance that the spin of the other black hole is less than 0.1. Hence, there’s a 99% probability that there is at least one black hole with spin greater than 0.1, even though we have no evidence that the black holes are spinning (or not). Really, you need to look at the full probability distributions for the spins, and not just the summary statistics, to get an idea of what’s going on.
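You can check the prior-only part of this argument with a quick Monte Carlo:

```python
import random

random.seed(1)

# With both spins drawn uniformly from [0, 1] (i.e. knowing nothing),
# how often does at least one black hole have spin greater than 0.1?
trials = 100_000
hits = sum(
    max(random.random(), random.random()) > 0.1 for _ in range(trials)
)
fraction = hits / trials
print(f"{fraction:.3f}")  # analytically 1 - 0.1**2 = 0.99
```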

#### Just one more thing…

The fit for the black hole mass distribution was the last thing to go in the paper. It was a bit frantic to get everything reviewed in time. In the last week, there were a couple of loud exclamations from the office next to mine, occupied by John Veitch, who as one of the CBC chairs has to keep everything and everyone organised. (I’m not quite sure how John still has so much of his hair). It seems that we just can’t stop doing science. There is a more sophisticated calculation in the works, but the foot was put down that we’re not trying to cram any more into the current papers.

# Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes

I love collecting things; there’s something extremely satisfying about completing a set. I suspect that this is one of the alluring features of Pokémon—you’ve gotta catch ’em all. The same is true of black hole hunting. Currently, we know of stellar-mass black holes, which are a few to a few tens of times the mass of our Sun (the black holes of GW150914 are the biggest yet to be observed), and we know of supermassive black holes, which are ten thousand to ten billion times the mass of our Sun. However, we are missing the intermediate-mass black holes which lie in between. We have Charmander and Charizard, but where is Charmeleon? The elusive ones are always the most satisfying to capture.

Adorable black hole (available for adoption). I’m sure this could be a Pokémon. It would be a Dark type. Not that I’ve given it that much thought…

Intermediate-mass black holes have evaded us so far. We’re not even sure that they exist, although their absence would raise questions about how you end up with the supermassive ones (you can’t just feed the stellar-mass ones lots of rare candy). Astronomers have suggested that you could spot intermediate-mass black holes in globular clusters by the impact of their gravity on the motion of other stars. However, this effect would be small, and near impossible to conclusively spot. Another way (which I’ve discussed before) would be to look at ultraluminous X-ray sources, which could be from a disc of material spiralling into the black hole. However, it’s difficult to be certain that we understand the source properly and that we’re not misclassifying it. There could be one sure-fire way of identifying intermediate-mass black holes: gravitational waves.

The frequency of gravitational waves depends upon the mass of the binary. More massive systems produce lower frequencies. LIGO is sensitive to the right range of frequencies for stellar-mass black holes: GW150914 chirped up to the pitch of a guitar’s open B string (just below middle C). Supermassive black holes produce gravitational waves at too low a frequency for LIGO (a space-based detector would be perfect for these). We might just be able to detect signals from intermediate-mass black holes with LIGO.
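To get a feel for the scaling, the gravitational-wave frequency at the innermost stable circular orbit (ISCO) sets a rough upper frequency for the inspiral, $f_\mathrm{ISCO} \approx c^3/(6^{3/2} \pi G M)$; the merger and ringdown push somewhat above this. The example masses are my own picks:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def f_isco(total_mass_in_msun):
    """Gravitational-wave frequency at ISCO for a binary of total mass M:
    f = c**3 / (6**1.5 * pi * G * M)."""
    M = total_mass_in_msun * M_sun
    return c**3 / (6**1.5 * math.pi * G * M)

# A GW150914-like system, then two heavier (intermediate-mass) examples.
for mass in (65, 300, 1000):
    print(f"{mass:5d} Msun -> {f_isco(mass):6.1f} Hz")
```

A $1000 M_\odot$ binary finishes its inspiral at a few hertz, right at the edge of LIGO’s sensitive band, which is why low-frequency sensitivity matters so much for these sources.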

In a recent paper, a group of us from Birmingham looked at what we could learn from gravitational waves from the coalescence of an intermediate-mass black hole and a stellar-mass black hole [bonus note].  We considered how well you would be able to measure the masses of the black holes. After all, to confirm that you’ve found an intermediate-mass black hole, you need to be sure of its mass.

The signals are extremely short: we only can detect the last bit of the two black holes merging together and settling down as a final black hole. Therefore, you might think there’s not much information in the signal, and we won’t be able to measure the properties of the source. We found that this isn’t the case!

We considered a set of simulated signals, and analysed these with our parameter-estimation code [bonus note]. Below are a couple of plots showing the accuracy to which we can infer a couple of different mass parameters for binaries of different masses. We show the accuracy of measuring the chirp mass $\mathcal{M}$ (a much beloved combination of the two component masses which we are usually able to pin down precisely) and the total mass $M_\mathrm{total}$.
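For reference, the chirp mass is $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}$. The example masses below are my own picks, chosen to show how an unequal-mass binary has a chirp mass far below its total mass:

```python
def chirp_mass(m1, m2):
    """Chirp mass: (m1 * m2)**(3/5) / (m1 + m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Equal-mass binary: the chirp mass is a fixed fraction of the total,
# m / 2**0.2 (about 87 for a 100+100 system).
print(chirp_mass(100.0, 100.0))

# Unequal masses (an intermediate-mass black hole with a stellar-mass
# companion): the chirp mass is much smaller than the total mass of 110.
print(chirp_mass(100.0, 10.0))
```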

Measured chirp mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. The mass ratio $q$ is the mass of the stellar-mass black hole divided by the mass of the intermediate-mass black hole. Figure 1 of Haster et al. (2016).

Measured total mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. Figure 2 of Haster et al. (2016).

For the lower mass systems, we can measure the chirp mass quite well. This is because we get a little information from the part of the gravitational wave from when the two components are inspiralling together. However, we see less and less of this as the mass increases, and we become more and more uncertain of the chirp mass.

The total mass isn’t as accurately measured as the chirp mass at low masses, but we see that the accuracy doesn’t degrade at higher masses. This is because we get some constraints on its value from the post-inspiral part of the waveform.

We found that the transition from having better fractional accuracy on the chirp mass to having better fractional accuracy on the total mass happened when the total mass was around 200–250 solar masses. This was assuming final design sensitivity for Advanced LIGO. We currently don’t have as good sensitivity at low frequencies, so the transition will happen at lower masses: GW150914 is actually in this transition regime (the chirp mass is measured a little better).

Given our uncertainty on the masses, when can we conclude that there is an intermediate-mass black hole? If we classify black holes with masses of more than 100 solar masses as intermediate mass, then we’ll be able to claim a discovery with 95% probability if the source has a black hole of at least 130 solar masses. The plot below shows our inferred probability of there being an intermediate-mass black hole as we increase the black hole’s mass (there’s little chance of falsely identifying a lower mass black hole).

Probability that the larger black hole is over 100 solar masses (our cut-off mass for intermediate-mass black holes $M_\mathrm{IMBH}$). Figure 7 of Haster et al. (2016).

Gravitational-wave observations could lead to a concrete detection of intermediate mass black holes if they exist and merge with another black hole. However, LIGO’s low frequency sensitivity is important for detecting these signals. If detector commissioning goes to plan and we are lucky enough to detect such a signal, we’ll finally be able to complete our set of black holes.

arXiv: 1511.01431 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 457(4):4499–4506; 2016
Birmingham science summary: Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes (by Carl)
Other collectables: Breakthrough, Gruber, Shaw, Kavli

### Bonus notes

#### Jargon

The coalescence of an intermediate-mass black hole and a stellar-mass object (black hole or neutron star) has typically been known as an intermediate mass-ratio inspiral (an IMRI). This is similar to the name for the coalescence of a supermassive black hole and a stellar-mass object: an extreme mass-ratio inspiral (an EMRI). However, my colleague Ilya has pointed out that with LIGO we don’t really see much of the intermediate-mass black hole and the stellar-mass black hole inspiralling together; instead we see the merger and ringdown of the final black hole. Therefore, he prefers the name intermediate mass-ratio coalescence (or IMRAC). It’s a better description of the signal we measure, but the acronym isn’t as good.

#### Parameter-estimation runs

The main parameter-estimation analysis for this paper was done by Zhilu, a summer student. This is notable for two reasons. First, it shows that useful research can come out of a summer project. Second, our parameter-estimation code installed and ran so smoothly that even an undergrad with no previous experience could get some useful results. This made us optimistic that everything would work perfectly in the upcoming observing run (O1). Unfortunately, a few improvements were made to the code before then, and we were back to the usual level of fun in time for The Event.

# Search of the Orion spur for continuous gravitational waves using a loosely coherent algorithm on data from LIGO interferometers

A cloudy bank holiday Monday is a good time to catch up on blogging. Following the splurge of GW150914 papers, I’ve rather fallen behind. Published back in February, this paper is a search for continuous-wave signals: the almost-constant hum produced by rapidly rotating neutron stars.

Continuous-wave searches are extremely computationally expensive. The searches take a while to do, which can lead to a delay before results are published [bonus note]. This is the result of a search using data from LIGO’s sixth science run (March–October 2010).

To detect a continuous wave, you need to sift the data to find a signal that is present throughout all of it. Rotating neutron stars produce a gravitational-wave signal with a frequency twice their rotational frequency. This frequency is almost constant, but could change as the observation goes on because (i) the neutron star slows down as energy is lost (to gravitational waves, magnetic fields or some form of internal sloshing around); (ii) there is some Doppler shifting because of the Earth’s orbit around the Sun; and, possibly, (iii) there could be some Doppler shifting because the neutron star is orbiting another object. How do you check for something that is always there?
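To see the size of these effects, here is a toy model of the observed frequency including spin-down and the annual Doppler modulation (all parameter values are assumptions for illustration, and the geometry is simplified to a source in the ecliptic plane; this is not the search’s actual signal model):

```python
import math

f0 = 100.0          # intrinsic gravitational-wave frequency, Hz (assumed)
fdot = -1e-9        # slow spin-down, Hz/s (assumed)
v_earth = 2.98e4    # Earth's orbital speed, m/s
c = 2.998e8         # speed of light, m/s
year = 3.156e7      # seconds in a year

def observed_frequency(t):
    """Observed frequency at time t (in seconds): intrinsic spin-down
    plus the annual Doppler modulation from the Earth's orbit."""
    doppler = f0 * (v_earth / c) * math.cos(2 * math.pi * t / year)
    return f0 + fdot * t + doppler

# Over a year the signal wanders by a few hundredths of a hertz: tiny,
# but many frequency bins for a months-long observation, which is why
# fully coherent searches need so many templates.
vals = [observed_frequency(k * year / 100) for k in range(100)]
drift = max(vals) - min(vals)
print(f"Frequency wander over a year: {drift:.3f} Hz")
```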

There are two basic strategies for spotting continuous waves. First, we could look for excess power in a particular frequency bin. If we measure something in addition to what we expect from the detector noise, this could be a signal. Looking at the power is simple, and so not too expensive. However, we’re not using any information about what a real signal should look like, and so it must be really loud for us to be sure that it’s not just noise. Second, we could coherently search for signals using templates for the expected signals. This is much more work, but gives much better sensitivity. Is there a way to compromise between the two strategies to balance cost and sensitivity?

This paper reports results of a loosely coherent search. Instead of checking how well the data match particular frequencies and frequency evolutions, we average over a family of similar signals. This is less sensitive, as we get a bit more wiggle room in what would be identified as a candidate, but it is also less expensive than checking against a huge number of templates.

We could only detect continuous waves from nearby sources: neutron stars in our own Galaxy (perhaps 0.01% of the distance of GW150914). It therefore makes sense to check nearby locations which could be home to neutron stars. This search narrows its range to two directions in the Orion spur, our local band with a high concentration of stars. By focussing on these spotlight regions, we increase the sensitivity of the search for a given computational cost. This search could possibly dig out signals from twice as far away as if we were considering all possible directions.

Artist’s impression of the local part of the Milky Way. The Orion spur connects the Perseus and Sagittarius arms. The yellow cones mark the extent of the search (the pink circle shows the equivalent all-sky sensitivity). Green stars indicate known pulsars. Original image: NASA/JPL-Caltech/ESO/R. Hurt.

The search found 70 interesting candidates. Follow-up study showed that most were due to instrumental effects. There were three interesting candidates left after these checks, none significant enough to be a detection, but still worth looking at in detail. A full coherent analysis was done for these three candidates. This showed that they were probably caused by noise. We have no detections.

arXiv: 1510.03474 [gr-qc]
Journal: Physical Review D; 93(4):042006(14); 2016
Science summary: Scouting our Galactic neighborhood
Other bank holiday activities:
Scrabble

Bank holiday family Scrabble game. When thinking about your next turn, you could try seeing if your letters match a particular word (a coherent search which would get you the best score, but take ages), or just if your letters jumble together to make something word-like (an incoherent search, that is quick, but may result in lots of things that aren’t really words).

### Bonus note

#### Niceness

The Continuous Wave teams are polite enough to wait until we’re finished searching for transient gravitational-wave signals (which are more time sensitive) before taking up the LIGO computing clusters. They won’t have any proper results from O1 just yet.