Observing run 1—The papers

The second observing run (O2) of the advanced gravitational wave detectors is now over, which has reminded me how dreadfully behind I am in writing about papers. In this post I’ll summarise results from our first observing run (O1), which ran from September 2015 to January 2016.

I’ll add to this post as I get time, and as papers are published. I’ve started off with just papers searching for compact binary coalescences (as these are closest to my own research). There are separate posts on our detections GW150914 (and its follow-up papers: set I, set II) and GW151226 (this post includes our end-of-run summary of the search for binary black holes, including details of LVT151012).

Transient searches

The O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Title: Upper limits on the rates of binary neutron star and neutron-star–black-hole mergers from Advanced LIGO’s first observing run
arXiv: 1607.07456 [astro-ph.HE]
Journal: Astrophysical Journal Letters; 832(2):L21(15); 2016

Our main search for compact binary coalescences targets binary black holes (binaries of two black holes), binary neutron stars (two neutron stars) and neutron-star–black-hole binaries (one of each). Having announced the results of our search for binary black holes, this paper gives the details for the rest. Since we didn’t make any detections, we set some new, stricter upper limits on their merger rates. For binary neutron stars, this is R_{90\%} = 12,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}.

More details: O1 Binary Neutron Star/Neutron Star–Black Hole Paper summary

The O1 Gamma-Ray Burst Paper

Title: Search for gravitational waves associated with gamma-ray bursts during the first Advanced LIGO observing run and implications for the origin of GRB 150906B
arXiv: 1611.07947 [astro-ph.HE]
Journal: Astrophysical Journal; 841(2):89(18); 2017
LIGO science summary: What’s behind the mysterious gamma-ray bursts? LIGO’s search for clues to their origins

Some binary neutron star or neutron-star–black-hole mergers may be accompanied by a gamma-ray burst. This paper describes our search for signals coinciding with observations of gamma-ray bursts (including GRB 150906B, which was potentially especially close by). Knowing when to look makes it easy to distinguish a signal from noise. We don’t find anything, so we can exclude any close binary mergers as sources of these gamma-ray bursts.

More details: O1 Gamma-Ray Burst Paper summary

The O1 Intermediate Mass Black Hole Binary Paper

Title: Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO
arXiv: 1704.04628 [gr-qc]
Journal: Physical Review D; 96(2):022001(14); 2017
LIGO science summary: Search for mergers of intermediate-mass black holes

Our main search for binary black holes in O1 targeted systems with masses less than about 100 solar masses. There could be more massive black holes out there. Our detectors are sensitive to signals from binaries up to a few hundred solar masses, but these are difficult to detect because they are so short. This paper describes our search specially designed for such systems. This combines techniques which use waveform templates and those which look for unmodelled transients (bursts). Since we don’t find anything, we set some new upper limits on merger rates.

More details: O1 Intermediate Mass Black Hole Binary Paper summary

The O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Synopsis: O1 Binary Neutron Star/Neutron Star–Black Hole Paper
Read this if: You want a change from black holes
Favourite part: We’re getting closer to detection (and it’ll still be interesting if we don’t find anything)

The Compact Binary Coalescence (CBC) group target gravitational waves from three different flavours of binary in our main search: binary neutron stars, neutron star–black hole binaries and binary black holes. Before O1, I would have put my money on us detecting a binary neutron star first, around-about O3. Reality had other ideas, and we discovered binary black holes. Those results were reported in the O1 Binary Black Hole Paper; this paper goes into our results for the others (which we didn’t detect).

To search for signals from compact binaries, we use a bank of gravitational wave signals to match against the data. This bank goes up to total masses of 100 solar masses. We split the bank up, so that objects below 2 solar masses are considered neutron stars. This doesn’t make too much difference to the waveforms we use to search (neutron stars, being made of stuff, can be tidally deformed by their companion, which adds some extra features to the waveform, but we don’t include these in the search). However, we do limit the spins for neutron stars to less than 0.05, as this encloses the range of spins estimated for neutron star binaries from binary pulsars. This choice shouldn’t impact our ability to detect neutron stars with moderate spins too much.

We didn’t find any interesting events: the results were consistent with there just being background noise. If you read really carefully, you might have deduced this already from the O1 Binary Black Hole Paper, as the results from the different types of binaries are completely decoupled. Since we didn’t find anything, we can set some upper limits on the merger rates for binary neutron stars and neutron star–black hole binaries.

The expected number of events found in the search is given by

\Lambda = R \langle VT \rangle

where R is the merger rate, and \langle VT \rangle is the surveyed time–volume (you expect more detections if your detectors are more sensitive, so that they can find signals from further away, or if you leave them on for longer). We can estimate \langle VT \rangle by performing a set of injections and seeing how many are found/missed at a given threshold. Here, we use a false alarm rate of one per century. Given our estimate for \langle VT \rangle and our observation of zero detections, we can calculate a probability distribution for R using Bayes’ theorem. This requires a choice for a prior distribution of \Lambda. We use a uniform prior, for consistency with what we’ve done in the past.

With a uniform prior, the upper limit on the rate at confidence level c is

\displaystyle R_c = \frac{-\ln(1-c)}{\langle VT \rangle},

so the 90% confidence upper limit is R_{90\%} = 2.30/\langle VT \rangle. This is quite commonly used; for example, we make use of it in the O1 Intermediate Mass Black Hole Binary Search. For comparison, if we had used a Jeffreys prior proportional to 1/\sqrt{\Lambda}, the equivalent result is

\displaystyle R_c = \frac{\left[\mathrm{erf}^{-1}(c)\right]^2}{\langle VT \rangle},

and hence R_{90\%} = 1.35/\langle VT \rangle, so results would be the same to within a factor of 2, but the results with the uniform prior are more conservative.
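Both prefactors are easy to verify numerically. Here is a minimal sketch in plain Python (my own illustration, not code from the paper; the inverse error function is found by bisection, since the standard library only provides erf):

```python
import math

def rate_limit_uniform(c, vt):
    """c-confidence upper limit on the rate with a uniform prior:
    R_c = -ln(1 - c) / <VT>."""
    return -math.log(1.0 - c) / vt

def erfinv(y, lo=0.0, hi=6.0, tol=1e-12):
    """Inverse error function by bisection (math.erf is monotonic)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rate_limit_jeffreys(c, vt):
    """c-confidence upper limit with a Jeffreys prior:
    R_c = [erfinv(c)]^2 / <VT>."""
    return erfinv(c) ** 2 / vt

vt = 1.0  # surveyed time-volume in Gpc^3 yr (illustrative value)
print(rate_limit_uniform(0.9, vt))   # ~2.30
print(rate_limit_jeffreys(0.9, vt))  # ~1.35
```

With \langle VT \rangle set to one, the two functions return the 2.30 and 1.35 prefactors directly, and the uniform-prior limit is the more conservative (larger) of the two.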

The plot below shows upper limits for different neutron star masses, assuming that neutron star spins are (uniformly distributed) between 0 and 0.05 and isotropically orientated. From our observations of binary pulsars, we have seen that most of these neutron stars have masses of ~1.35 solar masses, so we can also put a limit on the binary neutron star merger rate assuming that their masses are normally distributed with mean of 1.35 solar masses and standard deviation of 0.13 solar masses. This gives an upper limit of R_{90\%} = 12,100~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} for isotropic spins up to 0.05, and R_{90\%} = 12,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} if you allow the spins up to 0.4.

Upper merger rate limits for binary neutron stars

90% confidence upper limits on the binary neutron star merger rate. These rates assume randomly orientated spins up to 0.05. Results are calculated using PyCBC, one of our search algorithms; GstLAL gives similar results. Figure 4 of the O1 Binary Neutron Star/Neutron Star–Black Hole Paper

For neutron star–black hole binaries there’s a greater variation in possible merger rates because the black holes can have a greater range of masses and spins. The upper limits range from about R_{90\%} = 1,200~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} to 3,600~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} for a 1.4 solar mass neutron star and a black hole between 5 and 30 solar masses and a range of different spins (Table II of the paper).

It’s not surprising that we didn’t see anything in O1, but what about in future runs? The plots below compare projections for our future sensitivity with various predictions for the merger rates of binary neutron stars and neutron star–black hole binaries. A few things have changed since we made these projections, for example O2 ended up being 9 months instead of 6 months, but I think we’re still somewhere in the O2 band. We’ll have to see for O3. From these, it’s clear that expecting a detection in O1 was overly optimistic. In O2 and O3 it becomes more plausible. This means even if we don’t see anything, we’ll still be doing some interesting astrophysics as we can start ruling out some models.

Comparison of merger rates

Comparison of upper limits for binary neutron star (BNS; top) and neutron star–black hole binaries (NSBH; bottom) merger rates with theoretical and observational limits. The blue bars show O1 limits, the green and orange bars show projections for future observing runs. Figures 6 and 7 from the O1 Binary Neutron Star/Neutron Star–Black Hole Paper

Binary neutron star or neutron star–black hole mergers may be the sources of gamma-ray bursts. These are some of the most energetic explosions in the Universe, but we’re not sure where they come from (I actually find that kind of worrying). We look at this connection a bit more in the O1 Gamma-Ray Burst Paper. The theory is that during the merger, neutron star matter gets ripped apart, squeezed and heated, and as part of this we get jets blasted outwards from the swirling material. There are always jets in these types of things. We see the gamma-ray burst if we are looking down the jet: the wider the jet, the larger the fraction of gamma-ray bursts we see. By comparing our estimated merger rates with the estimated rate of gamma-ray bursts, we can place some lower limits on the opening angle of the jet. If all gamma-ray bursts come from binary neutron stars, the opening angle needs to be bigger than 2.3_{-1.7}^{+1.7}~\mathrm{deg}, and if they all come from neutron star–black hole mergers the angle needs to be bigger than 4.3_{-1.9}^{+3.1}~\mathrm{deg}.
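The logic of the jet argument can be sketched in a few lines: if every short gamma-ray burst comes from a merger, the observed burst rate can’t exceed the merger rate times the beaming fraction f_b = 1 - \cos\theta, which gives a minimum opening angle. A minimal sketch (the short gamma-ray burst rate of 10~\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1} below is an assumed illustrative value, not a number from the paper):

```python
import math

def min_opening_angle(grb_rate, merger_rate_upper):
    """Minimum jet half-opening angle in degrees, assuming every merger
    produces a gamma-ray burst and we only see bursts beamed towards us.
    The beaming fraction f_b = 1 - cos(theta) must satisfy
    f_b >= grb_rate / merger_rate_upper."""
    fb_min = grb_rate / merger_rate_upper
    return math.degrees(math.acos(1.0 - fb_min))

# Illustrative short gamma-ray burst rate (assumed) versus the O1
# binary neutron star merger rate upper limit.
print(min_opening_angle(10.0, 12600.0))  # roughly 2.3 deg
```

A tighter (smaller) merger rate upper limit pushes the minimum opening angle up, which is why the limits will sharpen as our sensitivity improves.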

The O1 Gamma-Ray Burst Paper

Synopsis: O1 Gamma-Ray Burst Paper
Read this if: You like explosions. But from a safe distance
Favourite part: We exclude GRB 150906B from being associated with galaxy NGC 3313

Gamma-ray bursts are extremely violent explosions. They come in two (overlapping) classes: short and long. Short gamma-ray bursts are typically shorter than ~2 seconds and have a harder spectrum (more high energy emission). We think that these may come from the coalescence of neutron star binaries. Long gamma-ray bursts are (shockingly) typically longer than ~2 seconds, and have a softer spectrum (less high energy emission). We think that these could originate from the collapse of massive stars (like a supernova explosion). The introduction of the paper contains a neat review of the physics of both these types of sources. Both types of progenitors would emit gravitational waves that could be detected if the source was close enough.

The binary mergers could be picked up by our templated search (as reported in the O1 Binary Neutron Star/Neutron Star–Black Hole Paper): we have good models for what these signals look like, which allows us to efficiently search for them. We don’t have good models for the collapse of stars, but our unmodelled searches could pick these up. These look for the same signal in multiple detectors, but since they don’t know what they are looking for, it is harder to distinguish a signal from noise than for the templated search. Cross-referencing our usual searches with the times of gamma-ray bursts could help us boost the significance of a trigger: it might not be noteworthy as just a weak gravitational-wave (or gamma-ray) candidate, but considering the two together makes it much more unlikely that a coincidence would happen by chance. The on-line RAVEN pipeline monitors for alerts to minimise the chance that we miss a coincidence. As well as relying on our standard searches, we also do targeted searches following up on gamma-ray bursts, using the information from these external triggers.

We used two search algorithms:

  • X-Pipeline is an unmodelled search (similar to cWB) which looks for a coherent signal, consistent with the sky position of the gamma-ray burst. This was run for all the gamma-ray bursts (long and short) for which we have good data from both LIGO detectors and a good sky location.
  • PyGRB is a modelled search which looks for binary signals using templates. Our main binary search algorithms check for coincident signals: a signal matching the same template in both detectors with compatible times. This search instead looks for coherent signals, factoring in the source direction. This gives extra sensitivity (~20%–25% in terms of distance). Since we know what the signal looks like, we can also use this algorithm to look for signals when only one detector is taking data. We used this algorithm on all short (or ambiguously classified) gamma-ray bursts for which we have data from at least one detector.
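Since the surveyed volume scales as the cube of the sensitive distance, that ~20%–25% gain in distance is a bigger gain in the rate of detectable sources. A quick check (my own illustration):

```python
def volume_gain(distance_gain):
    """Surveyed volume scales as the cube of the sensitive distance."""
    return distance_gain ** 3

for g in (1.20, 1.25):
    print(f"{(g - 1) * 100:.0f}% further -> {volume_gain(g):.2f}x the volume")
```

So a coherent search reaching 20%–25% further surveys roughly 1.7–2 times the volume of the coincident search.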

In total we analysed times corresponding to 42 gamma-ray bursts: 41 which occurred during O1 plus GRB 150906B. This happened during the engineering run before the start of O1, and luckily Hanford was in a stable observing state at the time. GRB 150906B was localised to come from a part of the sky close to the galaxy NGC 3313, which is only 54 megaparsec away. This is within the regime where we could have detected a binary merger. This caused much excitement at the time—people thought that this could be the most interesting result of O1—but this dampened down a week later with the detection of GW150914.

GRB 150906B sky location

Interplanetary Network (IPN) localization for GRB 150906B and nearby galaxies. Figure 1 from the O1 Gamma-Ray Burst Paper.

We didn’t find any gravitational-wave counterparts. This means that we could place some lower limits on how far away their sources must be. We performed injections of signals—using waveforms from binaries, collapsing stars (approximated with circular sine–Gaussian waveforms), and unstable discs (using an accretion disc instability model)—to see how far away we could have detected a signal, and set 90% probability limits on the distances (see Table 3 of the paper). The best of these are ~100–200 megaparsec (the worst is just 4 megaparsec, which is basically next door). These results aren’t too interesting yet; they will become more so in the future, and around the time we hit design sensitivity we will start overlapping with electromagnetic measurements of distances for short gamma-ray bursts. However, we can rule out GRB 150906B coming from NGC 3313 at high probability!

The O1 Intermediate Mass Black Hole Binary Paper

Synopsis: O1 Intermediate Mass Black Hole Binary Paper
Read this if: You like intermediate mass black holes (black holes of ~100 solar masses)
Favourite part: The teamwork between different searches

Black holes could come in many sizes. We know of stellar-mass black holes, the collapsed remains of dead stars, which are a few to a few tens of times the mass of our Sun, and we know of (super)massive black holes, lurking in the centres of galaxies, which are tens of thousands to billions of times the mass of our Sun. Between the two lie the elusive intermediate mass black holes. There have been repeated claims of observational evidence for their existence, but these are notoriously difficult to confirm. Gravitational waves provide a means of confirming the reality of intermediate mass black holes, if they do exist.

The gravitational wave signal emitted by a binary depends upon the mass of its components. More massive objects produce louder signals, but these signals also end at lower frequencies. The merger frequency of a binary is inversely proportional to the total mass. Ground-based detectors can’t detect massive black hole binaries as they are too low frequency, but they can detect binaries of a few hundred solar masses. We look for these in this search.
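To see how steeply the signal’s end frequency falls with mass, here is a back-of-the-envelope sketch (my own illustration, not a calculation from the paper) using the gravitational-wave frequency at the innermost stable circular orbit of a Schwarzschild black hole with the binary’s total mass:

```python
import math

C = 299_792_458.0   # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg

def f_isco(total_mass_msun):
    """Gravitational-wave frequency at the innermost stable circular
    orbit of a Schwarzschild black hole: f = c^3 / (6^{3/2} pi G M).
    Roughly 4.4 kHz divided by the total mass in solar masses."""
    m = total_mass_msun * M_SUN
    return C ** 3 / (6 ** 1.5 * math.pi * G * m)

print(f_isco(2.8))   # binary neutron star: ~1.6 kHz
print(f_isco(65))    # GW150914-like binary: ~70 Hz
print(f_isco(400))   # intermediate mass binary: ~10 Hz
```

A few-hundred solar mass binary merges around 10 Hz, right at the edge of the detectors’ low-frequency sensitivity, which is why these signals are so short in band.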

Our flagship search for binary black holes looks for signals using matched filtering: we compare the data to a bank of template waveforms. The bank extends up to a total mass of 100 solar masses. This search continues above this (there’s actually some overlap as we didn’t want to miss anything, but we shouldn’t have worried). Higher mass binaries are hard to detect as they are shorter, and so more difficult to distinguish from a little blip of noise, which is why this search was treated differently.

As well as using templates, we can do an unmodelled (burst) search for signals by looking for coherent signals in both detectors. This type of search isn’t as sensitive, as you don’t know what you are looking for, but can pick up short signals (like GW150914).

Our search for intermediate mass black holes uses both a modelled search (with templates spanning total masses of 50 to 600 solar masses) and a specially tuned burst search. Both make sure to include low frequency data in their analysis. This work is one of the few cross-working group (CBC for the templated search, and Burst for the unmodelled) projects, and I was pleased with the results.

This is probably where you expect me to say that we didn’t detect anything so we set upper limits. That is actually not the case here: we did detect something! Unfortunately, it wasn’t what we were looking for. We detected GW150914, which was a relief as it did lie within the range we were searching, as well as LVT151012 and GW151226. These were more of a surprise. GW151226 has a total mass of just ~24 solar masses (as measured including cosmological redshift), and so is well outside our bank. It was actually picked up just on the edge, but still, it’s impressive that the searches can find things beyond what they are aiming to pick up. Having found no intermediate mass black holes, we went and set some upper limits. (Yay!)

To set our upper limits, we injected some signals from binaries with specific masses and spins, and then saw how many would have been found with greater significance than our most significant trigger (after excluding GW150914, LVT151012 and GW151226). This is effectively asking the question of when we would see something as significant as this trigger, which we think is just noise. This gives us a sensitive time–volume \langle VT \rangle which we have surveyed and found no mergers in. We use this to set 90% upper limits on the merger rates R_{90\%} = 2.3/\langle VT \rangle, and define an effective distance D_{\langle VT \rangle} such that \langle VT \rangle = T_a (4\pi D_{\langle VT \rangle}^3/3), where T_a is the analysed amount of time. The plot below shows our limits on rate and effective distance for our different injections.
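The conversions from \langle VT \rangle to a rate limit and an effective distance follow directly from those definitions. A minimal sketch (the numbers plugged in are illustrative, not the paper’s):

```python
import math

def rate_limit_90(vt):
    """90% upper limit on the merger rate from zero detections with a
    uniform prior: R_90 = -ln(0.1)/<VT> ~ 2.3/<VT>."""
    return -math.log(0.1) / vt

def effective_distance(vt, t_analysed):
    """Distance D such that <VT> = T_a * (4/3) * pi * D^3."""
    return (3.0 * vt / (4.0 * math.pi * t_analysed)) ** (1.0 / 3.0)

# Illustrative (assumed) numbers: <VT> = 1e-2 Gpc^3 yr surveyed over
# T_a = 0.13 yr of analysed time.
vt, t_a = 1e-2, 0.13
print(rate_limit_90(vt))                   # rate limit in Gpc^-3 yr^-1
print(effective_distance(vt, t_a) * 1e3)   # effective distance in Mpc
```

A larger \langle VT \rangle means a lower rate limit and a larger effective distance, which is the trend across the injection sets in the figure below.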

Intermediate mass black hole binary search results

Results from the O1 search for intermediate mass black hole binaries. The left panel shows the 90% confidence upper limit on the merger rate. The right panel shows the effective search distance. Each circle is a different injection. All have zero spin, except two 100+100 solar mass sets, where \chi indicates the spin aligned with the orbital angular momentum. Figure 2 of the O1 Intermediate Mass Black Hole Binary Paper.

There are a couple of caveats associated with our limits. The waveforms we use don’t include all the relevant physics (like orbital eccentricity and spin precession). Including everything is hard: we may use some numerical relativity waveforms in the future. However, they should give a good impression of our sensitivity. There’s quite a big improvement compared to previous searches (S6 Burst Search; S6 Templated Search). This comes from the improvement of Advanced LIGO’s sensitivity at low frequencies compared to initial LIGO. Future improvements to the low frequency sensitivity should increase our probability of making a detection.

I spent a lot of time working on this search as I was the review chair. As a reviewer, I had to make sure everything was done properly, and then reported accurately. I think our review team did a thorough job. I was glad when we were done, as I dislike being the bad cop.



Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignments

Gravitational waves allow us to infer the properties of binary black holes (two black holes in orbit about each other), but can we use this information to figure out how the black holes and the binary form? In this paper, we show that measurements of the black holes’ spins can help us figure this out, but probably not until we have at least 100 detections.

Black hole spins

Black holes are described by their masses (how much they bend spacetime) and their spins (how much they drag spacetime to rotate about them). The orientation of the spins relative to the orbit of the binary could tell us something about the history of the binary [bonus note].

We considered four different populations of spin–orbit alignments to see if we could tell them apart with gravitational-wave observations:

  1. Aligned—matching the idealised example of isolated binary evolution. This stands in for the case where misalignments are small, which might be the case if material blown off during a supernova ends up falling back and being swallowed by the black hole.
  2. Isotropic—matching the expectations for dynamically formed binaries.
  3. Equal misalignments at birth—this would be the case if the spins and orbit were aligned before the second supernova, which then tilted the plane of the orbit. (As the binary inspirals, the spins wobble around, so the two misalignment angles won’t always be the same).
  4. Both spins misaligned by supernova kicks, assuming that the stars were aligned with the orbit before exploding. This gives a more general scatter of unequal misalignments, but typically the primary (bigger and first forming) black hole is more misaligned.

These give a selection of possible spin alignments. For each, we assumed that the spin magnitude was the same and had a value of 0.7. This seemed like a sensible idea when we started this study [bonus note], but is now towards the upper end of what we expect for binary black holes.

Hierarchical analysis

To measure the properties of the population we need to perform a hierarchical analysis: there are two layers of inference, one for the individual binaries, and one for the population.

From a gravitational wave signal, we infer the properties of the source using Bayes’ theorem. Given the data d_\alpha, we want to know the probability that the parameters \mathbf{\Theta}_\alpha have different values, which is written as p(\mathbf{\Theta}_\alpha|d_\alpha). This is calculated using

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha) p(\mathbf{\Theta}_\alpha)}{p(d_\alpha)},

where p(d_\alpha | \mathbf{\Theta}_\alpha) is the likelihood, which we can calculate from our knowledge of the noise in our gravitational wave detectors, p(\mathbf{\Theta}_\alpha) is the prior on the parameters (what we would have guessed before we had the data), and the normalisation constant p(d_\alpha) is called the evidence. We’ll use the evidence again in the next layer of inference.

Our prior on the parameters should actually depend upon what we believe about the astrophysical population. It is different if we believed that Model 1 were true (when we’d only consider aligned spins) than for Model 2. Therefore, we should really write

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha, \lambda) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha,\lambda) p(\mathbf{\Theta}_\alpha|\lambda)}{p(d_\alpha|\lambda)},

where \lambda denotes which model we are considering.

This is an important point to remember: if you are using our LIGO results to test your theory of binary formation, you need to remember to correct for our choice of prior. We try to pick non-informative priors—priors that don’t make strong assumptions about the physics of the source—but this doesn’t mean that they match what would be expected from your model.

We are interested in the probability distribution for the different models: how many binaries come from each. Given a set of different observations \{d_\alpha\}, we can work this out using another application of Bayes’ theorem (yay)

\displaystyle p(\mathbf{\lambda}|\{d_\alpha\}) = \frac{p(\{d_\alpha\} | \mathbf{\lambda}) p(\mathbf{\lambda})}{p(\{d_\alpha\})},

where p(\{d_\alpha\} | \mathbf{\lambda}) is just all the evidences for the individual events (given that model) multiplied together, p(\mathbf{\lambda}) is our prior for the different models, and p(\{d_\alpha\}) is another normalisation constant.
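As a toy illustration of this second layer of inference (my own sketch with made-up numbers, not the analysis of the paper): for just two models, the posterior on the fraction f of events from the first model is proportional to the product over events of the fraction-weighted evidences.

```python
import math

# Made-up per-event evidences under two models (say aligned vs isotropic).
# Each tuple is one event: (Z under model 1, Z under model 2).
evidences = [(0.8, 0.2), (0.7, 0.3), (0.9, 0.1), (0.2, 0.8), (0.5, 0.5)]

def posterior_on_fraction(evidences, n_grid=1001):
    """Grid posterior on f, the fraction of events from model 1, with a
    uniform prior on f: p(f|{d}) is proportional to
    prod_a [f * Z1_a + (1 - f) * Z2_a]."""
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    log_post = [sum(math.log(f * z1 + (1.0 - f) * z2)
                    for z1, z2 in evidences) for f in grid]
    # normalise so the density integrates to one over [0, 1]
    m = max(log_post)
    post = [math.exp(lp - m) for lp in log_post]
    norm = sum(post) / (n_grid - 1)
    return grid, [p / norm for p in post]

grid, post = posterior_on_fraction(evidences)
f_peak = grid[post.index(max(post))]
print(f_peak)  # most probable fraction from model 1 (around 0.9 here)
```

With more events, the product sharpens and the posterior narrows around the true fraction, which is exactly the behaviour shown in the GIF below.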

Now knowing how to go from a set of observations to the probability distribution on the different channels, let’s give it a go!


To test our approach, we made a set of mock gravitational wave measurements. We generated signals from binaries for each of our four models, and analysed these as we would for real signals (using LALInference). This is rather computationally expensive, and we wanted a large set of events to analyse, so using these results as a guide, we created a larger catalogue of approximate distributions for the inferred source parameters p(\mathbf{\Theta}_\alpha|d_\alpha). We then fed these through our hierarchical analysis. The GIF below shows how measurements of the fraction of binaries from each population tighten up as we get more detections: the true fraction is marked in blue.

Fraction of binaries from each of the four models

Probability distribution for the fraction of binaries from each of our four spin misalignment populations for different numbers of observations. The blue dot marks the true fraction: an equal fraction from all four channels.

The plot shows that we do zoom in towards the true fraction of events from each model as the number of events increases, but there are significant degeneracies between the different models. Notably, it is difficult to tell apart Models 1 and 3, as both have strong support for both spins being nearly aligned. Similarly, there is a degeneracy between Models 2 and 4 as both allow for the two spins to have very different misalignments (and for the primary spin, which is the better measured one, to be quite significantly misaligned).

This means that we should be able to distinguish aligned from misaligned populations (we estimated that as few as 5 events would be needed to distinguish the case that all events came from either Model 1  or Model 2 if those were the only two allowed possibilities). However, it will be more difficult to distinguish different scenarios which only lead to small misalignments from each other, or disentangle whether there is significant misalignment due to big supernova kicks or because binaries are formed dynamically.

The uncertainty of the fraction of events from each model scales roughly with the square root of the number of observations, so it may be slow progress making these measurements. I’m not sure whether we’ll know the answer to how binary black hole form, or who will sit on the Iron Throne first.

arXiv: 1703.06873 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 471(3):2801–2811; 2017
Birmingham science summary: Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignment (by Simon)
If you like this you might like: Farr et al. (2017), Talbot & Thrane (2017), Vitale et al. (2017), Trifirò et al. (2016), Minogue (2000)

Bonus notes

Spin misalignments and formation histories

If you have two stars forming in a binary together, you’d expect them to be spinning in roughly the same direction, rotating the same way as they go round in their orbit (like our Solar System). This is because they all formed from the same cloud of swirling gas and dust. Furthermore, if two stars are to form a black hole binary that we can detect gravitational waves from, they need to be close together. This means that there can be tidal forces which gently tug the stars to align their rotation with the orbit. As they get older, stars puff up, meaning that if you have a close-by neighbour, you can share outer layers. This transfer of material will tend to align the rotation too. Adding this all together, if you have an isolated binary of stars, you might expect that when they collapse down to become black holes, their spins are aligned with each other and the orbit.

Unfortunately, real astrophysics is rarely so clean. Even if the stars were initially rotating the same way as each other, that doesn’t mean that their black hole remnants will do the same. This depends upon how the star collapses. Massive stars explode as supernovae, blasting off their outer layers while their cores collapse down to form black holes. Escaping material could carry away angular momentum, meaning that the black hole is spinning in a different direction to its parent star, or material could be blasted off asymmetrically, giving the new black hole a kick. This would change the plane of the binary’s orbit, misaligning the spins.

Alternatively, the binary could be formed dynamically. Instead of two stars living their lives together, we could have two stars (or black holes) come close enough together to form a binary. This is likely to happen in regions where there’s a high density of stars, such as a globular cluster. In this case, since the binary has been randomly assembled, there’s no reason for the spins to be aligned with each other or the orbit. For dynamically assembled binaries, all spin–orbit misalignments are equally probable.

Slow and steady

This project was led by Simon Stevenson. It was one of the first things we started working on at the beginning of his PhD. He has now graduated, and is off to start a new exciting life as a postdoc in Australia. We got a little distracted by other projects, most notably analysing the first detections of gravitational waves. Simon spent a lot of time developing the COMPAS population code, a code to simulate the evolution of binaries. Looking back, it’s impressive how far he’s come. This paper used a simple approximation to estimate the masses of our black holes: we called it the Post-it note model, as we wrote it down on a single Post-it. Now Simon’s writing papers including the complexities of common-envelope evolution in order to explain LIGO’s actual observations.