Eclipses of continuous gravitational waves as a probe of stellar structure

Understanding how stars work is a fundamental problem in astrophysics. We can’t open up a star to investigate its inner workings, which makes it difficult to test our models. Over the years, we have developed several ways to sneak a peek into what must be happening inside stars, such as by measuring solar neutrinos, or using asteroseismology to measure how sound travels through a star. In this paper, we propose a new way to examine the hearts of stars using gravitational waves.

Gravitational waves interact very weakly with stuff. Whereas light gets blocked by material (meaning that we can’t see deeper than a star’s photosphere), gravitational waves will happily travel through pretty much anything. This property means that gravitational waves are hard to detect, but it also means that they’ll happily pass through an entire star. While the material that makes up a star will not affect the passing of a gravitational wave, its gravity will. The mass of a star can lead to gravitational lensing: a slight deflection, magnification and delay of a passing gravitational wave. If we can measure this lensing, we can reconstruct the mass of the star, and potentially map out its internal structure.

Eclipsing gravitational wave sources

Two types of eclipse: the eclipse of a distant gravitational wave (GW) source by the Sun, and gravitational waves from an accreting millisecond pulsar (MSP) eclipsed by its companion. Either scenario could enable us to see gravitational waves passing through a star. Figure 2 of Marchant et al. (2020).

We proposed looking for eclipsing gravitational wave sources, where a gravitational wave source lies behind a star. As the alignment of the Earth (and our detectors), the star and the source changes, the gravitational wave will travel through different parts of the star, and we will see a different amount of lensing, allowing us to measure the mass of the star at different radii. This sounds neat, but how often will we be lucky enough to see an eclipsing source?

To date, we have only seen gravitational waves from compact binary coalescences (the inspiral and merger of two black holes or neutron stars). These are not a good source for eclipses. The chance that they travel through a star is small (as space is pretty empty) [bonus note]. Furthermore, we might not even be able to work out that this happened. The signal is relatively short, so we can’t compare the signal before and during an eclipse. Another type of gravitational wave signal would be much better: a continuous gravitational wave signal.

How common are eclipsing gravitational wave sources?

Probability of observing at least one eclipsing source amongst a number of observed sources. Compact binary coalescences (CBCs, shown in purple) are the rarest; continuous gravitational waves (CGWs) eclipsed by the Sun (red) or by a companion (red) are more common. Here we assume companions are stars about a tenth the mass of the neutron star. The number of neutron stars with binary companions is estimated using the COSMIC population synthesis code. Results are shown for eclipses where the gravitational waves get within distance b of the centre of the star. Figure 1 of Marchant et al. (2020).

Continuous gravitational waves are produced by rotating neutron stars. They are pretty much perfect for searching for eclipses. As you might guess from their name, continuous gravitational waves are always there. They happily hum away, sticking to pretty much the same note (they’d get pretty annoying to listen to). Therefore, we can measure them before, during and after an eclipse, and identify any changes due to the gravitational lensing. Furthermore, we’d expect that many neutron stars would be in close binaries, and therefore would be eclipsed by their partner. This would happen each time they orbit, potentially giving us lots of juicy information on these stars. All we need to do is measure the continuous gravitational wave…

The effect of the gravitational lensing by a star is small. We performed detailed calculations for our Sun (using MESA), and found that for the effects to be measurable you would need an extremely loud signal. The signal-to-noise ratio would need to be in the hundreds during the eclipse for the measurement precision to be good enough to notice the imprint of lensing. To map out how things changed as the eclipse progressed, you’d need signal-to-noise ratios many times higher than this. As an eclipse by the Sun only lasts a small fraction of the observing time, we’re going to need some really loud signals (signal-to-noise ratios of at least 2500) to see these effects. We will need the next generation of gravitational wave detectors.
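To get a feel for the size of the effect, here is a minimal point-mass (Shapiro delay) sketch. This is only an order-of-magnitude illustration, not the paper’s calculation, which uses the Sun’s full density profile from MESA; the 1 kpc source distance is just an assumed example value.

    # Rough point-mass (Shapiro) time delay for a gravitational wave grazing the Sun.
    import numpy as np

    G = 6.674e-11       # m^3 kg^-1 s^-2
    c = 2.998e8         # m s^-1
    M_sun = 1.989e30    # kg
    R_sun = 6.957e8     # m
    AU = 1.496e11       # m
    kpc = 3.086e19      # m

    def shapiro_delay(b, d_source=1 * kpc, d_obs=1 * AU, M=M_sun):
        """Point-mass Shapiro delay for a ray with impact parameter b (metres)."""
        return (2 * G * M / c**3) * np.log(4 * d_source * d_obs / b**2)

    # Delay for a ray grazing the solar limb, and how much the delay changes
    # as the ray moves in to half a solar radius during an eclipse:
    print(shapiro_delay(R_sun))                               # ~3e-4 s
    print(shapiro_delay(0.5 * R_sun) - shapiro_delay(R_sun))  # ~1e-5 s

A change of order ten microseconds across the eclipse is the kind of imprint a sufficiently loud continuous wave could track.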

We are currently thinking about the next generation of gravitational wave detectors [bonus note]. The leading ideas are successors to LIGO and Virgo: detectors which cover a large range of frequencies to detect many different types of source. These will be expensive (billions of dollars, euros or pounds), and need international collaboration to finance. However, I also like the idea of smaller detectors designed to do one thing really well. Potentially these could be financed by a single national lab. I think eclipsing continuous waves are the perfect source for this—instead of needing a detector sensitive over a wide frequency range, we just need to be sensitive over a really narrow range. We will be able to detect continuous waves before we are able to see the impact of eclipses. Therefore, we’ll know exactly what frequency to tune for. We’ll also know exactly when we need to observe. I think it would be really awesome to have a tunable narrowband detector, which could measure the eclipse of one source, and then be tuned for the next one, and the next. By combining many observations, we could really build up a detailed picture of the Sun. I think this would be an exciting experiment—instrumentalists, put your thinking hats on!

Let’s reach for (the centres of) the stars.

arXiv: 1912.04268 [astro-ph.SR]
Journal: Physical Review D; 101(2):024039(15); 2020
Data release: Eclipses of continuous gravitational waves as a probe of stellar structure
CIERA story: Using gravitational waves to see inside stars
Why does the sun really shine? The Sun is a miasma of incandescent plasma

Bonus notes

Silver lining

Since signals from compact binary coalescences are so unlikely to be eclipsed by a star, we don’t have to worry that our measurements of the source properties are being messed up by this type of gravitational lensing distorting the signal. Which is nice.

Prospects with LISA

If you were wondering if we could see these types of eclipses with the space-based gravitational wave observatory LISA, the answer is sadly no. LISA observes lower frequency gravitational waves. Lower frequency means longer wavelength, so long in fact that the wavelength is larger than the size of the Sun! Since the Sun is so small compared to the wavelength of the gravitational wave, it doesn’t leave the same imprint: the wave effectively skips over the gravitational potential.


Can neutron-star mergers explain the r-process enrichment in globular clusters?

Maybe

The mystery of the elements

Where do the elements come from? Hydrogen, helium and a little lithium were made in the big bang. These lighter elements are fused together inside stars, making heavier elements up to around iron. At this point you no longer get energy out by smooshing nuclei together. To build even heavier elements, you need different processes—one being to introduce lots of extra neutrons. Adding neutrons slowly leads to the creation of s-process elements, while adding them rapidly leads to the creation of r-process elements. By observing the distribution of elements, we can figure out how often these different processes operate.

Periodic table and element origins

Periodic table showing the origins of different elements found in our Solar System. This plot assumes that neutron star mergers are the dominant source of r-process elements. Credit: Jennifer Johnson

It has long been theorised that the site of r-process production could be neutron star mergers. Material ejected as the stars are ripped apart, or ejected following the collision, is naturally neutron rich. This undergoes radioactive decay, making r-process elements. The discovery of the first binary neutron star collision confirmed this happens. If you have any gold or platinum jewellery, its origins can probably be traced back to a pair of neutron stars which collided billions of years ago!

The r-process may also occur in supernova explosions. It is most likely that it occurs in both supernovae and neutron star mergers—the question is which contributes more. Figuring this out would be helpful in our quest to understand how stars live and die.

Hubble image of NGC 1898

Hubble Space Telescope image of the stars of NGC 1898, a globular cluster in the Large Magellanic Cloud. Credit: ESA/Hubble & NASA

In this paper, led by Michael Zevin, we investigated the r-process elements of globular clusters. Globular clusters are big balls of stars. Apart from being beautiful, globular clusters are an excellent laboratory for testing our understanding of stars, as there are so many packed into a (relatively) small space. We considered whether observations of r-process enrichment could be explained by binary neutron star mergers.

Enriching globular clusters

The stars in globular clusters are all born around the same time. They should all be made from the same stuff; they should have the same composition, aside from any elements that they have made themselves. Since r-process elements are not made in stars, the stars in a globular cluster should have the same abundances of these elements. However, measurements of elements like lanthanum and europium show star-to-star variation in some globular clusters.

This variation can happen if some stars were polluted by r-process elements made after the cluster formed. The first stars formed from unpolluted gas, while later stars formed from gas which had been enriched, possibly with stars closer to the source being more enriched than those further away. For this to work, we need (i) a process which can happen quickly [bonus science note], as the time over which stars form is short (they are almost the same age), and (ii) something that will happen in some clusters but not others—we need to hit the goldilocks zone of something not so rare that we’d almost never see enrichment, but not so common that almost all clusters would be enriched. Can binary neutron stars merge quickly enough and with the right rate to explain r-process enrichment?

Making binary neutron stars

There are two ways of making binary neutron stars: dynamically and via isolated evolution. Dynamically formed binaries are made when two stars get close enough to form a pairing, or when a star gets close to an existing binary, resulting in one member getting ejected and the interloper taking its place, or when two binaries get close together, resulting in all sorts of madness (Michael has previously looked at binary black holes formed through binary–binary interactions, and I love the animations, as shown below). Isolated evolution happens when you have a pair of stars that live their entire lives together. We examined both channels.

Dynamically formed binaries

With globular clusters having so many stars in such a small space, you might think that dynamical formation is a good bet for binary neutron star formation. We found that this isn’t the case. The problem is that neutron stars are relatively light. This causes two problems. First, the heaviest objects generally settle in the centre of a cluster where the density is highest and binaries are most likely to form. Second, in interactions, it is typically the heaviest objects that will be left in the binary. Black holes are more massive than neutron stars, so they will initially take the prime position. Through dynamical interactions, many will be eventually ejected from the cluster; however, even then, many of the remaining stars will be more massive than the neutron stars. It is hard for neutron stars to get the prime binary-forming positions [bonus note].

To check on the dynamical-formation potential, we performed two simulations: one with the standard mix of stars, and one ultimate best case™ where we artificially removed all the black holes. In both cases, we found that binary neutron stars take billions of years to merge. That’s far too long to lead to the necessary r-process enrichment.

Time for binaries to form and merge

Time taken for double black hole (DBH, shown in blue), neutron star–black hole (NSBH, shown in green), and double neutron star (DNS, shown in purple) [bonus note] binaries to form and then inspiral to merge in globular cluster simulations. Circles and dashed histograms show results for the standard cluster model. Triangles and solid histograms show results when black holes are artificially removed. Figure 1 of Zevin et al. (2019).

Isolated binaries

Considering isolated binaries, we need to work out how many binary neutron stars will merge close enough to a cluster to enrich it. This requires a couple of ingredients: (i) knowing how many binary neutron stars form, and (ii) working out how many are still close to the cluster when they merge. Neutron stars will get kicks when they are born in supernova explosions, and these are enough to kick them out of the cluster. So long as they merge before they get too far, that’s OK for enrichment. Therefore we need to track both those that stay in the cluster, and those which leave but merge before getting too far. To estimate the number of enriching binary neutron stars, we simulated a population of binary stars.
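As a toy version of this bookkeeping (the paper does it with full COSMIC populations and eccentric orbits), you can take a post-supernova binary’s systemic velocity and its gravitational-wave inspiral time, and check whether it merges before drifting further than some enrichment radius. The numbers below are made-up illustrative values; the inspiral time uses the circular-orbit Peters (1964) formula.

    # Toy enrichment check for a post-supernova binary neutron star.
    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m s^-1
    M_sun = 1.989e30     # kg
    R_sun = 6.957e8      # m
    pc = 3.086e16        # m
    Myr = 3.156e13       # s

    def t_inspiral_circular(a, m1, m2):
        """Peters (1964) gravitational-wave inspiral time for a circular orbit."""
        return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

    # Hypothetical system: two 1.4 solar mass neutron stars, separated by two
    # solar radii, kicked out of the cluster with a 50 km/s systemic velocity.
    m1 = m2 = 1.4 * M_sun
    a = 2 * R_sun
    v_sys = 50e3                       # m/s
    r_enrich = 10 * pc                 # assumed enrichment radius

    t_merge = t_inspiral_circular(a, m1, m2)
    print(t_merge / Myr)               # ~400 Myr for this fairly wide example
    print(v_sys * t_merge < r_enrich)  # False: it drifts too far before merging

The binaries that can enrich are the ones that are either tight enough to merge quickly or moving slowly enough to stay nearby.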

The evolution of binary neutron stars can be complicated. The neutron stars form from massive stars. In order for them to end up merging, they need to be in a close binary. This means that as the stars evolve and start to expand, they will transfer mass between themselves. This mass transfer can be stable, in which case the orbit widens, eventually shutting off the mass transfer, or it can be unstable, when the star expands relative to its Roche lobe, leading to even more mass transfer (what’s really important is how the size of the star changes compared to its Roche lobe). When mass transfer is extremely rapid, it leads to the formation of a common envelope: the outer layers of the donor end up encompassing both the core of the star and the companion. Drag experienced in a common envelope can lead to the orbit shrinking, exactly as you’d want for a merger, but it can be too efficient, and the two stars may merge before forming two neutron stars. It’s also not clear what would happen if there isn’t a clear boundary between the envelope and core of the donor star—it’s probable you’d just get a mess and the stars merging. We used COSMIC to see the effects of different assumptions about the physics:

  • Model A: Our base model, which is in my opinion the least plausible. This assumes that helium stars can successfully survive a common envelope. Mass transfer from helium stars will be especially important for our results, particularly what is called Case BB mass transfer [bonus note], which occurs once helium burning has finished in the core of a star, and helium is now burning in a shell outside the core.
  • Model B: Here, we assume that stars without a clear core/envelope boundary will always merge during the common envelope. Stars burning helium in a shell lack a clear core/envelope boundary, and so any common envelopes formed from Case BB mass transfer will result in the stars merging (and no binary neutron star forming). This is a pessimistic model in terms of predicting rates.
  • Model C: The same as Model A, but we use prescriptions from Tauris, Langer & Podsiadlowski (2015) for the orbital evolution and mass loss for mass transfer. These results show that mass transfer from helium stars typically proceeds stably. This means we don’t need to worry about common envelopes from Case BB mass transfer. This is more optimistic in terms of rates.
  • Model D: The same as Model C, except all stars which undergo Case BB mass transfer are assumed to become ultra-stripped. Since they have less material in their envelopes, we give them smaller supernova natal kicks, the same as electron capture supernovae.

All our models can produce some merging neutron stars within 100 million years. However, for Model B, this number is small, so that only a few percent of globular clusters would be enriched. For the others, it would be a few tens of percent, but not all. Model A gives the most enrichment. Models C and D are similar, with Model D producing slightly less enrichment.

Post-supernova binary neutron star properties for population models

Post-supernova binary neutron star properties (systemic velocity v_\mathrm{sys} vs inspiral time t_\mathrm{insp}, and orbital separation a vs eccentricity e) for our population models. The lines in the left-hand plots show the bounds for a binary to enrich a cluster of a given virial radius: viable binaries are below the lines. In both plots, red, blue and green points are the binaries which could enrich clusters of virial radii 1 pc, 3 pc and 10 pc; of the other points, purple indicates systems where the secondary star went through Case BB mass transfer. Figure 2 of Zevin et al. (2019).

Maybe?

Our results show that the r-process enrichment of globular clusters could be explained by binary neutron star mergers if binaries can survive Case BB mass transfer without merging. If Case BB mass transfer is typically unstable and somehow it is possible to survive a common envelope (Model A), ~30−90% of globular clusters should be enriched (depending upon their mass and size). This rate is consistent with current observations, but it is a stretch to imagine stars surviving common envelopes in this case. However, if Case BB mass transfer is stable (Models C and D), we still expect ~10−70% of globular clusters to be enriched. This could plausibly explain everything! If we can measure the enrichment in more clusters and accurately pin down the fraction which are enriched, we may learn something important about how binaries interact.

However, for our idea to work, we do need globular clusters to form stars over an extended period of time. If there’s no gas around to absorb the material ejected from binary neutron star mergers and then form new stars, we have not cracked the problem. The plot below shows that the build up of enriching material happens at around 40 million years after the initial star formation. This is when we need the gas to be around. If this is not the case, we need a different method of enrichment.

r-process enrichment depending upon duration of star formation

Probability of cluster enrichment P_\mathrm{enrich} and number of enriching binary neutron star mergers per cluster \Lambda_\mathrm{enrich} as a function of the timescale of star formation \Delta \tau_\mathrm{SF}. Dashed lines are used for a cluster of a million solar masses and solid lines are used for a cluster of half this mass. Results are shown for Model D. The build up happens around the same time in different models. Figure 5 in Zevin et al. (2019).

It may be interesting to look again at r-process enrichment from supernovae.

arXiv: 1906.11299 [astro-ph.HE]
Journal: Astrophysical Journal; 886(1):4(16); 2019 [bonus note]
Alternative title: The Europium Report

Bonus notes

Hidden pulsars and GW190425

The most recent gravitational-wave detection, GW190425, comes from a binary neutron star system of an unusually high mass. Its mass is much higher than the population of binary neutron stars observed in our Galaxy. One explanation for this could be that it represents a population which is short lived, and we’d be unlikely to spot one in our Galaxy, as they’re not around for long. Consequently, the same physics may be important both for this study of globular clusters and for explaining GW190425.

Gravitational-wave sources and dynamical formation

The question of how binary neutron stars form is important for understanding gravitational-wave sources. The question of whether dynamically formed binary neutron stars could be a significant contribution to the overall rate was recently studied in detail in a paper led by Northwestern PhD student Claire Ye. The conclusion of this work was that the fraction of binary neutron stars formed dynamically in globular clusters was tiny (in agreement with our results). Only about 0.001% of binary neutron stars we observe with gravitational waves would be formed dynamically in globular clusters.

Double vs binary

In this paper we use double black hole = DBH and double neutron star = DNS instead of the usual binary black hole = BBH and binary neutron star = BNS from gravitational-wave astronomy. The terms mean the same. I will use binary instead of double here as B is worth more than D in Scrabble.

Mass transfer cases

The different types of mass transfer have names which I always forget. For regular stars we have:

  • Case A is from a star on the main sequence, when it is burning hydrogen in its core.
  • Case B is from a star which has finished burning hydrogen in its core, and is burning hydrogen in a shell/burning helium in the core.
  • Case C is from a star which has finished core helium burning, and is burning helium in a shell. The star will now have carbon in its core, which may later start burning too.

The situation where mass transfer is avoided because the stars are well mixed, and so don’t expand, has also been referred to as Case M. This is more commonly known as (quasi)chemically homogeneous evolution.

If a star undergoes Case B mass transfer, it can lose its outer hydrogen-rich layers, to leave behind a helium star. This helium star may subsequently expand and undergo a new phase of mass transfer. The mass transfer from this helium star gets named similarly:

  • Case BA is from the helium star while it is on the helium main sequence burning helium in its core.
  • Case BB is from the helium star once it has finished core helium burning, and may be burning helium in a shell.
  • Case BC is from the helium star once it is burning carbon.

If the outer hydrogen-rich layers are lost during Case C mass transfer, we are left with a helium star with a carbon–oxygen core. In this case, subsequent mass transfer is named as:

  • Case CB if helium shell burning is on-going. (I wonder if this could lead to fast radio bursts?)
  • Case CC once core carbon burning has started.

I guess the naming almost makes sense. Case closed!

Page count

Don’t be put off by the length of the paper—the bibliography is extremely detailed. Michael was exceedingly proud of the number of references. I think it is the most in any non-review paper of mine!

Classifying the unknown: Discovering novel gravitational-wave detector glitches using similarity learning

 

Gravity Spy is an awesome project that combines citizen science and machine learning to classify glitches in LIGO and Virgo data. Glitches are short bursts of noise in our detectors which make analysing our data more difficult. Some glitches have known causes, others are more mysterious. Classifying glitches into different types helps us better understand their properties, and in some cases track down their causes and eliminate them! In this paper, led by Scotty Coughlin, we demonstrated the effectiveness of a new tool which our citizen scientists can use to identify new glitch classes.

The Gravity Spy project

Gravitational-wave detectors are complicated machines. It takes a lot of engineering to achieve the required accuracy needed to observe gravitational waves. Most of the time, our detectors perform well. The background noise in our detectors is easy to understand and model. However, our detectors are also subject to glitches, unusual (sometimes extremely loud and complicated) bursts of noise that don’t fit the usual properties of the background. Glitches are short, only appearing in a small fraction of the total data, but they are common. This makes detection and analysis of gravitational-wave signals more difficult. Detection is tricky because you need to be careful to distinguish glitches from signals (and possibly glitches and signals together), and understanding the signal is complicated as we may need to model a signal and a glitch together [bonus note]. Understanding glitches is essential if gravitational-wave astronomy is to be a success.

To understand glitches, we need to be able to classify them. We can search for glitches by looking for loud pops, whooshes and splats in our data. The task is then to spot similarities between them. Once we have a set of glitches of the same type, we can examine the state of the instruments at these times. In the best cases, we can identify the cause, and then work to improve the detectors so that this no longer happens. Other times, we might not be able to find the cause, but we can find one of the monitors in our detectors which acts as a witness to the glitch. Then we know that if something appears in that monitor, we expect a glitch of a particular form. This might mean that we throw away that bit of data, or perhaps we can use the witness data to subtract out the glitch. Since glitches are so common, classifying them is a huge amount of work. It is too much for our detector characterisation experts to do by hand.

There are two cunning options for classifying large numbers of glitches:

  1. Get a computer to do it. The difficulty is teaching a computer to identify the different classes. Machine-learning algorithms can do this, if they are properly trained. Training can require a large training set, and careful validation, so the process is still labour intensive.
  2. Get lots of people to help. The difficulty here is getting non-experts up-to-speed on what to look for, and then checking that they are doing a good job. Crowdsourcing classifications is something citizen scientists can do, but we will need a large number of dedicated volunteers to tackle the full set of data.

The idea behind Gravity Spy is to combine the two approaches. We start with a small training set from our detector characterization experts, and train a machine-learning algorithm on them. We then ask citizen scientists (thanks Zooniverse) to classify the glitches. We start them off with glitches for which the machine-learning algorithm is confident in its classification; these should be easy to identify. As citizen scientists get more experienced, they level up and start tackling more difficult glitches. The citizen scientists validate the classifications of the machine-learning algorithm, and provide a larger training set (especially helpful for the rarer glitch classes) for it. We can then happily apply the machine-learning algorithm to classify the full data set [bonus note].

The Gravity Spy workflow

How Gravity Spy works: the interconnection of machine-learning classification and citizen-scientist classification. The similarity search is used to identify glitches similar to one which does not fit into the current classes. Figure 2 of Coughlin et al. (2019).

I especially like the levelling-up system in Gravity Spy. I think it helps keep citizen scientists motivated, as it both prevents them from being overwhelmed when they start and helps them see their own progress. I am currently Level 4.

Gravity Spy works using images of the data. We show spectrograms: plots of how loud the output of the detectors is at different frequencies at different times. A gravitational wave from a binary would show a chirp structure, starting at lower frequencies and sweeping up.

Gravitational-wave chirp

Spectrogram showing the upward-sweeping chirp of gravitational wave GW170104 as seen in Gravity Spy. I correctly classified this as a Chirp.

New glitches

The Gravity Spy system works smoothly. However, it is set up to work with a fixed set of glitch classes. We may be missing new glitch classes, either because they are rare, and hadn’t been spotted by our detector characterization team, or because we changed something in our detectors and a new class arose (we expect this to happen as we tune up the detectors between observing runs). We can add more classes for our citizen scientists and machine-learning algorithm to use, but how do we spot new classes in the first place?

Our citizen scientists managed to identify a few new glitches by spotting things which didn’t fit into any of the classes. These get put in the None-of-the-Above class. Occasionally, you’ll come across similar looking glitches, and by collecting a few of these together, build a new class. The Paired Dove and Helix classes were identified early on by our citizen scientists this way; my favourite suggested new class is the Falcon [bonus note]. The difficulty is finding a large number of examples of a new class—you might only recognise a common feature after going past a few examples, backtracking to find the previous examples is hard, and you just have to keep working until you are lucky enough to be given more of the same.

Helix and Paired Dove

Example Helix (left) and Paired Dove (right) glitches. These classes were identified by Gravity Spy citizen scientists. Helix glitches are related to hiccups in the auxiliary lasers used to calibrate the detectors by pushing on the mirrors. Paired Dove glitches are related to motion of the beamsplitter in the interferometer. Adapted from Figure 8 of Zevin et al. (2017).

To help our citizen scientists find new glitches, we created a similarity search. Having found an interesting glitch, you can search for similar examples, and quickly put together a collection of your new class. The video below shows how it works. The thing we had to work out was how to define ‘similar’.

Transfer learning

Our machine-learning algorithm only knows about the classes we tell it about. It then works out the features which distinguish the different classes, and which are common to glitches of the same class. Working in this feature space, glitches form clusters of different classes.

Gravity Spy feature space

Visualisation showing the clustering of different glitches in the Gravity Spy feature space. Each point is a different glitch from our training set. The feature space has more than three dimensions: this visualisation was made using a technique which preserves the separation and clustering of different and similar points. Figure 1 of Coughlin et al. (2019).

For our similarity search, our idea was to measure distances in feature space [bonus note for experts]. This should work well if our current set of classes has a wide enough set of features to capture the characteristics of the new class; however, it won’t be effective if the new class is completely different, so that its unique features are not recognised. As an analogy, imagine that you had an algorithm which classified M&M’s by colour. It would probably do well if you asked it to distinguish a new colour, but would probably do poorly if you asked it to distinguish peanut butter filled M&M’s, as they are identified by flavour, which is not a feature it knows about. The strategy of using what a machine-learning algorithm learnt about one problem to tackle a new problem is known as transfer learning, and we found this strategy worked well for our similarity search.
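As an illustration of the idea (not the actual Gravity Spy DIRECT network), here is a minimal cosine-similarity search over a stand-in feature space; in the real tool the feature vectors come from the trained neural net.

    # Minimal sketch of a similarity search in a learnt feature space.
    import numpy as np

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 200))  # 1000 glitches, 200-dimensional features

    def cosine_similarity(query, bank):
        """Cosine similarity between one query vector and a bank of vectors."""
        query = query / np.linalg.norm(query)
        bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
        return bank @ query

    query = features[0]                           # the glitch you want more examples of
    scores = cosine_similarity(query, features)
    most_similar = np.argsort(scores)[::-1][:10]  # indices of the 10 closest glitches
    print(most_similar)

Cosine similarity is what we settled on for the real search (see the bonus note below).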

Raven Pecks and Water Jets

To test our similarity search, we applied it to two glitches classes not in the Gravity Spy set:

  1. Raven Peck glitches are caused by thirsty ravens pecking ice built up along nitrogen vent lines outside of the Hanford detector. Raven Pecks look like horizontal lines in spectrograms, similar to other Gravity Spy glitch classes (like the Power Line, Low Frequency Line and 1080 Line). The similarity search should therefore do a good job, as we should be able to recognise its important features.
  2. Water Jet glitches were caused by local seismic noise at the Hanford detector, which caused loud bands that disturbed the input laser optics. The Water Jet glitch doesn’t have anything to do with water; it is named based on its appearance (like a fountain, not a weasel). Its features are subtle, and unlike other classes, so we would expect this to be difficult for our similarity search to handle.

These glitches appeared in the data from the second observing run. Raven Pecks appeared between 14 April and 9 August 2017, and Water Jets appeared between 4 January and 28 May 2017. Over these intervals there are a total of 13,513 and 26,871 Gravity Spy glitches of all types, so even if you knew exactly when to look, you would have a large number to search through to find examples.

Raven Peck and Water Jet glitches

Example Raven Peck (left) and Water Jet (right) glitches. These classes of glitch are not included in the usual Gravity Spy scheme. Adapted from Figure 3 of Coughlin et al. (2019).

We tested using our machine-learning feature space for the similarity search against simpler approaches: using the raw difference in pixels, and using a principal component analysis to create a feature space. Results are shown in the plots below. These show the fraction of glitches we want returned by the similarity search versus the total number of glitches rejected. Ideally, we would want to reject all the glitches except the ones we want, so the search would return 100% of the wanted class and reject almost 100% of the total set. However, the actual results will depend on the adopted threshold for the similarity search: if we’re very strict we’ll reject pretty much everything, and only get the most similar glitches of the class we want; if we are too accepting, we get everything back, regardless of class. The plots can be read as increasing the range of the similarity search (becoming less strict) as you go left to right.
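Here is a rough sketch of how such a curve can be built from similarity scores. The scores are simulated, just to show the bookkeeping; this is not the paper’s analysis code.

    # Sketch of a fraction-returned versus fraction-rejected curve for a
    # similarity search, using simulated similarity scores.
    import numpy as np

    rng = np.random.default_rng(1)
    # Pretend scores: the wanted class tends to score higher than the rest.
    scores_wanted = rng.normal(0.7, 0.15, size=100)
    scores_other = rng.normal(0.3, 0.15, size=10_000)

    thresholds = np.linspace(0, 1, 101)
    frac_returned = np.array([(scores_wanted > t).mean() for t in thresholds])
    frac_rejected = np.array([(scores_other <= t).mean() for t in thresholds])

    # For example, the rejection achieved when half the wanted glitches are returned:
    i = np.argmin(np.abs(frac_returned - 0.5))
    print(frac_returned[i], frac_rejected[i])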

Similarity search performance

Performance of the similarity search for Raven Peck (left) and Water Jet (right) glitches: the fraction of known glitches of the desired class that have a higher similarity score (compared to an example of that glitch class) than a given percentage of the full data set. Results are shown for three different ways of defining similarity: the DIRECT machine-learning algorithm feature space (thick line), a principal component analysis (medium line) and a comparison of pixels (thin line). Adapted from Figure 3 of Coughlin et al. (2019).

For the Raven Peck, the similarity search always performs well. We have 50% of Raven Pecks returned while rejecting 99% of the total set of glitches, and we can get the full set while rejecting 92% of the total set! The performance is pretty similar between the different ways of defining feature space. Raven Pecks are easy to spot.

Water Jets are more difficult. When we have 50% of Water Jets returned by the search, our machine-learning feature space can still reject almost all glitches. The simpler approaches do much worse, and will only reject about 30% of the full data set. To get the full set of Water Jets we would need to loosen the similarity search so that it only rejects 55% of the full set using our machine-learning feature space; for the simpler approaches we’d basically get the full set of glitches back. They do not do a good job at narrowing down the hunt for glitches. Despite our suspicion that our machine-learning approach would struggle, it still seems to do a decent job [bonus note for experts].

Do try this at home

Having developed and tested our similarity search tool, we have now made it live. Citizen scientists can use it to hunt down new glitch classes. Several new glitch classes have been identified in data from LIGO and Virgo’s (currently ongoing) third observing run. If you are looking for a new project, why not give it a go yourself? (Or get your students to give it a go, I’ve had some reasonable results with high-schoolers). There is the real possibility that your work could help us with the next big gravitational-wave discovery.

arXiv: 1903.04058 [astro-ph.IM]
Journal: Physical Review D; 99(8):082002(8); 2019
Websites: Gravity Spy; Gravity Spy Tools
Gravity Spy blog: Introducing Gravity Spy Tools
Current stats: Gravity Spy has 15,500 registered users, who have made 4.4 million glitch classifications, leading to 200,000 successfully identified glitches.

Bonus notes

Signals and glitches

The best example of a gravitational-wave signal overlapping a glitch is GW170817. The glitch meant that the signal in the LIGO Livingston detector wasn’t immediately recognised. Fortunately, the signal in the Hanford detector was easy to spot. The glitch was analysed and categorised in Gravity Spy. It is a simple glitch, so it wasn’t too difficult to remove from the data. As our detectors become more sensitive, so that detections become more frequent, we expect that signals overlapping with glitches will become a more common occurrence. Unless we can eliminate glitches, it is only a matter of time before we get a glitch that prevents us from analysing an important signal.

Gravitational-wave alerts

In the third observing run of LIGO and Virgo, we send out automated alerts when we have a new gravitational-wave candidate. Astronomers can then pounce into action to see if they can spot anything coinciding with the source. It is important to quickly check the state of the instruments to ensure we don’t have a false alarm. To help with this, a data quality report is automatically prepared, containing many diagnostics. The classification from the Gravity Spy algorithm is one of many pieces of information included. It is the one I check first.

The Falcon

Excellent Gravity Spy moderator EcceruElme suggested a new glitch class, Falcon. This suggestion was followed up by Oli Patane, who found that all the examples identified occurred between 6:30 am and 8:30 am on 20 June 2017 in the Hanford detector. The instrument was misbehaving at the time. To solve this, the detector was taken out of observing mode and relocked (the equivalent of switching it off and on again). Since this glitch class was only found in this one 2-hour window, we’ve not added it as a class. I love how it was possible to identify this problematic stretch of time using only Gravity Spy images (which don’t identify when they are from). I think this could be the seed of a good detective story. The Hanfordese Falcon?

Characteristics of Falcon glitches

Examples of the proposed Falcon glitch class, illustrating the key features (and where the name comes from). This new glitch class was suggested by Gravity Spy citizen scientist EcceruElme.

Distance measure

We chose a cosine distance to measure similarity in feature space. We found this worked better than a Euclidean metric. This is possibly because, for identifying classes, it is more important to have the right mix of features than how significant the individual features are. However, we didn’t do a systematic investigation of the optimal means of measuring similarity.

Retraining the neural net

We tested the performance of the machine-learning feature space in the similarity search after modifying properties of our machine-learning algorithm. The algorithm we are using is a deep multiview convolutional neural net. We switched the activation function in the fully connected layer of the net, trying tanh and leakyReLU. We also varied the number of training rounds and the number of pairs of similar and dissimilar images that are drawn from the training set each round. We found that there was little variation in results. We found that leakyReLU performed a little better than tanh, possibly because it covers a larger dynamic range, and so can allow for cleaner separation of similar and dissimilar features. The number of training rounds and pairs makes negligible difference, possibly because the classes are sufficiently distinct that you don’t need many inputs to identify the basic features to tell them apart. Overall, our results appear robust. The machine-learning approach works well for the similarity search.

Deep and rapid observations of strong-lensing galaxy clusters within the sky localisation of GW170814

Gravitational waves and gravitational lensing are two predictions of general relativity. Gravitational waves are produced whenever masses accelerate. Gravitational lensing is produced by anything with mass. Gravitational lensing can magnify images, making it easier to spot far away things. In theory, gravitational waves can be lensed too. In this paper, we looked for evidence that GW170814 might have been lensed. (We didn’t find any, but this was my first foray into traditional astronomy).

The lensing of gravitational waves

Strong gravitational lensing magnifies a signal. A gravitational wave which has been lensed would therefore have a larger amplitude than if it had not been lensed. We infer the distance to the source of a gravitational wave from the amplitude. If we didn’t know a signal was lensed, we’d therefore think the source is much closer than it really is.

Waveform explained

The shape of the gravitational wave encodes the properties of the source. This information is what lets us infer parameters. The example signal is GW150914 (which is fairly similar to GW170814). I made this explainer with Ben Farr and Nutsinee Kijbunchoo for the LIGO Magazine.

Mismeasuring the distance to a gravitational wave has important consequences for understanding their sources. As the gravitational wave travels across the expanding Universe, it gets stretched (redshifted), so by the time it arrives at our detectors it has a longer wavelength (and lower frequency). If we assume that a signal came from a closer source, we’ll underestimate the amount of stretching the signal has undergone, and won’t fully correct for it. This means we’ll overestimate the masses when we infer them from the signal.
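A toy version of this bias, assuming a flat ΛCDM cosmology via astropy and a made-up magnification: lensing with magnification μ makes us infer a luminosity distance smaller by a factor of √μ, and the source-frame mass is the detector-frame mass divided by (1 + z).

    # Toy calculation of the mass bias from unrecognised lensing.
    import numpy as np
    from astropy.cosmology import Planck18, z_at_value

    z_true = 2.0                                   # true source redshift
    mu = 100.0                                     # assumed lensing magnification
    m_det = 60.0                                   # detector-frame total mass (solar masses)

    d_true = Planck18.luminosity_distance(z_true)  # true luminosity distance
    d_inferred = d_true / np.sqrt(mu)              # amplitude is boosted by sqrt(mu)
    z_inferred = z_at_value(Planck18.luminosity_distance, d_inferred)

    print(m_det / (1 + z_true))      # true source-frame mass
    print(m_det / (1 + z_inferred))  # the (over)estimated source-frame mass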

This possibility got a few people thinking when we announced our first detection, as GW150914 was heavier than previously observed black holes. Could we be seeing lensed gravitational waves?

Such strongly lensed gravitational waves should be multiply imaged. We should be able to see multiple copies of the same signal which have taken different paths from the source and then are bent by the gravity of the lens to reach us at different times. The delay time between images depends on the mass of the lens, with bigger lenses having longer delays. For galaxy clusters, it can be years.
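For a very rough sense of scale, ignoring the lens profile and the geometry of the images, the characteristic delay is set by the gravitational timescale 4GM/c^3 of the mass doing the lensing; the 10^{13} solar masses below is just an assumed value for the strong-lensing core of a cluster.

    # Order-of-magnitude lensing time-delay scale (geometry and lens profile ignored).
    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m s^-1
    M_sun = 1.989e30   # kg
    year = 3.156e7     # s

    def delay_scale(M_lens):
        """Characteristic time-delay scale 4 G M / c^3 for a lens of mass M_lens (kg)."""
        return 4 * G * M_lens / c**3

    print(delay_scale(1e13 * M_sun) / year)  # ~6 years for a cluster-core mass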

The idea

Some of my former Birmingham colleagues, who study gravitational lensing, were thinking about the possibility of having multiply imaged gravitational waves. I pointed out how difficult these would be to identify. They would come from the same part of the sky, and would have the same source parameters. However, since our uncertainties are so large for gravitational wave observations, I thought it would be tough to convince yourself that you’d seen the same signal twice [bonus note]. Lensing is expected to be rare [bonus note], so would you put your money on two signals (possibly years apart) being the same, or there just happening to be two similar systems somewhere in this huge patch of the sky?

However, if there were an optical counterpart to the merger, it would be much easier to tell that it was lensed. Since we know the locations of galaxy clusters which could strongly lens a signal, we can target searches looking for counterparts at these clusters. The odds of finding anything are slim, but since it doesn’t take too much telescope time to look, it’s still a gamble worth taking, as the potential pay-off would be huge.

Somehow [bonus note], I got involved in observing proposals to look for strongly lensed counterparts. We got everything in place for the last month of O2. It was just one month, so I wasn’t anticipating there being that much to do. I was very wrong.

GW170814

For GW170814 there were a couple of galaxy clusters which could serve as strong gravitational lenses. Abell 3084 started off as the more probable, but as the sky localization for GW170814 was refined, SMACS J0304.3−4401 looked like the better bet.

Sky maps for GW170814 (left: initial Bayestar localization; right: refined LALInference localizations) and two potential gravitational lensing galaxy clusters

Sky localization for GW170814 and the galaxy clusters Abell 3084 (filled circle), and SMACS J0304.3−4401 (open). The left plot shows the low-latency Bayestar localization (LIGO only dotted, LIGO and Virgo solid), and the right shows the refined LALInference sky maps (solid from GCN 21493, which we used for our observations, and dotted from GWTC-1). The dashed line shows the Galactic plane. Figure 1 of Smith et al. (2019).

We observed both galaxy clusters using the Gemini Multi-Object Spectrographs (GMOS) on Gemini South and the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope, both in Chile. You’ll never guess what we found…

That’s right, absolutely nothing! [bonus note] That’s not actually too surprising. GW170814‘s source was identified as a binary black hole—assuming no lensing, its source binary had masses around 25 and 30 solar masses. We don’t expect significant electromagnetic emission from a binary black hole merger (which would make it a big discovery if found, but that is a long shot). If the source were lensed, we would have overestimated the source masses, but to get the source into the neutron star mass range would take a ridiculous amount of lensing. However, the important point is that we have demonstrated that such a search for strongly lensed images is possible!

The future

In O3 [bonus note], the team has been targeting lower mass systems, where a neutron star may get mislabelled as a black hole due to a moderate amount of lensing. A false identification here could confuse our understanding of the minimum mass of a black hole, and also mean that we miss all sorts of lovely multimessenger observations, so this seems like a good plan to me.

arXiv: 1805.07370 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 485(4):5180–5191; 2019
Conference proceedings: 1803.07851 [astro-ph.HE] (from when work was still in-progress)
Future research: Are Double Stuf Oreos just gravitationally lensed regular Oreos?

Bonus notes

Statistical analysis

It is possible to do a statistical analysis to calculate the probability of two signals being lensed images of each other. The best attempt I’ve seen at this is Hannuksela et al. (2019). They do a nice study considering lensing by galaxies (and find nothing conclusive).

Biasing merger rates

If we included lensed events in our calculations of the merger rate density (the rate of mergers per unit volume of space), without correcting for them being lensed, we would overestimate the merger rate density. We’d assume that all our mergers came from a smaller volume of space than they actually did, as we wouldn’t know that the lensed events are being seen from further away. As long as the fraction of lensed events is small, this shouldn’t be a big problem, so we’re probably safe not to worry about it.

Slippery slope

What actually happened was my then boss, Alberto Vecchio, asked me to do some calculations based upon the sky maps for our detections in O1 as they’d only take me 5 minutes. Obviously, there were then more calculations, advice about gravitational wave alerts, feedback on observing proposals… and eventually I thought that if I’d put in this much time I might as well get a paper to show for it.

It was interesting to see how electromagnetic observing works, but I’m not sure I’d do it again.

Upper limits

Following tradition, when we don’t make a detection, we can set an upper limit on what could be there. In this case, we conclude that there is nothing to see down to an i-band magnitude of 25. This is pretty faint, about 40 million times fainter than something you could see with the naked eye (translating to visible light). We can set such a good upper limit (compared to other follow-up efforts) as we only needed to point the telescopes at a small patch of sky around the galaxy clusters, and so we could leave them staring for a relatively long time.
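For reference, the factor of 40 million comes from the magnitude scale, taking a naked-eye limit of roughly magnitude 6 (an assumed round number):

10^{(25 - 6)/2.5} = 10^{7.6} \approx 4 \times 10^{7}.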

O3 lensing hype

In O3, two gravitational wave candidates (S190828j and S190828l) were found just 21 minutes apart—this, for reasons I don’t entirely understand, led to much speculation that they were multiple images of a gravitationally lensed source. For a comprehensive debunking, follow this Twitter thread.

Second star to the right and straight on ’til morning—Astrophysics white papers

What will be the next big thing in astronomy? One of the hard things about research is that you often don’t know what you will discover before you embark on an investigation. An idea might work out, or it might not, or along the way you might discover something unexpected which is far more interesting. As you might imagine, this can make laying definite plans difficult…

However, it is important to have plans for research. While you might not be sure of the outcome, it is necessary to weigh the risks and rewards associated with the probable results before you invest your time and taxpayers’ money!

To help with planning and prioritising, researchers in astrophysics often pull together white papers [bonus note]. These are sketches of ideas for future research, arguing why you think they might be interesting. These can then be discussed within the community to help shape the direction of the field. If other scientists find the paper convincing, you can build support which helps push for funding. If there are gaps in the logic, others can point these out to save you heading the wrong way. This type of consensus building is especially important for large experiments or missions—you don’t want to spend a billion dollars on something unless you’re really sure it is a good idea and lots of people agree.

I have been involved with a few white papers recently. Here are some key ideas for where research should go.

Ground-based gravitational-wave detectors: The next generation

We’ve done some awesome things with Advanced LIGO and Advanced Virgo. In just a couple of years we have revolutionized our understanding of binary black holes. That’s not bad. However, our current gravitational-wave observatories are limited in what they can detect. What amazing things could we achieve with a new generation of detectors?

It can take decades to develop new instruments, therefore it’s important to start thinking about them early. Obviously, what we would most like is an observatory which can detect everything, but that’s not feasible. In this white paper, we pick the questions we most want answered, and see what the requirements for a new detector would be. A design which satisfies these specifications would therefore be a solid choice for future investment.

Binary black holes are the perfect source for ground-based detectors. What do we most want to know about them?

  1. How many mergers are there, and how does the merger rate change over the history of the Universe? We want to know how binary black holes are made. The merger rate encodes lots of information about how to make binaries, and comparing how it evolves with the rate at which the Universe forms stars will give us a deeper understanding of how black holes are made.
  2. What are the properties (masses and spins) of black holes? The merger rate tells us some things about how black holes form, but other properties like the masses, spins and orbital eccentricity complete the picture. We want to make precise measurements for individual systems, and also understand the population.
  3. Where do supermassive black holes come from? We know that stars can collapse to produce stellar-mass black holes. We also know that the centres of galaxies contain massive black holes. Where do these massive black holes come from? Do they grow from our smaller black holes, or do they form in a different way? Looking for intermediate-mass black holes in the gap in-between will tell us whether there is a missing link in the evolution of black holes.

Detection horizon as a function of binary mass for Advanced LIGO, A+, Cosmic Explorer and the Einstein Telescope

The detection horizon (the distance to which sources can be detected) for Advanced LIGO (aLIGO), its upgrade A+, and the proposed Cosmic Explorer (CE) and Einstein Telescope (ET). The horizon is plotted for binaries with equal-mass, nonspinning components. Adapted from Hall & Evans (2019).

What can we do to answer these questions?

  1. Increase sensitivity! Advanced LIGO and Advanced Virgo can detect a 30 M_\odot + 30 M_\odot binary out to a redshift of about z \approx 1. The planned detector upgrade A+ will see them out to redshift z \approx 2. That’s pretty impressive: it means we’re covering 10 billion years of history. However, the peak in the Universe’s star formation happens at around z \approx 2, so we’d really like to see beyond this in order to measure how the merger rate evolves. Ideally we would see all the way back to cosmic dawn at z \approx 20, when the Universe was only 200 million years old and the first stars lit up.
  2. Increase our frequency range! Our current detectors are limited in the range of frequencies they can detect. Pushing to lower frequencies helps us to detect heavier systems. If we want to detect intermediate-mass black holes of 100 M_\odot we need this low frequency sensitivity. At the moment, Advanced LIGO could get down to about 10~\mathrm{Hz}. The plot below shows the signal from a 100 M_\odot + 100 M_\odot binary at z = 10. The signal is completely undetectable with a 10~\mathrm{Hz} low-frequency cut-off (see the back-of-the-envelope sketch after this list).

    Gravitational wave signal from a binary of two 100 solar mass black holes at a redshift of 10

    The gravitational wave signal from the final stages of inspiral, merger and ringdown of a two 100 solar mass black holes at a redshift of 10. The signal chirps up in frequency. The colour coding shows parts of the signal above different frequencies. Part of Figure 2 of the Binary Black Holes White Paper.

  3. Increase sensitivity and frequency range! Increasing sensitivity means that we will have higher signal-to-noise ratio detections. For these loudest sources, we will be able to make more precise measurements of the source properties. We will also have more detections overall, as we can survey a larger volume of the Universe. Increasing the frequency range means we can observe a longer stretch of the signal (for the systems we currently see). This means it is easier to measure spin precession and orbital eccentricity. We also get to measure a wider range of masses. Putting the improved sensitivity and frequency range together means that we’ll get better measurements of individual systems and a more complete picture of the population.
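To see why the low-frequency cut-off matters so much, here is a back-of-the-envelope estimate of where that signal sits in frequency. It uses the Newtonian ISCO frequency plus the cosmological redshift; the factor of 1/(1 + z) is the key point.

    # Detector-frame frequency of the late inspiral of a 100 + 100 solar mass
    # binary at z = 10 (rough Newtonian ISCO estimate).
    import numpy as np

    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m s^-1
    M_sun = 1.989e30   # kg

    def f_isco(total_mass):
        """Gravitational-wave frequency at the innermost stable circular orbit."""
        return c**3 / (6**1.5 * np.pi * G * total_mass)

    M_source = 200 * M_sun
    z = 10
    f_source = f_isco(M_source)      # ~22 Hz in the source frame
    f_detector = f_source / (1 + z)  # ~2 Hz at the detector
    print(f_source, f_detector)

A detector that only reaches down to 10~\mathrm{Hz} never sees this system; hence the need to overhaul the low-frequency sensitivity.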

How much do we need to improve our observatories to achieve our goals? To quantify this, let’s consider the boost in sensitivity relative to A+, which I’ll call \beta_\mathrm{A+}. If the questions can be answered with \beta_\mathrm{A+} = 1, then we don’t need anything beyond the currently planned A+. If we need a slightly larger \beta_\mathrm{A+}, we should start investigating extra ways to improve the A+ design. If we need much larger \beta_\mathrm{A+}, we need to think about new facilities.

The plot below shows the boost necessary to detect a binary (with equal-mass nonspinning components) out to a given redshift. With a boost of \beta_\mathrm{A+} = 10 (blue line) we can survey black holes of around 10 M_\odot to 30 M_\odot across cosmic time.

Boost to detect a binary of a given mass at a given redshift

The boost factor (relative to A+) \beta_\mathrm{A+} needed to detect a binary with a total mass M out to redshift z. The binaries are assumed to have equal-mass, nonspinning components. The colour scale saturates at \log_{10} \beta_\mathrm{A+} = 4.5. The blue curve highlights the reach at a boost factor of \beta_\mathrm{A+} = 10. The solid and dashed white lines indicate the maximum reach of Cosmic Explorer and the Einstein Telescope, respectively. Part of Figure 1 of the Binary Black Holes White Paper.

The plot above shows that to see intermediate-mass black holes, we do need to completely overhaul the low-frequency sensitivity. What do we need to detect a 100 M_\odot + 100 M_\odot binary at z = 10? If we parameterize the noise spectrum (power spectral density) of our detector as S_n(f) = S_{10}(f/10~\mathrm{Hz})^\alpha with a lower cut-off frequency of f_\mathrm{min}, we can investigate the various possibilities. The plot below shows the possible combinations of parameters which meet our requirements.
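As a sketch of how those parameters trade off (not the white paper’s calculation), you can integrate the Newtonian inspiral spectrum |h(f)|^2 \propto f^{-7/3} against the parameterised noise; everything here is relative, so the overall normalisation (S_{10} and the source amplitude) is left out.

    # How the low-frequency noise slope and cut-off affect the signal-to-noise
    # ratio of the 100 + 100 solar mass binary at z = 10 (relative numbers only).
    import numpy as np

    f_isco_detector = 2.0  # Hz, detector-frame cut-off from the estimate above

    def relative_snr_squared(alpha, f_min, n=10_000):
        """SNR^2 up to an arbitrary normalisation: 4 * integral of |h|^2 / S_n df."""
        if f_min >= f_isco_detector:
            return 0.0
        f = np.linspace(f_min, f_isco_detector, n)
        integrand = f**(-7 / 3) / (f / 10.0)**alpha
        return 4 * np.sum(integrand) * (f[1] - f[0])

    # A flatter low-frequency slope (alpha closer to zero) and a lower cut-off
    # both boost the recoverable signal-to-noise ratio:
    print(relative_snr_squared(alpha=-2, f_min=1.0))
    print(relative_snr_squared(alpha=-8, f_min=1.0))
    print(relative_snr_squared(alpha=-2, f_min=0.5))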

Noise curve requirements for intermediate-mass black hole detection

Requirements on the low-frequency noise power spectrum necessary to detect an optimally oriented intermediate-mass binary black hole system with two 100 solar mass components at a redshift of 10. Part of Figure 2 of the Binary Black Holes White Paper.
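
To get a feel for how this parameterization feeds into detectability, here is a minimal sketch (not the white paper’s analysis) that integrates the usual frequency-domain signal-to-noise ratio, \varrho^2 = 4\int |\tilde{h}(f)|^2/S_n(f)\,\mathrm{d}f, for a Newtonian-order inspiral amplitude |\tilde{h}(f)| \propto f^{-7/6} against the power-law noise curve S_n(f) = S_{10}(f/10~\mathrm{Hz})^\alpha above a cut-off f_\mathrm{min}. All the numbers plugged in (the amplitude normalisation, S_{10}, the slopes and the frequency limits) are placeholders, not values from the paper.

```python
import numpy as np

def snr_power_law_noise(s10, alpha, f_min, f_max, amp=1.0):
    """Rough SNR for a Newtonian-order inspiral, |h(f)| = amp * f**(-7/6),
    against a power-law noise spectrum S_n(f) = s10 * (f / 10 Hz)**alpha
    above a lower cut-off f_min. The normalisation is purely illustrative."""
    f = np.logspace(np.log10(f_min), np.log10(f_max), 10000)
    s_n = s10 * (f / 10.0) ** alpha
    integrand = (amp * f ** (-7.0 / 6.0)) ** 2 / s_n
    # rho^2 = 4 * integral of |h(f)|^2 / S_n(f) df (trapezium rule)
    rho2 = 4.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))
    return np.sqrt(rho2)

# For a heavy (redshifted) binary that only reaches ~20 Hz, the steepness of the
# low-frequency noise wall makes a big difference to the recoverable SNR.
for alpha in [-2.0, -4.0, -8.0]:
    print(alpha, snr_power_law_noise(s10=1e-47, alpha=alpha, f_min=5.0, f_max=20.0))
```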

To build up information about the population of black holes, we need lots of detections. Uncertainties scale inversely with the square root of the number of detections, so you would expect a few percent uncertainty after 1000 detections. If we want to see how the population evolves, we need this many per redshift bin! The plot below shows the number of detections per year of observing time for different boost factors. The rate starts to saturate once we detect all the binaries in the redshift range. This is as good as you’re ever going to get.

Detections per redshift bin as a function of boost factor

Expected rate of binary black hole detections R_\mathrm{det} per redshift bin as a function of A+ boost factor \beta_\mathrm{A+} for three redshift bins. The merging binaries are assumed to be uniformly distributed with a constant merger rate roughly consistent with current observations: the solid line is about the current median, while the dashed and dotted lines are roughly the 90% bounds. Figure 3 of the Binary Black Holes White Paper.

Looking at the plots above, it is clear that A+ is not going to satisfy our requirements. We need something with a boost factor of \beta_\mathrm{A+} = 10: a next-generation observatory. Both the Cosmic Explorer and Einstein Telescope designs do satisfy our goals.

Yes!

Data is pleased. Credit: Paramount

Title: Deeper, wider, sharper: Next-generation ground-based gravitational-wave observations of binary black holes
arXiv:
1903.09220 [astro-ph.HE]
Contribution level: ☆☆☆☆☆ Leading author
Theme music: Daft Punk

Extreme mass ratio inspirals are awesome

We have seen gravitational waves from a stellar-mass black hole merging with another stellar-mass black hole, can we observe a stellar-mass black hole merging with a massive black hole? Yes, these are a perfect source for a space-based gravitational wave observatory. We call these systems extreme mass-ratio inspirals (or EMRIs, pronounced em-rees, for short) [bonus note].

Having such an extreme mass ratio, with one black hole much bigger than the other, gives EMRIs interesting properties. The number of orbits over the course of an inspiral scales with the mass ratio: the more extreme the mass ratio, the more orbits there are. Each of these gives us something to measure in the gravitational wave signal.

The intricate structure of an EMRI orbit

A short section of an orbit around a spinning black hole. While inspirals last for years, this would represent only a few hours around a black hole of mass M = 10^6 M_\odot. The position is measured in terms of the gravitational radius r_\mathrm{g} = GM/c^2. The innermost stable orbit for this black hole would be about r_\mathrm{g} = 2.3. Part of Figure 1 of the EMRI White Paper.

As EMRIs are so intricate, we can make exquisite measurements of the source properties.

Event rates for EMRIs are currently uncertain: there could be just one per year or thousands. From the rate we can figure out the details of what is going on in the nuclei of galaxies, and what types of objects you find there.

With EMRIs you can unravel mysteries in astrophysics, fundamental physics and cosmology.

Have we sold you that EMRIs are awesome? Well then, what do we need to do to observe them? There is only one currently planned mission which can enable us to study EMRIs: LISA. To maximise the science from EMRIs, we have to support LISA.

Lisa Simpson dancing

As an aspiring scientist, Lisa Simpson is a strong supporter of the LISA mission. Credit: Fox

Title: The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy
arXiv:
1903.03686 [astro-ph.HE]
Contribution level: ☆☆☆☆☆ Leading author
Theme music: Muse

Bonus notes

White paper vs journal article

Since white papers are proposals for future research, they aren’t as rigorous as usual academic papers. They are really attempts to figure out a good question to ask, rather than being answers. White papers are not usually peer reviewed before publication—the point is that you want everybody to comment on them, rather than just one or two anonymous referees.

Whilst white papers aren’t quite in the same class as journal articles, they do still contain some interesting ideas, so I thought they still merit a blog post.

Recycling

I have blogged about EMRIs before, so I won’t go into too much detail here. It was one of my former blog posts which inspired the LISA Science Team to get in touch to ask me to write the white paper.

Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

Where do gravitational waves like GW170817 come from? Using our network of detectors, we cannot pinpoint a source, but we can make a good estimate—the amplitude of the signal tells us about the distance; the time delay between the signal arriving at different detectors, and the relative amplitudes of the signal in different detectors, tell us about the sky position (see the excellent video by Leo Singer below).

In this paper we look at full three-dimensional localization of gravitational-wave sources; we import a (rather cunning) technique from computer vision to construct a probability distribution for the source’s location, and then explore how well we could localise a set of simulated binary neutron stars. Knowing the source location enables lots of cool science. First, it aids direct follow-up observations with non-gravitational-wave observatories, searching for electromagnetic or neutrino counterparts. It’s especially helpful if you can cross-reference with galaxy catalogues, to find the most probable source locations (this technique was used to find the kilonova associated with GW170817). Even without finding a counterpart, knowing the most probable host galaxy helps us figure out how the source formed (have lots of stars been born recently, or are all the stars old?), and allows us to measure the expansion of the Universe. Having a reliable technique to reconstruct source locations is useful!

This was a fun paper to write [bonus note]. I’m sure it will be valuable, both for showing how to perform this type of reconstruction of a multi-dimensional probability density, and for its implications for source localization and follow-up of gravitational-wave signals. I go into details of both below, first discussing our statistical model (this is a bit technical), then looking at our results for a set of binary neutron stars (which have implications for hunting for counterparts).

Dirichlet process Gaussian mixture model

When we analyse gravitational-wave data to infer the source properties (location, masses, etc.), we map out parameter space with a set of samples: a list of points in the parameter space, with there being more around more probable locations and fewer in less probable locations. These samples encode everything about the probability distribution for the different parameters; we just need to extract it…

For our application, we want a nice smooth probability density. How do we convert a bunch of discrete samples to a smooth distribution? The simplest thing is to bin the samples. However, picking the right bin size is difficult, and becomes much harder in higher dimensions. Another popular option is to use kernel density estimation. This is better at ensuring smooth results, but you now have to worry about the size of your kernels.
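
As a concrete (if simplistic) illustration of the kernel density estimation option, here is a quick sketch using scipy’s Gaussian KDE, which picks its bandwidth with a rule of thumb (Scott’s rule); the three-dimensional samples here are mocked up, standing in for the output of a real parameter-estimation run.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Mock posterior samples in (x, y, z) coordinates (Mpc), standing in for the
# samples from a real gravitational-wave parameter-estimation analysis.
rng = np.random.default_rng(42)
samples = rng.normal(loc=[100.0, -50.0, 30.0], scale=[20.0, 15.0, 10.0], size=(5000, 3))

# gaussian_kde expects an array of shape (n_dimensions, n_samples); the kernel
# bandwidth is set by Scott's rule unless you override bw_method yourself.
kde = gaussian_kde(samples.T)

# Evaluate the estimated probability density at a candidate position.
print(kde(np.array([[110.0], [-45.0], [25.0]])))
```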

Our approach is in essence to use a kernel density estimate, but to learn the size and position of the kernels (as well as the number) from the data as an extra layer of inference. The “Gaussian mixture model” part of the name refers to the kernels—we use several different Gaussians. The “Dirichlet process” part refers to how we assign their properties (their means and standard deviations). What I really like about this technique, as opposed to the usual rule-of-thumb approaches used for kernel density estimation,  is that it is well justified from a theoretical point of view.

I hadn’t come across a Dirichlet process before. Section 2 of the paper is a walkthrough of how I built up an understanding of this mathematical object, and it contains lots of helpful references if you’d like to dig deeper.

In our application, you can think of the Dirichlet process as being a probability distribution for probability distributions. We want a probability distribution describing the source location. Given our samples, we infer what this looks like. We could put all the probability into one big Gaussian, or we could put it into lots of little Gaussians. The Gaussians could be wide or narrow or a mix. The Dirichlet distribution allows us to assign probabilities to each configuration of Gaussians; for example, if our samples are all in the northern hemisphere, we probably want Gaussians centred around there, rather than in the southern hemisphere.

With the resulting probability distribution for the source location, we can quickly evaluate it at a single point. This means we can rapidly produce a list of most probable source galaxies—extremely handy if you need to know where to point a telescope before a kilonova fades away (or someone else finds it).
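
Our actual code is linked at the bottom of this post, but the same idea can be sketched with scikit-learn’s variational approximation to a Dirichlet process Gaussian mixture. This is just an illustration (mock samples, made-up galaxy positions), showing how the number, positions and widths of the Gaussians are learnt from the samples, and how cheap it then is to evaluate the density at a list of galaxy positions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Mock three-dimensional localization samples (x, y, z in Mpc).
rng = np.random.default_rng(1)
samples = rng.normal(loc=[100.0, -50.0, 30.0], scale=[20.0, 15.0, 10.0], size=(5000, 3))

# Variational Dirichlet process Gaussian mixture: n_components is only an upper
# bound, and the inference decides how many Gaussians actually carry any weight.
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(samples)

# Rank some hypothetical galaxy positions by the reconstructed probability density.
galaxies = np.array([[110.0, -45.0, 25.0],
                     [150.0, 10.0, 60.0],
                     [95.0, -55.0, 35.0]])
for position, log_density in sorted(zip(galaxies.tolist(), dpgmm.score_samples(galaxies)),
                                    key=lambda pair: -pair[1]):
    print(position, log_density)
```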

Gravitational-wave localization

To verify our technique works, and develop an intuition for three-dimensional localizations, we studied a set of simulated binary neutron star signals created for the First 2 Years trilogy of papers. This data set is well studied now; it illustrates performance in what we anticipated to be the first two observing runs of the advanced detectors, which turned out to be not too far from the truth. We have previously looked at three-dimensional localizations for these signals using a super rapid approximation.

The plots below show how well we could localise our binary neutron star sources. Specifically, the plots show the size of the volume which has a 90% probability of containing the source versus the signal-to-noise ratio (the loudness) of the signal. Typically, volumes are 10^4–10^5~\mathrm{Mpc}^3, which is about 10^{68}–10^{69} Olympic swimming pools. Such a volume would contain something like 100–1000 galaxies.

Volume versus signal-to-noise ratio

Localization volume as a function of signal-to-noise ratio. The top panel shows results for two-detector observations: the LIGO-Hanford and LIGO-Livingston (HL) network similar to in the first observing run, and the LIGO and Virgo (HLV) network similar to the second observing run. The bottom panel shows all observations for the HLV network including those with all three detectors which are colour coded by the fraction of the total signal-to-noise ratio from Virgo. In both panels, there are fiducial lines scaling inversely with the sixth power of the signal-to-noise ratio. Adapted from Fig. 4 of Del Pozzo et al. (2018).

Looking at the results in detail, we can learn a number of things:

  1. The localization volume is roughly inversely proportional to the sixth power of the signal-to-noise ratio [bonus note]. Loud signals are localized much better than quieter ones!
  2. The localization dramatically improves when we have three-detector observations. The extra detector improves the sky localization, which reduces the localization volume.
  3. To get the benefit of the extra detector, the source needs to be close enough that all the detectors could get a decent amount of the signal-to-noise ratio. In our case, Virgo is the least sensitive, and we see that the best localizations are when it has a fair share of the signal-to-noise ratio.
  4. Considering the cases where we only have two detectors, localization volumes get bigger at a given signal-to-noise ratio as the detectors get more sensitive. This is because we can detect sources at greater distances.

Putting all these bits together, I think in the future, when we have lots of detections, it would make most sense to prioritise following up the loudest signals. These are the best localised, and will also be the brightest since they are the closest, meaning there’s the greatest potential for actually finding a counterpart. As the sensitivity of the detectors improves, it’s only going to get more difficult to find a counterpart to a typical gravitational-wave signal, as sources will be further away and less well localized. However, having more sensitive detectors also means that we are more likely to have a really loud signal, which should be really well localized.

Banana vs cucumber

Left: Localization (yellow) with a network of two low-sensitivity detectors. The sky location is uncertain, but we know the source must be nearby. Right: Localization (green) with a network of three high-sensitivity detectors. We have good constraints on the source location, but it could now be at a much greater range of distances. Not to scale.

Using our localization volumes as a guide, you would only need to search one galaxy to find the true source in about 7% of cases with a three-detector network similar to at the end of our second observing run. Similarly, only ten would need to be searched in 23% of cases. It might be possible to get even better performance by considering which galaxies are most probable because they are the biggest or the most likely to produce merging binary neutron stars. This is definitely a good approach to follow.

Three-dimensional localization with galaxy catalogue

Galaxies within the 90% credible volume of an example simulated source, colour coded by probability. The galaxies are from the GLADE Catalog; incompleteness in the plane of the Milky Way causes the missing wedge of galaxies. The true source location is marked by a cross [bonus note]. Part of Figure 5 of Del Pozzo et al. (2018).

arXiv: 1801.08009 [astro-ph.IM]
Journal: Monthly Notices of the Royal Astronomical Society; 479(1):601–614; 2018
Code: 3d_volume
Buzzword bingo: Interdisciplinary (we worked with computer scientist Tom Haines); machine learning (the inference involving our Dirichlet process Gaussian mixture model); multimessenger astronomy (as our results are useful for following up gravitational-wave signals in the search for counterparts)

Bonus notes

Writing

We started writing this paper back before the first observing run of Advanced LIGO. We had a pretty complete draft on Friday 11 September 2015. We just needed to gather together a few extra numbers and polish up the figures and we’d be done! At 10:50 am on Monday 14 September 2015, we made our first detection of gravitational waves. The paper was put on hold. The pace of discoveries over the coming years meant we never quite found enough time to get it together—I’ve rewritten the introduction a dozen times. This is a shame, as it meant that this study came out much later than our other three-dimensional localization study, but it’s extremely satisfying to finally have it done. The delay has the advantage of justifying one of my favourite acknowledgement sections.

Sixth power

We find that the localization volume \Delta V is inversely proportional to the sixth power of the signal-to-noise ratio \varrho. This is what you would expect. The localization volume depends upon the angular uncertainty on the sky \Delta \Omega, the distance to the source D, and the distance uncertainty \Delta D,

\Delta V \sim D^2 \Delta \Omega \Delta D.

Typically, the uncertainty on a parameter (like the masses) scales inversely with the signal-to-noise ratio. This is the case for the logarithm of the distance, which means

\displaystyle \frac{\Delta D}{D} \propto \varrho^{-1}.

The uncertainty in the sky location (being two dimensional) scales inversely with the square of the signal-to-noise ratio,

\Delta \Omega \propto \varrho^{-2}.

The signal-to-noise ratio itself is inversely proportional to the distance to the source (sources further away are quieter). Therefore, putting everything together gives

\Delta V \propto \varrho^{-6}.
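
If you want to check the bookkeeping, a couple of lines of sympy reproduce the scaling (with all constants of proportionality set to one, and D \propto \varrho^{-1} at fixed intrinsic parameters):

```python
import sympy as sp

rho = sp.symbols("rho", positive=True)

D = 1 / rho          # distance: signal-to-noise ratio falls off as 1/D
dOmega = rho ** -2   # sky-area uncertainty
dD = D * rho ** -1   # distance uncertainty: fractional uncertainty scales as 1/rho

print(sp.simplify(D ** 2 * dOmega * dD))  # rho**(-6)
```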

Treasure

We all know that treasure is marked by a cross. In the case of a binary neutron star merger, dense material ejected from the neutron stars will decay to heavy elements like gold and platinum, so there is definitely a lot of treasure at the source location.

Accuracy of inference on the physics of binary evolution from gravitational-wave observations

Gravitational-wave astronomy lets us observe binary black holes. These systems, being made up of two black holes, are pretty difficult to study by any other means. It has long been argued that with this new information we can unravel the mysteries of stellar evolution. Just as a palaeontologist can discover how long-dead animals lived from their bones, we can discover how massive stars lived by studying their black hole remnants. In this paper, we quantify how much we can really learn from this black hole palaeontology—after 1000 detections, we should pin down some of the most uncertain parameters in binary evolution to a few percent precision.

Life as a binary

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics which we are currently most uncertain of:

  • Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
  • Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion that both stars live in a single fluffy envelope. Orbiting inside the envelope helps drag the two stars closer together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
  • Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low: there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

We use parameters describing potential variations in these properties as ingredients for the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question now is which parameters affect our gravitational-wave measurements, and how accurately we can measure those that do.

Merger rate with redshift and chirp mass

Binary black hole merger rate at three different redshifts z as calculated by COMPAS. We show the rate in 30 different chirp mass bins for our default population parameters. The caption gives the total rate for all masses. Figure 2 of Barrett et al. (2018)

Gravitational-wave observations

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distributions of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries which have a long inspiral, but is less well measured for higher mass systems. In reality we’ll have much more information, so these results should be the minimum we can actually do.

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct, we are more likely to get results close to its predictions than far from them, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember that it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

\displaystyle F_{ij} = -\left\langle\frac{\partial^2\ln \mathcal{L}(\mathcal{D}|\left\{\lambda\right\})}{\partial \lambda_i \partial\lambda_j}\right\rangle,

where \mathcal{L}(\mathcal{D}|\left\{\lambda\right\}) is the likelihood for data \mathcal{D} (the number of observations and their chirp mass distribution in our case), \left\{\lambda\right\} are our parameters (natal kick, etc.), and the angular brackets indicate an average over possible realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance in a normal distribution) for the parameters \left\{\lambda\right\}. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these measurements.

Results

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Correlations between population parameters

Fisher information matrix estimates for fractional measurement precision of the four population parameters: the black hole natal kick \sigma_\mathrm{kick}, the common envelope efficiency \alpha_\mathrm{CE}, the Wolf–Rayet mass loss rate f_\mathrm{WR}, and the luminous blue variable mass loss rate f_\mathrm{LBV}. There is an anticorrelation between f_\mathrm{WR} and \alpha_\mathrm{CE}, and a hint of a similar anticorrelation between f_\mathrm{LBV} and \alpha_\mathrm{CE}. We show 1500 different realisations of the binary population to give an idea of scatter. Figure 6 of Barrett et al. (2018)

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

Standard deviation of measurements of population parameters

Measurement precision for the four population parameters after 1000 detections. We quantify the precision with the standard deviation estimated from the Fisher information matrix. We show results from 1500 realisations of the population to give an idea of scatter. Figure 5 of Barrett et al. (2018)

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.

arXiv: 1711.06287 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 477(4):4685–4695; 2018
Favourite dinosaur: Professor Science

Bonus notes

Channel selection

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

Kick distribution

We model the supernova kicks v_\mathrm{kick} as following a Maxwell–Boltzmann distribution,

\displaystyle p(v_\mathrm{kick}) = \sqrt{\frac{2}{\pi}}  \frac{v_\mathrm{kick}^2}{\sigma_\mathrm{kick}^3} \exp\left(\frac{-v_\mathrm{kick}^2}{2\sigma_\mathrm{kick}^2}\right),

where \sigma_\mathrm{kick} is the unknown population parameter. The natal kick received by the black hole v^*_\mathrm{kick} is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

v^*_\mathrm{kick} = (1-f_\mathrm{fb})v_\mathrm{kick},

where f_\mathrm{fb} is the fraction that falls back, taken from Fryer et al. (2012). The fraction is greater for larger black holes, so the biggest black holes get no kicks. This means that the largest black holes are unaffected by the value of \sigma_\mathrm{kick}.
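
Here is a minimal sketch of how such a kick prescription can be sampled. The Maxwell–Boltzmann draw uses scipy, with \sigma_\mathrm{kick} = 265~\mathrm{km\,s^{-1}} as a commonly used fiducial value; the fallback fraction is a made-up placeholder that grows linearly with black hole mass, not the Fryer et al. (2012) prescription used in COMPAS.

```python
import numpy as np
from scipy.stats import maxwell

def natal_kick(m_bh, sigma_kick=265.0, seed=None):
    """Draw a black hole natal kick in km/s: a Maxwell-Boltzmann kick scaled
    down by the fraction of ejected material that falls back onto the hole."""
    v_kick = maxwell.rvs(scale=sigma_kick, random_state=seed)
    # Placeholder fallback fraction: zero below 5 Msun, complete (no kick) above 15 Msun.
    f_fb = np.clip((m_bh - 5.0) / 10.0, 0.0, 1.0)
    return (1.0 - f_fb) * v_kick

for m_bh in [5.0, 10.0, 40.0]:
    print(m_bh, natal_kick(m_bh, seed=0))
```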

The likelihood

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the merger rate \mu and the probability of falling in each chirp mass bin p_k (we factor measurement uncertainty into this). Our observations are the total number of detections N_\mathrm{obs} and the number in each chirp mass bin c_k (N_\mathrm{obs} = \sum_k c_k). The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

\mathcal{L} = \mathcal{L}_\mathrm{rate} \times \mathcal{L}_\mathrm{mass}.

For the rate likelihood, we need the probability of observing N_\mathrm{obs} given the predicted rate \mu. This is given by a Poisson distribution,

\displaystyle \mathcal{L}_\mathrm{rate} = \exp(-\mu t_\mathrm{obs}) \frac{(\mu t_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!},

where t_\mathrm{obs} is the total observing time. For the chirp mass likelihood, we need the probability of getting a number of detections in each bin, given the predicted fractions. This is given by a multinomial distribution,

\displaystyle \mathcal{L}_\mathrm{mass} = \frac{N_\mathrm{obs}!}{\prod_k c_k!} \prod_k p_k^{c_k}.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.

When we substitute in our likelihood into the expression for the Fisher information matrix, we get

\displaystyle F_{ij} = \mu t_\mathrm{obs} \left[ \frac{1}{\mu^2} \frac{\partial \mu}{\partial \lambda_i} \frac{\partial \mu}{\partial \lambda_j}  + \sum_k\frac{1}{p_k} \frac{\partial p_k}{\partial \lambda_i} \frac{\partial p_k}{\partial \lambda_j} \right].

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix is defined in terms of second derivatives. The expected number of events is \langle N_\mathrm{obs} \rangle = \mu t_\mathrm{obs}. Therefore, we can see that the measurement uncertainty defined by the inverse of the Fisher information matrix scales on average as N_\mathrm{obs}^{-1/2}.
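
Given functions returning \mu and the p_k for a set of population parameters, the expression above only needs first-order derivatives, which can be estimated by finite differencing. Here is a minimal sketch; the rate and chirp-mass-bin functions are toy placeholders, not COMPAS.

```python
import numpy as np

def fisher_matrix(mu_fn, pk_fn, lam, t_obs, eps=1e-4):
    """F_ij = mu * t_obs * [dmu_i dmu_j / mu^2 + sum_k dp_k,i dp_k,j / p_k],
    with the first derivatives estimated by central finite differences."""
    lam = np.asarray(lam, dtype=float)
    n_par = len(lam)
    mu = mu_fn(lam)
    pk = np.asarray(pk_fn(lam), dtype=float)

    dmu = np.zeros(n_par)
    dpk = np.zeros((n_par, len(pk)))
    for i in range(n_par):
        step = np.zeros(n_par)
        step[i] = eps * max(abs(lam[i]), 1.0)
        dmu[i] = (mu_fn(lam + step) - mu_fn(lam - step)) / (2.0 * step[i])
        dpk[i] = (np.asarray(pk_fn(lam + step)) - np.asarray(pk_fn(lam - step))) / (2.0 * step[i])

    fisher = np.zeros((n_par, n_par))
    for i in range(n_par):
        for j in range(n_par):
            fisher[i, j] = mu * t_obs * (dmu[i] * dmu[j] / mu ** 2
                                         + np.sum(dpk[i] * dpk[j] / pk))
    return fisher

# Toy stand-ins for the population synthesis predictions: a rate and two
# chirp-mass bins depending on two population parameters.
mu_fn = lambda lam: 100.0 * lam[0] * lam[1]
pk_fn = lambda lam: np.array([lam[1], lam[0]]) / (lam[0] + lam[1])

F = fisher_matrix(mu_fn, pk_fn, lam=[1.0, 2.0], t_obs=3.0)
print(np.sqrt(np.diag(np.linalg.inv(F))))  # lower bounds on the parameter uncertainties
```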

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, meaning that the shape of the posterior is dictated by the shape of the likelihood.

Interpretation of the Fisher information matrix

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn’t be the maximum). The maximum likelihood values of N_\mathrm{obs} = \mu t_\mathrm{obs} and c_k = N_\mathrm{obs} p_k are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large N_\mathrm{obs}. As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of N_\mathrm{obs} \rightarrow \infty, you’ll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our N_\mathrm{obs} = 1000 was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.

 

Science with the space-based interferometer LISA. V. Extreme mass-ratio inspirals

The space-based observatory LISA will detect gravitational waves from massive black holes (giant black holes residing in the centres of galaxies). One particularly interesting signal will come from the inspiral of a regular stellar-mass black hole into a massive black hole. These are called extreme mass-ratio inspirals (or EMRIs, pronounced emries, to their friends) [bonus note]. We have never observed such a system. This means that there’s a lot we have to learn about them. In this work, we systematically investigated the prospects for observing EMRIs. We found that even though there’s a wide range in predictions for what EMRIs we will detect, they should be a safe bet for the LISA mission.

EMRI spacetime

Artistic impression of the spacetime for an extreme-mass-ratio inspiral, with a smaller stellar-mass black hole orbiting a massive black hole. This image is mandatory when talking about extreme-mass-ratio inspirals. Credit: NASA

LISA & EMRIs

My previous post discussed some of the interesting features of EMRIs. Because of the extreme difference in masses of the two black holes, it takes a long time for them to complete their inspiral. We can measure tens of thousands of orbits, which allows us to make wonderfully precise measurements of the source properties (if we can accurately pick out the signal from the data). Here, we’ll examine exactly what we could learn with LISA from EMRIs [bonus note].

First we build a model to investigate how many EMRIs there could be.  There is a lot of astrophysics which we are currently uncertain about, which leads to a large spread in estimates for the number of EMRIs. Second, we look at how precisely we could measure properties from the EMRI signals. The astrophysical uncertainties are less important here—we could get a revolutionary insight into the lives of massive black holes.

The number of EMRIs

To build a model of how many EMRIs there are, we need a few different inputs:

  1. The population of massive black holes
  2. The distribution of stellar clusters around massive black holes
  3. The range of orbits of EMRIs

We examine each of these in turn, building a more detailed model than has previously been constructed for EMRIs.

We currently know little about the population of massive black holes. This means we’ll discover lots when we start measuring signals (yay), but it’s rather inconvenient now, when we’re trying to predict how many EMRIs there are (boo). We take two different models for the mass distribution of massive black holes. One is based upon a semi-analytic model of massive black hole formation; the other is at the pessimistic end allowed by current observations. The semi-analytic model predicts massive black hole spins around 0.98, but we also consider spins being uniformly distributed between 0 and 1, and spins of 0. This gives us a picture of the bigger black hole; now we need the smaller.

Observations show that the masses of massive black holes are correlated with their surrounding cluster of stars—bigger black holes have bigger clusters. We consider four different versions of this trend: Gültekin et al. (2009); Kormendy & Ho (2013); Graham & Scott (2013), and Shankar et al. (2016). The stars and black holes about a massive black hole should form a cusp, with the density of objects increasing towards the massive black hole. This is great for EMRI formation. However, the cusp is disrupted if two galaxies (and their massive black holes) merge. This tends to happen—it’s how we get bigger galaxies (and black holes). It then takes some time for the cusp to reform, during which time, we don’t expect as many EMRIs. Therefore, we factor in the amount of time for which there is a cusp for massive black holes of different masses and spins.

Colliding galaxies

That’s a nice galaxy you have there. It would be a shame if it were to collide with something… Hubble image of The Mice. Credit: ACS Science & Engineering Team.

Given a cusp about a massive black hole, we then need to know how often an EMRI forms. Simulations give us a starting point. However, these only consider a snap-shot, and we need to consider how things evolve with time. As stellar-mass black holes inspiral, the massive black hole will grow in mass and the surrounding cluster will become depleted. Both these effects are amplified because for each inspiral, there’ll be many more stars or stellar-mass black holes which will just plunge directly into the massive black hole. We therefore need to limit the number of EMRIs so that we don’t have an unrealistically high rate. We do this by adding in a couple of feedback factors, one to cap the rate so that we don’t deplete the cusp quicker than new objects will be added to it, and one to limit the maximum amount of mass the massive black hole can grow from inspirals and plunges. This gives us an idea for the total number of inspirals.

Finally, we calculate the orbits that EMRIs will be on. We again base this upon simulations, and factor in how the spin of the massive black hole affects the distribution of orbital inclinations.

Putting all the pieces together, we can calculate the population of EMRIs. We now need to work out how many LISA would be able to detect. This means we need models for the gravitational-wave signal. Since we are simulating a large number, we use a computationally inexpensive analytic model. We know that this isn’t too accurate, but we consider two different options for setting the end of the inspiral (where the smaller black hole finally plunges) which should bound the true range of results.

Number of detected EMRIs

Number of EMRIs for different size massive black holes in different astrophysical models. M1 is our best estimate, the others explore variations on this. M11 and M12 are designed to cover the extremes, being the most pessimistic and optimistic combinations. The solid and dashed lines are for two different signal models (AKK and AKS), which are designed to give an indication of potential variation. They agree where the massive black hole is not spinning (M10 and M11). The range of masses is similar for all models, as it is set by the sensitivity of LISA. We can detect higher mass systems assuming the AKK signal model as it includes extra inspiral close to highly spinning black holes: for the heaviest black holes, this is the only part of the signal at high enough frequency to be detectable. Figure 8 of Babak et al. (2017).

Allowing for all the different uncertainties, we find that there should be somewhere between 1 and 4200 EMRIs detected per year. (The model we used when studying transient resonances predicted about 250 per year, albeit with a slightly different detector configuration, which is fairly typical of all the models we consider here). This range is encouraging. The lower end means that EMRIs are a pretty safe bet, we’d be unlucky not to get at least one over the course of a multi-year mission (LISA should have at least four years observing). The upper end means there could be lots—we might actually need to worry about them forming a background source of noise if we can’t individually distinguish them!

EMRI measurements

Having shown that EMRIs are a good LISA source, we now need to consider what we could learn by measuring them.

We estimate the precision with which we will be able to measure parameters using the Fisher information matrix. The Fisher matrix measures how sensitive our observations are to changes in the parameters (the more sensitive we are, the better we should be able to measure that parameter). Its inverse gives a lower bound on the measurement uncertainties, and should be a good approximation to them in the high signal-to-noise (loud signal) limit. The combination of our use of the Fisher matrix and our approximate signal models means our results will not be perfect estimates of real performance, but they should give an indication of the typical size of measurement uncertainties.

Given that we measure a huge number of cycles from the EMRI signal, we can make really precise measurements of the mass and spin of the massive black hole, as these parameters control the orbital frequencies. Below are plots for the typical measurement precision from our Fisher matrix analysis. The orbital eccentricity is measured to similar accuracy, as it influences the range of orbital frequencies too. We also get pretty good measurements of the mass of the smaller black hole, as this sets how quickly the inspiral proceeds (how quickly the orbital frequencies change). EMRIs will allow us to do precision astronomy!

EMRI redshifted mass measurements

Distribution of (one standard deviation) fractional uncertainties for measurements of the  massive black hole (redshifted) mass M_z. Results are shown for the different astrophysical models, and for the different signal models.  The astrophysical model has little impact on the uncertainties. M4 shows a slight difference as it assumes heavier stellar-mass black holes. The results with the two signal models agree when the massive black hole is not spinning (M10 and M11). Otherwise, measurements are more precise with the AKK signal model, as this includes extra signal from the end of the inspiral. Part of Figure 11 of Babak et al. (2017).

EMRI spin measurements

Distribution of (one standard deviation) uncertainties for measurements of the massive black hole spin a. The results mirror those for the masses above. Part of Figure 11 of Babak et al. (2017).

Now, before you get too excited that we’re going to learn everything about massive black holes, there is one confession I should make. In the plot above I show the measurement accuracy for the redshifted mass of the massive black hole. The cosmological expansion of the Universe causes gravitational waves to become stretched to lower frequencies in the same way light is (this makes visible light more red, hence the name). The measured frequency is f_z = f/(1 + z), where f is the frequency emitted and z is the redshift (z = 0 for a nearby source, and larger for further away sources). Lower frequency gravitational waves correspond to higher mass systems, so it is often convenient to work with the redshifted mass, the mass corresponding to the signal you measure if you ignore redshifting. The redshifted mass of the massive black hole is M_z = (1+z)M where M is the true mass. To work out the true mass, we need the redshift, which means we need to measure the distance to the source.
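
In practice, converting back means assuming a cosmology: the measured luminosity distance gives the redshift, and dividing by (1 + z) gives the source-frame mass. A minimal sketch with astropy (the numbers are arbitrary, and the distance uncertainty discussed below would need to be propagated through too):

```python
import astropy.units as u
from astropy.cosmology import Planck18, z_at_value

M_z = 1.0e6          # measured (redshifted) mass in solar masses
D_L = 3.0 * u.Gpc    # measured luminosity distance

# Invert the distance-redshift relation for the assumed cosmology,
# then undo the (1 + z) scaling of the mass.
z = z_at_value(Planck18.luminosity_distance, D_L)
print(z, M_z / (1.0 + z))
```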

EMRI luminosity distance measurement

Distribution of (one standard deviation) fractional uncertainties for measurements of the luminosity distance D_\mathrm{L}. The signal model is not as important here, as the uncertainty only depends on how loud the signal is. Part of Figure 12 of Babak et al. (2017).

The plot above shows the fractional uncertainty on the distance. We don’t measure this too well, as it is determined from the amplitude of the signal, rather than its frequency components. The situation is much the same as for LIGO. The larger uncertainties on the distance will dominate the overall uncertainty on the black hole masses. We won’t be getting all these to fractions of a percent. However, that doesn’t mean we can’t still figure out what the distribution of masses looks like!

One of the really exciting things we can do with EMRIs is check that the signal matches our expectations for a black hole in general relativity. Since we get such an excellent map of the spacetime of the massive black hole, it is easy to check for deviations. In general relativity, everything about the black hole is fixed by its mass and spin (often referred to as the no-hair theorem). Using the measured EMRI signal, we can check if this is the case. One convenient way of doing this is to describe the spacetime of the massive object in terms of a multipole expansion. The first (most important) term gives the mass, and the next term the spin. The third term (the quadrupole) is set by the first two, so if we can measure it, we can check if it is consistent with the expected relation. We estimated how precisely we could measure a deviation in the quadrupole. Fortunately, for this consistency test, all factors from redshifting cancel out, so we can get really detailed results, as shown below. Using EMRIs, we’ll be able to check for really small differences from general relativity!
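
For reference, the standard expectation here (in units where G = c = 1) is that every multipole moment of a Kerr black hole follows from its mass M and spin parameter a = J/M,

M_\ell + \mathrm{i} S_\ell = M (\mathrm{i} a)^\ell,

so the first nontrivial prediction is a mass quadrupole of M_2 = -Ma^2; the deviation \mathcal{Q} in the plot below measures how far the inferred quadrupole lies from this Kerr value.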

EMRI measurement of bumpy black hole spacetime

Distribution of (one standard deviation) uncertainties for deviations in the quadrupole moment of the massive object spacetime \mathcal{Q}. Results are similar to the mass and spin measurements. Figure 13 of Babak et al. (2017).

In summary: EMRIs are awesome. We’re not sure how many we’ll detect with LISA, but we’re confident there will be some, perhaps a couple of hundred per year. From the signals we’ll get new insights into the masses and spins of black holes. This should tell us something about how they, and their surrounding galaxies, evolved. We’ll also be able to do some stringent tests of whether the massive objects are black holes as described by general relativity. It’s all pretty exciting for when LISA launches, which is currently planned for around 2034…

Sometimes, it leads to very little, and it seems like it's not worth it, and you wonder why you waited so long for something so disappointing

One of the most valuable traits a student or soldier can have: patience. Credit: Sony/Marvel

arXiv: 1703.09722 [gr-qc]
Journal: Physical Review D; 95(10):103012; 2017
Conference proceedings: 1704.00009 [astro-ph.GA] (from when work was still in-progress)
Estimated number of Marvel films before LISA launch: 48 (starting with Ant-Man and the Wasp)

Bonus notes

Hyphenation

Is it “extreme-mass-ratio inspiral”, “extreme mass-ratio inspiral” or “extreme mass ratio inspiral”? All are used in the literature. This is one of the advantages of using “EMRI”. The important thing is that we’re talking about inspirals that have a mass ratio which is extreme. For this paper, we used “extreme mass-ratio inspiral”, but when I started my PhD, I was first introduced to “extreme-mass-ratio inspirals”, so they are always stuck that way in my mind.

I think hyphenation is a bit of an art, and there’s no definitive answer here, just like there isn’t for superhero names, where you can have Iron Man, Spider-Man or Iceman.

Science with LISA

This paper is part of a series looking at what LISA could tell us about different gravitational wave sources. So far, this series covers

  1. Massive black hole binaries
  2. Cosmological phase transitions
  3. Standard sirens (for measuring the expansion of the Universe)
  4. Inflation
  5. Extreme-mass-ratio inspirals

You’ll notice there’s a change in the name of the mission from eLISA to LISA part-way through, as things have evolved. (Or devolved?) I think the main take-away so far is that the cosmology group is the most enthusiastic.

Importance of transient resonances in extreme-mass-ratio inspirals

Extreme-mass-ratio inspirals (EMRIs for short) are a promising source for the planned space-borne gravitational-wave observatory LISA. To detect and analyse them we need accurate models for the signals, which are exquisitely intricate. In this paper, we investigated a feature, transient resonances, which have not previously been included in our models. They are difficult to incorporate, but can have a big impact on the signal. Fortunately, we find that we can still detect the majority of EMRIs, even without including resonances. Phew!

EMRIs and orbits

EMRIs are a beautiful gravitational wave source. They occur when a stellar-mass black hole slowly inspirals into a massive black hole (as found in the centre of galaxies). The massive black hole can be tens of thousands or millions of times more massive than the stellar-mass black hole (hence extreme mass ratio). This means that the inspiral is slow—we can potentially measure tens of thousands of orbits. This is both the blessing and the curse of EMRIs. The huge numbers of cycles means that we can closely follow the inspiral, and build a detailed map of the massive black hole’s spacetime. EMRIs will give us precision measurements of the properties of massive black holes. However, to do this, we need to be able to find the EMRI signals in the data; we need models which can match the signals over all these cycles. Analysing EMRIs is a huge challenge.

 

EMRI orbits are complicated. At any moment, the orbit can be described by three orbital frequencies: one for radial (in/out) motion \Omega_r, one for polar (north/south if we think of the spin of the massive black hole like the rotation of the Earth) motion \Omega_\theta and one for axial (around in the east/west direction) motion \Omega_\phi. As gravitational waves are emitted, and the orbit shrinks, these frequencies evolve. The animation above, made by Steve Drasco, illustrates the evolution of an EMRI. Every so often, we can see the pattern freeze—the orbit stays in a constant shape (although this still rotates). This is a transient resonance. Two of the orbital frequencies become commensurate (so we might have 3 north/south cycles and 2 in/out cycles over the same period [bonus note])—this is the resonance. However, because the frequencies are still evolving, we don’t stay locked like this forever—which is why the resonance is transient. To calculate an EMRI, you need to know how the orbital frequencies evolve.

The evolution of an EMRI is slow—the time taken to inspiral is much longer than the time taken to complete one orbit. Therefore, we can usually split the problem of calculating the trajectory of an EMRI into two parts. On short timescales, we can consider orbits as having fixed frequencies. On long timescales, we can calculate the evolution by averaging over many orbits. You might see the problem with this—around resonances, this averaging breaks down. Whereas normally averaging over many orbits means averaging over a complicated trajectory that hits pretty much all possible points in the orbital range, on resonance, you just average over the same bit again and again. On resonance, terms which usually average to zero can become important. Éanna Flanagan and Tanja Hinderer first pointed out that around resonances the usual scheme (referred to as the adiabatic approximation) doesn’t work.

A non-resonant orbit

A non-resonant EMRI orbit in three dimensions (left) and two dimensions (right), ignoring the rotation in the axial direction. A non-resonant orbit will eventually fill the r\theta plane. Credit: Rob Cole

A 2:3 resonance

For comparison, a resonant EMRI orbit. A 2:3 resonance traces the same parts of the r\theta plane over and over. Credit: Rob Cole

Around a resonance, the evolution will be enhanced or decreased a little relative to the standard adiabatic evolution. We get a kick. This is only small, but because we observe EMRIs for so many orbits, a small difference can grow to become a significant difference later on. Does this mean that we won’t be able to detect EMRIs with our standard models? This was a concern, so back at the end of my PhD I began to investigate [bonus note]. The first step is to understand the size of the kick.

Jump for 2:3 resonance

A jump in the orbital energy across a 2:3 resonance. The plot shows the difference between the approximate adiabatic evolution and the instantaneous evolution including the resonance. The thickness of the blue line is from oscillations on the orbital timescale which is too short to resolve here. The dotted red line shows the fitted size of the jump. Time is measured in terms of the resonance time \tau_\mathrm{res} which is defined below. Figure 4 of Berry et al. (2016).

Resonance kicks

If there were no gravitational waves, the orbit would not evolve, it would be fixed. The orbit could then be described by a set of constants of motion. The most commonly used when describing orbits about black holes are the energy, angular momentum and Carter constant. For the purposes of this blog, we’ll not worry too much about what these constants are, we’ll just consider some constant I.

The resonance kick is a change in this constant \Delta I. What should this depend on? There are three ingredients. First, the rate of change of this constant F on the resonant orbit. Second, the time spent on resonance \tau_\mathrm{res}. The bigger these are, the bigger the size of the jump. Therefore,

|\Delta I| \propto F \tau_\mathrm{res}.

However, the jump could be positive or negative. This depends upon the relative phase of the radial and polar motion [bonus note]—for example, do they both reach their maximum point at the same time, or does one lag behind the other? We’ll call this relative phase q. By varying q, we can get our resonant trajectory to go through any possible point in space. Therefore, averaging over q should get us back to the adiabatic approximation: the average value of \Delta I must be zero. To complete our picture for the jump, we need a periodic function of the phase,

\Delta I = F \tau_\mathrm{res} f(q),

with \langle f(q) \rangle_q = 0. Now that we know the overall form, we can try to figure out what each of the pieces is.

The rate of change F is proportional to the mass ratio \eta \ll 1: the smaller the stellar-mass black hole is relative to the massive one, the smaller F is. The exact details depend upon gravitational self-force calculations, which we’ll skip over, as they’re pretty hard, but they are the same for all orbits (resonant or not).

We can think of the resonance timescale either as the time for the orbital frequencies to drift apart or the time for the orbit to start filling the space again (so that it’s safe to average). The two pictures yield the same answer—there’s a fuller explanation in Section III A of the paper. To define the resonance timescale, it is useful to define the frequency \Omega = n_r \Omega_r - n_\theta \Omega_\theta, which is zero exactly on resonance. If this is evolving at rate \dot{\Omega}, then the resonance timescale is

\displaystyle \tau_\mathrm{res} = \left[\frac{2\pi}{\dot{\Omega}}\right]^{1/2}.

This bridges the two timescales that usually define EMRIs: the short orbital timescale T and the long evolution timescale \tau_\mathrm{ev}:

T \sim \eta^{1/2} \tau_\mathrm{res} \sim \eta \tau_\mathrm{ev}.
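
To get a feel for this separation of timescales, here’s a quick back-of-the-envelope sketch for a hypothetical EMRI with a mass ratio of 10^{-5} and an orbital period of about an hour (both numbers purely illustrative). For these made-up numbers, the resonance lasts about a fortnight: many orbits, but only a tiny fraction of the inspiral.

```python
eta = 1e-5      # mass ratio (illustrative)
T = 3600.0      # orbital timescale in seconds (illustrative)

tau_res = T / eta ** 0.5   # resonance timescale, from T ~ eta^(1/2) tau_res
tau_ev = T / eta           # evolution timescale, from T ~ eta tau_ev

print(f"orbital timescale:   {T / 3600:.1f} hours")
print(f"resonance timescale: {tau_res / 86400:.1f} days")
print(f"evolution timescale: {tau_ev / (86400 * 365.25):.1f} years")
```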

To find the form of f(q), we need to do some quite involved maths (given in Appendix B of the paper) [bonus note]. This works by treating the evolution far from resonance as depending upon two independent times (effectively defining T and \tau_\mathrm{ev}), and then matching the evolution close to resonance using an expansion in terms of a different time (something like \tau_\mathrm{res}). The solution shows that the jump depends sensitively upon the phase q at resonance, which makes the jumps extremely difficult to calculate.

We numerically evaluated the size of kicks for different orbits and resonances. We found a number of trends. First, higher-order resonances (those with larger n_r and n_\theta) have smaller jumps than lower-order ones. This makes sense, as higher-order resonances come closer to covering all the points in the space, and so are more like averaging over the entire space. Second, jumps are larger for higher eccentricity orbits. This also makes sense, as you can’t have resonances for circular (zero eccentricity) orbits since there’s no radial motion, so the size of the jumps must tend to zero. We’ll see that these two points are important when it comes to observational consequences of transient resonances.

Astrophysical EMRIs

Now we’ve figured out the impact of passing through a transient resonance, let’s look at what this means for detecting EMRIs. The jump means that the post-resonance evolution can soon drift out of phase with the pre-resonance evolution. We can’t match both parts with the same adiabatic template. This could significantly hamper our prospects for detection, as we’re limited to the bits of signal we can pick up between resonances.

We created an astrophysical population of simulated EMRIs. We used numerical simulations to estimate a plausible population of massive black holes and a distribution of stellar-mass black holes inspiralling into them. We then used adiabatic models to see how many LISA (or eLISA as it was called at the time) could potentially detect. We found there were ~510 EMRIs detectable (with a signal-to-noise ratio of 15 or above) for a two-year mission.

We then calculated how much the signal-to-noise ratio would be reduced by passing through transient resonances. The plot below shows the distribution of signal-to-noise ratio for the original population, ignoring resonances, and then after factoring in the reduction. There are now ~490 detectable EMRIs, a loss of 4%. We can still detect the majority of EMRIs!

Signal-to-noise ratio distribution

Distribution of signal-to-noise ratios for EMRIs. In blue (solid outline), we have the results ignoring transient resonances. In orange (dashed outline), we have the distribution including the reduction due to resonance jumps. Events falling below 15 are deemed to be undetectable. Figure 10 of Berry et al. (2016).

We were worried about the impact of transient resonances, since we know that jumps can cause signals to become undetectable, so why aren’t we seeing a big effect in our population? The answer lies in the trends we saw earlier. Jumps are large for low-order resonances with high eccentricities. These were the ones first highlighted, as they are obviously the most important. However, low-order resonances are only encountered really close to the massive black hole. This means late in the inspiral, after we have already accumulated lots of signal-to-noise ratio. Losing a little bit of signal right at the end doesn’t hurt detectability too much. On top of this, gravitational wave emission efficiently damps down eccentricity. Orbits typically have low eccentricities by the time they hit low-order resonances, meaning that the jumps are actually quite small. Although small jumps lead to some mismatch, we can still use our signal templates without jumps. Therefore, resonances don’t hamper us (too much) in finding EMRIs!

This may seem like a happy ending, but it is not the end of the story. While we can detect EMRIs, we still need to be able to accurately infer their source properties. Features not included in our signal templates (like jumps) could bias our results. For example, it might be that we can better match a jump by using a template for a different black hole mass or spin. However, if we include jumps, these extra features could give us extra precision in our measurements. The question of what jumps could mean for parameter estimation remains to be answered.

arXiv: 1608.08951 [gr-qc]
Journal: Physical Review D; 94(12):124042(24); 2016
Conference proceedings: 1702.05481 [gr-qc] (only 2 pages—ideal for emergency journal club presentations)
Favourite jumpers: Woolly, Mario, Kangaroos

Bonus notes

Radial and polar only

When discussing resonances, and their impact on orbital evolution, we’ll only care about \Omega_r–\Omega_\theta resonances. Resonances with \Omega_\phi are not important because the spacetime is axisymmetric. The equations are identical for all values of the axial angle \phi, so it doesn’t matter where you are (or if you keep cycling over the same spot) for the evolution of the EMRI.

This, however, doesn’t mean that \Omega_\phi resonances aren’t interesting. They can lead to small kicks to the binary, because you are preferentially emitting gravitational waves in one direction. For EMRIs these kicks are negligibly small, but for more equal mass systems, they could have some interesting consequences, as pointed out by Maarten van de Meent.

Extra time

I’m grateful to the Cambridge Philosophical Society for giving me some extra funding to work on resonances. If you’re a Cambridge PhD student, make sure to become a member so you can take advantage of the opportunities they offer.

Calculating jumps

The theory of how to evolve through a transient resonance was developed by Kevorkian and coauthors. I spent a long time studying these calculations before working up the courage to attempt them myself. There are a few technical details which need to be adapted for the case of EMRIs. I finally figured everything out while in Warsaw Airport, coming back from a conference. It was the most I had ever felt like a real physicist.

No you won't

Transient resonances remind me of Spirographs. Thanks Frinkiac

Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignments

Gravitational waves allow us to infer the properties of binary black holes (two black holes in orbit about each other), but can we use this information to figure out how the black holes and the binary form? In this paper, we show that measurements of the black holes’ spins can help us figure this out, but probably not until we have at least 100 detections.

Black hole spins

Black holes are described by their masses (how much they bend spacetime) and their spins (how much they drag spacetime to rotate about them). The orientation of the spins relative to the orbit of the binary could tell us something about the history of the binary [bonus note].

We considered four different populations of spin–orbit alignments to see if we could tell them apart with gravitational-wave observations:

  1. Aligned—matching the idealised example of isolated binary evolution. This stands in for the case where misalignments are small, which might be the case if material blown off during a supernova ends up falling back and being swallowed by the black hole.
  2. Isotropic—matching the expectations for dynamically formed binaries.
  3. Equal misalignments at birth—this would be the case if the spins and orbit were aligned before the second supernova, which then tilted the plane of the orbit. (As the binary inspirals, the spins wobble around, so the two misalignment angles won’t always be the same).
  4. Both spins misaligned by supernova kicks, assuming that the stars were aligned with the orbit before exploding. This gives a more general scatter of unequal misalignments, but typically the primary (bigger and first forming) black hole is more misaligned.

These give a selection of possible spin alignments. For each, we assumed that the spin magnitude was the same and had a value of 0.7. This seemed like a sensible idea when we started this study [bonus note], but is now towards the upper end of what we expect for binary black holes.

Hierarchical analysis

To measure the properties of the population, we need to perform a hierarchical analysis: there are two layers of inference, one for the individual binaries, and one for the population.

From a gravitational wave signal, we infer the properties of the source using Bayes’ theorem. Given the data d_\alpha, we want to know the probability that the parameters \mathbf{\Theta}_\alpha take particular values, which is written as p(\mathbf{\Theta}_\alpha|d_\alpha). This is calculated using

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha) p(\mathbf{\Theta}_\alpha)}{p(d_\alpha)},

where p(d_\alpha | \mathbf{\Theta}_\alpha) is the likelihood, which we can calculate from our knowledge of the noise in our gravitational wave detectors, p(\mathbf{\Theta}_\alpha) is the prior on the parameters (what we would have guessed before we had the data), and the normalisation constant p(d_\alpha) is called the evidence. We’ll use the evidence again in the next layer of inference.
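As a toy illustration of this first layer (this is not the LALInference machinery used for real signals, and the one-parameter Gaussian likelihood is made up), the posterior can be built on a grid by multiplying the likelihood by the prior and normalising:

import numpy as np

# Toy single-event inference: one parameter theta (it could stand in for a
# spin-orbit misalignment angle), a made-up Gaussian likelihood playing the
# role of p(d|theta), and a flat prior p(theta).
theta = np.linspace(0.0, np.pi, 1000)
likelihood = np.exp(-0.5 * ((theta - 0.4) / 0.3) ** 2)   # stand-in for p(d|theta)
prior = np.full_like(theta, 1.0 / np.pi)                 # flat, non-informative prior

unnormalised = likelihood * prior
evidence = np.trapz(unnormalised, theta)   # p(d): the normalisation constant
posterior = unnormalised / evidence        # p(theta|d)

print(f"evidence p(d) = {evidence:.3f}")

For real signals the parameter space has many more dimensions, so the posterior is explored with stochastic samplers rather than a grid.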

Our prior on the parameters should actually depend upon what we believe about the astrophysical population. It is different if we believed that Model 1 were true (when we’d only consider aligned spins) than for Model 2. Therefore, we should really write

\displaystyle p(\mathbf{\Theta}_\alpha|d_\alpha, \lambda) = \frac{p(d_\alpha | \mathbf{\Theta}_\alpha,\lambda) p(\mathbf{\Theta}_\alpha|\lambda)}{p(d_\alpha|\lambda)},

where  \lambda denotes which model we are considering.

This is an important point to remember: if you are using our LIGO results to test your theory of binary formation, you need to correct for our choice of prior. We try to pick non-informative priors (priors that don’t make strong assumptions about the physics of the source), but this doesn’t mean that they match what would be expected from your model.

We are interested in the probability distribution for the different models: how many binaries come from each. Given a set of different observations \{d_\alpha\}, we can work this out using another application of Bayes’ theorem (yay)

\displaystyle p(\mathbf{\lambda}|\{d_\alpha\}) = \frac{p(\{d_\alpha\} | \mathbf{\lambda}) p(\mathbf{\lambda})}{p(\{d_\alpha\})},

where p(\{d_\alpha\} | \mathbf{\lambda}) is just all the evidences for the individual events (given that model) multiplied together, p(\mathbf{\lambda}) is our prior for the different models, and p(\{d_\alpha\}) is another normalisation constant.
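Here is a minimal sketch of this second layer for the simplest case, where \lambda labels a single model that produced all of the events (the full analysis goes further and infers the fraction of binaries from each channel); the per-event evidences are made-up numbers standing in for the outputs of the individual-event analyses:

import numpy as np

# Made-up log-evidences ln p(d_alpha|lambda): one row per event, one column
# per spin-misalignment model (Models 1-4).
log_evidence = np.array([
    [-1.0, -2.3, -1.2, -2.0],   # event 1
    [-1.5, -1.9, -1.4, -1.8],   # event 2
    [-0.9, -2.5, -1.1, -2.2],   # event 3
])

log_prior = np.log(np.full(4, 0.25))   # equal prior probability for each model

# Multiplying the evidences across events corresponds to summing their logarithms.
log_post = log_evidence.sum(axis=0) + log_prior
log_post -= log_post.max()             # subtract the maximum for numerical stability
posterior = np.exp(log_post)
posterior /= posterior.sum()           # normalise: this is p(lambda|{d_alpha})

for model, prob in enumerate(posterior, start=1):
    print(f"Model {model}: {prob:.2f}")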

Now that we know how to go from a set of observations to the probability distribution for the different channels, let’s give it a go!

Results

To test our approach, we made a set of mock gravitational-wave measurements. We generated signals from binaries for each of our four models, and analysed these as we would for real signals (using LALInference). This is rather computationally expensive, and we wanted a large set of events to analyse, so using these results as a guide, we created a larger catalogue of approximate distributions for the inferred source parameters p(\mathbf{\Theta}_\alpha|d_\alpha). We then fed these through our hierarchical analysis. The GIF below shows how measurements of the fraction of binaries from each population tighten up as we get more detections: the true fraction is marked in blue.

Fraction of binaries from each of the four models

Probability distribution for the fraction of binaries from each of our four spin misalignment populations for different numbers of observations. The blue dot marks the true fraction: an equal fraction from all four channels.

The plot shows that we do zoom in towards the true fraction of events from each model as the number of events increases, but there are significant degeneracies between the different models. Notably, it is difficult to tell apart Models 1 and 3, as both have strong support for both spins being nearly aligned. Similarly, there is a degeneracy between Models 2 and 4 as both allow for the two spins to have very different misalignments (and for the primary spin, which is the better measured one, to be quite significantly misaligned).

This means that we should be able to distinguish aligned from misaligned populations (we estimated that as few as 5 events would be needed to distinguish the case that all events came from either Model 1 or Model 2 if those were the only two allowed possibilities). However, it will be more difficult to distinguish different scenarios which only lead to small misalignments from each other, or disentangle whether there is significant misalignment due to big supernova kicks or because binaries are formed dynamically.

The uncertainty on the fraction of events from each model scales roughly inversely with the square root of the number of observations, so it may be slow progress making these measurements. I’m not sure whether we’ll know the answer to how binary black holes form, or who will sit on the Iron Throne first.
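As a back-of-the-envelope check (a simple binomial counting estimate, not the full hierarchical analysis), if a fraction f of events comes from a given channel, then after N detections the uncertainty on that fraction is roughly

\displaystyle \sigma_f \approx \sqrt{\frac{f(1-f)}{N}},

so halving the uncertainty requires roughly four times as many detections.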

arXiv: 1703.06873 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 471(3):2801–2811; 2017
Birmingham science summary: Hierarchical analysis of gravitational-wave measurements of binary black hole spin–orbit misalignment (by Simon)
If you like this you might like: Farr et al. (2017), Talbot & Thrane (2017), Vitale et al. (2017), Trifirò et al. (2016), Minogue (2000)

Bonus notes

Spin misalignments and formation histories

If you have two stars forming in a binary together, you’d expect them to be spinning in roughly the same direction, rotating the same way as they go round in their orbit (like our Solar System). This is because they all formed from the same cloud of swirling gas and dust. Furthermore, if two stars are to form a black hole binary that we can detect gravitational waves from, they need to be close together. This means that there can be tidal forces which gently tug the stars to align their rotation with the orbit. As they get older, stars puff up, meaning that if you have a close-by neighbour, you can share outer layers. This transfer of material will also tend to align the rotation. Adding this all together, if you have an isolated binary of stars, you might expect that when they collapse down to become black holes, their spins are aligned with each other and the orbit.

Unfortunately, real astrophysics is rarely so clean. Even if the stars were initially rotating the same way as each other, that doesn’t mean that their black hole remnants will do the same. This depends upon how the star collapses. Massive stars explode as supernovae, blasting off their outer layers while their cores collapse down to form black holes. Escaping material could carry away angular momentum, meaning that the black hole is spinning in a different direction to its parent star, or material could be blasted off asymmetrically, giving the new black hole a kick. This would change the plane of the binary’s orbit, misaligning the spins.

Alternatively, the binary could be formed dynamically. Instead of two stars living their lives together, we could have two stars (or black holes) come close enough together to form a binary. This is likely to happen in regions where there’s a high density of stars, such as a globular cluster. In this case, since the binary has been randomly assembled, there’s no reason for the spins to be aligned with each other or the orbit. For dynamically assembled binaries, all spin–orbit misalignments are equally probable.

Slow and steady

This project was led by Simon Stevenson. It was one of the first things we started working on at the beginning of his PhD. He has now graduated, and is off to start a new exciting life as a postdoc in Australia. We got a little distracted by other projects, most notably analysing the first detections of gravitational waves. Simon spent a lot of time developing the COMPAS population code, a code to simulate the evolution of binaries. Looking back, it’s impressive how far he’s come. This paper used a simple approximation to estimate the masses of our black holes: we called it the Post-it note model, as we wrote it down on a single Post-it. Now Simon’s writing papers including the complexities of common-envelope evolution in order to explain LIGO’s actual observations.