GW190425—First discovery from O3

The first gravitational wave detection of LIGO and Virgo’s third observing run (O3) has been announced: GW190425! [bonus note] The signal comes from the inspiral of two objects which have a combined mass of about 3.4 times the mass of our Sun. These masses are in the range expected for neutron stars, making GW190425 the second observation of gravitational waves from a binary neutron star inspiral (after GW170817). While the individual masses of the two components agree with the masses of neutron stars found in binaries, the overall mass of the binary (3.4 times the mass of our Sun) is noticeably larger than that of any previously known binary neutron star system. GW190425 may be the first evidence for multiple ways of forming binary neutron stars.

The gravitational wave signal

On 25 April 2019 the LIGO–Virgo network observed a signal. This was promptly shared with the world as candidate event S190425z [bonus note]. The initial source classification was as a binary neutron star. This caused a flurry of excitement in the astronomical community [bonus note], as the smashing together of two neutron stars should lead to the emission of light. Unfortunately, the sky localization was HUGE (the initial 90% area was about a quarter of the sky, and the refined localization provided the next day wasn’t much of an improvement), and the distance was four times that of GW170817 (meaning that any counterpart would be about 16 times fainter). Covering all this area is almost impossible. No convincing counterpart has been found [bonus note].

Preliminary sky map for GW190425

Early sky localization for GW190425. Darker areas are more probable. This localization was circulated in GCN 24228 on 26 April and was used to guide follow-up, even though it covers a huge amount of the sky (the 90% area is about 18% of the sky).

The localization for GW190425 was so large because LIGO Hanford (LHO) was offline at the time. Only LIGO Livingston (LLO) and Virgo were online. The Livingston detector was about 2.8 times more sensitive than Virgo, so pretty much all the information came from Livingston. I’m looking forward to when we have a larger network of detectors at comparable sensitivity online (we really need three detectors observing for a good localization).

We typically search for gravitational waves by looking for coincident signals in our detectors. When looking for binaries, we have templates for what the signals look like, so we match these to the data and look for good overlaps. The overlap is quantified by the signal-to-noise ratio. Since our detectors contain all sorts of noise, you’d expect them to randomly match templates from time to time. On average, you’d expect the signal-to-noise ratio to be about 1. The higher the signal-to-noise ratio, the less likely it is that a random noise fluctuation could account for it.
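
To make the idea concrete, here is a minimal sketch of a frequency-domain matched filter (using Python and numpy, with a flat noise spectrum and an arbitrary stand-in template). This is nothing like the real search algorithms, which handle template banks, non-stationary noise and much more, but it shows why noise alone gives a signal-to-noise ratio of around 1 while even a quiet signal stands out.

    import numpy as np

    def inner(a, b, psd, df):
        # Noise-weighted inner product <a, b> = 4 Re[sum(a conj(b) / psd)] df
        return 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df

    def matched_filter_snr(data, template, psd, df):
        # Overlap of the data with a unit-normalised template
        return inner(data, template, psd, df) / np.sqrt(inner(template, template, psd, df))

    rng = np.random.default_rng(190425)
    n, df = 4096, 1.0
    psd = np.ones(n)                   # flat (white) noise power spectral density
    sigma = np.sqrt(psd / (4.0 * df))  # per-bin noise amplitude for this convention
    template = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stand-in template
    noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    print(matched_filter_snr(noise, template, psd, df))                    # ~0, scatter ~1
    print(matched_filter_snr(noise + 0.05 * template, template, psd, df))  # ~9, stands out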

Our search algorithms don’t just rely on the signal-to-noise ratio. The complication is that there are frequently glitches in our detectors. Glitches can be extremely loud, and so can have a significant overlap with a template, even though they don’t look anything like one. Therefore, our search algorithms also look at the overlap for different parts of the template, to check that these match the expected distribution (for example, that there’s not one bit which is really loud, while the others don’t match). Each of our search algorithms has its own way of doing this, but they are largely based around the ideas from Allen (2005), which is pleasantly readable if you like this sort of thing. It’s important to collect lots of data so that we know the expected distribution of signal-to-noise ratio and signal-consistency statistics (sometimes things change in our detectors and new types of noise pop up, which can confuse things).
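
To give the flavour of these checks, one simple version of the test divides the template into p frequency bands which should each contribute an equal share of the total signal-to-noise ratio \rho. If \rho_l is the contribution actually recovered from band l, the statistic

\displaystyle \chi^2 = p \sum_{l=1}^{p} \left(\rho_l - \frac{\rho}{p}\right)^2

is small for a real signal, but large for a glitch which rings off only one part of the template.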

It is extremely important to check the state of the detectors at the time of an event candidate. In O3, we have unfortunately had to retract various candidate events after we’ve identified that our detectors were in a disturbed state. The signal-consistency checks take care of most of these instances, but they are not perfect. Fortunately, it is usually easy to identify that there is a glitch—the difficult question is whether there is a glitch on top of a signal (as was the case for GW170817). Our checks revealed nothing wrong with the detectors which could explain the signal (there was a small glitch in Livingston about 60 seconds before the merger time, but this doesn’t overlap with the signal).

Now, the search that identified GW190425 was actually just looking for single-detector events: outliers, in terms of signal-to-noise ratio and signal consistency, from the distribution expected of noise. This was a Good Thing™. While the signal-to-noise ratio in Livingston was 12.9 (pretty darn good), the signal-to-noise ratio in Virgo was only 2.5 (pretty meh) [bonus note]. This is below the threshold (a signal-to-noise ratio of 4) the search algorithms use to look for coincidences (the threshold is there to cut computational expense: the lower the threshold, the more triggers need to be checked) [bonus note]. The Bad Thing™ about GW190425 being found by the single-detector search, and being missed by the usual multiple-detector search, is that it is much harder to estimate the false-alarm rate—it’s much harder to rule out the possibility of some unusual noise when you don’t have another detector to cross-reference against. We don’t have a final estimate for the significance yet. The initial estimate was 1 in 69,000 years (which relies on significant extrapolation). What we can be certain of is that this event is a noticeable outlier: across the whole of O1, O2 and the first 50 days of O3, it comes second only to GW170817. In short, we can say that GW190425 is worth betting on, but I’m not sure (yet) how heavily you want to bet.

Comparison of GW190425 to O1, O2 and start of O3 data

Detection statistics for GW190425 showing how it stands out from the background. The left plot shows the signal-to-noise ratio (SNR) and signal-consistency statistic from the GstLAL algorithm, which made the detection. The coloured density plot shows the distribution of background triggers. The right plot shows the detection statistic from PyCBC, which combines the SNR and their signal-consistency statistic. The lines show the background distributions. GW190425 is more significant than everything apart from GW170817. Adapted from Figures 1 and 6 of the GW190425 Discovery Paper.

I’m always cautious of single-detector candidates. If you find a high-mass binary black hole (which would be an extremely short template), or something with extremely high spins (indicating that the templates don’t match unless you push to the bounds of what is physical), I would be suspicious. Here, we do have consistent Virgo data, which is good for backing up what is observed in Livingston. It may be a single-detector detection, but it is a multiple-detector observation. To further reassure ourselves about GW190425, we ran our full set of detection algorithms on the Livingston data to check that they all find similar signals, with reasonable signal-consistency test values. Indeed, they do! The best explanation for the data seems to be a gravitational wave.

The source

Given that we have a gravitational wave, where did it come from? The best-measured property of a binary inspiral is its chirp mass—a particular combination of the two component masses. For GW190425, this is 1.44^{+0.02}_{-0.02} solar masses (quoting the 90% range for parameters). This is larger than GW170817’s 1.186^{+0.001}_{-0.001} solar masses: we have a heavier binary.
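
For component masses m_1 and m_2, the chirp mass is

\displaystyle \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}},

the combination which controls how quickly the inspiral chirps up in frequency, which is why it is measured so precisely.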

Binary component masses

Estimated masses for the two components in the binary. We show results for two different spin limits. The two-dimensional plot shows the 90% probability contour, which follows a line of constant chirp mass. The one-dimensional plots show the individual masses; the dotted lines mark 90% bounds away from equal mass. The masses are in the range expected for neutron stars. Figure 3 of the GW190425 Discovery Paper.

Figuring out the component masses is trickier. There is a degeneracy between the spins and the mass ratio—by increasing the spins of the components it is possible to get more extreme mass ratios to fit the signal. As we did for GW170817, we quote results with two ranges of spins. The low-spin results use a maximum spin of 0.05, which matches the range of spins we see for binary neutron stars in our Galaxy, while the high-spin results use a limit of 0.89, which safely encompasses the upper limit for neutron stars (if they spin faster than about 0.7 they’ll tear themselves apart). We find that the heavier component of the binary has a mass of 1.62–1.88 solar masses with the low-spin assumption, and 1.61–2.52 solar masses with the high-spin assumption; the lighter component has a mass of 1.45–1.69 solar masses with the low-spin assumption, and 1.12–1.68 solar masses with the high-spin assumption. These are in the range of masses expected for neutron stars.

Without an electromagnetic counterpart, we cannot be certain that we have two neutron stars. We could tell from the gravitational wave signal by measuring the imprint left by the tidal distortion of the neutron stars. Black holes have a tidal deformability of 0, so measuring a nonzero tidal deformability would be the smoking gun that we have a neutron star. Unfortunately, the signal isn’t loud enough for us to find any evidence of these effects. This isn’t surprising—we couldn’t say anything for GW170817 without assuming its source was a binary neutron star, and GW170817 was louder and had a lower mass source (where tidal effects are easier to measure). We did check—it’s probably not the case that the components were made of marshmallow, but there’s not much more we can say (although we can still make pretty simulations). It would be really odd to have black holes this small, but we can’t rule out that at least one of the components was a black hole.

Two binary neutron stars is the most likely explanation for GW190425. How does it compare to other binary neutron stars? Looking at the 17 known binary neutron stars in our Galaxy, we see that GW190425’s source is much heavier. This is intriguing—could there be a different, previously unknown formation mechanism for this binary? Perhaps the survey of Galactic binary neutron stars (thanks to radio observations) is incomplete? Maybe the more massive binaries form in close binaries, which are hard to spot in the radio (as the neutron star moves so quickly, the radio signal gets smeared out), or maybe such heavy binaries only form from stars with low metallicity (few elements heavier than hydrogen and helium) from earlier in the Universe’s history, so that they are no longer emitting in the radio today? I think it’s too early to tell—but it’s still fun to speculate. I expect there’ll be a flurry of explanations out soon.

Galactic binary neutron stars and GW190425

Comparison of the total binary masses of the 10 known binary neutron stars in our Galaxy that will merge within a Hubble time with that of GW190425’s source (with both the high-spin and low-spin assumptions). We also show a Gaussian fit to the Galactic binaries. GW190425’s source is higher mass than previously known binary neutron stars. Figure 5 of the GW190425 Discovery Paper.

Since the source seems to be an outlier in terms of mass compared to the Galactic population, I’m a little cautious about using the low-spin results—if this sample doesn’t reflect the full range of masses, perhaps it doesn’t reflect the full range of spins either? I think it’s good to keep an open mind. The fastest spinning neutron star we know of has a spin of around 0.4, so maybe neutron stars in binaries can spin this fast too?

One thing we can measure is the distance to the source: 160^{+70}_{-70}~\mathrm{Mpc}. That means the signal was travelling across the Universe for about half a billion years. The distance is about as many times bigger than the diameter of Earth’s orbit about the Sun as that diameter is bigger than the height of a LEGO brick. Space is big.
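
If you want to check the comparison, here’s the rough arithmetic (assuming the height of a classic LEGO brick, about 9.6 mm):

    Mpc = 3.086e22           # metres in a megaparsec
    au = 1.496e11            # metres in an astronomical unit
    distance = 160 * Mpc     # distance to GW190425's source
    orbit = 2 * au           # diameter of Earth's orbit about the Sun
    brick = 9.6e-3           # height of a LEGO brick in metres
    print(distance / orbit)  # ~1.6e13
    print(orbit / brick)     # ~3.1e13, the same ballpark

The two ratios agree to within a factor of 2, which is close enough for a sense of scale.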

We have now observed two gravitational wave signals from binary neutron stars. What does the new observation mean for the merger rate of binary neutron stars? To go from an observed number of signals to how many binaries are out there in the Universe, we need to know how sensitive our detectors are to the sources. This depends on the masses of the sources, since more massive binaries produce louder signals. We’re not sure of the mass distribution for binary neutron stars yet. If we assume a uniform mass distribution for neutron stars between 0.8 and 2.3 solar masses, then at the end of O2 we estimated a merger rate of 110–2520~\mathrm{Gpc^{-3}\,yr^{-1}}. Now, adding in the first 50 days of O3, we estimate the rate to be 250–2470~\mathrm{Gpc^{-3}\,yr^{-1}}, so roughly the same (which is nice) [bonus note].

Since GW190425’s source looks rather different from other neutron stars, you might be interested in breaking up the merger rates to look at different classes. Using measured masses, we can construct rates for GW170817-like (matching the usual binary neutron star population) and GW190425-like binaries (we did something similar for binary black holes after our first detection). The GW170817-like rate is 110–2500~\mathrm{Gpc^{-3}\,yr^{-1}}, and the GW190425-like rate is lower at 70–4600~\mathrm{Gpc^{-3}\,yr^{-1}}. Combining the two (assuming that binary neutron stars are all one class or the other) gives an overall rate of 290–2810~\mathrm{Gpc^{-3}\,yr^{-1}}, which is not too different from assuming the uniform distribution of masses.
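
As a toy version of how a rate estimate works (the real analysis also folds in the uncertainty in our sensitivity and in the mass distribution), you can combine a Poisson likelihood for the number of detections with a surveyed volume–time. The \langle VT \rangle value below is made up purely for illustration:

    from scipy import stats

    n_det = 2    # binary neutron star detections: GW170817 and GW190425
    vt = 1.5e-3  # hypothetical sensitive volume-time <VT> in Gpc^3 yr (made up!)

    # With a Jeffreys prior, the posterior on a Poisson rate is a Gamma distribution
    posterior = stats.gamma(a=n_det + 0.5, scale=1.0 / vt)
    lo, hi = posterior.ppf([0.05, 0.95])
    print(f"90% interval: {lo:.0f}-{hi:.0f} Gpc^-3 yr^-1")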

Given these rates, we might expect some more nice binary neutron star signals in the O3 data. There is a lot of science to come.

Future mysteries

GW190425 hints that there might be a greater variety of binary neutron stars out there than previously thought. As we collect more detections, we can start to reconstruct the mass distribution. Using this, together with the merger rate, we can start to pin down the details of how these binaries form.

As we find more signals, we should also find a few which are loud enough to measure tidal effects. With these, we can start to figure out the properties of the Stuff™ which makes up neutron stars, and potentially figure out if there are small black holes in this mass range. Discovering smaller black holes would be extremely exciting—these wouldn’t be formed from collapsing stars, but potentially could be remnants left over from the early Universe.

Neutron star masses and radii for GW190425

Probability distributions for neutron star masses and radii (blue for the more massive neutron star, orange for the lighter), assuming that GW190425’s source is a binary neutron star. The left plots use the high-spin assumption, the right plots use the low-spin assumption. The top plots use equation-of-state insensitive relations, and the bottom use parametrised equation-of-state models incorporating the requirement that neutron stars can be at least 1.97 solar masses. Similar analyses were done in the GW170817 Equation-of-state Paper. In the one-dimensional plots, the dashed lines indicate the priors. Figure 16 of the GW190425 Discovery Paper.

With more detections (especially when we have more detectors online), we should also be lucky enough to have a few which are well localised. These are the events for which we are most likely to find an electromagnetic counterpart. As our gravitational-wave detectors become more sensitive, we can detect sources further out. These are much harder to find counterparts for, so we mustn’t expect every detection to have a counterpart. However, nearby sources will be better localised, increasing our odds of finding a counterpart. From such multimessenger observations we can learn a lot. I’m especially interested to see how typical GW170817 really was.

O3 might see gravitational wave detection becoming routine, but that doesn’t mean gravitational wave astronomy is any less exciting!

Title: GW190425: Observation of a compact binary coalescence with total mass ~ 3.4 solar masses
Journal: Astrophysical Journal Letters; 892(1):L3(24); 2020
arXiv: arXiv:2001.01761 [astro-ph.HE] [bonus note]
Science summary: GW190425: The heaviest binary neutron star system ever seen?
Data release: Gravitational Wave Open Science Center; Parameter estimation results
Rating: 🥇😮🥂🥇

Bonus notes

Exceptional events

The plan for publishing papers in O3 is that we would write a paper for any particularly exciting detections (such as a binary neutron star), and then put out a catalogue of all our results later. The initial discovery papers wouldn’t be the full picture, just the key details, so that the entire community could get working on them. Our initial timeline was to get the individual papers out in four months. That’s not going so well; it turns out that the most interesting events have lots of interesting properties, which take some time to understand. Who’d have guessed?

We’re still working on getting papers out as soon as possible. Our catalogue papers will include full analyses, with results which we can’t produce on these shorter timescales. The catalogue paper for the first half of O3 (O3a) is currently pencilled in for April 2020.

Naming conventions

The name of a gravitational wave signal is set by the date it is observed. GW190425 is hence the gravitational wave (GW) observed on 2019 April 25th. Our candidate alerts don’t start out with the GW prefix, as we still need to do lots of work to check if they are real. Their names start with S for superevent (not for hope) [bonus bonus note], then the date, and then a letter indicating the order in which it was uploaded to our database of candidates (we upload candidates with false alarm rates of around one per hour, so there are multiple database entries per day, and most are false alarms). S190425z was the 26th superevent uploaded on 2019 April 25th.

What is a superevent? We call anything flagged by our detection pipelines an event. We have multiple detection pipelines, and often multiple pipelines produce events for the same stretch of data (you’d expect this to happen for real signals). It was rather confusing having multiple events for the same signal (especially when trying to quickly check a candidate to issue an alert), so in O3 we group together events from similar times into SUPERevents.

GRB 190425?

Pozanenko et al. (2019) suggest that there was a gamma-ray burst observed by INTEGRAL (first reported in GCN 24170). The INTEGRAL team themselves don’t find anything in their data, and seem sceptical of the significance of the detection claim. The significance of the claim seems to be based on there being two peaks in the data (one about 0.5 seconds after the merger, one 5.9 seconds after the merger), but I’m not convinced why that should boost it. Nothing was observed by Fermi, which is possibly because the source was obscured by the Earth for them. I’m interested in seeing more study of this possible gamma-ray burst.

EMMA 2019

At the time of GW190425, I was attending the first day of the Enabling Multi-Messenger Astrophysics in the Big Data Era Workshop. This was a meeting bringing together many of those involved in the search for counterparts to gravitational wave events. The alert for S190425z caused some excitement. I don’t think there was much sleep that week.

Signal-to-noise ratio ratios

The signal-to-noise ratio reported from our search algorithm for LIGO Livingston is 12.9, and the same code gives 2.5 for Virgo. Virgo was about 2.8 times less sensitive than Livingston at the time, so you might be wondering why Virgo’s signal-to-noise ratio is 2.5, instead of 12.9/2.8 ≈ 4.6. The reason is that our detectors are not equally sensitive in all directions. They are most sensitive to sources directly above and below, and less sensitive to sources from the sides. The relative signal-to-noise ratios, together with the times of arrival at the different detectors, help us to figure out the direction the signal comes from.
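
If you want to play with the directional sensitivity, the response of an interferometer to the two gravitational-wave polarisations has a standard closed form. In the sketch below, theta and phi give the source direction in the detector’s frame (arms along the x and y axes), and psi is the polarisation angle:

    import numpy as np

    def antenna_pattern(theta, phi, psi):
        # Response of an L-shaped interferometer to the plus and cross polarisations
        a = 0.5 * (1.0 + np.cos(theta)**2)
        f_plus = a * np.cos(2 * phi) * np.cos(2 * psi) - np.cos(theta) * np.sin(2 * phi) * np.sin(2 * psi)
        f_cross = a * np.cos(2 * phi) * np.sin(2 * psi) + np.cos(theta) * np.sin(2 * phi) * np.cos(2 * psi)
        return f_plus, f_cross

    print(antenna_pattern(0.0, 0.0, 0.0))              # overhead: (1, 0), maximum response
    print(antenna_pattern(np.pi / 2, np.pi / 4, 0.0))  # in the plane, between the arms: ~(0, 0), a blind spot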

Detection thresholds

In O2, GW170818 was only detected by GstLAL because its signal-to-noise ratios in Hanford and Virgo (4.1 and 4.2 respectively) were below the threshold used by PyCBC for their analysis (in O2 it was 5.5). Subsequently, PyCBC has been rerun on the O2 data to produce the second Open Gravitational-wave Catalog (2-OGC). This is an analysis performed by PyCBC experts both inside and outside the LIGO Scientific & Virgo Collaboration. For this, a threshold of 4 was used, and consequently they found GW170818, which is nice.

I expect that if the threshold for our usual multiple-detector detection pipelines were lowered to ~2, they would find GW190425. Doing so would make the analysis much trickier, so I’m not sure if anyone will ever attempt this. Let’s see. Perhaps the 3-OGC team will be feeling ambitious?

Rates calculations

In comparing rates calculated for this paper and those from our end-of-O2 paper, my student Chase Kimball (who calculated the new numbers) would like me to remember that it’s not exactly an apples-to-apples comparison. The older numbers evaluated our sensitivity to gravitational waves by doing a large number of injections: we simulated signals in our data and saw what fraction our search algorithms could pick out. The newer numbers used an approximation (a simple signal-to-noise ratio threshold) to estimate our sensitivity. Performing injections is computationally expensive, so we’re saving that for our end-of-run papers. Given that we currently have only two detections, the uncertainty on the rates is large, and so we don’t need to worry too much about the details of calculating the sensitivity. We did calibrate our approximation to past injection results, so I think it’s really an apples-to-pears-carved-into-the-shape-of-apples comparison.

Paper release

The original plan for GW190425 was to have the paper published before the announcement, as we did with our early detections. The timeline neatly aligned with the AAS meeting, so that seemed like a good place to make the announcement. We managed to get the paper submitted, and referee reports back, but we didn’t quite get everything done in time for the AAS announcement, so Plan B was to have the paper appear on the arXiv just after the announcement. Unfortunately, there was a problem uploading files to the arXiv (too large), and by the time that was fixed the posting deadline had passed. Therefore, we went with Plan C of sharing the paper on the LIGO DCC. Next time you’re struggling to upload something online, remember that it happens to Nobel-Prize winning scientific collaborations too.

On the question of when it is best to share a paper, I’m still not decided. I like the idea of being peer-reviewed before making a big splash in the media. I think it is important to show that science works by having lots of people study a topic before coming to a consensus. Evidence needs to be evaluated by independent experts. On the other hand, engaging the entire community can lead to greater insights than a couple of journal reviewers, and posting to the arXiv gives the opportunity to make adjustments before you have the finished article.

I think I am leaning towards early posting in general—the amount of internal review that our Collaboration papers receive satisfies my requirement that scientists are seen to be careful, and I like getting a wider range of comments—I think this leads to having the best paper in the end.

S

The joke that S stands for super, not hope is recycled from an article I wrote for the LIGO Magazine. The editor, Hannah Middleton, wasn’t sure that many people would get the reference, but graciously printed it anyway. Did people get it, or do I need to fly around the world really fast?

Advertisement

Second star to the right and straight on ’til morning—Astrophysics white papers

What will be the next big thing in astronomy? One of the hard things about research is that you often don’t know what you will discover before you embark on an investigation. An idea might work out, or it might not, or along the way you might discover something unexpected which is far more interesting. As you might imagine, this can make laying definite plans difficult…

However, it is important to have plans for research. While you might not be sure of the outcome, it is necessary to weigh the risks and rewards associated with the probable results before you invest your time and taxpayers’ money!

To help with planning and prioritising, researchers in astrophysics often pull together white papers [bonus note]. These are sketches of ideas for future research, arguing why you think they might be interesting. These can then be discussed within the community to help shape the direction of the field. If other scientists find the paper convincing, you can build support which helps push for funding. If there are gaps in the logic, others can point these out to save you from heading the wrong way. This type of consensus building is especially important for large experiments or missions—you don’t want to spend a billion dollars on something unless you’re really sure it is a good idea and lots of people agree.

I have been involved with a few white papers recently. Here are some key ideas for where research should go.

Ground-based gravitational-wave detectors: The next generation

We’ve done some awesome things with Advanced LIGO and Advanced Virgo. In just a couple of years we have revolutionized our understanding of binary black holes. That’s not bad. However, our current gravitational-wave observatories are limited in what they can detect. What amazing things could we achieve with a new generation of detectors?

It can take decades to develop new instruments, therefore it’s important to start thinking about them early. Obviously, what we would most like is an observatory which can detect everything, but that’s not feasible. In this white paper, we pick the questions we most want answered, and see what the requirements for a new detector would be. A design which satisfies these specifications would therefore be a solid choice for future investment.

Binary black holes are the perfect source for ground-based detectors. What do we most want to know about them?

  1. How many mergers are there, and how does the merger rate change over the history of the Universe? We want to know how binary black holes are made. The merger rate encodes lots of information about how to make binaries, and comparing how it evolves with the rate at which the Universe forms stars will give us a deeper understanding of how black holes are made.
  2. What are the properties (masses and spins) of black holes? The merger rate tells us some things about how black holes form, but other properties like the masses, spins and orbital eccentricity complete the picture. We want to make precise measurements for individual systems, and also understand the population.
  3. Where do supermassive black holes come from? We know that stars can collapse to produce stellar-mass black holes. We also know that the centres of galaxies contain massive black holes. Where do these massive black holes come from? Do they grow from our smaller black holes, or do they form in a different way? Looking for intermediate-mass black holes in the gap in-between will tell us whether there is a missing link in the evolution of black holes.
Detection horizon as a function of binary mass for Advanced LIGO, A+, Cosmic Explorer and the Einstein Telescope

The detection horizon (the distance to which sources can be detected) for Advanced LIGO (aLIGO), its upgrade A+, and the proposed Cosmic Explorer (CE) and Einstein Telescope (ET). The horizon is plotted for binaries with equal-mass, nonspinning components. Adapted from Hall & Evans (2019).

What can we do to answer these questions?

  1. Increase sensitivity! Advanced LIGO and Advanced Virgo can detect a 30 M_\odot + 30 M_\odot binary out to a redshift of z \approx 1. The planned detector upgrade A+ will see them out to redshift z \approx 2. That’s pretty impressive: it means we’re covering 10 billion years of history. However, the peak in the Universe’s star formation happens at around z \approx 2, so we’d really like to see beyond this in order to measure how the merger rate evolves. Ideally we would see all the way back to cosmic dawn at z \approx 20, when the Universe was only 200 million years old and the first stars lit up.
  2. Increase our frequency range! Our current detectors are limited in the range of frequencies they can detect. Pushing to lower frequencies helps us to detect heavier systems. If we want to detect intermediate-mass black holes of 100 M_\odot we need this low frequency sensitivity. At the moment, Advanced LIGO can get down to about 10~\mathrm{Hz}. The plot below shows the signal from a 100 M_\odot + 100 M_\odot binary at z = 10. The signal is completely undetectable at 10~\mathrm{Hz}, as it lies almost entirely below that frequency (see the worked numbers just after this list).

    Gravitational wave signal from a binary of two 100 solar mass black holes at a redshift of 10

    The gravitational wave signal from the final stages of inspiral, merger and ringdown of two 100 solar mass black holes at a redshift of 10. The signal chirps up in frequency. The colour coding shows parts of the signal above different frequencies. Part of Figure 2 of the Binary Black Holes White Paper.

  3. Increase sensitivity and frequency range! Increasing sensitivity means that we will have higher signal-to-noise ratio detections. For these loudest sources, we will be able to make more precise measurements of the source properties. We will also have more detections overall, as we can survey a larger volume of the Universe. Increasing the frequency range means we can observe a longer stretch of the signal (for the systems we currently see). This means it is easier to measure spin precession and orbital eccentricity. We also get to measure a wider range of masses. Putting the improved sensitivity and frequency range together means that we’ll get better measurements of individual systems and a more complete picture of the population.
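
To see why the low-frequency sensitivity matters so much for massive, distant systems, remember that cosmological redshift stretches the signal: a binary of total mass M at redshift z appears in our detectors like one of mass (1+z)M. For the 100 M_\odot + 100 M_\odot binary at z = 10 above, the detector-frame mass is 2200 M_\odot. Using the rough rule that the gravitational-wave frequency at the innermost stable circular orbit is

\displaystyle f \approx \frac{4400~\mathrm{Hz}}{(1+z)M/M_\odot},

the inspiral ends at about 2~\mathrm{Hz} (the exact value depends on the spins), safely below a 10~\mathrm{Hz} cut-off.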

How much do we need to improve our observatories to achieve our goals? To quantify this, let’s consider the boost in sensitivity relative to A+, which I’ll call \beta_\mathrm{A+}. If the questions can be answered with \beta_\mathrm{A+} = 1, then we don’t need anything beyond the currently planned A+. If we need a slightly larger \beta_\mathrm{A+}, we should start investigating extra ways to improve the A+ design. If we need a much larger \beta_\mathrm{A+}, we need to think about new facilities.

The plot below shows the boost necessary to detect a binary (with equal-mass nonspinning components) out to a given redshift. With a boost of \beta_\mathrm{A+} = 10 (blue line) we can survey black holes around 10 M_\odot–30 M_\odot across cosmic time.

Boost to detect a binary of a given mass at a given redshift

The boost factor (relative to A+) \beta_\mathrm{A+} needed to detect a binary with a total mass M out to redshift z. The binaries are assumed to have equal-mass, nonspinning components. The colour scale saturates at \log_{10} \beta_\mathrm{A+} = 4.5. The blue curve highlights the reach at a boost factor of \beta_\mathrm{A+} = 10. The solid and dashed white lines indicate the maximum reach of Cosmic Explorer and the Einstein Telescope, respectively. Part of Figure 1 of the Binary Black Holes White Paper.

The plot above shows that to see intermediate-mass black holes, we do need to completely overhaul the low-frequency sensitivity. What do we need to detect a 100 M_\odot + 100 M_\odot binary at z = 10? If we parameterize the noise spectrum (power spectral density) of our detector as S_n(f) = S_{10}(f/10~\mathrm{Hz})^\alpha with a lower cut-off frequency of f_\mathrm{min}, we can investigate the various possibilities. The plot below shows the possible combinations of parameters which meet our requirements.

Noise curve requirements for intermediate-mass black hole detection

Requirements on the low-frequency noise power spectrum necessary to detect an optimally oriented intermediate-mass binary black hole system with two 100 solar mass components at a redshift of 10. Part of Figure 2 of the Binary Black Holes White Paper.
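
As a sketch of how you might explore this parameter space (the white paper’s actual calculation uses proper waveform models and cosmology; this toy uses only the leading-order inspiral amplitude with an arbitrary normalisation):

    import numpy as np

    def toy_snr_squared(s10, alpha, f_min, f_max=20.0):
        # Toy SNR^2 of an inspiral against S_n(f) = s10 * (f / 10 Hz)**alpha
        f = np.linspace(f_min, f_max, 100000)
        h2 = f**(-7.0 / 3.0)  # leading-order inspiral: |h(f)|^2 ~ f^(-7/3)
        sn = s10 * (f / 10.0)**alpha
        return 4.0 * np.sum(h2 / sn) * (f[1] - f[0])

    # Lowering the cut-off frequency wins a lot of SNR when the spectrum is steep
    print(toy_snr_squared(1.0, 2.0, f_min=10.0))
    print(toy_snr_squared(1.0, 2.0, f_min=5.0))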

To build up information about the population of black holes, we need lots of detections. Uncertainties scale inversely with the square root of the number of detections, so you would expect a few percent uncertainty after 1000 detections. If we want to see how the population evolves, we need this many per redshift bin! The plot below shows the number of detections per year of observing time for different boost factors. The rate starts to saturate once we detect all the binaries in the redshift range. That is as good as you’re ever going to get.

Detections per redshift bin as a function of boost factor

Expected rate of binary black hole detections R_\mathrm{det} per redshift bin as a function of A+ boost factor \beta_\mathrm{A+} for three redshift bins. The merging binaries are assumed to be uniformly distributed with a constant merger rate roughly consistent with current observations: the solid line is about the current median, while the dashed and dotted lines are roughly the 90% bounds. Figure 3 of the Binary Black Holes White Paper.

Looking at the plots above, it is clear that A+ is not going to satisfy our requirements. We need something with a boost factor of \beta_\mathrm{A+} = 10: a next-generation observatory. Both the Cosmic Explorer and Einstein Telescope designs do satisfy our goals.

Yes!

Data is pleased. Credit: Paramount

Title: Deeper, wider, sharper: Next-generation ground-based gravitational-wave observations of binary black holes
arXiv: 1903.09220 [astro-ph.HE]
Contribution level: ☆☆☆☆☆ Leading author
Theme music: Daft Punk

Extreme mass ratio inspirals are awesome

We have seen gravitational waves from a stellar-mass black hole merging with another stellar-mass black hole, can we observe a stellar-mass black hole merging with a massive black hole? Yes, these are a perfect source for a space-based gravitational wave observatory. We call these systems extreme mass-ratio inspirals (or EMRIs, pronounced em-rees, for short) [bonus note].

Having such an extreme mass ratio, with one black hole much bigger than the other, gives EMRIs interesting properties. The number of orbits over the course of an inspiral scales with the mass ratio: the more extreme the mass ratio, the more orbits there are. Each of these gives us something to measure in the gravitational wave signal.
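
Roughly, the number of orbits completed in the strong-field region scales inversely with the mass ratio q = \mu/M,

\displaystyle N_\mathrm{orbits} \sim \frac{1}{q} = \frac{M}{\mu},

so a 10 M_\odot black hole spiralling into a 10^6 M_\odot black hole gives of order 10^5 orbits to track.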

The intricate structure of an EMRI orbit

A short section of an orbit around a spinning black hole. While inspirals last for years, this would represent only a few hours around a black hole of mass M = 10^6 M_\odot. The position is measured in terms of the gravitational radius r_\mathrm{g} = GM/c^2. The innermost stable orbit for this black hole would be at about 2.3 r_\mathrm{g}. Part of Figure 1 of the EMRI White Paper.

As EMRIs are so intricate, we can make exquisite measurements of the source properties, which open up a wide range of science.

Event rates for EMRIs are currently uncertain: there could be just one per year or thousands. From the rate we can figure out the details of what is going on in the nuclei of galaxies, and what types of objects you find there.

With EMRIs you can unravel mysteries in astrophysics, fundamental physics and cosmology.

Have we sold you that EMRIs are awesome? Well then, what do we need to do to observe them? There is only one currently planned mission which can enable us to study EMRIs: LISA. To maximise the science from EMRIs, we have to support LISA.

Lisa Simpson dancing

As an aspiring scientist, Lisa Simpson is a strong supporter of the LISA mission. Credit: Fox

Title: The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy
arXiv: 1903.03686 [astro-ph.HE]
Contribution level: ☆☆☆☆☆ Leading author
Theme music: Muse

Bonus notes

White paper vs journal article

Since white papers are proposals for future research, they aren’t as rigorous as usual academic papers. They are really attempts to figure out a good question to ask, rather than being answers. White papers are not usually peer reviewed before publication—the point is that you want everybody to comment on them, rather than just one or two anonymous referees.

Whilst white papers aren’t quite in the same class as journal articles, they do still contain some interesting ideas, so I thought they still merit a blog post.

Recycling

I have blogged about EMRIs before, so I won’t go into too much detail here. It was one of my former blog posts which inspired the LISA Science Team to get in touch to ask me to write the white paper.

Accuracy of inference on the physics of binary evolution from gravitational-wave observations

Gravitational-wave astronomy lets us observe binary black holes. These systems, being made up of two black holes, are pretty difficult to study by any other means. It has long been argued that with this new information we can unravel the mysteries of stellar evolution. Just as a palaeontologist can discover how long-dead animals lived from their bones, we can discover how massive stars lived by studying their black hole remnants. In this paper, we quantify how much we can really learn from this black hole palaeontology—after 1000 detections, we should pin down some of the most uncertain parameters in binary evolution to a few percent precision.

Life as a binary

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics which we are currently most uncertain of:

  • Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
  • Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion, that both stars live in a single fluffy envelope. Orbiting inside the envelope helps drag the two stars closer together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
  • Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low: there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

We use parameters describing potential variations in these properties as ingredients to the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question now is: which parameters affect our gravitational-wave measurements, and how accurately can we measure those which do?

Merger rate with redshift and chirp mass

Binary black hole merger rate at three different redshifts z as calculated by COMPAS. We show the rate in 30 different chirp mass bins for our default population parameters. The caption gives the total rate for all masses. Figure 2 of Barrett et al. (2018).

Gravitational-wave observations

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distributions of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries which have a long inspiral, but is less well measured for higher mass systems. In reality we’ll have much more information, so these results should be the minimum we can actually do.

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct, we are more likely to get results similar to its predictions than ones further away, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember that it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

\displaystyle F_{ij} = -\left\langle\frac{\partial^2\ln \mathcal{L}(\mathcal{D}|\left\{\lambda\right\})}{\partial \lambda_i \partial\lambda_j}\right\rangle,

where \mathcal{L}(\mathcal{D}|\left\{\lambda\right\}) is the likelihood for data \mathcal{D} (the number of observations and their chirp mass distribution in our case), \left\{\lambda\right\} are our parameters (natal kick, etc.), and the angular brackets indicate an average over realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance in a normal distribution) for the parameters \left\{\lambda\right\}. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these observations.

Results

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Correlations between population parameters

Fisher information matrix estimates for fractional measurement precision of the four population parameters: the black hole natal kick \sigma_\mathrm{kick}, the common envelope efficiency \alpha_\mathrm{CE}, the Wolf–Rayet mass loss rate f_\mathrm{WR}, and the luminous blue variable mass loss rate f_\mathrm{LBV}. There is an anticorrelation between f_\mathrm{WR} and \alpha_\mathrm{CE}, and a hint of a similar anticorrelation between f_\mathrm{LBV} and \alpha_\mathrm{CE}. We show 1500 different realisations of the binary population to give an idea of scatter. Figure 6 of Barrett et al. (2018).

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

Standard deviation of measurements of population parameters

Measurement precision for the four population parameters after 1000 detections. We quantify the precision with the standard deviation estimated from the Fisher information matrix. We show results from 1500 realisations of the population to give an idea of scatter. Figure 5 of Barrett et al. (2018).

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.

arXiv: 1711.06287 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 477(4):4685–4695; 2018
Favourite dinosaur: Professor Science

Bonus notes

Channel selection

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

Kick distribution

We model the supernova kicks v_\mathrm{kick} as following a Maxwell–Boltzmann distribution,

\displaystyle p(v_\mathrm{kick}) = \sqrt{\frac{2}{\pi}}  \frac{v_\mathrm{kick}^2}{\sigma_\mathrm{kick}^3} \exp\left(\frac{-v_\mathrm{kick}^2}{2\sigma_\mathrm{kick}^2}\right),

where \sigma_\mathrm{kick} is the unknown population parameter. The natal kick received by the black hole v^*_\mathrm{kick} is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

v^*_\mathrm{kick} = (1-f_\mathrm{fb})v_\mathrm{kick},

where f_\mathrm{fb} is the fraction that falls back, taken from Fryer et al. (2012). The fraction is greater for larger black holes, so the biggest black holes get no kicks. This means that the largest black holes are unaffected by the value of \sigma_\mathrm{kick}.
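
Since a Maxwell–Boltzmann speed is just the magnitude of an isotropic three-dimensional Gaussian velocity, sampling kicks is a one-liner. Here is a small sketch (the value of \sigma_\mathrm{kick} and the fallback fraction are purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma_kick = 250.0   # km/s, an illustrative value
    # Maxwell-Boltzmann speeds: magnitude of a 3D isotropic Gaussian velocity
    v_kick = np.linalg.norm(rng.normal(0.0, sigma_kick, size=(100000, 3)), axis=1)
    print(v_kick.mean())  # analytic mean is 2 sigma sqrt(2/pi), about 399 km/s

    f_fb = 0.4                    # hypothetical fallback fraction
    v_bh = (1.0 - f_fb) * v_kick  # reduced natal kick after fallback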

The likelihood

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the rate of detected mergers \mu and the probability of falling in each chirp mass bin p_k (we factor measurement uncertainty into this). Our observations are the total number of detections N_\mathrm{obs} and the number in each chirp mass bin c_k (N_\mathrm{obs} = \sum_k c_k). The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

\mathcal{L} = \mathcal{L}_\mathrm{rate} \times \mathcal{L}_\mathrm{mass}.

For the rate likelihood, we need the probability of observing N_\mathrm{obs} given the predicted rate \mu. This is given by a Poisson distribution,

\displaystyle \mathcal{L}_\mathrm{rate} = \exp(-\mu t_\mathrm{obs}) \frac{(\mu t_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!},

where t_\mathrm{obs} is the total observing time. For the chirp mass likelihood, we need the probability of getting a number of detections in each bin, given the predicted fractions. This is given by a multinomial distribution,

\displaystyle \mathcal{L}_\mathrm{mass} = \frac{N_\mathrm{obs}!}{\prod_k c_k!} \prod_k p_k^{c_k}.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.
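
Taking the logarithm, the factorials either cancel or depend only upon the data, leaving

\displaystyle \ln \mathcal{L} = -\mu t_\mathrm{obs} + N_\mathrm{obs} \ln (\mu t_\mathrm{obs}) + \sum_k c_k \ln p_k + \mathrm{constant}.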

When we substitute our likelihood into the expression for the Fisher information matrix, we get

\displaystyle F_{ij} = \mu t_\mathrm{obs} \left[ \frac{1}{\mu^2} \frac{\partial \mu}{\partial \lambda_i} \frac{\partial \mu}{\partial \lambda_j}  + \sum_k\frac{1}{p_k} \frac{\partial p_k}{\partial \lambda_i} \frac{\partial p_k}{\partial \lambda_j} \right].

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix is defined in terms of second derivatives. The expected number of events is \langle N_\mathrm{obs} \rangle = \mu t_\mathrm{obs}. Therefore, we can see that the measurement uncertainty defined by the inverse of the Fisher information matrix scales on average as N_\mathrm{obs}^{-1/2}.
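
As a sketch of how you could evaluate this in practice, here is the Fisher information matrix built from finite-difference derivatives of the model predictions. The functions mu_fn and pk_fn are placeholders standing in for a population synthesis code like COMPAS (this is not the real interface):

    import numpy as np

    def fisher_matrix(mu_fn, pk_fn, lam, t_obs, eps=1e-4):
        # Fisher matrix for a detection rate mu(lambda) and bin fractions p_k(lambda)
        lam = np.asarray(lam, dtype=float)

        def gradient(fn):
            # Central finite differences with respect to each parameter
            rows = []
            for i in range(len(lam)):
                step = np.zeros(len(lam))
                step[i] = eps
                rows.append((np.asarray(fn(lam + step)) - np.asarray(fn(lam - step))) / (2.0 * eps))
            return np.array(rows)

        mu, pk = mu_fn(lam), np.asarray(pk_fn(lam))
        dmu, dpk = gradient(mu_fn), gradient(pk_fn)  # shapes (n,) and (n, n_bins)
        return mu * t_obs * (np.outer(dmu, dmu) / mu**2
                             + np.einsum('ik,jk->ij', dpk, dpk / pk))

    # Toy two-parameter model: lambda[0] scales the rate, lambda[1] shifts the mass bins
    F = fisher_matrix(lambda lam: 100.0 * lam[0],
                      lambda lam: np.array([lam[1], 1.0 - lam[1]]),
                      lam=[1.0, 0.3], t_obs=10.0)
    print(np.sqrt(np.diag(np.linalg.inv(F))))  # Cramer-Rao bounds on each parameter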

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, meaning that the shape of the posterior is dictated by the shape of the likelihood.

Interpretation of the Fisher information matrix

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn’t be the maximum). The maximum likelihood values of N_\mathrm{obs} = \mu t_\mathrm{obs} and c_k = N_\mathrm{obs} p_k are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large N_\mathrm{obs}. As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of N_\mathrm{obs} \rightarrow \infty, you’ll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our N_\mathrm{obs} = 1000 was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.