Accuracy of inference on the physics of binary evolution from gravitational-wave observations

Gravitational-wave astronomy lets us observe binary black holes. These systems, being made up of two black holes, are pretty difficult to study by any other means. It has long been argued that with this new information we can unravel the mysteries of stellar evolution. Just as a palaeontologist can discover how long-dead animals lived from their bones, we can discover how massive stars lived by studying their black hole remnants. In this paper, we quantify how much we can really learn from this black hole palaeontology—after 1000 detections, we should pin down some of the most uncertain parameters in binary evolution to a few percent precision.

Life as a binary

There are many proposed ways of making a binary black hole. The current leading contender is isolated binary evolution: start with a binary star system (most stars are in binaries or higher multiples, our lonesome Sun is a little unusual), and let the stars evolve together. Only a fraction will end with black holes close enough to merge within the age of the Universe, but these would be the sources of the signals we see with LIGO and Virgo. We consider this isolated binary scenario in this work [bonus note].

Now, you might think that with stars being so fundamentally important to astronomy, and with binary stars being so common, we’d have the evolution of binaries figured out by now. It turns out it’s actually pretty messy, so there’s lots of work to do. We consider constraining four parameters which describe the bits of binary physics which we are currently most uncertain of:

  • Black hole natal kicks—the push black holes receive when they are born in supernova explosions. We know that neutron stars get kicks, but we’re less certain for black holes [bonus note].
  • Common envelope efficiency—one of the most intricate bits of physics about binaries is how mass is transferred between stars. As they start exhausting their nuclear fuel they puff up, so material from the outer envelope of one star may be stripped onto the other. In the most extreme cases, a common envelope may form, where so much mass is piled onto the companion that both stars live in a single fluffy envelope. Drag from orbiting inside the envelope pulls the two stars together, bringing them closer to merging. The efficiency determines how quickly the envelope becomes unbound, ending this phase.
  • Mass loss rates during the Wolf–Rayet (not to be confused with Wolf 359) and luminous blue variable phases—stars lose mass throughout their lives, but we’re not sure how much. For stars like our Sun, mass loss is low: there is enough to give us the aurora, but it doesn’t affect the Sun much. For bigger and hotter stars, mass loss can be significant. We consider two evolutionary phases of massive stars where mass loss is high, and currently poorly known. Mass could be lost in clumps, rather than a smooth stream, making it difficult to measure or simulate.

We use parameters describing potential variations in these properties as ingredients to the COMPAS population synthesis code. This rapidly (albeit approximately) evolves a population of stellar binaries to calculate which will produce merging binary black holes.

The question now is: which parameters affect our gravitational-wave measurements, and how accurately can we measure those which do?

Merger rate with redshift and chirp mass

Binary black hole merger rate at three different redshifts z as calculated by COMPAS. We show the rate in 30 different chirp mass bins for our default population parameters. The legend gives the total rate for all masses. Figure 2 of Barrett et al. (2018)

Gravitational-wave observations

For our deductions, we use two pieces of information we will get from LIGO and Virgo observations: the total number of detections, and the distribution of chirp masses. The chirp mass is a combination of the two black hole masses that is often well measured—it is the most important quantity for controlling the inspiral, so it is well measured for low mass binaries, which have a long inspiral, but less well measured for higher mass systems. In reality we’ll have much more information, so these results should be a lower bound on what we can actually achieve.
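
For reference, in terms of the component masses m_1 and m_2, the chirp mass is

\displaystyle \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}},

which is the combination that sets how quickly the frequency of the inspiral increases (how it chirps).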

We consider the population after 1000 detections. That sounds like a lot, but we should have collected this many detections after just 2 or 3 years observing at design sensitivity. Our default COMPAS model predicts 484 detections per year of observing time! Honestly, I’m a little scared about having this many signals…

For a set of population parameters (black hole natal kick, common envelope efficiency, luminous blue variable mass loss and Wolf–Rayet mass loss), COMPAS predicts the number of detections and the fraction of detections as a function of chirp mass. Using these, we can work out the probability of getting the observed number of detections and fraction of detections within different chirp mass ranges. This is the likelihood function: if a given model is correct, we are more likely to get results similar to its predictions than results further away, although we expect there to be some scatter.

If you like equations, the form of our likelihood is explained in this bonus note. If you don’t like equations, there’s one lurking in the paragraph below. Just remember that it can’t see you if you don’t move. It’s OK to skip the equation.

To determine how sensitive we are to each of the population parameters, we see how the likelihood changes as we vary these. The more the likelihood changes, the easier it should be to measure that parameter. We wrap this up in terms of the Fisher information matrix. This is defined as

\displaystyle F_{ij} = -\left\langle\frac{\partial^2\ln \mathcal{L}(\mathcal{D}|\left\{\lambda\right\})}{\partial \lambda_i \partial\lambda_j}\right\rangle,

where \mathcal{L}(\mathcal{D}|\left\{\lambda\right\}) is the likelihood for data \mathcal{D} (the number of observations and their chirp mass distribution in our case), \left\{\lambda\right\} are our parameters (natal kick, etc.), and the angular brackets indicate the average over possible realisations of the data. In statistics terminology, this is the variance of the score, which I think sounds cool. The Fisher information matrix nicely quantifies how much information we can learn about the parameters, including the correlations between them (so we can explore degeneracies). The inverse of the Fisher information matrix gives a lower bound on the covariance matrix (the multidimensional generalisation of the variance in a normal distribution) for the parameters \left\{\lambda\right\}. In the limit of a large number of detections, we can use the Fisher information matrix to estimate the accuracy to which we measure the parameters [bonus note].
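
To make this concrete, here is a minimal numerical sketch (my own illustration, not the code used in the paper): approximate the Fisher matrix by the negative Hessian of the log-likelihood at its peak (the observed information, standing in for the expectation value), then invert it to bound the parameter covariance. The log_likelihood function here is a hypothetical stand-in for whatever model is being fit.

```python
import numpy as np

def fisher_matrix(log_likelihood, lambda_hat, eps=1e-4):
    """Approximate the Fisher matrix as the negative Hessian of the
    log-likelihood at the maximum likelihood parameters lambda_hat
    (a numpy array). log_likelihood is a hypothetical stand-in for
    whatever model is being fit."""
    n = len(lambda_hat)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            di = np.zeros(n); di[i] = eps
            dj = np.zeros(n); dj[j] = eps
            # central finite difference for -d^2 lnL / dlambda_i dlambda_j
            F[i, j] = -(log_likelihood(lambda_hat + di + dj)
                        - log_likelihood(lambda_hat + di - dj)
                        - log_likelihood(lambda_hat - di + dj)
                        + log_likelihood(lambda_hat - di - dj)) / (4 * eps**2)
    return F

# The inverse bounds the covariance matrix; the square roots of its
# diagonal give (approximate) one-standard-deviation uncertainties:
# covariance = np.linalg.inv(fisher_matrix(log_likelihood, lambda_hat))
# sigmas = np.sqrt(np.diag(covariance))
```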

We simulated several populations of binary black hole signals, and then calculated measurement uncertainties for our four population parameters to see what we could learn from these observations.

Results

Using just the rate information, we find that we can constrain a combination of the common envelope efficiency and the Wolf–Rayet mass loss rate. Increasing the common envelope efficiency ends the common envelope phase earlier, leaving the binary further apart. Wider binaries take longer to merge, so this reduces the merger rate. Similarly, increasing the Wolf–Rayet mass loss rate leads to wider binaries and smaller black holes, which take longer to merge through gravitational-wave emission. Since the two parameters have similar effects, they are anticorrelated. We can increase one and still get the same number of detections if we decrease the other. There’s a hint of a similar correlation between the common envelope efficiency and the luminous blue variable mass loss rate too, but it’s not quite significant enough for us to be certain it’s there.

Correlations between population parameters

Fisher information matrix estimates for fractional measurement precision of the four population parameters: the black hole natal kick \sigma_\mathrm{kick}, the common envelope efficiency \alpha_\mathrm{CE}, the Wolf–Rayet mass loss rate f_\mathrm{WR}, and the luminous blue variable mass loss rate f_\mathrm{LBV}. There is an anticorrelation between f_\mathrm{WR} and \alpha_\mathrm{CE}, and hints of a similar anticorrelation between f_\mathrm{LBV} and \alpha_\mathrm{CE}. We show 1500 different realisations of the binary population to give an idea of scatter. Figure 6 of Barrett et al. (2018)

Adding in the chirp mass distribution gives us more information, and improves our measurement accuracies. The fractional uncertainties are about 2% for the two mass loss rates and the common envelope efficiency, and about 5% for the black hole natal kick. We’re less sensitive to the natal kick because the most massive black holes don’t receive a kick, and so are unaffected by the kick distribution [bonus note]. In any case, these measurements are exciting! With this type of precision, we’ll really be able to learn something about the details of binary evolution.

Standard deviation of measurements of population parameters

Measurement precision for the four population parameters after 1000 detections. We quantify the precision with the standard deviation estimated from the Fisher information matrix. We show results from 1500 realisations of the population to give an idea of scatter. Figure 5 of Barrett et al. (2018)

The accuracy of our measurements will improve (on average) with the square root of the number of gravitational-wave detections. So we can expect 1% measurements after about 4000 observations. However, we might be able to get even more improvement by combining constraints from other types of observation. Combining different types of observation can help break degeneracies. I’m looking forward to building a concordance model of binary evolution, and figuring out exactly how massive stars live their lives.
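
To spell out the arithmetic: uncertainties scale as 1/\sqrt{N}, so going from 1000 to 4000 detections improves things by a factor of 2,

\displaystyle \sigma_{4000} \approx \sigma_{1000}\sqrt{\frac{1000}{4000}} = \frac{2\%}{2} = 1\%.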

arXiv: 1711.06287 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 477(4):4685–4695; 2018
Favourite dinosaur: Professor Science

Bonus notes

Channel selection

In practice, we will need to worry about how binary black holes are formed, via isolated evolution or otherwise, before inferring the parameters describing binary evolution. This makes the problem more complicated. Some parameters, like mass loss rates or black hole natal kicks, might be common across multiple channels, while others are not. There are a number of ways we might be able to tell different formation mechanisms apart, such as by using spin measurements.

Kick distribution

We model the supernova kicks v_\mathrm{kick} as following a Maxwell–Boltzmann distribution,

\displaystyle p(v_\mathrm{kick}) = \sqrt{\frac{2}{\pi}}  \frac{v_\mathrm{kick}^2}{\sigma_\mathrm{kick}^3} \exp\left(\frac{-v_\mathrm{kick}^2}{2\sigma_\mathrm{kick}^2}\right),

where \sigma_\mathrm{kick} is the unknown population parameter. The natal kick received by the black hole v^*_\mathrm{kick} is not the same as this, however, as we assume some of the material ejected by the supernova falls back, reducing the overall kick. The final natal kick is

v^*_\mathrm{kick} = (1-f_\mathrm{fb})v_\mathrm{kick},

where f_\mathrm{fb} is the fraction that falls back, taken from Fryer et al. (2012). The fraction is greater for larger black holes, so the biggest black holes get no kicks. This means that the largest black holes are unaffected by the value of \sigma_\mathrm{kick}.
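
As a toy illustration of this prescription (a sketch using scipy, not COMPAS itself; the example numbers are made up):

```python
import numpy as np
from scipy.stats import maxwell

def natal_kicks(sigma_kick, f_fb, n=10000, seed=0):
    """Draw supernova kicks from a Maxwell-Boltzmann distribution with
    scale sigma_kick, then reduce each by the fallback fraction f_fb
    (f_fb = 0: full kick; f_fb = 1: complete fallback, no kick)."""
    rng = np.random.default_rng(seed)
    v_kick = maxwell.rvs(scale=sigma_kick, size=n, random_state=rng)
    return (1 - f_fb) * v_kick

# e.g. a population with sigma_kick = 250 km/s and 90% fallback
# (illustrative numbers only, not the paper's values):
# v_star = natal_kicks(250.0, 0.9)
```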

The likelihood

In this analysis, we have two pieces of information: the number of detections, and the chirp masses of the detections. The first is easy to summarise with a single number. The second is more complicated, and we consider the fraction of events within different chirp mass bins.

Our COMPAS model predicts the merger rate \mu and the probability of falling in each chirp mass bin p_k (we factor measurement uncertainty into this). Our observations are the total number of detections N_\mathrm{obs} and the number in each chirp mass bin c_k (N_\mathrm{obs} = \sum_k c_k). The likelihood is the probability of these observations given the model predictions. We can split the likelihood into two pieces, one for the rate, and one for the chirp mass distribution,

\mathcal{L} = \mathcal{L}_\mathrm{rate} \times \mathcal{L}_\mathrm{mass}.

For the rate likelihood, we need the probability of observing N_\mathrm{obs} given the predicted rate \mu. This is given by a Poisson distribution,

\displaystyle \mathcal{L}_\mathrm{rate} = \exp(-\mu t_\mathrm{obs}) \frac{(\mu t_\mathrm{obs})^{N_\mathrm{obs}}}{N_\mathrm{obs}!},

where t_\mathrm{obs} is the total observing time. For the chirp mass likelihood, we need the probability of getting a number of detections in each bin, given the predicted fractions. This is given by a multinomial distribution,

\displaystyle \mathcal{L}_\mathrm{mass} = \frac{N_\mathrm{obs}!}{\prod_k c_k!} \prod_k p_k^{c_k}.

These look a little messy, but they simplify when you take the logarithm, as we need to do for the Fisher information matrix.
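
For illustration, here is a minimal sketch of that combined log-likelihood (my own function and variable names; the inputs are the model predictions and the observed counts):

```python
import numpy as np
from scipy.special import gammaln  # gammaln(n + 1) = ln(n!)

def log_likelihood(mu, p, t_obs, c):
    """Combined log-likelihood: Poisson in the total rate, multinomial
    over chirp mass bins. mu: predicted detection rate; p: predicted
    bin probabilities; t_obs: observing time; c: observed bin counts."""
    c = np.asarray(c)
    p = np.asarray(p)
    n_obs = c.sum()
    expected = mu * t_obs
    # Poisson piece: ln L_rate
    log_l_rate = -expected + n_obs * np.log(expected) - gammaln(n_obs + 1)
    # Multinomial piece: ln L_mass
    log_l_mass = (gammaln(n_obs + 1) - gammaln(c + 1).sum()
                  + (c * np.log(p)).sum())
    return log_l_rate + log_l_mass
```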

When we substitute our likelihood into the expression for the Fisher information matrix, we get

\displaystyle F_{ij} = \mu t_\mathrm{obs} \left[ \frac{1}{\mu^2} \frac{\partial \mu}{\partial \lambda_i} \frac{\partial \mu}{\partial \lambda_j}  + \sum_k\frac{1}{p_k} \frac{\partial p_k}{\partial \lambda_i} \frac{\partial p_k}{\partial \lambda_j} \right].

Conveniently, we only need to evaluate first-order derivatives, even though the Fisher information matrix is defined in terms of second derivatives. The expected number of events is \langle N_\mathrm{obs} \rangle = \mu t_\mathrm{obs}. Therefore, we can see that the measurement uncertainty defined by the inverse of the Fisher information matrix scales on average as N_\mathrm{obs}^{-1/2}.
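
As a sketch of how you might evaluate this expression in practice (with compas_model as a hypothetical wrapper that returns the predicted rate \mu and bin probabilities p_k for a set of population parameters; an illustration, not the paper's code):

```python
import numpy as np

def fisher_from_model(compas_model, lam, t_obs, eps=1e-3):
    """Fisher matrix from first derivatives of the model predictions.
    compas_model(lam) -> (mu, p) is a hypothetical wrapper returning
    the predicted rate and chirp mass bin probabilities for population
    parameters lam (a numpy array)."""
    mu, p = compas_model(lam)
    p = np.asarray(p)
    n = len(lam)
    dmu = np.zeros(n)
    dp = np.zeros((n, len(p)))
    for i in range(n):
        step = np.zeros(n); step[i] = eps
        mu_plus, p_plus = compas_model(lam + step)
        mu_minus, p_minus = compas_model(lam - step)
        dmu[i] = (mu_plus - mu_minus) / (2 * eps)
        dp[i] = (np.asarray(p_plus) - np.asarray(p_minus)) / (2 * eps)
    # mu * t_obs * [dmu_i dmu_j / mu^2 + sum_k dp_ik dp_jk / p_k]
    return mu * t_obs * (np.outer(dmu, dmu) / mu**2
                         + np.einsum('ik,jk->ij', dp / p, dp))
```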

For anyone worrying about using the likelihood rather than the posterior for these estimates, the high number of detections [bonus note] should mean that the information we’ve gained from the data overwhelms our prior, meaning that the shape of the posterior is dictated by the shape of the likelihood.

Interpretation of the Fisher information matrix

As an alternative way of looking at the Fisher information matrix, we can consider the shape of the likelihood close to its peak. Around the maximum likelihood point, the first-order derivatives of the likelihood with respect to the population parameters are zero (otherwise it wouldn’t be the maximum). The maximum likelihood values of N_\mathrm{obs} = \mu t_\mathrm{obs} and c_k = N_\mathrm{obs} p_k are the same as their expectation values. The second-order derivatives are given by the expression we have worked out for the Fisher information matrix. Therefore, in the region around the maximum likelihood point, the Fisher information matrix encodes all the relevant information about the shape of the likelihood.

So long as we are working close to the maximum likelihood point, we can approximate the distribution as a multidimensional normal distribution with its covariance matrix determined by the inverse of the Fisher information matrix. Our results for the measurement uncertainties are made subject to this approximation (which we did check was OK).

Approximating the likelihood this way should be safe in the limit of large N_\mathrm{obs}. As we get more detections, statistical uncertainties should reduce, with the peak of the distribution homing in on the maximum likelihood value, and its width narrowing. If you take the limit of N_\mathrm{obs} \rightarrow \infty, you’ll see that the distribution basically becomes a delta function at the maximum likelihood values. To check that our N_\mathrm{obs} = 1000 was large enough, we verified that higher-order derivatives were still small.

Michele Vallisneri has a good paper looking at using the Fisher information matrix for gravitational wave parameter estimation (rather than our problem of binary population synthesis). There is a good discussion of its range of validity. The high signal-to-noise ratio limit for gravitational wave signals corresponds to our high number of detections limit.

Science with the space-based interferometer LISA. V. Extreme mass-ratio inspirals

The space-based observatory LISA will detect gravitational waves from massive black holes (giant black holes residing in the centres of galaxies). One particularly interesting signal will come from the inspiral of a regular stellar-mass black hole into a massive black hole. These are called extreme mass-ratio inspirals (or EMRIs, pronounced emries, to their friends) [bonus note]. We have never observed such a system. This means that there’s a lot we have to learn about them. In this work, we systematically investigated the prospects for observing EMRIs. We found that even though there’s a wide range in predictions for what EMRIs we will detect, they should be a safe bet for the LISA mission.

EMRI spacetime

Artistic impression of the spacetime for an extreme-mass-ratio inspiral, with a smaller stellar-mass black hole orbiting a massive black hole. This image is mandatory when talking about extreme-mass-ratio inspirals. Credit: NASA

LISA & EMRIs

My previous post discussed some of the interesting features of EMRIs. Because of the extreme difference in masses of the two black holes, it takes a long time for them to complete their inspiral. We can measure tens of thousands of orbits, which allows us to make wonderfully precise measurements of the source properties (if we can accurately pick out the signal from the data). Here, we’ll examine exactly what we could learn with LISA from EMRIs [bonus note].

First we build a model to investigate how many EMRIs there could be.  There is a lot of astrophysics which we are currently uncertain about, which leads to a large spread in estimates for the number of EMRIs. Second, we look at how precisely we could measure properties from the EMRI signals. The astrophysical uncertainties are less important here—we could get a revolutionary insight into the lives of massive black holes.

The number of EMRIs

To build a model of how many EMRIs there are, we need a few different inputs:

  1. The population of massive black holes
  2. The distribution of stellar clusters around massive black holes
  3. The range of orbits of EMRIs

We examine each of these in turn, building a more detailed model than has previously been constructed for EMRIs.

We currently know little about the population of massive black holes. This means we’ll discover lots when we start measuring signals (yay), but it’s rather inconvenient now, when we’re trying to predict how many EMRIs there are (boo). We take two different models for the mass distribution of massive black holes: one is based upon a semi-analytic model of massive black hole formation; the other is at the pessimistic end allowed by current observations. The semi-analytic model predicts massive black hole spins around 0.98, but we also consider spins being uniformly distributed between 0 and 1, and spins of 0. This gives us a picture of the bigger black hole; now we need the smaller.

Observations show that the masses of massive black holes are correlated with their surrounding cluster of stars—bigger black holes have bigger clusters. We consider four different versions of this trend: Gültekin et al. (2009), Kormendy & Ho (2013), Graham & Scott (2013), and Shankar et al. (2016). The stars and black holes about a massive black hole should form a cusp, with the density of objects increasing towards the massive black hole. This is great for EMRI formation. However, the cusp is disrupted if two galaxies (and their massive black holes) merge. This tends to happen—it’s how we get bigger galaxies (and black holes). It then takes some time for the cusp to reform, during which we don’t expect as many EMRIs. Therefore, we factor in the amount of time for which there is a cusp for massive black holes of different masses and spins.

Colliding galaxies

That’s a nice galaxy you have there. It would be a shame if it were to collide with something… Hubble image of The Mice. Credit: ACS Science & Engineering Team.

Given a cusp about a massive black hole, we then need to know how often an EMRI forms. Simulations give us a starting point. However, these only give a snapshot, and we need to consider how things evolve with time. As stellar-mass black holes inspiral, the massive black hole will grow in mass and the surrounding cluster will become depleted. Both these effects are amplified because for each inspiral, there’ll be many more stars or stellar-mass black holes which just plunge directly into the massive black hole. We therefore need to limit the number of EMRIs so that we don’t have an unrealistically high rate. We do this by adding in a couple of feedback factors: one to cap the rate so that we don’t deplete the cusp quicker than new objects are added to it, and one to limit the maximum amount of mass the massive black hole can grow from inspirals and plunges. This gives us an idea of the total number of inspirals.

Finally, we calculate the orbits that EMRIs will be on. We again base this upon simulations, and factor in how the spin of the massive black hole affects the distribution of orbital inclinations.

Putting all the pieces together, we can calculate the population of EMRIs. We now need to work out how many LISA would be able to detect. This means we need models for the gravitational-wave signal. Since we are simulating a large number, we use a computationally inexpensive analytic model. We know that this isn’t too accurate, but we consider two different options for setting the end of the inspiral (where the smaller black hole finally plunges) which should bound the true range of results.

Number of detected EMRIs

Number of EMRIs for different size massive black holes in different astrophysical models. M1 is our best estimate, the others explore variations on this. M11 and M12 are designed to cover the extremes, being the most pessimistic and optimistic combinations. The solid and dashed lines are for two different signal models (AKK and AKS), which are designed to give an indication of potential variation. They agree where the massive black hole is not spinning (M10 and M11). The range of masses is similar for all models, as it is set by the sensitivity of LISA. We can detect higher mass systems assuming the AKK signal model as it includes extra inspiral close to highly spinning black holes: for the heaviest black holes, this is the only part of the signal at high enough frequency to be detectable. Figure 8 of Babak et al. (2017).

Allowing for all the different uncertainties, we find that there should be somewhere between 1 and 4200 EMRIs detected per year. (The model we used when studying transient resonances predicted about 250 per year, albeit with a slightly different detector configuration; this is fairly typical of the models we consider here.) This range is encouraging. The lower end means that EMRIs are a pretty safe bet: we’d be unlucky not to get at least one over the course of a multi-year mission (LISA should have at least four years observing). The upper end means there could be lots—we might actually need to worry about them forming a background source of noise if we can’t individually distinguish them!

EMRI measurements

Having shown that EMRIs are a good LISA source, we now need to consider what we could learn by measuring them.

We estimate the precision with which we will be able to measure parameters using the Fisher information matrix. The Fisher matrix measures how sensitive our observations are to changes in the parameters (the more sensitive we are, the better we should be able to measure that parameter). Its inverse gives a lower bound on the measurement uncertainty, and should be a good approximation to it in the high signal-to-noise ratio (loud signal) limit. The combination of our use of the Fisher matrix and our approximate signal models means our results will not be perfect estimates of real performance, but they should give an indication of the typical size of measurement uncertainties.
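
For signal (rather than population) parameters, the Fisher matrix is built from inner products of the derivatives of the waveform with respect to the parameters, F_{ij} = (\partial h/\partial \lambda_i | \partial h/\partial \lambda_j). A minimal sketch, assuming a hypothetical waveform generator and a flat (white) noise spectrum for simplicity—a real LISA analysis would weight by the full frequency-dependent noise curve:

```python
import numpy as np

def waveform_fisher(waveform, lam, dt, noise_level, eps=1e-6):
    """Fisher matrix for signal parameters lam (a numpy array), given a
    hypothetical waveform(lam) -> h(t) generator sampled at interval dt.
    Assumes a flat (white) noise power spectral density noise_level; a
    real LISA analysis would weight by the full noise curve in the
    frequency domain."""
    n = len(lam)
    dh = []
    for i in range(n):
        step = np.zeros(n); step[i] = eps * max(abs(lam[i]), 1.0)
        # central finite difference of the waveform w.r.t. parameter i
        dh.append((waveform(lam + step) - waveform(lam - step)) / (2 * step[i]))
    # white-noise inner product (a|b) = (2 / S_n) * dt * sum(a * b)
    inner = lambda a, b: 2 * dt * np.sum(a * b) / noise_level
    return np.array([[inner(dh[i], dh[j]) for j in range(n)]
                     for i in range(n)])
```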

Given that we measure a huge number of cycles from the EMRI signal, we can make really precise measurements of the mass and spin of the massive black hole, as these parameters control the orbital frequencies. Below are plots for the typical measurement precision from our Fisher matrix analysis. The orbital eccentricity is measured to similar accuracy, as it influences the range of orbital frequencies too. We also get pretty good measurements of the mass of the smaller black hole, as this sets how quickly the inspiral proceeds (how quickly the orbital frequencies change). EMRIs will allow us to do precision astronomy!

EMRI redshifted mass measurements

Distribution of (one standard deviation) fractional uncertainties for measurements of the  massive black hole (redshifted) mass M_z. Results are shown for the different astrophysical models, and for the different signal models.  The astrophysical model has little impact on the uncertainties. M4 shows a slight difference as it assumes heavier stellar-mass black holes. The results with the two signal models agree when the massive black hole is not spinning (M10 and M11). Otherwise, measurements are more precise with the AKK signal model, as this includes extra signal from the end of the inspiral. Part of Figure 11 of Babak et al. (2017).

EMRI spin measurements

Distribution of (one standard deviation) uncertainties for measurements of the massive black hole spin a. The results mirror those for the masses above. Part of Figure 11 of Babak et al. (2017).

Now, before you get too excited that we’re going to learn everything about massive black holes, there is one confession I should make. In the plot above I show the measurement accuracy for the redshifted mass of the massive black hole. The cosmological expansion of the Universe causes gravitational waves to become stretched to lower frequencies in the same way light is (this makes visible light more red, hence the name). The measured frequency is f_z = f/(1 + z), where f is the frequency emitted, and z is the redshift (z = 0 for a nearby source, and is larger for further away sources). Lower frequency gravitational waves correspond to higher mass systems, so it is often convenient to work with the redshifted mass, the mass corresponding to the signal you measure if you ignore redshifting. The redshifted mass of the massive black hole is M_z = (1+z)M where M is the true mass. To work out the true mass, we need the redshift, which means we need to measure the distance to the source.
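
As a sketch of that final conversion (using astropy's cosmology utilities; the helper function and the example numbers are my own illustration, not part of the paper's analysis):

```python
import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

def source_frame_mass(m_z, d_l_mpc):
    """Convert a redshifted mass m_z and a measured luminosity distance
    (in Mpc) into the source-frame mass M = M_z / (1 + z), assuming a
    Planck15 cosmology to infer the redshift from the distance."""
    z = float(z_at_value(Planck15.luminosity_distance, d_l_mpc * u.Mpc))
    return m_z / (1 + z)

# e.g. a 10^6 solar mass (redshifted) black hole measured at 3000 Mpc:
# print(source_frame_mass(1e6, 3000.0))
```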

EMRI luminosity distance measurement

Distribution of (one standard deviation) fractional uncertainties for measurements of the luminosity distance D_\mathrm{L}. The signal model is not as important here, as the uncertainty only depends on how loud the signal is. Part of Figure 12 of Babak et al. (2017).

The plot above shows the fractional uncertainty on the distance. We don’t measure this too well, as it is determined from the amplitude of the signal, rather than its frequency components. The situation is much the same as for LIGO. The larger uncertainties on the distance will dominate the overall uncertainty on the black hole masses. We won’t be getting all these to fractions of a percent. However, that doesn’t mean we can’t still figure out what the distribution of masses looks like!

One of the really exciting things we can do with EMRIs is check that the signal matches our expectations for a black hole in general relativity. Since we get such an excellent map of the spacetime of the massive black hole, it is easy to check for deviations. In general relativity, everything about the black hole is fixed by its mass and spin (often referred to as the no-hair theorem). Using the measured EMRI signal, we can check if this is the case. One convenient way of doing this is to describe the spacetime of the massive object in terms of a multipole expansion. The first (most important) term gives the mass, and the next term the spin. The third term (the quadrupole) is set by the first two, so if we can measure it, we can check if it is consistent with the expected relation. We estimated how precisely we could measure a deviation in the quadrupole. Fortunately, for this consistency test, all factors from redshifting cancel out, so we can get really detailed results, as shown below. Using EMRIs, we’ll be able to check for really small differences from general relativity!
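
To make this concrete: for a Kerr black hole in general relativity (in geometric units G = c = 1, with spin angular momentum per unit mass a = J/M), the mass quadrupole moment is fixed by the first two multipoles,

\displaystyle M_2 = -M a^2,

so a measured deviation \mathcal{Q} from this relation would tell us that the massive object is not a Kerr black hole.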

EMRI measurement of bumpy black hole spacetime

Distribution of (one standard deviation) uncertainties for deviations in the quadrupole moment of the massive object spacetime \mathcal{Q}. Results are similar to the mass and spin measurements. Figure 13 of Babak et al. (2017).

In summary: EMRIs are awesome. We’re not sure how many we’ll detect with LISA, but we’re confident there will be some, perhaps a couple of hundred per year. From the signals we’ll get new insights into the masses and spins of black holes. This should tell us something about how they, and their surrounding galaxies, evolved. We’ll also be able to do some stringent tests of whether the massive objects are black holes as described by general relativity. It’s all pretty exciting, for when LISA launches, which is currently planned for around 2034…

Sometimes, it leads to very little, and it seems like it's not worth it, and you wonder why you waited so long for something so disappointing

One of the most valuable traits a student or soldier can have: patience. Credit: Sony/Marvel

arXiv: 1703.09722 [gr-qc]
Journal: Physical Review D; 95(10):103012; 2017
Conference proceedings: 1704.00009 [astro-ph.GA] (from when the work was still in progress)
Estimated number of Marvel films before LISA launch: 48 (starting with Ant-Man and the Wasp)

Bonus notes

Hyphenation

Is it “extreme-mass-ratio inspiral”, “extreme mass-ratio inspiral” or “extreme mass ratio inspiral”? All are used in the literature. This is one of the advantages of using “EMRI”. The important thing is that we’re talking about inspirals that have a mass ratio which is extreme. For this paper, we used “extreme mass-ratio inspiral”, but when I started my PhD, I was introduced to “extreme-mass-ratio inspirals”, so they are stuck that way in my mind.

I think hyphenation is a bit of an art, and there’s no definitive answer here, just like there isn’t for superhero names, where you can have Iron Man, Spider-Man or Iceman.

Science with LISA

This paper is part of a series looking at what LISA could tell us about different gravitational wave sources. So far, this series covers

  1. Massive black hole binaries
  2. Cosmological phase transitions
  3. Standard sirens (for measuring the expansion of the Universe)
  4. Inflation
  5. Extreme-mass-ratio inspirals

You’ll notice there’s a change in the name of the mission from eLISA to LISA part-way through, as things have evolved. (Or devolved?) I think the main take-away so far is that the cosmology group is the most enthusiastic.

Going the distance: Mapping host galaxies of LIGO and Virgo sources in three dimensions using local cosmography and targeted follow-up

GW150914 claimed the title of many firsts—it was the first direct observation of gravitational waves, the first observation of a binary black hole system, the first observation of two black holes merging, the first time we’ve tested general relativity in such extreme conditions… However, there are still many firsts for gravitational-wave astronomy yet to come (hopefully, some to be accompanied by cake). One of the most sought after is the first signal to have a clear electromagnetic counterpart—a glow in some part of the spectrum of light (from radio to gamma-rays) that we can observe with telescopes.

Identifying a counterpart is challenging, as it is difficult to accurately localise a gravitational-wave source: electromagnetic observers must cover a large area of sky before any counterparts fade. Then, if something is found, it can be hard to determine if it came from the same source as the gravitational waves, or something else…

To help the search, it is useful to have as much information as possible about the source. Especially useful is the distance to the source. This can help you plan where to look. For nearby sources, you can cross-reference with galaxy catalogues, and perhaps pick out the biggest galaxies as the most likely locations for the source [bonus note]. Distance can also help plan your observations: you might want to start with regions of the sky where the source would be closer and so easiest to spot, or you may want to prioritise points where it is further and so you’d need to observe longer to detect it (I’m not sure there’s a best strategy, it depends on the telescope and the amount of observing time available). In this paper we describe a method to provide easy-to-use distance information, which could be supplied to observers to help their search for a counterpart.

Going the distance

This work is the first spin-off from the First 2 Years trilogy of papers, which looked at sky localization and parameter estimation for binary neutron stars in the first two observing runs of the advanced-detector era. Binary neutron star coalescences are prime candidates for electromagnetic counterparts as we think there should be an explosion as they merge. I was heavily involved in the last two papers of the trilogy, but this study was led by Leo Singer: I think I mostly annoyed Leo by being a stickler when it came to writing up the results.

3D localization with the two LIGO detectors

Three-dimensional localization showing the 20%, 50%, and 90% credible levels for a typical two-detector early Advanced LIGO event. The Earth is shown at the centre, marked by \oplus. The true location is marked by the cross. Leo poetically described this as looking like the seeds of the jacaranda tree, and less poetically as potato chips. Figure 1 of Singer et al. (2016).

The idea is to provide a convenient means of sharing a 3D localization for a gravitational-wave source. The full probability distribution is rather complicated, but it can be made more manageable if you break it up into pixels on the sky. Since astronomers need to decide where to point their telescopes, breaking up the 3D information along different lines of sight should be useful for them.

Each pixel covers a small region of the sky, and along each line of sight, the probability distribution for distance D can be approximated using an ansatz

\displaystyle p(D|\mathrm{data}) \propto D^2\exp\left[-\frac{(D - \mu)^2}{2\sigma^2}\right],

where \mu and \sigma are calculated for each pixel individually. The form of this ansatz can be understood from the posterior probability distribution being proportional to the product of the prior and the likelihood. Our prior is that sources are uniformly distributed in volume, which gives the D^2 factor, and the likelihood can often be well approximated as a Gaussian distribution, which gives the other piece [bonus note].
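
A minimal sketch of evaluating this ansatz along one line of sight (my own function and variable names; the real per-pixel \mu and \sigma come from the sky map files):

```python
import numpy as np

def distance_posterior(d_grid, mu, sigma):
    """Evaluate the per-pixel distance ansatz p(D|data) on a grid of
    distances and normalise it numerically. mu and sigma are the
    location and scale parameters for that pixel."""
    p = d_grid**2 * np.exp(-(d_grid - mu)**2 / (2 * sigma**2))
    return p / np.trapz(p, d_grid)

# e.g. the distance distribution for a pixel with mu = 150 Mpc,
# sigma = 40 Mpc (made-up numbers):
# d = np.linspace(0.0, 400.0, 1000)
# p = distance_posterior(d, 150.0, 40.0)
```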

The ansatz doesn’t always fit perfectly, but it performs well on average. Considering the catalogue of binary neutron star signals used in the earlier papers, we find that roughly 50% of the time sources are found within the 50% credible volume, 90% are found in the 90% volume, etc. We looked at a more sophisticated means of constructing the localization volume in a companion paper.

The 3D localization is easy to calculate, and Leo has worked out a cunning way to evaluate the ansatz with BAYESTAR, our rapid sky-localization code, meaning that we can produce it on minute time-scales. This means that observers should have something to work with straight-away, even if we’ll need to wait a while for the full, final results. We hope that this will improve prospects for finding counterparts—some potential examples are sketched out in the penultimate section of the paper.

If you are interested in trying out the 3D information, there is a data release and the supplement contains a handy Python tutorial. We are hoping that the Collaboration will use the format for alerts for LIGO and Virgo’s upcoming observing run (O2).

arXiv: 1603.07333 [astro-ph.HE]; 1605.04242 [astro-ph.IM]
Journal: Astrophysical Journal Letters; 829(1):L15(7); 2016; Astrophysical Journal Supplement Series; 226(1):10(8); 2016
Data release: Going the distance
Favourite crisp flavour: Salt & vinegar
Favourite jacaranda: Jacaranda mimosifolia

Bonus notes

Catalogue shopping

The Event’s source has a luminosity distance of around 250–570 Mpc. This is sufficiently distant that galaxy catalogues are incomplete and not much use when it comes to searching. GW151226 and LVT151012 have similar problems, being at around the same distance or even further.

The gravitational-wave likelihood

For the professionals interested in understanding more about the shape of the likelihood, I’d recommend Cutler & Flanagan (1994). This is a fantastic paper which contains many clever things [bonus bonus note]. This work is really the foundation of gravitational-wave parameter estimation. From it, you can see how the likelihood can be approximated as a Gaussian. The uncertainty can then be evaluated using Fisher matrices. Many studies have been done using Fisher matrices, but it is important to check that this is a valid approximation, as nicely explained in Vallisneri (2008). I ran into a case where it wasn’t during my PhD.

Mergin’

As a reminder that smart people make mistakes, Cutler & Flanagan have a typo in the title of the arXiv posting of their paper. This is probably the most important thing to take away from this paper.