Testing general relativity using golden black-hole binaries

Binary black hole mergers are the ultimate laboratory for testing gravity. The gravitational fields are strong, and things are moving at close to the speed of light. These extreme conditions are exactly where we expect our theories could break down, which is why we were so excited to detect gravitational waves from black hole coalescences. To accompany the first detection of gravitational waves, we performed several tests of Einstein’s theory of general relativity (it passed). This paper outlines the details of one of the tests, one that can be extended to include future detections to put Einstein’s theory under the toughest scrutiny.

One of the difficulties of testing general relativity is knowing what to compare it to. There are many alternative theories of gravity, but only a few of these have been studied thoroughly enough to give a concrete idea of what a binary black hole merger should look like. Even if general relativity comes out on top when compared with one alternative model, it doesn’t mean that another (perhaps one we’ve not thought of yet) can be ruled out. We need ways of looking for something odd, something which hints that general relativity is wrong, but doesn’t rely on any particular alternative theory of gravity.

The test suggested here is a consistency test. We split the gravitational-wave signal into two pieces, a low frequency part and a high frequency part, and then try to measure the properties of the source from the two parts. If general relativity is correct, we should get answers that agree; if it’s not, and there’s some deviation in the exact shape of the signal at different frequencies, we can get different answers. One way of thinking about this test is to imagine that we have two experiments, one measuring lower frequency gravitational waves and one measuring higher frequencies, and we are checking to see if their results agree.

To split the waveform, we use a frequency around that of the last stable circular orbit: roughly the point at which the black holes stop orbiting each other and plunge together to merge [bonus note]. For GW150914, we used 132 Hz, which is about the same pitch as the C an octave below middle C (a little before time zero in the simulation below). This cut roughly splits the waveform into the low frequency inspiral (where the two black holes are orbiting each other), and the higher frequency merger (where the two black holes become one) and ringdown (where the final black hole settles down).
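If you want to see where a number like 132 Hz comes from, here is a minimal sketch (not the Collaboration’s actual code) that evaluates the gravitational-wave frequency at the innermost stable circular orbit of a Kerr black hole using the standard Bardeen–Press–Teukolsky expressions. The mass and spin are illustrative values, roughly GW150914’s detector-frame final mass and spin.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def kerr_isco_gw_frequency(mass_solar, spin):
    """GW frequency (twice the orbital frequency) at the ISCO of a Kerr black
    hole, following Bardeen, Press & Teukolsky (1972). Prograde orbits only."""
    a = spin
    Z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    Z2 = np.sqrt(3 * a**2 + Z1**2)
    r_isco = 3 + Z2 - np.sqrt((3 - Z1) * (3 + Z1 + 2 * Z2))  # in units of G M / c^2
    omega = 1 / (r_isco**1.5 + a)                            # orbital frequency in units of c^3 / (G M)
    mass_seconds = G * mass_solar * M_sun / c**3
    return omega / (np.pi * mass_seconds)                    # f_GW = 2 * Omega / (2 pi)

# Illustrative detector-frame final mass and spin, roughly those of GW150914:
print(kerr_isco_gw_frequency(68, 0.67))  # ~130 Hz, close to the 132 Hz cut
```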

We are fairly confident that we understand what goes on during the inspiral. This is similar physics to regimes where we’ve tested gravity before, for example by studying the orbits of the planets in the Solar System. The merger and ringdown are more uncertain, as we’ve never before probed such strong and rapidly changing gravitational fields. It therefore seems like a good idea to check the two independently [bonus note].

We use our parameter estimation codes on the two pieces to infer the properties of the source, and we compare the values for the mass M_f and spin \chi_f of the final black hole. We could use other sets of parameters, but this pair compactly sums up the properties of the final black hole and is easy to explain. We look at the differences between the estimated values for the mass and spin, \Delta M_f and \Delta \chi_f. If general relativity is a good match to the observations, then we expect everything to match up, and \Delta M_f and \Delta \chi_f to be consistent with zero. They won’t be exactly zero because we have noise in the detector, but hopefully zero will be within the uncertainty region [bonus note]. An illustration of the test is shown below, including one of the tests we did to show that it does spot when general relativity is not correct.

Consistency test results

Results from the consistency test. The top panels show the outlines of the 50% and 90% credible levels for the low frequency (inspiral) part of the waveform, the high frequency (merger–ringdown) part, and the entire (inspiral–merger–ringdown, IMR) waveform. The bottom panels show the fractional differences between the high and low frequency results. If general relativity is correct, we expect the distribution to be consistent with (0,0), indicated by the cross (+). The left panels show a general relativity simulation, and the right panels show a waveform from a modified theory of gravity. Figure 1 of Ghosh et al. (2016).
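As a rough illustration of the bookkeeping behind plots like these (a sketch only, with made-up Gaussian posteriors standing in for the real parameter-estimation output), the fractional differences can be built by pairing up samples from the two independent analyses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Stand-in posterior samples for the final mass (solar masses) and spin from
# the low frequency (inspiral) and high frequency (post-inspiral) analyses.
# In reality these come from the parameter-estimation code.
insp_Mf, insp_chif = rng.normal(68, 4, n), rng.normal(0.67, 0.06, n)
post_Mf, post_chif = rng.normal(70, 6, n), rng.normal(0.65, 0.09, n)

# Pair random draws from the two (independent) analyses and form the
# fractional differences (difference divided by the mean of the two values).
i, j = rng.integers(0, n, 100000), rng.integers(0, n, 100000)
dMf = 2 * (insp_Mf[i] - post_Mf[j]) / (insp_Mf[i] + post_Mf[j])
dchif = 2 * (insp_chif[i] - post_chif[j]) / (insp_chif[i] + post_chif[j])

# If general relativity holds, (dMf, dchif) should be consistent with (0, 0).
print(np.percentile(dMf, [5, 50, 95]))
print(np.percentile(dchif, [5, 50, 95]))
```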

A convenient feature of using \Delta M_f and \Delta \chi_f to test agreement with general relativity is that you can combine results from multiple observations. By averaging over lots of signals, you can reduce the uncertainty from noise. This allows you to pin down whether or not things really are consistent, and to spot smaller deviations (we could reach a precision of a few percent after about 100 suitable detections). I look forward to seeing how this test performs in the future!
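One way to picture combining events (a sketch under the assumption that each event gives you posterior samples of the fractional deviations; the paper’s actual implementation may differ) is to estimate each event’s posterior on a common grid and multiply them together, so the combined distribution tightens as more detections are added:

```python
import numpy as np
from scipy.stats import gaussian_kde

def combine_events(event_samples, grid_size=200):
    """Combine per-event posteriors on the deviation parameters by multiplying
    kernel density estimates on a common grid. event_samples is a list of
    (2, N) arrays of (dMf, dchif) samples, one array per detection."""
    x = np.linspace(-1, 1, grid_size)
    X, Y = np.meshgrid(x, x)
    grid = np.vstack([X.ravel(), Y.ravel()])
    log_post = np.zeros(grid.shape[1])
    for samples in event_samples:
        log_post += np.log(gaussian_kde(samples)(grid) + 1e-300)  # avoid log(0)
    post = np.exp(log_post - log_post.max())
    return x, (post / post.sum()).reshape(grid_size, grid_size)
```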

arXiv: 1602.02453 [gr-qc]
Journal: Physical Review D; 94(2):021101(6); 2016
Favourite golden thing: Golden syrup sponge pudding

Bonus notes

Review

I became involved in this work as a reviewer. The LIGO Scientific Collaboration is a bit of a stickler when it comes to checking its science. We had to check that the test was coded up correctly, that the results made sense, and that calculations done and written up for GW150914 were all correct. Since most of the team are based in India [bonus note], this involved some early morning telecons, but it all went smoothly.

One of our checks was that the test wasn’t sensitive to the exact frequency used to split the signal. If you change the frequency cut, the results from the two sections do change. If you lower the frequency, then there’s less of the low frequency signal and the measurement uncertainties from this piece get bigger. Conversely, there’ll be more signal in the high frequency part, so we’ll make a more precise measurement of the parameters from that piece. However, the overall results where you combine the two pieces stay about the same. You get the best results when there’s a roughly equal balance between the two pieces, but you don’t have to worry about getting the cut exactly on the innermost stable orbit.

Golden binaries

In order for the test to work, we need both pieces of the waveform to be loud enough to allow us to measure parameters from them. Signals of this type are referred to as golden. Earlier work on tests of general relativity using golden binaries was done by Hughes & Menou (2005), and Nakano, Tanaka & Nakamura (2015). GW150914 was a golden binary, but GW151226 and LVT151012 were not, which is why we didn’t repeat this test for them.

GW150914 results

For The Event, we ran this test, and the results are consistent with general relativity being correct. The plots below show the estimates for the final mass and spin (here denoted a_f rather than \chi_f), and the fractional difference between the two measurements. The point (0,0) is at the 28% credible level. This means that if general relativity is correct, we’d expect a deviation at least this large to occur about 72% of the time due to noise fluctuations. It wouldn’t take a particularly rare realisation of noise to cause the assumed true value of (0,0) to be found at this probability level, so we’re not too suspicious that something is amiss with general relativity.

GW150914 consistency test results

Results from the consistency test for The Event. The top panels show the final mass and spin measurements from the low frequency (inspiral) part of the waveform, the high frequency (post-inspiral) part, and the entire (IMR) waveform. The bottom panel shows the fractional difference between the high and low frequency results. If general relativity is correct, we expect the distribution to be consistent with (0,0), indicated by the cross. Figure 3 of the Testing General Relativity Paper.

The authors

Abhirup Ghosh and Archisman Ghosh were two of the leads of this study. They are both A. Ghosh at the same institution, which caused some confusion when compiling the LIGO Scientific Collaboration author list. I think at one point one of them (they can argue which) was removed because someone thought there was a mistaken duplication. To avoid confusion, they now have their full names used. This is a rare distinction on the Discovery Paper (I’ve spotted just two others). The academic tradition of using first initials plus surname is poorly adapted to names which don’t fit the typical western template, so we should be more flexible.

A black hole Pokémon

The world is currently going mad for Pokémon Go, so it seems like the perfect time to answer the most burning of scientific questions: what would a black hole Pokémon be like?

Black hole Pokémon

Type: Dark/Ghost

Black holes are, well, black. Their gravity is so strong that if you get close enough, nothing, not even light, can escape. I think that’s about as dark as you can get!

After picking Dark as a primary type, I thought Ghost was a good secondary type, since black holes could be thought of as the remains of dead stars. This also fits well with the fact that black holes are not really made of anything—they are just warped spacetime—and so are ethereal in nature. Of course, black holes’ properties are grounded in general relativity and not the supernatural.

In the games, having a secondary type has another advantage: Dark types are weak against Fighting types. In reality, punching or kicking a black hole is a Bad Idea™: it will not damage the black hole, but will certainly cause you some difficulties. However, Ghost types are unaffected by Fighting-type moves, so our black hole Pokémon doesn’t have to worry about them.

Height: 0’04″/0.1 m

Real astrophysical black holes are probably a bit too big for Pokémon games. The smallest Pokémon are currently the electric bug Joltik and the fairy Flabébé, so I’ve made our black hole Pokémon the same size as these. It should comfortably fit inside a Pokéball.

Measuring the size of a black hole is actually rather tricky, since black holes curve spacetime. When talking about the size of a black hole, we normally think in terms of the Schwarzschild radius. Named after Karl Schwarzschild, who first calculated the spacetime of a black hole (although he didn’t realise that at the time), the Schwarzschild radius corresponds to the event horizon (the point of no return) of a non-spinning black hole. It’s rather tricky to measure the distance to the centre of a black hole, so really the Schwarzschild radius gives an idea of the circumference (the distance around the edge) of the event horizon: this is 2π times the Schwarzschild radius. We’ll take the height to really mean twice the Schwarzschild radius (which would be the Schwarzschild diameter, if that were actually a thing).

Weight: 7.5 × 10^25 lbs/3.4 × 10^25 kg

Although we made our black hole pocket-sized, it is monstrously heavy. The mass is for a black hole of the size we picked, and it is about 6 times that of the Earth. That’s still quite small for a black hole (it’s 3.6 million times less massive than the black hole that formed from GW150914’s coalescence). With this mass, our Pokémon would have a significant effect on the tides as it would quickly suck in the Earth’s oceans. Still, Pokémon doesn’t need to be too realistic.
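As a quick back-of-the-envelope check of those numbers, the mass follows from inverting the Schwarzschild radius formula r_s = 2GM/c^2 with r_s = 0.05 m (half the 0.1 m height); a minimal sketch:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
M_earth = 5.972e24   # Earth mass, kg

r_s = 0.05                      # Schwarzschild radius in metres
mass = r_s * c**2 / (2 * G)     # invert r_s = 2 G M / c^2
print(mass)                     # ~3.4e25 kg
print(mass / M_earth)           # ~6 Earth masses
print(62 * M_sun / mass)        # ~3.6 million, taking ~62 M_sun for GW150914's final black hole
```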

Our black hole Pokémon would be by far the heaviest Pokémon, despite being one of the smallest. The heaviest Pokémon currently is the continent Pokémon Primal Groudon. This is 2,204.4 lbs/999.7 kg, so about 34,000,000,000,000,000,000,000 times lighter.

Within the games, having such a large weight would make our black hole Pokémon vulnerable to Grass Knot, a move which trips a Pokémon. The heavier the Pokémon, the more it is hurt by the falling over, so the more damage Grass Knot does. In the case of our Pokémon, when it trips it’s not so much that it hits the ground, but that the Earth hits it, so I think it’s fair that this hurts.

Gender: Unknown

Black holes are beautifully simple: they are described by just their mass, spin and electric charge. There’s no other information you can learn about them, so I don’t think there’s any way to give them a gender. I think this is rather fitting, as the sun-like Solrock is also genderless, and it seems right that stars and black holes share this.

Ability: Sticky Hold
Hidden ability: Soundproof

Sticky Hold prevents a Pokémon’s item from being taken. (I’d expect wild black hole Pokémon to sometimes be found holding Stardust, from stars they have consumed.) Due to their strong gravity, it is difficult to remove an object that is orbiting a black hole. A common misconception is that it is impossible to escape the pull of a black hole; this is only true if you cross the event horizon (if you replaced the Sun with a black hole of the same mass, the Earth would happily continue on its orbit as if nothing had happened).

Soundproof is an ability that protects Pokémon from sound-based moves. I picked it as a reference to sonic (or acoustic) black holes. These are black hole analogues—systems which mimic some of the properties of black holes. A sonic black hole can be made in a fluid which flows faster than its speed of sound. When this happens, sound can no longer escape this rapidly flowing region (it just gets swept away), just like light can’t escape from the event horizon of a regular black hole.

Sonic black holes are fun, because you can make them in the lab. You can then use them to study the properties of black holes—there is much excitement about possibly observing the equivalent of Hawking radiation. Predicted by Stephen Hawking (as you might guess), Hawking radiation is emitted by black holes, and could cause them to evaporate away (if they didn’t absorb more than they emit). Hawking radiation has never been observed from proper black holes, as it is very weak. However, finding the equivalent for sonic black holes might be enough to get Hawking his Nobel Prize…

Moves:

Start — Gravity
Start — Crunch

The starting two moves are straightforward. Gravity is the force which governs black holes; it is gravity which pulls material in and causes the collapse of stars. I think Crunch neatly captures the idea of material being squeezed down by intense gravity.

Level 16 — Vacuum Wave

Vacuum Wave sounds like a good description of a gravitational wave: it is a ripple in spacetime. Black holes (at least when in a binary) are great sources of gravitational waves (as GW150914 and GW151226 have shown), so this seems like a sensible move for our Pokémon to learn—although I may be biased. Why at level 16? Because Einstein first predicted gravitational waves from his theory of general relativity in 1916.

Level 18 — Discharge

Black holes can have an electric charge, so our Pokémon should learn an Electric-type move. Charged black holes can have some weird properties. We don’t normally worry about charged black holes for two reasons. First, charged black holes are difficult to make: stuff is usually neutral overall, so you don’t get a lot of similarly charged material in one place that can collapse down, and even if you did, it would quickly attract the opposite charge to neutralise itself. Second, if you did manage to make a charged black hole, it would quickly lose its charge: the strong electric and magnetic fields about the black hole would lead to the creation of charged particles that would neutralise it. Discharge seems like a good move to describe this process.

Why level 18? The mathematical description of charged black holes was worked out by Hans Reissner and Gunnar Nordström; the second of their papers was published in 1918.

Level 19 — Light Screen

In general relativity, gravity bends spacetime. It is this warping that causes objects to move along curved paths (like the Earth orbiting the Sun). Light is affected in the same way and gets deflected by gravity, which is called gravitational lensing. This was the first experimental test of general relativity. In 1919, Arthur Eddington led an expedition to measure the deflection of light around the Sun during a solar eclipse.

Black holes, having strong gravity, can strongly lens light. The graphics from the movie Interstellar illustrate this beautifully. Below you can see how the image of the disc orbiting the black hole is distorted. The back of the disc is visible above and below the black hole! If you look closely, you can also see a bright circle inside the disc, close to the black hole’s event horizon. This is known as the light ring. It is where the path of light gets so bent, that it can orbit around and around the black hole many times. This sounds like a Light Screen to me.

Black hole and light bending

Light-bending around the black hole Gargantua in Interstellar. The graphics use proper simulations of black holes, but they did fudge a couple of details to make it look extra pretty. Credit: Warner Bros./Double Negative.

Level 29 — Dark Void
Level 36 — Hyperspace Hole
Level 62 — Shadow Ball

These are the three moves with the most black hole-like names. Dark Void might be “black hole” after a couple of goes through Google Translate. Hyperspace Hole might be a good name for one of the higher dimensional black holes theoreticians like to play around with. (I mean, they like to play with the equations, not actually the black holes, as you’d need more than a pair of safety mittens for that). Shadow Ball captures the idea that a black hole is a three-dimensional volume of space, not just a plug-hole for the Universe. Non-rotating black holes are spherical (rotating ones bulge out at the middle, as I guess many of us do), so “ball” fits well, but they aren’t actually the shadow of anything, so it falls apart there.

I’ve picked the levels to be the masses of the two black holes which inspiralled together to produce GW150914, measured in units of the Sun’s mass, and the mass of the black hole that resulted from their merger. There’s some uncertainty on these measurements, so it would be OK if the moves were learnt a few levels either way.

Level 63 — Whirlpool
Level 63 — Rapid Spin

When gas falls into a black hole, it often spirals around and forms an accretion disc. You can see an artistic representation of one in the image from Interstellar above. The gas swirls around like water going down the drain, making Whirlpool an apt move. As it orbits, the gas closer to the black hole moves more quickly than that further away. Different layers rub against each other, and, just like when you rub your hands together on a cold morning, they heat up. One of the ways we look for black holes is by spotting the X-rays emitted by these hot discs.

As material spirals into a black hole, it spins it up. If a black hole swallows enough things that were all orbiting the same way, it can end up rotating extremely quickly. Therefore, I thought our black hole Pokémon should learn Rapid Spin at the same time as Whirlpool.

I picked level 63, as the solution for a rotating black hole was worked out by Roy Kerr in 1963. While Schwarzschild found the solution for a non-spinning black hole soon after Einstein worked out the details of general relativity in 1915, and the solution for a charged black hole came just after these, there’s a long gap before Kerr’s breakthrough. It was some quite cunning maths! (The solution for a rotating charged black hole was quickly worked out after this, in 1965).

Level 77 — Hyper Beam

Another cool thing about discs is that they could power jets. As gas sloshes around towards a black hole, magnetic fields can get tangled up. This leads to some of the material being blasted outwards along the axis of the field. We’ve seen some immensely powerful jets of material, like the one below, and it’s difficult to imagine anything other than a black hole that could create such high energies! Important work on this was done by Roger Blandford and Roman Znajek in 1977, which is why I picked the level. Hyper Beam is no exaggeration in describing these jets.

Galaxy-scale radio jets

Jets from Centaurus A are bigger than the galaxy itself! This image is a composite of X-ray (blue), microwave (orange) and visible light. You can see the jets pushing out huge bubbles above and below the galaxy. We think the jets are powered by the galaxy’s central supermassive black hole. Credit: ESO/WFI/MPIfR/APEX/NASA/CXC/CfA/A.Weiss et al./R.Kraft et al.

After using Hyper Beam, a Pokémon must recharge for a turn. It’s an exhausting move. A similar thing may happen with black holes. If they accrete a lot of stuff, the radiation produced by the infalling material blasts away other gas and dust, cutting off the black hole’s supply of food. Black holes in the centres of galaxies may go through cycles of feeding, with discs forming, blowing away the surrounding material, and then a new disc forming once everything has settled down. This link between the black hole and its environment may explain why we see a trend between the size of supermassive black holes and the properties of their host galaxies.

Level 100 — Spacial Rend
Level 100 — Roar of Time

To finish off, since black holes are warped spacetime, a space move and a time move. Relativity says that space and time are two aspects of the same thing, so these need to be learnt together.

It’s rather tricky to imagine space and time being linked. Wibbly-wobbly, timey-wimey, spacey-wacey stuff quickly gets befuddling. If you imagine just two space dimensions (forwards/backwards and left/right), then you can see how to change one into the other just by rotating. If you turn to face a different way, you can mix what was left to become forwards, or to become a bit of right and a bit of forwards. Black holes sort of do the same thing with space and time. Normally, we’re used to the fact that we are definitely travelling forwards in time, but if you stray beyond the event horizon of a black hole, you’re definitely travelling towards the centre of the black hole in the same inescapable way. Black holes are the masters when it comes to manipulating space and time.

There we have it: we can now sleep easy knowing what a black hole Pokémon would be like. Well, almost: we still need to come up with a name. Something resembling a pun would be traditional. Suggestions are welcome. The next games in the series are Pokémon Sun and Pokémon Moon. Perhaps with this space theme Nintendo might consider a black hole Pokémon too?

Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes

I love collecting things; there’s something extremely satisfying about completing a set. I suspect that this is one of the alluring features of Pokémon—you’ve gotta catch ’em all. The same is true of black hole hunting. Currently, we know of stellar-mass black holes, which are a few times to a few tens of times the mass of our Sun (the black holes of GW150914 are the biggest yet to be observed), and we know of supermassive black holes, which are ten thousand to ten billion times the mass of our Sun. However, we are missing the intermediate-mass black holes which lie in between. We have Charmander and Charizard, but where is Charmeleon? The elusive ones are always the most satisfying to capture.

Knitted black hole

Adorable black hole (available for adoption). I’m sure this could be a Pokémon. It would be a Dark type. Not that I’ve given it that much thought…

Intermediate-mass black holes have evaded us so far. We’re not even sure that they exist, although their absence would raise questions about how you end up with the supermassive ones (you can’t just feed the stellar-mass ones lots of rare candy). Astronomers have suggested that you could spot intermediate-mass black holes in globular clusters by the impact of their gravity on the motion of other stars. However, this effect would be small, and near impossible to conclusively spot. Another way (which I’ve discussed before) would be to look at ultraluminous X-ray sources, which could be from a disc of material spiralling into the black hole. However, it’s difficult to be certain that we understand the source properly and that we’re not misclassifying it. There could be one sure-fire way of identifying intermediate-mass black holes: gravitational waves.

The frequency of gravitational waves depends upon the mass of the binary. More massive systems produce lower frequencies. LIGO is sensitive to the right range of frequencies for stellar-mass black holes. GW150914 chirped up to the pitch of a guitar’s open B string (just below middle C). Supermassive black holes produce gravitational waves at too low a frequency for LIGO (a space-based detector would be perfect for these). We might just be able to detect signals from intermediate-mass black holes with LIGO.
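To make that scaling concrete, here is a rough sketch using the gravitational-wave frequency at the innermost stable circular orbit of a non-spinning binary as a stand-in for where the inspiral ends; it falls inversely with the total mass, so heavier binaries finish up at lower frequencies.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def f_gw_isco(total_mass_solar):
    """GW frequency (twice the orbital frequency) at the innermost stable
    circular orbit of a non-spinning binary of the given total mass."""
    return c**3 / (6**1.5 * np.pi * G * total_mass_solar * M_sun)

for m in (65, 400, 1000):
    print(m, round(f_gw_isco(m), 1))  # ~68 Hz, ~11 Hz, ~4.4 Hz
```

Ground-based detectors struggle below a few tens of hertz, which is why for intermediate-mass systems we only catch the very end of the signal.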

In a recent paper, a group of us from Birmingham looked at what we could learn from gravitational waves from the coalescence of an intermediate-mass black hole and a stellar-mass black hole [bonus note]. We considered how well you would be able to measure the masses of the black holes. After all, to confirm that you’ve found an intermediate-mass black hole, you need to be sure of its mass.

The signals are extremely short: we can only detect the last bit of the two black holes merging together and settling down as a final black hole. Therefore, you might think there’s not much information in the signal, and we won’t be able to measure the properties of the source. We found that this isn’t the case!

We considered a set of simulated signals, and analysed these with our parameter-estimation code [bonus note]. Below are a couple of plots showing the accuracy to which we can infer a couple of different mass parameters for binaries of different masses. We show the accuracy of measuring the chirp mass \mathcal{M} (a much beloved combination of the two component masses which we are usually able to pin down precisely) and the total mass M_\mathrm{total}.

Measurement of chirp mass

Measured chirp mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. The mass ratio q is the mass of the stellar-mass black hole divided by the mass of the intermediate-mass black hole. Figure 1 of Haster et al. (2016).

Measurement of total mass

Measured total mass for systems of different total masses. The shaded regions show the 90% credible interval and the dashed lines show the true values. Figure 2 of Haster et al. (2016).
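For reference (a small sketch with illustrative component masses), the chirp mass is the combination \mathcal{M} = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5}:

```python
def chirp_mass(m1, m2):
    """Chirp mass: the combination of the component masses that controls how
    quickly the inspiral sweeps up in frequency."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

# e.g. a 100 + 20 solar mass binary (illustrative values, mass ratio q = 0.2)
print(chirp_mass(100, 20))   # ~37 solar masses
print(100 + 20)              # total mass, 120 solar masses
```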

For the lower mass systems, we can measure the chirp mass quite well. This is because we get a little information from the part of the gravitational wave emitted while the two components are inspiralling together. However, we see less and less of this as the mass increases, and we become more and more uncertain of the chirp mass.

The total mass isn’t as accurately measured as the chirp mass at low masses, but we see that the accuracy doesn’t degrade at higher masses. This is because we get some constraints on its value from the post-inspiral part of the waveform.

We found that the transition from having better fractional accuracy on the chirp mass to having better fractional accuracy on the total mass happened when the total mass was around 200–250 solar masses. This was assuming final design sensitivity for Advanced LIGO. We currently don’t have as good sensitivity at low frequencies, so the transition will happen at lower masses: GW150914 is actually in this transition regime (the chirp mass is measured a little better).

Given our uncertainty on the masses, when can we conclude that there is an intermediate-mass black hole? If we classify black holes with masses of more than 100 solar masses as intermediate mass, then we’ll be able to claim a discovery with 95% probability if the source has a black hole of at least 130 solar masses. The plot below shows our inferred probability of there being an intermediate-mass black hole as we increase the larger black hole’s mass (there’s little chance of falsely identifying a lower mass black hole).

Intermediate-mass black hole probability

Probability that the larger black hole is over 100 solar masses (our cut-off mass for intermediate-mass black holes M_\mathrm{IMBH}). Figure 7 of Haster et al. (2016).
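In terms of posterior samples, the probability plotted above is just the fraction of samples in which the larger component exceeds the cut-off; a minimal sketch with a made-up posterior:

```python
import numpy as np

def p_imbh(m1_samples, m_imbh=100.0):
    """Fraction of posterior samples in which the larger black hole exceeds
    the intermediate-mass cut-off (100 solar masses in the text)."""
    return np.mean(np.asarray(m1_samples) > m_imbh)

rng = np.random.default_rng(1)
fake_m1 = rng.normal(130, 20, 50000)   # made-up posterior for the larger mass
print(p_imbh(fake_m1))                 # ~0.93 for this illustrative posterior
```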

Gravitational-wave observations could lead to a concrete detection of intermediate-mass black holes, if they exist and merge with another black hole. However, LIGO’s low frequency sensitivity is important for detecting these signals. If detector commissioning goes to plan and we are lucky enough to detect such a signal, we’ll finally be able to complete our set of black holes.

arXiv: 1511.01431 [astro-ph.HE]
Journal: Monthly Notices of the Royal Astronomical Society; 457(4):4499–4506; 2016
Birmingham science summary: Inference on gravitational waves from coalescences of stellar-mass compact objects and intermediate-mass black holes (by Carl)
Other collectables: Breakthrough, Gruber, Shaw, Kavli

Bonus notes

Jargon

The coalescence of an intermediate-mass black hole and a stellar-mass object (black hole or neutron star) has typically been known as an intermediate mass-ratio inspiral (an IMRI). This is similar to the name for the coalescence of a supermassive black hole and a stellar-mass object: an extreme mass-ratio inspiral (an EMRI). However, my colleague Ilya has pointed out that with LIGO we don’t really see much of the intermediate-mass black hole and the stellar-mass black hole inspiralling together; instead we see the merger and ringdown of the final black hole. Therefore, he prefers the name intermediate mass-ratio coalescence (or IMRAC). It’s a better description of the signal we measure, but the acronym isn’t as good.

Parameter-estimation runs

The main parameter-estimation analysis for this paper was done by Zhilu, a summer student. This is notable for two reasons. First, it shows that useful research can come out of a summer project. Second, our parameter-estimation code installed and ran so smoothly that even an undergrad with no previous experience could get some useful results. This made us optimistic that everything would work perfectly in the upcoming observing run (O1). Unfortunately, a few improvements were made to the code before then, and we were back to the usual level of fun in time for The Event.

Prospects for observing and localizing gravitational-wave transients with Advanced LIGO and Advanced Virgo

The week beginning February 8th was a big one for the LIGO and Virgo Collaborations. You might remember something about a few papers on the merger of a couple of black holes; however, those weren’t the only papers we published that week. In fact, they aren’t even (currently) the most cited.

Prospects for Observing and Localizing Gravitational-Wave Transients with Advanced LIGO and Advanced Virgo is known within the Collaboration as the Observing Scenarios Document. It has a couple of interesting aspects:

  • Its content is a mix of a schedule for detector commissioning and an explanation of data analysis. It is a rare paper that spans both the instrumental and data-analysis sides of the Collaboration.
  • It is a living review: it is intended to be periodically updated as we get new information.

There is also one further point of interest for me: I was heavily involved in producing this latest version.

In this post I’m going to give an outline of the paper’s content, but delve a little deeper into the story of how this paper made it to print.

The Observing Scenarios

The paper is divided up into four sections.

  1. It opens, as is traditional, with the introduction. This has no mentions of windows, which is a good start.
  2. Section 2 is the instrumental bit. Here we give a possible timeline for the commissioning of the LIGO and Virgo detectors and a plausible schedule for our observing runs.
  3. Next we talk about data analysis for transient (short) gravitational waves. We discuss detection and then sky localization.
  4. Finally, we bring everything together to give an estimate of how well we expect to be able to locate the sources of gravitational-wave signals as time goes on.

Packaged up, the paper is useful if you want to know when LIGO and Virgo might be observing or if you want to know how we locate the source of a signal on the sky. The aim was to provide a guide for those interested in multimessenger astronomy—astronomy where you rely on multiple types of signals like electromagnetic radiation (light, radio, X-rays, etc.), gravitational waves, neutrinos or cosmic rays.

The development of the detectors’ sensitivity is shown below. It takes many years of tweaking and optimising to reach design sensitivity, but we don’t wait until then to do some science. It’s just as important to practise running the instruments and analysing the data as it is to improve the sensitivity. Therefore, we have a series of observing runs at progressively higher sensitivity. Our first observing run (O1), featured just the two LIGO detectors, which were towards the better end of the expected sensitivity.

Possible advanced detector sensitivity

Plausible evolution of the Advanced LIGO and Advanced Virgo detectors with time. The lower the sensitivity curve, the further away we can detect sources. The distances quoted are the ranges to which we could see binary neutron stars (BNSs). The BNS-optimized curve is a proposal to tweak the detectors for finding BNSs. Fig. 1 of the Observing Scenarios Document.

It’s difficult to predict exactly how the detectors will progress (we’re doing many things for the first time ever), but the plot above shows our current best plan.

I’ll not go into any more details about the science in the paper as I’ve already used up my best ideas writing the LIGO science summary.

If you’re particularly interested in sky localization, you might like to check out the data releases for studies using (simulated) binary neutron star and burst signals. The binary neutron star analysis is similar to that we do for any compact binary coalescence (the merger of a binary containing neutron stars or black holes), and the burst analysis works more generally as it doesn’t require a template for the expected signal.

The path to publication

Now, this is the story of how a Collaboration paper got published. I’d like to take a minute to tell you how I became responsible for updating the Observing Scenarios…

In the beginning

The Observing Scenarios has its origins long before I joined the Collaboration. The first version of the document I can find is from July 2012. Amongst the labyrinth of internal wiki pages we have, the earliest reference I’ve uncovered was from August 2012 (the plan was to have a mature draft by September). The aim was to give a road map for the advanced-detector era, so the wider astronomical community would know what to expect.

I imagine it took a huge effort to bring together all the necessary experts from across the Collaboration to sit down and write the document.

Any document detailing our plans would need to be updated regularly as we get a better understanding of our progress on commissioning the detectors (and perhaps understanding what signals we will see). Fortunately, there is a journal that can cope with just that: Living Reviews in Relativity. Living Reviews is designed so that authors can update their articles so that they never become (too) out-of-date.

A version was submitted to Living Reviews early in 2013, around the same time as a version was posted to the arXiv. We had referee reports (from two referees), and were preparing to resubmit. Unfortunately, Living Reviews suspended operations before we could. However, work continued.

Updating sky localization

I joined the LIGO Scientific Collaboration when I started at the University of Birmingham in October 2013. I soon became involved in a variety of activities of the Parameter Estimation group (my boss, Alberto Vecchio, is the chair of the group).

Sky localization was a particularly active area as we prepared for the first runs of Advanced LIGO. The original version of the Observing Scenarios Document used a simple approximate means of estimating sky localization, using just timing triangulation (it didn’t even give numbers for when we only had two detectors running). We knew we could do better.

We had all the code developed, but we needed numbers for a realistic population of signals. I was one of the people who helped run the analyses to get these. We had the results by the summer of 2014; we now needed someone to write up the results. I have a distinct recollection of there being silence on our weekly teleconference. Then Alberto asked if I would do it. I said yes: it would probably only take me a week or two to write a short technical note.

Saying yes is a slippery slope.

That note became Parameter estimation for binary neutron-star coalescences with realistic noise during the Advanced LIGO era, a 24-page paper (it considers more than just sky localization).

Numbers in hand, it was time to update the Observing Scenarios. Even if things were currently on hold with Living Reviews, we could still update the arXiv version. I thought it would be easiest if I put them in, with a little explanation, myself. I compiled a draft and circulated it in the Parameter Estimation group. Then it was time to present to the Data Analysis Council.

The Data Analysis Council either sounds like a shadowy organisation orchestrating things from behind the scenes, or a place where people bicker over trivial technical issues. In reality it is a little of both. This is the body that should coordinate all the various bits of analysis done by the Collaboration, and they have responsibility for the Observing Scenarios Document. I presented my update on the last call before Christmas 2014. They were generally happy, but said that the sky localization on the burst side needed updating too! There was once again silence on the call when it came to the question of who would finish off the document. The Observing Scenarios became my responsibility.

(I had thought that if I helped out with this Collaboration paper, I could take the next 900 off. This hasn’t worked out.)

The review

With some help from the Burst group (in particular Reed Essick, who had led their sky localization study), I soon had a new version with fully up-to-date sky localization. This was ready for our March Collaboration meeting. I didn’t go (I was saving my travel budget for the summer), so Alberto presented on my behalf. It was now agreed that the document should go through internal review.

It’s this which I really want to write about. Peer review is central to modern science. New results are always discussed by experts in the community, to try to understand the value of the work; however, peer review is formalised in the refereeing of journal articles, when one or more (usually anonymous) experts examine work before it can be published. There are many ups and downs with this… For Collaboration papers, we want to be sure that things are right before we share them publicly. We go through internal peer review. In my opinion this is much more thorough than journal review, and this shows how seriously the Collaboration take their science.

Unfortunately, setting up the review was also where we hit a hurdle—it took until July. I’m not entirely sure why there was a delay: I suspect it was partly because everyone was busy assembling things ahead of O1 and partly because there were various discussions amongst the high-level management about what exactly we should be aiming for. Working as part of a large collaboration can mean that you get to be involved in wonderful science, but it can also mean lots of bureaucracy and politics. However, in the intervening time, Living Reviews was back in operation.

The review team consisted of five senior people, each of whom had easily five times as much experience as I do, with expertise in each of the areas covered in the document. The chair of the review was Alan Weinstein, head of the Caltech LIGO Laboratory Astrophysics Group, who has an excellent eye for detail. Our aim was to produce the update for the start of O1 in September. (Spoiler: we didn’t make it.)

The review team discussed things amongst themselves and I got the first comments at the end of August. The consensus was that we should not just update the sky localization, but update everything too (including the structure of the document). This precipitated a flurry of conversations with the people who organise the schedules for the detectors, those who liaise with our partner astronomers on electromagnetic follow-up, and everyone who does sky localization. I was initially depressed that we wouldn’t make our start of O1 deadline; however, then something happened that altered my perspective.

On September 14, four days before the official start of O1, we made a detection. GW150914 would change everything.

First, we could no longer claim that binary neutron stars were expected to be our most common source—instead they became the source we expect would most commonly have an electromagnetic counterpart.

Second, we needed to be careful how we described engineering runs. GW150914 occurred in our final engineering run (ER8). Practically, there was little difference between the state of the detectors then and in O1. The point of the final engineering run was to get everything running smoothly so all we needed to do at the official start of O1 was open the champagne. However, we couldn’t make any claims about being able to make detections during engineering runs without being crass and letting the cat out of the bag. I’m rather pleased with the sentence

Engineering runs in the commissioning phase allow us to understand our detectors and analyses in an observational mode; these are not intended to produce astrophysical results, but that does not preclude the possibility of this happening.

I don’t know if anyone noticed the implication. (Checking my notes, this was in the September 18 draft, which shows how quickly we realised the possible significance of The Event).

Finally, since the start of observations proved to be interesting, and because the detectors were running so smoothly, it was decided to extend O1 from three months to four so that it would finish in January. No commissioning was going to be done over the holidays, so it wouldn’t affect the schedule. I’m not sure how happy the people who run the detectors were about working over this period, but they agreed to the plan. (No-one asked if we would be happy to run parameter estimation over the holidays).

After half-a-dozen drafts, the review team were finally happy with the document. It was now October 20, and time to proceed to the next step of review: circulation to the Collaboration.

Collaboration papers go through a sequence of stages. First they are circulated to everyone for comments. These can be pointing out typos, suggesting references or asking questions about the analysis. This lasts two weeks. During this time, the results must also be presented on a Collaboration-wide teleconference. After comments are addressed, the paper is sent for examination by the Executive Committees of the LIGO and Virgo Collaborations. After approval from them (and the review team checking any changes), the paper is circulated to the Collaboration again for any last comments and checking of the author list. At the same time it is sent to the Gravitational Wave International Committee, a group of all the collaborations interested in gravitational waves. This final stage lasts a week. Then you can submit the paper.

Peer review for the journal doesn’t seem too arduous in comparison, does it?

Since things were rather busy with all the analysis of GW150914, the Observing Scenarios took a little longer than usual to clear all these hoops. I presented to the Collaboration on Friday 13 November. (This was rather unlucky, as I was at a workshop in Italy and had to miss the tour of the underground Laboratori Nazionali del Gran Sasso). After addressing comments from everyone (the Executive Committees do read things carefully), I got the final sign-off to submit on December 21. At least we made it before the end of O1.

Good things come…

This may sound like a tale of frustration and delay. However, I hope that it is more than that, and it shows how careful the Collaboration is. The Observing Scenarios is really a review: it doesn’t contain new science. The updated sky localization results are from studies which have appeared in peer-reviewed journals, and are based upon codes that have been separately reviewed. Despite this, every statement was examined and every number checked and rechecked, and every member of the Collaboration had opportunity to examine the results and comment on the document.

I guess this attention to detail isn’t surprising given that our work is based on measuring a change in length of one part in 1,000,000,000,000,000,000,000.

Since this is how we treat review articles, can you imagine how much scrutiny the Discovery Paper had? Everything had at least one extra layer of review, every number had to be signed-off individually by the appropriate review team, and there were so many comments on the paper that the editors had to switch to using a ticketing system we normally use for tracking bugs in our software. This level of oversight helped me to sleep a little more easily: there are six numbers in the abstract alone I could have potentially messed up.

Of course, all this doesn’t mean we can’t make mistakes…

Looking forward

The Living Reviews version was accepted January 22, just after the end of O1. We had to make a couple of tweaks to correct tenses. The final version appeared February 8, in time to be the last paper of the pre-discovery era.

It is now time to be thinking about the next update! There are certainly a few things on the to-do list (perhaps even some news on LIGO-India). We are having a Collaboration meeting in a couple of weeks’ time, so hopefully I can start talking to people about it then. Perhaps it’ll be done by the start of O2?

arXiv: 1304.0670 [gr-qc]
Journal: Living Reviews in Relativity; 19:1(39); 2016
Science summary: Planning for a Bright Tomorrow: Prospects for Gravitational-wave Astronomy with Advanced LIGO and Advanced Virgo
Bonus fact: This is the only paper whose arXiv ID I know by heart.

General relativity at 100

General relativity, our best theory of gravitation, turns 100 this week!

Where is the cake?

Happy birthday general relativity! Einstein presented his field equations to the Prussian Academy of Science on 25 November 1915.

Gravity is the force which pulls us down towards the ground and keeps the Earth in orbit around the Sun. It is the most important force in astrophysics, causing gas clouds to collapse down to become stars; binding gas, stars and dark matter together into galaxies; and governing the overall evolution of the Universe.

Our understanding of gravity dates back to Isaac Newton. Newton realised that the same force that makes apples fall from trees also controls the motion of the planets. Realising that we could use physics to explain the everyday and the entire cosmos was a big leap! Newton’s theory was hugely successful, but he was never quite satisfied with it. In his theory gravity acted between distant objects (the Earth and an apple or the Earth and the Sun) instantaneously, without any explanation of what was linking them. The solution to this would come over 200 years later from Albert Einstein.

Einstein’s first big idea didn’t come from thinking about gravity, but from thinking about electromagnetism. Electromagnetism is the force that is responsible for fridge magnets sticking, atoms binding to form molecules and the inner workings of whatever device you are currently reading this on. According to the rules of electromagnetism, ripples in electromagnetic fields (better known as light) always travel at a particular speed. This piqued Einstein’s curiosity, as the rules didn’t say what this speed was relative to: you should measure the same speed if standing still, travelling at 100 miles per hour in a train or at a million miles per hour in a spacecraft. Speed is the distance travelled divided by the time taken, so Einstein realised that if the speed is always the same, then distances and times must appear different depending upon how you are moving! Moving clocks tick slower; at everyday speeds this effect is tiny, but we have confirmed that this is indeed the case. These ideas about space and time became known as Einstein’s theory of special relativity. Special relativity has a couple of important consequences: one is the infamous equation E = mc^2, the other is that the speed of light becomes a universal speed limit.

Special relativity says that no information can travel faster than the speed of light; this is a problem for Newton’s theory of gravitation, where the effects of gravity are transmitted instantaneously. Einstein knew that he would have to extend his theory to include gravity and freely falling objects, and he spent almost 11 years pondering the problem. The result was general relativity.

In special relativity, space and time become linked, merging into one another depending upon how you are moving relative to what you are measuring. General relativity takes this further and has space–time distorted by energy and matter. This idea can be a little tricky to explain.

In Newtonian mechanics, things (apples, light, billiard balls, etc.) like to travel in straight lines. They keep going at a constant speed in the same direction unless there is a force acting on them. Gravity is a force which pulls things away from their straight line, pulling the Earth into its circular orbit around the Sun, and accelerating an apple towards the ground. In general relativity, we take a different view. Things still travel in a straight line, but the effect of gravity is to bend space–time! A straight line in a curved space is a curve. If we don’t know about the curvature, it looks like the object is pulled off its straight line and there must be a force doing this, which we call gravity. Alternatively, we can say that gravity curves the space–time, and that the object follows its straight line in this. In general relativity, space–time tells matter how to move; matter tells space–time how to curve.

Shortest distance between London and New York

The shortest way to travel from London Heathrow airport to JFK International airport. On a long-distance flight, you may have noticed that it appears that you are moving along a curved line, but that is because the shortest distance across the Earth’s curved surface is a curve. We call this a geodesic, and the same idea applies to curved space–time in general relativity. Credit: Mr Reid.

General relativity solves Newton’s original worries. Objects are connected by space–time. This is not the rigid background of Newtonian physics, but a dynamic object, that is shaped by its contents. Space–time is curved by mass, and when the mass moves or reshapes itself, it takes time for the curvature everywhere else to readjust. When you drop a pebble into a pond, you disturb the surface, but it takes a while for the water further away to know about the splash; there’s a ripple that travels outwards, carrying the information about the disturbance. A similar thing happens for changes in gravity, there are ripples in space–time. Ripples in electromagnetic fields are electromagnetic waves, and these ripples in the gravitational fields are gravitational waves: both travel at the speed of light, in agreement with special relativity.

General relativity is not only a beautiful theory, it has so far passed every experimental test. Right from the start Einstein looked for checks of his theory. One of the calculations he did while formulating his theory was how the orbit of Mercury would change. Mercury is the planet closest to the Sun and so experiences the strongest gravity. Its orbit isn’t a perfect circle, but an ellipse, so that Mercury is sometimes a little closer to the Sun, and sometimes a little further away. In Newtonian gravity, each orbit should trace out exactly the same path, but in general relativity there is some extra rotation. Each orbit is slightly shifted with respect to the last, so if you traced out many orbits, you’d end up with a Spirograph-like pattern. This is known as precession of the orbit, and is a consequence of there being slightly greater curvature closer to the Sun. This evolution of Mercury’s orbit had already been measured. Some thought it indicated there was a new planet inside Mercury’s orbit (which was called Vulcan but isn’t Spock’s home) that was giving it a little pull. However, Einstein calculated that general relativity predicted exactly the right amount of extra rotation!
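The famous number is about 43 arcseconds of extra precession per century. As a sketch, the leading-order general-relativistic shift per orbit, \Delta\phi = 6\pi G M_\odot / [c^2 a (1 - e^2)], reproduces it:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
a = 5.791e10       # semi-major axis of Mercury's orbit, m
e = 0.2056         # eccentricity of Mercury's orbit
T = 87.97          # orbital period of Mercury, days

dphi = 6 * np.pi * G * M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525 / T
print(np.degrees(dphi) * 3600 * orbits_per_century)      # ~43 arcseconds per century
```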

The next test came in 1919. General relativity predicts that the path of light is bent by massive objects. This is gravitational lensing. At the time, the only object that could cause measurable bending was the Sun. If we could measure a change in the position of background stars when the Sun was in front of them, we could check if the amount of bending was as expected. There’s an obvious problem here: the Sun’s so bright that you can’t see stars around it. Arthur Eddington had the idea of making the measurement during an eclipse. He mounted an expedition and confirmed the prediction. This was big news and made Einstein a superstar.

Now, 100 years after Einstein proposed his theory, we are poised to make the most precise tests. There is currently a global effort to directly detect gravitational waves. Measuring the gravitational waves will tell us if ripples in space–time behave as Einstein predicted. The waves will also tell us about the systems that created them; this will give us an up-close glimpse of black holes. Black holes are the regions of strongest gravity; they are where the curvature of space–time becomes so immense that all straight lines lead inwards. Checking that the black holes of Nature match what we expect from general relativity will test the theory in the most extreme conditions possible.

The Advanced LIGO detectors are currently listening for gravitational-wave signals from merging neutron stars or black holes, and next year Advanced Virgo plans to join the hunt too. We don’t (yet) know how often such signals occur, so we can’t say when the first detection will be made. Perhaps this will be soon and we will learn something more about gravitation…

Ripples in space time

Merging black holes create ripples in space time. These can be detected with a laser interferometer. Credit: Gravitational Wave Group.

Searches for continuous gravitational waves from nine young supernova remnants

The LIGO Scientific Collaboration is busy analysing the data we’re currently taking with Advanced LIGO. However, the Collaboration is still publishing results from initial LIGO too. The most recent paper is a search for continuous waves—signals that are an almost constant hum throughout the observations. (I expect they’d be quite annoying for the detectors). Searching for continuous waves takes a lot of computing power (you can help by signing up for Einstein@Home), and is not particularly urgent since the sources don’t do much, hence it can take a while for results to appear.

Supernova remnants

Massive stars end their lives with an explosion, a supernova. Their core collapses down and their outer layers are blasted off. The aftermath of the explosion can be beautiful, with the thrown-off debris forming a bubble expanding out into the interstellar medium (the diffuse gas, plasma and dust between stars). This structure is known as a supernova remnant.

The bubble of a supernova remnant

The youngest known supernova remnant, G1.9+0.3 (it’s just 150 years old), observed in X-ray and optical light. The ejected material forms a shock wave as it pushes the interstellar material out of the way. Credit: NASA/CXC/NCSU/DSS/Borkowski et al.

At the centre of the supernova remnant may be what is left following the collapse of the core of the star. Depending upon the mass of the star, this could be a black hole or a neutron star (or it could be nothing). We’re interested in the case it is a neutron star.

Neutron stars

Neutron stars are incredibly dense. One teaspoon’s worth would have about as much mass as 300 million elephants. Neutron stars are like giant atomic nuclei. We’re not sure how matter behaves in such extreme conditions as they are impossible to replicate here on Earth.

If a neutron star rotates rapidly (we know many do) and has an uneven surface, or if there are waves in the neutron star that move lots of material around (like Rossby waves on Earth), then it can emit continuous gravitational waves. Measuring these gravitational waves would tell you how bumpy the neutron star is or how big the waves are, and therefore something about what the neutron star is made from.

Neutron stars are most likely to emit loud gravitational waves when they are young. This is for two reasons. First, the supernova explosion is likely to give the neutron star a big whack; this could ruffle up its surface and set off lots of waves, giving rise to the sort of bumps and wobbles that emit gravitational waves. As the neutron star ages, things quiet down: the neutron star relaxes, bumps smooth out and waves dissipate. This leaves us with smaller gravitational waves. Second, gravitational waves carry away energy, slowing the rotation of the neutron star. This also means that the signal gets quieter (and harder to detect) as the neutron star ages.
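
For the young remnants targeted here, there is a handy benchmark for how loud a signal could possibly be: the age-based limit, which assumes the star has been spinning down purely through gravitational-wave emission since birth. Using the usual textbook expression, with distance d, age \tau and a fiducial moment of inertia I \approx 10^{38} kg m^2 (assumed values, not numbers quoted from the paper), the maximum strain is

h_0^{\mathrm{age}} = \frac{1}{d} \sqrt{\frac{5 G I}{8 c^3 \tau}},

so the closer and younger the remnant, the louder the potential signal.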

Since young neutron stars are the best potential sources, this study looked at nine young supernova remnants in the hopes of finding continuous gravitational waves. Searching for gravitational waves from particular sources is less computationally expensive than searching the entire sky. The search included Cassiopeia A, which had previously been searched in LIGO’s fifth science run, and G1.9+0.3, which, as discovered by Dave Green, is only 150 years old. The positions of the searched supernova remnants are shown in the map of the Galaxy below.

Galactic map of supernova remnants

The nine young supernova remnants searched for continuous gravitational waves. The yellow dot marks the position of the Solar System. The green markers show the supernova remnants, which are close to the Galactic plane. Two possible positions for Vela Jr (G266.2−1.2) were used, since we are uncertain of its distance. Original image: NASA/JPL-Caltech/ESO/R. Hurt.

Gravitational-wave limits

No gravitational waves were found. The search checks how well template waveforms match up with the data. We tested that this works by injecting some fake signals into the data. Since we didn’t detect anything, we can place upper limits on how loud any gravitational waves could be. These limits were double-checked by injecting some more fake signals at the limit, to see if we could detect them. We quoted 95% upper limits: that is, if a signal were present at that amplitude, we would expect to see it 95% of the time. The results actually have a small safety margin built in, so the injected signals were typically found 96%–97% of the time. In any case, we are fairly sure that there aren’t gravitational waves at or above the upper limits.
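
As a rough sketch of how an injection-based upper limit works (a toy illustration with made-up numbers, not the actual search pipeline): inject fake signals at a range of amplitudes, measure the fraction recovered, and read off where the recovery rate crosses 95%.

```python
import numpy as np

# Toy illustration of an injection-based 95% upper limit (made-up numbers).
# For each trial strain amplitude h0 we record the fraction of injected
# signals that the search recovers.
h0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-25        # injected amplitudes
efficiency = np.array([0.20, 0.55, 0.85, 0.96, 0.99])   # fraction recovered

# The 95% upper limit is the amplitude at which the detection efficiency
# crosses 0.95 (found here by simple linear interpolation).
h0_95 = np.interp(0.95, efficiency, h0)
print(f"95% upper limit on h0: {h0_95:.2e}")
```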

These upper limits are starting to tell us interesting things about the size of neutron-star bumps and waves. Hopefully, with data from Advanced LIGO and Advanced Virgo, we’ll actually be able to make a detection. Then we’ll not only be able to say that these bumps and waves are smaller than a particular size, but that they are this size. Then we might be able to figure out the recipe for making the stuff of neutron stars (I think it might be more interesting than just flour and water).

arXiv: 1412.5942 [astro-ph.HE]
Journal: Astrophysical Journal; 813(1):39(16); 2015
Science summary: Searching for the youngest neutron stars in the Galaxy
Favourite supernova remnant: Cassiopeia A

Advanced LIGO: O1 is here!

The LIGO sites

Aerial views of LIGO Hanford (left) and LIGO Livingston (right). Both have 4 km long arms (arranged in an L shape) which house the interferometer beams. Credit: LIGO/Caltech/MIT.

The first observing run (O1) of Advanced LIGO began just over a week ago. We officially started at 4 pm British Summer Time, Friday 18 September. It was a little low key: you don’t want lots of fireworks and popping champagne corks next to instruments incredibly sensitive to vibrations. It was a smooth transition from our last engineering run (ER8), so I don’t even think there were any giant switches to throw. Of course, I’m not an instrumentalist, so I’m not qualified to say. In any case, it is an exciting time, and it is good to see some media attention for the Collaboration (with stories from Nature, the BBC and Science).

I would love to keep everyone up to date with the latest happenings from LIGO. However, like everyone in the Collaboration, I am bound by a confidentiality agreement. (You don’t want to cross people with giant lasers.) We can’t have someone saying that we have detected a binary black hole (or that we haven’t) before we’ve properly analysed all the data, finalised calibration, reviewed all the code, double-checked our results, and agreed amongst ourselves that we know what’s going on. When we are ready, announcements will come from the LIGO Spokesperson Gabriela González and the Virgo Spokesperson Fulvio Ricci. Event rates are uncertain and we’re not yet at final sensitivity, so don’t expect too much of O1.

There are a couple of things that I can share about our status. Whereas normally everything I write is completely unofficial, these are suggested replies to likely questions.

Have you started taking data?
We began collecting science-quality data at the beginning of September, in preparation for the first Observing Run that started on Friday, September 18, and are planning on collecting data for about 4 months.

We certainly do have data, but there’s nothing new about that (other than the improved sensitivity). Data from the fifth and sixth science runs of initial LIGO are now publicly available from the LIGO Open Science Center. You can go through it and try to find anything we missed (which is pretty cool).

Have you seen anything in the data yet?
We analyse the data “online” in an effort to provide fast information to astronomers for possible follow-up of triggers using a relatively low statistical significance (a false alarm rate of ~1/month). We have been tuning the details of the communication procedures, and we have not yet automated all the steps that can be, but we will send alerts to astronomers above the agreed threshold as soon as we can after those triggers are identified. Since the analysis to validate a candidate in gravitational-wave data can take months, we will not be able to say anything about results in the data on short time scales. We will share any and all results when ready, though probably not before the end of the Observing Run.

Analysing the data is tricky, and requires lots of computing time, as well as careful calibration of the instruments (including understanding how many glitches they produce that could look like a gravitational-wave trigger). It takes a while to get everything done. If you would like to help out, you can sign up for Einstein@Home, which will use your computer’s idle time to crunch through data. It doesn’t just analyse LIGO data, but has also discovered pulsars in radio and gamma-ray data. You can find out more about Einstein@Home in the LIGO Magazine.

We heard that you sent a gravitational-wave trigger to astronomers already—is that true?
During O1, we will send alerts to astronomers above a relatively low significance threshold; we have been practising communication with astronomers in ER8. We are following this policy with partners who have signed agreements with us and have observational capabilities ready to follow up triggers. Because we cannot validate gravitational-wave events until we have enough statistics and diagnostics, we have confidentiality agreements about any triggers that are shared, and we hope all involved abide by those rules.

I expect this is a pre-emptive question and answer. It would be amazing if we could see an electromagnetic (optical, gamma-ray, radio, etc.) counterpart to a gravitational wave. (I’ve done some work on how well we can localise gravitational-wave sources on the sky.) It’s likely that any explosion or afterglow that is visible will fade quickly, so we want astronomers to be able to start looking straight away. This means candidate events are sent out before they’re fully vetted: they could just be noise, they could be real, or they could be a blind injection. A blind injection is when a fake signal is introduced to the data secretly; this is done to keep us honest and to check that our analysis works as expected (since we know what results we should get for the signal that was injected). There was a famous blind injection during the run of initial LIGO called Big Dog. (We take gravitational-wave detection seriously.) We’ve learnt a lot from injections, even if they are disappointing. Alerts will be sent out for events with false alarm rates of about one per month, so we expect a few across O1 just because of random noise.
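
To put a number on “a few”: with a false alarm rate of around one per month and roughly four months of observing, the count of noise-only alerts should follow a Poisson distribution with a mean of about four. A quick back-of-the-envelope sketch (not an official projection):

```python
from scipy.stats import poisson

# Rough expectation for noise-only alerts in O1 (back-of-the-envelope only):
# a false alarm rate of ~1/month over ~4 months of observing.
far = 1.0          # alerts per month from noise alone
duration = 4.0     # months of observing
mean_alerts = far * duration

print(f"Expected noise-only alerts: {mean_alerts:.0f}")
print(f"P(at least one): {1 - poisson.pmf(0, mean_alerts):.3f}")
print(f"P(five or more): {poisson.sf(4, mean_alerts):.2f}")
```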

While I can’t write more about the science from O1, I will still be posting about astrophysics, theory and how we analyse data. Those who are impatient can be reassured that gravitational waves have been detected, just indirectly, from observations of binary pulsars.

Periastron shift of binary pulsar

The orbital decay of the Hulse–Taylor binary pulsar (PSR B1913+16). The points are measured values, while the curve is the theoretical prediction for gravitational waves. I love this plot. Credit: Weisberg & Taylor (2005).

LIGO Magazine: Issue 7

It is an exciting time in LIGO. The start of the first observing run (O1) is imminent. I think they just need to sort out a button that is big enough and red enough (or maybe gather a little more calibration data…), and then it’s all systems go. Making the first direct detection of gravitational waves with LIGO would be an enormous accomplishment, but that’s not all we can hope to achieve: what I’m really interested in is what we can learn from these gravitational waves.

The LIGO Magazine gives a glimpse inside the workings of the LIGO Scientific Collaboration, covering everything from the science of the detector to what collaboration members like to get up to in their spare time. The most recent issue was themed around how gravitational-wave science links in with the rest of astronomy. I enjoyed it, as I’ve been recently working on how to help astronomers look for electromagnetic counterparts to gravitational-wave signals. It also features a great interview with Joseph Taylor Jr., one of the discoverers of the famous Hulse–Taylor binary pulsar. The back cover features an article I wrote about parameter estimation: an expanded version is below.

How does parameter estimation work?

Detecting gravitational waves is one of the great challenges in experimental physics. A detection would be hugely exciting, but it is not the end of the story. Having observed a signal, we need to work out where it came from. This is a job for parameter estimation!

How we analyse the data depends upon the type of signal and what information we want to extract. I’ll use the example of a compact binary coalescence, that is, the inspiral (and merger) of two compact objects—neutron stars or black holes (not marshmallows). Parameters that we are interested in measuring are things like the mass and spin of the binary’s components, its orientation, and its position.

For a particular set of parameters, we can calculate what the waveform should look like. This is actually rather tricky; including all the relevant physics, like precession of the binary, can make for some complicated and expensive-to-calculate waveforms. The first part of the video below shows a simulation of the coalescence of a black-hole binary; you can see the gravitational waveform (with its characteristic chirp) at the bottom.
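
To give a flavour of the calculation, here’s a toy version of the simplest possible inspiral signal, using just the leading-order (Newtonian) frequency evolution and an assumed chirp mass of 30 solar masses; real template waveforms contain far more physics than this sketch:

```python
import numpy as np

# Toy leading-order (Newtonian) chirp for an assumed 30 solar-mass chirp mass.
# Real template waveforms include far more physics than this sketch.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
chirp_mass = 30 * M_sun * G / c**3     # chirp mass expressed in seconds

t_c = 1.0                              # coalescence time (s)
t = np.linspace(0.0, 0.99, 4096)       # sample times before coalescence (s)

# Leading-order gravitational-wave frequency: it sweeps upwards as the binary
# spirals in, diverging (unphysically) at t = t_c.
freq = (5.0 / (256.0 * (t_c - t)))**(3.0 / 8.0) * chirp_mass**(-5.0 / 8.0) / np.pi

# Crude phase integration and an amplitude that grows as f^(2/3).
phase = 2.0 * np.pi * np.cumsum(freq) * (t[1] - t[0])
h = freq**(2.0 / 3.0) * np.cos(phase)  # strain, in arbitrary units
```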

We can compare our calculated waveform with what we measured to work out how well they fit together. If we take away the wave from what we measured with the interferometer, we should be left with just noise. We understand how our detectors work, so we can model how the noise should behave; this allows us to work out how likely it would be to get the precise noise we need to make everything match up.
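
In code, the heart of this is a residual test. A minimal sketch, assuming white Gaussian noise (real analyses work with the coloured detector noise spectrum, in the frequency domain):

```python
import numpy as np

# Toy residual log-likelihood, assuming white Gaussian noise with standard
# deviation sigma. Constant terms are dropped.
def log_likelihood(data, template, sigma):
    residual = data - template        # what's left over should be pure noise
    return -0.5 * np.sum(residual**2 / sigma**2)
```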

To work out the probability that the system has a given set of parameters, we take the likelihood for our left-over noise and fold in what we already knew about the values of the parameters—for example, that any location on the sky is equally possible, that neutron-star masses are around 1.4 solar masses, or that the total mass must be larger than that of a marshmallow. For those who like details, this is done using Bayes’ theorem.
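
In symbols, Bayes’ theorem reads

p(\vec{\theta}|d) = \frac{p(d|\vec{\theta})\,p(\vec{\theta})}{p(d)},

where \vec{\theta} are the source parameters, d is the data, p(d|\vec{\theta}) is the likelihood from the noise model above, p(\vec{\theta}) is the prior, and p(d) is just a normalising constant (the evidence).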

We now want to map out this probability distribution, to find the peaks of the distribution corresponding to the most probable parameter values and also to chart how broad these peaks are (to indicate our uncertainty). Since we can have many parameters, the space is too big to cover with a grid: we can’t just systematically chart parameter space. Instead, we randomly sample the space and construct a map of its valleys, ridges and peaks. Doing this efficiently requires cunning tricks for picking how to jump between spots: exploring the landscape can take some time, and we may need to calculate millions of different waveforms!
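
To illustrate the random jumping, here is a minimal Metropolis-style sketch (the real parameter-estimation codes use far cleverer proposals and many more tricks):

```python
import numpy as np

# Minimal random-walk sampler, just to show the idea of mapping the landscape.
def sample(log_posterior, start, n_steps=10000, step_size=0.1):
    rng = np.random.default_rng()
    current = np.asarray(start, dtype=float)
    current_lp = log_posterior(current)
    chain = []
    for _ in range(n_steps):
        proposal = current + step_size * rng.standard_normal(current.shape)
        proposal_lp = log_posterior(proposal)
        # Always accept uphill jumps; accept downhill ones with a probability
        # set by the drop in posterior, so the chain spends time in proportion
        # to the height of the probability landscape.
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            current, current_lp = proposal, proposal_lp
        chain.append(current.copy())
    return np.array(chain)
```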

Having computed the probability distribution for our parameters, we can now tell an astronomer how much of the sky they need to observe to have a 90% chance of looking at the source, give the best estimate for the mass (plus uncertainty), or even figure something out about what neutron stars are made of (probably not marshmallow). This is the beginning of gravitational-wave astronomy!
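
As an example of that first calculation, here is a sketch of how a 90% sky area can be read off a pixelated probability map (hypothetical equal-area pixels; an illustration of the general idea, not the production sky-mapping code):

```python
import numpy as np

# Smallest sky area containing a given fraction of the probability, for a map
# made of equal-area pixels (real maps use HEALPix grids).
def credible_area(prob_per_pixel, pixel_area_deg2, credible_level=0.9):
    ordered = np.sort(prob_per_pixel)[::-1]              # most probable first
    n_pixels = np.searchsorted(np.cumsum(ordered), credible_level) + 1
    return n_pixels * pixel_area_deg2
```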

Monty and Carla map parameter space

Monty, Carla and the other samplers explore the probability landscape. Nutsinee Kijbunchoo drew the version for the LIGO Magazine.

Threshold concepts, learning and Pokémon

Last academic year I took a course on teaching and learning in higher education. I enjoyed learning some education theory: I could recognise habits (both good and bad) that my students and I practised. I wanted to write up some of the more interesting ideas I came across. I’ve been kept busy by other things (such as writing up the assessment for the course), but here’s the first.

Pokémon PhD

My collection of qualifications.

Threshold concepts

Have you ever had that moment when something just clicked? Perhaps you’ve been struggling with a particular topic for a while, then suddenly you understand, you have that eureka moment, and you get a new view on everything. That’s one of the best moments in studying.

Threshold concepts are a particular class of troublesome concepts that have a big impact on your development. It’s not just that they take work to come to grips with, but that you can’t master a subject until you’ve figured them out. As a teacher, you should watch out for them, as these are the areas where students’ progress can be held up and where they need extra support.

Being a student is much like being a Pokémon. When you start out, there’s not much you can do. Then you practise and gain experience. This can be difficult, but you level up. (Sadly, as a student you don’t get the nice little jingle when you do.) After levelling up, things don’t seem so hard, so you can tackle more difficult battles. Every so often you’ll learn a new technique, a new move (hopefully you won’t forget an old one), and now you are even more awesome.

That’s all pretty straightforward. If you keep training, you get stronger. (It does turn out that studying helps you learn).

Mastering a threshold concept is more like evolving. You get a sudden boost to your abilities, and now you can learn moves that you couldn’t before, perhaps you’ve changed type too. Evolving isn’t straightforward. Sometimes all you need to do is keep working and level up; other times you’ll need a particular item, to learn a special move, to hone one particular aspect, or be in the right place at the right time. Some people might assimilate a threshold concept like any other new idea, while others will have to put in extra time and effort. In any case, the end effect is transformative. Congratulations, your Physics Student has evolved into a Physicist!

Do di do dum-di-dum-di-dum!

Educational evolution. Pokémon art by Ken Sugimori.

Characteristics

Every discipline has its own threshold concepts. For example, in Pokémon training there’s the idea that different types of Pokémon have advantages over others (water is super effective against fire, which is super effective against grass, etc.), so you should pick your Pokémon (and their moves) appropriately. Threshold concepts share certain attributes; they are:

  • Transformative: Once understood, they change how you view the subject (or life in general). Understanding Pokémon types changes how you view battles: if you’re going to go up against a gym leader called Lt. Surge, you know to pack some ground types as they’re good against electric types. It also now makes sense how Iron Man (obviously a steel type) can take on Thor (an electric type) in The Avengers, but gets trashed by some random henchpeople with heat powers (fire types) in Iron Man 3.
  • Irreversible: Once learnt there’s no changing back. You know you’re going to have a bad time if you’ve only packed fire types to go explore an underwater cave.
  • Integrative: Having conquered a threshold concept, you can spot connections to other ideas and progress to develop new skills. Once you’ve realised that your beloved Blastoise has a weakness to electric types, you might consider teaching it Earthquake as a counter. You’ve moved on from just considering the types of Pokémon to considering their move-sets too. Or you could make sure your team has a ground type, so you can switch out your Blastoise. Now you’re considering the entire composition of your team.
  • Troublesome: Threshold concepts are difficult. They may be conceptually challenging (how do you remember 18 types vs 18 types?), counter-intuitive (why don’t Ghost moves affect Normal types?), or be resisted as they force you to re-evaluate your (deeply held) opinions (maybe Gyarados isn’t the best, despite looking ferocious, because it has a double weakness to electric types, and perhaps using your favourite Snorlax in all situations is a bad idea, regardless of how huggable he is).

Using these criteria, you might be able to think of some threshold concepts in other areas, and possibly see why people have problems with them. For example, it might now make more sense why some people have problems accepting global warming is caused by humans. This is certainly a transformative idea, as it makes you reconsider your actions and those of society, as well as the prospects for future generations, and it is certainly troublesome, as one has to accept that the world can change, that our current lifestyle (and perhaps certain economic activities) is not sustainable, and that we are guilty of damaging our only home. The irreversible nature of threshold concepts might also make people resist coming to terms with them, as they prefer their current state of comfortable innocence.

Loss of Arctic ice over 15 years

National Geographic atlases from 1999 to 2014, showing how Arctic ice has melted. At this rate, ice type Pokémon will be extinct in the wild by the end of the century (they’re already the rarest type). It’s super depressing…

Summary

Threshold concepts are key but troublesome concepts within a discipline. If you want to be the very best, you have to master them all. They are so called as they can be thought of as doorways through which a student must step in order to progress. After moving past the threshold, they enter a new (larger) room, the next stage in their development. From here, they can continue to the next threshold. Looking back, they also get a new perspective on what they have learnt; they can now see new ways of connecting together old ideas. Students might be hesitant to step through because they are nervous about leaving their current state behind. They might also have problems just because the door is difficult to open. If you are planning teaching, you should consider which threshold concepts you’ll cover, and then how to build your lessons around them so that no-one gets left behind.

I especially like the idea of threshold concepts, as it shows learning to be made up of a journey through different stages of understanding, rather than building a pile of knowledge. (Education should be more about understanding how to figure out the right answer than knowing what it is). If you’d like to learn more about threshold concepts, I’d recommend browsing the resources compiled by Michael Flanagan of UCL.

BritGrav 15

April was a busy month. Amongst other adventures, I organised the 15th British Gravity (BritGrav) Meeting. This is a conference for everyone involved with research connected to gravitation. I was involved in organising last year’s meeting in Cambridge, and since there were very few fatalities, it was decided that I could be trusted to organise it again. Overall, I think it actually went rather well.

Before I go on to review the details of the meeting, I must thank everyone who helped put things together. Huge thanks to my organisational team, who helped with every aspect of the organisation. They did wonderfully, even if Hannah seems to have developed a slight sign-making addiction. Thanks go to Classical & Quantum Gravity and the IOP Gravitational Physics Group for sponsoring the event, and to the College of Engineering & Physical Sciences’ marketing team for advertising. Finally, thanks to everyone who came along!

Talks

BritGrav is a broad meeting. It turns out there’s rather a lot of research connected to gravity! This has both good and bad aspects. On the plus side, you can make connections with people you wouldn’t normally run across and find out about new areas you wouldn’t hear about at a specialist meeting. On the negative side, there can be some talks which go straight over your head (no matter how fast your reactions are). The 10-minute talk format helps a little here. There’s not enough time to delve into details (which only specialists would appreciate), so speakers should stick to giving an overview that is generally accessible. Even in the event that you do get completely lost, it’s only a few minutes until the next talk, so it’s not too painful. The 10-minute time slot also helps us to fit in a large number of talks, to cover all the relevant areas of research.

Open quantum gravitational systems

Slide from Teodora Oniga’s BritGrav 15 talk on gauge invariant quantum gravitational decoherence. There are not enough cats featured in slides on gravitational physics.

I’ve collected together tweets and links from the science talks: it was a busy two days! We started with Chris Collins talking about testing the inverse-square law here at Birmingham. There were a couple more experimental talks leading into a session on gravitational waves, which I enjoyed particularly. I spoke on a soon-to-be published paper, and Birmingham PhDs Hannah Middleton and Simon Stevenson gave interesting talks on what we could learn about black holes from gravitational waves.

Detecting neutron star–black hole binaries

Slides demonstrating the difficulty of detecting gravitational-wave signals from Alex Nielsen’s talk on searching for neutron star–black hole binaries with gravitational waves. Fortunately we don’t do it by eye (although if you flick between the slides you can notice the difference).

In the afternoon, there were some talks on cosmology (including a nice talk from Maggie Lieu on hierarchical modelling) and on the structure of neutron stars. I was especially pleased to see a talk by Alice Harpole, as she had been one of my students at Cambridge (she was always rather good). The day concluded with some numerical relativity and the latest work generating gravitational-waveform templates (more on that later).

The second day was more theoretical, and somewhat more difficult for me. We had talks on modified gravity and on quantum theories. We had talks on the properties of various spacetimes. Brien Nolan told us that everyone should have a favourite spacetime before going into the details of his: McVittie. That’s not the spacetime around a biscuit, sadly, but could describe a black hole in an expanding Universe, which is almost as cool.

The final talks of the day were from the winners of the Gravitational Physics Group’s Thesis Prize. Anna Heffernan (2014 winner) spoke on the self-force problem. This is important for extreme-mass-ratio systems, such as those we’ll hopefully detect with eLISA. Patricia Schmidt (2015 winner) spoke on including precession in binary black hole waveforms. In general, the spins of black holes won’t be aligned with their orbital angular momentum, causing them to precess. The precession modulates the gravitational waveform, so you need to include this when analysing signals (especially if you want to measure the black holes’ spins). Both talks were excellent and showed how much work had gone into the respective theses.

The meeting closed with the awarding of the best student-talk prize, kindly sponsored by Classical & Quantum Gravity. Runners up were Viraj Sanghai and Umberto Lupo. The winner was Christopher Moore from Cambridge. Chris gave a great talk on how to include uncertainty about your gravitational waveform (which is important if you don’t have all the physics, like precession, accurately included) into your parameter estimation: if your waveform is wrong, you’ll get the wrong answer. We’re currently working on building waveform uncertainty into our parameter-estimation code. Chris showed how you can think about this theoretical uncertainty as another source of noise (in a certain limit).

There was one final talk of the day: Jim Hough gave a public lecture on gravitational-wave detection. I especially enjoyed Jim’s explanation that we need to study gravitational waves to be prepared for the 24th century, and hearing how Joe Weber almost got into a fist fight arguing about his detectors (hopefully we’ll avoid that with LIGO). I hope this talk enthused our audience for the first observations of Advanced LIGO later this year: there were many good questions from the audience and there was considerable interest in our table-top Michelson interferometer afterwards. We had 114 people in the audience (one of the better turn-outs for recent outreach activities), which I was delighted with.

Attendance

We had a fair amount of interest in the meeting. We totalled 81 (registered) participants: a few more registered but didn’t make it in the end for various reasons, and I suspect a couple of Birmingham people sneaked in without registering.

Looking at the attendance in more detail, we can break down the participants by their career level. One of the aims of BritGrav is to showcase the research of early-career researchers (PhD students and post-docs), so we ask for this information on the registration form. The proportions are shown in the pie chart below.

Attendance at BritGrav 15 by career level

Proportion of participants at BritGrav 15 by (self-reported) career level.

PhD students make up the largest chunk; there are a few keen individuals who are yet to start a PhD, and there is a roughly even split between post-docs and permanent staff. We do need to encourage more senior researchers to come along, even if they are not giving talks, so that they can see the research done by others.

We had a total of 50 talks across the two days (including the two thesis-prize talks); the distribution of talks by career level is shown below.

Talks at BritGrav 15 by career level

Proportion of talks at BritGrav 15 by (self-reported) career level. The majority are by PhD students.

PhDs make up an even larger proportion of talks here, and we see that there are many more talks from post-docs than permanent staff members. This is exactly what we’re aiming for! For comparison, at the first BritGrav Meeting only 26% of talks were by PhD students, and 17% of talks were by post-docs. There’s been a radical change in the distribution of talks, shifting from senior to junior, although the contribution by post-docs ends up about the same.

We can also consider the proportion of participants from different institutions, which is shown below.

Attendance at BritGrav 15 by institution

Proportion of participants at BritGrav 15 by institution. Birmingham, as host, comes out top.

Here, any UK/Ireland institution which has one or no speakers is lumped together under “Other”; all these institutions had fewer than four participants. It’s good to see that we are attracting some international participants: of those from non-UK/Ireland institutions, two are from the USA and the rest are from Europe (France, Germany, The Netherlands and Slovenia). Birmingham makes up the largest chunk, which probably reflects convenience. The list of top institutions closely resembles the list of institutions that have hosted a BritGrav. This could show that these are THE places for gravitational research in the UK, or possibly that the best advertising for future BritGravs is having been at an institution in the past (so everyone knows how awesome they are). The distribution of talks by institution roughly traces the number of participants, as shown below.

Talks at BritGrav 15 by institution

Proportion of talks at BritGrav 15 by institution.

Again Birmingham comes top, followed by Queen Mary and Southampton. Both of the thesis-prize talks were from people currently outside the UK/Ireland, even though they studied for their PhDs locally. I think we had a good mix of participants, which is one of the factors that contributed to the meeting being successful.

I’m pleased with how well everything went at BritGrav 15, and now I’m looking forward to BritGrav 16, which I will not be organising.