Classifying the unknown: Discovering novel gravitational-wave detector glitches using similarity learning

Gravity Spy is an awesome project that combines citizen science and machine learning to classify glitches in LIGO and Virgo data. Glitches are short bursts of noise in our detectors which make analysing our data more difficult. Some glitches have known causes, others are more mysterious. Classifying glitches into different types helps us better understand their properties, and in some cases track down their causes and eliminate them! In this paper, led by Scotty Coughlin, we demonstrated the effectiveness of a new tool which citizen scientists can use to identify new glitch classes.

The Gravity Spy project

Gravitational-wave detectors are complicated machines. It takes a lot of engineering to achieve the accuracy needed to observe gravitational waves. Most of the time, our detectors perform well: the background noise in our detectors is easy to understand and model. However, our detectors are also subject to glitches, unusual (sometimes extremely loud and complicated) bursts that don't fit the usual properties of the noise. Glitches are short, appearing in only a small fraction of the total data, but they are common. This makes detection and analysis of gravitational-wave signals more difficult. Detection is tricky because you need to be careful to distinguish glitches from signals (and possibly glitches and signals together), and understanding a signal is complicated as we may need to model the signal and a glitch together [bonus note]. Understanding glitches is essential if gravitational-wave astronomy is to be a success.

To understand glitches, we need to be able to classify them. We can search for glitches by looking for loud pops, whooshes and splats in our data. The task is then to spot similarities between them. Once we have a set of glitches of the same type, we can examine the state of the instruments at these times. In the best cases, we can identify the cause, and then work to improve the detectors so that this no longer happens. Other times, we might not be able to find the source, but we can find one of the monitors in our detectors which acts as a witness to the glitch. Then we know that if something appears in that monitor, we expect a glitch of a particular form. This might mean that we throw away that bit of data, or perhaps we can use the witness data to subtract out the glitch. Since glitches are so common, classifying them is a huge amount of work. It is too much for our detector characterisation experts to do by hand.

There are two cunning options for classifying large numbers of glitches:

  1. Get a computer to do it. The difficulty is teaching a computer to identify the different classes. Machine-learning algorithms can do this, if they are properly trained. Training can require a large training set, and careful validation, so the process is still labour intensive.
  2. Get lots of people to help. The difficulty here is getting non-experts up-to-speed on what to look for, and then checking that they are doing a good job. Crowdsourcing classifications is something citizen scientists can do, but we will need a large number of dedicated volunteers to tackle the full set of data.

The idea behind Gravity Spy is to combine the two approaches. We start with a small training set from our detector characterisation experts, and train a machine-learning algorithm on them. We then ask citizen scientists (thanks Zooniverse) to classify the glitches. We start them off with glitches whose classification the machine-learning algorithm is confident about; these should be easy to identify. As citizen scientists get more experienced, they level up and start tackling more difficult glitches. The citizen scientists validate the classifications of the machine-learning algorithm, and provide a larger training set (especially helpful for the rarer glitch classes) for it. We can then happily apply the machine-learning algorithm to classify the full data set [bonus note].

The Gravity Spy workflow

How Gravity Spy works: the interconnection of machine-learning classification and citizen-scientist classification. The similarity search is used to identify glitches similar to one which does not fit into the current classes. Figure 2 of Coughlin et al. (2019).

I especially like the levelling-up system in Gravity Spy. I think it helps keep citizen scientists motivated, as it both prevents them from being overwhelmed when they start and helps them see their own progress. I am currently Level 4.

Gravity Spy works using images of the data. We show spectrograms, plots of how loud the output of the detectors is at different frequencies at different times. A gravitational wave from a binary would show a chirp structure, starting at lower frequencies and sweeping up.

Gravitational-wave chirp

Spectrogram showing the upward-sweeping chirp of gravitational wave GW170104 as seen in Gravity Spy. I correctly classified this as a Chirp.

New glitches

The Gravity Spy system works smoothly. However, it is set up to work with a fixed set of glitch classes. We may be missing new glitch classes, either because they are rare, and hadn't been spotted by our detector characterisation team, or because we changed something in our detectors and a new class arose (we expect this to happen as we tune up the detectors between observing runs). We can add more classes for our citizen scientists and machine-learning algorithm to use, but how do we spot new classes in the first place?

Our citizen scientists managed to identify a few new glitches by spotting things which didn't fit into any of the classes. These get put in the None-of-the-Above class. Occasionally, you'll come across similar-looking glitches, and by collecting a few of these together, build a new class. The Paired Dove and Helix classes were identified early on by our citizen scientists this way; my favourite suggested new class is the Falcon [bonus note]. The difficulty is finding a large number of examples of a new class—you might only recognise a common feature after going past a few examples; backtracking to find the previous examples is hard, and you just have to keep working until you are lucky enough to be given more of the same.

Helix and Paired Dove

Example Helix (left) and Paired Dove glitches. These classes were identified by Gravity Spy citizen scientists. Helix glitches are related to hiccups in the auxiliary lasers used to calibrate the detectors by pushing on the mirrors. Paired Dove glitches are related to motion of the beamsplitter in the interferometer. Adapted from Figure 8 of Zevin et al. (2017).

To help our citizen scientists find new glitches, we created a similarity search. Having found an interesting glitch, you can search for similar examples, and quickly put together a collection of your new class. The video below shows how it works. The thing we had to work out was how to define similar.

Transfer learning

Our machine-learning algorithm only knows about the classes we tell it about. It then works out the features which distinguish the different classes, and which are common to glitches of the same class. Working in this feature space, glitches form clusters of different classes.

Gravity Spy feature space

Visualisation showing the clustering of different glitches in the Gravity Spy feature space. Each point is a different glitch from our training set. The feature space has more than three dimensions: this visualisation was made using a technique which preserves the separation and clustering of different and similar points. Figure 1 of Coughlin et al. (2019).

For our similarity search, our idea was to measure distances in feature space [bonus note for experts]. This should work well if our current set of classes has a wide enough set of features to capture the characteristics of the new class; however, it won't be effective if the new class is completely different, so that its unique features are not recognised. As an analogy, imagine that you had an algorithm which classified M&M's by colour. It would probably do well if you asked it to distinguish a new colour, but poorly if you asked it to pick out peanut-butter-filled M&M's, as they are identified by flavour, which is not a feature it knows about. The strategy of using what a machine-learning algorithm learnt about one problem to tackle a new problem is known as transfer learning, and we found this strategy worked well for our similarity search.
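If you fancy a concrete picture of how this works, here is a minimal sketch in Python. This is not the actual Gravity Spy code: the learned features are mocked with random numbers, where in reality they would come from the trained neural net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the feature vectors the trained network would produce,
# one row per glitch image in the data set.
features = rng.normal(size=(10_000, 200))

def cosine_similarity(query, bank):
    """Cosine similarity between one query vector and a bank of vectors."""
    query = query / np.linalg.norm(query)
    bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return bank @ query

# Pick an interesting glitch and fetch the ten most similar others.
query = features[42]
scores = cosine_similarity(query, features)
most_similar = np.argsort(scores)[::-1][1:11]  # skip the query itself
print(most_similar)
```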

Raven Pecks and Water Jets

To test our similarity search, we applied it to two glitches classes not in the Gravity Spy set:

  1. Raven Peck glitches are caused by thirsty ravens pecking ice built up along nitrogen vent lines outside of the Hanford detector. Raven Pecks look like horizontal lines in spectrograms, similar to other Gravity Spy glitch classes (like the Power Line, Low Frequency Line and 1080 Line). The similarity search should therefore do a good job, as we should be able to recognise their important features.
  2. Water Jet glitches were caused by local seismic noise at the Hanford detector, which caused loud bands that disturbed the input laser optics. The Water Jet glitch doesn't have anything to do with water; it is named for its appearance (like a fountain, not a weasel). Its features are subtle, and unlike those of other classes, so we would expect this to be difficult for our similarity search to handle.

These glitches appeared in the data from the second observing run. Raven Pecks appeared between 14 April and 9 August 2017, and Water Jets between 4 January and 28 May 2017. Over these intervals there are a total of 13,513 and 26,871 Gravity Spy glitches of all types respectively, so even if you knew exactly when to look, you'd have a large number to search through to find examples.

Raven Peck and Water Jet glitches

Example Raven Peck (left) and Water Jet (right) glitches. These classes of glitch are not included in the usual Gravity Spy scheme. Adapted from Figure 3 of Coughlin et al. (2019).

We tested our machine-learning feature space for the similarity search against simpler approaches: using the raw difference in pixels, and using a principal component analysis to create a feature space. Results are shown in the plots below. These show the fraction of the glitches we want returned by the similarity search versus the fraction of the total set of glitches rejected. Ideally, we would want to reject all the glitches except the ones we want, so the search would return 100% of the wanted class while rejecting almost 100% of the total set. However, the actual results depend on the adopted threshold for the similarity search: if we're very strict, we'll reject pretty much everything and only get the most similar glitches of the class we want; if we're too accepting, we get everything back, regardless of class. The plots can be read as increasing the range of the similarity search (becoming less strict) as you go from left to right.
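To make the threshold trade-off concrete, here is a toy version of this test (the similarity scores are mock data, not the real Gravity Spy distributions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock similarity scores: the wanted class scores higher on average than
# the rest of the data set.
scores_wanted = rng.normal(loc=0.6, scale=0.15, size=200)
scores_rest = rng.normal(loc=0.1, scale=0.2, size=20_000)

for threshold in np.linspace(0.0, 0.9, 10):
    returned = np.mean(scores_wanted > threshold)  # fraction of the class returned
    rejected = np.mean(scores_rest <= threshold)   # fraction of the full set rejected
    print(f"threshold {threshold:.2f}: "
          f"{100 * returned:3.0f}% returned, {100 * rejected:3.0f}% rejected")
```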

Similarity search performance

Performance of the similarity search for Raven Peck (left) and Water Jet (right) glitches: the fraction of known glitches of the desired class that have a higher similarity score (compared to an example of that glitch class) than a given percentage of the full data set. Results are shown for three different ways of defining similarity: the DIRECT machine-learning algorithm feature space (thick line), a principal component analysis (medium line) and a comparison of pixels (thin line). Adapted from Figure 3 of Coughlin et al. (2019).

For the Raven Peck, the similarity search always performs well. We have 50% of Raven Pecks returned while rejecting 99% of the total set of glitches, and we can get the full set while rejecting 92% of the total set! The performance is pretty similar between the different ways of defining feature space. Raven Pecks are easy to spot.

Water Jets are more difficult. When we have 50% of Water Jets returned by the search, our machine-learning feature space can still reject almost all glitches. The simpler approaches do much worse, and will only reject about 30% of the full data set. To get the full set of Water Jets we would need to loosen the similarity search so that it only rejects 55% of the full set using our machine-learning feature space; for the simpler approaches we’d basically get the full set of glitches back. They do not do a good job at narrowing down the hunt for glitches. Despite our suspicion that our machine-learning approach would struggle, it still seems to do a decent job [bonus note for experts].

Do try this at home

Having developed and tested our similarity search tool, we have now made it live. Citizen scientists can use it to hunt down new glitch classes. Several new glitch classes have been identified in data from LIGO and Virgo's (currently ongoing) third observing run. If you are looking for a new project, why not give it a go yourself? (Or get your students to give it a go; I've had some reasonable results with high-schoolers.) There is the real possibility that your work could help us with the next big gravitational-wave discovery.

arXiv: arXiv:1903.04058 [astro-ph.IM]
Journal: Physical Review D; 99(8):082002(8); 2019
Websites: Gravity Spy; Gravity Spy Tools
Gravity Spy blog: Introducing Gravity Spy Tools
Current stats: Gravity Spy has 15,500 registered users, who have made 4.4 million glitch classifications, leading to 200,000 successfully identified glitches.

Bonus notes

Signals and glitches

The best example of a gravitational-wave signal overlapping a glitch is GW170817. The glitch meant that the signal in the LIGO Livingston detector wasn't immediately recognised. Fortunately, the signal in the Hanford detector was easy to spot. The glitch was analysed and categorised in Gravity Spy. It is a simple glitch, so it wasn't too difficult to remove from the data. As our detectors become more sensitive, so that detections become more frequent, we expect that signals overlapping with glitches will become a more common occurrence. Unless we can eliminate glitches, it is only a matter of time before we get a glitch that prevents us from analysing an important signal.

Gravitational-wave alerts

In the third observing run of LIGO and Virgo, we send out automated alerts when we have a new gravitational-wave candidate. Astronomers can then pounce into action to see if they can spot anything coinciding with the source. It is important to quickly check the state of the instruments to ensure we don’t have a false alarm. To help with this, a data quality report is automatically prepared, containing many diagnostics. The classification from the Gravity Spy algorithm is one of many pieces of information included. It is the one I check first.

The Falcon

Excellent Gravity Spy moderator EcceruElme suggested a new glitch class Falcon. This suggestion was followed up by Oli Patane, who found that all the examples identified occurred between 6:30 am and 8:30 am on 20 June 2017 in the Hanford detector. The instrument was misbehaving at the time. To solve this, the detector was taken out of observing mode and relocked (the equivalent of switching it off and on again). Since this glitch class was only found in this one two-hour window, we've not added it as a class. I love how it was possible to identify this problematic stretch of time using only Gravity Spy images (which don't identify when they are from). I think this could be the seed of a good detective story. The Hanfordese Falcon?

Characteristics of Falcon glitches

Examples of the proposed Falcon glitch class, illustrating the key features (and where the name comes from). This new glitch class was suggested by Gravity Spy citizen scientist EcceruElme.

Distance measure

We chose a cosine distance to measure similarity in feature space. We found this worked better than a Euclidean metric, possibly because for identifying classes it is more important to have the right mix of features than to match how significant the individual features are. However, we didn't do a systematic investigation of the optimal means of measuring similarity.
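For reference, here are the two distance measures being compared, for feature vectors x and y. The cosine distance compares only the direction of the vectors, ignoring their overall magnitude, unlike the Euclidean metric:

```latex
d_{\cos}(x, y) = 1 - \frac{x \cdot y}{\|x\| \, \|y\|},
\qquad
d_{\mathrm{Euc}}(x, y) = \|x - y\|.
```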

Retraining the neural net

We tested the performance of the machine-learning feature space in the similarity search after modifying properties of our machine-learning algorithm. The algorithm we are using is a deep multiview convolutional neural net. We switched the activation function in the fully connected layer of the net, trying tanh and leaky ReLU. We also varied the number of training rounds and the number of pairs of similar and dissimilar images that are drawn from the training set each round. We found that there was little variation in results. Leaky ReLU performed a little better than tanh, possibly because it covers a larger dynamic range, and so allows for cleaner separation of similar and dissimilar features. The number of training rounds and pairs makes negligible difference, possibly because the classes are sufficiently distinct that you don't need many inputs to identify the basic features that tell them apart. Overall, our results appear robust. The machine-learning approach works well for the similarity search.

GW170817—The pot of gold at the end of the rainbow

Advanced LIGO and Advanced Virgo have detected their first binary neutron star inspiral. Remarkably, this event was observed not just with gravitational waves, but also across the electromagnetic spectrum, from gamma-rays to radio. This discovery confirms the theory that binary neutron star mergers are the progenitors of short gamma-ray bursts and kilonovae, and may be the primary source of heavy elements like gold.

In this post, I’ll go through some of the story of GW170817. As for GW150914, I’ll write another post on the more technical details of our papers, once I’ve had time to catch up on sleep.

Discovery

The second observing run (O2) of the advanced gravitational-wave detectors started on 30 November 2016. The first detection came in January—GW170104. I was heavily involved in the analysis and paper writing for this. We finally finished up in June, at which point I was thoroughly exhausted. I took some time off in July [bonus note], and was back at work for August. With just one month left in the observing run, it would all be downhill from here, right?

August turned out to be the lava-filled, super-difficult final level of O2. As we have now announced, on August 14, we detected a binary black hole coalescence—GW170814. This was the first clear detection including Virgo, giving us superb sky localization. This is fantastic for astronomers searching for electromagnetic counterparts to our gravitational-wave signals. There was a flurry of excitement, and we thought that this was a fantastic conclusion to O2. We were wrong; this was just the save point before the final opponent. On August 17, we met the final, fireball-throwing boss.

Text message alert from Thursday 17 August 2017 13:58 BST

Text messages from our gravitational-wave candidate event database GraceDB. The final message is for GW170817, or as it was known at the time, G298048. It certainly caught my attention. The messages above it are for GW170814, which was picked up multiple times by our search algorithms. It was a busy week.

At 1:58 pm BST my phone buzzed with a text message, an automated alert of a gravitational-wave trigger. I was obviously excited—I recall that my exact thoughts were "What fresh hell is this?" I checked our online event database and saw that it was a single-detector trigger: it was only seen by our Hanford instrument. I started to relax; this was probably going to turn out to be a glitch. The template masses were low, in the neutron-star range, not like the black holes we've been finding. Then I saw the false alarm rate was better than one in 9000 years. Perhaps it wasn't just some noise after all—even though it's difficult to estimate false alarm rates accurately online, especially for single-detector triggers, this was significant! I kept reading. Scrolling down the page there was an external coincident trigger, a gamma-ray burst (GRB 170817A) within a couple of seconds…

Duh-nuh…

We’re gonna need a bigger author list. Credit: Zanuck/Brown Productions

Short gamma-ray bursts are some of the most powerful explosions in the Universe. I’ve always found it mildly disturbing that we didn’t know what causes them. The leading theory has been that they are the result of two neutron stars smashing together. Here seemed to be the proof.

The rapid response call was under way by the time I joined. There was a clear chirp in Hanford; you could see it by eye! We also had data from Livingston and Virgo. It was bad luck that they weren't folded into the online alert. There had been a dropout in the data transfer from Italy to the US, breaking the flow for Virgo. In Livingston, there was a glitch at the time of the signal which meant the data weren't automatically included in the search. My heart sank. Glitches are common—check out Gravity Spy for some examples—so it was only a matter of time until one overlapped with a signal [bonus note], and with GW170817 being such a long signal, it wasn't that surprising. However, this would complicate the analysis. Fortunately, the glitch is short and the signal is long (if this had been a high-mass binary black hole, things might not have been so smooth). We were able to exorcise the glitch. A preliminary sky map using all three detectors was sent out at 12:54 am BST. Not only did we defeat the final boss, we did a speed run on the hard difficulty setting first time [bonus note].

Signal and glitch

Spectrogram of Livingston data showing part of GW170817's chirp (which sweeps upwards in frequency) as well as the glitch (the big blip at about -0.6~\mathrm{s}). The lower panel shows how we removed the glitch: the grey line shows the gating window that was applied to zero the affected times for preliminary results; the blue line shows a fitted model of the glitch that was subtracted for the final results. You can clearly see the chirp well before the glitch, so there's no danger of it being an artefact of the glitch. Figure 2 of the GW170817 Discovery Paper.
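If you're curious what gating looks like in practice, here is a minimal sketch. The window shape, placement and duration here are illustrative, not the values used for GW170817:

```python
import numpy as np
from scipy.signal import windows

sample_rate = 4096  # samples per second
rng = np.random.default_rng(2)
data = rng.normal(size=8 * sample_rate)  # 8 s of mock whitened strain data

glitch_time = 4.0     # seconds into the segment (illustrative)
gate_duration = 0.5   # total width of the gate in seconds (illustrative)
n_gate = int(gate_duration * sample_rate)

# Build a window that is 1 far from the glitch and tapers smoothly to 0
# across the gated interval, so the sharp edges don't introduce artefacts.
gate = np.ones_like(data)
start = int(glitch_time * sample_rate) - n_gate // 2
gate[start:start + n_gate] = 1 - windows.tukey(n_gate, alpha=0.5)

gated_data = data * gate  # the glitch interval is now zeroed out
```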

The three-detector sky map provided a great localization for the source—this preliminary map had a 90% area of ~30 square degrees. It was just in time for that night’s observations. The plot below shows our gravitational-wave localizations in green—the long band is without Virgo, and the smaller is with all three detectors—as with GW170814, Virgo makes a big difference. The blue areas are the localizations from Fermi and INTEGRAL, the gamma-ray observatories which measured the gamma-ray burst. The inset is something new…

Overlapping localizations for GW170817's source

Localization of the gravitational-wave, gamma-ray, and optical signals. The main panel shows initial gravitational-wave 90% areas in green (with and without Virgo) and gamma-rays in blue (the IPN triangulation from the time delay between Fermi and INTEGRAL, and the Fermi GBM localization). The inset shows the location of the optical counterpart (the top panel was taken 10.9 hours after merger, the lower panel is a pre-merger reference without the transient). Figure 1 of the Multimessenger Astronomy Paper.

That night, the discoveries continued. Following up on our sky location, an optical counterpart (AT 2017gfo) was found. The source is just on the outskirts of the galaxy NGC 4993, which is right in the middle of the distance range we inferred from the gravitational-wave signal. At around 40 Mpc, this is the closest gravitational-wave source yet.

After this source was reported, I think just about every telescope possible was pointed at it. It may well be the most studied transient in the history of astronomy: there are ~250 circulars about the follow-up. Not only did we find an optical counterpart, but there was also emission in X-ray and radio. There was a delay in these appearing; I remember the excitement at our Collaboration meeting as the X-ray emission was reported (there was a lack of cake though).

The figure below tries to summarise all the observations. As you can see, it’s a mess because there is too much going on!

Gravitational-wave, gamma-ray, ultraviolet, optical, infrared and radio observations

The timeline of observations of GW170817’s source. Shaded dashes indicate times when information was reported in a Circular. Solid lines show when the source was observable in a band: the circles show a comparison of brightnesses for representative observations. Figure 2 of the Multimessenger Astronomy Paper.

The observations paint a compelling story. Two neutron stars inspiralled together and merged. Colliding two balls of nuclear-density material at around a third of the speed of light causes a big explosion. We get a jet blasted outwards and a gamma-ray burst. The ejected, neutron-rich material decays to heavy elements, and we see this hot material as a kilonova [bonus note]. The X-ray and radio may then be the afterglow formed by the bubble of ejected material pushing into the surrounding interstellar material.

Science

What have we learnt from our results? Here are some gravitational wave highlights.

We measure several thousand cycles from the inspiral. It is the most beautiful chirp! This is the loudest gravitational wave signal yet found, beating even GW150914. GW170817 has a signal-to-noise ratio of 32, while for GW150914 it is just 24.

Normalised spectrograms for GW170817

Time–frequency plots for GW170817 as measured by Hanford, Livingston and Virgo. The signal is clearly visible in the two LIGO detectors as the upward-sweeping chirp. It is not visible in Virgo because of its lower sensitivity and the source's position in the sky. The Livingston data have the glitch removed. Figure 1 of the GW170817 Discovery Paper.

The signal-to-noise ratios in Hanford, Livingston and Virgo were 19, 26 and 2 respectively. The signal is quiet in Virgo, which is why you can't spot it by eye in the plots above. The lack of a clear signal is really useful information, as it restricts where on the sky the source could be, as beautifully illustrated in the video below.
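These individual values combine (approximately in quadrature) to give the network signal-to-noise ratio quoted above:

```latex
\rho_{\mathrm{network}} = \sqrt{\rho_{\mathrm{H}}^2 + \rho_{\mathrm{L}}^2 + \rho_{\mathrm{V}}^2}
                        = \sqrt{19^2 + 26^2 + 2^2} \approx 32.
```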

While we measure the inspiral nicely, we don’t detect the merger: we can’t tell if a hypermassive neutron star is formed or if there is immediate collapse to a black hole. This isn’t too surprising at current sensitivity, the system would basically need to convert all of its energy into gravitational waves for us to see it.

From measuring all those gravitational wave cycles, we can measure the chirp mass stupidly well. Unfortunately, converting the chirp mass into the component masses is not easy. The ratio of the two masses is degenerate with the spins of the neutron stars, and we don’t measure these well. In the plot below, you can see the probability distributions for the two masses trace out bananas of roughly constant chirp mass. How far along the banana you go depends on what spins you allow. We show results for two ranges: one with spins (aligned with the orbital angular momentum) up to 0.89, the other with spins up to 0.05. There’s nothing physical about 0.89 (it was just convenient for our analysis), but it is designed to be agnostic, and above the limit you’d plausibly expect for neutron stars (they should rip themselves apart at spins of ~0.7); the lower limit of 0.05 should safely encompass the spins of the binary neutron stars (which are close enough to merge in the age of the Universe) we have estimated from pulsar observations. The masses roughly match what we have measured for the neutron stars in our Galaxy. (The combinations at the tip of the banana for the high spins would be a bit odd).
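For those who want the formula, the chirp mass is the combination of the component masses m_1 and m_2 which sets how quickly the inspiral sweeps up in frequency:

```latex
\mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}.
```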

Binary neutron star masses

Estimated masses for the two neutron stars in the binary. We show results for two different spin limits; \chi_z is the component of the spin aligned with the orbital angular momentum. The two-dimensional plot shows the 90% probability contour, which follows a line of constant chirp mass. The one-dimensional plot shows individual masses; the dotted lines mark 90% bounds away from equal mass. Figure 4 of the GW170817 Discovery Paper.

If we were dealing with black holes, we'd be done: they are only described by mass and spin. Neutron stars are more complicated. Black holes are just made of warped spacetime, neutron stars are made of delicious nuclear material. This can get distorted during the inspiral—tides are raised on one by the gravity of the other. These extract energy from the orbit and accelerate the inspiral. The tidal deformability depends on the properties of the neutron star matter (described by its equation of state). The fluffier a neutron star is, the bigger the impact of tides; the more compact, the smaller the impact. We don't know enough about neutron star material to predict this with certainty—by measuring the tidal deformation we can learn about the allowed range. Unfortunately, we also didn't yet have good model waveforms including tides, so to start with we've just done a preliminary analysis (an improved analysis was done for the GW170817 Properties Paper). We find that some of the stiffer equations of state (the ones which predict larger neutron stars and bigger tides) are disfavoured; however, we cannot rule out zero tides. This means we can't rule out the possibility that we have found two low-mass black holes from the gravitational waves alone. This would be an interesting discovery; however, the electromagnetic observations mean that the more obvious explanation of neutron stars is more likely.

From the gravitational wave signal, we can infer the source distance. Combining this with the electromagnetic observations we can do some cool things.

First, the gamma-ray burst arrived at Earth 1.7 seconds after the merger. 1.7 seconds is not a lot of difference after travelling for something like 85–160 million years (that's roughly the time since the Cretaceous or Late Jurassic periods). Of course, we don't expect the gamma-rays to be emitted at exactly the moment of merger, but allowing for a sensible range of emission times, we can bound the difference between the speed of gravity and the speed of light. In general relativity they should be the same, and we find that the difference should be no more than three parts in 10^{15}.
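As a rough back-of-the-envelope check (assuming the conservative lower distance bound of about 26 Mpc, roughly 8 \times 10^{23}~\mathrm{m}, and allowing a few seconds of emission-time uncertainty):

```latex
\frac{\Delta v}{v} \approx \frac{c\,\Delta t}{D}
 \sim \frac{(3 \times 10^{8}~\mathrm{m\,s^{-1}}) \times (\mathrm{a~few~s})}{8 \times 10^{23}~\mathrm{m}}
 \sim 10^{-15}.
```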

Second, we can combine the gravitational-wave distance with the redshift of the galaxy to measure the Hubble constant, the rate of expansion of the Universe. Our best estimates for the Hubble constant, from the cosmic microwave background and from supernova observations, are inconsistent with each other (the most recent supernova analysis only increases the tension). Which is awkward. Gravitational-wave observations should have different sources of error and help to resolve the difference. Unfortunately, with only one event our uncertainties are rather large, which leads to a diplomatic outcome.

GW170817 Hubble constant

Posterior probability distribution for the Hubble constant H_0 inferred from GW170817. The lines mark 68% and 95% intervals. The coloured bands are measurements from the cosmic microwave background (Planck) and supernovae (SHoES). Figure 1 of the Hubble Constant Paper.

Finally, we can now change from estimating upper limits on binary neutron star merger rates to estimating the rates! We estimate the merger rate density is in the range 1540^{+3200}_{-1220}~\mathrm{Gpc^{-3}\,yr^{-1}} (assuming a uniform distribution of neutron star masses between one and two solar masses). This is surprisingly close to what the Collaboration expected back in 2010: a rate of between 10~\mathrm{Gpc^{-3}\,yr^{-1}} and 10000~\mathrm{Gpc^{-3}\,yr^{-1}}, with a realistic rate of 1000~\mathrm{Gpc^{-3}\,yr^{-1}}. This means that we are on track to see many more binary neutron stars—perhaps one a week at design sensitivity!
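As a rough sanity check of the one-a-week figure (assuming a design binary neutron star range of about 190 Mpc, so a sensitive volume of roughly 0.03~\mathrm{Gpc^3}):

```latex
R \times V \approx 1540~\mathrm{Gpc^{-3}\,yr^{-1}} \times \tfrac{4}{3}\pi \left(0.19~\mathrm{Gpc}\right)^{3}
 \approx 40~\mathrm{yr^{-1}},
```

which is indeed close to one detection per week.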

Summary

Advanced LIGO and Advanced Virgo observed a binary neutron star inspiral. The rest of the astronomical community has observed what happened next (sadly there are no neutrinos). This is the first time we have such complementary observations—hopefully there will be many more to come. There'll be a huge number of results coming out over the following days and weeks. From these, we'll start to piece together more information on what neutron stars are made of, and what happens when you smash them together (take that, particle physicists).

Also: I’m exhausted, my inbox is overflowing, and I will have far too many papers to read tomorrow.

GW170817 Discovery Paper: GW170817: Observation of gravitational waves from a binary neutron star inspiral
Multimessenger Astronomy Paper: Multi-messenger observations of a binary neutron star merger
Data release: LIGO Open Science Center

If you’re looking for the most up-to-date results regarding GW170817, check out the O2 Catalogue Paper.

Bonus notes

Inbox zero

Over my vacation I cleaned up my email. I had a backlog starting around September 2015. I think there were over 6000 messages which I sorted or deleted. I had about 20 left to deal with when I got back to work. GW170817 undid that. Despite doing my best to keep up, there are over 1000 emails in my inbox…

Worst case scenario

Around the start of O2, I was asked when I expected our results to be public. I said it would depend upon what we found. If it was only high-mass black holes, those are quick to analyse and we know what to do with them, so results shouldn't take long now we have the first few out of the way. In this case, perhaps a couple of months, as we would have been generating results as we went along. However, the worst-case scenario would be a binary neutron star overlapping with non-Gaussian noise. Binary neutron stars are more difficult to analyse (they are longer signals, and there are matter effects to worry about), and it would be complicated to get everyone happy with our results because we were doing lots of things for the first time. Obviously, if one of these happened at the end of the run, there'd be quite a delay…

I think I got that half-right. We've done amazingly well analysing GW170817 to get results out in just two months, but I think it will be a while before we get the full O2 set of results out, as we've been neglecting other things (you'll notice we've not updated our binary black hole merger rate estimate since GW170104, nor given detailed results for testing general relativity with the more recent detections).

At the time of the GW170817 alert, I was working on writing a research proposal. As part of this, I was explaining why it was important to continue working on gravitational-wave parameter estimation, in particular how to deal with non-Gaussian or non-stationary noise. I think I may be a bit of a jinx. For GW170817, the glitch wasn't a big problem; these types of blips can be removed. I'm more concerned about the longer-duration ones, which are less easy to separate out from background noise. Don't say I didn't warn you in O3.

Parameter estimation rota

The duty of analysing signals to infer their source properties was divided up into shifts for O2. On January 4, the time of GW170104, I was on shift with my partner Aaron Zimmerman. It was his first day. Having survived that madness, Aaron signed back up for the rota. Can you guess who was on shift for the week which contained GW170814 and GW170817? Yep, Aaron (this time partnered with the excellent Carl-Johan Haster). Obviously, we'll need to have Aaron on rota for the entirety of O3. In preparation, he has already started on paper drafting:

Methods Section: Chained ROTA member to a terminal, ignored his cries for help. Detections followed swiftly.

Especially made

The lightest elements (hydrogen, helium and lithium) were made during the Big Bang. Stars burn these to make heavier elements; energy can be released by fusion up to around iron. Therefore, heavier elements need to be made elsewhere, for example in the material ejected from supernovae or (as we have now seen) neutron star mergers, where there are lots of neutrons flying around to be absorbed. Elements (like gold and platinum) formed by this rapid neutron capture are known as r-process elements, I think because they are beloved by pirates.

A couple of weeks ago, the Nobel Prize in Physics was announced for the observation of gravitational waves. In December, the laureates will be presented with a gold (not chocolate) medal. I love the idea that this gold may have come from merging neutron stars.

Nobel medal

Here’s one we made earlier. Credit: Associated Press/F. Vergara