Dragonfly 44: A potential Dark Matter Galaxy

Title: A High Stellar Velocity Dispersion and ~100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Publication: ApJ, v828, Number 1, arXiv: 1606.06291

The title of this paper sounds like a standard astrophysics analysis; but dig a little deeper and you’ll find what I think is an incredibly interesting, surprising, and unexpected observation.

The Coma Cluster: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)

Last year, using the W. M. Keck Observatory and the Gemini North Telescope on Maunakea, Hawaii, the Dragonfly Telephoto Array team observed the Coma cluster (a large cluster of galaxies in the constellation Coma Berenices – I’ve included a Hubble image to the left). The team identified a population of large, very low surface brightness (i.e., not a lot of stars), spheroidal galaxies, including an Ultra Diffuse Galaxy (UDG) called Dragonfly 44 (shown below). They determined that Dragonfly 44 has so few stars that their gravity alone could not hold it together – so some other matter had to be involved – namely DARK MATTER (my favorite kind of unknown matter).


The ultra-diffuse galaxy Dragonfly 44. The galaxy consists almost entirely of dark matter. It is surrounded by faint, compact sources. Image credit: Pieter van Dokkum / Roberto Abraham / Gemini Observatory / SDSS / AURA.

The team used the DEIMOS instrument installed on Keck II to measure the velocities of stars for 33.5 hours over a period of six nights so they could determine the galaxy’s mass. The measured velocity dispersion of Dragonfly 44’s stars suggests that it has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way. In other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I’ve also included the plot of mass-to-light ratio vs. dynamical mass. This illustrates how unique Dragonfly 44 is compared to other dark matter dominated galaxies like dwarf spheroidal galaxies.
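To get a feel for the numbers, here is a rough dynamical-mass estimate of the kind used for dispersion-supported galaxies. This is only a sketch using the Wolf et al. (2010) estimator; the dispersion and half-light radius below are approximately the values reported for Dragonfly 44, and the prefactor drops order-unity corrections.

```python
# Rough dynamical mass within the half-light radius:
# M(<r_1/2) ~ 4 * sigma^2 * R_1/2 / G (Wolf et al. 2010 estimator).
G = 4.302e-6   # gravitational constant in kpc * (km/s)^2 / M_sun
sigma = 47.0   # stellar velocity dispersion in km/s (approx. the paper's value)
r_half = 4.6   # half-light radius in kpc (approx. the paper's value)

m_half = 4 * sigma**2 * r_half / G   # mass within r_1/2, in solar masses
print(f"M(<r_1/2) ~ {m_half:.2e} M_sun")
```

This gives roughly 10^10 solar masses inside the half-light radius alone; the trillion-solar-mass figure quoted above comes from extrapolating the halo well beyond the stars.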



Relation between dynamical mass-to-light ratio and dynamical mass. Open symbols are dispersion-dominated objects from Zaritsky, Gonzalez, & Zabludoff (2006) and Wolf et al. (2010). The UDGs VCC 1287 (Beasley et al. 2016) and Dragonfly 44 fall outside of the band defined by the other galaxies, having a very high M/L ratio for their mass.

What is particularly exciting is that we don’t understand how galaxies like this form.

Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. But we’ll need to discover more to fully understand them.









Further reading (works by the same authors)
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster, arXiv: 1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster, arXiv: 1504.03320

Gravity in the Next Dimension: Micro Black Holes at ATLAS

Article: Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at √s = 13 TeV
Authors: The ATLAS Collaboration
Reference: arXiv:1606.02265 [hep-ex]

What would gravity look like if we lived in a 6-dimensional space-time? Models of TeV-scale gravity theorize that the fundamental scale of gravity, M_D, is much lower than what’s measured here in our normal, 4-dimensional space-time. If true, this could explain the large difference between the scale of electroweak interactions (order of 100 GeV) and gravity (order of 10^16 GeV), an important open question in particle physics. There are several theoretical models to describe these extra dimensions, and they all predict interesting new signatures in the form of non-perturbative gravitational states. One of the coolest examples of such a state is microscopic black holes. Conveniently, this particular signature could be produced and measured at the LHC!
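To see why extra dimensions can lower the fundamental gravity scale, here is a quick numerical sketch of the ADD-style relation M_Pl² ~ M_D^(n+2) R^n for n compactified extra dimensions of size R (order-unity factors dropped; M_D = 1 TeV is an illustrative choice, not a value from the paper):

```python
# Size R of n extra dimensions needed so that a fundamental scale M_D ~ 1 TeV
# reproduces the observed 4D Planck mass: M_Pl^2 ~ M_D^(n+2) * R^n.
M_PL = 1.22e19     # 4D Planck mass in GeV
M_D = 1.0e3        # assumed fundamental gravity scale in GeV (1 TeV)
HBARC = 1.973e-16  # GeV * m, converts 1/GeV to meters

radii = {}
for n in (2, 4, 6):
    inv_R_gev = (M_D ** (n + 2) / M_PL**2) ** (1.0 / n)  # 1/R in GeV
    radii[n] = HBARC / inv_R_gev
    print(f"n={n}: R ~ {radii[n]:.2e} m")
```

For n = 2 the required size is around a millimeter, shrinking rapidly as n grows, which is the classic ADD result.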

Sounds cool, but how do you actually look for microscopic black holes with a proton-proton collider? Because we don’t have a full theory of quantum gravity (yet), ATLAS researchers made predictions for the production cross-sections of these black holes using semi-classical approximations that are valid when the black hole mass is above M_D. This production cross-section is also expected to be dramatically larger when the energy scale of the interactions (pp collisions) surpasses M_D. We can’t directly detect black holes with ATLAS, but many of the decay channels of these black holes include leptons in the final state, which IS something that can be measured at ATLAS! This particular ATLAS search looked for final states with at least 3 high transverse momentum (pT) objects, at least one of which must be a lepton (electron or muon); the others can be hadronic jets or additional leptons. The sum of the transverse momenta is used as a discriminating variable, since the signal is expected to appear only at high pT.
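The selection described above can be sketched in a few lines. This is a toy illustration, not the ATLAS implementation: the object list, the 100 GeV threshold, and the event format are all invented for the example.

```python
# Toy event selection: keep events with >= 3 high-pT objects, at least one of
# them a lepton, and return the scalar sum of transverse momenta (the
# discriminating variable). Thresholds here are illustrative only.
def sum_pt(event, min_objects=3, pt_cut=100.0):
    """Return sum(pT) if the event passes, else None.

    `event` is a list of (pt_GeV, kind) pairs, kind in {"jet", "e", "mu"}.
    """
    hard = [(pt, kind) for pt, kind in event if pt > pt_cut]
    has_lepton = any(kind in ("e", "mu") for _, kind in hard)
    if len(hard) >= min_objects and has_lepton:
        return sum(pt for pt, _ in hard)
    return None

event = [(350.0, "jet"), (220.0, "jet"), (180.0, "e"), (40.0, "jet")]
print(sum_pt(event))  # 750.0
```

A black hole decaying democratically to many hard objects piles up at large values of this sum, while Standard Model backgrounds fall off steeply.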

This search used the full 3.2 fb⁻¹ of 13 TeV data collected by ATLAS in 2015 to search for this signal above the relevant Standard Model backgrounds (Z+jets, W+jets, and ttbar, all of which produce similar jet final states). The results are shown in Figure 1 (electron and muon channels are presented separately). The backgrounds are shown as colored histograms, the data as black points, and two microscopic black hole models as green and blue lines. There is a slight excess in the 3 TeV region in the electron channel, which corresponds to a p-value of only 1% when tested against the background-only hypothesis. Unfortunately, this isn’t enough evidence to indicate new physics yet, but it’s an exciting result nonetheless! This analysis was also used to improve exclusion limits on individual extra-dimensional gravity models, as shown in Figure 2. All limits were much stronger than those set in Run 1.
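For intuition, a background-only p-value of 1% translates into a modest Gaussian significance, which is why this excess falls well short of the particle physics discovery threshold of 5σ. A one-line check, using the usual one-sided convention:

```python
# Convert the quoted background-only p-value into a Gaussian significance.
from statistics import NormalDist

p_value = 0.01
z = NormalDist().inv_cdf(1 - p_value)
print(f"p = {p_value} corresponds to ~{z:.1f} sigma")  # ~2.3 sigma
```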

Figure 1: Momentum distributions in the electron (a) and muon (b) channels.


Figure 2: Exclusion limits in the Mth, MD plane for models with various numbers of extra dimensions

So: no evidence of microscopic black holes or extra-dimensional gravity at the LHC yet, but there is a promising excess and Run 2 has only just begun. Since publication, ATLAS has collected another 10 fb⁻¹ of √s = 13 TeV data that has yet to be analyzed. These results could also be used to constrain other Beyond the Standard Model searches at the TeV scale that have similar high-pT leptonic final states, which would give us more information about what can and can’t exist outside of the Standard Model. There is certainly more to be learned from this search!





The CMB sheds light on galaxy clusters: Observing the kSZ signal with ACT and BOSS

Article: Detection of the pairwise kinematic Sunyaev-Zel’dovich effect with BOSS DR11 and the Atacama Cosmology Telescope
Authors: F. De Bernardis, S. Aiola, E. M. Vavagiakis, M. D. Niemack, N. Battaglia, and the ACT Collaboration
Reference: arXiv:1607.02139

Editor’s note: this post is written by one of the students involved in the published result.

Just as X-rays shining through your body can inform you about your health, the cosmic microwave background (CMB) shining through galaxy clusters can tell us about the universe we live in. When light from the CMB is distorted by the high energy electrons present in galaxy clusters, it’s called the Sunyaev-Zel’dovich effect. A new 4.1σ measurement of the kinematic Sunyaev-Zel’dovich (kSZ) signal has been made from the most recent Atacama Cosmology Telescope (ACT) CMB maps and galaxy data from the Baryon Oscillation Spectroscopic Survey (BOSS). With steps forward like this one, the kinematic Sunyaev-Zel’dovich signal could become a probe of cosmology, astrophysics and particle physics alike.

The Kinematic Sunyaev-Zel’dovich Effect

It rolls right off the tongue, but what exactly is the kinematic Sunyaev-Zel’dovich signal? Galaxy clusters distort the cosmic microwave background before it reaches Earth, so we can learn about these clusters by looking at these CMB distortions. In our X-ray metaphor, the map of the CMB is the image of the X-ray of your arm, and the galaxy clusters are the bones. Galaxy clusters are the largest gravitationally bound structures we can observe, so they serve as important tools to learn more about our universe. In its essence, the Sunyaev-Zel’dovich effect is inverse-Compton scattering of cosmic microwave background photons off of the gas in these galaxy clusters, whereby the photons gain a “kick” in energy by interacting with the high energy electrons present in the clusters.

The Sunyaev-Zel’dovich effect can be divided up into two categories: thermal and kinematic. The thermal Sunyaev-Zel’dovich (tSZ) effect is the spectral distortion of the cosmic microwave background in a characteristic manner due to the photons gaining, on average, energy from the hot (~10^7–10^8 K) gas of the galaxy clusters. The kinematic (or kinetic) Sunyaev-Zel’dovich (kSZ) effect is a second-order effect (about a factor of 10 smaller than the tSZ effect) that is caused by the motion of galaxy clusters with respect to the cosmic microwave background rest frame. If the CMB photons pass through galaxy clusters that are moving, they are Doppler shifted due to the cluster’s peculiar velocity (the velocity that cannot be explained by Hubble’s law, which states that objects recede from us at a speed proportional to their distance). The kinematic Sunyaev-Zel’dovich effect is the only known way to directly measure the peculiar velocities of objects at cosmological distances, and is thus a valuable source of information for cosmology. It allows us to probe megaparsec and gigaparsec scales – that’s around 30,000 times the diameter of the Milky Way!
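To see why the kSZ signal is so hard to extract, it helps to put a number on it. The shift scales as ΔT/T ≈ τ (v_los / c), where τ is the Thomson optical depth through the cluster; the values below are typical illustrative numbers for a massive cluster, not figures from the paper.

```python
# Order-of-magnitude kSZ temperature shift: |dT| = T_CMB * tau * (v_los / c).
T_CMB = 2.725   # CMB temperature in K
c = 2.998e5     # speed of light in km/s
tau = 5e-3      # Thomson optical depth through the cluster (assumed)
v_los = 300.0   # line-of-sight peculiar velocity in km/s (assumed)

dT = T_CMB * tau * (v_los / c)
print(f"|dT| ~ {dT * 1e6:.1f} microkelvin")
```

A shift of order ten microkelvin, against a 2.7 K background and instrument noise, is why a large statistical sample of clusters is needed.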

A schematic of the Sunyaev-Zel’dovich effect resulting in higher energy (or blue shifted) photons of the cosmic microwave background (CMB) when viewed through the hot gas present in galaxy clusters. Source: UChicago Astronomy.


Measuring the kSZ Effect

To make the measurement of the kinematic Sunyaev-Zel’dovich signal, the Atacama Cosmology Telescope (ACT) collaboration used a combination of cosmic microwave background maps from two years of observations by ACT. The CMB map used for the analysis overlapped with ~68000 galaxy sources from the Large Scale Structure (LSS) DR11 catalog of the Baryon Oscillation Spectroscopic Survey (BOSS). The catalog lists the coordinate positions of galaxies along with some of their properties. The most luminous of these galaxies were assumed to be located at the centers of galaxy clusters, so temperature signals from the CMB map were taken at the coordinates of these galaxy sources in order to extract the Sunyaev-Zel’dovich signal.

While the smallness of the kSZ signal with respect to the tSZ signal and the noise level in current CMB maps poses an analysis challenge, there exist several approaches to extracting the kSZ signal. To make their measurement, the ACT collaboration employed a pairwise statistic. “Pairwise” refers to the momentum between pairs of galaxy clusters, and “statistic” indicates that a large sample is used to rule out the influence of unwanted effects.

Here’s the approach: nearby galaxy clusters move towards each other on average, due to gravity. We can’t easily measure the three-dimensional momentum of clusters, but the average pairwise momentum can be estimated by using the line of sight component of the momentum, along with other information such as redshift and angular separations between clusters. The line of sight momentum is directly proportional to the measured kSZ signal: the microwave temperature fluctuation which is measured from the CMB map. We want to know if we’re measuring the kSZ signal when we look in the direction of galaxy clusters in the CMB map. Using the observed CMB temperature to find the line of sight momenta of galaxy clusters, we can estimate the mean pairwise momentum as a function of cluster separation distance, and check to see if we find that nearby galaxies are indeed falling towards each other. If so, we know that we’re observing the kSZ effect in action in the CMB map.
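A schematic single-bin version of a pairwise estimator of this kind is sketched below. The geometric weight c_ij projects each pair’s separation onto the line of sight; the real analysis bins pairs by comoving separation and handles many systematics, so treat this as an illustration of the idea only.

```python
# Schematic pairwise kSZ estimator:
#   p_hat = - sum_ij (T_i - T_j) * c_ij / sum_ij c_ij^2,
# where c_ij = (r_i - r_j) . (rhat_i + rhat_j) / (2 |r_i - r_j|)
# projects the pair geometry onto the line of sight.
import numpy as np

def pairwise_estimator(positions, temperatures):
    """Single-bin pairwise momentum estimate over all cluster pairs."""
    num, den = 0.0, 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            sep = positions[i] - positions[j]
            rij = np.linalg.norm(sep)
            ri_hat = positions[i] / np.linalg.norm(positions[i])
            rj_hat = positions[j] / np.linalg.norm(positions[j])
            c_ij = sep @ (ri_hat + rj_hat) / (2 * rij)
            num += (temperatures[i] - temperatures[j]) * c_ij
            den += c_ij**2
    return -num / den
```

Averaged over many pairs, uncorrelated CMB fluctuations and noise cancel, while the coherent infall of nearby clusters toward each other leaves a net signal.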

For the measurement quoted in their paper, the ACT collaboration finds the average pairwise momentum as a function of galaxy cluster separation, and explores a variety of error determinations and sources of systematic error. The most conservative errors based on simulations give signal-to-noise estimates that vary between 3.6 and 4.1.

The mean pairwise momentum estimator and best fit model for a selection of 20000 objects from the DR11 Large Scale Structure catalog, plotted as a function of comoving separation. The dashed line is the linear model, and the solid line is the model prediction including nonlinear redshift space corrections. The best fit provides a 4.1σ evidence of the kSZ signal in the ACTPol-ACT CMB map. Source: arXiv:1607.02139.

The ACT and BOSS results are an improvement on the 2012 ACT detection, and are comparable with results from the South Pole Telescope (SPT) collaboration that use galaxies from the Dark Energy Survey. The ACT and BOSS measurement represents a step forward towards improved extraction of kSZ signals from CMB maps. Future surveys such as Advanced ACTPol, SPT-3G, the Simons Observatory, and next-generation CMB experiments will be able to apply the methods discussed here to improved CMB maps in order to achieve strong detections of the kSZ effect. With new data that will enable better measurements of galaxy cluster peculiar velocities, the pairwise kSZ signal will become a powerful probe of our universe in the years to come.

Implications and Future Experiments

One interesting consequence for particle physics will be more stringent constraints on the sum of the neutrino masses from the pairwise kinematic Sunyaev-Zel’dovich effect. Upper bounds on the neutrino mass sum from cosmological measurements of large scale structure and the CMB have the potential to determine the neutrino mass hierarchy, one of the next major unknowns of the Standard Model to be resolved, if the mass hierarchy is indeed a “normal hierarchy” with ν3 being the heaviest mass state. If the upper bound of the neutrino mass sum is measured to be less than 0.1 eV, the inverted hierarchy scenario would be ruled out, due to there being a lower limit on the mass sum of ~0.095 eV for an inverted hierarchy and ~0.056 eV for a normal hierarchy.
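The quoted lower limits on the mass sum follow directly from the measured mass splittings if the lightest state is massless. Here is that arithmetic, using approximate global-average splitting values (not numbers from this paper):

```python
# Minimal neutrino mass sums implied by the oscillation splittings, assuming
# the lightest mass state is zero. Splittings are approximate, in eV^2.
from math import sqrt

dm2_21 = 7.5e-5   # solar splitting, m2^2 - m1^2
dm2_31 = 2.5e-3   # atmospheric splitting, |m3^2 - m1^2| (approx.)

# Normal hierarchy: m1 = 0, m2 = sqrt(dm2_21), m3 = sqrt(dm2_31)
sum_nh = sqrt(dm2_21) + sqrt(dm2_31)

# Inverted hierarchy: m3 = 0, m1 ~ m2 ~ sqrt(dm2_31)
sum_ih = sqrt(dm2_31) + sqrt(dm2_31 + dm2_21)

print(f"normal:   sum >= {sum_nh:.3f} eV")
print(f"inverted: sum >= {sum_ih:.3f} eV")
```

The two floors land near 0.06 eV and 0.10 eV, matching the values quoted above, which is why a cosmological bound below 0.1 eV would rule out the inverted ordering.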

Forecasts for kSZ measurements in combination with input from Planck predict possible constraints on the neutrino mass sum with a precision of 0.29 eV, 0.22 eV and 0.096 eV for Stage II (ACTPol + BOSS), Stage III (Advanced ACTPol + BOSS) and Stage IV (next generation CMB experiment + DESI) surveys respectively, with the possibility of much improved constraints with optimal conditions. As cosmic microwave background maps are improved and Sunyaev-Zel’dovich analysis methods are developed, we have a lot to look forward to.



The Fermi LAT Data Depicting Dark Matter Detection

The center of the galaxy is brighter than astrophysicists expected. Could this be the result of the self-annihilation of dark matter? Chris Karwin, a graduate student at the University of California, Irvine, presents the Fermi collaboration’s analysis.

Editor’s note: this is a guest post by one of the students involved in the published result.

Presenting: Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Authors: The Fermi-LAT Collaboration (ParticleBites blogger is a co-author)
Reference: arXiv:1511.02938; Astrophys. J. 819 (2016) no. 1, 44
Artist’s rendition of the Fermi Gamma-ray Space Telescope in orbit. Image: http://fermi.gsfc.nasa.gov


Like other telescopes, the Fermi Gamma-Ray Space Telescope is a satellite that scans the sky collecting light. Unlike many telescopes, it searches for very high energy light: gamma-rays. The satellite’s main component is the Large Area Telescope (LAT). When this detector is hit with a high-energy gamma-ray, it measures the energy and the direction in the sky from which it originated. The data provided by the LAT is an all-sky photon counts map:

All-sky counts map of gamma-rays from the Fermi-LAT. The color scale corresponds to the number of detected photons. Image: http://svs.gsfc.nasa.gov/cgi-bin/details.cgi?aid=10887

In 2009, researchers noticed that there appeared to be an excess of gamma-rays coming from the galactic center. This excess is found by making a model of the known astrophysical gamma-ray sources and then comparing it to the data.

What makes the excess so interesting is that its features seem consistent with predictions from models of dark matter annihilation. Dark matter theory and simulations predict:

  1. The distribution of dark matter in space. The gamma rays coming from dark matter annihilation should follow this distribution, or spatial morphology.
  2. The particles to which dark matter directly annihilates. This gives a prediction for the expected energy spectrum of the gamma-rays.

Although a dark matter interpretation of the excess is a very exciting scenario that would tell us new things about particle physics, there are also other possible astrophysical explanations. For example, many physicists argue that the excess may be due to an unresolved population of milli-second pulsars. Another possible explanation is that it is simply due to the mis-modeling of the background. Regardless of the physical interpretation, the primary objective of the Fermi analysis is to characterize the excess.

The main systematic uncertainty of the experiment is our limited understanding of the backgrounds: the gamma rays produced by known astrophysical sources. In order to include this uncertainty in the analysis, four different background models are constructed. Although these models are methodically chosen so as to account for our lack of understanding, it should be noted that they do not necessarily span the entire range of possible error. For each of the background models, a gamma-ray excess is found. With the objective of characterizing the excess, additional components are then added to the model. Among the different components tested, it is found that the fit is most improved when dark matter is added. This is an indication that the signal may be coming from dark matter annihilation.


This analysis is interested in the gamma rays coming from the galactic center. However, when looking towards the galactic center the telescope detects all of the gamma-rays coming from both the foreground and the background. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources.

Schematic of the experiment. We are interested in gamma-rays coming from the galactic center, represented by the red circle. However, the LAT detects all of the gamma-rays coming from the foreground and background, represented by the blue region. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources. Image adapted from Universe Today: http://www.universetoday.com/106062/what-is-the-milky-way-2/

An overview of the analysis chain is as follows. The model of the observed region comes from performing a likelihood fit of the parameters for the known astrophysical sources. A likelihood fit is a statistical procedure that calculates the probability of observing the data given a set of parameters. In general there are two types of sources:

  1. Point sources such as known pulsars
  2. Diffuse sources due to the interaction of cosmic rays with the interstellar gas and radiation field
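To make the idea of a likelihood fit concrete, here is a minimal binned-Poisson example with invented counts. The real analysis fits spectral parameters for many point and diffuse sources simultaneously; this sketch fits a single background normalization by brute force.

```python
# Minimal binned Poisson likelihood fit: find the background normalization
# that maximizes the probability of the observed counts. Counts are invented.
import math

def neg_log_likelihood(norm, expected, observed):
    """-ln L for Poisson counts with a scaled background prediction."""
    nll = 0.0
    for mu, n in zip(expected, observed):
        lam = norm * mu
        nll -= n * math.log(lam) - lam - math.lgamma(n + 1)
    return nll

expected = [100.0, 80.0, 50.0, 20.0]   # background model prediction per bin
observed = [112, 85, 57, 19]           # toy data

# Brute-force scan over the normalization (a real fit uses a minimizer).
norms = [0.5 + 0.001 * i for i in range(1500)]
best = min(norms, key=lambda a: neg_log_likelihood(a, expected, observed))
print(f"best-fit normalization: {best:.3f}")
```

Adding a candidate component (such as a dark matter template) and checking how much the maximum likelihood improves is exactly the logic used to characterize the excess.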

Parameters for these two types of sources are fit at the same time. One of the main uncertainties in the background is the cosmic ray source distribution. This is the number of cosmic ray sources as a function of distance from the center of the galaxy. It is believed that cosmic rays come from supernovae. However, the source distribution of supernova remnants is not well determined. Therefore, other tracers must be used. In this context a tracer refers to a measurement that can be made to infer the distribution of supernova remnants. This analysis uses both the distribution of OB stars and the distribution of pulsars as tracers. The former refers to OB associations, which are regions of O-type and B-type stars. These hot massive stars are progenitors of supernovae. In contrast to these progenitors, the distribution of pulsars is also used since pulsars are the end state of supernovae. These two extremes serve to encompass the uncertainty in the cosmic ray source distribution, although, as mentioned earlier, this uncertainty is by no means bracketing. Two of the four background model variants come from these distributions.

An overview of the analysis chain. In general there are two types of sources: point sources and diffuse source. The diffuse sources are due to the interaction of cosmic rays with interstellar gas and radiation fields. Spectral parameters for the diffuse sources are fit concurrently with the point sources using a likelihood fit. The question mark represents the possibility of an additional component possibly missing from the model, such as dark matter.

The information pertaining to the cosmic rays, gas, and radiation fields is input into a propagation code called GALPROP. This produces an all-sky gamma-ray intensity map for each of the physical processes that produce gamma-rays. These processes include the production of neutral pions (which quickly decay into gamma-rays) through interactions of cosmic-ray protons with the interstellar gas, cosmic-ray electrons up-scattering low-energy photons of the radiation field via inverse Compton scattering, and cosmic-ray electrons interacting with the gas to produce gamma-rays via bremsstrahlung radiation.

Residual map for one of the background models. Image: arXiv:1511.02938

The maps of all the processes are then tuned to the data. In general, tuning is a procedure by which the background models are optimized for the particular data set being used. This is done using a likelihood analysis. There are two different tuning procedures used for this analysis. One tunes the normalization of the maps, and the other tunes both the normalization and the extra degrees of freedom related to the gas emission interior to the solar circle. These two tuning procedures, performed for the two cosmic ray source models, make up the four different background models.

Point source models are then determined for each background model, and the spectral parameters for both diffuse sources and point sources are simultaneously fit using a likelihood analysis.

Results and Conclusion

Best fit dark matter spectra for the four different background models. Image: arXiv:1511.02938

In the plot of the best fit dark matter spectra for the four background models, the hatching of each curve corresponds to the statistical uncertainty of the fit. The systematic uncertainty can be interpreted as the region enclosed by the four curves. Results from other analyses of the galactic center are overlaid on the plot. This result shows that the galactic center analysis performed by the Fermi collaboration allows a broad range of possible dark matter spectra.

The Fermi analysis has shown that within systematic uncertainties a gamma-ray excess coming from the galactic center is detected. In order to try to explain this excess, additional components were added to the model. Among the additional components tested, it was found that the fit is most improved with the addition of a dark matter component. However, this does not establish that a dark matter signal has been detected. There is still a good chance that the excess is due to something else, such as an unresolved population of millisecond pulsars or mis-modeling of the background. Further work must be done to better understand the background and better characterize the excess. Nevertheless, it remains an exciting prospect that the gamma-ray excess could be a signal of dark matter.



LIGO and Gravitational Waves: A Hep-ex perspective

The exciting Twitter rumors have been confirmed! On Thursday, LIGO finally announced the first direct observation of gravitational waves, a prediction 100 years in the making. The media storm has been insane, with physicists referring to the discovery as “more significant than the discovery of the Higgs boson… the biggest scientific breakthrough of the century.” Watching Thursday’s press conference from CERN, it was hard not to make comparisons between the discovery of the Higgs and LIGO’s announcement.



The gravitational-wave event GW150914 observed by the LIGO Collaboration


Long-standing searches for well-known phenomena


The Higgs boson was billed as the last piece of the Standard Model puzzle. The existence of the Higgs was predicted in the 1960s in order to explain the masses of the Standard Model’s vector bosons, and to avoid non-unitary amplitudes in W boson scattering. Even if the Higgs didn’t exist, particle physicists expected new physics to come into play at the TeV scale, and experiments at the LHC were designed to find it.


Similarly, gravitational waves were the last untested fundamental prediction of General Relativity. At first, physicists remained skeptical of the existence of gravitational waves, but the search began in earnest with Joseph Weber in the 1950s (Forbes). Indirect evidence of gravitational waves was demonstrated a few decades later: a binary system consisting of a pulsar and a neutron star was observed to lose energy over time, presumably in the form of gravitational waves. Taking inspiration from Weber’s pioneering efforts, LIGO developed two detectors of unprecedented precision in order to finally make a direct observation.


Unlike the Higgs, General Relativity makes clear predictions about the properties of gravitational waves. Waves should travel at the speed of light, have two polarizations, and interact weakly with matter. Scientists at LIGO were even searching for a very particular signal, described as a characteristic “chirp”. With the upgrade to the LIGO detectors, physicists were certain they’d be capable of observing gravitational waves. The only outstanding question was how often these observations would happen.
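The "chirp" has a clean leading-order description: the gravitational-wave frequency sweeps upward as f ∝ (time to merger)^(-3/8), with the rate set by a single mass combination, the chirp mass. The sketch below uses masses roughly matching GW150914 for illustration and keeps only the Newtonian-order formula.

```python
# Leading-order (Newtonian) chirp: GW frequency a time tau before coalescence,
# f = (1/pi) * (5 / (256 tau))^(3/8) * (G * M_chirp / c^3)^(-5/8).
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

m1, m2 = 36 * M_SUN, 29 * M_SUN                 # roughly GW150914's masses
m_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2   # chirp mass

def gw_frequency(tau):
    """GW frequency in Hz at time tau (seconds) before merger."""
    return (1 / math.pi) * (5.0 / (256.0 * tau)) ** 0.375 \
        * (G * m_chirp / c**3) ** -0.625

for tau in (0.2, 0.05, 0.01):
    print(f"{tau:5.2f} s before merger: f ~ {gw_frequency(tau):6.1f} Hz")
```

For these masses the sweep starts around 35 Hz a fifth of a second before merger and rises rapidly, matching the characteristic signal LIGO was designed to catch.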


The search for the Higgs involved more uncertainties. The one parameter essential for describing the Higgs, its mass, is not predicted by the Standard Model. While previous collider experiments at LEP and Fermilab were able to set limits on the Higgs mass, the observed properties of the Higgs were ultimately unknown before the discovery. No one knew whether or not the Higgs would be a Standard Model Higgs, or part of a more complicated theory like Supersymmetry or technicolor.


Monumental scientific endeavors


Answering the most difficult questions posed by the universe isn’t easy, or cheap. In terms of cost, both LIGO and the LHC represent billion-dollar investments. Including the most recent upgrade, LIGO cost a total of $1.1 billion, and when it was originally approved in 1992, “it represented the biggest investment the NSF had ever made,” according to France Córdova, NSF director. The discovery of the Higgs was estimated by Forbes to cost a total of $13 billion, a hefty price paid by CERN’s member and observer states. Even the electricity bill costs more than $200 million per year.


The large investment is necessitated by the sheer monstrosity of the experiments. LIGO consists of two identical detectors with arms roughly 4 km long, built 3000 km apart. Because of its large size, LIGO is capable of measuring ripples in space 10,000 times smaller than an atomic nucleus, the smallest scale ever measured by scientists (LIGO Fact Page). The size of the LIGO vacuum tubes is only surpassed by those at the LHC. At 27 km in circumference, the LHC is the single largest machine in the world, and the most powerful particle accelerator to date. It only took a handful of people to predict the existence of gravitational waves and the Higgs, but it took thousands of physicists and engineers to find them.
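It is worth pausing on just how small that measurement is. Strain is a fractional length change, h = ΔL / L, so the 4 km arms convert a tiny strain into a (slightly less tiny) displacement. Using the peak strain of GW150914 (~10⁻²¹) as an illustrative value:

```python
# Displacement corresponding to a gravitational-wave strain h over arm length L.
h = 1e-21          # illustrative peak strain, roughly GW150914's
L = 4e3            # LIGO arm length in meters
proton = 0.8e-15   # proton charge radius in meters, for comparison

dL = h * L
print(f"dL ~ {dL:.1e} m, about {proton / dL:.0f}x smaller than a proton")
```

Even at this relatively loud event, the mirrors moved by hundredths of a percent of a proton’s width; the detector’s design sensitivity probes strains smaller still.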


Life after Discovery


Even the language surrounding both announcements is strikingly similar. Rumors were circulating for months before the official press conferences, and the expectations from each respective community were very high. Both discoveries have been touted as the discoveries of the century, with many experts claiming that results would usher in a “new era” of particle physics or observational astronomy.


With a few years of hindsight, it is clear that the “new era” of particle physics has begun. Before Run I of the LHC, particle physicists knew they needed to search for the Higgs. Now that the Higgs has been discovered, there is much more uncertainty surrounding the field. The list of questions to try and answer is enormous. Physicists want to understand the source of the Dark Matter that makes up roughly 25% of the universe, from where neutrinos derive their mass, and how to quantize gravity. There are several ad hoc features of the Standard Model that merit additional explanation, and physicists are still searching for evidence of supersymmetry and grand unified theories. While the to-do list is long, and well understood, how to solve these problems is not. Measuring the properties of the Higgs does allow particle physicists to set limits on beyond the Standard Model Physics, but it’s unclear at which scale new physics will come into play, and there’s no real consensus about which experiments deserve the most support. For some in the field, this uncertainty can result in a great deal of anxiety and skepticism about the future. For others, the long to-do list is an absolutely thrilling call to action.


With regards to the LIGO experiment, the future is much more clear. LIGO has only published one event from 16 days of data taking. There is much more data already in the pipeline, and more interferometers like VIRGO and (e)LISA are planned to come online in the near future. Now that gravitational waves have been proven to exist, they can be used to observe the universe in a whole new way. The first event already contains an interesting surprise: LIGO observed two inspiraling black holes of 36 and 29 solar masses, merging into a final black hole of 62 solar masses. The data thus confirmed the existence of heavy stellar black holes, with masses more than 25 times greater than the sun’s, and that binary black hole systems form in nature (Astrophysical Journal). When VIRGO comes online, it will be possible to triangulate the source of these gravitational waves as well. LIGO’s job is to watch, and see what other secrets the universe has in store.
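Those masses hide a staggering number: 36 + 29 = 65 solar masses went in, but only 62 came out. The missing 3 solar masses were radiated away as gravitational-wave energy, E = Δm c²:

```python
# Energy radiated by GW150914: the mass deficit of the merger, times c^2.
c, M_SUN = 2.998e8, 1.989e30

delta_m = (36 + 29 - 62) * M_SUN   # 3 solar masses converted to radiation
E = delta_m * c**2
print(f"E ~ {E:.2e} J, i.e. {delta_m / M_SUN:.0f} solar masses of energy")
```

Roughly 5 x 10⁴⁷ joules, released in a fraction of a second: briefly, the merger out-shone (in gravitational waves) the light output of all the stars in the observable universe combined.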