## When light and light collide

Title: “Evidence for light-by-light scattering in heavy-ion collisions with the ATLAS detector at the LHC”

Author: ATLAS Collaboration

Reference: doi:10.1038/nphys4208

According to classical wave theory, two electromagnetic waves that happen to cross each other in space will not interact. In fact, this is a crucial feature of the conventional definition of a wave, in contrast to a corpuscle or particle: when two waves meet, they briefly occupy the same space at the same time, each without “knowing” about the other’s existence. Particles, on the other hand, do interact (or scatter) when they get close to each other, and the results of this encounter can be measured.

The mathematical backbone for this idea is the so-called superposition principle. It arises from the fact that the equations describing wave propagation are linear in the fields and sources: those quantities appear only to the first power, never as squares or cubes. When two such waves happen to be nearby, the linearity of the equations implies that we can treat the overall wave as just a linear superposition of two separate waves, traveling in different directions. The equations do not distinguish between the two scenarios.
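The linearity argument can be checked numerically: a discretized wave operator applied to the sum of two crossing pulses gives exactly the sum of the operator applied to each pulse alone. A minimal sketch (the grid spacings and Gaussian pulse shapes here are arbitrary illustrative choices):

```python
import numpy as np

# Discretized 1D wave operator  W[u] = u_tt - c^2 u_xx  acting on a
# space-time grid u[t, x].  Linearity means W[u1 + u2] = W[u1] + W[u2],
# so two crossing waves evolve as if each were alone.
def wave_operator(u, dt, dx, c=1.0):
    u_tt = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dt**2
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    return u_tt - c**2 * u_xx

dx, dt = 0.01, 0.005
x = np.arange(0.0, 2.0, dx)
t = np.arange(0.0, 1.0, dt)
T, X = np.meshgrid(t, x, indexing="ij")

# Two counter-propagating Gaussian pulses (exact traveling-wave solutions for c = 1)
u1 = np.exp(-((X - T - 0.5) ** 2) / 0.01)   # moving right
u2 = np.exp(-((X + T - 1.5) ** 2) / 0.01)   # moving left

lhs = wave_operator(u1 + u2, dt, dx)
rhs = wave_operator(u1, dt, dx) + wave_operator(u2, dt, dx)
print(np.allclose(lhs, rhs))  # True: superposition holds
```

Any nonlinear term (a square or cube of the field) in the operator would break this equality, which is exactly the loophole quantum theory exploits below.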

This story gets more interesting after Albert Einstein’s insight that light is quantized, and the subsequent formal development of Quantum Electrodynamics in the 1940s by Shin’ichirō Tomonaga, Julian Schwinger and Richard Feynman. In the quantum theory, light is no longer “just” a wave, but rather a dual entity that can be treated as both a wave and as a particle called the photon.

The new interpretation of light as a particle opens up the possibility that two electromagnetic waves may indeed interact when crossing each other, just as particles would. The equivalent mathematical statement is that the quantum theory of light yields wave equations that are no longer entirely linear, but instead contain a small non-linear part. The non-linear contributions have to be tiny, otherwise we would already have detected their effects, but they are nonetheless predicted to occur by quantum theory. In this context, the phenomenon is called light-by-light scattering.

Detection

In 2017, the ATLAS experiment at the LHC reported the first direct evidence of light-by-light scattering, using collisions of relativistic heavy ions. Since this is such a rare phenomenon, it took a long time for experiments to become directly sensitive to it.

The experiment is based on so-called Ultra-Peripheral Collisions (UPCs) of lead ions (Z=82) at a center-of-mass energy of 5.02 TeV (or about 5,000 times the mass of the proton). In UPCs, the two oncoming ions pass each other at a distance large enough that they are unlikely to undergo hard scattering, and instead just “graze” each other. This means that the strong force is usually not involved in such interactions, since its range is tiny, extending only over nuclear distances. Instead, the electromagnetic interaction dominates in UPC events.

The figure below shows how UPCs proceed. The grazing of two lead ions leads to large electromagnetic fields in the space between the ions, and this interaction can be interpreted as an exchange of photons between them, since photons are the mediator of electromagnetism. Then, by also looking for two final-state photons in the ATLAS detector (the ‘X’ on the right in the figure), the light-by-light process, $\gamma \gamma \rightarrow \gamma \gamma$, can be probed ($\gamma$ stands for photons).

In order to isolate this particular process from all the other physics happening at the LHC, a series of increasingly tighter selections (or cuts) is applied to the acquired data. The final cuts are optimized to obtain the maximum possible sensitivity to the light-by-light scattering process. This sensitivity depends on how likely it is to select a candidate signal event, and similarly on how unlikely it is to select a background event. The main background processes that could mimic the signature of light-by-light scattering (two photons in the ATLAS calorimeter in a UPC lead-ion run) include $\gamma \gamma \rightarrow e^+ e^-$, where the final-state electrons and positrons are misidentified by the detector as photons; central exclusive production (CEP) of photons in $g g \rightarrow \gamma \gamma$ (where $g$ is a gluon); and hadronic fakes from $\pi^0$ production in low-$p_T$ dijet events, where $\pi^0$’s (neutral pions) decay to a pair of photons.

The various applied selections begin with a dedicated trigger which selects events with moderate activity in the calorimeter and very little activity elsewhere. This is what is expected in a lead-ion UPC collision, since the ions just escape down the LHC beam pipe and are not detected, leaving the two photons as the only visible products. Then a set of selections is applied to ensure that the recorded activity in the calorimeter is compatible with two photons. These selections rely mostly on the shape of the electromagnetic showers deposited in the calorimeter cells, which varies for different types of incident particles.

Finally, a series of extra selections is applied to minimize the number of possible background events, such as vetoing any events containing charged-particle tracks in the ATLAS tracker, which effectively removes $\gamma \gamma \rightarrow e^+ e^-$ events with electrons and positrons mis-tagged as photons. Note that this also removes some of the real light-by-light signal events (about 10%), where final-state photons undergo photon conversion after interacting with tracker material, but in this case the trade-off is certainly worth it. Another such selection is the requirement that the transverse momentum of the diphoton system be less than 2 GeV. This removes contributions from other fake-photon backgrounds (such as cosmic-ray muons), because it ensures that the net transverse momentum of the system is small, and thus likely to originate in the ion-ion interaction.

The same exact set of selections is applied to both data and Monte Carlo (MC) simulations of the experiment. The MC simulations yield an estimate of how many background and signal events should be expected in data. The table below shows the results.

The penultimate row contains the sum after all cuts and a comparison between the total number of expected background events (2.6), expected light-by-light scattering events (7.3), and observed data events (13). The sum of 2.6+7.3=9.9 events certainly seems compatible with the observed data, given the quoted uncertainties in the last row. In fact, it is possible to estimate the significance of this result by asking how likely it would be for the background-only hypothesis (that is, pretending light-by-light scattering doesn’t exist and only including the backgrounds) to yield 13 observed events. This likelihood is tiny, $5\times10^{-6}$, which corresponds to a significance of 4.4 sigma!
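The arithmetic behind the significance can be reproduced approximately. A bare Poisson tail probability for observing 13 or more events with a background mean of 2.6 comes out slightly smaller than the published p-value, which also folds in the background uncertainties; converting a p-value to a one-sided Gaussian significance $Z$ uses $p = \tfrac{1}{2}\,\mathrm{erfc}(Z/\sqrt{2})$. A minimal sketch:

```python
import math

def poisson_tail(n_obs, mean):
    """P(N >= n_obs) for a Poisson distribution, via the complement of the CDF."""
    return 1.0 - sum(math.exp(-mean) * mean**k / math.factorial(k)
                     for k in range(n_obs))

def p_to_z(p):
    """Invert p = 0.5 * erfc(z / sqrt(2)) for the one-sided significance z, by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid  # p(mid) still too large -> need a bigger z
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(poisson_tail(13, 2.6))   # ~4e-6: right ballpark for the bare Poisson tail
print(p_to_z(5e-6))            # ~4.4: the significance quoted by the paper
```

The small difference between the bare Poisson tail and the published $5\times10^{-6}$ reflects the systematic uncertainties included in the full statistical treatment.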

In addition to the number of events, in the figure below the paper also plots the diphoton invariant mass distribution for the 13 observed data events (black points), and for the MC simulations (signal in red and backgrounds in blue and gray). This comparison provides further evidence that we do indeed see light-by-light scattering in the ATLAS data.

Finally, given the observed number of events in data and the expected number of MC background events, it is possible to measure the cross-section of light-by-light scattering (as a reminder, the cross-section of a process measures how likely it is to occur in collisions). The ATLAS collaboration calculates the cross-section of light-by-light scattering with the formula:

$\sigma_{\text{fid}} = \frac{N_{\text{data}} - N_{\text{bkg}}}{C \times \int L dt}$

where $N_{\text{data}}$ is the number of observed events in data, $N_{\text{bkg}}$ is the number of background events in MC, $\int L dt$ is the total amount of data collected, and $C$ is a correction factor which translates all of the detector inefficiencies into a single number. You can think of the entire denominator as the “effective” amount of data that was analyzed, and the numerator as the “effective” number of signal events that was seen. The ratio of the two quantities yields the probability of seeing light-by-light scattering in a single collision. The ATLAS collaboration found this value to be $70 \pm 24 \; (\text{statistical}) \pm 17 \;(\text{systematic})$ nb, which agrees with the theoretical predictions of $45 \pm 9$ nb and $49 \pm 10$ nb within uncertainties.
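The formula is a one-line calculation once the inputs are fixed. The event counts below come from the table discussed above; the correction factor and integrated luminosity are approximate round values for this analysis and should be treated as illustrative:

```python
# Back-of-the-envelope check of the fiducial cross-section formula
#   sigma_fid = (N_data - N_bkg) / (C * integrated luminosity)
# C and lumi are approximate, illustrative values for this analysis.
n_data = 13    # observed events
n_bkg = 2.6    # expected background events (from MC)
C = 0.31       # detector efficiency correction factor (approximate)
lumi = 0.48    # integrated luminosity in nb^-1, i.e. ~480 ub^-1 (approximate)

sigma_fid = (n_data - n_bkg) / (C * lumi)   # result in nb
print(f"sigma_fid = {sigma_fid:.0f} nb")    # ~70 nb, matching the measured value
```

Note how the statistical uncertainty (±24 nb) dominates: with only ~10 effective signal events, Poisson fluctuations alone are a ~30% effect.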

To conclude, the measurement of light-by-light scattering by ATLAS is an exciting result which offers us a direct glimpse into the stark differences between classical and quantum physics in an accessible (and dare I say) amusing way!

## Grad students can apply now for ComSciCon’18!

Applications are now open for the Communicating Science 2018 workshop, to be held in Boston, MA on June 14-16, 2018!

Graduate students at U.S. and Canadian institutions in all fields of science, technology, engineering, health, mathematics, and related fields are encouraged to apply. The application deadline is March 1st.

As with past ComSciCon national workshops, acceptance to the workshop is competitive; attendance is free, and travel support and lodging will be provided to accepted applicants.

Attendees will be selected on the basis of their achievement in and capacity for leadership in science communication. Graduate students who have engaged in entrepreneurship and created opportunities for other students to practice science communication are especially encouraged to apply.

Participants will network with other leaders in science communication and build the communication skills that scientists and other technical professionals need to express complex ideas to the general public, experts in other fields, and their peers. In addition to panel discussions on topics like Science Journalism, Creative & Digital Storytelling, and Diversity and Inclusion in Science, ample time is allotted for networking with science communication experts and developing science outreach collaborations with fellow graduate students.

Longtime Astrobites readers may remember that ComSciCon was founded in 2012 by Astrobites and Chembites authors. Since then, we’ve led five national workshops on leadership in science communication and established a franchising model through which graduate student alums of our program have led sixteen regional and specialized workshops on training in science communication.

We are so excited about the impact ComSciCon has had on the science communication community. You can read more about our programs, the outcomes we have documented, find vignettes about our amazing graduate student attendees, and more in ComSciCon’s 2017 annual report.

None of ComSciCon’s impacts would be possible without the generous support of our sponsors. These contributions make it possible for us to offer this programming free of charge to graduate students, and to support their travel to our events. In particular, we wish to thank the American Astronomical Society for being a supporter of both Particlebites and ComSciCon.

If you believe in the mission of ComSciCon, you can support our program too! Visit our webpage to learn more and donate today!

## It’s A Wrap! Summary of ATLAS Updates from 2016

Article: Latest ATLAS results from Run 2
Authors: Claudia Gemme on behalf of the ATLAS Collaboration
Reference: arXiv:1612.01987 [hep-ex]

2016 certainly has been an…interesting year. There’s been a lot of division, so as the year winds down, this seems like a perfect time to reflect on something we can all get excited about: particle physics! This has been an incredible year for the ATLAS experiment and the LHC in general. Run 2 began in 2015, but really hit its stride this year. The LHC reached a peak luminosity of $1.37 \times 10^{34}\ \text{cm}^{-2}\text{s}^{-1}$, which exceeds the design luminosity by almost 40%! Additionally, the pile up (number of interactions per bunch crossing) nearly doubled with respect to 2015. From these delivered events, ATLAS was able to record 30 fb$^{-1}$ of data at 13 TeV center-of-mass energy, with a DAQ (data acquisition) efficiency above 90%.

Before discussing the many analyses being done with this amazing new data set, I’d like to highlight some of the upgrades performed during the long shutdown (2013-2014) that allowed ATLAS to perform so well during Run 2. One of the most important improvements was the addition of the Insertable B-layer (IBL). The IBL is now the innermost part of the inner detector, located only 3.3 cm from the interaction point. The IBL was designed to combat radiation damage to the inner detector; due to its insertable nature, it can eventually be replaced, and it shields the three other pixel layers from the bulk of the radiation damage. Of course, it also provides more tracking points, which further improves track reconstruction. There were also extensive improvements to the magnetic and cryogenic systems to provide more powerful cooling and repair damage from Run 1. To deal with the increased data load, the ATLAS DAQ and trigger systems were also upgraded (reminder: triggers are data collection elements that decide which events get stored for analysis and which get thrown out). The Level-1 trigger, ATLAS’s only hardware-level trigger, was upgraded to output events at 100 kHz (previously 75 kHz), and the two software-level triggers were merged into a single High Level Trigger. This allowed the ATLAS DAQ to have an average physics data output of 1 kHz, allowing more of the Run 2 events to be processed and stored.

Now, let’s talk about ATLAS’s physics program in 2016. A precise understanding of the Standard Model is incredibly important for all particle physics programs. It allows physicists to test theoretical mass and coupling predictions, and to make accurate predictions for background processes in BSM searches. ATLAS improved the precision of many SM measurements in 2016, and a summary of some important cross section measurements is shown in Figure 1. In particular, the measurements of the Z+jets and diboson cross sections were compatible with new next-to-next-to-leading-order (NNLO) calculations from theorists, confirming theoretical predictions.

Another major goal of the Run 2 ATLAS physics program was to confirm the discovery of the Higgs boson and further study it. During Run 1, both ATLAS and CMS found evidence of a Higgs boson with a combined measured mass of 125.09 GeV using the H→γγ and H→ZZ*→4l channels. Furthermore, nearly all measured couplings were consistent with SM predictions within 2σ. In Run 2, ATLAS ‘rediscovered’ the Higgs at a compatible mass with a local significance of 10σ (compared to 8.6σ expected) using the same channels. The Run 2 cross section is slightly higher than the SM prediction, but still compatible within uncertainty. A summary of the cross-section measurements at various energies can be seen in Figure 2. Additionally, ATLAS physicists sought to show conclusive evidence of the H→bb decay channel in Run 2. This decay mode of the Higgs has the largest predicted branching fraction (58%) according to the SM, but has been difficult to measure due to large multi-jet backgrounds. The increased luminosity in Run 2 made studying associated (W/Z)H production with H→bb a promising alternative, despite its cross section being much lower. This process is easier to isolate, as leptonic decays of the W or Z create a clean signature. However, the measured significance of this decay was only 0.42σ, compared to a SM expectation of 1.94σ. The analysis procedure was well validated by measuring the SM (W/Z)Z yield, so this large discrepancy is certainly an area for more study in 2017.

The final component of the ATLAS physics program is, of course, the many searches for BSM physics. The discovery potential for many of these processes is increased in Run 2 due to the enhanced cross sections at the larger energy. There are three important channels for general BSM searches: diboson, diphoton, and dilepton. Many extensions of the SM predict new heavy particles, including heavy neutral Higgs bosons, Heavy Vector Triplet W′ bosons, and some gravitons, that could decay into vector-boson pairs. General searches in this channel are done by looking for hadronic decays of the boosted W and Z bosons within a single large-radius jet and using jet substructure properties. Unfortunately, no significant excess is observed. The diphoton channel became the hot topic of 2016, due to a deviation of at least 3σ from the SM-only hypothesis seen in both ATLAS and CMS early this year using the 2015 data set. However, this excess was not confirmed with the full 2016 data set, which has 4x more statistics. The largest excess seen by ATLAS in the diphoton channel is now 2.3σ for a mass near 710 GeV. The dilepton final states have excellent sensitivity to a variety of new phenomena and have high signal selection efficiencies. Again, however, the 2016 measurements are consistent with the SM prediction. Lower limits on a resonance mass have been set, extending the exclusion reach by up to 1 TeV beyond Run 1.

ATLAS physicists also study the infamous SUSY model by looking for signatures of gluinos and squarks, the strongly interacting superpartners, which would be produced copiously at the LHC if light enough. These particles could be observed in events with high jet multiplicity and lots of missing energy. Once again, no significant excess was found. These results were interpreted within two SUSY models, and gluinos with masses up to 1600 GeV were excluded.

So, to conclude, no evidence of BSM physics has been found at ATLAS in Run 2, but the new limits on some SUSY models and various other BSM phenomena have been improved and can be seen in Figures 3 and 4. Precision measurements of the SM were improved, the detector equipment was updated, and the Higgs was rediscovered, but ATLAS physicists and particle enthusiasts are still hoping for something more exciting. Lest we leave 2016 on a low note, it’s important to remember that only 50% of ATLAS searches have been updated to the Run 2 energy, so there are still many more channels to be explored. Plus, there are already some modest excesses, as well as the discrepancy in the H→bb measurement. There is certainly more to be discovered as Run 2 continues, so here’s to 2017!

## Horton Hears a Sterile Neutrino?

Article: Limits on Active to Sterile Neutrino Oscillations from Disappearance Searches in the MINOS, Daya Bay, and Bugey-3 Experiments
Authors:  Daya Bay and MINOS collaborations
Reference: arXiv:1607.01177v4

So far, the hunt for sterile neutrinos has come up empty. Could a joint analysis between MINOS, Daya Bay and Bugey-3 data hint at their existence?

Neutrinos, like the beloved Whos in Dr. Seuss’ “Horton Hears a Who!,” are light and elusive, yet have a large impact on the universe we live in. While neutrinos only interact with matter through the weak nuclear force and gravity, they played a critical role in the formation of the early universe. Neutrino physics is now an exciting line of research pursued by the Hortons of particle physics, cosmology, and astrophysics alike. While most of what we currently know about neutrinos is well described by a three-flavor neutrino model, a few inconsistent experimental results such as those from the Liquid Scintillator Neutrino Detector (LSND) and the Mini Booster Neutrino Experiment (MiniBooNE) hint at the presence of a new kind of neutrino that only interacts with matter through gravity. If this “sterile” kind of neutrino does in fact exist, it might also have played an important role in the evolution of our universe.

Neutrinos come in three known flavors: electron, muon, and tau. The discovery of neutrino oscillation by the Sudbury Neutrino Observatory and the Super-Kamiokande Observatory, which won the 2015 Nobel Prize, proved that one flavor of neutrino can transform into another. This led to the realization that each neutrino mass state is a superposition of the three different neutrino flavor states. From neutrino oscillation measurements, most of the parameters that define the mixing between neutrino states are well known for the three standard neutrinos.

The relationship between the three known neutrino flavor states and mass states is usually expressed as a 3×3 matrix known as the PMNS matrix, named after Bruno Pontecorvo, Ziro Maki, Masami Nakagawa and Shoichi Sakata. The PMNS matrix includes three mixing angles, the values of which determine “how much” of each neutrino flavor state is in each mass state. The distance required for one neutrino flavor to become another, the neutrino oscillation wavelength, is determined by the difference between the squared masses of the two mass states. The values of the mass splittings $m_2^2-m_1^2$ and $m_3^2-m_2^2$ are known to good precision.

A fourth flavor? Adding a sterile neutrino to the mix

A “sterile” neutrino is referred to as such because it would not interact weakly: it would only interact through the gravitational force. Neutrino oscillations involving the hypothetical sterile neutrino can be understood using a “four-flavor model,” which introduces a fourth neutrino mass state, $m_4$, heavier than the three known “active” mass states. This fourth neutrino state would be mostly sterile, with only a small contribution from a mixture of the three known neutrino flavors. If the sterile neutrino exists, it should be possible to experimentally observe neutrino oscillations with a wavelength set by the difference between $m_4^2$ and the square of the mass of another known neutrino mass state. Current observations suggest a squared mass difference in the range of 0.1-10 eV$^2$.

Oscillations between active and sterile states would result in the disappearance of muon (anti)neutrinos and electron (anti)neutrinos. In a disappearance experiment, you know how many neutrinos of a specific type you produce, you count the number of that type arriving at a detector some distance away, and you find that some of the neutrinos have “disappeared,” or in other words, oscillated into a different type of neutrino that you are not detecting.
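In the two-flavor approximation that these short-baseline searches effectively probe, the disappearance probability depends on the mixing angle, the mass splitting, and the ratio of baseline to energy. A minimal sketch, using the standard formula with illustrative (not fitted) parameter values:

```python
import math

# Two-flavor disappearance probability:
#   P(dis) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, baseline L in km, and energy E in GeV.
# The parameter values below are illustrative choices in the
# sterile-neutrino search range, not measured results.
def disappearance_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# A 1 GeV neutrino with dm2 = 1 eV^2: the oscillation develops
# over ~km scales, which is why short baselines probe large dm2.
for L in (0.1, 0.5, 1.0):
    print(f"L = {L} km: P(dis) = {disappearance_prob(0.1, 1.0, L, 1.0):.4f}")
```

The $1.27$ is the usual unit-conversion constant; the key point is that the oscillation wavelength scales as $E/\Delta m^2$, so a heavier fourth state (larger $\Delta m^2_{41}$) shows up at much shorter baselines than the standard oscillations.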

A joint analysis by the MINOS and Daya Bay collaborations

The MINOS and Daya Bay collaborations have conducted a joint analysis to combine independent measurements of muon (anti)neutrino disappearance by MINOS and electron antineutrino disappearance by Daya Bay and Bugey-3. Here’s a breakdown of the involved experiments:

• MINOS, the Main Injector Neutrino Oscillation Search: A long-baseline neutrino experiment with detectors at Fermilab and northern Minnesota that use an accelerator at Fermilab as the neutrino source
• The Daya Bay Reactor Neutrino Experiment: Uses antineutrinos produced by the reactors of China’s Daya Bay Nuclear Power Plant and the Ling Ao Nuclear Power Plant
• The Bugey-3 experiment: Performed in the early 1990s, used antineutrinos from the Bugey Nuclear Power Plant in France for its neutrino oscillation observations

Assuming a four-flavor model, the MINOS and Daya Bay collaborations put new constraints on the value of the mixing angle $\theta_{\mu e}$, the parameter controlling electron (anti)neutrino appearance in experiments with short neutrino travel distances. As for the hypothetical sterile neutrino? The analysis excluded the parameter space allowed by the LSND and MiniBooNE appearance-based indications for the existence of light sterile neutrinos for $\Delta m_{41}^2$ < 0.8 eV$^2$ at a 95% confidence level. In other words, the MINOS and Daya Bay analysis essentially rules out the LSND and MiniBooNE inconsistencies that allowed for the presence of a sterile neutrino in the first place. These results illustrate just how at odds disappearance searches and appearance searches are when it comes to providing insight into the existence of light sterile neutrinos. If the Whos exist, they will need to be a little louder in order for the world to hear them.

## Dragonfly 44: A potential Dark Matter Galaxy

Title: A High Stellar Velocity Dispersion and ~100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Publication: ApJ, v828, Number 1; arXiv: 1606.06291

The title of this paper sounds like a standard astrophysics analysis; but dig a little deeper and you’ll find what is – I think – an incredibly interesting, surprising and unexpected observation.

Last year, the Dragonfly Telephoto Array identified a population of large, spheroidal, very low surface brightness (ie: not a lot of stars) galaxies in the Coma cluster (a large cluster of galaxies in the constellation Coma – I’ve included a Hubble Image to the left), among them an Ultra Diffuse Galaxy (UDG) called Dragonfly 44 (shown below). Follow-up observations with the WM Keck Observatory and the Gemini North Telescope on Maunakea, Hawaii, determined that Dragonfly 44 has so few stars that their gravity alone could not hold it together – so some other matter had to be involved – namely DARK MATTER (my favorite kind of unknown matter).

The team used the DEIMOS instrument installed on Keck II to measure the velocities of stars for 33.5 hours over a period of six nights so they could determine the galaxy’s mass. The measured stellar velocity dispersion suggests that Dragonfly 44 has a mass of about one trillion solar masses, about the same as the Milky Way. However, the galaxy emits only 1% of the light emitted by the Milky Way. In other words, the Milky Way has more than a hundred times more stars than Dragonfly 44. I’ve also included the Mass-to-Light ratio plot vs. the dynamical mass. This illustrates how unique Dragonfly 44 is compared to other dark matter dominated galaxies like dwarf spheroidal galaxies.
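The "dark matter dominated" conclusion follows from simple arithmetic on the round numbers above: comparable total mass to the Milky Way, but only 1% of its light, implies a mass-to-light ratio roughly 100 times larger. A sketch, using only the round values quoted in the text:

```python
# Round numbers from the text: Dragonfly 44 has roughly the Milky Way's
# dynamical mass but emits only ~1% of its light, so its mass-to-light
# ratio is ~100x the Milky Way's.
mass_d44 = 1.0e12        # dynamical mass in solar masses (~ Milky Way)
mass_mw = 1.0e12         # Milky Way mass in solar masses (same round value)
light_fraction = 0.01    # Dragonfly 44 luminosity relative to the Milky Way

# Mass-to-light ratio of Dragonfly 44 relative to the Milky Way
ml_relative = (mass_d44 / mass_mw) / light_fraction
print(ml_relative)  # 100.0 -> roughly 100x more mass per unit of starlight
```

Since stars supply both the light and only a small part of the binding mass, the remaining ~99% of the gravitating material must be dark.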

What is particularly exciting is that we don’t understand how galaxies like this form.

Their research indicates that these UDGs could be failed galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. But we’ll need to discover more to fully understand them.

Further reading (works by the same authors)
Forty-Seven Milky Way-Sized, Extremely Diffuse Galaxies in the Coma Cluster: arXiv: 1410.8141
Spectroscopic Confirmation of the Existence of Large, Diffuse Galaxies in the Coma Cluster: arXiv: 1504.03320