Solar Neutrino Problem

Why should we even care about neutrinos coming from the sun in the first place? In the 1960’s, the processes governing the interior of the sun were not well understood. There was a strong suspicion that the sun’s main energy source was the fusion of Hydrogen into Helium, but there was no direct evidence for this hypothesis. This is because the photons produced in fusion processes have a mean free path of about ​10^(-10) times the radius of the sun [1]. Scattering constantly, they random-walk their way outward, so it takes thousands of years for the light produced inside the core of the sun to escape and be detected at Earth. Photons, then, are not a good experimental observable to use if we want to understand the interior of the sun.

Additionally, these fusion processes also produce neutrinos, which are essentially non-interacting. On one hand, this means that they can escape the interior of the sun unimpeded; neutrinos thus give us a direct probe into the core of the sun without the wait that photons require. On the other hand, these same non-interacting properties mean that detecting them is extremely difficult.

The undertaking to understand and measure these neutrinos was led by John Bahcall, who headed the theoretical development, and Ray Davis Jr., who headed the experimental development.

In 1963, John Bahcall gave the first prediction of the neutrino flux coming from the sun [1]. Five years later in 1968, Ray Davis provided the first measurement of the solar neutrino flux [2]. They found that the predicted value was about 2.5 times higher than the measured value. This discrepancy is what became known as the solar neutrino problem.

This plot shows the discrepancy between the measured (blue) and predicted (other colors) amounts of electron neutrinos from various experiments. Blue corresponds to experimental measurements, while the other colors correspond to the predicted neutrino rates from various sources. This figure was first presented in a 2004 paper by Bahcall [3].

Broadly speaking, there were three possible causes for this discrepancy:

  1. The prediction was incorrect. This was Bahcall’s domain. At lowest order, this could involve some combination of two things: incorrect modeling of the sun, resulting in inaccurate neutrino fluxes, or an inaccurate calculation of the observable signal resulting from the neutrino interactions with the detector. Bahcall and his collaborators spent the next 20 years refining this work, but the discrepancy persisted.
  2. The experimental measurement was incorrect. During those same 20 years, until the late 1980’s, Ray Davis’ experiment was the only active solar neutrino experiment [4]. He continued to improve the experimental sensitivity, but the discrepancy still persisted.
  3. New Physics. In 1968, B. Pontecorvo and V. Gribov formulated neutrino oscillations as we know them today. They proposed that neutrino flavor eigenstates are linear combinations of mass eigenstates [5]. At a hand-wavy level the idea is simple: because it is the mass eigenstates that have well-defined time evolution in quantum mechanics, a neutrino produced in one flavor can change its identity while it propagates from the Sun to the Earth (the two-flavor example below sketches this).
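As a rough illustration of how this works (a standard two-flavor simplification, not taken from [5]), write the electron neutrino as a superposition of two mass eigenstates with mixing angle \theta. Letting each mass eigenstate evolve with its own phase and projecting back onto the flavor basis gives the familiar oscillation probability:

|\nu_e\rangle = \cos\theta \, |\nu_1\rangle + \sin\theta \, |\nu_2\rangle, \qquad P(\nu_e \rightarrow \nu_\mu) = \sin^2(2\theta) \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)

Here \Delta m^2 is the difference of the squared masses, L is the distance traveled, and E is the neutrino energy (in natural units). A nonzero mixing angle and mass splitting are all that is needed for an electron neutrino produced in the Sun to arrive at Earth partly as another flavor.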

It turns out that Pontecorvo and Gribov had found the resolution to the Solar Neutrino Problem. It would take an additional 30 years for experimental verification of neutrino oscillations by Super-K in 1998 [6] and the Sudbury Neutrino Observatory (SNO) in 1999 [7].


References:

[1] – Solar Neutrinos I: Theoretical. This paper lays out the motivation for why we should care about solar neutrinos at all.

[2] – Search for Neutrinos from the Sun. The first announcement of the measurement of the solar neutrino flux.

[3] – Solar Models and Solar Neutrinos. This is a summary of the Solar Neutrino Problem as presented by Bahcall in 2004.

[4] – The Evolution of Neutrino Astronomy. A recounting of their journey in neutrino oscillations written by Bahcall and Davis.

[5] – Neutrino Astronomy and Lepton Charge. This is the paper that laid down the mathematical groundwork for neutrino oscillations.

[6] – Evidence for Oscillation of Atmospheric Neutrinos. The Super-K collaboration reporting their findings in support of neutrino flavor oscillations.

[7] – The Sudbury Neutrino Observatory. The SNO collaboration announcing that they had strong experimental evidence for neutrino oscillations.

Additional References

[A] – Formalism of Neutrino Oscillations: An Introduction. An accessible introduction to neutrino oscillations; this is useful for anyone who wants a primer on the topic.

[B] – Neutrino Masses, Mixing, and Oscillations. This is the Particle Data Group (PDG) treatment of neutrino mixing and oscillation.

[C] – Solving the mystery of the missing neutrinos. Written by John Bahcall, this is a comprehensive discussion of the “missing neutrino” or “solar neutrino” problem.

Riding the wave to new physics

Article title: “Particle physics applications of the AWAKE acceleration scheme”

Authors: A. Caldwell, J. Chappell, P. Crivelli, E. Depero, J. Gall, S. Gninenko, E. Gschwendtner, A. Hartin, F. Keeble, J. Osborne, A. Pardons, A. Petrenko, A. Scaachi, and M. Wing

Reference: arXiv:1812.11164

On the energy frontier, the search for new physics remains a contentious issue – do we continue to build bigger, more powerful colliders? Or is this too costly (or too impractical) an endeavor? The standard method of accelerating charged particles remains radio-frequency (RF) cavities, with electric field strengths of about 100 Megavolts per meter, such as those proposed for the future Compact Linear Collider (CLIC) at CERN, which aims for center-of-mass energies in the multi-TeV regime. Linear RF acceleration is nothing new, having been a key part of the SLAC National Accelerator Laboratory (California, USA) for decades before its shutdown as a collider around the turn of the millennium. However, a device such as CLIC would still require more than ten times the space of SLAC, predicted to come in at around 10-50 km. Not only that, the cavity walls are made of normal-conducting material, so they heat up very quickly and are typically run in short pulses. And we haven’t even mentioned the costs yet!

Physicists are a smart bunch, however, and they’re always on the lookout for new technologies, new techniques and unique ways of looking at the same problem. As you may have guessed already, the limiting factor determining the length required for sufficient linear acceleration is the field gradient. But what if there were a way to achieve hundreds of times that of a standard RF cavity? The answer has been found in plasma wakefields driven by dense proton bunches, with the potential to push electrons up to gigaelectronvolt energies in a matter of meters!

Plasma wakefields are by no means a new idea, having first been proposed at least four decades ago. However, most demonstrations so far have used electrons or lasers to ‘drive’ the wakefield in the plasma. This ‘drive beam’ does not actually participate in the acceleration but provides the large electric field gradient for what is known as the ‘witness beam’ – the electrons. Using protons as the drive beam, which can penetrate much further into the plasma, had never been demonstrated – until now.

In fact, very recently CERN demonstrated proton-driven wakefield technology for the first time during the 2016-2018 run of AWAKE (which stands for Advanced Proton Driven Plasma Wakefield Acceleration Experiment, naturally), accelerating electrons to 2 GeV in only 10 m. The protons that drive the electrons are injected from the Super Proton Synchrotron (SPS) into a Rubidium gas, ionizing the atoms and turning their uniform electron distribution into an oscillating, wavelike state. The electrons that ‘witness’ the wakefield then ‘ride the wave’ much like a surfer at the forefront of a water wave. Right now, AWAKE is just a proof of concept, but plans to scale up to 10 GeV electrons in the coming years could pave the way to exploiting LHC-level proton energies – shooting electrons up to TeV energies!
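As a rough back-of-the-envelope comparison (my own numbers, using only the figures quoted above plus an assumed GV/m-scale wakefield gradient), the advantage over RF cavities is easy to quantify:

# Back-of-the-envelope comparison of accelerating gradients (illustrative values only).
rf_gradient = 100e6          # ~100 MV/m, typical of normal-conducting RF cavities
awake_energy_gain = 2e9      # eV: AWAKE demonstration reached ~2 GeV ...
awake_length = 10.0          # ... over roughly 10 m of plasma
awake_gradient = awake_energy_gain / awake_length   # ~200 MV/m average gradient

target_energy = 1e12         # eV: a hypothetical 1 TeV electron beam
print(f"AWAKE average gradient:     {awake_gradient / 1e6:.0f} MV/m")
print(f"RF length for 1 TeV:        {target_energy / rf_gradient / 1000:.0f} km")
print(f"Length at 1 GV/m (assumed): {target_energy / 1e9 / 1000:.0f} km")

Even at the already-demonstrated average gradient, the required length shrinks by a factor of a few relative to RF; at the GV/m-scale gradients wakefield schemes aim for, a TeV electron beam fits in about a kilometer.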

Figure 1: A layout of AWAKE (Advanced Proton Driven Plasma Wakefield Acceleration Experiment).

In this article, we focus instead on the interesting physics applications of such a device. Bunches of electrons with energies up to the TeV scale are so far unprecedented. The most obvious application would of course be a high energy linear electron-positron collider. However, let’s focus on some of the more novel experimental applications being discussed today, particularly those that could benefit from such a strong electromagnetic presence in almost a ‘tabletop physics’ configuration.

Awake in the dark

One of the most popular considerations when it comes to dark matter is the existence of dark photons, mediating interactions between dark and visible sector physics (see “The lighter side of Dark Matter” for more details). Finding them has been the subject of recent experimental and theoretical effort, including high-energy electron fixed-target experiments. Figure 2 shows such an interaction, where A^\prime represents the dark photon. One experiment based at CERN, known as NA64, already searches for dark photons using electrons incident on a target, with an electron beam derived from interactions of the SPS proton beam. In the standard picture, the dark photon is searched for via the missing-energy signature: it leaves the detector without interacting, escaping with a portion of the energy. The energy of the electrons is of course not the issue when the SPS is used; the number of electrons is.

Figure 2: Dark photon production from a fixed-target experiment with an electron-positron final state.

Assuming one could work with the AWAKE scheme, one could achieve numbers of electrons on target orders of magnitude larger – clearly enhancing the reach for masses and mixing of the dark photon. The idea would be to introduce a high number of energetic electron bunches to a tungsten target with a following 10 m long volume for the dark photon to decay (in accordance with Figure 2). Because of the opposite charges of the electron and positron, the final decay products can then be separated with magnetic fields and hence one can ultimately determine the dark photon invariant mass.

Figure 3 shows how much of an impact a larger number of on-target electrons would make for the discovery reach in the plane of kinetic mixing \epsilon vs mass of the dark photon m_{A^\prime} (again we refer the reader to “The lighter side of Dark Matter” for explanations of these parameters). With the existing NA64 setup, one can already see new areas of the parameter space being explored for 10^10 – 10^13 electrons. However a significant difference can be seen with the electron bunches provided by the AWAKE configuration, with an ambitious limit shown by the 10^16 electrons at 1 TeV.

Figure 3: Exclusion limits in the \epsilon - m_{A^\prime} plane for the dark photon decaying to an electron-positron final state. The NA64 experiment using larger numbers of electrons is shown in the colored non-solid curves from 10^10 to 10^13 total on-target electrons. The solid colored lines show the AWAKE-provided electron bunches with 10^15 and 10^16 electrons at 50 GeV and 10^16 electrons at 1 TeV.

Light, but strong

Quantum Electrodynamics (or QED, for short), describing the interaction between fundamental electrons and photons, is perhaps the most precisely measured and well-studied theory out there, showing agreement with experiment in a huge range of situations. However, there are some extreme phenomena out in the universe where the strength of certain fields becomes so great that our current understanding starts to break down. For the electromagnetic field this can in fact be quantified as the Schwinger limit, above which it is expected that nonlinear field effects start to become significant. At field strengths around 10^18 V/m, the nonlinear corrections to the equations of QED predict the spontaneous appearance of electron-positron pairs created from such an enormous field.
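For reference, the Schwinger critical field follows directly from fundamental constants, E_S = m_e^2 c^3 / (e \hbar); a quick numerical check (a textbook expression, not something computed in the article):

from scipy.constants import m_e, c, e, hbar

# Schwinger critical field: the scale at which vacuum QED is expected to turn nonlinear.
E_schwinger = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger limit: {E_schwinger:.2e} V/m")   # ~1.3e18 V/m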

One of the predictions is the multiphoton interaction with electrons in the initial state. In linear QED, only the standard 2 \rightarrow 2 scattering, e^- + \gamma \rightarrow e^- + \gamma for example, is possible. In a strong-field regime, however, the initial state can open up to n photons. Given a strong enough laser pulse, multiple laser photons can interact with a single electron, letting experiments probe this incredible region of physics. We show this in Figure 4.

Figure 4: Multiphoton interaction with an electron (left) and electron-positron production from photon absorption (right). n here is the number of photons absorbed in the initial state.

The good and bad news is that this had already been performed as far back as the 90s in the E144 experiment at SLAC, using 50 GeV electron bunches – which was, however, unable to reach the critical field value in the electron’s rest frame. AWAKE could certainly provide more highly energetic electrons and allow for a different kinematic reach. Could this provide the first experimental measurement of the Schwinger critical field?

Of course, these are just a few considerations amongst a plethora of uses for the production of energetic electrons over such short distances. However, as physicists desperately continue their search for new physics, it may be time to consider the use of new acceleration technologies on a larger scale, as AWAKE has already shown its scalability. Wakefield acceleration may even establish itself with a fully-developed new-physics search plan of its own.


The Delirium over Helium

Title: “New evidence supporting the existence of the hypothetic X17 particle”

Authors: A.J. Krasznahorkay, M. Csatlós, L. Csige, J. Gulyás, M. Koszta, B. Szihalmi, and J. Timár; D.S. Firak, A. Nagy, and N.J. Sas; A. Krasznahorkay

Reference: https://arxiv.org/pdf/1910.10459.pdf

This is an update to the excellent “Delirium over Beryllium” bite written by Flip Tanedo back in 2016 introducing the Beryllium anomaly (I highly recommend starting there first if you just opened this page). At the time, the Atomki collaboration in Debrecen, Hungary, had just found an unexpected excess in the angular correlation distribution of electron-positron pairs from internal pair conversion in transitions of excited states of Beryllium. According to them, this excess is consistent with a new boson of mass 17 MeV/c², nicknamed the “X17” particle. (Note: for reference, 1 GeV/c² is roughly the mass of a proton; for simplicity, from now on I’ll omit the “c²” term by setting c, the speed of light, to 1 and just refer to masses in MeV or GeV. Here’s a nice explanation of this procedure.)

A few weeks ago, the Atomki group released a new set of results that uses an updated spectrometer and measures the same observable (positron-electron angular correlation) but from transitions of Helium excited states instead of Beryllium. Interestingly, they again find a similar excess on this distribution, which could similarly be explained by a boson with mass ~17 MeV. There are still many questions surrounding this result, and lots of skeptical voices, but the replication of this anomaly in a different system (albeit not yet performed by independent teams) certainly raises interesting questions that seem to warrant further investigation by other researchers worldwide.

Nuclear physics and spectroscopy

The paper reports the production of excited states of Helium nuclei from the bombardment of tritium atoms with protons. To a non-nuclear physicist this may not be immediately obvious, but nuclei can be in excited states just as the electrons around atoms can. The entire quantum wavefunction of the nucleus is usually found in the ground state, but it can be excited by various mechanisms, such as the proton bombardment used in this case. Protons with a specific energy (0.9 MeV) were targeted at tritium atoms to initiate the reaction ³H(p, γ)⁴He, in nuclear physics notation. The equivalent particle physics notation is p + ³H → ⁴He* → ⁴He + γ (→ e⁺e⁻), where ‘*’ denotes an excited state.

This particular proton energy serves to excite the newly-produced Helium nuclei to an energy of 20.49 MeV. This energy is sufficiently close to the Jπ = 0⁻ state (i.e. negative parity and quantum number J = 0), which is the second excited state in the ladder of states of Helium. This state has a centroid energy of 21.01 MeV and a wide “sigma” (or decay width) of 0.84 MeV. Note that the energies of the first two excited states of Helium overlap quite a bit, so sometimes nuclei will actually be found in the first excited state instead, which is not phenomenologically interesting in this case.
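A quick sanity check of where the 20.49 MeV figure comes from (my own arithmetic, using an assumed Q-value of about 19.81 MeV for p + ³H → ⁴He, which is not quoted in the excerpt above): only the center-of-mass share of the proton’s kinetic energy goes into exciting the nucleus.

# Excitation energy of the 4He nucleus produced in p + 3H at E_p = 0.9 MeV.
Q_value = 19.814       # MeV, p + 3H -> 4He energy release (assumed standard value)
E_proton = 0.9         # MeV, proton kinetic energy in the lab frame
m_p, m_t = 1.0, 3.0    # approximate mass numbers of the proton and the triton

E_cm = E_proton * m_t / (m_p + m_t)    # ~0.675 MeV available in the center of mass
E_excitation = Q_value + E_cm
print(f"Excitation energy: {E_excitation:.2f} MeV")   # ~20.49 MeV, as quoted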

Figure 1. Sketch of the energy distributions for the first two excited quantum states of Helium nuclei. The second excited state (with centroid energy of 21.01 MeV) exhibits an anomaly in the electron-positron angular correlation distribution in transitions to the ground state. Proton bombardment with 0.9 MeV protons yields Helium nuclei at 20.49 MeV, therefore producing both first and second excited states, which are overlapping.

With this reaction, experimentalists can obtain transitions from the Jπ = 0⁻ excited state back to the ground state, which has Jπ = 0⁺. These transitions typically produce a gamma ray (photon) with 21.01 MeV of energy, but occasionally the photon will internally convert into an electron-positron pair, which is the experimental signature of interest here. A sketch of the experimental concept is shown below. In particular, the two main observables measured by the researchers are the invariant mass of the electron-positron pair and the angular separation (or angular correlation) between them, in the lab frame.

Figure 2. Schematic representation of the production of excited Helium states from proton bombardment, followed by their decay back to the ground state with the emission of an “X” particle. X here can refer to a photon converting into a positron-electron pair, in which case this is an internal pair creation (IPC) event, or to the hypothetical “X17” particle, which is the process of interest in this experiment. Adapted from 1608.03591.

The measurement

For this latest measurement, the researchers upgraded the spectrometer apparatus to include 6 arms instead of the previous 5. Below is a picture of the setup with the 6 arms shown and labeled. The arms are at azimuthal positions of 0, 60, 120, 180, 240, and 300 degrees, and oriented perpendicularly to the proton beam.

Figure 3. The Atomki nuclear spectrometer. This is an upgraded detector from the one previously used to detect the Beryllium anomaly, featuring 6 arms instead of 5. Each arm has both plastic scintillators for measuring electrons’ and positrons’ energies and a silicon strip-based detector to measure their hit positions. Image credit: A. Krasznahorkay.

The arms consist of plastic scintillators to detect the scintillation light produced by the electrons and positrons striking the plastic material. The amount of light collected is proportional to the energy of the particles. In addition, silicon strip detectors are used to measure the hit position of these particles, so that the correlation angle can be determined with better precision.

With this setup, the experimenters can measure the energy of each particle in the pair and also their incident positions (and, from these, construct the main observables: invariant mass and separation angle). They can also look at the scalar sum of energies of the electron and positron (Etot), and use it to zoom in on regions where they expect more events due to the new “X17” boson: since the second excited state lives around 21.01 MeV, the signal-enriched region is defined as 19.5 MeV < Etot < 22.0 MeV. They can then use the orthogonal region, 5 MeV < Etot < 19 MeV (where signal is not expected to be present), to study background processes that could potentially contaminate the signal region as well.

The figure below shows the angular separation (or correlation) between electron-positron pairs. The red asterisks are the main data points, and consist of events with Etot in the signal region (19.5 MeV < Etot < 22.0 MeV). We can clearly see the bump occurring around angular separations of 115 degrees. The black asterisks consist of events in the orthogonal region, 5 MeV < Etot < 19 MeV. Clearly there is no bump around 115 degrees here. The researchers then assume that the distribution of background events in the orthogonal region (black asterisks) has the same shape inside the signal region (red asterisks), so they fit the black asterisks to a smooth curve (blue line), and rescale this curve to match the number of events in the signal region in the 40 to 90 degrees sub-range (the first few red asterisks). Finally, the re-scaled blue curve is used in the 90 to 135 degrees sub-range (the last few red asterisks) as the expected distribution.

Figure 4. Angular correlation between positrons and electrons emitted in Helium nuclear transitions to the ground state. Red dots are data in the signal region (sum of positron and electron energies between 19.5 and 22 MeV), and black dots are data in the orthogonal region (sum of energies between 5 and 19 MeV). The smooth blue curve is a fit to the orthogonal region data, which is then re-scaled to be used as the background estimate in the signal region. The blue, black, and magenta histograms are Monte Carlo simulations of expected backgrounds. The green curve is a fit to the data under the hypothesis of a new “X17” particle.

In addition to the data points and fitted curves mentioned above, the figure also reports the researchers’ estimates of the physics processes that cause the observed background. These are the black and magenta histograms, and their sum is the blue histogram. Finally, there is also a green curve on top of the red data, which is the best fit to a signal hypothesis, that is, assuming that a new particle with mass 16.84 ± 0.16 MeV is responsible for the bump in the high-angle region of the angular correlation plot.

The other main observable, the invariant mass of the electron-positron pair, is shown below.

Figure 5. Invariant mass distribution of emitted electrons and positrons in the transitions of Helium nuclei to the ground state. Red asterisks are data in the signal region (sum of electron and positron energies between 19.5 and 22 MeV), and black asterisks are data in the orthogonal region (sum of energies between 5 and 19 MeV). The green smooth curve is the best fit to the data assuming the existence of a 17 MeV particle.

The invariant mass is constructed from the equation

m_{e^+e^-} = \sqrt{(1 - y^2)\, E_{\textrm{tot}}^2 \, \textrm{sin}^2(\theta/2) + 2m_e^2 \left(1 + \frac{1+y^2}{1-y^2}\, \textrm{cos} \, \theta \right)}

where all relevant quantities refer to electron and positron observables: Etot is as before the sum of their energies, y is the ratio of their energy difference over their sum (y \equiv (E_{e^+} - E_{e^-})/E_{\textrm{tot}}), θ is the angular separation between them, and me is the electron and positron mass. This is just one of the standard ways to calculate the invariant mass of two daughter particles in a reaction, when the known quantities are the angular separation between them and their individual energies in the lab frame.
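A direct transcription of that formula into code (a sketch of my own; the function name and example numbers are illustrative):

import numpy as np

def invariant_mass(E_plus, E_minus, theta, m_e=0.511):
    """Invariant mass (MeV) of an e+e- pair from lab-frame energies (MeV)
    and opening angle theta (radians), using the formula above."""
    E_tot = E_plus + E_minus
    y = (E_plus - E_minus) / E_tot
    m_sq = (1 - y**2) * E_tot**2 * np.sin(theta / 2)**2 \
           + 2 * m_e**2 * (1 + (1 + y**2) / (1 - y**2) * np.cos(theta))
    return np.sqrt(m_sq)

# A symmetric pair with E_tot ~ 21 MeV and a ~115 degree opening angle
print(invariant_mass(10.5, 10.5, np.radians(115)))   # ~17.7 MeV, in the X17 ballpark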

The red asterisks are again the data in the signal region (19.5 MeV < Etot < 22 MeV), and the black asterisks are the data in the orthogonal region (5 MeV < Etot < 19 MeV). The green curve is a new best fit to a signal hypothesis, and in this case the best-fit scenario is a new particle with mass 17.00 ± 0.13 MeV, which is statistically compatible with the fit in the angular correlation plot. The significance of this fit is 7.2 sigma, which means the probability of the background hypothesis (i.e. no new particle) producing such large fluctuations in data is less than 1 in 390,682,215,445! It is remarkable and undeniable that a peak shows up in the data — the only question is whether it really is due to a new particle, or whether perhaps the authors failed to consider all possible backgrounds, or even whether there may have been an unexpected instrumental anomaly of some sort.
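As a generic statistics aside (not specific to this analysis), a quoted significance can be translated into a tail probability using the standard normal distribution:

from scipy.stats import norm

significance = 7.2
p_value = norm.sf(significance)   # one-sided Gaussian tail probability
print(f"p-value for {significance} sigma: {p_value:.1e}")   # ~3e-13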

According to the authors, the same particle that could explain the anomaly in the Beryllium case could also explain the anomaly here. I think this claim needs independent validation by the theory community. In any case, it is very interesting that similar excesses show up in two “independent” systems such as the Beryllium and the Helium transitions.

Some possible theoretical interpretations

There are a few particle interpretations of this result that can be made compatible with current experimental constraints. Here I’ll just briefly summarize some of the possibilities. For a more in-depth view from a theoretical perspective, check out Flip’s “Delirium over Beryllium” bite.

The new X17 particle could be the vector gauge boson (or mediator) of a protophobic force, i.e. a force that interacts preferentially with neutrons but not so much with protons. This would certainly be an unusual and new force, but not necessarily impossible. Theorists have to work hard to make this idea work, as you can see here.

Another possibility is that the X17 is a vector boson with axial couplings to quarks, which could explain, in the case of the original Beryllium anomaly, why the excess appears in only some transitions but not others. There are complete theories proposed with such vector bosons that could fit within current experimental constraints and explain the Beryllium anomaly, but they also include new additional particles in a dark sector to make the whole story work. If this is the case, then there might be new accessible experimental observables to confirm the existence of this dark sector and the vector boson showing up in the nuclear transitions seen by the Atomki group. This model is proposed here.

However, an important caveat about these explanations is in order: so far, they only apply to the Beryllium anomaly. I believe the theory community needs to validate the authors’ assumption that the same particle could explain this new anomaly in Helium, and that there aren’t any additional experimental constraints associated with the Helium signature. As far as I can tell, this has not been shown yet. In fact, the similar invariant mass is the only evidence so far that this could be due to the same particle. An independent and thorough theoretical confirmation is needed for high-stakes claims such as this one.

Questions and criticisms

In the years since the first Beryllium anomaly result, a few criticisms about the paper and about the experimental team’s history have been laid out. I want to mention some of those to point out that this is still a contentious result.

First, there is the group’s history of repeated claims of new particle discoveries every so often since the early 2000s. After experimental refutation of these claims by more precise measurements, there hasn’t been a proper and thorough discussion of why the original excesses were seen in the first place, and why they subsequently disappeared. Especially for such groundbreaking claims, a consistent history of solid experimental attitude towards one’s own research is very valuable when making future claims.

Second, others have mentioned that some fit curves seem to pass very close to most data points (n.b. I can’t seem to find the blog post where I originally read this or remember its author – if you know where it is, please let me know so I can give proper credit!). Take a look at the plot below, which shows the observed Etot distribution. In experimental plots, there is usually a statistical fluctuation of data points around the “mean” behavior, which is natural and expected. Below, in contrast, the data points are remarkably close to the fit. This doesn’t in itself mean there is anything wrong here, but it does raise an interesting question of how the plot and the fit were produced. It could be that this is not a fit to some prior expected behavior, but just an “interpolation”. Still, if that’s the case, then it’s not clear (to me, at least) what role the interpolation curve plays.

Figure 6. Sum of electron and positron energies distribution produced in the decay of Helium nuclei to the ground state. Black dots are data and the red curve is a fit.

Third, there is also the background fit to data in Figure 4 (black asterisks and blue line). As Ethan Siegel has pointed out, you can see how well the background fit matches data, but only in the 40 to 90 degrees sub-range. In the 90 to 135 degrees sub-range, the background fit is actually quite a bit poorer. In a less favorable interpretation of the results, this may indicate that whatever effect is causing the anomalous peak in the red asterisks is also causing the less-than-ideal fit in the black asterisks, where no signal due to a new boson is expected; if the excess were caused by some instrumental error, for instance, you’d expect to see effects in both curves. In any case, the background fit (blue curve) constructed from the black asterisks does not actually model the bump region very well, which weakens the argument for using it throughout all of the data. A more careful analysis of the background is warranted here.

Fourth, another criticism comes from the simplistic statistical treatment the authors employ on the data. They fit the red asterisks in Figure 4 with the “PDF”:

\textrm{PDF}(e^+ e^-) = N_{Bg} \times \textrm{PDF}(\textrm{data}) + N_{Sig} \times \textrm{PDF}(\textrm{sig})

where PDF stands for “Probability Density Function”, and in this case they are combining two PDFs: one derived from data, and one assumed from the signal hypothesis. The two PDFs are then “re-scaled” by the expected number of background events (N_{Bg}) and signal events (N_{sig}), according to Monte Carlo simulations. However, as others have pointed out, when you multiply a PDF by a yield such as N_{Bg}, you no longer have a PDF! A variable that incorporates yields is no longer a probability. This may just sound like a semantics game, but it does actually point to the simplicity of the treatment, and makes one wonder if there could be additional (and perhaps more serious) statistical blunders made in the course of data analysis.
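To make that point concrete, here is a minimal toy illustration (entirely my own, with made-up shapes) of the difference between a properly normalized mixture PDF and a sum of yield-scaled PDFs; in an extended maximum-likelihood fit the yields enter through a Poisson term for the total event count, not by rescaling the density itself.

import numpy as np
from scipy.stats import norm, expon

x = np.linspace(0, 10, 2001)
pdf_bkg = expon(scale=1).pdf(x)          # stand-in for the data-driven background shape
pdf_sig = norm(loc=5, scale=0.3).pdf(x)  # stand-in for the signal shape

N_bkg, N_sig = 900.0, 100.0
f_sig = N_sig / (N_sig + N_bkg)

yield_scaled = N_bkg * pdf_bkg + N_sig * pdf_sig          # integrates to ~N_bkg + N_sig
mixture_pdf  = (1 - f_sig) * pdf_bkg + f_sig * pdf_sig    # integrates to ~1: a genuine PDF

print(np.trapz(yield_scaled, x))   # ~1000 -> not a probability density
print(np.trapz(mixture_pdf, x))    # ~1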

Fifth, there is also of course the fact that no other experiments have seen this particle so far. This doesn’t mean that it’s not there, but particle physics is in general a field with very few “low-hanging fruits”. Most of the “easy” discoveries have already been made, and so every claim of a new particle must be compatible with dozens of previous experimental and theoretical constraints. It can be a tough business. Another example of this is the DAMA experiment, which has made claims of dark matter detection for almost 2 decades now, but no other experiments were able to provide independent verification (and in fact, several have provided independent refutations) of their claims.

I’d like to add my own thoughts to the previous list of questions and considerations.

The authors mention they correct the calibration of the detector efficiency with a small energy-dependent term based on a GEANT3 simulation. The updated version of the GEANT library, GEANT4, has been available for at least 20 years. I haven’t actually seen any results that use GEANT3 code since I’ve started in physics. Is it possible that the authors are missing a rather large effect in their physics expectations by using an older simulation library? I’m not sure, but just like the simplistic PDF treatment and the troubling background fit to the signal region, it doesn’t inspire as much confidence. It would be nice to at least have a more detailed and thorough explanation of what the simulation is actually doing (which maybe already exists but I haven’t been able to find?). This could also be due to a mismatch in the nuclear physics and high-energy physics communities that I’m not aware of, and perhaps nuclear physicists tend to use GEANT3 a lot more than high-energy physicists.

Also, it’s generally tricky to use Monte Carlo simulation to estimate efficiencies in data. One needs to make sure the experimental apparatus is well understood and be confident that their simulation reproduces all the expected features of the setup, which is often difficult to do in practice, as collider experimentalists know too well. I’d really like to see a more in-depth discussion of this point.

Finally, a more technical issue: from the paper, it’s not clear to me how the best fit to the data (red asterisks) was actually constructed. The authors claim:

Using the composite PDF described in Equation 1 we first performed a list of fits by fixing the simulated particle mass in the signal PDF to a certain value, and letting RooFit estimate the best values for NSig and NBg. Letting the particle mass lose in the fit, the best fitted mass is calculated for the best fit […]

When they let loose the particle mass in the fit, do they keep the “NSig” and “NBg” found with a fixed-mass hypothesis? If so, which fixed-mass NSig and which NBg do they use? And if not, what exactly was the purpose of performing the fixed-mass fits originally? I don’t think I fully got the point here.

Where to go from here

Despite the many questions surrounding the experimental approach, it’s still an interesting result that deserves further exploration. If it holds up with independent verification from other experiments, it would be an undeniable breakthrough, one that particle physicists have been craving for a long time now.

And independent verification is key here. Ideally, other experiments need to confirm that they also see this new boson before the acceptance of this result grows wider. Many upcoming experiments will be sensitive to a new X17 boson, as the original paper points out. In the next few years, we will actually have the possibility to probe this claim from multiple angles. Dedicated standalone experiments at the LHC such as FASER and CODEX-b will be able to probe long-lived particles produced at the proton-proton interaction point, and so should be sensitive to new particles such as axion-like particles (ALPs).

Another experiment that could have sensitivity to X17, and has come online this year, is PADME (disclaimer: I am a collaborator on this experiment). PADME stands for Positron Annihilation into Dark Matter Experiment and its main goal is to look for dark photons produced in the annihilation between positrons and electrons. You can find more information about PADME here, and I will write a more detailed post about the experiment in the future, but the gist is that PADME is a fixed-target experiment striking a beam of positrons (beam energy: 550 MeV) against a fixed target made of diamond (carbon atoms). The annihilation between positrons in the beam and electrons in the carbon atoms could give rise to a photon and a new dark photon via kinetic mixing. By measuring the incoming positron and the outgoing photon momenta, we can infer the missing mass which is carried away by the (invisible) dark photon.

If the dark photon is the X17 particle (a big if), PADME might be able to see it as well. Our dark photon mass sensitivity is roughly between 1 and 22 MeV, so a 17 MeV boson would be within our reach. But more interestingly, using the knowledge of where the new particle hypothesis lies, we might actually be able to set our beam energy to produce the X17 in resonance (using a beam energy of roughly 282 MeV). The resonance beam energy increases the number of X17s produced and could give us even higher sensitivity to investigate the claim.
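The quoted 282 MeV follows from the resonance condition for a positron annihilating on an atomic electron at rest, s = m_X^2 (a standard kinematics exercise, not taken from PADME documentation):

# Beam energy for resonant e+ e- -> X production on electrons at rest:
# s = 2 m_e (E_beam + m_e) = m_X^2  =>  E_beam = m_X^2 / (2 m_e) - m_e
m_e = 0.511    # MeV
m_X = 17.0     # MeV, hypothesized X17 mass
E_beam = m_X**2 / (2 * m_e) - m_e
print(f"Resonant beam energy: {E_beam:.0f} MeV")   # ~282 MeV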

An important caveat is that PADME can provide independent confirmation of X17, but cannot refute it. If the coupling between the new particle and our ordinary particles is too feeble, PADME might not see evidence for it. This wouldn’t necessarily reject the claim by Atomki; it would just mean that we would need a more sensitive apparatus to detect it. This might be achievable with the next generation of PADME, or with the new experiments mentioned above coming online in a few years.

Finally, in parallel with the experimental probes of the X17 hypothesis, it’s critical to continue gaining a better theoretical understanding of this anomaly. In particular, an important check is whether the proposed theoretical models that could explain the Beryllium excess also work for the new Helium excess. Furthermore, theorists have to work very hard to make these models compatible with all current experimental constraints, so they can look a bit contrived. Perhaps a thorough exploration of the theory landscape could lead to more models capable of explaining the observed anomalies as well as evading current constraints.

Conclusions

The recent results from the Atomki group raise the stakes in the search for Physics Beyond the Standard Model. The reported excesses in the angular correlation between electron-positron pairs in two different systems certainly seem intriguing. However, there are still a lot of questions surrounding the experimental methods, and given the nature of the claims made, a crystal-clear understanding of the results and the setup needs to be achieved. Experimental verification by at least one independent group is also required if the X17 hypothesis is to be confirmed. Finally, parallel theoretical investigations that can explain both excesses are highly desirable.

As Flip mentioned after the first excess was reported, even if this excess turns out to have an explanation other than a new particle, it’s a nice reminder that there could be interesting new physics in the light mass parameter space (e.g. MeV-scale), and a new boson in this range could also account for the dark matter abundance we see leftover from the early universe. But as Carl Sagan once said, extraordinary claims require extraordinary evidence.

In any case, this new excess gives us a chance to witness the scientific process in action in real time. The next few years should be very interesting, and hopefully will see the independent confirmation of the new X17 particle, or a refutation of the claim and an explanation of the anomalies seen by the Atomki group. So, stay tuned!

Further reading

CERN news

Ethan Siegel’s Forbes post

Flip Tanedo’s “Delirium over Beryllium” bite

Matt Strassler’s blog

Quanta magazine article on the original Beryllium anomaly

Protophobic force interpretation

Vector boson with axial couplings to quarks interpretation

Dark Photons in Light Places

Title: “Searching for dark photon dark matter in LIGO O1 data”

Authors: Huai-Ke Guo, Keith Riles, Feng-Wei Yang, & Yue Zhao

Reference: https://www.nature.com/articles/s42005-019-0255-0

There is very little we know about dark matter save for its existence. Its mass(es), its interactions, even the proposition that it consists of particles at all are mostly up to the creativity of the theorist. For those who don’t turn to modified theories of gravity to explain the gravitational effects on galaxy rotation and clustering that suggest a massive concentration of unseen matter in the universe (among other compelling evidence), there are a few more widely accepted explanations for what dark matter might be. These include weakly-interacting massive particles (WIMPs), primordial black holes, or new particles altogether, such as axions or dark photons.

In particle physics, this latter category is what’s known as the “hidden sector,” a hypothetical collection of quantum fields and their corresponding particles that are utilized in theorists’ toolboxes to help explain phenomena such as dark matter. In order to test the validity of the hidden sector, several experimental techniques have been concocted to narrow down the vast parameter space of possibilities, which generally consist of three strategies:

  1. Direct detection: Detector experiments look for low-energy recoils from dark matter particles colliding with nuclei, often involving emitted light or phonons.
  2. Indirect detection: These searches focus on potential decay products of dark matter particles, which depends on the theory in question.
  3. Collider production: As the name implies, colliders seek to produce dark matter in order to study its properties. This is reliant on the other two methods for verification.

The first detection of gravitational waves from a black hole merger in 2015 ushered in a new era of physics, in which the cosmological range of theory-testing is no longer limited to the electromagnetic spectrum. With LIGO (the Laser Interferometer Gravitational-Wave Observatory) brought to the table, proposals for the indirect detection of dark matter via gravitational waves began to spring up in the literature, with implications for primordial black hole detection or dark matter ensconced in neutron stars. Yet a new proposal, in a paper by Guo et al., suggests that direct dark matter detection with gravitational waves may be possible, specifically in the case of dark photons.

Dark photons are hidden sector particles in the ultralight regime of dark matter candidates. Theorized as the gauge boson of a U(1) gauge group, meaning the particle is a force-carrier akin to the photon of quantum electrodynamics, dark photons either do not couple or very weakly couple to Standard Model particles in various formulations. Unlike a regular photon, dark photons can acquire a mass via the Higgs mechanism. Since dark photons need to be non-relativistic in order to meet cosmological dark matter constraints, we can model them as a coherently oscillating background field: a plane wave with amplitude determined by dark matter energy density and oscillation frequency determined by mass. In the case that dark photons weakly interact with ordinary matter, this means an oscillating force is imparted. This sets LIGO up as a means of direct detection due to the mirror displacement dark photons could induce in LIGO detectors.

Figure 1: The experimental setup of the Advanced LIGO interferometer. We can see that light leaves the laser and is reflected between a few power recycling mirrors (PR), split by a beam splitter (BS), and bounced between input and end test masses (ITM and ETM). The entire system is mounted on seismically-isolated platforms to reduce noise as much as possible. Source: https://arxiv.org/pdf/1411.4547.pdf

LIGO consists of a Michelson interferometer, in which a laser shines upon a beam splitter which in turn creates two perpendicular beams. The light from each beam then hits a mirror, is reflected back, and the two beams combine, producing an interference pattern. In the actual LIGO detectors, the beams are reflected back some 280 times (down a 4 km arm length) and are split to be initially out of phase so that the photodiode detector should not detect any light in the absence of a gravitational wave. A key feature of gravitational waves is their polarization, which stretches spacetime in one direction and compresses it in the perpendicular direction in an alternating fashion. This means that when a gravitational wave passes through the detector, the effective length of one of the interferometer arms is reduced while the other is increased, and the photodiode will detect an interference pattern as a result. 

LIGO has been able to reach an incredible sensitivity of one part in 10^{23} in its detectors over a 100 Hz bandwidth, meaning that its instruments can detect mirror displacements as small as roughly 1/10,000th the size of a proton. Taking advantage of this number, Guo et al. demonstrated that the differential strain (the ratio of the relative displacement of the mirrors to the interferometer’s arm length, or h = \Delta L/L) is also sensitive to ultralight dark matter via the modeling process described above. The acceleration induced by the dark photon dark matter on the LIGO mirrors is ultimately proportional to the dark electric field and the charge-to-mass ratio of the mirrors themselves.
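To put that sensitivity in perspective, here is my own order-of-magnitude arithmetic (the exact factor depends on the assumed proton radius and on the effective arm length from the multiple reflections):

# Order-of-magnitude mirror displacement corresponding to LIGO's quoted strain sensitivity.
h = 1e-23              # dimensionless strain
L_arm = 4e3            # m, physical arm length
r_proton = 0.84e-15    # m, approximate proton charge radius (assumed value)

delta_L = h * L_arm
print(f"Displacement: {delta_L:.0e} m (~1/{r_proton / delta_L:,.0f} of a proton radius)")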

Once this signal is approximated, next comes the task of estimating the background. Since the coherence length is of order 10^9 m for a dark photon field oscillating at order 100 Hz, a distance much larger than the separation between the LIGO detectors at Hanford and Livingston (in Washington and Louisiana, respectively), the signals from dark photons at both detectors should be highly correlated. This has the effect of reducing the noise in the overall signal, since the noise in each of the detectors should be statistically independent. The signal-to-noise ratio can then be computed directly using discrete Fourier transforms from segments of data along the total observation time. However, this process of breaking up the data, known as “binning,” means that some signal power is lost and must be corrected for.
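The quoted coherence length can be recovered from the dark matter field’s de Broglie wavelength, assuming the standard galactic virial velocity of roughly 10^-3 c (my assumption; it is not stated explicitly above):

# de Broglie (coherence) length of an ultralight dark-photon field oscillating at ~100 Hz.
c = 3e8          # m/s
f = 100.0        # Hz, oscillation frequency (set by the dark photon mass)
v_over_c = 1e-3  # typical galactic dark matter velocity in units of c (assumed)

lambda_coherence = (c / f) / v_over_c   # lambda = h/(m v) = (c/f)/(v/c)
print(f"Coherence length: {lambda_coherence:.0e} m")   # ~3e9 m, far larger than the detector separation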

Figure 2: The end result of the Guo et al. analysis of dark photon-induced mirror displacement in LIGO. Above we can see a plot of the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. We can see that over further Advanced LIGO runs, up to O4-O5, these limits are expected to improve by several orders of magnitude. Source: https://www.nature.com/articles/s42005-019-0255-0

In applying this analysis to the strain data from the first run of Advanced LIGO, Guo et al. generated a plot which sets new limits on the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. There are a few key subtleties in this analysis, primarily that there are many potential dark photon models which rely on different gauge groups, yet this framework allows for similar analysis of other dark photon models. With plans for future iterations of gravitational wave detectors, further improved sensitivities, and many more data runs, there seems to be great potential to apply LIGO to direct dark matter detection. It’s exciting to see these instruments in action for discoveries that were not in mind when LIGO was first designed, and I’m looking forward to seeing what we can come up with next!

Learn More:

  1. An overview of gravitational waves and dark matter: https://www.symmetrymagazine.org/article/what-gravitational-waves-can-say-about-dark-matter
  2. A summary of dark photon experiments and results: https://physics.aps.org/articles/v7/115 
  3. Details on the hardware of Advanced LIGO: https://arxiv.org/pdf/1411.4547.pdf
  4. A similar analysis done by Pierce et al.: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.121.061102

Letting the Machines Search for New Physics

Article: “Anomaly Detection for Resonant New Physics with Machine Learning”

Authors: Jack H. Collins, Kiel Howe, Benjamin Nachman

Reference: https://arxiv.org/abs/1805.02664

One of the main goals of LHC experiments is to look for signals of physics beyond the Standard Model; new particles that may explain some of the mysteries the Standard Model doesn’t answer. The typical way this works is that theorists come up with a new particle that would solve some mystery and spell out how it interacts with the particles we already know about. Then experimentalists design a strategy for how to search for evidence of that particle in the mountains of data that the LHC produces. So far none of the searches performed in this way have seen any definitive evidence of new particles, leading experimentalists to rule out a lot of the parameter space of theorists’ favorite models.

A summary of searches the ATLAS collaboration has performed. The left columns show the model being searched for, the experimental signature that was looked at, and how much data has been analyzed so far. The colored bars show the regions that have been ruled out based on the null result of the search. As you can see, we have already covered a lot of territory.

Despite this extensive program of searches, one might wonder if we are still missing something. What if there was a new particle in the data, waiting to be discovered, but theorists haven’t thought of it yet so it hasn’t been looked for? This gives experimentalists a very interesting challenge: how do you look for something new when you don’t know what you are looking for? One approach, which Particle Bites has talked about before, is to look at as many final states as possible, compare what you see in data to simulation, and look for any large deviations. This is a good approach, but may be limited in its sensitivity to small signals. When a normal search for a specific model is performed, one usually makes a series of selection requirements on the data, chosen to remove background events and keep signal events. Nowadays these selection requirements are getting more complex, often using neural networks, a common type of machine learning model, trained to discriminate signal versus background. Without some sort of selection like this you may miss a smaller signal within the large number of background events.

This new approach lets the neural network itself decide what signal to look for. It uses part of the data itself to train a neural network to find a signal, and then uses the rest of the data to actually look for that signal. This lets you search for many different kinds of models at the same time!

If that sounds like magic, let’s try to break it down. You have to assume something about the new particle you are looking for, and the technique here assumes it forms a resonant peak. This is a common assumption of searches. If a new particle were being produced in LHC collisions and then decaying, you would get an excess of events where the invariant mass of its decay products has a particular value. So if you plotted the number of events in bins of invariant mass, you would expect a new particle to show up as a nice peak on top of a relatively smooth background distribution. This is a very common search strategy, often colloquially referred to as a ‘bump hunt’. This strategy was how the Higgs boson was discovered in 2012.

A histogram showing the invariant mass of photon pairs. The Higgs boson shows up as a bump at 125 GeV. Plot from here

The other secret ingredient we need is the idea of Classification Without Labels (abbreviated CWoLa, pronounced like koala). The way neural networks are usually trained in high energy physics is using fully labeled simulated examples. The network is shown a set of examples and guesses which are signal and which are background. Using the true label of each event, the network is told which of the examples it got wrong, its parameters are updated accordingly, and it slowly improves. The crucial challenge when trying to train using real data is that we don’t know the true label of any of the data, so it’s hard to tell the network how to improve. Rather than trying to use the true labels of any of the events, the CWoLa technique uses mixtures of events. Let’s say you have 2 mixed samples of events, sample A and sample B, but you know that sample A has more signal events in it than sample B. Then, instead of trying to classify signal versus background directly, you can train a classifier to distinguish between events from sample A and events from sample B, and what that network will learn to do is distinguish between signal and background. You can actually show that the optimal classifier for distinguishing the two mixed samples is the same as the optimal classifier of signal versus background. Even more amazing, this technique actually works quite well in practice, achieving good results even when there are only a few percent of signal events in one of the samples.
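A toy version of the CWoLa idea (entirely my own sketch, using scikit-learn and a single Gaussian feature) shows that a classifier trained only on which mixed sample an event came from still learns to separate signal from background:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_mixture(n, f_sig):
    # One feature: background ~ N(0,1), signal ~ N(2,1); also return the hidden true labels.
    n_sig = int(f_sig * n)
    x = np.concatenate([rng.normal(0, 1, n - n_sig), rng.normal(2, 1, n_sig)])
    y_true = np.concatenate([np.zeros(n - n_sig), np.ones(n_sig)])
    return x.reshape(-1, 1), y_true

# Sample A is signal-enriched, sample B is signal-depleted; training only uses WHICH sample.
xA, yA_true = make_mixture(20000, f_sig=0.10)
xB, yB_true = make_mixture(20000, f_sig=0.01)
X = np.vstack([xA, xB])
which_sample = np.concatenate([np.ones(len(xA)), np.zeros(len(xB))])

clf = GradientBoostingClassifier().fit(X, which_sample)

# Evaluate against the hidden true labels: the mixed-sample classifier separates S from B.
scores = clf.predict_proba(X)[:, 1]
print("AUC vs true signal/background labels:",
      roc_auc_score(np.concatenate([yA_true, yB_true]), scores))

For these toy Gaussians the AUC comes out close to the fully supervised optimum (roughly 0.92), even though the two training samples differ only modestly in signal content.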

An illustration of the CWoLa method. A classifier trained to distinguish between two mixed samples of signal and background events can learn to classify signal versus background. Taken from here

The technique described in the paper combines these two ideas in a clever way. Because we expect the new particle to show up in a narrow region of invariant mass, you can use some of your data to train a classifier to distinguish events in a given slice of invariant mass from other events. If there is no signal with a mass in that region, the classifier should essentially learn nothing, but if there is a signal in that region, the classifier should learn to separate signal and background. Then one can apply that classifier to select events in the rest of your data (which hasn’t been used in the training) and look for a peak that would indicate a new particle. Because you don’t know ahead of time what mass any new particle should have, you scan over the whole range you have sufficient data for, looking for a new particle in each slice.

The specific case that they use to demonstrate the power of this technique is new particles decaying to pairs of jets. On the surface, jets, the large sprays of particles produced when a quark or gluon is made in an LHC collision, all look the same. But actually the insides of jets, their sub-structure, can contain very useful information about what kind of particle produced them. If a newly produced particle decays into other particles, like top quarks, W bosons or some new BSM particle, before decaying into quarks, then there will be a lot of interesting sub-structure in the resulting jet, which can be used to distinguish it from regular jets. In this paper the neural network uses information about the sub-structure of both of the jets in an event to determine if the event is signal-like or background-like.

The authors test out their new technique on a simulated dataset, containing some events where a new particle is produced and a large number of QCD background events. They train a neural network to distinguish events in a window of invariant mass of the jet pair from other events. With no selection applied there is no visible bump in the dijet invariant mass spectrum. With their technique they are able to train a classifier that can reject enough background such that a clear mass peak of the new particle shows up. This shows that you can find a new particle without relying on searching for a particular model, allowing you to be sensitive to particles overlooked by existing searches.

Demonstration of the bump hunt search. The shaded histogram is the amount of signal in the dataset. The different levels of blue points show the data remaining after applying tighter and tighter selections based on the neural network classifier score. The red line is the predicted amount of background events based on fitting the sideband regions. One can see that for the tightest selection (bottom set of points), the data forms a clear bump over the background estimate, indicating the presence of a new particle.

This paper was one of the first to really demonstrate the power of machine-learning based searches. There is actually a competition being held to inspire researchers to try out other techniques on a mock dataset, so expect to see more new search strategies utilizing machine learning being released soon. Of course the real excitement will be when a search like this is applied to real data and we can see if machines can find new physics that we humans have overlooked!

Read More:

  1. Quanta Magazine Article “How Artificial Intelligence Can Supercharge the Search for New Particles”
  2. Blog Post on the CWoLa Method “Training Collider Classifiers on Real Data”
  3. Particle Bites Post “Going Rogue: The Search for Anything (and Everything) with ATLAS”
  4. Blog Post on applying ML to top quark decays “What does Bidirectional LSTM Neural Networks has to do with Top Quarks?”
  5. Extended Version of Original Paper “Extending the Bump Hunt with Machine Learning”

Discovering the Top Quark

This post is about the discovery of the most massive quark in the Standard Model, the Top quark. Below is a “discovery plot” [1] from the Collider Detector at Fermilab collaboration (CDF). Here is the original paper.

This plot confirms the existence of the Top quark. Let’s understand how.

For each proton collision event that passes certain selection conditions, the horizontal axis shows the best estimate of the Top quark mass. These selection conditions encode the particle “fingerprint” of the Top quark. Out of all possible proton collision events, we only want to look at the ones that perhaps came from Top quark decays. This subgroup of events can give us a best guess at the mass of the Top quark, which is what is being plotted on the x axis.

On the vertical axis is the number of these events.

The dashed distribution is the number of these events originating from the Top quark if the Top quark exists and decays this way. This could very well not be the case.

The dotted distribution is the background for these events, events that did not come from Top quark decays.

The solid distribution is the measured data.

To claim a discovery, the background (dotted) plus the signal (dashed) should equal the measured data (solid). We can run simulations for different top quark masses to give us distributions of the signal until we find one that matches the data. The inset at the top right shows that a Top quark mass of 175 GeV best reproduces the measured data.
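As a rough illustration of this template idea (a toy sketch, not CDF's actual likelihood fit; the histogram shapes and yields below are invented), one can scan over hypothesized top masses, build the expected signal-plus-background distribution for each, and keep the mass whose template agrees best with the data, for example by minimizing a chi-square:

    # Toy chi-square template scan for a mass measurement (illustrative numbers only).
    import numpy as np

    rng = np.random.default_rng(1)
    bins = np.linspace(100, 260, 33)
    centers = 0.5 * (bins[:-1] + bins[1:])

    def signal_template(mass, n_events=100, width=20.0):
        """Expected reconstructed-mass distribution for a given true mass (toy Gaussian)."""
        shape = np.exp(-0.5 * ((centers - mass) / width) ** 2)
        return n_events * shape / shape.sum()

    background = 20 * np.exp(-(centers - 100) / 60.0)       # falling non-top background (toy)
    data = rng.poisson(background + signal_template(175))   # pseudo-data with a 175 GeV "top"

    def chi2(mass):
        expected = background + signal_template(mass)
        return np.sum((data - expected) ** 2 / expected)

    masses = np.arange(150, 201, 5)
    best = masses[int(np.argmin([chi2(m) for m in masses]))]
    print(f"best-fit toy top mass: {best} GeV")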

Taking a step back from the technicalities, the Top quark is special because it is the heaviest of all the fundamental particles. In the Standard Model, particles acquire their mass by interacting with the Higgs field, and particles with more mass interact more strongly with the Higgs. The Top quark being so heavy is an indicator that any new physics involving the Higgs may be linked to the Top quark.


References / Further Reading

[1] – Observation of Top Quark Production in pp Collisions with the Collider Detector at Fermilab – This is the “discovery paper” announcing experimental evidence of the Top.

[2] – Observation of tt(bar)H Production – Who is to say that the Top and the Higgs even have significant interactions to lowest order? The CMS collaboration finds evidence that they do in fact interact at “tree-level.”

[3] – The Perfect Couple: Higgs and top quark spotted together – This article further describes the interconnection between the Higgs and the Top.

To the Standard Model, and Beyond! with Kaon Decay

Title: “New physics implications of recent search for K_L \rightarrow \pi^0 \nu \bar{\nu} at KOTO”

Author: Kitahara et al.

Reference: https://arxiv.org/pdf/1909.11111.pdf

The Standard Model, though remarkably accurate in its depiction of many physical processes, is incomplete. There are a few key reasons to think this: most prominently, it fails to account for gravitation, dark matter, and dark energy. There are also a host of more nuanced issues: it is plagued by “fine tuning” problems, whereby its parameters must be tweaked in order to align with observation, and “free parameter” problems, which come about since the model requires the direct insertion of parameters such as masses and charges rather than providing explanations for their values. This strongly points to the existence of as-yet undetected particles and the inevitability of a higher-energy theory. Since gravity should be a theory living at the Planck scale, at which both quantum mechanics and general relativity become relevant, this is to be expected. 

A promising strategy for probing physics beyond the Standard Model is to look at decay processes that are incredibly rare in nature, since their small theoretical uncertainties mean that only a few event detections are needed to signal new physics. A primary example of this scenario in action is the discovery of the positron via particle showers in a cloud chamber back in 1932. Since particle physics models of the time predicted zero anti-electron events during these showers, just one observation was enough to herald a new particle. 

The KOTO experiment, conducted at the Japan Proton Accelerator Research Complex (J-PARC), takes advantage of this strategy. The experiment was designed specifically to investigate a promising rare decay channel: K_L \rightarrow \pi^0 \nu \bar{\nu}, the decay of a neutral long kaon into a neutral pion, a neutrino, and an antineutrino. Let’s break down this interaction and discuss its significance. The neutral kaon, a meson composed of a down quark and a strange antiquark, comes in both long and short varieties (quantum superpositions of the kaon and its antikaon), which are distinguished by their decay times. The Standard Model predicts a branching ratio of 3 \times 10^{-11} for this particular decay process, meaning that out of all the neutral long kaons that decay, only this tiny fraction decay into the combination of a neutral pion, neutrino, and antineutrino, making it incredibly rare for this process to be observed in nature.

The Feynman diagram describing how a neutral pion, neutrino, and antineutrino are produced from a neutral long kaon. We note the production of two photons, a key observation for the KOTO experiment’s verification of event detection, as this differentiates this process from other neutral long kaon decay channels. Source: https://arxiv.org/pdf/1910.07585.pdf

Here’s where it gets exciting. The KOTO experiment recently reported four signal events within this decay channel where the Standard Model predicts just 0.10 \pm 0.02 events. If all four of these events are confirmed as the desired neutral long kaon decays, new physics is required to explain the enhanced signal. There are several possibilities, recently explored in a new paper by Kitahara et al., for what this new physics might be. Before we go into too much detail, let’s consider how KOTO’s observation came to be.
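To get a feel for why four events on an expectation of 0.10 is so striking, here is a quick back-of-the-envelope Poisson estimate. It ignores the quoted uncertainty on the expectation (and any background subtleties), so it is only indicative:

    # Probability of seeing 4 or more events when 0.10 are expected (simple Poisson estimate).
    from scipy.stats import poisson, norm

    expected = 0.10
    p_value = poisson.sf(3, expected)     # P(N >= 4 | mean = 0.10)
    z_score = norm.isf(p_value)           # rough one-sided Gaussian equivalent
    print(f"p-value ~ {p_value:.1e}, roughly {z_score:.1f} sigma")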

The KOTO experiment is a fixed-target experiment, in which particles are accelerated and collide with something stationary. In this case, protons at an energy of 30 GeV collide with a gold target, producing a beam of kaons after other products are diverted with collimators and magnets. The observation of the desired K_L \rightarrow \pi^0 \nu \bar{\nu} mode is particularly difficult experimentally for several reasons. First, the initial and final decay products are neutral, making them harder to detect since they do not leave ionization trails, a primary handle for detecting charged particles. Second, neutral pions are produced via several other kaon decay channels, requiring several strategies to differentiate neutral pions produced by K_L \rightarrow \pi^0 \nu \bar{\nu} from those produced by K_L \rightarrow 3 \pi^0, K_L \rightarrow 2\pi^0, and K_L \rightarrow \pi^0 \pi^+ \pi^-, among others. As we can see in the Feynman diagram above, the desired decay mode produces two photons, allowing KOTO to observe these photons and their transverse momentum in order to pinpoint a K_L \rightarrow \pi^0 \nu \bar{\nu} decay. In terms of experimental construction, KOTO included charged-particle veto detectors in order to reject events with charged particles in the final state, and a systematic study of background events was performed in order to discount hadron showers originating from neutrons in the beam line.
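As a toy illustration of the diphoton handle (a simplified sketch, not KOTO's actual reconstruction, which also infers the decay vertex along the beam axis using the \pi^0 mass constraint), one can compute the invariant mass and transverse momentum of a photon pair from assumed photon momenta:

    # Toy diphoton kinematics: invariant mass and transverse momentum of a photon pair.
    import numpy as np

    def diphoton_mass_and_pt(p1, p2):
        """p1, p2: photon 3-momenta in MeV (photons are massless, so E = |p|)."""
        e1, e2 = np.linalg.norm(p1), np.linalg.norm(p2)
        total = np.asarray(p1) + np.asarray(p2)
        m2 = (e1 + e2) ** 2 - np.dot(total, total)   # invariant mass squared
        pt = np.hypot(total[0], total[1])            # momentum transverse to the beam (z) axis
        return np.sqrt(max(m2, 0.0)), pt

    # Two illustrative photon momenta in MeV (arbitrary values, not from a real event).
    # A genuine pi0 candidate would reconstruct to an invariant mass near 135 MeV.
    p1 = np.array([60.0, 20.0, 200.0])
    p2 = np.array([-30.0, 10.0, 150.0])
    mass, pt = diphoton_mass_and_pt(p1, p2)
    print(f"diphoton mass = {mass:.0f} MeV, pT = {pt:.0f} MeV")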

This setup was in service of KOTO’s goal to explore the question of CP violation with long kaon decay. CP violation refers to the violation of charge-parity symmetry, the combination of charge-conjugation symmetry (in which a theory is unchanged when we swap a particle for its antiparticle) and parity symmetry (in which a theory is invariant when left and right directions are swapped). We seek to understand why some processes seem to preserve CP symmetry when the Standard Model allows for violation, as is the case in quantum chromodynamics (QCD), and why some processes break CP symmetry, as is seen in the quark mixing matrix (CKM matrix) and the neutrino oscillation matrix. Overall, CP violation has implications for matter-antimatter asymmetry, the question of why the universe seems to be composed predominantly of matter when particle creation and decay processes produce equal amounts of both matter and antimatter. An imbalance of matter and antimatter in the universe could be created if CP violation existed under the extreme conditions of the early universe, mere seconds after the Big Bang. Explanations for matter-antimatter asymmetry that do not involve CP violation generally require the existence of primordial matter-antimatter asymmetry, effectively dodging the fundamental question. The observation of CP violation with KOTO could provide critical evidence toward an eventual answer.  

The Kitahara paper provides three interpretations of KOTO’s observation that incorporate physics beyond the Standard Model: new heavy physics, new light physics, and new particle production. The first, new heavy physics, amplifies the expected Standard Model signal via the incorporation of new operators that couple to existing Standard Model particles. If this coupling is suppressed, it could adequately explain the observed boost in the branching ratio. Light new physics involves reinterpreting the neutrino-antineutrino pair as a new light particle. Factoring in experimental constraints, this new light particle should decay with a finite lifetime on the order of 0.1-0.01 nanoseconds, making it almost completely invisible to experiment. Finally, new particles could be produced within the K_L \rightarrow \pi^0 \nu \bar{\nu} decay channel; such a particle should be light and long-lived in order to allow for its decay to two photons. The details of these new particle scenarios are subject to constraints from other particle physics processes, but each serves to increase the branching ratio through direct production of more final-state particles. On the whole, this demonstrates the potential for the K_L \rightarrow \pi^0 \nu \bar{\nu} channel to provide a window to physics beyond the Standard Model.

Of course, this analysis presumes the accuracy of KOTO’s four signal events. Pending the confirmation of these detections, there are several exciting possibilities for physics beyond the Standard Model, so be sure to keep your eye on this experiment!

Learn More:

  1. An overview of the KOTO experiment’s data taking: https://arxiv.org/pdf/1910.07585.pdf
  2. A description of the sensitivity involved in the KOTO experiment’s search: https://j-parc.jp/en/topics/2019/press190304.html
  3. More insights into CP violation: https://www.symmetrymagazine.org/article/charge-parity-violation

Lazy photons at the LHC

Title: “Search for long-lived particles using delayed photons in proton-proton collisions at √s = 13 TeV”

Author: CMS Collaboration

Reference: https://arxiv.org/abs/1909.06166 (submitted to Phys. Rev. D)

An interesting group of searches for new physics at the LHC that has been gaining more attention in recent years relies on reconstructing and identifying physics objects that are displaced from the original proton-proton collision point. Several theoretical models predict such signatures due to the decay of long-lived particles (LLPs) that are produced in these collisions. Theories with LLPs typically feature a suppression of the available phase space for the decay of these particles, or a weak coupling between them and Standard Model (SM) particles.

An appealing feature of these signatures is that backgrounds can be greatly reduced by searching for displaced objects, since most SM processes produce only prompt particles (i.e. produced immediately following the collision, within the primary vertex resolution). Given that the sensitivity to new physics is determined both by the presence of signal events and by the absence of background events, the sensitivity to models with LLPs is increased by the expectation of low SM backgrounds.

A recent search for new physics with LLPs performed by the CMS Collaboration uses delayed photons as its driving experimental signature. For this search, events of interest contain delayed photons and missing transverse momentum (MET1). This signature is predicted at the LHC by various theories such as Gauge-Mediated Supersymmetry Breaking (GMSB), where long-lived supersymmetric particles produced in proton-proton (pp) collisions decay in a peculiar pattern, giving rise to stable particles that escape the detector (hence the MET) as well as photons that are displaced from the interaction point. The expected signature is shown in Figure 1.

Figure 1. Example Feynman diagrams of Gauge-Mediated Supersymmetry Breaking (GMSB) processes that can give rise to final-state signatures at CMS consisting of two (left) or one (right) displaced photons, plus supersymmetric particles that escape the detector and show up as missing transverse momentum (MET).

The main challenge of this analysis is the physics reconstruction of delayed photons, something that the LHC experiments were not originally designed to do. Both the detector and the physics software are optimized for prompt objects originating from the pp interaction point, where the vast majority of relevant physics happens at the LHC. This difference is illustrated in Figure 2.

Figure 2. Difference between prompt and displaced photons as seen at CMS with the electromagnetic calorimeter (ECAL). The ECAL crystals are oriented towards the interaction point and so for displaced photons both the shape of the electromagnetic shower generated inside the crystals and the arrival time of the photons are different from prompt photons produced at the proton-proton collision. Source: https://cms.cern/news/its-never-too-late-photons-cms

In order to reconstruct delayed photons, a separate reconstruction algorithm was developed that specifically looked for signs of photons out-of-sync with the pp collision. Before this development, out-of-time photons in the detector were considered something of an ‘incomplete or misleading reconstruction’ and discarded from the analysis workflow.

In order to use delayed photons in analysis, a precise understanding of CMS’s calorimeter timing capabilities is required. The collaboration measured the timing resolution of the electromagnetic calorimeter to be around 400 ps, and that sets the detector’s sensitivity to delayed photons.

Other relevant components of this analysis include a dedicated trigger (for 2017 data), developed to select events consistent with a single displaced photon. The identification of a displaced photon at the trigger level relies on the shape of the electromagnetic shower it deposits on the calorimeter: displaced photons produce a more elliptical shower, whereas prompt photons produce a more circular one. In addition, an auxiliary trigger (used for 2016 data, before the special trigger was developed) requires two photons, but no displacement.

The event selection requires one or two well-reconstructed high-momentum photons in the detector (depending on year), and at least 3 jets. The two main kinematic features of the event, the large arrival time (i.e. consistent with time of production delayed relative to the pp collision) and large MET, are used instead to extract the signal and the background yields (see below).

In general for LLP searches, it is difficult to estimate the expected background from Monte Carlo (MC) simulation alone, since a large fraction of backgrounds are due to detector inefficiencies and/or physics events that have poor MC modeling. Instead, this analysis estimates the background from the data itself, by using the so-called ‘ABCD’ method.

The ABCD method consists of placing events in data that pass the signal selection on a 2D histogram, with a suitable choice of two kinematic quantities as the X and Y variables. These two variables must be uncorrelated, a very important assumption. Then this histogram is divided into 4 regions or ‘bins’, and the one with the highest fraction of expected signal events becomes the ‘signal’ bin (call it the ‘C’ bin). The other three bins should contain mostly background events, and with the assumption that X and Y variables are uncorrelated, it is possible to predict the background in C by using the observed backgrounds in A, B, and D:

C_{\textrm{pred}} = \frac{B_{\textrm{obs}} \times D_{\textrm{obs}}}{A_{\textrm{obs}}}

Using this data-driven background prediction for the C bin, all that remains is to compare the actual observed yield in C to figure out if there is an excess, which could be attributed to the new physics under investigation:

\textrm{excess} = C_{\textrm{obs}} - C_{\textrm{pred}}
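For concreteness, here is a minimal sketch of the ABCD bookkeeping on made-up yields (the counts below are purely illustrative and are not taken from this analysis):

    # Toy ABCD background estimate: predict the signal-region yield from the other three bins.
    import math

    observed = {"A": 120.0, "B": 45.0, "D": 38.0, "C": 18.0}   # made-up event counts

    c_pred = observed["B"] * observed["D"] / observed["A"]      # C_pred = B * D / A
    excess = observed["C"] - c_pred

    # Crude statistical uncertainty from Poisson fluctuations in each input bin.
    rel_unc = math.sqrt(1 / observed["A"] + 1 / observed["B"] + 1 / observed["D"])
    print(f"predicted background in C: {c_pred:.1f} +/- {c_pred * rel_unc:.1f}, "
          f"observed: {observed['C']:.0f}, excess: {excess:.1f}")

In the real analysis the prediction and its uncertainty come from a more careful statistical treatment, but the core of the method is this simple ratio.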

As an example, Table 1 shows the number of background events predicted and the number of events in data observed for the 2016 run.

Table 1. Observed yield in data (N_{\textrm{obs}}^{\textrm{data}}) and predicted background yields (N_{\text{bkg(no C)}}^{\textrm{post-fit}} and N_{\textrm{bkg}}^{\textrm{post-fit}}) in the LHC 2016 run, for all four bins (C is the signal-enriched bin). The observed data is entirely compatible with the predicted background, and no excess is seen.

Combining all the data, the CMS Collaboration did not find an excess of events over the expected background, which would have suggested evidence of new physics. Using statistical analysis, the collaboration can place upper limits on the possible mass and lifetime of new supersymmetric particles predicted by GMSB, based on the absence of excess events. The final result of this analysis is shown in Figure 3, where a mass of up to 500 GeV for the neutralino particle is excluded for lifetimes of 1 meter (here we measure lifetime in units of length by multiplying by the speed of light: c\tau = 1 meter).

Figure 3. Exclusion plots for the GMSB model, featuring exclusion contours of masses and lifetimes for the lightest supersymmetric particle (the neutralino). In its most sensitive region (around lifetimes of 1 meter) the CMS result excludes neutralino masses below 500 GeV, while for lower masses (100-200 GeV) the lifetime exclusion is quite high, reaching around 100 meters.

The mass coverage of this search is higher than the previous search done by the ATLAS Collaboration with run 1 data (i.e. years 2010-2012 only), with a much higher sensitivity to longer lifetimes (up to a factor of 10 depending on the mass). But the ATLAS detector has a longitudinally-segmented calorimeter which allows them to precisely measure the direction of displaced photons, and when they do release their results for this search using run 2 data (2016-2018), it should also feature quite a large gain in sensitivity, potentially overshadowing this CMS result. So stay tuned for this exciting cat-and-mouse game between LHC experiments!

Footnotes

1: Here we use the notation MET instead of P_T^{\textrm{miss}} when referring to missing transverse momentum for typesetting reasons.

The lighter side of Dark Matter

Article title: “Absorption of light dark matter in semiconductors”

Authors: Yonit Hochberg, Tongyan Lin, and Kathryn M. Zurek

Reference: arXiv:1608.01994

Direct detection strategies for dark matter (DM) have grown significantly beyond the dominant narrative of looking for scattering of these ghostly particles off of large and heavy nuclei. Such experiments search for Weakly-Interacting Massive Particles (WIMPs) in the many-GeV (gigaelectronvolt) mass range. WIMP candidates for DM are predicted by many beyond Standard Model (SM) theories, one of the most popular being supersymmetry. Once dubbed the “WIMP Miracle”, these particles were found to possess just the right properties to be suitable as dark matter. However, as these experiments become more and more sensitive, the null results put increasing stress on the feasibility of the WIMP hypothesis.

Typical detectors, like those of LUX, XENON, PandaX and ZEPLIN, detect flashes of light (scintillation) resulting from particle collisions in noble liquids like argon or xenon. Other cryogenic-type detectors, used in experiments like CDMS, cool semiconductor arrays down to very low temperatures to search for ionization and phonon (quantized lattice vibration) production in crystals. While these experiments have been incredibly successful at deriving direct detection limits for heavy dark matter, new ideas are emerging to look into the lighter side.

Recently, DM below the GeV range has become the new target of a huge range of detection methods, utilizing new techniques and functional materials: semiconductors, superconductors and even superfluid helium. For such light DM, recoils off the much lighter electrons in fact become a much more sensitive probe than recoils off large and heavy nuclear targets.

There are several ways that one can consider light dark matter interacting with electrons. One popular scenario introduces a new gauge boson that has a very small ‘kinetic’ mixing with the ordinary photon of the Standard Model. If massive, these ‘dark photons’ could themselves be dark matter candidates and an interesting avenue for new physics. The specifics of their interaction with the electron are then determined by the mass of the dark photon and the strength of its mixing with the SM photon.

Typically the gap between the valence and conduction bands in semiconductors like silicon and germanium is around an electronvolt (eV). When the energy deposited by the dark matter particle exceeds the band gap, electron excitations in the material can be detected through a complicated secondary cascade of electron-hole pair generation. Below the band gap, however, there is not enough energy to excite an electron into the conduction band, so detection proceeds through low-energy multi-phonon excitations, the dominant process being the emission of two back-to-back phonons.

In both these regimes, the absorption rate of dark matter in the material is directly related to the properties of the material, namely its optical properties. In particular, the absorption rate for ordinary SM photons is determined by the polarization tensor in the medium, and in turn the complex conductivity, \hat{\sigma}(\omega)=\sigma_{1}+i \sigma_{2} , through what is known as the optical theorem. Ultimately this describes the response of the material to an electromagnetic field, which has been measured in several energy ranges. This ties together the astrophysical properties of how the dark matter moves through space and the fundamental description of DM-electron interactions at the particle level.

In a more technical sense, the rate of DM absorption, in events per unit time per unit target mass, is given by the following equation (a rough numerical sketch follows the symbol definitions below):

R = \frac{1}{\rho} \frac{\rho_{DM}}{m_{A'}} \kappa_{eff}^{2} \sigma_{1}

  • \rho – mass density of the target material
  • \rho_{DM} – local dark matter mass density (about 0.3 GeV/cm^3) in the galactic halo
  • m_{A'} – mass of the dark photon particle
  • \kappa_{eff} – kinetic mixing parameter (in-medium)
  • \sigma_1 – real part of the complex conductivity, which sets the absorption rate of ordinary SM photons
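Here is the rough numerical sketch promised above, showing how these pieces combine into a rate per kilogram-year once natural units are converted. All input values in the example call are placeholders for illustration and are not taken from the paper:

    # Toy evaluation of the dark-photon absorption rate
    #   R = (1/rho) * (rho_DM / m_A') * kappa_eff^2 * sigma_1
    # All inputs in the example call are illustrative placeholders, not the paper's numbers.
    HBAR_EV_S = 6.582e-16          # hbar in eV*s, used to convert sigma_1 from eV to 1/s
    SECONDS_PER_YEAR = 3.154e7

    def events_per_kg_year(rho_target_g_cm3, rho_dm_gev_cm3, m_dark_photon_ev,
                           kappa_eff, sigma1_ev):
        n_dm = rho_dm_gev_cm3 * 1e9 / m_dark_photon_ev    # DM number density in 1/cm^3
        sigma1_per_s = sigma1_ev / HBAR_EV_S              # conductivity expressed as a rate
        rate_per_g_per_s = (1.0 / rho_target_g_cm3) * n_dm * kappa_eff**2 * sigma1_per_s
        return rate_per_g_per_s * 1000.0 * SECONDS_PER_YEAR

    # Example call: silicon density, halo DM density, a 1 eV dark photon, a tiny mixing,
    # and a placeholder sigma_1 (in reality it depends strongly on the deposited energy).
    R = events_per_kg_year(rho_target_g_cm3=2.33, rho_dm_gev_cm3=0.3,
                           m_dark_photon_ev=1.0, kappa_eff=1e-15, sigma1_ev=0.1)
    print(f"toy rate: {R:.2e} events per kg-year")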

Shown in Figure 1, the projected sensitivity at 90% confidence level (C.L.) for a 1 kg-year exposure of a semiconductor target to dark photon absorption can reach almost an order of magnitude beyond existing nuclear recoil experiments. The sensitivity is shown as a function of the kinetic mixing parameter and the mass of the dark photon. Limits are also shown for the existing semiconductor experiments DAMIC and CDMSLite, with 0.6 and 70 kg-day exposures, respectively.

Figure 1. Projected reach of a silicon (blue, solid) and germanium (green, solid) semiconductor target at 90% C.L. for 1 kg-year exposure through the absorption of dark photons DM, kinetically mixed with SM photons. Multi-phonon excitations are significant for the sub-eV range, and electron excitations approximately over 0.6 and 1 eV (the size of the band gaps for germanium and silicon, respectively).

Furthermore, in the millielectronvolt to kiloelectronvolt range, these targets could provide much stronger constraints than any that currently exist from astrophysical sources, even at this exposure. These materials also provide a novel way of detecting DM in a single experiment, provided improvements are made in phonon detection.

These possibilities, amongst a plethora of other detection materials and strategies, can open up a significant area of parameter space for finally closing in on the identity of the ever-elusive dark matter!

Discovering the Tau

This plot [1] is the first experimental evidence for the particle that would eventually be named the tau.

On the horizontal axis is the collision energy of the experiment. This particular experiment collided electron and positron beams. On the vertical axis is the cross section for a specific event resulting from the electron and positron beams colliding. The cross section is like a probability for a given event to occur: when two particles collide, many things can happen, each with its own probability, and the cross section for a particular event encodes how likely that event is. Events with larger probability have larger cross sections, and vice versa.

The collaboration found a type of event that could not be explained by the Standard Model at the time. The event in question looks like:

This event is peculiar because the final state contains both an electron and a muon with opposite charges. In 1975, when this paper was published, there was no way to obtain this final state from any known particles or interactions.

In order to explain this anomaly, particle physicists proposed the following explanations:

  1. Pair production of a heavy lepton. With some insight from the future, we will call this heavy lepton the “tau.”

  2. Pair production of charged bosons. These charged bosons actually end up being the W bosons that mediate the weak nuclear force.

The production of taus and of these bosons is not equally likely, though. Depending on the initial energy of the beams, we are more likely to produce one than the other. It turns out that at the energies of this experiment (a few GeV), it is much more likely to produce taus than to produce the bosons; we would say that the taus have a larger cross section. From the plot, we can read off that the cross section for producing taus is largest at around 5 GeV of energy. Finally, since these taus are the result of pair production, they are produced in pairs, and the bump at 5 GeV is the energy at which it is most likely to produce a pair of taus. This plot then suggests a tau mass of about 2.5 GeV, in the right ballpark of the modern measured value of roughly 1.8 GeV.

References

[1] – Evidence for Anomalous Lepton Production in e+e− Annihilation. This is the original paper that announced the anomaly that would become the Tau.

[2] – The Discovery of the Tau Lepton. This is a comprehensive story of the discovery of the Tau, written by Martin Perl who would go on to win the 1995 Nobel prize in Physics for its discovery.

[3] – Lepton Review. Hyperphysics provides an accessible review of the Leptonic sector of the Standard Model.