The P5 Report & The Future of Particle Physics (Part 1)

Particle physics is the epitome of ‘big science’. Answering our most fundamental questions about physics requires world-class experiments that push the limits of what’s technologically possible. Such incredibly sophisticated experiments, like those at the LHC, require big facilities to make them possible, big collaborations to run them, big project planning to make dreams of new facilities a reality, and committees with big acronyms to decide what to build.

Enter the Particle Physics Project Prioritization Panel (aka P5), which is tasked with assessing the landscape of future projects and laying out a roadmap for the future of the field in the US. And because these large projects are inevitably international endeavors, the report they released last week has a large impact on the global direction of the field. The report lays out a vision for the next decade of neutrino physics, cosmology, dark matter searches and future colliders.

P5 follows the community-wide brainstorming effort known as the Snowmass Process, in which researchers from all areas of particle physics laid out a vision for the future. The Snowmass process led to a particle physics ‘wish list’, consisting of all the projects and research particle physicists would be excited to work on. The P5 process is the hard part, when this incredibly exciting and diverse research program has to be made to fit within realistic budget scenarios. Advocates for different projects and research areas had to make a case for what science their project could achieve and give a detailed estimate of the costs. The panel then takes in all this input and makes a set of recommendations for how the budget should be allocated: which projects should be realized and which hopes are dashed. Though the panel only produces a set of recommendations, they are used quite extensively by the Department of Energy, which actually allocates funding. If your favorite project is not endorsed by the report, it’s very unlikely to be funded.

Particle physics is an incredibly diverse field, covering sub-atomic to cosmic scales, so recommendations are divided up into several different areas. In this post I’ll cover the panel’s recommendations for neutrino physics and the cosmic frontier. Future colliders, perhaps the spiciest topic, will be covered in a follow up post.

The Future of Neutrino Physics

For those in the neutrino physics community, all eyes were on the panel’s recommendations regarding the Deep Underground Neutrino Experiment (DUNE). DUNE is the US’s flagship particle physics experiment for the coming decade and aims to be the definitive worldwide neutrino experiment in the years to come. A high-powered beam of neutrinos will be produced at Fermilab and sent 800 miles through the earth’s crust towards several large detectors placed in a mine in South Dakota. It’s a much bigger project than previous neutrino experiments, unifying essentially the entire US community into a single collaboration.

DUNE is set up to produce world-leading measurements of neutrino oscillations, the phenomenon by which neutrinos produced in one ‘flavor state’ (eg an electron neutrino) gradually change into another (eg a muon neutrino) with sinusoidal probability as they propagate through space. This oscillation is made possible by a simple quantum mechanical weirdness: a neutrino’s flavor state, whether it couples to electrons, muons or taus, is not the same as its mass state. Neutrinos of a definite mass are therefore a mixture of the different flavors and vice versa.
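For intuition, the oscillation probability in a simplified two-flavor picture can be written down in a few lines. The mixing and mass-splitting values below are rough illustrative defaults, not DUNE’s measured parameters:

    import numpy as np

    # Two-flavor approximation of P(nu_mu -> nu_e); the 1.27 absorbs the
    # unit conversions between km, GeV and eV^2. Default parameters are
    # illustrative placeholders, not measured values.
    def appearance_probability(L_km, E_GeV, sin2_2theta=0.085, dm2_eV2=2.5e-3):
        return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # A ~1300 km baseline (roughly Fermilab to South Dakota) at a few GeV
    print(appearance_probability(L_km=1300, E_GeV=2.5))

The sinusoidal dependence on the ratio of baseline to energy is why experiments choose their distance and beam energy to sit near an oscillation maximum.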

Detailed measurements of this oscillation are the best way we know to determine several key neutrino properties. DUNE aims to finally pin down two crucial ones: the neutrino ‘mass ordering’, which will solidify how the different neutrino flavors and measured mass differences all fit together, and the amount of ‘CP violation’, which specifies whether neutrinos and their anti-matter counterparts behave the same or not. DUNE’s main competitor is the Hyper-Kamiokande experiment in Japan, another next-generation neutrino experiment with similar goals.

A depiction of the DUNE experiment. A high intensity proton beam at Fermilab is used to create a concentrated beam of neutrinos which are then sent through 800 miles of the Earth’s crust towards detectors placed deep underground in South Dakota. Source

Construction of the DUNE experiment has been ongoing for several years and unfortunately has not been going quite as well as hoped. It has faced significant schedule delays and cost overruns. DUNE is now not expected to start taking data until 2031, significantly behind Hyper-Kamiokande’s projected 2027 start. These delays may lead to Hyper-K making these definitive neutrino measurements years before DUNE, which would be a significant blow to the experiment’s impact. This left many DUNE collaborators worried about whether it would keep broad support from the community.

It came as a relief then when the P5 report re-affirmed the strong science case for DUNE, calling it the “ultimate long baseline” neutrino experiment. The report strongly endorsed the completion of the first phase of DUNE. However, it recommended a pared-down version of its upgrade, advocating for an earlier beam upgrade in lieu of additional detectors. This re-imagined upgrade should still achieve the core physics goals of the original proposal at a significant cost savings. This report, together with news that the beleaguered underground cavern construction in South Dakota is now 90% complete, was certainly welcome holiday news for the neutrino community. It also sets up a decade-long race between DUNE and Hyper-K to be the first to measure these key neutrino properties.

Cosmic Implications

While we normally think of particle physics as focused on the behavior of sub-atomic particles, it’s really about the study of fundamental forces and laws, no matter the method. This means that telescopes to study the oldest light in the universe, the Cosmic Microwave Background (CMB), fall into the same budget category as giant accelerators studying sub-atomic particles. Though the experiments in these two areas look very different, the questions they seek to answer are cross-cutting. Understanding how particles interact at very high energies helps us understand the earliest moments of the universe, when such particles were all interacting in a hot dense plasma. Likewise, studying these early moments of the universe and its large-scale evolution can tell us what kinds of particles and forces are influencing its dynamics. When asking fundamental questions about the universe, one needs both the sharpest microscopes and the grandest panoramas possible.

The most prominent example of this blending of the smallest and largest scales in particle physics is dark matter. Some of our best evidence for dark matter comes from analyzing the cosmic microwave background to determine how the primordial plasma behaved. These studies showed that some type of ‘cold’ matter that doesn’t interact with light, aka dark matter, was necessary to form the first clumps that eventually seeded the formation of galaxies. Without it, the universe would be much more soupy and structureless than what we see today.

The “cosmic web” of galaxy clusters from the Millennium simulation. Measuring and understanding this web can tell us a lot about the fundamental constituents of the universe. Source

Determining what dark matter is therefore requires an attack on two fronts: designing experiments here on earth that attempt to directly detect it, and further studying its cosmic implications to look for more clues as to its properties.

The panel recommended next-generation telescopes to study the CMB as a top priority. The so-called ‘Stage 4’ CMB experiment (CMB-S4) would deploy telescopes at both the South Pole and Chile’s Atacama desert to better characterize sources of atmospheric noise. The CMB has been studied extensively before, but the increased precision of CMB-S4 could shed light on mysteries like dark energy, dark matter, inflation, and the recent Hubble tension. Given the past fruitfulness of these efforts, I think few doubted the science case for such a next-generation experiment.

A mockup of one of the CMB-S4 telescopes which will be based in the Chilean desert. Note the person for scale on the right (source)

The P5 report recommended a suite of new dark matter experiments in the next decade, including the ‘ultimate’ liquid Xenon based dark matter search. Such an experiment would follow in the footsteps of massive noble gas experiments like LZ and XENONnT, which have been hunting for a favored type of dark matter called WIMPs for the last few decades. These experiments essentially build giant vats of liquid Xenon, carefully shielded from any sources of external radiation, and look for signs of dark matter particles bumping into any of the Xenon atoms. The larger the vat of Xenon, the higher the chance a dark matter particle will bump into something. Current generation experiments have ~7 tons of Xenon, and the next generation experiment would be even larger. The next generation aims to reach the so-called ‘neutrino floor’, the point at which the experiments would be sensitive enough to observe astrophysical neutrinos bumping into the Xenon. Such neutrino interactions would look extremely similar to those of dark matter, and thus represent an unavoidable background which signals the ultimate sensitivity of this type of experiment. WIMPs could still be hiding in a basement below this neutrino floor, but finding them would be exceedingly difficult.
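As a rough illustration of why ‘bigger is better’ here, the expected number of scatters scales linearly with the number of target atoms, which is easy to count up (the ~7 ton figure is the one quoted above; the rest is just bookkeeping):

    N_A = 6.022e23  # Avogadro's number

    # Number of xenon atoms (A ~ 131) in a target of a given mass;
    # purely illustrative counting, not any experiment's actual design numbers.
    def xenon_atoms(mass_tonnes):
        return mass_tonnes * 1e6 / 131.0 * N_A

    # Expected signal counts scale linearly with this number, so doubling
    # the vat doubles the expected dark matter scattering rate.
    print(xenon_atoms(7.0))   # ~3e28 atoms in a ~7 tonne target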

A photo of the current XENONnT experiment. This pristine cavity is then filled with liquid Xenon and closely monitored for signs of dark matter particles bumping into one of the Xenon atoms. Credit: XENON Collaboration

WIMPs are not the only dark matter candidates in town, and recent years have also seen an explosion of interest in the broad range of dark matter possibilities, with axions being a prominent example. Other kinds of dark matter could have very different properties than WIMPs and have had far fewer dedicated experiments searching for them. There is ‘low hanging fruit’ to pluck in the form of relatively cheap experiments which can achieve world-leading sensitivity. Previously, these ‘table top’ sized experiments had a notoriously difficult time obtaining funding, as they were often crowded out of the budgets by the massive flagship projects. However, small experiments can be crucial to ensuring our best chance of dark matter discovery, as they fill in the blind spots missed by the big projects.

The panel therefore recommended creating a new pool of funding set aside for these smaller scale projects. Allowing these smaller scale projects to flourish is important for the vibrancy and scientific diversity of the field, as the centralization of ‘big science’ projects can sometimes lead to unhealthy side effects. This specific recommendation also mirrors a broader trend of the report: to attempt to rebalance the budget portfolio to be spread more evenly and less dominated by the large projects.

A pie chart comparing the budget portfolio in 2023 (left) versus the projected budget in 2033 (right). Currently most of the budget is taken up by the accelerator upgrades and cavern construction of DUNE, with some amount for the LHC upgrades. But by 2033 the panel recommends a much more equitable balance between the different research areas.

What Didn’t Make It

Any report like this comes with some tough choices. Budget realities mean not all projects can be funded. Besides the paring down of some of DUNE’s upgrades, one of the biggest areas recommended against was ‘accessory experiments at the LHC’. In particular, MATHUSLA and the Forward Physics Facility were two proposals to build additional detectors near existing LHC collision points to look for particles that may be missed by the current experiments. By building new detectors hundreds of meters away from the collision point, shielded by concrete and the earth, they could obtain unique sensitivity to ‘long lived’ particles capable of traversing such distances. These experiments would follow in the footsteps of the current FASER experiment, which is already producing impressive results.

While FASER found success as a relatively ‘cheap’ experiment, reusing spare detector components and situating itself in an existing tunnel, these new proposals were asking for quite a bit more. The scale of these detectors would have required new caverns to be built, significantly increasing the cost. Given the cost and specialized purpose of these detectors, the panel recommended against their construction. These collaborations may now try to find ways to pare down their proposals so they can apply to the new small project portfolio.

Another major decision by the panel was to recommend against hosting a new Higgs factory collider in the US. But that will be discussed more in a future post.

Conclusions

The P5 panel was faced with a difficult task: the total cost of all the projects they were presented with was three times the budget. But they were able to craft a plan that continues the work of the previous decade, addresses current shortcomings, and lays out an inspiring vision for the future. So far the community seems to be strongly rallying behind it. At the time of writing, over 2700 community members, from undergraduates to senior researchers, have signed a petition endorsing the panel’s recommendations. This strong show of support will be key for turning these recommendations into actual funding, and hopefully for lobbying Congress to increase funding so that even more of this vision can be realized.

For those interested, the full report as well as executive summaries of the different areas can be found on the P5 website. Members of the US particle physics community are also encouraged to sign the petition endorsing the recommendations here.

And stay tuned for part 2 of our coverage, which will discuss the implications of the report for future colliders!

Moriond 2023 Recap

Every year since 1966, particle physicists have gathered in the Alps to unveil and discuss their most important results of the year (and to ski). This year I had the privilege to attend the Moriond QCD session, so I thought I would post a recap here. It was a packed agenda spanning 6 days of talks, and featured a lot of great results over many different areas of particle physics, so I’ll have to stick to the highlights here.

FASER Observes First Collider Neutrinos

Perhaps the most exciting result of Moriond came from the FASER experiment, a small detector recently installed in the LHC tunnel downstream from the ATLAS collision point. They announced the first ever observation of neutrinos produced at a collider. Neutrinos are produced all the time in LHC collisions, but because they very rarely interact, and current experiments were not designed to look for them, no one had ever actually observed them in a detector until now. Based on data collected during collisions last year, FASER observed 153 candidate neutrino events, with a negligible amount of predicted background; an unmistakable observation.
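To see why 153 events over a negligible background is ‘unmistakable’, one can compute a naive Poisson p-value. The background expectation used below is a made-up illustrative placeholder, not FASER’s actual estimate:

    from scipy.stats import poisson

    observed = 153      # candidate neutrino events
    background = 0.5    # illustrative placeholder, not the real background estimate

    # Probability of the background alone fluctuating up to >= 153 events
    p_value = poisson.sf(observed - 1, background)
    print(p_value)      # effectively zero -- far beyond the 5-sigma threshold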

A neutrino candidate in the FASER emulsion detector. Source

This first observation opens the door for studying the copious high energy neutrinos produced in colliders, which sit in an energy range currently unprobed by other neutrino experiments. The FASER experiment is still very new, so expect more exciting results from them as they continue to analyze their data. A first search for dark photons was also released, which should continue to improve with more luminosity. On the neutrino side, they have yet to release full results based on data from their emulsion detector, which will allow them to study electron and tau neutrinos in addition to the muon neutrinos on which this first result is based.

New ATLAS and CMS Results

The biggest result from the general purpose LHC experiments was the announcement by ATLAS and CMS that they have observed the simultaneous production of 4 top quarks. This is one of the rarest Standard Model processes ever observed, occurring a thousand times less frequently than the production of a Higgs boson. Now that it has been observed, the two experiments will use Run-3 data to study the process in more detail and look for signs of new physics.

Candidate 4 top events from ATLAS (left) and CMS (right).

ATLAS also unveiled an updated measurement of the mass of the W boson. Since CDF announced its measurement last year, finding a value in tension with the Standard Model at ~7-sigma, further W mass measurements have become very important. This ATLAS result was actually a reanalysis of their previous measurement, with improved PDFs and statistical methods. Though still not as precise as the CDF measurement, these improvements shrunk their uncertainty slightly (from 19 to 16 MeV). The ATLAS measurement reports a value of the W mass in very good agreement with the Standard Model, and in roughly 4-sigma tension with the CDF value. These measurements are very complex, and more work is going to be needed to clarify the situation.
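As a rough cross-check of that quoted tension, one can compare the two central values assuming independent Gaussian uncertainties. The numbers below are the publicly quoted CDF and updated ATLAS values as I recall them, so treat them as approximate:

    import math

    # Approximate central values and total uncertainties, in MeV (from memory)
    m_cdf, err_cdf = 80433.5, 9.4        # CDF (2022)
    m_atlas, err_atlas = 80360.0, 16.0   # ATLAS reanalysis

    tension = abs(m_cdf - m_atlas) / math.sqrt(err_cdf**2 + err_atlas**2)
    print(f"{tension:.1f} sigma")        # roughly 4 sigma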

CMS had an intriguing excess (2.8-sigma global) in a search for a Higgs-like particle decaying into an electron and muon. This kind of ‘flavor violating’ decay would be a clear indication of physics beyond the Standard Model. Unfortunately it does not seem like ATLAS has any similar excess in their data.

Status of Flavor Anomalies

At the end of 2022, LHCb announced that the golden channel of the flavor anomalies, the R(K) anomaly, had gone away upon further analysis. Many of the flavor physics talks at Moriond seemed to be dealing with this aftermath.

Of the remaining flavor anomalies, R(D), a ratio comparing the decay rates of B mesons into final states with D mesons and taus versus D mesons plus muons or electrons, is still attracting interest. LHCb unveiled a new measurement focusing on hadronically decaying taus and found a value that agreed with the Standard Model prediction. However, this new measurement had larger error bars than others, so it only brought down the world average slightly. The deviation currently sits at around 3-sigma.

A summary plot showing all the measurements of R(D) and R(D*). The newest LHCb measurement is shown in the red band / error bar on the left. The world average still shows a 3-sigma deviation from the SM prediction.

An interesting theory talk pointed out that essentially any new physics which would produce a deviation in R(D) should also produce a deviation in another lepton flavor ratio, R(Λc), because it features the same b → c l ν transition. However, LHCb’s recent measurement of R(Λc) actually found a small deviation in the opposite direction from R(D). The two results are only incompatible at the ~1.5-sigma level for now, but it’s something to keep an eye on if you are following the flavor anomaly saga.

It was nice to see that the newish Belle II experiment is now producing some very nice physics results. The highlight was a world-best measurement of the mass of the tau lepton. Look out for more nice Belle II results as they ramp up their luminosity, and hopefully they can weigh in on the R(D) anomaly soon.

A fit to the invariant mass of the visible decay products of the tau lepton, used to determine its intrinsic mass. An impressive show of precision from Belle II.

Theory Pushes for Precision

Much of the theory program focused on advancing the precision of Standard Model predictions. This ‘bread and butter’ physics is sometimes overlooked in the scientific press, but it is an absolutely crucial part of the particle physics ecosystem. As experiments reach better and better precision, improved theory calculations are required to accurately model backgrounds, predict signals, and provide precise Standard Model predictions to compare against so that deviations can be spotted. Nice results in this area included evidence for an intrinsic amount of charm quarks inside the proton from the NNPDF collaboration, very precise extractions of CKM matrix elements using lattice QCD, and two different proposals for dealing with tricky aspects of defining the ‘flavor’ of QCD jets.

Final Thoughts

Those were all the results that stuck out to me. But this is of course a very biased sampling! I am not qualified to pick out the highlights of the heavy-ion sessions or much of the theory presentations. For a more comprehensive overview, I recommend checking out the slides of the excellent experimental and theoretical summary talks. Additionally, there was the Moriond Electroweak conference the week before the QCD one, which covers many of the same topics but also includes neutrino physics and dark matter direct detection results. Overall it was a very enjoyable conference that really showcased the vibrancy of the field!

The Mini and Micro BooNE Mystery, Part 1: Experiment

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: The MicroBooNE Collaboration

Reference: https://arxiv.org/abs/2110.14054

This is the first post in a series on the latest MicroBooNE results, covering the experimental side. Click here to read about the theory side.

The new results from the MicroBooNE experiment generated a lot of excitement last week, being covered by several major news outlets. But unlike most physics news stories that make the press, it was a null result; they did not see any evidence for new particles or interactions. So why is it so interesting? Particle physics experiments produce null results every week, but what made this one newsworthy is that MicroBooNE was trying to check the results of two previous experiments, LSND and MiniBooNE, that did see something anomalous with very high statistical significance. If the LSND/MiniBooNE result had been confirmed, it would have been a huge breakthrough in particle physics, but now that it wasn’t, many physicists are scratching their heads trying to make sense of these seemingly conflicting results. However, the MicroBooNE experiment is not exactly the same as MiniBooNE/LSND, and understanding the differences between the two sets of experiments may play an important role in unraveling this mystery.

Accelerator Neutrino Basics

All of these experiments are ‘accelerator neutrino experiments’, so let’s first review what that means. Neutrinos are ‘ghostly’ particles that are difficult to study (check out this post for more background on neutrinos). Because they only couple through the weak force, neutrinos don’t like to interact with anything very much. So in order to detect them you need both a big detector with a lot of active material and a source with a lot of neutrinos. These experiments are designed to detect neutrinos produced in a human-made beam. To make the beam, a high energy beam of protons is directed at a target. These collisions produce a lot of particles, including unstable bound states of quarks like pions and kaons. These unstable particles have charge, so we can use magnets to focus them into a well-behaved beam. When the pions and kaons decay they usually produce a muon and a muon neutrino. The beam of pions and kaons is pointed at an underground detector located a few hundred meters (or kilometers!) away, and given time to decay. After they decay there will be a nice beam of muons and muon neutrinos. The muons can be stopped by some kind of shielding (like the earth’s crust), but the neutrinos will sail right through to the detector.

A diagram showing the basics of how a neutrino beam is made. Source

Nearly all of the neutrinos from the beam will still pass right through your detector, but a few of them will interact, allowing you to learn about their properties.

All of these experiments are considered ‘short-baseline’ because the distance between the neutrino source and the detector is only a few hundred meters (unlike the hundreds of kilometers in other such experiments). These experiments were designed to look for oscillation of the beam’s muon neutrinos into electron neutrinos which then interact with the detector (check out this post for some background on neutrino oscillations). Given the types of neutrinos we know about and their properties, this should be too short a distance for neutrinos to oscillate, so any observed oscillation would be an indication that something new (beyond the Standard Model) is going on.
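To put a rough number on ‘too short’: plugging a known mass splitting \Delta m^2 \approx 2.5 \times 10^{-3} \text{ eV}^2, a baseline of L \approx 0.5 km, and a typical beam energy of E \approx 1 GeV into the standard oscillation phase 1.27 \, \Delta m^2 L / E gives a phase of only about 10^{-3}, so the expected oscillation probability from the known neutrinos is utterly negligible at these distances. Seeing an appearance signal here would instead require a new, much larger mass splitting than any of the known ones. (These are illustrative round numbers, not the exact parameters of these experiments.)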

The LSND + MiniBooNE Anomaly

The LSND and MiniBooNE ‘anomaly’ was an excess of events above backgrounds that looked like electron neutrinos interacting with their detectors. Both detectors were based on similar technology and were a similar distance from their neutrino source. Their detectors were essentially big tanks of mineral oil lined with light-detecting sensors.

An engineer styling inside the LSND detector. Source

At these energies the most common way neutrinos interact is to scatter against a neutron to produce a proton and a charged lepton (called a ‘charged current’ interaction). Electron neutrinos will produce outgoing electrons and muon neutrinos will produce outgoing muons.

A diagram of a ‘charged current’ interaction. A muon neutrino comes in and scatters against a neutron, producing a muon and a proton. Source

When traveling through the mineral oil these charged leptons will produce a ring of Cherenkov light which is detected by the sensors on the edge of the detector. Muons and electrons can be differentiated based on the characteristics of the Cherenkov light they emit. Electrons will undergo multiple scatterings off of the detector material while muons will not, making the Cherenkov rings of electrons ‘fuzzier’ than those of muons. High energy photons can produce electron-positron pairs which look very similar to a regular electron signal and are thus a source of background.

A comparison of muon and electron Cherenkov rings from the Super-Kamiokande experiment. Electrons produce fuzzier rings than muons. Source

Even with a good beam and a big detector, the feebleness of neutrino interactions means that it takes a while to get a decent number of potential events. The MiniBooNE experiment ran for 17 years looking for electron neutrinos scattering in its detector. In MiniBooNE’s most recent analysis, they saw around 600 more events than would be expected if there were no anomalous electron neutrinos reaching the detector. The statistical significance of this excess, 4.8-sigma, was very high. Combining with LSND, which saw a similar excess, the significance was above 6-sigma. This means it’s very unlikely this is a statistical fluctuation. So either there is some new physics going on or one of their backgrounds has been seriously under-estimated. This excess of events is what has been dubbed the ‘MiniBooNE anomaly’.

The number of events seen in the MiniBooNE experiment as a function of the energy seen in the interaction. The predicted number of events from various known background sources is shown in the colored histograms. The best fit to the data, including the signal of anomalous oscillations, is shown by the dashed line. One can see that at low energies the black data points lie significantly above these backgrounds and strongly favor the oscillation hypothesis.

The MicroBooNE Result

The MicroBooNE experiment was commissioned to verify the MiniBooNE anomaly as well as to test out a new type of neutrino detector technology. MicroBooNE is the first major neutrino experiment to use a ‘Liquid Argon Time Projection Chamber’ detector. This new detector technology allows a more detailed reconstruction of what is happening when a neutrino scatters in the detector. The active volume of the detector is liquid Argon, which allows both light and charge to propagate through it. When a neutrino scatters in the liquid Argon, scintillation light is produced that is collected in sensors. As charged particles created in the collision pass through the liquid Argon they ionize the atoms they pass by. An electric field applied to the detector causes this produced charge to drift towards a mesh of wires where it can be collected. By measuring the difference in arrival time between the light and the charge, as well as the amount of charge collected at different positions and times, the precise location and trajectory of the particles produced in the collision can be determined.
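As a cartoon of how the timing is used, the coordinate along the drift direction is just the drift velocity times the delay between the prompt scintillation flash and the charge arriving at the wires. The drift velocity below is a typical value for liquid argon at the usual drift fields, quoted from memory, so treat this as a sketch rather than MicroBooNE’s actual reconstruction chain:

    # Position along the drift direction from light/charge arrival times
    def drift_coordinate_cm(t_light_us, t_charge_us, v_drift_cm_per_us=0.16):
        # ~0.16 cm/us is a typical electron drift velocity in liquid argon
        # at fields of a few hundred V/cm (approximate, from memory)
        return v_drift_cm_per_us * (t_charge_us - t_light_us)

    # Charge arriving 600 microseconds after the flash drifted from ~96 cm away
    print(drift_coordinate_cm(t_light_us=0.0, t_charge_us=600.0))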

A beautiful reconstructed event in the MicroBoone detector. The colored lines show the tracks of different particles produced in the collision, all coming from a single point where the neutrino interaction took place. One can also see that one of the tracks produced a shower of particles away from the interaction vertex.

This means that unlike MiniBooNE and LSND, MicroBooNE can see not just the lepton, but also the hadronic particles (protons, pions, etc) produced when a neutrino scatters in the detector. The same type of neutrino interaction therefore looks very different in their detector, so when they went to test the MiniBooNE anomaly they adopted multiple different strategies for what exactly to look for. In the first case they looked for the type of interaction an electron neutrino would most likely have produced: an outgoing electron and proton whose kinematics match those of a charged current interaction. Their second set of analyses, designed to mimic the MiniBooNE selection, is slightly more general: they require one electron and any number of protons, but no pions. Their third analysis is the most general and requires an electron along with anything else.

These different analyses have different levels of sensitivity to the MiniBooNE anomaly, but all of them were found to be consistent with the background-only hypothesis: there is no sign of any excess events. Three out of four of them even see slightly fewer events than the expected background.

A summary of the different MicroBooNE analyses. The Y-axis shows the ratio of the observed number of events to the number expected if only background were present. The red lines show the excess predicted in each channel if the MiniBooNE anomaly produced a signal there. One can see that the black data points are much more consistent with the grey bands showing the background-only prediction than with the amount predicted if the MiniBooNE anomaly were present.

Overall the MicroBooNE data rejects the hypothesis that the MiniBooNE anomaly is due to electron neutrino charged current interactions at quite high significance (>3-sigma). So if it’s not electron neutrinos causing the MiniBooNE anomaly, what is it?

What’s Going On?

Given that MicroBooNE did not see any signal, many would guess that MiniBooNE’s claim of an excess must be flawed and that they have underestimated one of their backgrounds. Unfortunately it is not very clear what that could be. If you look at the low-energy region where MiniBooNE has an excess, there are three major background sources: decays of the Delta baryon that produce a photon (shown in tan), neutral pions decaying to pairs of photons (shown in red), and backgrounds from true electron neutrinos (shown in various shades of green). However, all of these sources of background seem quite unlikely to be the source of the MiniBooNE anomaly.

Before releasing these results, MicroBooNE performed a dedicated search for Delta baryons decaying into photons, and saw a rate in agreement with the theoretical prediction MiniBooNE used, well below the amount needed to explain the MiniBooNE excess.

Backgrounds from true electron neutrinos produced in the beam, as well as from the decays of muons, should not concentrate only at low energies like the excess does, and their rate has also been measured within MiniBooNE data by looking at other signatures.

The decay of a neutral pion can produce two photons, and if one of them escapes detection, the remaining single photon can mimic the signal. However, one would expect photons to be more likely to escape the detector near its edges, whereas the excess events are distributed uniformly throughout the detector volume.

So now the mystery of what could be causing this excess is even greater. If it is a background, it seems most likely to come from an unknown source not previously considered. As will be discussed in our part 2 post, it’s possible that the MiniBooNE anomaly was caused by a more exotic form of new physics: possibly the excess events in MiniBooNE were not really coming from the scattering of electron neutrinos but from something else that produced a similar signature in their detector. Some of these explanations include particles that decay into pairs of electrons or photons. These sorts of explanations should be testable with MicroBooNE data but will require dedicated analyses for their different signatures.

So on the experimental side, we are now left to scratch our heads and wait for new results from MicroBooNE that may help get to the bottom of this.

Click here for part 2 of our MicroBooNE coverage, which goes over the theory side of the story!

Read More

“Is the Great Neutrino Puzzle Pointing to Multiple Missing Particles?” – Quanta Magazine article on the new MicroBooNE result

“Can MiniBooNE be Right?” – Resonaances blog post summarizing the MiniBooNE anomaly prior to the MicroBooNE results

A review of different types of neutrino detectors – from the T2K experiment

The XENON1T Excess: The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world’s most sensitive dark matter experiments. The experiment consists of a huge tank of Xenon placed deep underground in the Gran Sasso laboratory in Italy. It is a ‘direct-detection’ experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs, Weakly Interacting Massive Particles, which used to be everyone’s favorite candidate for dark matter. However, given recent null results from WIMP-hunting direct-detection experiments and from collider searches at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, which were designed to look for heavy WIMPs colliding with Xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles that are much lighter than traditional WIMPs would not leave much of an impact on the large Xenon nuclei, but they can leave a signal in the detector if they instead scatter off of the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity. This extraordinary sensitivity is due to the experiment’s exquisite control over backgrounds and extremely low energy threshold for detection. Rather than just being impressed, what has gotten many physicists excited is that the latest data shows an excess of events above the expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim ‘evidence’ of an anomaly but short of the typical 5 sigma required to claim discovery.

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is nothing. 3.5 sigma is not enough evidence to claim a discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment’s energy detection threshold. The so-called ‘efficiency turn on’, the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe this is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note the background line is flat below the excess), and the excess rises above this flat background. So to explain this excess the efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange, unaccounted-for bias where some higher energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration performed an electron reconstruction calibration using the radioactive decays of Radon-220 over exactly this energy range and was able to model the turn on and detection efficiency very well.
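For concreteness, a detection-efficiency ‘turn on’ is often modeled as a smooth step, for example with an error function. This is a generic illustration with made-up threshold and width parameters, not the collaboration’s actual parametrization:

    import numpy as np
    from scipy.special import erf

    # Generic sigmoid-like efficiency curve near a detection threshold;
    # threshold and width are illustrative placeholders.
    def efficiency(E_keV, threshold=2.0, width=0.5):
        return 0.5 * (1.0 + erf((E_keV - threshold) / (np.sqrt(2.0) * width)))

    for E in (1, 2, 3, 5, 10):
        print(E, round(float(efficiency(E)), 3))
    # Above a few keV the curve is flat, which is why an excess extending
    # up to ~7 keV is hard to blame on threshold mis-modeling alone.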

Results of a calibration using radioactive decays of Radon-220. One can see that the data in the efficiency turn on (right around 2 keV) is modeled quite well and no excesses are seen.

However, the possibility of a novel Standard Model background is much more plausible. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to Helium-3, an electron, and a neutrino with a half-life of around 12 years. The energy released in this decay is 18.6 keV, giving the electron an average energy of a few keV. The expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small. Around 100 parts-per-billion of H2 would lead to enough tritium to explain the signal, which translates to just 3 tritium atoms per kilogram of liquid Xenon. The collaboration tries their best to investigate this possibility, but they can neither rule out nor confirm such a small amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by 2 orders of magnitude, so it is not impossible that they were able to avoid this small amount of contamination.
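To get a feel for the numbers, one can convert the quoted 3 atoms per kilogram into a decay rate. The exposure below is a round illustrative figure, not XENON1T’s actual fiducial exposure:

    import math

    atoms_per_kg = 3.0        # contamination level quoted above
    half_life_yr = 12.3       # tritium half-life
    exposure_kg_yr = 1000.0   # illustrative ~1 tonne-year exposure, not XENON1T's

    decays = atoms_per_kg * exposure_kg_yr * math.log(2) / half_life_yr
    print(f"~{decays:.0f} decays")   # of order a hundred decays per tonne-year

So even a vanishingly small contamination produces a decay rate on the scale relevant for a low-background experiment of this size.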

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down based on the data alone. The analysis specifically searched for two signals that would show up in exactly this energy range: axions produced in the sun, and solar neutrinos interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation being slightly preferred. However, since this result was released, many have pointed out that these models would actually be in conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would give stars an additional way to release energy, causing them to cool at a faster rate than in the Standard Model. The strength of interaction between axions and electrons needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. There are similar astrophysical constraints on neutrino magnetic moments that also make that explanation unlikely.

This has left the door open for theorists to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing this opportunity! There are new explanations appearing on the arXiv every day, with no sign of stopping. In the roughly 2 weeks between the XENON1T announcement and the writing of this post, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or being boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun with this, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment that will serve as a follow-up to XENON1T. XENONnT will feature a larger active volume of Xenon and a lower background level, allowing them to potentially confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow them to better determine the shape of the signal, possibly distinguishing between the tritium shape and a potential new physics explanation. If real, other liquid Xenon experiments like LUX and PandaX should also be able to independently confirm the signal in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

A Tau Neutrino Runs into a Candy Shop…

We recently discussed some curiosities in the data from the IceCube neutrino detector. This is a follow up Particle Bite on some of the sugary nomenclature IceCube uses to characterize some of its events.

As we explained previously, IceCube is a gigantic ultra-high energy cosmic neutrino detector in Antarctica. These neutrinos have energies 10-100 times higher than the protons colliding at the Large Hadron Collider, and their origin and nature are largely a mystery. One thing that IceCube can tell us about these neutrinos is their flavor composition; see e.g. this post for a crash course in neutrino flavor.

When neutrinos interact with ambient nuclei through a W boson (charged current interactions), the following types of events might be seen:

Typical charged current events in IceCube. Displays from the IceCube collaboration.

I refer you to this series of posts for a gentle introduction to the Feynman diagrams above. The key is that the high energy neutrino interacts with a nucleus, breaking it apart (the remnants are called X above) and ejecting a high energy charged lepton which can be used to identify the flavor of the neutrino.

  • Muons travel a long distance and leave behind a trail of Cerenkov radiation called a track.
  • Electrons don’t travel as far and deposit all of their energy into a shower. These are also sometimes called cascades because of the chain of particles produced in the ‘bang’.
  • Taus typically leave a more dramatic signal, a double bang, when the tau is formed and then subsequently decays into more hadrons (X’ above).

In fact, the tau events can be further classified depending on how this ‘double bang’ is resolved—and it seems like someone was playing a popular candy-themed mobile game when naming these:

Types of candy-themed tau events in IceCube from D. Cowan at the TeVPA 2 conference.

In this figure from the TeVPA 2 conference proceedings, we find some silly classifications of what tau events look like according to their energy:

  • Lollipop: The tau is produced outside the detector so that the first ‘bang’ isn’t seen. Instead, there’s a visible track that leads to the second (observable) bang. The track is the stick and the bang is the lollipop head.
  • Inverted lollipop: Similar to the lollipop, except now the first ‘bang’ is seen in the detector but the second ‘bang’ occurs outside the detector and is not observed.
  • Sugardaddy: The tau is produced outside the detector but decays into a muon inside the detector. This looks almost like a muon track except that the tau produces less Cerenkov light so that one can identify the point where the tau decays into a muon.
  • Double pulse: While this isn’t candy-themed, it’s still very interesting. This is a double bang where the two bangs can’t be distinguished spatially. However, since one bang occurs slightly after the other, one can distinguish them in the time: it’s a “double bang” in time rather than space.
  • Tautsie pop: This is a low energy version of the sugardaddy where the shower-to-track energy is used to discriminate against background.

While the names may be silly, counting these types of events in IceCube is one of the exciting frontiers of flavor physics. And while we might be forgiven for thinking that neutrino physics is all about measuring very ‘small’ things, let me share the following graphic from Francis Halzen’s recent talk at the AMS Days workshop at CERN, overlaying one of the shower events over Madison, Wisconsin to give a sense of scale:

From F. Halzen on behalf of the IceCube collaboration; from AMS Days at CERN 2015.

The Glashow Resonance on Ice

Are cosmic neutrinos trying to tell us something, deep in the Antarctic ice?

Presenting:

“Glashow resonance as a window into cosmic neutrino sources,”
by Barger, Lu, Learned, Marfatia, Pakvasa, and Weiler
Phys.Rev. D90 (2014) 121301 [1407.3255]

Related work: Anchordoqui et al. [1404.0622], Learned and Weiler [1407.0739], Ibe and Kaneta [1407.2848]

Is there a neutrino energy cutoff preventing Glashow resonance events in IceCube?

The IceCube Neutrino Observatory is a gigantic neutrino detector located in the Antarctic. Like an iceberg, only a small fraction of the lab is above ground: 86 strings extend to a depth of 2.5 kilometers into the ice, with each string instrumented with 60 detectors.

2 PeV event from the IceCube 3 year analysis; nicknamed “Big Bird.” From 1405.5303.

These detectors search for ultra high energy neutrinos by looking for Cerenkov radiation as they pass through the ice. This is really the optical version of a sonic boom. An example event is shown above, where the color and size of the spheres indicate the strength of the Cerenkov signal in each detector.

IceCube has released data for its first three years of running (1405.5303) and has found three events with very large energies: 1-2 peta-electron-volts: that’s ten thousand times the mass of the Higgs boson. In addition, there’s a spectrum of neutrinos in the 10-1000 TeV range.

Glashow resonance diagram.

These ultra high energy neutrinos are believed to originate from outside our galaxy through processes involving particle acceleration by black holes. One expects the flux of such neutrinos to follow a power law in energy, \Phi \sim E^{-\alpha}, where \alpha = 2 is an estimate from certain acceleration models. The existence of the three super high energy events at the PeV scale has led some people to think about a known deviation from the power law spectrum: the Glashow resonance. This is a sharp increase in the rate of neutrino interactions with matter coming from the resonant production of W bosons, as shown in the Feynman diagram to the left.

The Glashow resonance sticks out like a sore thumb in the spectrum. The position of the resonance is set by the energy required for an electron anti-neutrino to hit an electron at rest such that the center of mass energy is the W boson mass.
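Explicitly, requiring the center of mass energy of a collision with an electron at rest to equal the W mass gives

\displaystyle E_\nu = \frac{m_W^2}{2 m_e} \approx \frac{(80.4\ \text{GeV})^2}{2 \times 0.511\ \text{MeV}} \approx 6.3\ \text{PeV}.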

Sharp peak in the neutrino scattering rate from the Glashow resonance; image from Engel, Seckel, and Stanev in astro-ph/0101216.

Working through the math on the back of an envelope, as above, the resonance occurs for incident electron anti-neutrinos with an energy of 6.3 PeV; see the figure to the left. This is “right around the corner” from the 1-2 PeV events already seen, and one might wonder whether it’s significant that we haven’t seen anything.

The authors of [1407.3255] have found that the absence of Glashow resonant neutrino events in IceCube is not yet a bona-fide “anomaly.” In fact, they point out that the future observation or non-observation of such neutrinos can give us valuable hints about the hard-to-study origin of these ultra high energy neutrinos. They present six simple particle physics scenarios for how high energy neutrinos can be formed from cosmic rays that were accelerated by astrophysical accelerators like black holes. Each of these processes predicts a ratio of neutrino and anti-neutrino flavors at Earth (including neutrino oscillation effects over long distances). Since the Glashow resonance only occurs for electron anti-neutrinos, the authors point out that the appearance or non-appearance of the Glashow resonance in future data can constrain what types of processes may have produced these high energy neutrinos.

In more speculative work, the authors of [1404.0622] suggest that the absence of Glashow resonance events may even point to some kind of new physics that imposes a “speed limit” on neutrinos propagating through space, preventing them from ever reaching 6.3 PeV (see top figure).

Further Reading:

  • 1007.1247, Halzen and Klein, “IceCube: An Instrument for Neutrino Astronomy.” A review of the IceCube experiment.
  • hep-ph/9410384, Gaisser, Halzen, and Stanev, “Particle Physics with High Energy Neutrinos.” An older review of ultra high energy neutrinos.

Neutrinoless Double Beta Decay Experiments

Title: Neutrinoless Double Beta Decay Experiments
Author: Alberto Garfagnini
Published: arXiv:1408.2455 [hep-ex]

Neutrinoless double beta decay is a theorized process that, if observed, would provide evidence that the neutrino is its own antiparticle. The relatively recent discovery of neutrino mass from oscillation experiments makes this search particularly relevant, since the Majorana mechanism, which requires particles to be self-conjugate, can also provide mass. A variety of experiments based on different techniques hope to observe this process. Before providing an experimental overview, we first discuss the theory itself.

Figure 1: Neutrinoless double beta decay.

Beta decay occurs when an electron or positron is released along with a corresponding neutrino. Double beta decay is simply the simultaneous beta decay of two neutrons in a nucleus. “Neutrinoless,” of course, means that this decay occurs without the accompanying neutrinos; in this case, the two neutrinos in the beta decay annihilate with one another, which is only possible if they are self-conjugate. Figures 1 and 2 demonstrate the process by formula and image, respectively.

Figure 2: Double beta decay & neutrinoless double beta decay, from particlecentral.com/neutrinos_page.html.

The lack of accompanying neutrinos in such a decay violates lepton number, meaning this process is forbidden unless neutrinos are Majorana fermions. Without delving into a full explanation, this simply means that the particle is its own antiparticle (more information is given in the references). The importance lies in the lepton number of the neutrino. Neutrinoless double beta decay can be viewed as a nucleus absorbing two neutrinos and then decaying into two protons and two electrons (to conserve charge). The only way in which this process does not violate lepton number is if the lepton number is the same for a neutrino and an antineutrino; in other words, if they are the same particle.

The experiments currently searching for neutrinoless double beta decay can be classified according to the material used for detection. A partial list of active and future experiments is provided below.

1. EXO (Enriched Xenon Observatory): New Mexico, USA. The detector is filled with liquid 136Xe, which provides worse energy resolution than gaseous xenon, but this is compensated by the use of both scintillation and ionization signals. The collaboration finds no statistically significant evidence for 0νββ decay, and places a lower limit on the half-life of 1.1 * 10^25 years at 90% confidence.

2. KamLAND-Zen: Kamioka underground neutrino observatory near Toyama, Japan. Like EXO, the experiment uses liquid xenon, but in the past has required purification due to aluminum contamination in the detector. They report a 0νββ half-life lower limit of 2.6 * 10^25 years at 90% CL. Figure 3 shows the energy spectra of candidate events with the best-fit background.

Figure 3: KamLAND-Zen energy spectra of selected candidate events together with the best-fit backgrounds and 2νββ decays.

3. GERDA (Germanium Detector Array): Laboratori Nazionali del Gran Sasso, Italy. GERDA utilizes high-purity 76Ge diodes, which provide excellent energy resolution but typically come with very large backgrounds. To prevent signal contamination, GERDA has ultra-pure shielding that protects measurements from environmental radiation backgrounds. The half-life is bounded below at 90% confidence by 2.1 * 10^25 years.

4. MAJORANA: South Dakota, USA. This experiment is under construction, but a prototype is expected to begin running in 2014. If results from GERDA and MAJORANA look good, there is talk of building a next-generation germanium experiment that combines diodes from each detector.

 5. CUORE: Laboratori Nazionali del Gran Sasso, Italy. CUORE is a 130Te bolometric direct detector, meaning that it has two layers: an absorber made of crystal that releases energy when struck, and a sensor which detects the induced temperature changes. The experiment is currently under construction, so there are no definite results, but it expects to begin taking data in 2015.

While these results do not show the existence of 0νββ decay, such an observation would demonstrate the existence of Majorana fermions and give an estimate of the absolute neutrino mass scale. However, a continued non-observation would also be significant for scientific discovery, since it would increasingly disfavor the possibility that the neutrino is its own antiparticle. To get a better limit on the half-life, more advanced detector technologies are necessary; it will be interesting to see whether MAJORANA and CUORE will have better sensitivity to this process.
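To appreciate what a half-life limit of order 10^25 years means in practice, it can be converted into an expected number of decays. The kilogram of enriched 76Ge below is purely illustrative and is not the actual exposure of any of the experiments above:

    import math

    N_A = 6.022e23             # Avogadro's number
    mass_kg = 1.0              # illustrative: one kilogram of enriched 76Ge
    molar_mass_g = 76.0
    half_life_yr = 2.1e25      # GERDA's quoted lower limit

    n_atoms = mass_kg * 1000.0 / molar_mass_g * N_A
    decays_per_yr = n_atoms * math.log(2) / half_life_yr
    print(decays_per_yr)       # ~0.3 decays per kilogram per year at the limit

At the current limits one expects well under one decay per kilogram of isotope per year, which is why these experiments need large masses, long exposures, and extraordinarily low backgrounds.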

 


 

Fractional particles in the sky

Title: Goldstone Bosons as Fractional Cosmic Neutrinos

Author: Steven Weinberg (University of Texas, Austin)
Published: Phys.Rev.Lett. 110 (2013) 241301 [arXiv:1305.1971]

The Standard Model includes three types of neutrinos: the nearly-massless, charge-less partners of the leptons. Recent measurements from the Planck satellite, however, find that the ‘effective number of neutrinos’ in the early universe is N_\text{eff} = 3.36 \pm 0.34. This is consistent with the Standard Model, but one may wonder what it would mean if this number really were a fractional amount larger than three.

Physically, N_\text{eff} is actually a count of the number of light particles during recombination: the time in the early universe when the temperature had cooled enough for protons and electrons to form hydrogen. A snapshot of this era is imprinted on the cosmic microwave background (CMB). Particles whose masses are much less than the temperature, like neutrinos, are effectively ‘radiation’ during this era and affect the features of the CMB; see the appendix below for a rough sketch. In this way, cosmological observations can tell us about the spectrum of light particles.

The number N_\text{eff} is defined as part of the ratio between photons and non-photon contributions to the ‘radiation’ density of the universe. It is normalized to count the number of light fermion–anti-fermion pairs. In this paper, Steven Weinberg points out that a light bosonic particle can give a fractional contribution to this counting. First of all, fermionic and bosonic contributions to the energy density differ by 7/8ths due to the difference between Fermi and Bose statistics. Secondly, a boson that is its own antiparticle picks up an additional 1/2, so that it looks like a light boson should contribute

\displaystyle \Delta N_\text{eff} = \frac{1}{2} \left(\frac{7}{8}\right)^{-1} = \frac{4}{7} = 0.57.

We have two immediate problems:

  1. This is still larger than the observed mean that we’d like to hit, \Delta N_\text{eff} = 0.36.
  2. We’re implicitly assuming a new light scalar particle but quantum corrections generically make scalars very massive. (This is the essence of the Hierarchy problem associated with the Higgs mass.)

To address the second point, Weinberg assumes the new particle is a Goldstone boson: a scalar particle that is naturally light because it is associated with spontaneous symmetry breaking. For example, the lowest energy state of a ferromagnet breaks rotational symmetry since all the spins align in one direction. “Spin wave” excitations cost little energy and behave like light particles. Similarly, the strong force breaks chiral symmetry, which relates the behavior of left- and right-handed fermions. The pions are Goldstone bosons from this breaking and indeed have masses much smaller than other nuclear states like the proton. In this paper, Weinberg imagines that a new symmetry is broken spontaneously and the resulting Goldstone boson is the light state which can contribute to the number of light degrees of freedom in the early universe, N_\text{eff}.

This setup also gives a way to address the first problem: how do we reduce the contribution of this particle, \Delta N_\text{eff}, to better match what we observe in the CMB? One crucial assumption in our estimate for \Delta N_\text{eff} was that the new light particle was in thermal equilibrium with neutrinos. As the universe cooled, the other Standard Model particles became too heavy to be produced thermally and their entropy had to go towards heating up the lighter particles. If the Goldstone boson fell out of thermal equilibrium too early (say, its interaction rate became too small to overcome the expanding distance between it and other particles), it would not be heated by the heavy Standard Model particles. Because only the neutrinos are heated, the Goldstone then contributes much less than 4/7 to N_\text{eff}. (A sketch of the argument is in the appendix below.)

Weinberg points out that there’s an intermediate possibility: if the Goldstone boson just happens to go out of thermal equilibrium when only the muons, electrons, and neutrinos are still thermally accessible, then the only temperature increase for the neutrinos that isn’t picked up by the Goldstone comes from the muon. The expression for the entropy goes like

\displaystyle s \sim T^3 \left(\text{Photon}\right) + \frac{7}{8} T^3 \left(\text{SM}\right)

where “SM” refers to the number of Standard Model particles: a left-handed electron, a right-handed electron, a left-handed muon, a right-handed muon, and three left-handed neutrinos. (See this discussion on handedness.) The famous 7/8 shows up for the fermions. Counting particles and antiparticles, the electrons and muons contribute 4 states each and the neutrinos 6, so the entropy factor is 2 + (7/8)(4+4+6) = 57/4 with the muons and 2 + (7/8)(4+6) = 43/4 without them. In order to conserve entropy when we lose the two muons, the other particles therefore have to heat up by a factor of (57/43)^{1/3}. Meanwhile, the Goldstone boson temperature stays constant since it doesn’t interact enough with the other particles to heat up. The contribution of the Goldstone to the effective number of light particles in the early universe is thus scaled down:

\displaystyle \Delta N_\text{eff} = \frac{4}{7} \times \left(\frac{43}{57}\right)^{4/3} = 0.39.

This is now quite close to the \Delta N_\text{eff} = 0.36 \pm 0.34 measured from the CMB. Weinberg goes on to construct an explicit example of how the Goldstone might interact with the Higgs to produce the correct interaction rates. As an example of further model building, he then notes that one may further construct models of dark matter where the broken symmetry that produced the Goldstone is associated with the stability of the dark matter particle.

 

Appendix

We briefly sketch how light particles can affect the cosmic microwave background. For details, see 1104.2333, the Snowmass paper 1309.5383, or the review in the PDG. Particles ‘decouple’ from the rest of the thermal particles in the early universe when their interaction rate is smaller than the expansion rate of the universe: the universe expands too quickly for the particles to stay in thermal equilibrium.

Neutrinos happen to decouple just before thermal electrons and positrons begin to annihilate. The energy from those annihilations thus go into heating the photons. From entropy conservation one can determine the fixed ratio between the neutrino and photon temperatures. This, in turn, allows one to determine the relative number and energy densities.

Additional contributions to the effective number of light particles N_\text{eff} thus lead to an increase in the energy density. In the radiation dominated era of the universe, this increases the expansion rate (Hubble parameter). One can then use two observables to pin down the additional contribution to N_\text{eff}.

CMB with the sound horizon \theta_s and diffusion scale \theta_d illustrated. Image from Lloyd Knox.

Tension between gravitational pull and pressure from radiation produces acoustic oscillations in the microwave background. Two features which are sensitive to the Hubble parameter are:

  1. The sound horizon. This is the scale of acoustic oscillations and can be seen in the peaks of the CMB power spectrum. The angular sound scale goes like 1/H.
  2. The diffusion scale. This measures the damping of small scale oscillations from photon diffusion. This scale goes like \sqrt{1/H}.

A heuristic picture of what these scales correspond to is shown in the figure. The measurement of these two parameters thus gives a fit for the Hubble parameter, which can then give a fit for the effective number of light particles in the early universe, N_\text{eff}.