Too Massive? New measurement of the W boson’s mass sparks intrigue

This is part one of our coverage of the CDF W mass result, focusing on its implications. Read about the details of the measurement in a sister post here!

Last week the physics world was abuzz with the latest results from an experiment that stopped running a decade ago. Some heralded it as the beginning of a breakthrough in fundamental physics, with headlines like "Shock result in particle experiment could spark physics revolution" (BBC). So what exactly is all the fuss about?

The result itself is an ultra-precise measurement of the mass of the W boson, one of the carriers of the weak force. The measurement pegs its mass at 80,433 MeV with an uncertainty of 9 MeV. The excitement comes because this value disagrees with the prediction of our current best theory of particle physics, the Standard Model. In the theoretical structure of the Standard Model, the masses of the gauge bosons are all interrelated: the mass of the W boson can be computed from the mass of the Z along with a few other parameters of the theory (like the weak mixing angle). In a first approximation (i.e. at lowest order in perturbation theory), the mass of the W boson equals the mass of the Z boson times the cosine of the weak mixing angle. Based on other measurements that have been performed, including the Z mass, the Higgs mass, and the lifetime of the muon, the Standard Model predicts that the mass of the W boson should be 80,357 MeV, with an uncertainty of 6 MeV. The two numbers therefore disagree quite strongly, at the level of 7 standard deviations.
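As a back-of-the-envelope check (not the collaborations' actual statistical treatment), one can reproduce the quoted tension by adding the two uncertainties in quadrature, and see how the tree-level relation sets the overall scale of the W mass. The mixing-angle value below is an approximate effective value, used only for illustration:

```python
import math

# Naive tension between the CDF measurement and the SM prediction,
# treating the two uncertainties as independent and Gaussian.
m_cdf, err_cdf = 80433.0, 9.0   # MeV (CDF measurement)
m_sm,  err_sm  = 80357.0, 6.0   # MeV (SM prediction)

tension = (m_cdf - m_sm) / math.hypot(err_cdf, err_sm)
print(f"tension = {tension:.1f} sigma")  # roughly 7 sigma

# Lowest-order (tree-level) relation: m_W = m_Z * cos(theta_W).
# Loop corrections shift this by a few hundred MeV, which is exactly
# why the measured mass is sensitive to heavy new particles.
m_z = 91187.6          # MeV, Z boson mass
sin2_theta_w = 0.2312  # effective weak mixing angle (approximate)
m_w_tree = m_z * math.sqrt(1.0 - sin2_theta_w)
print(f"tree-level m_W ~ {m_w_tree:.0f} MeV")
```

The gap between the tree-level value and the measured one is entirely due to quantum corrections, which is why the W mass is such a sensitive probe of particles running in loops.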

If the measurement and the Standard Model prediction are both correct, this would imply that there is some deficiency in the Standard Model: some new particle interacting with the W boson whose effects haven't been accounted for. This would be welcome news to particle physicists, as we know that the Standard Model is an incomplete theory but have been lacking direct experimental confirmation of its deficiencies. The size of the discrepancy would also mean that whatever new particle is causing the deviation may be directly detectable at current or near-future colliders.

If this discrepancy is real, exactly what new particles would it entail? Judging by the 30+ (and counting) papers released on the subject in the last week, there are a good number of possibilities. Some examples include extra Higgs bosons, extra Z-like bosons, and vector-like fermions. It would take additional measurements and direct searches to pick out exactly which is the culprit. But it would hopefully give experimenters definite targets to look for, which would go a long way toward advancing the field.

But before everyone starts proclaiming the Standard Model dead and popping champagne bottles, it's important to take stock of this new CDF measurement in the larger context. Measurements of the W mass are hard; that's why it took the CDF collaboration over 10 years to publish this result after they stopped taking data. And although this measurement is the most precise one to date, several other W mass measurements have been performed by other experiments.

The Other Measurements

A plot summarizing the various W mass measurements performed to date
A summary of all the W mass measurements performed to date (black dots) with their uncertainties (blue bars) as compared to the Standard Model prediction (yellow band). One can see that this new CDF result is in tension with previous measurements. (source)

Previous measurements of the W mass have come from experiments at the Large Electron-Positron collider (LEP), another experiment at the Tevatron (D0), and experiments at the LHC (ATLAS and LHCb). Though none of these were as precise as this new CDF result, they had been painting a consistent picture of a value in agreement with the Standard Model prediction. If you take the average of these other measurements, their value differs from the CDF measurement at the level of about 4 standard deviations, which is quite significant. This discrepancy seems large enough that it is unlikely to arise from a purely random fluctuation, and likely means that either some uncertainties have been underestimated or something has been overlooked in either the previous measurements or this new one.
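To see where a figure like "about 4 standard deviations" comes from, here is a sketch of the standard inverse-variance combination. The "previous average" below uses the pre-CDF world average of roughly 80,379 ± 12 MeV as an illustrative input, and the comparison naively ignores correlations between measurements:

```python
import math

def weighted_average(measurements):
    """Inverse-variance weighted average of (value, uncertainty) pairs."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Illustrative: pre-CDF world average, roughly 80,379 +/- 12 MeV.
avg, avg_err = 80379.0, 12.0
cdf, cdf_err = 80433.0, 9.0

tension = (cdf - avg) / math.hypot(cdf_err, avg_err)
print(f"CDF vs. previous average: {tension:.1f} sigma")

# What a naive combined world average would look like (real combinations
# must account for correlated systematic uncertainties).
comb, comb_err = weighted_average([(avg, avg_err), (cdf, cdf_err)])
print(f"naive combination: {comb:.0f} +/- {comb_err:.0f} MeV")
```

Note how the more precise CDF value pulls the naive combination strongly toward itself, which is exactly why a single outlier measurement with small error bars is so consequential.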

What one would like are additional, independent, high-precision measurements that could confirm either the CDF value or the average of the previous measurements. Unfortunately it is unlikely that such a measurement will come in the near future. The only currently running facility capable of such a measurement is the LHC, but it will be difficult for experiments at the LHC to rival the precision of this CDF result.

W mass measurements are somewhat harder at the LHC than at the Tevatron for a few reasons. First, the LHC is a proton-proton collider, while the Tevatron was a proton-antiproton collider, and the LHC also operates at a higher collision energy. Both differences cause W bosons produced at the LHC to have more momentum than those produced at the Tevatron. Modeling of the W boson's momentum distribution can be a significant uncertainty in the mass measurement, and the extra momentum of W's at the LHC makes this effect larger. Additionally, the LHC has a higher collision rate, meaning that each time a W boson is produced there are tens of other collisions laid on top (rather than only a few, as at the Tevatron). These extra collisions are called pileup, and they make precision measurements like this one harder. In particular, the neutrino's momentum has to be inferred from the momentum imbalance in the event, and this becomes harder when there are many collisions on top of each other. Of course W mass measurements are possible at the LHC, as evidenced by the already published ATLAS and LHCb results, and we can look forward to improved results from both as well as a first result from CMS. But it may be very difficult for any of them to reach the precision of this CDF result.
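The momentum-imbalance trick in that last step can be sketched in a few lines. The transverse momenta below are made-up numbers for illustration, not real event data:

```python
import math

def missing_pt(visible):
    """Infer the neutrino's transverse momentum as the negative vector sum
    of all visible transverse momenta (px, py) in the event, in GeV."""
    px = -sum(p[0] for p in visible)
    py = -sum(p[1] for p in visible)
    return math.hypot(px, py)

# Clean event: one charged lepton plus a little hadronic recoil.
clean = [(38.0, 5.0), (-2.0, -1.0)]
print(missing_pt(clean))

# Pileup adds many soft, imperfectly measured contributions, each of
# which smears the inferred neutrino momentum.
pileup = clean + [(1.5, -0.8), (-0.6, 2.1), (0.9, 0.4)]
print(missing_pt(pileup))
```

Every extra pileup deposit that is mismeasured or misassigned shifts this sum, which is why tens of overlapping collisions degrade the resolution of a quantity the W mass fit depends on directly.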

A histogram of the transverse mass of the W from the ATLAS result. Showing how 50 MeV shifts in the W mass change the spectrum by extremely small amounts (a few tenths of a percent).
A plot of the transverse mass (one of the variables used in a measurement) of the W from the ATLAS measurement. The red and yellow lines show how little the distribution changes if the W mass changes by 50 MeV, which is around two and a half times the uncertainty of the ATLAS result. These shifts change the distribution by only a few tenths of a percent, illustrating the difficulty involved. (source)

The Future

A future electron-positron collider would be able to measure the W mass extremely precisely using an alternate method. Instead of looking at the W's decay, the mass could be measured through its production, by scanning the energy of the electron beams very close to the threshold for producing two W bosons. This method should offer precision significantly better than even this CDF result. However, any measurement from a possible future electron-positron collider won't come for at least a decade.

In the coming months, expect this new CDF measurement to receive a lot of buzz. Experimentalists will be poring over the details, trying to figure out why it is in tension with previous measurements, and working hard to produce new measurements from LHC data. Meanwhile theorists will write a flurry of papers detailing what new particles could explain the discrepancy and whether there is a connection to other outstanding anomalies (like the muon g-2). But the big question of whether we are seeing the first real crack in the Standard Model, or a mistake in one or more of the measurements, is unlikely to be answered for a while.

If you want to learn about how the measurement actually works, check out this sister post!

Read More:

Cern Courier "CDF sets W mass against the Standard Model"

Blog post on the CDF result from an (ATLAS) expert on W mass measurements “[Have we] finally found new physics with the latest W boson mass measurement?”

PDG Review "Electroweak Model and Constraints on New Physics"

The Mini and Micro BooNE Mystery, Part 1: The Experiment

Title: “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies”

Authors: The MicroBooNE Collaboration

Reference: https://arxiv.org/abs/2110.14054

This is the first post in a series on the latest MicroBooNE results, covering the experimental side. Click here to read about the theory side.

The new results from the MicroBooNE experiment generated a lot of excitement last week, being covered by several major news outlets. But unlike most physics news stories that make the press, this was a null result: they did not see any evidence for new particles or interactions. So why is it so interesting? Particle physics experiments produce null results every week, but what made this one newsworthy is that MicroBooNE was checking the results of two previous experiments, LSND and MiniBooNE, which did see something anomalous with very high statistical significance. If the LSND/MiniBooNE result had been confirmed, it would have been a huge breakthrough in particle physics, but now that it wasn't, many physicists are scratching their heads trying to make sense of these seemingly conflicting results. However, the MicroBooNE experiment is not exactly the same as MiniBooNE/LSND, and understanding the differences between the two sets of experiments may play an important role in unraveling this mystery.

Accelerator Neutrino Basics

All of these experiments are 'accelerator neutrino experiments', so let's first review what that means. Neutrinos are 'ghostly' particles that are difficult to study (check out this post for more background on neutrinos). Because they only couple through the weak force, neutrinos don't like to interact with anything very much, so in order to detect them you need both a big detector with a lot of active material and a source that produces a lot of neutrinos. These experiments detect neutrinos produced in a human-made beam. To make the beam, a high-energy beam of protons is directed at a target. These collisions produce a lot of particles, including unstable bound states of quarks like pions and kaons. Because these unstable particles are charged, magnets can be used to focus them into a well-behaved beam. When the pions and kaons decay they usually produce a muon and a muon neutrino. The beam of pions and kaons is pointed at an underground detector located a few hundred meters (or kilometers!) away, and given time to decay, leaving a beam of muons and muon neutrinos. The muons can be stopped by shielding (like the earth's crust), but the neutrinos sail right through to the detector.
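A quick kinematic aside: because the pion decay is a two-body decay, the neutrino energy is tied directly to the pion energy. The formula below is standard textbook kinematics for a relativistic pion; the 8 GeV beam energy is just an illustrative number:

```python
m_pi = 139.57  # MeV, charged pion mass
m_mu = 105.66  # MeV, muon mass

# For a highly relativistic pion, the most energetic (forward-going)
# neutrino carries a fixed fraction of the pion's energy:
frac = 1.0 - (m_mu / m_pi) ** 2
print(f"max E_nu ~ {frac:.2f} * E_pi")  # about 0.43

# So, e.g., an 8 GeV pion yields neutrinos up to roughly:
print(f"{frac * 8.0:.1f} GeV")
```

This tight correlation between the parent beam energy and the neutrino energy is what lets experimenters design a beam whose neutrino spectrum peaks where their detector is most sensitive.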

A diagram showing the basics of how a neutrino beam is made. Source

Nearly all of the neutrinos from the beam will still pass right through your detector, but a few of them will interact, allowing you to learn about their properties.

All of these experiments are considered 'short-baseline' because the distance between the neutrino source and the detector is only a few hundred meters (unlike the hundreds of kilometers in other such experiments). These experiments were designed to look for oscillation of the beam's muon neutrinos into electron neutrinos, which then interact with the detector (check out this post for some background on neutrino oscillations). Given the types of neutrinos we know about and their properties, this should be too short a distance for neutrinos to oscillate, so any observed oscillation would be an indication that something new (beyond the Standard Model) is going on.
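The statement that the baseline is "too short" can be made quantitative with the usual two-flavor oscillation formula. The numbers below (a 500 m baseline, 0.8 GeV neutrinos) are illustrative, not the exact parameters of any one experiment:

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavor appearance probability P(nu_mu -> nu_e):
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with L in km, E in GeV, and the mass splitting dm^2 in eV^2."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

L, E = 0.5, 0.8  # a short-baseline setup: ~500 m, ~0.8 GeV

# With the known atmospheric mass splitting, essentially nothing has
# had a chance to oscillate at this distance.
print(osc_prob(1.0, 2.5e-3, L, E))   # ~4e-6, i.e. essentially zero

# A hypothetical extra (sterile) neutrino with dm^2 ~ 1 eV^2 would
# oscillate appreciably even over 500 m.
print(osc_prob(0.01, 1.0, L, E))
```

This is why a confirmed short-baseline appearance signal would be so dramatic: it would require a new mass splitting far larger than either of the two we know about.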

The LSND + MiniBooNE Anomaly

The LSND and MiniBooNE 'anomaly' was an excess of events above backgrounds that looked like electron neutrinos interacting with their detectors. Both experiments were based on similar technology and sat a similar distance from their neutrino sources: their detectors were essentially big tanks of mineral oil lined with light-detecting sensors.

An engineer styling inside the LSND detector. Source

At these energies the most common way neutrinos interact is to scatter against a neutron to produce a proton and a charged lepton (called a ‘charged current’ interaction). Electron neutrinos will produce outgoing electrons and muon neutrinos will produce outgoing muons.

A diagram of a ‘charged current’ interaction. A muon neutrino comes in and scatters against a neutron, producing a muon and a proton. Source

When traveling through the mineral oil these charged leptons produce a ring of Cherenkov light, which is detected by the sensors on the edge of the detector. Muons and electrons can be differentiated based on the characteristics of the Cherenkov light they emit: electrons undergo multiple scatterings off the detector material while muons do not, which makes the Cherenkov rings of electrons 'fuzzier' than those of muons. High energy photons can produce electron-positron pairs, which look very similar to a regular electron signal and are thus a source of background.
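For a sense of scale, the opening angle of these rings follows from the standard Cherenkov condition. The refractive index below is an approximate value for mineral oil:

```python
import math

def cherenkov_angle_deg(n, beta=1.0):
    """Cherenkov emission angle from cos(theta) = 1 / (n * beta),
    where n is the refractive index and beta = v/c of the particle."""
    return math.degrees(math.acos(1.0 / (n * beta)))

# Mineral oil has a refractive index of roughly 1.47, so a highly
# relativistic electron or muon (beta ~ 1) radiates a light cone at:
print(f"{cherenkov_angle_deg(1.47):.0f} degrees")  # ~47 degrees
```

The cone's fixed opening angle is what projects a ring onto the wall of sensors; the electron's multiple scattering jitters the cone's axis, smearing that ring into the 'fuzzy' pattern described above.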

A comparison of muon and electron Cherenkov rings from the Super-Kamiokande experiment. Electrons produce fuzzier rings than muons. Source

Even with a good beam and a big detector, the feebleness of neutrino interactions means it takes a while to collect a decent number of potential events. The MiniBooNE experiment ran for 17 years looking for electron neutrinos scattering in its detector. In MiniBooNE's most recent analysis, they saw around 600 more events than would be expected if no anomalous electron neutrinos were reaching the detector. The statistical significance of this excess, 4.8 sigma, is very high, and combining with LSND, which saw a similar excess, the significance rises above 6 sigma. This means it is very unlikely to be a statistical fluctuation: either there is some new physics going on or one of the backgrounds has been seriously underestimated. This excess of events is what has been dubbed the 'MiniBooNE anomaly'.

The number of events seen in the MiniBooNE experiment as a function of the energy of the interaction. The predicted number of events from various known background sources is shown in the colored histograms. The best fit to the data including the signal of anomalous oscillations is shown by the dashed line. One can see that at low energies the black data points lie significantly above these backgrounds and strongly favor the oscillation hypothesis.

The MicroBooNE Result

The MicroBooNE experiment was commissioned to verify the MiniBooNE anomaly as well as to test a new type of neutrino detector technology: MicroBooNE is the first major neutrino experiment to use a 'Liquid Argon Time Projection Chamber' (LArTPC). This new detector technology allows a more detailed reconstruction of what happens when a neutrino scatters in the detector. The active volume of the detector is liquid argon, which allows both light and charge to propagate through it. When a neutrino scatters in the liquid argon, scintillation light is produced and collected by sensors. As charged particles created in the collision pass through the liquid argon they ionize the atoms they pass by, and an electric field applied to the detector causes this produced charge to drift toward a mesh of wires where it is collected. By measuring the difference in arrival time between the light and the charge, as well as the amount of charge collected at different positions and times, the precise locations and trajectories of the particles produced in the collision can be determined.
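The time-difference trick can be sketched in a couple of lines. The drift speed below is a typical order-of-magnitude value for liquid argon at standard TPC drift fields, quoted only for illustration:

```python
# In a LArTPC the scintillation light arrives essentially instantly and
# sets the event time t0; the ionization charge drifts slowly to the wires.
DRIFT_SPEED_MM_PER_US = 1.6  # roughly typical at ~500 V/cm (approximate)

def drift_coordinate_mm(t_charge_us, t_light_us):
    """Position along the drift direction, from the gap between the
    charge arrival time and the prompt light flash (microseconds)."""
    return DRIFT_SPEED_MM_PER_US * (t_charge_us - t_light_us)

# Charge arriving 500 microseconds after the flash was deposited
# about 0.8 meters away from the wire planes.
print(drift_coordinate_mm(500.0, 0.0))  # 800.0 mm
```

The wire that collects the charge gives the other coordinates, so each deposit is located in 3D, which is what makes the photograph-like event displays possible.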

A beautiful reconstructed event in the MicroBooNE detector. The colored lines show the tracks of different particles produced in the collision, all coming from a single point where the neutrino interaction took place. One can also see that one of the tracks produced a shower of particles away from the interaction vertex.

This means that unlike MiniBooNE and LSND, MicroBooNE can see not just the lepton but also the hadronic particles (protons, pions, etc.) produced when a neutrino scatters in its detector, so the same type of neutrino interaction actually looks very different in their detector. When they went to test the MiniBooNE anomaly, they therefore adopted multiple strategies for what exactly to look for. The first analysis looks for the type of interaction an electron neutrino would most likely produce: an outgoing electron and proton whose kinematics match those of a charged current interaction. The second set of analyses, designed to mimic the MiniBooNE selection, is slightly more general: they require one electron and any number of protons, but no pions. The third analysis is the most general and requires an electron along with anything else.
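A toy version of this cascade of selections might look like the following. The channel names and exact requirements are simplified stand-ins for the real analyses, not MicroBooNE's actual selection code:

```python
def classify(n_electrons, n_protons, n_pions):
    """Toy event classifier, from most to least exclusive channel.
    Channel names here are illustrative labels, not official ones."""
    channels = []
    if n_electrons == 1:
        if n_protons == 1 and n_pions == 0:
            channels.append("1e1p")      # exclusive: one electron, one proton
        if n_pions == 0:
            channels.append("1eNp0pi")   # one electron, any protons, no pions
        channels.append("1eX")           # inclusive: one electron + anything
    return channels

print(classify(1, 1, 0))  # ['1e1p', '1eNp0pi', '1eX']
print(classify(1, 2, 1))  # ['1eX']
```

Nesting the selections this way is the point: the exclusive channel has the least background but might miss an exotic signal, while the inclusive channel catches almost anything with an electron at the cost of more background.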

These different analyses have different levels of sensitivity to the MiniBooNE anomaly, but all of them are found to be consistent with the background-only hypothesis: there is no sign of any excess events. Three of the four even see slightly fewer events than the expected background.

A summary of the different MicroBooNE analyses. The Y-axis shows the ratio of observed events to the number expected if only background were present. The red lines show the excess predicted in each channel if the MiniBooNE anomaly produced a signal there. One can see that the black data points are much more consistent with the grey bands showing the background-only prediction than with the amount predicted if the MiniBooNE anomaly were present.

Overall, the MicroBooNE data rejects the hypothesis that the MiniBooNE anomaly is due to electron neutrino charged current interactions at quite high significance (> 3 sigma). So if it's not electron neutrinos causing the MiniBooNE anomaly, what is it?

What’s Going On?

Given that MicroBooNE did not see any signal, many would guess that MiniBooNE's claimed excess must be flawed, with one of the backgrounds underestimated. Unfortunately it is not very clear which one it could be. In the low-energy region where MiniBooNE sees its excess, there are three major background sources: decays of the Delta baryon that produce a photon (shown in tan), neutral pions decaying to pairs of photons (shown in red), and backgrounds from true electron neutrinos (shown in various shades of green). However, all of these seem quite unlikely to be the source of the MiniBooNE anomaly.

Before releasing these results, MicroBooNE performed a dedicated search for Delta baryons decaying into photons and saw a rate in agreement with the theoretical prediction MiniBooNE used, well below the amount needed to explain the MiniBooNE excess.

Backgrounds from true electron neutrinos produced in the beam, as well as from the decays of muons, should not concentrate only at low energies like the excess does, and their rates have also been measured within MiniBooNE data by looking at other signatures.

The decay of a neutral pion can produce two photons, and if one of them escapes detection, the remaining single photon will mimic the signal. However, one would expect photons to escape more often near the edges of the detector, whereas the excess events are distributed uniformly throughout the detector volume.

So the mystery of what could be causing this excess is now even greater. If it is a background, it seems most likely to come from an unknown source not previously considered. As will be discussed in our part 2 post, it is possible that the MiniBooNE anomaly was caused by a more exotic form of new physics: perhaps the excess events in MiniBooNE were not really coming from the scattering of electron neutrinos but from something else that produced a similar signature in the detector. Some of these explanations involve particles that decay into pairs of electrons or photons. These scenarios should be testable with MicroBooNE data, but will require dedicated analyses for their different signatures.

So on the experimental side, we are left to scratch our heads and wait for new results from MicroBooNE that may help get to the bottom of this.

Click here for part 2 of our MicroBooNE coverage, which goes over the theory side of the story!

Read More

"Is the Great Neutrino Puzzle Pointing to Multiple Missing Particles?" – Quanta Magazine article on the new MicroBooNE result

"Can MiniBoone be Right?" – Resonaances blog post summarizing the MiniBooNE anomaly prior to the MicroBooNE results

A review of different types of neutrino detectors – from the T2K experiment

The XENON1T Excess: The Newest Craze in Particle Physics

Paper: Observation of Excess Electronic Recoil Events in XENON1T

Authors: XENON1T Collaboration

Recently the particle physics world has been abuzz with a new result from the XENON1T experiment, which may have seen a revolutionary signal. XENON1T is one of the world's most sensitive dark matter experiments. The experiment consists of a huge tank of liquid xenon placed deep underground in the Gran Sasso laboratory in Italy. It is a 'direct-detection' experiment, hunting for very rare signals of dark matter particles from space interacting with its detector. It was originally designed to look for WIMPs, Weakly Interacting Massive Particles, which were long everyone's favorite candidate for dark matter. However, given recent null results from WIMP-hunting direct-detection experiments and from collider experiments at the LHC, physicists have started to broaden their dark matter horizons. Experiments like XENON1T, designed to look for heavy WIMPs scattering off of xenon nuclei, have realized that they can also be very sensitive to much lighter particles by looking for electron recoils. New particles much lighter than traditional WIMPs would not leave much of an impact on large xenon nuclei, but they can leave a signal in the detector if they instead scatter off of the electrons around those nuclei. These electron recoils can be identified by the ionization and scintillation signals they leave in the detector, allowing them to be distinguished from nuclear recoils.

In this recent result, the XENON1T collaboration searched for these electron recoils in the energy range of 1-200 keV with unprecedented sensitivity, thanks to exquisite control over backgrounds and an extremely low energy threshold for detection. What has gotten many physicists excited, beyond the sensitivity itself, is that the latest data show an excess of events above the expected backgrounds in the 1-7 keV region. The statistical significance of the excess is 3.5 sigma, which in particle physics is enough to claim 'evidence' of an anomaly but short of the 5 sigma typically required to claim a discovery.
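For reference, the translation between "sigmas" and probabilities is just the Gaussian tail integral; a minimal sketch:

```python
import math

def one_sided_p(z):
    """One-sided Gaussian tail probability for a z-sigma excess:
    p = P(Z > z) for a standard normal variable Z."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(f"3.5 sigma -> p ~ {one_sided_p(3.5):.1e}")  # ~2e-4: 'evidence'
print(f"5.0 sigma -> p ~ {one_sided_p(5.0):.1e}")  # ~3e-7: 'discovery'
```

The demanding 5-sigma convention exists partly because experiments test many signal hypotheses and energy regions at once, so modest-probability flukes turn up regularly.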

The XENON1T data that has caused recent excitement. The ‘excess’ is the spike in the data (black points) above the background model (red line) in the 1-7 keV region. The significance of the excess is around 3.5 sigma.

So what might this excess mean? The first, and least fun, answer is: nothing. 3.5 sigma is not enough evidence to claim a discovery, and those well versed in particle physics history know that numerous excesses with similar significances have faded away with more data. Still, it is definitely an intriguing signal, and worthy of further investigation.

The pessimistic explanation is that it is due to some systematic effect or background not yet modeled by the XENON1T collaboration. Many have pointed out that one should be skeptical of signals that appear right at the edge of an experiment's detection threshold. The so-called 'efficiency turn-on', the function that describes how well an experiment can reconstruct signals right at the edge of detection, can be difficult to model. However, there are good reasons to believe that is not the case here. First of all, the events of interest are actually located in the flat part of the efficiency curve (note that the background line is flat below the excess), and the excess rises above this flat background. To explain the excess, the efficiency would have to somehow be better at low energies than at high energies, which seems very unlikely. Or there would have to be a very strange, unaccounted-for bias in which some higher-energy events were mis-reconstructed at lower energies. These explanations seem even more implausible given that the collaboration calibrated their electron reconstruction using the radioactive decays of Radon-220 over exactly this energy range and were able to model the turn-on and detection efficiency very well.

Results of a calibration using radioactive decays of Radon-220. One can see that the data in the efficiency turn-on (right around 2 keV) are modeled quite well and no excesses are seen.

The possibility of a novel Standard Model background is much more plausible, however. The XENON collaboration raises the possibility that the excess is due to a previously unobserved background from tritium β-decays. Tritium decays to Helium-3, an electron, and a neutrino with a half-life of around 12 years. The energy released in the decay is 18.6 keV, giving the electron an average energy of a few keV, and the expected energy spectrum of this decay matches the observed excess quite well. Additionally, the amount of contamination needed to explain the signal is exceedingly small: around 100 parts-per-billion of H2 would supply enough tritium, which translates to just 3 tritium atoms per kilogram of liquid xenon. The collaboration tries its best to investigate this possibility, but can neither rule out nor confirm such a tiny amount of tritium contamination. However, other similar contaminants, like diatomic oxygen, have been confirmed to be below this level by 2 orders of magnitude, so it is not impossible that such a small amount of contamination was avoided.
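One can see why this spectrum is a plausible match by sketching the standard allowed beta-decay spectrum shape (ignoring the Fermi function and all detector effects, so a rough approximation only):

```python
import math

M_E = 511.0  # keV, electron mass
Q   = 18.6   # keV, tritium endpoint energy

def spectrum(E_keV):
    """Allowed beta spectrum shape, up to normalization:
    dN/dE ~ p * E_tot * (Q - E)^2, with p the electron momentum
    and E_tot = E + m_e its total energy (all in keV)."""
    if not 0.0 < E_keV < Q:
        return 0.0
    p = math.sqrt(E_keV**2 + 2.0 * M_E * E_keV)
    return p * (E_keV + M_E) * (Q - E_keV) ** 2

# Numerically estimate the mean electron energy on a fine grid.
es = [i * Q / 10000.0 for i in range(1, 10000)]
ws = [spectrum(e) for e in es]
mean = sum(e * w for e, w in zip(es, ws)) / sum(ws)
print(f"mean beta energy ~ {mean:.1f} keV")  # a few keV
```

The spectrum rises from zero, peaks at a few keV, and dies off toward the 18.6 keV endpoint, which is broadly the behavior of the 1-7 keV excess.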

So while many are placing their money on the tritium explanation, the exciting possibility remains that this is our first direct evidence of physics Beyond the Standard Model (BSM)! If the signal really is a new particle or interaction, what would it be? Currently it is quite hard to pin down based on the data alone. The analysis specifically searched for two signals that would show up in exactly this energy range: axions produced in the sun, and solar neutrinos interacting with electrons via a large (BSM) magnetic moment. Both of these models provide good fits to the signal shape, with the axion explanation slightly preferred. However, since this result was released, many have pointed out that these models would conflict with constraints from astrophysical measurements. In particular, the axion model they searched for would give stars an additional way to release energy, causing them to cool faster than in the Standard Model. The strength of the axion-electron interaction needed to explain the XENON1T excess is incompatible with the observed rates of stellar cooling. Similar astrophysical constraints on neutrino magnetic moments make that explanation unlikely as well.

This has left the door open for theorists to come up with new explanations for these excess events, or to think of clever ways to alter existing models to avoid these constraints. And theorists are certainly seizing the opportunity! New explanations appear on the arXiv every day, with no sign of stopping. In the roughly two weeks between the XENON1T announcement and the writing of this post, there have already been 50 follow-up papers! Many of these explanations involve various models of dark matter with some additional twist, such as being heated up in the sun or boosted to a higher energy in some other way.

A collage of different models trying to explain the XENON1T excess (center). Each plot is from a separate paper released in the first week and a half following the original announcement. Source

So while theorists are currently having their fun, the only way we will figure out the true cause of this anomaly is with more data. The good news is that the XENON collaboration is already preparing the XENONnT experiment, the follow-up to XENON1T. XENONnT will feature a larger active volume of xenon and a lower background level, allowing it to potentially confirm this anomaly at the 5-sigma level with only a few months of data. If the excess persists, more data would also allow a better determination of the shape of the signal, possibly distinguishing the tritium spectrum from a new physics explanation. If it is real, other liquid xenon experiments like LUX and PandaX should also be able to independently confirm the signal in the near future. The next few years should be a very exciting time for these dark matter experiments, so stay tuned!

Read More:

Quanta Magazine Article “Dark Matter Experiment Finds Unexplained Signal”

Previous ParticleBites Post on Axion Searches

Blog Post “Hail the XENON Excess”

LHCb’s Flavor Mystery Deepens

Title: Measurement of CP-averaged observables in the B0→K*0μ+μ− decay

Authors: LHCb Collaboration

Reference: https://arxiv.org/abs/2003.04831

In the Standard Model, matter is organized into 3 generations: 3 copies of the same family of particles, each sequentially heavier than the last. Though the Standard Model successfully describes this structure, it offers no insight into why nature should be this way. Many believe that a more fundamental theory of nature would better explain where this structure comes from. A natural way to look for clues to this deeper origin is to check whether these different 'flavors' of particles really behave in exactly the same ways, or whether there are subtle differences that may hint at their origin.

The LHCb experiment is designed to probe exactly these questions. In recent years it has seen a series of anomalies, tensions between data and Standard Model predictions, that may indicate the presence of new particles which couple to the different generations. In the Standard Model, the different generations can only interact with each other through the W boson, which means that quarks with the same charge can only transform into one another through more complicated processes like those described by 'penguin diagrams'.

The so-called 'penguin diagrams' describe how rare decays like bottom quark → strange quark can happen in the Standard Model. The name comes both from their shape and from a famous bar bet. Who says physicists don't have a sense of humor?

These processes have quite small rates in the Standard Model, which makes them quite sensitive to new particles, even ones that are very heavy or interact very weakly with the SM ones. This makes studying these sorts of flavor decays a promising avenue to search for new physics.

In a press conference last month, LHCb unveiled a new measurement of the angular distribution of the rare B0→K*0μ+μ− decay. The interesting part of this process involves a b → s transition (a bottom quark decaying into a strange quark), where a number of anomalies have been seen in recent years.

Feynman diagrams of the decay being studied. A B meson (composed of a bottom and a down quark) decays into a Kaon (composed of a strange quark and a down quark) and a pair of muons. Because this decay is very rare in the Standard Model (left diagram), it could be a good place to look for the effects of new particles (right diagram). Diagrams taken from here

Rather than just measuring the total rate of this decay, this analysis focuses on measuring the angular distribution of the decay products. They also perform the measurement in different bins of q², the square of the dimuon pair's invariant mass. These choices make the measurement less sensitive to uncertainties in the Standard Model prediction from hard-to-compute hadronic effects, and also allow better characterization of whatever particle may be causing a deviation.

The kinematics of the decay are fully described by three angles between the final-state particles, together with q^2. Knowing the spins and polarizations of each of the particles, the full angular distribution can be written in terms of 8 parameters. They also have to account for the angular distribution of background events, and for distortions of the true angular distribution caused by the detector. Once all such effects are accounted for, they can fit the full angular distribution in each q^2 bin to extract the angular coefficients in that bin.
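To give a flavor of what such a fit looks like, here is a deliberately simplified, self-contained sketch: one angle and one coefficient instead of LHCb’s three angles and eight coefficients, with made-up numbers throughout. Toy events are drawn from an angular PDF and the coefficient is recovered with an unbinned maximum-likelihood fit, which is the same idea applied per q^2 bin in the real analysis.

```python
import numpy as np

# Toy sketch (NOT LHCb's actual fit): extract one angular coefficient S
# from the 1D distribution p(c) = (1 + S*c)/2, where c = cos(theta) in
# [-1, 1], using an unbinned maximum-likelihood fit.
rng = np.random.default_rng(42)
S_TRUE = 0.3  # hypothetical "true" angular coefficient

# Generate toy "events" by accept-reject sampling from p(c).
proposals = rng.uniform(-1, 1, 200_000)
accept = rng.uniform(0, 1, proposals.size) < (1 + S_TRUE * proposals) / (1 + abs(S_TRUE))
events = proposals[accept]

# Scan the negative log-likelihood over a grid of S and take the minimum.
S_grid = np.linspace(-0.9, 0.9, 721)
nll = [-np.sum(np.log(0.5 * (1 + S * events))) for S in S_grid]
S_fit = S_grid[int(np.argmin(nll))]
print(f"fitted S = {S_fit:.3f} (true value {S_TRUE})")
```

The real measurement replaces the grid scan with a proper multi-parameter fit and folds in background shapes and detector acceptance, but the likelihood logic is the same.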

This measurement is an update to their 2015 result, now with twice as much data. The previous result saw an intriguing tension with the SM at the level of roughly 3 standard deviations. The new result agrees well with the previous one, and mildly increases the tension to the level of 3.4 standard deviations.
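As an aside, the “number of standard deviations” quoted for tensions like this is, in the simplest treatment, just the difference between measurement and prediction divided by the combined uncertainty. A minimal sketch, using the W mass numbers from the companion post as a worked example:

```python
import math

def tension_sigma(measured, sigma_meas, predicted, sigma_pred):
    """Discrepancy in standard deviations, adding uncertainties in quadrature."""
    return abs(measured - predicted) / math.hypot(sigma_meas, sigma_pred)

# CDF's W mass vs. the SM prediction (values in MeV, from the companion post)
print(f"{tension_sigma(80_433, 9, 80_357, 6):.1f} sigma")  # about 7 sigma
```

Real significance estimates can be more involved (correlated uncertainties, look-elsewhere effects), but this simple quadrature combination is where quoted numbers like “3.4 standard deviations” come from.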

LHCb’s measurement of P’5, an observable describing one part of the angular distribution of the decay. The orange boxes show the SM prediction of this value, and the red, blue and black points show LHCb’s most recent measurement (a combination of its ‘Run 1’ measurement and the more recent 2016 data). The grey regions are excluded from the measurement because they have large backgrounds from the decays of other mesons.

This latest result is even more interesting given that LHCb has seen an anomaly in another measurement (the R_K anomaly) involving the same b → s transition. This has led some to speculate that both effects could be caused by a single new particle. The most popular idea is a so-called ‘leptoquark’ that only interacts with some of the flavors.

LHCb is already hard at work on updating this measurement with more recent data from 2017 and 2018, which should once again double the number of events. Updates to the R_K measurement with new data are also hotly anticipated. The Belle II experiment has also recently started taking data and should be able to perform similar measurements. So we will have to wait and see if this anomaly is just a statistical fluke, or our first window into physics beyond the Standard Model!

Read More:

Symmetry Magazine “The mystery of particle generations”

Cern Courier “Anomalies persist in flavour-changing B decays”

Lecture Notes “Introduction to Flavor Physics”

A new anomaly: the electromagnetic duality anomaly

Article: Electromagnetic duality anomaly in curved spacetimes
Authors: I. Agullo, A. del Rio and J. Navarro-Salas
Reference: arXiv:1607.08879

Disclaimer: this blogpost requires some basic knowledge of QFT (or being comfortable with taking my word at face value for some of the claims made :))

Anomalies exist everywhere. Probably the most intriguing ones are medical, but in particle physics they can be pretty fascinating too. In physics, an anomaly refers to the breaking of a classical symmetry by quantum effects. There are basically two types of anomalies:

  • The first type, gauge anomalies, are red flags: if they show up in your theory, they indicate that the theory is mathematically inconsistent.
  • The second type of anomaly does not signal any problems with the theory and in fact can have experimentally observable consequences. A prime example is the chiral anomaly, which nicely explains the decay rate of the neutral pion into two photons.

Fig. 1: Illustration of pion decay into two photons. [Credit: Wikimedia Commons]

In this paper, a new anomaly is discussed. This anomaly is related to the polarization of light and is called the electromagnetic duality anomaly.

Chiral anomaly 101
So let’s first brush up on the basics of the chiral anomaly. How does this anomaly explain the decay rate of the neutral pion into two photons? For that we need to start with the QED Lagrangian, which describes the interactions between the electromagnetic field (that is, the photons) and spin-½ fermions (which pions are built from):

\displaystyle \mathcal L = \bar\psi \left( i \gamma^\mu \partial_\mu - e \gamma^\mu A_\mu \right) \psi - m \bar\psi \psi

where the important players in the above equation are the \psis that describe the spin-½ particles and the vector potential A_\mu that describes the electromagnetic field. In the massless limit (m = 0), this Lagrangian is invariant under the chiral symmetry:

\displaystyle \psi \to e^{i \alpha \gamma_5} \psi , \qquad \alpha \ \text{constant}.

Due to this symmetry, the current density j^\mu = \bar{\psi} \gamma_5 \gamma^\mu \psi is conserved: \nabla_\mu j^\mu = 0. This immediately tells us that the charge associated with this current density is time-independent. Since the chiral charge is time-independent, it prevents the \psi fields from decaying into the electromagnetic field alone, because the \psi field carries non-zero chiral charge while photons carry none. Hence, if this were the end of the story, a pion would never be able to decay into two photons.
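Spelling out the middle step: current conservation turns the time derivative of the chiral charge into a surface term, which vanishes for fields that fall off at spatial infinity:

```latex
Q_5 = \int d^3x \, j^0 , \qquad
\frac{dQ_5}{dt} = \int d^3x \, \partial_0 j^0
= - \int d^3x \, \partial_i j^i = 0 ,
```

where the last equality follows from the divergence theorem.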

However, the conservation of the charge is only valid classically! As soon as you go from classical field theory to quantum field theory this is no longer true; hence the name (quantum) anomaly. This can be seen most succinctly using Fujikawa’s observation: even though the field \psi and the Lagrangian are invariant under the chiral symmetry, this is not enough for the quantum theory to also be invariant. In the path integral approach to quantum field theory, it is not just the Lagrangian that needs to be invariant; the entire path integral, measure included, needs to be:

\displaystyle \int D[A] \, D[\bar\psi] \, D[\psi] \, e^{i\int d^4x \, \mathcal L} .

From calculating how the chiral symmetry acts on the measure D \left[\psi \right]  \, D \left[\bar \psi \right], one can extract all the relevant physics such as the decay rate.
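Carrying this out for the chiral transformation above yields the famous anomaly equation (up to sign and normalization conventions, which vary between references):

```latex
\nabla_\mu j^\mu = \frac{e^2}{16\pi^2} \, \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma} ,
```

and it is precisely this non-zero right-hand side that lets the neutral pion decay into two photons.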

The electromagnetic duality anomaly
Just like the chiral anomaly, the electromagnetic duality anomaly breaks, at the quantum level, a symmetry that exists classically. The symmetry that is broken in this case is, as you might have guessed from its name, the electromagnetic duality. This symmetry is a generalization of one you are already familiar with from source-free electromagnetism: if you write down the source-free Maxwell equations, you can swap the electric and magnetic fields and the equations look the same (you just have to send \displaystyle \vec{E} \to \vec{B} and \vec{B} \to - \vec{E}). The more general electromagnetic duality referred to here is slightly harder to visualize: it is a rotation in the space of the electromagnetic field tensor and its dual. Its transformation is easy to write down mathematically, though:

\displaystyle F_{\mu \nu} \to \cos \theta \, F_{\mu \nu} + \sin \theta \, \, ^\ast F_{\mu \nu} .

In other words, if you plug this transformation into the source-free Maxwell equations, they do not change: the theory is invariant. Now, following the same steps as for the chiral anomaly, we find that the associated current is conserved and its charge is time-independent, thanks to the symmetry. Here, the charge is simply the difference between the number of photons with left helicity and those with right helicity.
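As a quick sanity check of the classical symmetry, one can verify numerically, in E and B language, that a finite duality rotation leaves gauge-invariant observables untouched (just a sketch: the field values here are random numbers, not solutions of Maxwell's equations):

```python
import numpy as np

# Duality rotation in E/B language:
#   E -> cos(t) E + sin(t) B,   B -> cos(t) B - sin(t) E.
# Check that the field energy density and the Poynting vector are unchanged.
rng = np.random.default_rng(0)
E, B = rng.normal(size=3), rng.normal(size=3)  # arbitrary field values at a point
t = 0.7                                        # arbitrary rotation angle theta

E2 = np.cos(t) * E + np.sin(t) * B
B2 = np.cos(t) * B - np.sin(t) * E

energy = lambda e, b: 0.5 * (e @ e + b @ b)
assert np.isclose(energy(E, B), energy(E2, B2))       # energy density invariant
assert np.allclose(np.cross(E, B), np.cross(E2, B2))  # Poynting vector invariant
print("duality rotation leaves energy and momentum flux invariant")
```

At t = π/2 this reduces to the familiar swap E → B, B → −E.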

Let us continue following the exact same steps as those for the chiral anomaly. The key is to first write electromagnetism in variables analogous to those of the chiral theory. Then you apply Fujikawa’s method and… *drum roll for the anomaly that is approaching*…. Anti-climax: nothing happens, everything seems to be fine. There are no anomalies, nothing!

So why the title of this blog? Well, as soon as you couple the electromagnetic field to a gravitational field, the electromagnetic duality is broken in a deeply quantum way. The difference between the number of left-helicity and right-helicity photons is no longer conserved when spacetime is curved.
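Schematically (suppressing the precise numerical coefficient, which is the technical heart of the paper), the duality current picks up a divergence proportional to the gravitational analogue of F\,{}^\ast F, built out of the spacetime curvature:

```latex
\langle \nabla_\mu j^\mu \rangle \;\propto\; R_{\mu\nu\rho\sigma} \, {}^\ast R^{\mu\nu\rho\sigma} ,
```

which vanishes in flat spacetime, consistent with the anti-climax above, but generically not in curved backgrounds.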

Physical consequences
Some potentially really cool consequences have to do with the study of light passing by rotating stars, black holes, or even rotating clusters. These astrophysical objects do not only bend the light gravitationally; the optical helicity anomaly tells us that there might also be a difference in polarization between light rays coming from different sides of these objects. This may also have consequences for the cosmic microwave background radiation, which is a ‘picture’ of our universe when it was only 380,000 years old (as compared to the 13.8 billion years it is today!). How big this effect is, and whether we will be able to see it in the near future, is still an open question.

Further reading 

  • An introduction to anomalies using only quantum mechanics instead of quantum field theory is “Anomalies for pedestrians” by Barry Holstein
  • The beautiful book “Quantum field theory and the Standard Model” by Michael Schwartz has a nice discussion in the later chapters on the chiral anomaly.
  • Lecture notes by Adel Bilal on anomalies in general, aimed at graduate students, can be found here