Going Rogue: The Search for Anything (and Everything) with ATLAS

Title: “A model-independent general search for new phenomena with the ATLAS detector at √s=13 TeV”

Author: The ATLAS Collaboration

Reference: ATLAS-PHYS-CONF-2017-001

 

When a single experimental collaboration has a few thousand contributors (and even more opinions), there are a lot of rules. These rules dictate everything from how you get authorship rights to how you get chosen to give a conference talk. In fact, this rulebook is so thorough that it could be the topic of a whole other post. But for now, I want to focus on one rule in particular, a rule that has only been around for a few decades in particle physics but is considered one of the most important practices of good science: blinding.

In brief, blinding is the notion that it’s experimentally compromising for a scientist to look at the data before finalizing the analysis. As much as we like to think of ourselves as perfectly objective observers, the truth is, when we really really want a particular result (let’s say a SUSY discovery), that desire can bias our work. For instance, imagine you were looking at actual collision data while you were designing a signal region. You might unconsciously craft your selection in such a way to force an excess of data over background prediction. To avoid such human influences, particle physics experiments “blind” their analyses while they are under construction, and only look at the data once everything else is in place and validated.

Figure 1: “Blind analysis: Hide results to seek the truth”, R. MacCoun & S. Perlmutter for Nature.com

This technique has kept the field of particle physics in rigorous shape for quite a while. But there’s always been a subtle downside to this practice. If we only ever look at the data after we finalize an analysis, we are trapped within the confines of theoretically motivated signatures. In this blinding paradigm, we’ll look at all the places that theory has shone a spotlight on, but we won’t look everywhere. Our whole game is to search for new physics. But what if amongst all our signal regions and hypothesis testing and neural net classifications… we’ve simply missed something?

It is this nagging question that motivates a specific method of combing the LHC datasets for new physics, one that the authors of this paper call a “structured, global and automated way to search for new physics.” With this proposal, we can let the data itself tell us where to look and throw unblinding caution to the winds.

The idea is simple: scan the whole ATLAS dataset for discrepancies, setting a threshold for what defines a feature as “interesting”. If this preliminary scan stumbles upon a mysterious excess of data over Standard Model background, don’t just run straight to Stockholm proclaiming a discovery. Instead, simply remember to look at this area again once more data is collected. If your feature of interest is a fluctuation, it will wash out and go away. If not, you can keep watching it until you collect enough statistics to do the running to Stockholm bit. Essentially, you let a first scan of the data rather than theory define your signal regions of interest. In fact, all the cool kids are doing it: H1, CDF, D0, and even ATLAS and CMS have performed earlier versions of this general search.

The nuts and bolts of this particular paper involve 3.2 fb⁻¹ of 13 TeV LHC data collected in 2015. Since the whole goal of this strategy is to be as general as possible, we might as well go big or go home with potential topologies. To that end, the authors comb through all the data and select any event “involving high pT isolated leptons (electrons and muons), photons, jets, b-tagged jets and missing transverse momentum”. All of the backgrounds are simply modeled with Monte Carlo simulation.

Once we have all these events, we need to sort them. Here, “the classification includes all possible final state configurations and object multiplicities, e.g. if a data event with seven reconstructed muons is found it is classified in a ‘7- muon’ event class (7μ).” When you add up all the possible permutations of objects and multiplicities, you come up with a cool 639 event classes with at least 1 data event and a Standard Model expectation of at least 0.1.
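
To make the bookkeeping concrete, here is a minimal sketch of how events might be sorted into exclusive classes by object content. The label ordering, object names, and the toy events are illustrative assumptions, not the actual ATLAS implementation.

```python
# A minimal sketch of sorting events into exclusive classes by object multiplicity.
# Object names, label ordering, and the toy events are illustrative only.
from collections import Counter

OBJECT_ORDER = ["e", "mu", "gamma", "j", "b"]  # assumed label ordering

def event_class(objects, has_met):
    """Build a class label like '7mu' or 'MET2mu1gamma4j' from an event's objects."""
    counts = Counter(objects)
    label = "MET" if has_met else ""
    for obj in OBJECT_ORDER:
        if counts[obj] > 0:
            label += f"{counts[obj]}{obj}"
    return label

# Example: an event with seven reconstructed muons falls into the '7mu' class.
print(event_class(["mu"] * 7, has_met=False))                 # -> 7mu
print(event_class(["mu", "mu", "gamma"] + ["j"] * 4, True))   # -> MET2mu1gamma4j
```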

From here, it’s just a matter of checking data vs. MC agreement and the pulls for each event class. The authors also apply some measures to weed out the low-statistics or otherwise sketchy regions; for instance, 1 electron + many jets is more likely to be multijet production faking a lepton and shouldn’t necessarily be considered as a good event category. Once this logic is applied, you can plot all of your SRs together grouped by category; Figure 2 shows an example for the multijet events. The paper includes 10 of these plots in total, with regions ranging in complexity from nothing but 1μ1j to more complicated final states like ETmiss2μ1γ4j (say that five times fast).

Figure 2: The number of events in data and for the different SM background predictions considered. The classes are labeled according to the multiplicity and type (e, μ, γ, j, b, ETmiss) of the reconstructed objects for this event class. The hatched bands indicate the total uncertainty of the SM prediction.

 

Once we can see data next to Standard Model prediction for all these categories, it’s necessary to have a way to measure just how unusual an excess may be. The authors of this paper implement an algorithm that searches for the region of largest deviation in the distributions of two variables that are good at discriminating background from new physics. These are the effective mass (the sum of all jet and missing momenta) and the invariant mass (computed with all visible objects and no missing energy).

For each deviation found, a simple likelihood function is built as the convolution of probability density functions (pdfs): one Poissonian pdf to describe the event yields, and Gaussian pdfs for each systematic uncertainty. The integral of this function, p0, is the probability that the Standard Model expectation fluctuated to the observed yield. This p0 value is an industry standard in particle physics: a value of p0 < 3e-7 is our threshold for discovery.
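
As a rough illustration of how such a p0 might be computed, here is a sketch that marginalizes a single Gaussian systematic on the background by sampling, then takes the Poisson tail probability. The function name and the toy numbers are mine; the real analysis convolves the full set of uncertainties.

```python
# Probability that a background expectation b (with a Gaussian systematic sigma_b)
# fluctuates up to at least the observed count n_obs. A simplified, assumed version
# of the convolution described in the text, not the ATLAS code.
import numpy as np
from scipy import stats

def p0_value(n_obs, b, sigma_b, n_samples=200_000, seed=1):
    rng = np.random.default_rng(seed)
    # Marginalize the Gaussian systematic by sampling the background mean
    b_vals = rng.normal(b, sigma_b, n_samples)
    b_vals = np.clip(b_vals, 1e-9, None)          # keep the Poisson mean positive
    # Poisson tail probability P(N >= n_obs | b), averaged over the systematic
    tails = stats.poisson.sf(n_obs - 1, b_vals)
    return tails.mean()

print(p0_value(n_obs=12, b=4.0, sigma_b=1.0))     # toy numbers, not from the paper
```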

Sadly (or reassuringly), the smallest p0 value found in this scan is 3e-04 (in the 1μ1e4b2j event class). To figure out precisely how significant this value is, the authors ran a series of pseudoexperiments for each event class and applied the same scanning algorithm to them, to determine how often such a deviation would occur in a wholly different fake dataset. In fact, a p0 of 3e-04 was expected in 70% of the pseudoexperiments.
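
The pseudoexperiment logic can be sketched in a few lines: throw toy datasets from the Standard-Model-only expectations, compute a per-class p0 in each toy, and count how often the smallest one is at least as extreme as the value seen in data. Everything below (the expectations per class, the restriction to upward fluctuations, the lack of systematics and of the full scan over distributions) is a simplification for illustration.

```python
# Sketch of the pseudoexperiment / look-elsewhere logic, with invented inputs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sm_expectations = rng.uniform(0.5, 50.0, size=639)   # one SM expectation per class (toy values)
p0_observed = 3e-4

n_toys, count_more_extreme = 2000, 0
for _ in range(n_toys):
    toy_counts = rng.poisson(sm_expectations)         # a fake dataset with no new physics
    toy_p0 = stats.poisson.sf(toy_counts - 1, sm_expectations)  # per-class p0, statistics only
    if toy_p0.min() <= p0_observed:
        count_more_extreme += 1

print(f"fraction of toys with a deviation this strong: {count_more_extreme / n_toys:.2f}")
```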

So the excesses that were observed are not (so far) significant enough to focus on. But the beauty of this analysis strategy is that this deviation can be easily followed up with the addition of a newer dataset. Think of these general searches as the sidekick of the superheroes that are our flagship SUSY, exotics, and dark matter searches. They can help us dot i’s and cross t’s, make sure nothing falls through the cracks— and eventually, just maybe, make a discovery.

Why Electroweak SUSY is the Next Big Thing

Title: “Search for new physics in events with two low momentum opposite-sign leptons and missing transverse energy at √s = 13 TeV”

Author: CMS Collaboration

Reference: CMS-PAS-SUS-16-048

 

March is an exciting month for high energy physicists. Every year at this time, scientists from all over the world gather for the annual Moriond Conference, where all of the latest results are shown and discussed. Now that this physics Christmas season is over, I, like many other physicists, am sifting through the proceedings, trying to get a hint of what is the new cool physics to be chasing after. My conclusions? The Higgsino search is high on this list.

Physicists chatting at the 2017 Moriond Conference. Image credit ATLAS-PHOTO-2017-009-1.

The search for Higgsinos falls under the broad and complex umbrella of searches for supersymmetry (SUSY). We’ve talked about SUSY on Particlebites in the past; see a recent post on the stop search for reference. Recall that the basic prediction of SUSY is that every boson in the Standard Model has a fermionic supersymmetric partner, and every fermion gets a bosonic partner.

So then what exactly is a Higgsino? The naming convention of SUSY would indicate that the –ino suffix means that a Higgsino is the supersymmetric partner of the Higgs boson. This is partly true, but the whole story is a bit more complicated, and requires some understanding of the Higgs mechanism.

To summarize, in our Standard Model, the photon carries the electromagnetic force, and the W and Z carry the weak force. But before electroweak symmetry breaking, these bosons did not have such distinct tasks. Rather, there were the massless B and W bosons, plus the Higgs, which together made up the electroweak sector. It is the supersymmetric partners of these bosons (the bino, winos, and Higgsinos) that mix to form new mass eigenstates, which we call simply charginos or neutralinos, depending on their charge. When we search for new particles, we are searching for these mass eigenstates, and then interpreting our results in the context of electroweak-inos.

SUSY searches can be broken into many different analyses, each targeting a particular particle or group of particles in this new sector. Starting with the particles that are suspected to have low mass is a good idea, since we’re more likely to observe these at the current LHC collision energy. If we begin with these light particles, and add in the popular theory of naturalness, we conclude that Higgsinos will be the easiest to find of all the new SUSY particles. More specifically, the theory predicts three Higgsinos that mix into two neutralinos and a chargino, each with a mass around 200-300 GeV, but with a very small mass splitting between the three. See Figure 1 for a sample mass spectrum of all these particles, where N and C indicate neutralino or chargino respectively (keep in mind this is just a possibility; in principle, any bino/wino/higgsino mass hierarchy is allowed).

Figure 1: Sample electroweak SUSY mass spectrum. Image credit: T. Lari, INFN Milano

This is both good news and bad news. The good part is that we have reason to think that there are three Higgsinos with masses that are well within our reach at the LHC. The bad news is that this mass spectrum is very compressed, making the Higgsinos extremely difficult to detect experimentally. This is due to the fact that when C1 or N2 decays to N1 (the lightest neutralino), there is very little mass difference leftover to supply energy to the decay products. As a result, all of the final state objects (two N1s plus a W or a Z as a byproduct, see Figure 2) will have very low momentum and thus are very difficult to detect.
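
A quick kinematic way to see just how soft things get: for the three-body decay of N2 into N1 plus a lepton pair through an off-shell Z, the dilepton invariant mass has an endpoint at the mass splitting (a standard three-body endpoint result, written here in the notation of this post):

m_{\ell\ell}^{\rm max} = m_{N_2} - m_{N_1} \equiv \Delta m

So for a splitting of order 20 GeV, the two leptons share at most about 20 GeV of invariant mass between them, and each typically carries only a few GeV of transverse momentum.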

Figure 2: Electroweakino pair production and decay (CMS-PAS-SUS-16-048).

The CMS collaboration Higgsino analysis documented here uses a clever analysis strategy for such compressed decay scenarios. Since initial state radiation (ISR) jets occur often in proton-proton collisions, you can ask for your event to have one. This jet radiating from the collision will give the system a kick in the opposite direction, providing enough energy to those soft particles for them to be detectable. At the end of the day, the analysis team looks for events with ISR, missing transverse energy (MET), and two soft opposite sign leptons from the Z decay (to distinguish from hadronic SM-like backgrounds). Figure 3 shows a basic diagram of what these signal events would look like.

Figure 3: Signal event vector diagram. Image credit C. Botta, CERN

In order to conduct this search, several new analysis techniques were employed. Reconstruction of leptons at low pT becomes extremely important in this regime, and the standard cone isolation of the lepton and impact parameter cuts are used to ensure proper lepton identification. New discriminating variables are also added, which exploit kinematic information about the lepton and the soft particles around it, in order to distinguish “prompt” (signal) leptons from those that may have come from a jet and are thus “non prompt” (background.)

In addition, the analysis team paid special attention to the triggers that could be used to select signal events from the immense number of collisions, creating a new “compressed” trigger that uses combined information from both soft muons (pT > 5 GeV) and missing energy ( > 125 GeV).
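
To give a flavor of what such a selection might look like in code, here is a highly simplified sketch. The thresholds quoted in the text (5 GeV muons, 125 GeV missing energy) are kept, but the ISR-jet requirement, the event format, and everything else are placeholder assumptions rather than the actual CMS selection.

```python
# A highly simplified, assumed version of a compressed-region selection.
def passes_compressed_selection(event):
    muons = [m for m in event["muons"] if m["pt"] > 5.0]        # soft muons, pT > 5 GeV
    if len(muons) < 2:
        return False
    if muons[0]["charge"] * muons[1]["charge"] >= 0:            # require an opposite-sign pair
        return False
    if event["met"] < 125.0:                                    # missing transverse energy
        return False
    if not any(j["pt"] > 30.0 for j in event["jets"]):          # an ISR-like jet to boost the system
        return False
    return True

toy_event = {"muons": [{"pt": 8.2, "charge": +1}, {"pt": 6.1, "charge": -1}],
             "jets": [{"pt": 110.0}], "met": 160.0}
print(passes_compressed_selection(toy_event))   # -> True
```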

With all of this effort, the group is able to probe down to a mass splitting between Higgsinos of 20 GeV, excluding N2 masses up to 230 GeV. This is an especially remarkable result because the current strongest limit on Higgsinos comes from the LEP experiment, a result that is over ten years old! Because the Higgsino searches are limited mainly by the low cross section of electroweak SUSY production, additional data will quickly sharpen these searches, and more stringent bounds will be placed (or, perhaps, a discovery is in store!)

Figure 4: The observed exclusion contours (black) with the corresponding 1 standard deviation uncertainties. The dashed red curves present the expected limits with 1 SD experimental uncertainties (CMS-PAS-SUS-16-048).

 

Further Reading: 

  1. “Natural SUSY Endures”, Michele Papucci, Joshua T. Ruderman, Andreas Weiler.  arXiv [hep-ph] 1110.6926
  2. “Cornering electroweakinos at the LHC”, Stefania Gori, Sunghoon Jung, Lian-Tao Wang. arXiv [hep-ph] 1307.5952

     

What Happens When Energy Goes Missing?

Title: “Performance of algorithms that reconstruct missing transverse momentum in √s = 8 TeV proton–proton collisions in the ATLAS detector”
Authors: ATLAS Collaboration

Reference: arXiv:1609.09324

Check out the public version of this post on the official ATLAS blog here!

 

The ATLAS experiment recently released a note detailing the nature and performance of algorithms designed to calculate what is perhaps the most difficult quantity in any LHC event: missing transverse energy. Missing energy is difficult because by its very nature, it is missing, thus making it unobservable in the detector. So where does this missing energy come from, and why do we even need it?

Figure 1

The LHC accelerates protons towards one another on the same axis, so they collide head on. Therefore, the incoming partons have net momentum along the direction of the beamline, but no net momentum in the transverse direction (see Figure 1). MET is then defined as the negative vector sum (in the transverse plane) of the momenta of all recorded particles. Any nonzero MET indicates a particle that escaped the detector. This escaping particle could be a regular Standard Model neutrino, or something much more exotic, such as the lightest supersymmetric particle or a dark matter candidate.
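
In code, that definition is just a negative two-dimensional vector sum. A bare-bones version, ignoring all calibrations, soft terms, and pileup handling, might look like this:

```python
# Bare-bones MET: the negative vector sum, in the transverse plane, of the momenta
# of all reconstructed particles. Real reconstruction adds calibrations, soft terms,
# and pileup handling.
import numpy as np

def missing_transverse_momentum(particles):
    """particles: list of (pt, phi) pairs for every reconstructed object."""
    px = sum(pt * np.cos(phi) for pt, phi in particles)
    py = sum(pt * np.sin(phi) for pt, phi in particles)
    mex, mey = -px, -py                      # MET is minus the visible sum
    met = np.hypot(mex, mey)
    met_phi = np.arctan2(mey, mex)
    return met, met_phi

# Two back-to-back jets balance perfectly, so the MET is (numerically) zero:
print(missing_transverse_momentum([(100.0, 0.0), (100.0, np.pi)]))
```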

Figure 2

Figure 2 shows an event display where the calculated MET balances the visible objects in the detector. In this case, these visible objects are jets, but they could also be muons, photons, electrons, or taus. This constitutes the “hard term” in the MET calculation. Often there are also contributions of energy in the detector that are not associated to a particular physics object, but may still be necessary to get an accurate measurement of MET. This momentum is known as the “soft term”.

In the course of looking at all the energy in the detector for a given event, inevitably some pileup will sneak in. The pileup could be contributions from additional proton-proton collisions in the same bunch crossing, or from scattering of protons upstream of the interaction point. Either way, the MET reconstruction algorithms have to take this into account. Adding up energy from pileup could lead to more MET than was actually in the collision, which could mean the difference between an observation of dark matter and just another Standard Model event.

One of the ways to suppress pile up is to use a quantity called jet vertex fraction (JVF), which uses the additional information of tracks associated to jets. If the tracks do not point back to the initial hard scatter, they can be tagged as pileup and not included in the calculation. This is the idea behind the Track Soft Term (TST) algorithm. Another way to remove pileup is to estimate the average energy density in the detector due to pileup using event-by-event measurements, then subtracting this baseline energy. This is used in the Extrapolated Jet Area with Filter, or EJAF algorithm.
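
A conceptual sketch of the track-based soft term is below: sum the transverse momenta of tracks that point back to the hard-scatter vertex but are not already matched to a reconstructed object. The matching criterion and the event format are invented for illustration; the actual TST algorithm is considerably more involved.

```python
# Conceptual sketch of a track soft term (not the actual ATLAS TST implementation).
import numpy as np

def delta_r(a, b):
    dphi = np.arctan2(np.sin(a["phi"] - b["phi"]), np.cos(a["phi"] - b["phi"]))
    return np.hypot(a["eta"] - b["eta"], dphi)

def track_soft_term(tracks, hard_objects, dr_match=0.4):
    px, py = 0.0, 0.0
    for trk in tracks:
        if not trk["from_primary_vertex"]:        # drop pileup tracks (JVF-style idea)
            continue
        if any(delta_r(trk, obj) < dr_match for obj in hard_objects):
            continue                              # already counted in the hard term
        px += trk["pt"] * np.cos(trk["phi"])
        py += trk["pt"] * np.sin(trk["phi"])
    return px, py                                 # added (with a minus sign) to the MET sum
```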

Once these algorithms are designed, they are tested in two different types of events. One of these is in W to lepton + neutrino decay signatures. These events should all have some amount of real missing energy from the neutrino, so they can easily reveal how well the reconstruction is working. The second group is Z boson to two lepton events. These events should not have any real missing energy (no neutrinos), so with these events, it is possible to see if and how the algorithm reconstructs fake missing energy. Fake MET often comes from miscalibration or mismeasurement of physics objects in the detector. Figures 3 and 4 show the calorimeter soft MET distributions in these two samples; here it is easy to see the shape difference between real and fake missing energy.

Figure 3: Distribution of the sum of missing energy in the calorimeter soft term shown in Z to μμ data and Monte Carlo events.

 

Figure 4: Distribution of the sum of missing energy in the calorimeter soft term shown in W to eν data and Monte Carlo events.

This note evaluates the performance of these algorithms in 8 TeV proton proton collision data collected in 2012. Perhaps the most important metric in MET reconstruction performance is the resolution, since this tells you how well you know your MET value. Intuitively, the resolution depends on detector resolution of the objects that went into the calculation, and because of pile up, it gets worse as the number of vertices gets larger. The resolution is technically defined as the RMS of the combined distribution of MET in the x and y directions, covering the full transverse plane of the detector. Figure 5 shows the resolution as a function of the number of vertices in Z to μμ data for several reconstruction algorithms. Here you can see that the TST algorithm has a very small dependence on the number of vertices, implying a good stability of the resolution with pileup.
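
The resolution curve itself is straightforward to build once you have the events: bin in the number of vertices and take the RMS of the combined MET_x and MET_y distributions. The toy below simply fakes a resolution that degrades with pileup in order to show the mechanics; none of the numbers come from the paper.

```python
# Toy illustration of building a resolution-vs-pileup curve (invented inputs).
import numpy as np

rng = np.random.default_rng(0)
n_vtx = rng.integers(1, 31, size=100_000)
sigma = 8.0 + 0.9 * n_vtx                          # toy: resolution grows with pileup
met_x = rng.normal(0.0, sigma)
met_y = rng.normal(0.0, sigma)

for lo in range(1, 31, 5):
    mask = (n_vtx >= lo) & (n_vtx < lo + 5)
    combined = np.concatenate([met_x[mask], met_y[mask]])
    print(f"N_vtx in [{lo},{lo+5}): resolution = {np.sqrt(np.mean(combined**2)):.1f} GeV")
```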

Figure 5: MET resolution as a function of the number of reconstructed vertices in Z to μμ data for several reconstruction algorithms.

Another important quantity to measure is the angular resolution, which is important in the reconstruction of kinematic variables such as the transverse mass of the W. It can be measured in W to μν simulation by comparing the direction of the MET, as reconstructed by the algorithm, to the direction of the true MET. The resolution is then defined as the RMS of the distribution of the phi difference between these two vectors. Figure 6 shows the angular resolution of the same five algorithms as a function of the true missing transverse energy. Note the feature between 40 and 60 GeV, where there is a transition region into events with high pT calibrated jets. Again, the TST algorithm has the best angular resolution for this topology across the entire range of true missing energy.

Figure 6: Resolution of ΔΦ(reco MET, true MET) for 0 jet W to μν Monte Carlo.

As the High Luminosity LHC looms larger and larger, the issue of MET reconstruction will become a hot topic in the ATLAS collaboration. In particular, the HL-LHC will be a very high pileup environment, and many new pileup subtraction studies are underway. Additionally, there is no lack of exciting theories predicting new particles in Run 3 that are invisible to the detector. As long as these hypothetical invisible particles are being discussed, the MET teams will be working hard to catch them.

 

The Delirium over Beryllium

Article: Particle Physics Models for the 17 MeV Anomaly in Beryllium Nuclear Decays
Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo
Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)
See also this Latin American Webinar on Physics recorded talk.
Also featuring the results from:
— Gulyás et al., “A pair spectrometer for measuring multipolarities of energetic nuclear transitions” (description of detector; 1504.00489; NIM)
— Krasznahorkay et al., “Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson” (experimental result; 1504.01527; PRL version; note PRL version differs from arXiv)
— Feng et al., “Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions” (phenomenology; 1604.07411; PRL)

Editor’s note: the author is a co-author of the paper being highlighted. 

Recently there’s been some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we’ll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.

A crash course on nuclear physics

Nuclei are bound states of protons and neutrons. They can have excited states analogous to the excited states of atoms, which are bound states of nuclei and electrons. The particular nucleus of interest is beryllium-8, which has four neutrons and four protons, and which you may know from the triple alpha process. There are three nuclear states to be aware of: the ground state, the 18.15 MeV excited state, and the 17.64 MeV excited state.

Beryllium-8 excited nuclear states. The 18.15 MeV state (red) exhibits an anomaly. Both the 18.15 MeV and 17.64 MeV states decay to the ground state through a magnetic, p-wave transition. Image adapted from Savage et al. (1987).

Most of the time the excited states fall apart into a lithium-7 nucleus and a proton. But sometimes, these excited states decay into the beryllium-8 ground state by emitting a photon (γ-ray). Even more rarely, these states can decay to the ground state by emitting an electron–positron pair from a virtual photon: this is called internal pair creation and it is these events that exhibit an anomaly.

The beryllium-8 anomaly

Physicists at the Atomki nuclear physics institute in Hungary were studying the nuclear decays of excited beryllium-8 nuclei. The team, led by Attila J. Krasznahorkay, produced beryllium excited states by bombarding a lithium-7 nucleus with protons.

Preparation of beryllium-8 excited state
Beryllium-8 excited states are prepared by bombarding lithium-7 with protons.

The proton beam is tuned to very specific energies so that one can ‘tickle’ specific beryllium excited states. When the protons have around 1.03 MeV of kinetic energy, they excite lithium into the 18.15 MeV beryllium state. This has two important features:

  1. Picking the proton energy allows one to only produce a specific excited state so one doesn’t have to worry about contamination from decays of other excited states.
  2. Because the 18.15 MeV beryllium nucleus is produced at resonance, one has a very high yield of these excited states. This is very good when looking for very rare decay processes like internal pair creation.

What one expects is that most of the electron–positron pairs have a small opening angle, with the number of pairs smoothly decreasing at larger opening angles.

Expected distribution of opening angles for ordinary internal pair creation events. Each line corresponds to a nuclear transition that is electric (E) or magnetic (M) with a given orbital quantum number, l. The beryllium transitions that we’re interested in are mostly M1. Adapted from Gulyás et al. (1504.00489).

Instead, the Atomki team found an excess of events with large electron–positron opening angle. In fact, even more intriguing: the excess occurs around a particular opening angle (140 degrees) and forms a bump.

Number of events (dN/dθ) for different electron–positron opening angles, plotted for different excitation energies (Ep). For Ep = 1.10 MeV, there is a pronounced bump at 140 degrees which does not appear to be explainable by ordinary internal pair creation. This may be suggestive of a new particle. Adapted from Krasznahorkay et al., PRL 116, 042501.

Here’s why a bump is particularly interesting:

  1. The distribution of ordinary internal pair creation events is smoothly decreasing and so this is very unlikely to produce a bump.
  2. Bumps can be signs of new particles: if there is a new, light particle that can facilitate the decay, one would expect a bump at an opening angle that depends on the new particle mass.

Schematically, the new particle interpretation looks like this:

Schematic of the Atomki experiment and new particle (X) interpretation of the anomalous events. In summary: protons of a specific energy bombard stationary lithium-7 nuclei and excite them to the 18.15 MeV beryllium-8 state. These decay into the beryllium-8 ground state. Some of these decays are mediated by the new X particle, which then decays in to electron–positron pairs of a certain opening angle that are detected in the Atomki pair spectrometer detector. Image from 1608.03591.

As an exercise for those with a background in special relativity, one can use the relation (p_{e^+} + p_{e^-})^2 = m_X^2 to prove the result:

m_{X}^2 = \left(1-\left(\frac{E_{e^+}-E_{e^-}}{E_{e^+}+E_{e^-}}\right)^2\right) (E_{e^+}+E_{e^-})^2 \sin^2 \frac{\theta}{2}+\mathcal{O}(m_e^2)

This relates the mass of the proposed new particle, X, to the opening angle θ and the energies E of the electron and positron. The opening angle bump would then be interpreted as a new particle with mass of roughly 17 MeV. To match the observed number of anomalous events, the rate at which the excited beryllium decays via the X boson must be 6×10^-6 times the rate at which it goes into a γ-ray.
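
Plugging rough numbers into this relation shows how the ~140 degree bump maps onto a ~17 MeV mass. The inputs below (a symmetric pair with energies summing to roughly the 18 MeV transition energy) are ballpark figures for illustration, not a re-analysis of the Atomki data.

```python
# Numerically evaluating the relation above for ballpark inputs.
import numpy as np

def m_X(E_plus, E_minus, theta_deg):
    y = (E_plus - E_minus) / (E_plus + E_minus)
    return np.sqrt((1 - y**2) * (E_plus + E_minus)**2
                   * np.sin(np.radians(theta_deg) / 2)**2)

print(f"{m_X(9.0, 9.0, 140.0):.1f} MeV")   # ~16.9 MeV for a symmetric pair at 140 degrees
```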

The anomaly has a significance of 6.8σ. This means that it’s highly unlikely to be a statistical fluctuation, as the 750 GeV diphoton bump appears to have been. Indeed, the conservative bet would be some not-understood systematic effect, akin to the 130 GeV Fermi γ-ray line.

The beryllium that cried wolf?

Some physicists are concerned that beryllium may be the ‘boy that cried wolf,’ and point to papers by the late Fokke de Boer as early as 1996 and all the way to 2001. de Boer made strong claims about evidence for a new 10 MeV particle in the internal pair creation decays of the 17.64 MeV beryllium-8 excited state. These claims didn’t pan out, and in fact the instrumentation paper by the Atomki experiment rules out that original anomaly.

The proposed evidence for “de Boeron” is shown below:

The de Boer claim for a 10 MeV new particle. Left: distribution of opening angles for internal pair creation events in an E1 transition of carbon-12. This transition has similar energy splitting to the beryllium-8 17.64 MeV transition and shows good agreement with the expectations; as shown by the flat “signal – background” on the bottom panel. Right: the same analysis for the M1 internal pair creation events from the 17.64 MeV beryllium-8 states. The “signal – background” now shows a broad excess across all opening angles. Adapted from de Boer et al. PLB 368, 235 (1996).

When the Atomki group studied the same 17.64 MeV transition, they found that a key background component—subdominant E1 decays from nearby excited states—had not been included in the original de Boer analysis, and that including it dramatically improved the fit. This is the last nail in the coffin for the proposed 10 MeV “de Boeron.”

However, the Atomki group also highlight how their new anomaly in the 18.15 MeV state behaves differently. Unlike the broad excess in the de Boer result, the new excess is concentrated in a bump. There is no known way in which additional internal pair creation backgrounds can add a bump to the opening angle distribution; as noted above, all of these distributions are smoothly falling.

The Atomki group goes on to suggest that the new particle appears to fit the bill for a dark photon, a reasonably well-motivated copy of the ordinary photon that differs in its overall interaction strength and in having a non-zero (~17 MeV) mass.

Theory part 1: Not a dark photon

With the Atomki result published and peer reviewed in Physical Review Letters, the game was afoot for theorists to understand how it would fit into a theoretical framework like the dark photon. A group from UC Irvine, University of Kentucky, and UC Riverside found that actually, dark photons have a hard time fitting the anomaly simultaneously with other experimental constraints. In the visual language of this recent ParticleBite, the situation was this:

It turns out that the minimal model of a dark photon cannot simultaneously explain the Atomki beryllium-8 anomaly without running afoul of other experimental constraints. Image adapted from this ParticleBite.

The main reason for this is that a dark photon with mass and interaction strength to fit the beryllium anomaly would necessarily have been seen by the NA48/2 experiment. This experiment looks for dark photons in the decay of neutral pions (π0). These pions typically decay into two photons, but if there’s a 17 MeV dark photon around, some fraction of those decays would go into dark-photon — ordinary-photon pairs. The non-observation of these unique decays rules out the dark photon interpretation.

The theorists then decided to “break” the dark photon theory in order to try to make it fit. They generalized the types of interactions that a new photon-like particle, X, could have, allowing protons, for example, to have completely different charges than electrons rather than having exactly opposite charges. Doing this does gross violence to the theoretical consistency of a theory—but the goal was just to see what a new particle interpretation would have to look like. They found that if a new photon-like particle talked to neutrons but not protons—that is, if the new force were protophobic—then a theory might hold together.

Schematic description of how model-builders “hacked” the dark photon theory to fit the beryllium anomaly while being consistent with other experiments. This hack isn’t pretty—and indeed, comes at the cost of potentially invalidating the mathematical consistency of the theory—but the exercise sets the target for how a complete theory might have to behave. Image adapted from this ParticleBite.

Theory appendix: pion-phobia is protophobia

Editor’s note: what follows is for readers with some physics background interested in a technical detail; others may skip this section.

How does a new particle that is allergic to protons avoid the neutral pion decay bounds from NA48/2? Pions decay into pairs of photons through the well-known triangle-diagrams of the axial anomaly. The decay into photon–dark-photon pairs proceed through similar diagrams. The goal is then to make sure that these diagrams cancel.

A cute way to look at this is to assume that at low energies, the relevant particles running in the loop aren’t quarks, but rather nucleons (protons  and neutrons). In fact, since only the proton can talk to the photon, one only needs to consider proton loops. Thus if the new photon-like particle, X, doesn’t talk to protons, then there’s no diagram for the pion to decay into γX. This would be great if the story weren’t completely wrong.

Avoiding NA48/2 bounds requires that the new particle, X, is pion-phobic. It turns out that this is equivalent to X being protophobic. The correct way to see this is on the left, making sure that the contribution of up-quark loops cancels the contribution from down-quark loops. A slick (but naively completely wrong) calculation is on the right, arguing that effectively only protons run in the loop.

The correct way of seeing this is to treat the pion as a quantum superposition of an up–anti-up and down–anti-down bound state, and then make sure that the X charges are such that the contributions of the two states cancel. The resulting charges turn out to be protophobic.

The fact that the “proton-in-the-loop” picture gives the correct charges, however, is no coincidence. Indeed, this was precisely how Jack Steinberger calculated the correct pion decay rate. The key here is whether one treats the quarks/nucleons linearly or non-linearly in chiral perturbation theory. The relation to the Wess-Zumino-Witten term—which is what really encodes the low-energy interaction—is carefully explained in chapter 6a.2 of Georgi’s revised Weak Interactions.

Theory part 2: Not a spin-0 particle

The above considerations focus on a new particle with the same spin and parity as a photon (spin-1, parity odd). Another result of the UCI study was a systematic exploration of other possibilities. They found that the beryllium anomaly could not be consistent with spin-0 particles. For a parity-even, spin-0 particle (a “dark Higgs”), one cannot simultaneously conserve angular momentum and parity in the decay of the excited beryllium-8 state. (Parity violating effects are negligible at these energies.)

Parity and angular momentum conservation prohibit a “dark Higgs” (parity even scalar) from mediating the anomaly.
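
A compact way to see the scalar case (my own bookkeeping of the argument above): the decaying state is J^P = 1^+, while the beryllium-8 ground state and a scalar X are both 0^+, so angular momentum conservation forces one unit of orbital angular momentum between them and the final-state parity comes out wrong:

P_{\rm final} = P(^{8}\mathrm{Be}_{\rm g.s.}) \times P(X) \times (-1)^{L} = (+1)(+1)(-1)^{1} = -1 \neq +1 = P_{\rm initial}

Flipping the parity of X (a pseudoscalar) fixes the sign, which is why that case is instead killed by the axion-like particle bounds discussed next.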

For a parity-odd pseudoscalar, the bounds on axion-like particles at 20 MeV suffocate any reasonable coupling. Measured in terms of the pseudoscalar–photon–photon coupling (which has dimensions of inverse GeV), this interaction is ruled out down to the inverse Planck scale.

Bounds on axion-like particles exclude a 20 MeV pseudoscalar with couplings to photons stronger than the inverse Planck scale. Adapted from 1205.2671 and 1512.03069.

Additional possibilities include:

  • Dark Z bosons, cousins of the dark photon with spin-1 but indeterminate parity. This is very constrained by atomic parity violation.
  • Axial vectors, spin-1 bosons with positive parity. These remain a theoretical possibility, though their unknown nuclear matrix elements make it difficult to write a predictive model. (See section II.D of 1608.03591.)

Theory part 3: Nuclear input

The plot thickens when one also includes results from nuclear theory. Recent results from Saori Pastore, Bob Wiringa, and collaborators point out a very important fact: the 18.15 MeV beryllium-8 state that exhibits the anomaly and the 17.64 MeV state which does not are actually closely related.

Recall (e.g. from the first figure at the top) that the 18.15 MeV and 17.64 MeV states are both spin-1 and parity-even. They differ in mass and in one other key aspect: the 17.64 MeV state carries isospin charge, while the 18.15 MeV state and ground state do not.

Isospin is the nuclear symmetry that relates protons to neutrons and is tied to electroweak symmetry in the full Standard Model. At nuclear energies, isospin charge is approximately conserved. This brings us to the following puzzle:

If the new particle has mass around 17 MeV, why do we see its effects in the 18.15 MeV state but not the 17.64 MeV state?

Naively, if the new particle emitted, X, carries no isospin charge, then isospin conservation prohibits the decay of the 17.64 MeV state through emission of an X boson. However, the Pastore et al. result tells us that actually, the isospin-neutral and isospin-charged states mix quantum mechanically so that the observed 18.15 and 17.64 MeV states are mixtures of iso-neutral and iso-charged states. In fact, this mixing is actually rather large, with mixing angle of around 10 degrees!

The result of this is that one cannot invoke isospin conservation to explain the non-observation of an anomaly in the 17.64 MeV state. In fact, the only way to avoid this is to assume that the mass of the X particle is on the heavier side of the experimentally allowed range. The rate for emission goes like the 3-momentum cubed (see section II.E of 1608.03591), so a small increase in the mass can suppress the rate of emission from the lighter state by a lot.
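
To get a feel for how strong this suppression is, here is a back-of-the-envelope estimate. It ignores nuclear recoil and simply compares the momentum available to the X in the two transitions, so the numbers are order-of-magnitude only.

```python
# Rough illustration of the p^3 phase-space argument (nuclear recoil ignored).
import numpy as np

def p_X(transition_energy, m_X):
    """Momentum of an X of mass m_X carrying (roughly) the full transition energy, in MeV."""
    return np.sqrt(max(transition_energy**2 - m_X**2, 0.0))

for m in (16.7, 17.0, 17.3):
    ratio = (p_X(17.64, m) / p_X(18.15, m))**3
    print(f"m_X = {m} MeV: rate(17.64 MeV state) / rate(18.15 MeV state) ~ {ratio:.2f}")
```

The heavier the X, the more the emission from the 17.64 MeV state is suppressed relative to the 18.15 MeV state, which is the sense in which a heavier mass helps.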

The UCI collaboration of theorists went further and extended the Pastore et al. analysis to include a phenomenological parameterization of explicit isospin violation. Independent of the Atomki anomaly, they found that including isospin violation improved the fit for the 18.15 MeV and 17.64 MeV electromagnetic decay widths within the Pastore et al. formalism. The results of including all of the isospin effects end up changing the particle physics story of the Atomki anomaly significantly:

The rate of X emission (colored contours) as a function of the X particle’s couplings to protons (horizontal axis) versus neutrons (vertical axis). The best fit for a 16.7 MeV new particle is the dashed line in the teal region. The vertical band is the region allowed by the NA48/2 experiment. Solid lines show the dark photon and protophobic limits. Left: the case for perfect (unrealistic) isospin. Right: the case when isospin mixing and explicit violation are included. Observe that incorporating realistic isospin happens to have only a modest effect in the protophobic region. Figure from 1608.03591.

The results of the nuclear analysis are thus that:

  1. An interpretation of the Atomki anomaly in terms of a new particle tends to push for a slightly heavier X mass than the reported best fit. (Remark: the Atomki paper does not do a combined fit for the mass and coupling nor does it report the difficult-to-quantify systematic errors  associated with the fit. This information is important for understanding the extent to which the X mass can be pushed to be heavier.)
  2. The effects of isospin mixing and violation are important to include; especially as one drifts away from the purely protophobic limit.

Theory part 4: towards a complete theory

The theoretical structure presented above gives a framework to do phenomenology: fitting the observed anomaly to a particle physics model and then comparing that model to other experiments. This, however, doesn’t guarantee that a nice—or even self-consistent—theory exists that can stretch over the scaffolding.

Indeed, a few challenges appear:

  • The isospin mixing discussed above means the X mass must be pushed to the heavier values allowed by the Atomki observation.
  • The “protophobic” limit is not obviously anomaly-free: simply asserting that known particles have arbitrary charges does not generically produce a mathematically self-consistent theory.
  • Atomic parity violation constraints require that the X couple in the same way to left-handed and right-handed matter. The left-handed coupling implies that X must also talk to neutrinos: these open up new experimental constraints.

The Irvine/Kentucky/Riverside collaboration first note the need for a careful experimental analysis of the actual mass ranges allowed by the Atomki observation, treating the new particle mass and coupling as simultaneously free parameters in the fit.

Next, they observe that protophobic couplings can be relatively natural. Indeed: the Standard Model Z boson is approximately protophobic at low energies—a fact well known to those hunting for dark matter with direct detection experiments. For exotic new physics, one can engineer protophobia through a phenomenon called kinetic mixing where two force particles mix into one another. A tuned admixture of electric charge and baryon number, (Q-B), is protophobic.

Baryon number, however, is an anomalous global symmetry—this means that one has to work hard to make a baryon-boson that mixes with the photon (see 1304.0576 and 1409.8165 for examples). Another alternative is if the photon kinetically mixes with not baryon number, but the anomaly-free combination of “baryon-minus-lepton number,” Q-(B-L). This then forces one to apply additional model-building modules to deal with the neutrino interactions that come along with this scenario.

In the language of the ‘model building blocks’ above, the result of this process looks schematically like this:

A complete theory is completely mathematically self-consistent and satisfies existing constraints. The additional bells and whistles required for consistency make additional predictions for experimental searches. Pieces of the theory can sometimes  be used to address other anomalies.

The theory collaboration presented examples of the two cases, and pointed out how the additional ‘bells and whistles’ required may tie to additional experimental handles to test these hypotheses. These are simple existence proofs for how complete models may be constructed.

What’s next?

We have delved rather deeply into the theoretical considerations of the Atomki anomaly. The analysis revealed some unexpected features with the types of new particles that could explain the anomaly (dark photon-like, but not exactly a dark photon), the role of nuclear effects (isospin mixing and breaking), and the kinds of features a complete theory needs to have to fit everything (be careful with anomalies and neutrinos). The single most important next step, however, is and has always been experimental verification of the result.

While the Atomki experiment continues to run with an upgraded detector, what’s really exciting is that a swath of experiments that are either ongoing or in construction will be able to probe the exact interactions required by the new particle interpretation of the anomaly. This means that the result can be independently verified or excluded within a few years. A selection of upcoming experiments is highlighted in section IX of 1608.03591:

Other experiments that can probe the new particle interpretation of the Atomki anomaly. The horizontal axis is the new particle mass, the vertical axis is its coupling to electrons (normalized to the electric charge). The dark blue band is the target region for the Atomki anomaly. Figure from 1608.03591; assuming 100% branching ratio to electrons.

We highlight one particularly interesting search: recently a joint team of theorists and experimentalists at MIT proposed a way for the LHCb experiment to search for dark photon-like particles with masses and interaction strengths that were previously unexplored. The proposal makes use of the LHCb’s ability to pinpoint the production position of charged particle pairs and the copious amounts of D mesons produced at Run 3 of the LHC. As seen in the figure above, the LHCb reach with this search thoroughly covers the Atomki anomaly region.

Implications

So where we stand is this:

  • There is an unexpected result in a nuclear experiment that may be interpreted as a sign for new physics.
  • The next steps in this story are independent experimental cross-checks; the threshold for a ‘discovery’ is if another experiment can verify these results.
  • Meanwhile, a theoretical framework for understanding the results in terms of a new particle has been built and is ready-and-waiting. Some of the results of this analysis are important for faithful interpretation of the experimental results.

What if it’s nothing?

This is the conservative take—and indeed, we may well find that in a few years, the possibility that Atomki was observing a new particle will be completely dead. Or perhaps a source of systematic error will be identified and the bump will go away. That’s part of doing science.

Meanwhile, there are some important takeaways in this scenario. First is the reminder that the search for light, weakly coupled particles is an important frontier in particle physics. Second, for this particular anomaly, there are some neat takeaways such as a demonstration of how effective field theory can be applied to nuclear physics (see e.g. chapter 3.1.2 of the new book by Petrov and Blechman) and how tweaking our models of new particles can avoid troublesome experimental bounds. Finally, it’s a nice example of how particle physics and nuclear physics are not-too-distant cousins and how progress can be made in particle–nuclear collaborations—one of the Irvine group authors (Susan Gardner) is a bona fide nuclear theorist who was on sabbatical from the University of Kentucky.

What if it’s real?

This is a big “what if.” On the other hand, a 6.8σ effect is not a statistical fluctuation and there is no known nuclear physics to produce a new-particle-like bump given the analysis presented by the Atomki experimentalists.

The threshold for “real” is independent verification. If other experiments can confirm the anomaly, then this could be a huge step in our quest to go beyond the Standard Model. While this type of particle is unlikely to help with the Hierarchy problem of the Higgs mass, it could be a sign for other kinds of new physics. One example is the grand unification of the electroweak and strong forces; some of the ways in which these forces unify imply the existence of an additional force particle that may be light and may even have the types of couplings suggested by the anomaly.

Could it be related to other anomalies?

The Atomki anomaly isn’t the first particle physics curiosity to show up at the MeV scale. While none of these other anomalies are necessarily related to the type of particle required for the Atomki result (they may not even be compatible!), it is helpful to remember that the MeV scale may still have surprises in store for us.

  • The KTeV anomaly: The rate at which neutral pions decay into electron–positron pairs appears to be off from the expectations based on chiral perturbation theory. In 0712.0007, a group of theorists found that this discrepancy could be fit to a new particle with axial couplings. If one fixes the mass of the proposed particle to be 20 MeV, the resulting couplings happen to be in the same ballpark as those required for the Atomki anomaly. The important caveat here is that parameters for an axial vector to fit the Atomki anomaly are unknown, and mixed vector–axial states are severely constrained by atomic parity violation.
The KTeV anomaly interpreted as a new particle, U. From 0712.0007.
  • The anomalous magnetic moment of the muon and the cosmic lithium problem: much of the progress in the field of light, weakly coupled forces comes from Maxim Pospelov. The anomalous magnetic moment of the muon, (g-2)μ, has a long-standing discrepancy from the Standard Model (see e.g. this blog post). While this may come from an error in the very, very intricate calculation and the subtle ways in which experimental data feed into it, Pospelov (and also Fayet) noted that the shift may come from a light (in the 10s of MeV range!), weakly coupled new particle like a dark photon. Similarly, Pospelov and collaborators showed that a new light particle in the 1-20 MeV range may help explain another longstanding mystery: the surprising lack of lithium in the universe (APS Physics synopsis).

Could it be related to dark matter?

A lot of recent progress in dark matter has revolved around the possibility that in addition to dark matter, there may be additional light particles that mediate interactions between dark matter and the Standard Model. If these particles are light enough, they can change the way that we expect to find dark matter in sometimes surprising ways. One interesting avenue is called self-interacting dark matter and is based on the observation that these light force carriers can deform the dark matter distribution in galaxies in ways that seem to fit astronomical observations. A 20 MeV dark photon-like particle even fits the profile of what’s required by the self-interacting dark matter paradigm, though it is very difficult to make such a particle consistent with both the Atomki anomaly and the constraints from direct detection.

Should I be excited?

Given all of the caveats listed above, some feel that it is too early to be in “drop everything, this is new physics” mode. Others may take this as a hint that’s worth exploring further—as has been done for many anomalies in the recent past. For researchers, it is prudent to be cautious, and it is paramount to be careful; but so long as one does both, then being excited about a new possibility is part of what makes our job fun.

For the general public, the tentative hopes of new physics that pop up—whether it’s the Atomki anomaly, the 750 GeV diphoton bump, a GeV bump from the galactic center, γ-ray lines at 3.5 keV and 130 GeV, or penguins at LHCb—these are the signs that we’re making use of all of the data available to search for new physics. Sometimes these hopes fizzle away, often they leave behind useful lessons about physics and directions forward. Maybe one of these days an anomaly will stick and show us the way forward.

Further Reading

Here are some of the popular-level press on the Atomki result. See the references at the top of this ParticleBite for references to the primary literature.

UC Riverside Press Release
UC Irvine Press Release
Nature News
Quanta Magazine
Quanta Magazine: Abstractions
Symmetry Magazine
Los Angeles Times

Probing the Standard Model with muons: new results from MEG

Article: Search for the lepton flavor violating decay μ+ → e+γ with the full dataset of the MEG experiment
Authors: MEG Collaboration
Reference: arXiv:1605.05081

I work on the Muon g-2 experiment, which is housed inside a brand new building at Fermilab.  Next door, another experiment hall is under construction. It will be the home of the Mu2e experiment, which is slated to use Fermilab’s muon beam as soon as Muon g-2 wraps up in a few years. Mu2e will search for evidence of an extremely rare process — namely, the conversion of a muon to an electron in the vicinity of a nucleus. You can read more about muon-to-electron conversion in a previous post by Flip.

Today, though, I bring you news of a different muon experiment, located at the Paul Scherrer Institute in Switzerland. The MEG experiment was operational from 2008-2013, and they recently released their final result.

Context of the MEG experiment

Figure 1: Almost 100% of the time, a muon will decay into an electron and two neutrinos.

MEG (short for “mu to e gamma”) and Mu2e are part of the same family of experiments. They each focus on a particular example of charged lepton flavor violation (CLFV). Normally, a muon decays into an electron and two neutrinos. The neutrinos ensure that lepton flavor is conserved; the overall amounts of “muon-ness” and “electron-ness” do not change.

Figure 2 lists some possible CLFV muon processes. In each case, the muon transforms into an electron without producing any neutrinos — so lepton flavor is not conserved! These processes are allowed by the standard model, but with such minuscule probabilities that we couldn’t possibly measure them. If that were the end of the story, no one would bother doing experiments like MEG and Mu2e — but of course that’s not the end of the story. It turns out that many new physics models predict CLFV at levels that are within range of the next generation of experiments. If an experiment finds evidence for one of these CLFV processes, it will be a clear indication of beyond-the-standard-model physics.

Figure 2: Some examples of muon processes that do not conserve lepton flavor. Also listed are the current/upcoming experiments that aim to measure the probabilities of these never-before-observed processes.

Results from MEG

The goal of the MEG experiment was to do one of two things:

  1. Measure the branching ratio of the μ+ → e+γ decay, or
  2. Establish a new upper limit

Outcome #1 is only possible if the branching ratio is high enough to produce a clear signal. Otherwise, all the experimenters can do is say “the branching ratio must be smaller than such-and-such, because otherwise we would have seen a signal” (i.e., outcome #2).

MEG saw no evidence of μ+ → e+γ decays. Instead, they determined that the branching ratio is less than 4.2 × 10^-13 (90% confidence level). Roughly speaking, that means if you had a pair of magic goggles that let you peer directly into the subatomic world, you could stand around and watch 2 × 10^12 muons decay without seeing anything unusual. Because real experiments are messier and less direct than magic goggles, the MEG result is actually based on data from 7.5 × 10^14 muons.

Before MEG, the previous experiment to search for μ+ → e+γ was the MEGA experiment at Los Alamos; they collected data from 1993-1995, and published their final result in 1999. They found an upper limit for the branching ratio of 1.2 × 10^-11. Thus, MEG achieved a factor of 30 improvement in sensitivity over the previous result.

How the experiment works

Figure 3: The MEG signal consists of a back-to-back positron and gamma, each carrying half the rest energy of the parent muon.

A continuous beam of positive muons enters a large magnet and hits a thin plastic target. By interacting with the material, about 80% of the muons lose their kinetic energy and come to rest inside the target. Because the muons decay from rest, the MEG signal is simple. Energy and momentum must be conserved, so the positron and gamma emerge from the target in opposite directions, each with an energy of 52.83 MeV (half the rest energy of the muon).1  The experiment is specifically designed to catch and measure these events. It consists of three detectors: a drift chamber to measure the positron trajectory and momentum, a timing counter to measure the positron time, and a liquid xenon detector to measure the photon time, position, and energy. Data from all three detectors must be combined to get a complete picture of each muon decay, and determine whether it fits the profile of a MEG signal event.

Figure 4: Layout of the MEG experiment. Source: arXiv:1605.05081.

In principle, it sounds pretty simple… to search for MEG events, you look at each chunk of data and go through a checklist:

  • Is there a photon with the correct energy?
  • Is there a positron at the same time?
  • Did the photon and positron emerge from the target in opposite directions?
  • Does the positron have the correct energy?

Four yeses and you might be looking at a rare CLFV muon decay! However, the key word here is might. Unfortunately, it is possible for a normal muon decay to masquerade as a CLFV decay. For MEG, one source of background is “radiative muon decay,” in which a muon decays into a positron, two neutrinos and a photon; if the neutrinos happen to have very low energy, this will look exactly like a MEG event. In order to get a meaningful result, MEG scientists first had to account for all possible sources of background and figure out the expected number of background events for their data sample. In general, experimental particle physicists spend a great deal of time reducing and understanding backgrounds!

What’s next for MEG?

The MEG collaboration is planning an upgrade to their detector which will produce an order of magnitude improvement in sensitivity. MEG-II is expected to begin three years of data-taking late in 2017. Perhaps at the new level of sensitivity, a μ+ → e+γ signal will emerge from the background!

 

1 Because photons are massless and positrons are not, their energies are not quite identical, but it turns out that they both round to 52.83 MeV. You can work it out yourself if you’re skeptical (that’s what I did).
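
For the skeptical, that arithmetic takes about three lines. The two-body formulas below are standard kinematics for a decay at rest, with the particle masses as the only inputs.

```python
# The footnote's two-body kinematics, worked out numerically for a muon at rest.
m_mu, m_e = 105.6584, 0.5110   # masses in MeV

E_positron = (m_mu**2 + m_e**2) / (2 * m_mu)
E_photon   = (m_mu**2 - m_e**2) / (2 * m_mu)

print(f"E(e+) = {E_positron:.2f} MeV, E(gamma) = {E_photon:.2f} MeV")
# -> E(e+) = 52.83 MeV, E(gamma) = 52.83 MeV
```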

Further Reading

  • Robert H. Bernstein and Peter S. Cooper, “Charged Lepton Flavor Violation: An Experimenter’s Guide.” (arXiv:1307.5787)
  • S. Mihara, J.P. Miller, P. Paradisi and G. Piredda, “Charged Lepton Flavor–Violation Experiments.” (DOI: 10.1146/annurev-nucl-102912-144530)
  • André de Gouvêa and Petr Vogel, “Lepton Flavor and Number Conservation, and Physics Beyond the Standard Model.” (arXiv:1303.4097)

Jets: From Energy Deposits to Physics Objects

Title: “Jet energy scale and resolution in the CMS experiment in pp collisions at 8 TeV”
Author: The CMS Collaboration
Reference: arXiv:1607.03663 [hep-ex]

As a collider physicist, I care a lot about jets. They are fascinating objects that cover the ATLAS and CMS detectors during LHC operation and make event displays look really cool (see Figure 1.) Unfortunately, as interesting as jets are, they’re also somewhat complicated and difficult to measure. A recent paper from the CMS Collaboration details exactly how we reconstruct, simulate, and calibrate these objects.

Figure 1: This event was collected in August 2015. The two high-pT jets have an invariant mass of 6.9 TeV and the leading and subleading jet have a pT of 1.3 and 1.2 TeV respectively. (Image credit: ATLAS public results)

For the uninitiated, a jet is the experimental signature of quarks or gluons that emerge from a high energy particle collision. Since these colored Standard Model particles cannot exist on their own due to confinement, they cluster or ‘hadronize’ as they move through a detector. The result is a spray of particles coming from the interaction point. This spray can contain mesons, charged and neutral hadrons, basically anything that is colorless as per the rules of QCD.

So what does this mess actually look like in a detector? ATLAS and CMS are designed to absorb most of a jet’s energy by the end of the calorimeters. If the jet has charged constituents, there will also be an associated signal in the tracker. It is then the job of the reconstruction algorithm to combine these various signals into a single object that makes sense. This paper discusses two different reconstructed jet types: calo jets and particle-flow (PF) jets. Calo jets are built only from energy deposits in the calorimeter; since the resolution of the calorimeter gets worse with higher energies, this method can get bad quickly. PF jets, on the other hand, are reconstructed by linking energy clusters in the calorimeters with signals in the trackers to create a complete picture of the object at the individual particle level. PF jets generally enjoy better momentum and spatial resolutions, especially at low energies (see Figure 2).

Figure 2: Jet-energy resolution for calorimeter and particle-flow jets as a function of the jet transverse momentum. The improvement in resolution, of almost a factor of two at low transverse momentum, remains sizable even for jets with very high transverse momentum. (Image credit: CMS Collaboration)

Once reconstruction is done, we have a set of objects that we can now call jets. But we don’t want to keep all of them for real physics. Any given event will have a large number of pile up jets, which come from softer collisions between other protons in a bunch (in time), or leftover calorimeter signals from the previous bunch crossing (out of time). Being able to identify and subtract pile up considerably enhances our ability to calibrate the deposits that we know came from good physics objects. In this paper CMS reports a pile up reconstruction and identification efficiency of nearly 100% for hard scattering events, and they estimate that each jet energy is enhanced by about 10 GeV due to pileup alone.

Once the pile up is corrected, the overall jet energy correction (JEC) is determined via detector response simulation. The simulation is needed to model how the initial quarks and gluons fragment, and the way in which those subsequent partons shower in the calorimeters. This correction depends on jet momentum (since the calorimeter resolution does as well) and on jet pseudorapidity (different areas of the detector are made of different materials or have different total thickness). Figure 3 shows the overall correction factors for several different jet radius R values.

Figure 3: Jet energy correction factors for a jet with pT = 30 GeV, as a function of eta (left). Note the spikes around 1.7 (TileGap3, very little absorber material) and 3 (beginning of endcaps.) Simulated jet energy response after JEC as a function of pT (right).

Finally, we turn to data as a final check on how well these calibrations went. An example of such a check is the tag and probe method with dijet events. Here, we take a good clean event with two back-to-back jets, and require one low-eta jet to serve as the ‘tag’ jet. The other ‘probe’ jet, at arbitrary eta, is then measured using the previously derived corrections. If the resulting pT is close to the pT of the tag jet, we know the calibration was solid (this also gives us info on how calibrations perform as a function of eta). A similar method known as pT balancing can be done with a single jet back to back with an easily reconstructed object, such as a Z boson or a photon.
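
A toy version of the dijet balance idea looks like the snippet below: use the well-calibrated central jet as the reference and check that the corrected probe pT comes back to it. The events and correction factors are invented; the real study bins the ratio in probe eta and pT and compares data with simulation.

```python
# Toy dijet tag-and-probe balance check with invented numbers.
def balance_ratio(tag_pt, probe_pt_raw, correction):
    """Corrected probe pT divided by tag pT; values near 1 mean the calibration closes."""
    return (probe_pt_raw * correction) / tag_pt

toy_dijets = [
    # (tag pT [GeV], raw probe pT [GeV], JEC factor for the probe's eta and pT)
    (120.0, 104.0, 1.15),
    (250.0, 238.0, 1.05),
    (400.0, 396.0, 1.01),
]

for tag, probe, jec in toy_dijets:
    print(f"tag pT = {tag:5.0f} GeV -> balance = {balance_ratio(tag, probe, jec):.3f}")
```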

This is really a bare bones outline of how jet calibration is done. In real life, there are systematic uncertainties, jet flavor dependence, correlations; the list goes on. But the entire procedure works remarkably well given the complexity of the task. Ultimately CMS reports a jet energy uncertainty of 3% for most physics analysis jets, and as low as 0.32% for some jets—a new benchmark for hadron colliders!

 

Further Reading:

  1. “Jets: The Manifestation of Quarks and Gluons.” Of Particular Significance, Matt Strassler.
  2. “Commissioning of the Particle-flow Event Reconstruction with the first LHC collisions recorded in the CMS detector.” The CMS Collaboration, CMS PAS PFT-10-001.
  3. “Determination of jet energy calibrations and transverse momentum resolution in CMS.” The CMS Collaboration, 2011 JINST 6 P11002.