This post is about the discovery of the most massive quark in the Standard Model, the Top quark. Below is a “discovery plot” from the Collider Detector at Fermilab collaboration (CDF). Here is the original paper.
This plot confirms the existence of the Top quark. Let’s understand how.
For each proton collision event that passes certain selection conditions, the horizontal axis shows the best estimate of the Top quark mass. These selection conditions encode the particle “fingerprint” of the Top quark. Out of all possible proton collision events, we only want to look at the ones that might have come from Top quark decays. This subgroup of events can inform us of a best guess at the mass of the Top quark, and that is what is plotted on the x axis.
On the vertical axis are the number of these events.
The dashed distribution is the number of these events originating from the Top quark if the Top quark exists and decays this way. This could very well not be the case.
The dotted distribution is the background for these events, events that did not come from Top quark decays.
The solid distribution is the measured data.
To claim a discovery, the background (dotted) plus the signal (dashed) should add up to the measured data (solid). We can run simulations for different Top quark masses to generate signal distributions until we find one that matches the data. The inset at the top right shows that a Top quark mass of 175 GeV best reproduces the measured data.
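To make the matching procedure concrete, here is a minimal sketch in Python, with invented histogram counts standing in for the dotted, dashed, and solid curves (the real CDF analysis used a full likelihood fit, not this simple chi-square scan):

```python
import numpy as np

# Invented events-per-bin counts, for illustration only (not CDF data).
data = np.array([3, 8, 12, 9, 4, 2])        # solid: measured events
background = np.array([2, 4, 5, 4, 2, 1])   # dotted: non-Top background

# One simulated signal template per assumed Top mass (dashed curves).
signal_templates = {
    165: np.array([2, 5, 4, 2, 1, 0]),
    175: np.array([1, 4, 7, 5, 2, 1]),
    185: np.array([0, 2, 4, 5, 3, 2]),
}

def chi2(observed, expected):
    """Chi-square agreement between histograms, with Poisson errors."""
    expected = np.clip(expected, 1e-9, None)  # guard against empty bins
    return np.sum((observed - expected) ** 2 / expected)

# The best-fit mass is the one whose background + signal tracks the data.
best_mass = min(signal_templates,
                key=lambda m: chi2(data, background + signal_templates[m]))
print(f"Best-fit Top mass: {best_mass} GeV")  # 175 GeV for these numbers
```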
Taking a step back from the technicalities, the Top quark is special because it is the heaviest of all the fundamental particles. In the Standard Model, particles acquire their mass by interacting with the Higgs. Particles with more mass interact more with the Higgs. The Top mass being so heavy is an indicator that any new physics involving the Higgs may be linked to the Top quark.
– Observation of tt̄H Production – Who is to say that the Top and the Higgs even have significant interactions at lowest order? The CMS collaboration finds evidence that they do in fact interact at “tree level.”
Title: “New physics implications of recent search for KL→π⁰νν̄ at KOTO”
Authors: Kitahara et al.
The Standard Model, though remarkably accurate in its depiction of many physical processes, is incomplete. There are a few key reasons to think this: most prominently, it fails to account for gravitation, dark matter, and dark energy. There are also a host of more nuanced issues: it is plagued by “fine tuning” problems, whereby its parameters must be tweaked in order to align with observation, and “free parameter” problems, which come about since the model requires the direct insertion of parameters such as masses and charges rather than providing explanations for their values. This strongly points to the existence of as-yet undetected particles and the inevitability of a higher-energy theory. Since gravity should be a theory living at the Planck scale, at which both quantum mechanics and general relativity become relevant, this is to be expected.
A promising strategy for probing physics beyond the Standard Model is to look at decay processes that are incredibly rare in nature, since their small theoretical uncertainties mean that only a few event detections are needed to signal new physics. A primary example of this scenario in action is the discovery of the positron via particle showers in a cloud chamber back in 1932. Since particle physics models of the time predicted zero anti-electron events during these showers, just one observation was enough to herald a new particle.
The KOTO experiment, conducted at the Japan Proton Accelerator Research Complex (J-PARC), takes advantage of this strategy. The experiment was designed specifically to investigate a promising rare decay channel: KL→π⁰νν̄, the decay of a long-lived neutral kaon into a neutral pion, a neutrino, and an antineutrino. Let’s break down this interaction and discuss its significance. The neutral kaon, a meson composed of a down quark and an anti-strange quark, comes in “long” and “short” varieties, named for their relative decay times. The Standard Model predicts a branching ratio of roughly 3 × 10⁻¹¹ for this particular decay process, meaning that out of all the neutral long kaons that decay, only this tiny fraction of them decay into the combination of a neutral pion, a neutrino, and an antineutrino, making it incredibly rare for this process to be observed in nature.
Here’s where it gets exciting. The KOTO experiment recently reported four signal events in this decay channel, where the Standard Model predicts far fewer than one event. If all four of these events are confirmed as genuine KL→π⁰νν̄ decays, new physics is required to explain the enhanced signal. There are several possibilities, recently explored in a new paper by Kitahara et al., for what this new physics might be. Before we go into too much detail, let’s consider how KOTO’s observation came to be.
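To get a feel for why four events against such a small expectation is striking, here is a back-of-the-envelope Poisson estimate in Python. The expectation used below is an assumed placeholder of well under one event, not KOTO’s official number:

```python
from scipy.stats import poisson

# Assumed placeholder expectation (signal plus background), not KOTO's
# official value; the point is only that it is far below one event.
mu = 0.1

# Probability of observing 4 or more events if the expectation is mu:
p = poisson.sf(3, mu)   # P(N >= 4) = 1 - P(N <= 3)
print(f"P(N >= 4 | mu = {mu}) = {p:.1e}")
```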
The KOTO experiment is a fixed-target experiment, in which particles are accelerated and collide with something stationary. In this case, protons at an energy of 30 GeV collided with gold, producing a beam of kaons after the other products are diverted with collimators and magnets. The observation of the desired KL→π⁰νν̄ mode is particularly difficult experimentally for several reasons. First, the initial and final decay products are electrically neutral, making them harder to detect: they do not ionize, and ionization is a primary strategy for detecting charged particles. Second, neutral pions are produced via several other kaon decay channels, so multiple strategies are needed to differentiate neutral pions produced by KL→π⁰νν̄ from those produced in decays such as KL→π⁰π⁰ and KL→π⁰π⁰π⁰, among others. As we can see in the Feynman diagram above, our desired decay mode has the advantage that the neutral pion decays to two photons, allowing KOTO to observe these photons and their transverse momentum in order to pinpoint a KL→π⁰νν̄ decay. In terms of experimental construction, KOTO included charged-particle veto detectors in order to reject events with charged particles in the final state, and a systematic study of background events was performed in order to discount hadron showers originating from neutrons in the beam line.
This setup was in service of KOTO’s goal to explore the question of CP violation with long kaon decay. CP violation refers to the violation of charge-parity symmetry, the combination of charge-conjugation symmetry (in which a theory is unchanged when we swap a particle for its antiparticle) and parity symmetry (in which a theory is invariant when left and right directions are swapped). We seek to understand why some processes seem to preserve CP symmetry when the Standard Model allows for violation, as is the case in quantum chromodynamics (QCD), and why some processes break CP symmetry, as is seen in the quark mixing matrix (CKM matrix) and the neutrino oscillation matrix. Overall, CP violation has implications for matter-antimatter asymmetry, the question of why the universe seems to be composed predominantly of matter when particle creation and decay processes produce equal amounts of both matter and antimatter. An imbalance of matter and antimatter in the universe could be created if CP violation existed under the extreme conditions of the early universe, mere seconds after the Big Bang. Explanations for matter-antimatter asymmetry that do not involve CP violation generally require the existence of primordial matter-antimatter asymmetry, effectively dodging the fundamental question. The observation of CP violation with KOTO could provide critical evidence toward an eventual answer.
The Kitahara paper provides three interpretations of KOTO’s observation that incorporate physics beyond the Standard Model: new heavy physics, new light physics, and new particle production. The first, new heavy physics, amplifies the expected Standard Model signal via the incorporation of new operators that couple to existing Standard Model particles. If this coupling is suppressed, it could adequately explain the observed boost in the branching ratio. Light new physics involves reinterpreting the neutrino-antineutrino pair as a new light particle. Factoring in experimental constraints, this new light particle should decay with a finite lifetime on the order of 0.1–0.01 nanoseconds, making it almost completely invisible to the experiment. Finally, new particles could be produced within the decay channel; these should be light and long-lived in order to allow for decays to two photons. The details of these new-particle scenarios are subject to constraints from other particle physics processes, but each serves to increase the branching ratio through direct production of more final-state particles. On the whole, this demonstrates the potential for the KL→π⁰νν̄ channel to provide a window to physics beyond the Standard Model.
Of course, this analysis presumes the accuracy of KOTO’s four signal events. Pending the confirmation of these detections, there are several exciting possibilities for physics beyond the Standard Model, so be sure to keep your eye on this experiment!
An interesting group of searches for new physics at the LHC that has been gaining more attention in recent years relies on reconstructing and identifying physics objects that are displaced from the original proton-proton collision point. Several theoretical models predict such signatures due to the decay of long-lived particles (LLPs) that are produced in these collisions. Theories with LLPs typically feature a suppression of the available phase space for the decay of these particles, or a weak coupling between them and Standard Model (SM) particles.
An appealing feature of these signatures is that backgrounds can be greatly reduced by searching for displaced objects, since most SM physics displays only prompt particles (i.e. particles produced immediately following the collision, within the primary vertex resolution). Given that the sensitivity to new physics is determined both by the presence of signal events and by the absence of background events, the sensitivity to models with LLPs is increased by the expectation of low SM backgrounds.
A recent search for new physics with LLPs performed by the CMS Collaboration uses delayed photons as its driving experimental signature. For this search, events of interest contain delayed photons and missing transverse momentum (MET). This signature is predicted at the LHC by various theories such as Gauge-Mediated Supersymmetry Breaking (GMSB), where long-lived supersymmetric particles produced in proton-proton (pp) collisions decay in a peculiar pattern, giving rise to stable particles that escape the detector (hence the MET) as well as photons that are displaced from the interaction point. The expected signature is shown in Figure 1.
The main challenge of this analysis is the physics reconstruction of delayed photons, something that the LHC experiments were not originally designed to do. Both the detector and the physics software are optimized for prompt objects originating from the pp interaction point, where the vast majority of relevant physics happens at the LHC. This difference is illustrated in Figure 2.
In order to reconstruct delayed photons, a separate reconstruction algorithm was developed that specifically looked for signs of photons out-of-sync with the pp collision. Before this development, out-of-time photons in the detector were considered something of an ‘incomplete or misleading reconstruction’ and discarded from the analysis workflow.
In order to use delayed photons in analysis, a precise understanding of CMS’s calorimeter timing capabilities is required. The collaboration measured the timing resolution of the electromagnetic calorimeter to be around 400 ps, and that sets the detector’s sensitivity to delayed photons.
Other relevant components of this analysis include a dedicated trigger (for 2017 data), developed to select events consistent with a single displaced photon. The identification of a displaced photon at the trigger level relies on the shape of the electromagnetic shower it deposits on the calorimeter: displaced photons produce a more elliptical shower, whereas prompt photons produce a more circular one. In addition, an auxiliary trigger (used for 2016 data, before the special trigger was developed) requires two photons, but no displacement.
The event selection requires one or two well-reconstructed high-momentum photons in the detector (depending on year), and at least 3 jets. The two main kinematic features of the event, the large arrival time (i.e. consistent with time of production delayed relative to the pp collision) and large MET, are used instead to extract the signal and the background yields (see below).
In general for LLP searches, it is difficult to estimate the expected background from Monte Carlo (MC) simulation alone, since a large fraction of backgrounds are due to detector inefficiencies and/or physics events that have poor MC modeling. Instead, this analysis estimates the background from the data itself, by using the so-called ‘ABCD’ method.
The ABCD method consists of placing events in data that pass the signal selection on a 2D histogram, with a suitable choice of two kinematic quantities as the X and Y variables. These two variables must be uncorrelated, a very important assumption. The histogram is then divided into 4 regions or ‘bins’, and the one with the highest fraction of expected signal events becomes the ‘signal’ bin (call it the ‘C’ bin). The other three bins should contain mostly background events, and with the assumption that the X and Y variables are uncorrelated, it is possible to predict the background in C by using the observed backgrounds in A, B, and D (taking A to be the bin diagonally opposite C):

N(C) = N(B) × N(D) / N(A)
Using this data-driven background prediction for the C bin, all that remains is to compare it with the actual observed yield in C to figure out if there is an excess, which could be attributed to the new physics under investigation.
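As a rough illustration, here is the ABCD bookkeeping in Python, with invented event counts rather than the values from the paper:

```python
def abcd_background(n_a, n_b, n_d):
    """Predicted background in the signal bin C, assuming the two
    variables are uncorrelated and A is the bin diagonally opposite C."""
    return n_b * n_d / n_a

# Invented counts for illustration only (not the values in Table 1):
n_a, n_b, n_d = 100.0, 10.0, 8.0
predicted_c = abcd_background(n_a, n_b, n_d)
observed_c = 2

print(f"Predicted background in C: {predicted_c:.2f}")
print(f"Observed yield in C:       {observed_c}")
# A significant excess of observed over predicted would hint at new physics.
```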
As an example, Table 1 shows the number of background events predicted and the number of events in data observed for the 2016 run.
Combining all the data, the CMS Collaboration did not find an excess of events over the expected background, which would have suggested evidence of new physics. Using statistical analysis, the collaboration can place upper limits on the possible mass and lifetime of new supersymmetric particles predicted by GMSB, based on the absence of excess events. The final result of this analysis is shown in Figure 3, where a mass of up to 500 GeV for the neutralino particle is excluded for a lifetime of 1 meter (here we quote lifetime in units of length by multiplying by the speed of light: cτ = 1 meter corresponds to τ ≈ 3.3 nanoseconds).
The mass coverage of this search is higher than that of the previous search done by the ATLAS Collaboration with Run 1 data (i.e. years 2010-2012 only), with a much higher sensitivity to longer lifetimes (up to a factor of 10, depending on the mass). But the ATLAS detector has a longitudinally-segmented calorimeter which allows it to precisely measure the direction of displaced photons, so when ATLAS does release its results for this search using Run 2 data (2016-2018), they should also feature quite a large gain in sensitivity, potentially overshadowing this CMS result. So stay tuned for this exciting cat-and-mouse game between LHC experiments!
Direct detection strategies for dark matter (DM) have grown significantly beyond the dominant narrative of looking for scattering of these ghostly particles off of large and heavy nuclei. Such experiments involve searches for Weakly-Interacting Massive Particles (WIMPs) in the many-GeV (gigaelectronvolt) mass range. Such candidates for DM are predicted by many beyond-Standard-Model (SM) theories, one of the most popular being a very special and unique extension called supersymmetry. In what was once dubbed the “WIMP miracle,” these types of particles were found to possess just the right properties to be suitable as dark matter. However, as these experiments become more and more sensitive, the null results put a lot of stress on the feasibility of the WIMP hypothesis.
Typical detectors, like those of LUX, XENON, PandaX and ZEPLIN, detect flashes of light (scintillation) resulting from particle collisions in noble liquids like argon or xenon. Other cryogenic-type detectors, used in experiments like CDMS, cool semiconductor arrays down to very low temperatures to search for ionization and phonon (quantized lattice vibration) production in crystals. While these approaches have already been incredibly successful at deriving direct detection limits for heavy dark matter, new ideas are emerging to look into the lighter side.
Recently, DM below the GeV range has become the new target of a huge range of detection methods, utilizing new techniques and functional materials – semiconductors, superconductors and even superfluid helium. In this regime, recoils off the much lighter electrons become a much more sensitive probe than recoils off large and heavy nuclear targets.
There are several ways that one can consider light dark matter interacting with electrons. One popular approach is to introduce a new gauge boson that has a very small ‘kinetic’ mixing with the ordinary photon of the Standard Model. If massive, these ‘dark photons’ could themselves be dark matter candidates and an interesting avenue for new physics. The specifics of their interaction with the electron are then determined by the mass of the dark photon and the strength of its mixing with the SM photon.
Typically the gap between the valence and conduction bands in semiconductors like silicon and germanium is around an electronvolt (eV). When the energy of the dark matter particle exceeds the band gap, electron excitations in the material can usually be detected through a complicated secondary cascade of electron-hole pair generation. Below the band gap, however, there is not enough energy to excite an electron to the conduction band, and so detection proceeds through low-energy multi-phonon excitations, the dominant channel being the emission of two back-to-back phonons.
In both these regimes, the absorption rate of dark matter in the material is directly related to the properties of the material, namely its optical properties. In particular, the absorption rate for ordinary SM photons is determined by the polarization tensor in the medium, and in turn by the complex conductivity, σ(ω), through what is known as the optical theorem. Ultimately this describes the response of the material to an electromagnetic field, which has been measured in several energy ranges. This ties together the astrophysical properties of how the dark matter moves through space and the fundamental description of DM-electron interactions at the particle level.
In a more technical sense, the rate of DM absorption, in events per unit time per unit target mass, is given by the following equation:

R = (1/ρ) × (ρ_DM / m_A′) × κ_eff² × Γ_abs

where:
ρ – mass density of the target material
ρ_DM – local dark matter mass density (0.3 GeV/cm³) in the galactic halo
m_A′ – mass of the dark photon particle
κ_eff – kinetic mixing parameter (in-medium)
Γ_abs – absorption rate of ordinary SM photons
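As a schematic illustration of how these quantities combine, here is a minimal sketch in Python. All inputs are placeholders in assumed mutually consistent units, not real material data:

```python
def dm_absorption_rate(rho_target, rho_dm, m_dark_photon, kappa_eff, gamma_abs):
    """R = (1/rho) * (rho_DM / m_A') * kappa_eff**2 * Gamma_abs,
    in events per unit time per unit target mass (units as consistent
    as the inputs)."""
    return (1.0 / rho_target) * (rho_dm / m_dark_photon) \
        * kappa_eff ** 2 * gamma_abs

rate = dm_absorption_rate(
    rho_target=1.0,       # target mass density (placeholder)
    rho_dm=0.3,           # local DM density, 0.3 GeV/cm^3 as quoted above
    m_dark_photon=1e-3,   # dark photon mass (placeholder)
    kappa_eff=1e-12,      # in-medium kinetic mixing (placeholder)
    gamma_abs=1.0,        # SM photon absorption rate (placeholder)
)
print(f"Relative absorption rate: {rate:.3e}")
```

The point of the sketch is the scaling: the rate grows with the local DM density and the photon absorption rate, falls with the dark photon mass, and the tiny mixing κ_eff enters squared.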
As shown in Figure 1, the projected sensitivity at 90% confidence level (C.L.) for a 1 kg-year exposure of a semiconductor target to dark photons can reach almost an order of magnitude beyond existing nuclear-recoil experiments. The dependence is shown on the kinetic mixing parameter and the mass of the dark photon. Limits are also shown for the existing semiconductor experiments DAMIC and CDMSLite, with 0.6 and 70 kg-day exposures, respectively.
Furthermore, in the millielectronvolt-to-kiloelectronvolt range, these could provide much stronger constraints than any that currently exist from astrophysical sources, even at this exposure. These materials would also provide a novel way of covering both of these detection regimes in a single experiment, so long as improvements are made in phonon detection.
These possibilities, amongst a plethora of other detection materials and strategies, can open up a significant area of parameter space for finally closing in on the identity of the ever-elusive dark matter!
This plot is the first experimental evidence for the particle that would eventually be named the tau.
On the horizontal axis is the energy of the collision. This particular experiment collided electron and positron beams. On the vertical axis is the cross section of a specific event resulting from the electron and positron beams colliding. The cross section is like a probability for a given event to occur. When two particles collide, many, many things can happen, each with their own probability. The cross section for an event encodes the probability for that particular event to occur. Events with larger probability have larger cross sections, and vice versa.
The collaboration found one type of event that could not be explained by the Standard Model at the time. The event in question looks like:

e⁺ + e⁻ → e± + μ∓ + at least two undetected particles
This event is peculiar because the final state contains both an electron and a muon with opposite charges. In 1975, when this paper was published, there was no known way to obtain this final state from any known particles or interactions.
In order to explain this anomaly, particle physicists proposed the following explanations:
Pair production of a heavy lepton. With some insight from the future, we will call this heavy lepton the “tau.”
Pair production of charged bosons. These charged bosons actually end up being the W bosons that mediate the weak nuclear force.
The production of taus and of these bosons are not equally likely, though. Depending on the initial energy of the beams, we are more likely to produce one than the other. It turns out that at the energies of this experiment (a few GeV), it is much more likely to produce taus than to produce the bosons. We would say that the taus have a larger cross section than the bosons. From the plot, we can read off that the production of taus, i.e. their cross section, is largest at around 5 GeV of energy. Finally, since these taus are the result of pair production, they are produced in pairs. The bump at 5 GeV is the energy at which it is most likely to produce a pair of taus. This plot then predicts the tau to have a mass of about 2.5 GeV.
The quark-gluon plasma, a sea of unbound quarks and gluons moving at relativistic speeds, thought to exist at extraordinarily high temperature and density, is a phase of matter critical to our understanding of the early universe and extreme stellar interiors. On the timescale of microseconds after the Big Bang, the matter in the universe is postulated to have been in a quark-gluon plasma phase, before the universe expanded, cooled, and formed the hadrons we observe today from their constituent quarks and gluons. The study of quark matter, the range of phases formed from quarks and gluons, can provide us with insight into the evanescent early universe, providing an intriguing focus for experimentation. Astrophysical objects that are composed of quarks, such as neutron stars, are also thought to house the necessary conditions for the formation of quark-gluon plasma at their cores. With the accumulation of new data from neutron star mergers, studies of quark matter are becoming increasingly productive and ripe for new discovery.
Quantum chromodynamics (QCD) is the theory of quarks and the strong interaction between them. In this theory, quarks and force-carrying gluons, the aptly-named particles that “glue” quarks together, have a “color” charge analogous to charge in quantum electrodynamics (QED). In QCD, the gluon field is often modeled as a narrow tube between two color charges with a constant strong force between them, in contrast with the inverse-square dependence on distance for fields in QED. The pair potential energy between the quarks increases linearly with separation, eventually surpassing the creation energy for a new quark-antiquark pair. Hence, the quarks cannot exist in unbound pairs at low energies, a property known as color confinement. When separation is attempted between quarks, new quarks are instead produced. In particle accelerators, physicists see “jets” of new color-neutral particles (mesons and baryons) in the process of hadronization. At high energies, the story changes and hinges on an idea known as asymptotic freedom, in which the strength of particle interactions decreases with increased energy scale in certain gauge theories such as QCD.
QCD matter is commonly probed with heavy-ion collision experiments, and quark-gluon plasma has been produced before in minute quantities at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Lab as well as at the Large Hadron Collider (LHC) at CERN. The goal of these experiments is to create conditions similar to those of the early universe or at the center of dense stars — doing so requires intense temperatures and an abundance of quarks. Heavy ions, such as gold or lead nuclei, fit this bill when smashed together at relativistic speeds. When these collisions occur, the resulting “fireball” of quarks and gluons is unstable and quickly decays into a barrage of new, stable hadrons via the hadronization process discussed above.
There are several main goals of heavy-ion collision experiments around the world, revolving around the study of the phase diagram for quark matter. The first component of this is the search for the critical point: the endpoint of the line of first-order phase transitions. The phase transition between hadronic matter, in which quarks and gluons are confined, and partonic matter, in which they are dissociated in a quark-gluon plasma, is also an active area of investigation. There is additionally an ongoing search for chiral symmetry restoration at finite temperature and finite density. A chiral symmetry occurs when the handedness of the particles remains invariant under a parity transformation, that is, when the sign of a spatial coordinate is flipped. However, in QCD, a symmetric system becomes asymmetric in a process known as spontaneous symmetry breaking. Several experiments are designed to investigate evidence of the restoration of this symmetry.
The HADES (High-Acceptance DiElectron Spectrometer) collaboration is a group attempting to address such questions. In a recent experiment, HADES focused on the creation of quark matter via collisions of a beam of Au (gold) ions with a stack of Au foils. Dileptons, lepton-antilepton pairs that emerge from the decay of virtual particles, are a key element of HADES’ findings. In quantum field theory (QFT), in which particles are modeled as excitations in an underlying field, virtual particles can be thought of as excitations in the field that are transient due to limitations set by the uncertainty principle. Virtual particles are represented by internal lines in Feynman diagrams, are used as tools in calculations, and are not isolated or measured on their own — they are only exchanged with ordinary particles. In the HADES experiment, virtual photons produce the dileptons, which immediately decouple from the strong force. Produced at all stages of the QCD interaction, they are ideal messengers of any modification of hadron properties. They are also thought to contain information about the thermal properties of the underlying medium.
To actually extract this information, the HADES detector utilizes a time-of-flight chamber and ring-imaging Cherenkov (RICH) chamber, which identifies particles using the characteristics of Cherenkov radiation: electromagnetic radiation emitted when a particle travels through a dielectric medium at a velocity greater than the phase velocity of light in that particular medium. The detector is then able to measure the invariant mass, rapidity (a commonly-used substitute measure for relativistic velocity), and transverse momentum of emitted electron-positron pairs, the dilepton of choice. In accelerator experiments, there are typically a number of selection criteria in place to ensure that the machinery is detecting the desired particles and the corresponding data is recorded. When a collision event occurs within HADES, a number of checks are in place to ensure that only electron-positron events are kept, factoring in both the number of detected events and detector inefficiency, while excess and background data is thrown out. The end point of this data collection is a calculation of the four-momenta of each lepton pair, a description of its relativistic energy and momentum components. This allows for the construction of a dilepton spectrum: the distribution of the invariant masses of detected dileptons.
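To illustrate the key reconstruction step, here is a minimal sketch of the invariant-mass calculation from two four-momenta; the electron and positron momenta below are invented for the example:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a pair from four-momenta (E, px, py, pz),
    in consistent units (e.g. GeV with c = 1)."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

# Hypothetical electron and positron four-momenta in GeV:
electron = (0.60, 0.10, 0.25, 0.52)
positron = (0.45, -0.05, 0.20, 0.40)
print(f"Dilepton invariant mass: {invariant_mass(electron, positron):.3f} GeV")
```

Filling a histogram with this quantity for every accepted electron-positron pair is exactly what builds the dilepton spectrum described above.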
The main data takeaway from this experiment was the observation of an excess of dilepton events in an exponential shape, contrasting with the expected number of dileptons from ordinary particle collisions. This suggests a shift in the properties of the underlying matter, with a reconstructed temperature above 70 MeV (note that particle physicists tend to quote temperatures in more convenient units of electron volts). The kicker comes when the group compares these results to simulated neutron star mergers, with expected core temperatures of 75 MeV. This means that the bulk matter created within HADES is similar to the highly dense matter formed in such mergers, a comparison which has become recently accessible due to multi-messenger signals incorporating both electromagnetic and gravitational wave data.
Practically, we see that HADES’ approach is quite promising for future studies of matter under extreme conditions, with the potential to reveal much about the state of the universe early on in its history as well as probe certain astrophysical objects — an exciting realization!
According to classical wave theory, two electromagnetic waves that happen to cross each other in space will not interfere. In fact, this is a crucial feature of the conventional definition of a wave, in contrast to a corpuscle or particle: when two waves meet, they briefly occupy the same space at the same time, each without “knowing” about the other’s existence. Particles, on the other hand, do interact (or scatter) when they get close to each other, and the results of this encounter can be measured.
The mathematical backbone for this idea is the so-called superposition principle. It arises from the fact that the equations describing wave propagation are all linear in the fields and sources, meaning that they don’t occur in squares or cubes of those quantities. When we take two such waves that happen to be nearby, the linearity of the equations implies that we can treat the overall wave as just a linear superposition of two separate waves, traveling in different directions. The equations do not distinguish between the two scenarios.
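Written out, the check takes one line. Here is a minimal sketch for the vacuum wave equation, with c the speed of light and E_1, E_2 any two solutions:

```latex
% If E_1 and E_2 each solve the vacuum wave equation,
%   \partial_t^2 E_i = c^2 \nabla^2 E_i \quad (i = 1, 2),
% then linearity gives the same equation for their sum:
\partial_t^2 (E_1 + E_2)
  = \partial_t^2 E_1 + \partial_t^2 E_2
  = c^2 \nabla^2 E_1 + c^2 \nabla^2 E_2
  = c^2 \nabla^2 (E_1 + E_2)
```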
This story gets more interesting after the insights of Albert Einstein, who predicted the quantization of light, and the subsequent formal development of Quantum Electrodynamics in the 1940s by Shin’ichirō Tomonaga, Julian Schwinger and Richard Feynman. In the quantum theory of light, it is no longer “just” a wave, but rather a dual entity that can be treated both as a wave and as a particle called the photon.
The new interpretation of light as a particle opens up the possibility that two electromagnetic waves may indeed interact when crossing each other, just as particles would. The mathematical equivalent statement is that the quantum theory of light yields wave equations that are not entirely linear anymore, but instead contain a small non-linear part. The non-linear contributions have to be tiny, otherwise we would already have detected the effects, but nonetheless they are predicted to occur by quantum theory. In this context, it is called light-by-light scattering.
In 2017, the ATLAS experiment at the LHC observed the first direct evidence of light-by-light scattering, using collisions of relativistic heavy ions. Since this is such a rare phenomenon, it took us a long time to become directly experimentally sensitive to it.
The experiment is based on so-called Ultra-Peripheral Collisions (UPC) of lead ions (Z=82) at a center-of-mass energy of 5.02 TeV (or about 5,000 times the mass of the proton). In UPC events, the two oncoming ions pass far enough apart that the beam particles are less likely to undergo hard scattering, and instead just “graze” each other. This means that the strong force is not usually involved in such interactions, since its range is tiny and reaches only across intra-nuclear distances. Instead, the electromagnetic interaction dominates in UPC events.
The figure below shows how UPCs proceed. The grazing of two lead ions leads to large electromagnetic fields in the space between the ions, and this interaction can be interpreted as an exchange of photons between them, since photons are the mediators of electromagnetism. Then, by also looking for two final-state photons in the ATLAS detector (the ‘X’ on the right in the figure), the light-by-light process, γγ → γγ, can be probed (γ stands for a photon).
In order to isolate this particular process from all the other physics happening at the LHC, a series of increasingly tighter selections (or cuts) is applied to the acquired data. The final cuts are optimized to obtain the maximum possible sensitivity to the light-by-light scattering process. This sensitivity depends on how likely it is to select a candidate signal event, and similarly on how unlikely it is to select a background event. The main background physics that could mimic the signature of light-by-light scattering (two photons in the ATLAS calorimeter in a UPC lead-ion data run) include γγ → e⁺e⁻, where the final-state electrons and positrons are misidentified by the detector as photons; central exclusive production (CEP) of photons in gg → γγ (where g is a gluon); and hadronic fakes from π⁰ production in low-pT dijet events, where the π⁰’s (neutral pions) decay to pairs of photons.
The various applied selections begin with a dedicated trigger which selects events with moderate activity in the calorimeter and very little activity elsewhere. This is what is expected in a lead-ion UPC collision, since the ions just escape down the LHC beam pipe and are not detected, leaving the two photons as the only visible products. Then a set of selections is applied to ensure that the recorded activity in the calorimeter is compatible with two photons. These selections rely mostly on the shape of electromagnetic showers deposited on calorimeter crystals, which varies for different types of incident particles.
Finally, a series of extra selections is applied to minimize the number of possible background events, such as vetoing any events containing charged-particle tracks in the ATLAS tracker, which effectively removes events with electrons and positrons mis-tagged as photons. Note that this also removes some of the real light-by-light signal events (about 10%), where final-state photons undergo photon conversion after interacting with tracker material, but in this case the trade-off is certainly worth it. Another such selection is the requirement that the transverse momentum of the diphoton system be less than 2 GeV. This removes contributions from other fake-photon backgrounds (such as cosmic-ray muons), because it ensures that the net transverse momentum of the system is small, and thus likely to originate in the ion-ion interaction.
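To illustrate the diphoton momentum requirement, here is a minimal sketch in Python; the photon kinematics are invented, and this is not the collaboration’s actual selection code:

```python
import math

def diphoton_pt(photon1, photon2):
    """Transverse momentum of the diphoton system, from per-photon
    (pt, phi) pairs, with pt in GeV and phi in radians."""
    px = photon1[0] * math.cos(photon1[1]) + photon2[0] * math.cos(photon2[1])
    py = photon1[0] * math.sin(photon1[1]) + photon2[0] * math.sin(photon2[1])
    return math.hypot(px, py)

# Two nearly back-to-back photons, as expected for the signal:
photon_a = (5.0, 0.10)                 # hypothetical values
photon_b = (4.8, 0.10 + math.pi)
passes_cut = diphoton_pt(photon_a, photon_b) < 2.0   # the 2 GeV requirement
print(passes_cut)  # True: the pair is balanced, consistent with signal
```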
The same exact set of selections is applied to both data and Monte Carlo (MC) simulations of the experiment. The MC simulations yield an estimate of how many background and signal events should be expected in data. The table below shows the results.
The penultimate row contains the sum of all cuts and a comparison between the total number of expected background events (2.6), light-by-light scattering events (7.3), and data (13). The sum of 2.6 + 7.3 = 9.9 events certainly seems compatible with the observed data, given the quoted uncertainties in the last row. In fact, it is possible to estimate the significance of this result by asking how likely it would be for the background-only hypothesis (that is, pretending light-by-light scattering doesn’t exist and only including the backgrounds) to yield 13 observed events. This likelihood turns out to be tiny, corresponding to a significance of 4.4 sigma!
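For intuition, here is a simplified version of that estimate in Python, treating the background as a pure Poisson count and ignoring the systematic uncertainties that enter the paper’s full likelihood treatment:

```python
from scipy.stats import norm, poisson

n_background = 2.6   # expected background events (from the table)
n_observed = 13      # observed events in data

# Probability of seeing 13 or more events from background alone:
p_value = poisson.sf(n_observed - 1, n_background)
# Convert to a one-sided Gaussian significance:
significance = norm.isf(p_value)
print(f"p = {p_value:.1e}, significance = {significance:.1f} sigma")
```

This toy calculation lands close to the paper’s 4.4 sigma; the small difference comes from the uncertainties the full analysis takes into account.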
In addition to the number of events, in the figure below the paper also plots the diphoton invariant mass distribution for the 13 observed data events (black points), and for the MC simulations (signal in red and backgrounds in blue and gray). This comparison provides further evidence that we do indeed see light-by-light scattering in the ATLAS data.
Finally, given the observed number of events in data and the expected number of MC background events, it is possible to measure the cross-section of light-by-light scattering (as a reminder, the cross-section of a process measures how likely it is to occur in collisions). The ATLAS collaboration calculates the cross-section of light-by-light scattering with the formula:

σ = (N_data − N_bkg) / (C × L_int)

where N_data is the number of observed events in data, N_bkg is the number of background events in MC, L_int is the total amount of data collected (the integrated luminosity), and C is a correction factor which translates all of the detector inefficiencies into a single number. You can think of the entire denominator as the “effective” amount of data that was analyzed, and the numerator as the “effective” number of signal events that was seen. The ratio of the two quantities yields the probability of seeing light-by-light scattering in a single collision. The ATLAS collaboration found a measured value (in nb) that agrees with the theorized cross-sections within uncertainties.
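Here is a minimal numerical sketch of this formula. Apart from the observed and background counts quoted above, the inputs are placeholders, not the paper’s values:

```python
def cross_section_nb(n_observed, n_background, correction, lumi_nb_inv):
    """sigma = (N_data - N_bkg) / (C * L_int), with L_int in inverse nb
    so that the result comes out in nb."""
    return (n_observed - n_background) / (correction * lumi_nb_inv)

sigma = cross_section_nb(
    n_observed=13,     # events in data (from the table)
    n_background=2.6,  # expected background (from the table)
    correction=0.5,    # placeholder efficiency correction factor C
    lumi_nb_inv=0.4,   # placeholder integrated luminosity
)
print(f"Measured cross-section: {sigma:.0f} nb")
```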
To conclude, the measurement of light-by-light scattering by ATLAS is an exciting result which offers us a direct glimpse into the stark differences between classical and quantum physics in an accessible (and dare I say) amusing way!
Graduate students at U.S. and Canadian institutions in all fields of science, technology, engineering, health, mathematics, and related fields are encouraged to apply. The application deadline is March 1st.
As with past ComSciCon national workshops, acceptance to the workshop is competitive; attendance is free, and travel support and lodging will be provided to accepted applicants.
Attendees will be selected on the basis of their achievement in and capacity for leadership in science communication. Graduate students who have engaged in entrepreneurship and created opportunities for other students to practice science communication are especially encouraged to apply.
Participants will network with other leaders in science communication and build the communication skills that scientists and other technical professionals need to express complex ideas to the general public, experts in other fields, and their peers. In addition to panel discussions on topics like Science Journalism, Creative & Digital Storytelling, and Diversity and Inclusion in Science, ample time is allotted for networking with science communication experts and developing science outreach collaborations with fellow graduate students.
You can follow this link to submit an application or learn more about the workshop programs and participants. You can also follow ComSciCon on Twitter (@comscicon) and use #comscicon18!
Longtime Astrobites readers may remember that ComSciCon was founded in 2012 by Astrobites and Chembites authors. Since then, we’ve led five national workshops on leadership in science communication and established a franchising model through which graduate student alums of our program have led sixteen regional and specialized workshops on training in science communication.
We are so excited about the impact ComSciCon has had on the science communication community. You can read more about our programs and their documented outcomes, find vignettes about our amazing graduate student attendees, and more in ComSciCon’s 2017 annual report.
None of ComSciCon’s impacts would be possible without the generous support of our sponsors. These contributions make it possible for us to offer this programming free of charge to graduate students, and to support their travel to our events. In particular, we wish to thank the American Astronomical Society for being a supporter of both Particlebites and ComSciCon.
If you believe in the mission of ComSciCon, you can support our program too! Visit our webpage to learn more and donate today!
Article: Latest ATLAS results from Run 2
Authors: Claudia Gemme on behalf of the ATLAS Collaboration
Reference: arXiv:1612.01987 [hep-ex]
2016 certainly has been an…interesting year. There’s been a lot of division, so as the year winds down, this seems like a perfect time to reflect on something we can all get excited about: particle physics! This has been an incredible year for the ATLAS experiment and the LHC in general. Run 2 began in 2015, but really hit its stride this year. The LHC reached a peak luminosity of 1.37 × 10³⁴ cm⁻²s⁻¹, which exceeds the design luminosity by almost 40%! Additionally, the pile-up (number of interactions per bunch crossing) nearly doubled with respect to 2015. From these delivered events, ATLAS was able to record 30 fb⁻¹ of data at 13 TeV center-of-mass energy, with a DAQ (data acquisition) efficiency above 90%.
Before discussing the many analyses being done with this amazing new data set, I’d like to highlight some of the upgrades performed during the long shutdown (2013-2014) that allowed ATLAS to perform so well during Run 2. One of the most important improvements was the addition of the Insertable B-Layer (IBL). The IBL is now the innermost part of the inner detector, located only 3.3 cm from the interaction point. The IBL was designed to combat radiation damage to the inner detector; due to its insertable nature, it can eventually be replaced, and it shields the other 3 detector layers from the bulk of the radiation damage. Of course, it also provides more tracking points, which further improves track reconstruction. There were also extensive improvements to the magnetic and cryogenic systems to provide more powerful cooling and repair damage from Run 1. To deal with the increased data load, the ATLAS DAQ and trigger systems were also upgraded (reminder: triggers are data collection elements that decide which events get stored for analysis and which get thrown out). The Level-1 trigger, ATLAS’s only hardware-level trigger, was upgraded to output events at 100 kHz (previously 75 kHz), and the two software-level triggers were merged into a single High Level Trigger. This allowed the ATLAS DAQ to sustain an average physics data output of 1 kHz, allowing more of the Run 2 events to be processed and stored.
Now, let’s talk about ATLAS’s physics program in 2016. A precise understanding of the Standard Model is incredibly important for all particle physics programs. It allows physicists to test theoretical mass and coupling predictions, and to make accurate predictions for background processes in BSM searches. ATLAS improved the precision of many SM measurements in 2016, and a summary of some important cross section measurements is shown in Figure 1. In particular, the measurements of the Z+jets and diboson cross sections were compatible with new next-to-next-to-leading-order calculations from theorists, confirming theoretical predictions.
Another major goal of the Run 2 ATLAS physics program was to confirm the discovery of the Higgs boson and study it further. During Run 1, both ATLAS and CMS found evidence of a Higgs boson with a combined measured mass of 125.09 GeV using the H→γγ and H→ZZ*→4l channels. Furthermore, nearly all measured couplings were consistent with SM predictions within 2σ. In Run 2, ATLAS ‘rediscovered’ the Higgs at a compatible mass with a local significance of 10σ (compared to 8.6σ expected) using the same channels. The Run 2 cross section is slightly higher than the SM prediction, but still compatible within uncertainty. A summary of the cross-section measurements at various energies can be seen in Figure 2. Additionally, ATLAS physicists sought to show conclusive evidence of the H→bb decay channel in Run 2. This decay mode of the Higgs has the largest predicted branching fraction (58%) according to the SM, but has been difficult to measure due to large multi-jet backgrounds. The increased luminosity in Run 2 made studying the similar X+H→bb process (where X is a W or Z boson) a promising alternative, despite its cross-section being much lower. This process is easier to isolate, as leptonic decays of the W or Z create a clean signature. However, the measured significance of this decay was only 0.42σ, compared to a SM expectation of 1.94σ. The analysis procedure was well validated by measuring the SM (W/Z)Z yield, so this large discrepancy is certainly an area for more study in 2017.
The final component of the ATLAS physics program is, of course, the many searches for BSM physics. The discovery potential for many of these processes is increased in Run 2 due to the enhanced cross-sections at the larger energy. There are 3 important channels for general BSM searches: diboson, diphoton, and dilepton. Many extensions of the SM predict new heavy particles, including heavy neutral Higgs bosons, Heavy Vector Triplet W′ bosons, and some gravitons, that could decay into vector-boson pairs. General searches in this channel are done by looking for hadronic decays of the boosted W and Z bosons within a single large-radius jet, using jet substructure properties. Unfortunately, no significant excess is observed. The diphoton channel became the hot topic of 2016, due to a deviation of at least 3σ from the SM-only hypothesis seen by both ATLAS and CMS early this year using the 2015 data set. However, this excess was not confirmed with the full 2016 data set, which has 4x more statistics. The largest excess seen by ATLAS in the diphoton channel is now 2.3σ, for a mass near 710 GeV. The dilepton final states have excellent sensitivity to a variety of new phenomena and have high signal selection efficiencies. Again, however, the 2016 measurements are consistent with the SM prediction. Lower limits on resonance masses have been set, extending the exclusions up to 1 TeV beyond those from Run 1.
ATLAS physicists also study the infamous SUSY model by looking for signatures of gluinos and squarks (the superpartners of gluons and quarks, which would be produced copiously via the strong interaction). These particles could be observed in events with high jet multiplicity and lots of missing energy. Once again, no significant excess was found. These results were interpreted within 2 SUSY models, and gluinos with masses up to 1600 GeV were excluded.
So, to conclude, no evidence of BSM physics has been found at ATLAS in Run 2, but the limits on some SUSY models and various other BSM phenomena have been improved, as seen in Figures 3 and 4. Precision measurements of the SM were improved, the detector equipment was updated, and the Higgs was rediscovered, but ATLAS physicists and particle enthusiasts are still hoping for something more exciting. Lest we leave 2016 on a low note, it’s important to remember that only 50% of ATLAS searches have been updated to the Run 2 energy, so there are still many more channels to be explored. Plus, there are already some modest excesses, as well as the discrepancy in the H→bb measurement. There is certainly more to be discovered as Run 2 continues, so here’s to 2017!
Article: Limits on Active to Sterile Neutrino Oscillations from Disappearance Searches in the MINOS, Daya Bay, and Bugey-3 Experiments
Authors: Daya Bay and MINOS collaborations
Reference: arXiv:1607.01177v4
So far, the hunt for sterile neutrinos has come up empty. Could a joint analysis between MINOS, Daya Bay and Bugey-3 data hint at their existence?
Neutrinos, like the beloved Whos in Dr. Seuss’ “Horton Hears a Who!,” are light and elusive, yet have a large impact on the universe we live in. While neutrinos only interact with matter through the weak nuclear force and gravity, they played a critical role in the formation of the early universe. Neutrino physics is now an exciting line of research pursued by the Hortons of particle physics, cosmology, and astrophysics alike. While most of what we currently know about neutrinos is well described by a three-flavor neutrino model, a few inconsistent experimental results such as those from the Liquid Scintillator Neutrino Detector (LSND) and the Mini Booster Neutrino Experiment (MiniBooNE) hint at the presence of a new kind of neutrino that only interacts with matter through gravity. If this “sterile” kind of neutrino does in fact exist, it might also have played an important role in the evolution of our universe.
The three known neutrinos come in three flavors: electron, muon, or tau. The discovery of neutrino oscillation by the Sudbury Neutrino Observatory and the Super-Kamiokande Observatory, which won the 2015 Nobel Prize, proved that one flavor of neutrino can transform into another. This led to the realization that each neutrino mass state is a superposition of the three different neutrino flavor states. From neutrino oscillation measurements, most of the parameters that define the mixing between neutrino states are well known for the three standard neutrinos.
The relationship between the three known neutrino flavor states and mass states is usually expressed as a 3×3 matrix known as the PMNS matrix, named for Bruno Pontecorvo, Ziro Maki, Masami Nakagawa and Shoichi Sakata. The PMNS matrix includes three mixing angles, the values of which determine “how much” of each neutrino flavor state is in each mass state. The distance required for one neutrino flavor to become another, the neutrino oscillation wavelength, is determined by the difference between the squared masses of the two mass states. The values of the mass splittings Δm²₂₁ and Δm²₃₂ are known to good precision.
A fourth flavor? Adding a sterile neutrino to the mix
A “sterile” neutrino is referred to as such because it would not interact weakly: it would only interact through the gravitational force. Neutrino oscillations involving the hypothetical sterile neutrino can be understood using a “four-flavor model,” which introduces a fourth neutrino mass state, ν₄, heavier than the three known “active” mass states. This fourth neutrino state would be mostly sterile, with only a small contribution from a mixture of the three known neutrino flavors. If the sterile neutrino exists, it should be possible to experimentally observe neutrino oscillations with a wavelength set by the difference between m₄² and the square of the mass of another known neutrino mass state. Current observations suggest a squared mass difference in the range of 0.1–10 eV².
Oscillations between active and sterile states would result in the disappearance of muon (anti)neutrinos and electron (anti)neutrinos. In a disappearance experiment, you know how many neutrinos of a specific type you produce, count the number of that type of neutrino a distance away, and find that some of the neutrinos have “disappeared,” or in other words, oscillated into a different type of neutrino that you are not detecting.
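For a feel of how a disappearance measurement depends on the oscillation parameters, here is a minimal two-flavor sketch in Python, using the standard approximate survival-probability formula with hypothetical sterile-neutrino parameters:

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, length_km, energy_gev):
    """Two-flavor survival probability:
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Hypothetical parameters, for illustration only:
p = survival_probability(sin2_2theta=0.1, dm2_ev2=1.0,
                         length_km=1.0, energy_gev=1.0)
print(f"Fraction surviving: {p:.3f}")  # the deficit is the 'disappearance'
```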
A joint analysis by the MINOS and Daya Bay collaborations
The MINOS and Daya Bay collaborations have conducted a joint analysis to combine independent measurements of muon (anti)neutrino disappearance by MINOS and electron antineutrino disappearance by Daya Bay and Bugey-3. Here’s a breakdown of the involved experiments:
MINOS, the Main Injector Neutrino Oscillation Search: A long-baseline neutrino experiment with detectors at Fermilab and northern Minnesota that use an accelerator at Fermilab as the neutrino source
The Daya Bay Reactor Neutrino Experiment: Uses antineutrinos produced by the reactors of China’s Daya Bay Nuclear Power Plant and the Ling Ao Nuclear Power Plant
The Bugey-3 experiment: Performed in the early 1990s, used antineutrinos from the Bugey Nuclear Power Plant in France for its neutrino oscillation observations
Assuming a four-flavor model, the MINOS and Daya Bay collaborations put new constraints on the value of the mixing angle θμe, the parameter controlling electron (anti)neutrino appearance in experiments with short neutrino travel distances. As for the hypothetical sterile neutrino? The analysis excluded the parameter space allowed by the LSND and MiniBooNE appearance-based indications for the existence of light sterile neutrinos for Δm²₄₁ < 0.8 eV² at a 95% confidence level. In other words, the MINOS and Daya Bay analysis essentially rules out the LSND and MiniBooNE inconsistencies that allowed for the presence of a sterile neutrino in the first place. These results illustrate just how at odds disappearance searches and appearance searches are when it comes to providing insight into the existence of light sterile neutrinos. If the Whos exist, they will need to be a little louder in order for the world to hear them.