As we explained previously, IceCube is a gigantic ultra-high energy cosmic neutrino detector in Antarctica. These neutrinos have energies 10 to 100 times higher than the protons colliding at the Large Hadron Collider, and their origin and nature are largely a mystery. One thing that IceCube can tell us about these neutrinos is their flavor composition; see e.g. this post for a crash course in neutrino flavor.
When neutrinos interact with ambient nuclei through a W boson (charged current interactions), the following types of events might be seen:
I refer you to this series of posts for a gentle introduction to the Feynman diagrams above. The key is that the high energy neutrino interacts with a nucleus, breaking it apart (the remnants are called X above) and ejecting a high energy charged lepton which can be used to identify the flavor of the neutrino.
Muons travel a long distance and leave behind a trail of Cerenkov radiation called a track.
Electrons don’t travel as far and deposit all of their energy into a shower. These are also sometimes called cascades because of the chain of particles produced in the ‘bang’.
Taus typically leave a more dramatic signal, a double bang, when the tau is formed and then subsequently decays into more hadrons (X’ above).
In fact, the tau events can be further classified depending on how this ‘double bang’ is resolved—and it seems like someone was playing a popular candy-themed mobile game when naming these:
Lollipop: The tau is produced outside the detector so that the first ‘bang’ isn’t seen. Instead, there’s a visible track that leads to the second (observable) bang. The track is the stick and the bang is the lollipop head.
Inverted lollipop: Similar to the lollipop, except now the first ‘bang’ is seen in the detector but the second ‘bang’ occurs outside the detector and is not observed.
Sugardaddy: The tau is produced outside the detector but decays into a muon inside the detector. This looks almost like a muon track except that the tau produces less Cerenkov light so that one can identify the point where the tau decays into a muon.
Double pulse: While this isn’t candy-themed, it’s still very interesting. This is a double bang where the two bangs can’t be distinguished spatially. However, since one bang occurs slightly after the other, one can distinguish them in the time: it’s a “double bang” in time rather than space.
Tautsie pop: This is a low energy version of the sugardaddy where the shower-to-track energy ratio is used to discriminate against background.
While the names may be silly, counting these types of events in IceCube is one of the exciting frontiers of flavor physics. And while we might be forgiven for thinking that neutrino physics is all about measuring very `small’ things—let me share the following graphic from Francis Halzen’s recent talk at the AMS Days workshop at CERN, overlaying one of the shower events over Madison, Wisconsin to give a sense of scale:
The IceCube Neutrino Observatory is a gigantic neutrino detector located in the Antarctic. Like an iceberg, only a small fraction of the lab is above ground: 86 strings extend to a depth of 2.5 kilometers into the ice, with each string instrumented with 60 detectors.
These detectors search for ultra high energy neutrinos by looking for the Cerenkov radiation emitted by the charged particles they produce as they pass through the ice. This is really the optical version of a sonic boom. An example event is shown above, where the color and size of the spheres indicate the strength of the Cerenkov signal in each detector.
IceCube has released data for its first three years of running (1405.5303) and has found three events with very large energies: 1-2 peta-electron-volts: that’s ten thousand times the mass of the Higgs boson. In addition, there’s a spectrum of neutrinos in the 10-1000 TeV range.
These ultra high energy neutrinos are believed to originate from outside our galaxy through processes involving particle acceleration by black holes. One expects the flux of such neutrinos to go as a power law of the energy, Φ ∝ E^(−γ), where γ ≈ 2 is an estimate from certain acceleration models. The existence of the three super high energy events at the PeV scale has led some people to think about a known deviation from the power law spectrum: the Glashow resonance. This is the sharp increase in the rate of neutrino interactions with matter coming from the resonant production of W bosons, as shown in the Feynman diagram to the left.
The Glashow resonance sticks out like a sore thumb in the spectrum. The position of the resonance is set by the energy required for an electron anti-neutrino to hit an electron at rest such that the center of mass energy is the W boson mass.
If you work through the math on the back of an envelope, you’ll find that the resonance occurs for incident electron anti-neutrinos with an energy of 6.3 PeV; see figure to the left. This is “right around the corner” from the 1-2 PeV events already seen, and one might wonder whether it’s significant that we haven’t seen anything.
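That back-of-envelope number is easy to check: the resonance condition is that the center-of-mass energy of an electron anti-neutrino hitting an electron at rest equals the W mass, i.e. s = 2 m_e E_ν = m_W², so E_ν = m_W²/(2 m_e). A quick sketch using PDG mass values:

```python
# Glashow resonance condition: s = 2 * m_e * E_nu = m_W**2
m_W = 80.4        # W boson mass in GeV
m_e = 0.511e-3    # electron mass in GeV

E_nu = m_W**2 / (2 * m_e)   # resonant neutrino energy in GeV
print(f"E_nu = {E_nu / 1e6:.1f} PeV")  # -> E_nu = 6.3 PeV
```

The huge lever arm comes from the tiny electron mass in the denominator: hitting an 80 GeV resonance off a half-MeV target requires a PeV-scale beam.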
The authors of [1407.3255] have found that the absence of Glashow resonant neutrino events in IceCube is not yet a bona-fide “anomaly.” In fact, they point out that the future observation or non-observation of such neutrinos can give us valuable hints about the hard-to-study origin of these ultra high energy neutrinos. They present six simple particle physics scenarios for how high energy neutrinos can be formed from cosmic rays that were accelerated by astrophysical accelerators like black holes. Each of these processes predicts a ratio of neutrino and anti-neutrino flavors at Earth (this includes neutrino oscillation effects over long distances). Since the Glashow resonance only occurs for electron anti-neutrinos, the authors point out that the appearance or non-appearance of the Glashow resonance in future data can constrain what types of processes may have produced these high energy neutrinos.
In more speculative work, the authors of [1404.0622] suggest that the absence of Glashow resonance events may even point to some kind of new physics that imposes a “speed limit” on neutrinos propagating through space, preventing them from ever reaching 6.3 PeV (see top figure).
1007.1247, Halzen and Klein, “IceCube: An Instrument for Neutrino Astronomy.” A review of the IceCube experiment.
hep-ph/9410384, Gaisser, Halzen, and Stanev, “Particle Physics with High Energy Neutrinos.” An older review of ultra high energy neutrinos.
This is my first Particlebites entry (and first ever attempt at a blog!) so you will have to bear with me =).
As I am sure you know by now, the Higgs boson has been discovered at the Large Hadron Collider (LHC). As you also may know, `discovering’ a Higgs boson is not so easy since a Higgs doesn’t just `sit there’ in a detector. Once it is produced at the LHC it very quickly decays (in about 10⁻²² seconds) into other particles of the Standard Model. For us to `see’ it we must detect these particles into which it decays. The decay I want to focus on here is the Higgs boson decay to a pair of photons, which are the spin-1 particles that make up light and mediate the electromagnetic force. By studying its decays to photons we are literally shining light on the Higgs boson (see Figure 1)!
The decay to photons is one of the Higgs’ most precisely measured decay channels. Thus, even though the Higgs only decays to photons about 0.2% of the time, this was nevertheless one of the first channels the Higgs was discovered in at the LHC. Of course other processes (which we call backgrounds) in the Standard Model can mimic the decays of a Higgs boson, so to see the Higgs we have to look for `bumps’ over these backgrounds (see Figure 2). By carefully reconstructing this `bump’, the Higgs decays to photons also allows us to reconstruct the Higgs mass (about 125 GeV in particle physics units or about 2.2 × 10⁻²⁵ kg in `real world’ units).
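The unit conversion in that parenthetical is a one-liner, using the standard conversion factor 1 GeV/c² ≈ 1.783 × 10⁻²⁷ kg:

```python
KG_PER_GEV = 1.78266e-27   # 1 GeV/c^2 expressed in kilograms
m_higgs_gev = 125.0        # Higgs mass in particle physics units

m_higgs_kg = m_higgs_gev * KG_PER_GEV
print(f"{m_higgs_kg:.2e} kg")  # -> 2.23e-25 kg
```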
Furthermore, using arguments based on angular momentum, the Higgs decay to photons also allows us to determine that the Higgs boson must be a spin-0 particle, which we call a scalar. So we see that just in this one decay channel a great deal of information about the Higgs boson can be inferred.
Now I know what you’re thinking…But photons only `talk to’ particles which carry electric charge and the Higgs is electrically neutral!! And even crazier, the Higgs only `talks to’ particles with mass and photons are massless!!! This is blasphemy!!! What sort of voodoo magic is occurring here which allows the Higgs boson to decay to photons?
The resolution of this puzzle lies in the subtle properties of quantum field theory. More specifically the Higgs can decay to photons via electrically charged `virtual particles‘. For present purposes it’s enough to say (with a little hand-waving) that since the Higgs can talk to the massive electrically charged particles in the Standard Model, like the W boson or top quark, which in turn can `talk to’ photons, this allows the Higgs to indirectly interact with photons despite the fact that they are massless and the Higgs is neutral. In fact any charged and massive particles which exist will in principle contribute to the indirect interaction between the Higgs boson and photons. Crucially this includes even charged particles which may exist beyond the Standard Model and which have yet to be discovered due to their large mass. The sum total of these contributions from all possible charged and massive particles which contribute to the Higgs decay to photons is represented by the `blob’ in Figure 3.
It is exciting and interesting to think that new exotic charged particles could be hiding in the `blob’ which creates this interaction between the Higgs boson and photons. These particles might be associated with supersymmetry, extra dimensions, or a host of other exciting possibilities. So while it remains to be seen which, if any, of the beyond the Standard Model possibilities (please let there be something!) the LHC will uncover, it is fascinating to think about what can be learned by shining a little light on the Higgs boson!
1. There is a possible exception to this if the Higgs is a spin-2 particle, but various theoretical arguments as well as other Higgs data make this scenario highly unlikely.
2. Note, virtual particle is unfortunately a misleading term since these are not really particles at all (really they are disturbances in their associated quantum fields), but I will avoid going down this rabbit hole for the time being and save it for a later post. See the previous `virtual particles’ link for a great and more in-depth discussion which takes you a little deeper into the rabbit hole.
Not all searches for new physics involve colliding protons at the the highest human-made energies. An alternate approach is to look for deviations in ultra-rare events at low energies. These deviations may be the quantum footprints of new, much heavier particles. In this bite, we’ll focus on the decay of a muon to an electron in the presence of a heavy atom.
The muon is a heavy version of the electron. There are a few properties that make muons nice systems for precision measurements:
They’re easy to produce. When you smash protons into a dense target, like tungsten, you get lots of light hadrons—among them, the charged pions. These charged pions decay into muons, which one can then collect by bending their trajectories with magnetic fields. (Puzzle: why don’t pions decay into electrons? Answer below.)
They can replace electrons in atoms. If you point this beam of muons into a target, then some of the muons will replace electrons in the target’s atoms. This is very nice because these “muonic atoms” are described by non-relativistic quantum mechanics with the electron mass replaced with ~100 MeV. (Muonic hydrogen was previously mentioned in this bite on the proton radius problem.)
They decay, and the decay products always include an electron that can be detected. In vacuum it will decay into an electron and two neutrinos through the weak force, analogous to beta decay.
These decays are sensitive to virtual effects. You don’t need to directly create a new particle in order to see its effects. Potential new particles are constrained to be very heavy to explain their non-observation at the LHC. However, even these heavy particles can leave an imprint on muon decay through ‘virtual effects’ according (roughly) to the Heisenberg uncertainty principle: you can quantum mechanically violate energy conservation, but only for very short times.
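The “muonic atom” scaling in the list above can be made concrete: in the Bohr model the orbital radius scales like 1/mass and the binding energy like mass, so swapping the electron for a muon shrinks the atom by a factor of ~200 and deepens the 1s binding by the same factor. A rough sketch for muonic hydrogen (ignoring reduced-mass and finite-nucleus corrections, which matter for a precise treatment):

```python
m_e  = 0.511     # electron mass, MeV
m_mu = 105.66    # muon mass, MeV

a0_hydrogen  = 5.29e-11   # ordinary hydrogen Bohr radius, meters
E1s_hydrogen = 13.6       # ordinary hydrogen 1s binding energy, eV

ratio = m_mu / m_e                    # ~207
a0_muonic  = a0_hydrogen / ratio      # Bohr radius scales as 1/mass
E1s_muonic = E1s_hydrogen * ratio     # binding energy scales as mass

print(f"mass ratio   : {ratio:.0f}")
print(f"muonic a0    : {a0_muonic:.1e} m")
print(f"muonic E(1s) : {E1s_muonic / 1000:.1f} keV")
```

The keV-scale binding energy (versus eV for ordinary atoms) is what makes muonic atoms sensitive probes of the nucleus: the muon orbit actually overlaps with nuclear length scales.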
One should be surprised that muon conversion is even possible. The process cannot occur in vacuum because it cannot simultaneously conserve energy and momentum. (Puzzle: why is this true? Answer below.) However, this process is allowed in the presence of a heavy nucleus that can absorb the additional momentum, as shown in the comic at the top of this post.
Muon conversion experiments exploit this by forming muonic atoms in the 1s state and waiting for the muon to convert into an electron which can then be detected. The upside is that all electrons from conversion have a fixed energy because they all come from the same initial state: 1s muonic aluminum at rest in the lab frame. This is in contrast with more common muon decay modes which involve two neutrinos and an electron; because this is a multibody final state, there is a smooth distribution of electron energies. This feature allows physicists to distinguish between the conversion versus the more frequent muon decay in orbit or muon capture by the nucleus (similar to electron capture).
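The fixed conversion-electron energy quoted by these experiments is approximately the muon mass minus the 1s binding energy and the nuclear recoil, E ≈ m_μ − E_bind − E_recoil. A back-of-envelope estimate for aluminum (Z = 13), using the nonrelativistic point-nucleus binding formula as an approximation:

```python
m_mu  = 105.658          # muon mass, MeV
alpha = 1 / 137.036      # fine structure constant
Z     = 13               # aluminum
M_Al  = 27 * 931.494     # aluminum nuclear mass, MeV (A times one amu)

E_bind   = 0.5 * (Z * alpha)**2 * m_mu   # 1s binding energy (nonrelativistic estimate)
E_recoil = m_mu**2 / (2 * M_Al)          # recoil of the nucleus

E_e = m_mu - E_bind - E_recoil
print(f"conversion electron energy ~ {E_e:.2f} MeV")
```

This lands very close to the ~105 MeV signal energy Mu2e and COMET will look for, well above the endpoint of the ordinary decay-in-orbit background.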
The Standard Model prediction for this rate is minuscule—it’s weighted by powers of the neutrino to the W boson mass ratio (Puzzle: how does one see this? Answer below.). In fact, the current experimental bound on muon conversion comes from the Sindrum II experiment looking at muonic gold, which constrains the relative rate of muon conversion to muon capture by the gold nucleus to be less than 7 × 10⁻¹³. This, in turn, constrains models of new physics that predict some level of charged lepton flavor violation—that is, processes that change the flavor of a charged lepton, say going from muons to electrons.
The plot on the right shows the energy scales that are indirectly probed by upcoming muonic aluminum experiments: the Mu2e experiment at Fermilab and the COMET experiment at J-PARC. The blue lines show bounds from another rare muon decay: muons decaying into an electron and photon. The black solid lines show the reach for muon conversion in muonic aluminum. The dashed lines correspond to different experimental sensitivities (capture rates for conversion, branching ratios for decay with a photon). Note that the energy scales probed can reach 1-10 PeV—that’s 1000-10,000 TeV—much higher than the energy scales directly probed by the LHC! In this way, flavor experiments and high energy experiments are complementary searches for new physics.
These “next generation” muon conversion experiments are currently under construction and promise to push the intensity frontier in conjunction with the LHC’s energy frontier.
Solutions to exercises:
Why do pions decay into muons and not electrons? [Note: this requires some background in undergraduate-level particle physics.] One might expect that if a charged pion can decay into a muon and a neutrino, then it should also go into an electron and a neutrino. In fact, the latter should dominate since there’s much more phase space. However, the matrix element requires a virtual W boson exchange and thus depends on an [axial] vector current. The only vector available from the pion system is its 4-momentum. By momentum conservation this is $p_\pi = p_\mu + p_\nu$. The lepton momenta then contract with Dirac matrices on the leptonic current to give a dominant piece proportional to the lepton mass. Thus the amplitude for charged pion decay into a muon is much larger than the amplitude for decay into an electron.
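The helicity suppression described above can be checked numerically. The standard tree-level result is Γ(π→eν)/Γ(π→μν) = (m_e/m_μ)² × [(m_π² − m_e²)/(m_π² − m_μ²)]²; plugging in the masses:

```python
m_e, m_mu, m_pi = 0.511, 105.66, 139.57   # masses in MeV

helicity    = (m_e / m_mu)**2                                   # chirality suppression
phase_space = ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2))**2     # phase space enhancement
ratio = helicity * phase_space

print(f"Gamma(pi->e nu) / Gamma(pi->mu nu) ~ {ratio:.2e}")  # ~1.28e-4
```

Even though the electron channel has far more phase space (a factor of ~5 here), the (m_e/m_μ)² suppression wins by orders of magnitude, so pions overwhelmingly decay to muons.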
Why can’t a muon decay into an electron in vacuum? The process cannot simultaneously conserve energy and momentum. This is simplest to see in the reference frame where the muon is at rest. Momentum conservation requires the electron to also be at rest. However, a particle at rest has energy equal to its mass, and since the muon is much heavier than the electron, there is no way a muon at rest can pass all of its energy to an electron at rest.
Why is muon conversion in the Standard Model suppressed by the ratio of the neutrino to W masses? This can be seen by drawing the Feynman diagram (fig below from 1401.6077). Flavor violation in the Standard Model requires a W boson. Because the W is much heavier than the muon, this must be virtual and appear only as an internal leg. Further, W‘s couple charged leptons to neutrinos, so there must also be a virtual neutrino. The evaluation of this diagram into an amplitude gives factors of the neutrino mass in the numerator (required for the fermion chirality flip) and the W mass in the denominator. For some details, see this post.
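To get a feel for how brutal this suppression is, note that the rate picks up (roughly, up to mixing angles and order-one factors) a factor like (Δm_ν²/m_W²)², where Δm_ν² is a neutrino mass-squared splitting. An order-of-magnitude sketch, using the atmospheric splitting as a representative number:

```python
dm2_nu = 2.5e-3     # atmospheric neutrino mass-squared splitting, eV^2
m_W    = 80.4e9     # W boson mass, eV

# rough GIM-style suppression factor of the SM conversion rate
suppression = (dm2_nu / m_W**2)**2
print(f"suppression ~ {suppression:.0e}")
```

A rate suppressed by ~10⁻⁴⁹ relative to ordinary weak processes is unobservably small, which is exactly why any detected muon conversion signal would be unambiguous evidence for physics beyond the Standard Model.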
1205.2671: Fundamental Physics at the Intensity Frontier (section 3.2.2)
The Large Hadron Collider is the world’s largest proton collider, and in a mere five years of active data acquisition, it has already achieved fame for the discovery of the elusive Higgs Boson in 2012. Though the LHC is currently off to allow for a series of repairs and upgrades, it is scheduled to begin running again within the month, this time with a proton collision energy of 13 TeV. This is nearly double the previous run energy of 8 TeV, opening the door to a host of new particle productions and processes. Many physicists are keeping their fingers crossed that another big discovery is right around the corner. Here are a few specific things that will be important in Run II.
1. Luminosity scaling
Though this is a very general category, it is a huge component of the Run II excitement. This is simply due to the scaling of parton luminosity with collision energy, which gives a remarkable increase in discovery potential for the energy increase.
If you’re not familiar, luminosity is the number of events per unit time and cross sectional area. Integrated luminosity sums this instantaneous value over time, giving a metric in the units of 1/area.
In the particle physics world, luminosities are measured in inverse femtobarns, where 1 fb⁻¹ = 1/(10⁻⁴³ m²). Each of the two main detectors at CERN, ATLAS and CMS, collected about 30 fb⁻¹ by the end of 2012. The main point is that more luminosity means more events in which to search for new physics.
Figure 1 shows the ratios of LHC parton luminosities for 7 vs. 8 TeV, and again for 13 vs. 8 TeV. Since the plot is in log scale on the y axis, it’s easy to tell that 13 to 8 TeV is a very large ratio. In fact, for the production of heavy particles, 100 fb⁻¹ at 8 TeV is the equivalent of 1 fb⁻¹ at 13 TeV. So increasing the energy by a factor of less than 2 is effectively equivalent to increasing the integrated luminosity by a factor of 100! This means that even in the first few months of running at 13 TeV, there will be a huge amount of data available for analysis, leading to the likely release of many analyses shortly after the beginning of data acquisition.
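The practical payoff of integrated luminosity is simple event counting: the expected number of events for a given process is N = σ × L. As an illustration, take a roughly 50 pb gluon-fusion Higgs production cross section at 13 TeV (an approximate, illustrative number):

```python
sigma_pb = 50.0    # approximate Higgs production cross section at 13 TeV, picobarns
lumi_fb  = 1.0     # integrated luminosity, inverse femtobarns

# 1 fb^-1 = 1000 pb^-1, so N = sigma [pb] * L [pb^-1]
n_events = sigma_pb * lumi_fb * 1000
print(f"~{n_events:.0f} Higgs bosons produced")  # ~50000
```

Of course only a small fraction of those end up in clean channels like the diphoton decay, but the scaling makes clear why every extra inverse femtobarn matters.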
2. Supersymmetry
Supersymmetry theory proposes the existence of a superpartner for every particle in the Standard Model, effectively doubling the number of fundamental particles in the universe. This helps to answer many questions in particle physics, notably why the Higgs mass is so much smaller than the Planck scale, known as the ‘hierarchy’ problem (see the further reading list for some good explanations.)
Current mass limits on many supersymmetric particles are getting pretty high, concerning some physicists about the feasibility of finding evidence for SUSY. Many of these particles have already been excluded for masses below the order of a TeV, making it very difficult to create them with the LHC as is. While there is talk of another LHC upgrade to achieve energies even higher than 14 TeV, for now the SUSY searches will have to make use of the energy that is available.
Figure 2 shows the cross sections for various supersymmetric particle pair productions, including the stop (the supersymmetric partner of the top quark) and the gluino (the supersymmetric partner of the gluon). Given the luminosity scaling described previously, these cross sections tell us that with only 1 fb⁻¹, physicists will be able to surpass the existing sensitivity for these supersymmetric processes. As a result, there will be a rush of searches being performed in a very short time after the run begins.
3. Dark Matter
Dark matter is one of the greatest mysteries in particle physics to date (see past particlebites posts for more information). It is also one of the most difficult mysteries to solve, since dark matter candidate particles are by definition very weakly interacting. In the LHC, potential dark matter creation is detected as missing transverse energy (MET) in the detector, since the particles do not leave tracks or deposit energy.
One of the best ways to ‘see’ dark matter at the LHC is in events with a mono-jet or mono-photon signature; these are jets or photons that do not occur in pairs, but rather occur singly as a result of radiation. Typically these signatures have very high transverse momentum (pT) jets, giving a good primary vertex, and large amounts of MET, making them easier to observe. Figure 3 shows a Feynman diagram of such a process, with the MET recoiling off a jet or a photon.
Though the topics in this post will certainly be popular in the next few years at the LHC, they do not even begin to span the huge volume of physics analyses that we can expect to see emerging from Run II data. The next year alone has the potential to be a groundbreaking one, so stay tuned!
Last Thursday, Nobel Laureate Sam Ting presented the latest results (CERN press release) from the Alpha Magnetic Spectrometer (AMS-02) experiment, a particle detector attached to the International Space Station—think “ATLAS/CMS in space.” Instead of beams of protons, the AMS detector examines cosmic rays in search of signatures of new physics such as the products of dark matter annihilation in our galaxy.
In fact, this is just the latest chapter in an ongoing mystery involving the energy spectrum of cosmic positrons. Recall that positrons are the antimatter versions of electrons with identical properties except having opposite charge. They’re produced from known astrophysical processes when high-energy cosmic rays (mostly protons) crash into interstellar gas—in this case they’re known as `secondaries’ because they’re a product of the `primary’ cosmic rays.
The dynamics of charged particles in the galaxy are difficult to simulate due to the presence of intense and complicated magnetic fields. However, the diffusion models generically predict that the positron fraction—the number of positrons divided by the total number of positrons and electrons—decreases with energy. (This ratio of fluxes is a nice quantity because some astrophysical uncertainties cancel.)
This prediction, however, is in stark contrast with the observed positron fraction from recent satellite experiments:
The rising fraction had been hinted in balloon-based experiments for several decades, but the satellite experiments have been able to demonstrate this behavior conclusively because they can access higher energies. In their first set of results last year (shown above), AMS gave the most precise measurements of the positron fraction as far as 350 GeV. Yesterday’s announcement extended these results to 500 GeV and added the following observations:
First they claim that they have measured the positron fraction to reach its maximum at 275 GeV. This is close to the edge of the data they’re releasing, but the plot of the positron fraction slope is slightly more convincing:
The observation of a maximum in what was otherwise a fairly featureless rising curve is key for interpretations of the excess, as we discuss below. A second observation is a bit more curious: while neither the electron nor the positron spectrum follows a simple power law, Φ ∝ E^(−γ), the combined electron plus positron flux does follow such a power law over a range of energies.
This is a little harder to interpret since the flux from electrons also, in principle, includes different sources of background. Note that this plot reaches higher energies than the positron fraction—part of the reason for this is that it is more difficult to distinguish between electrons and positrons at high energies. This is because the identification depends on how the particle bends in the AMS magnetic field and higher energy particles bend less. This, incidentally, is also why the FERMI data has much larger error bars in the first plot above—FERMI doesn’t have its own magnetic field and must rely on that of the Earth for charge discrimination.
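A power law Φ(E) ∝ E^(−γ) is easy to recognize in data: it is a straight line in log-log space with slope −γ. A minimal sketch with made-up numbers (the spectral index here is illustrative; AMS quotes a value around 3.17 for the combined flux):

```python
import math

gamma, C = 3.17, 1.0                    # illustrative spectral index and normalization
energies = [10.0, 30.0, 100.0, 300.0]   # GeV, a made-up energy grid
flux = [C * E**(-gamma) for E in energies]

# slope of log(flux) vs log(E) between successive points
slopes = [
    (math.log(flux[i + 1]) - math.log(flux[i]))
    / (math.log(energies[i + 1]) - math.log(energies[i]))
    for i in range(len(flux) - 1)
]
print(slopes)  # each slope recovers -gamma
```

In real analyses the interesting physics is in the deviations: a spectrum that is a power law except for a bump or a break is a hint of a distinct source contributing on top of the diffuse background.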
So what should one make of the latest results?
The most optimistic hope is that this is a signal of dark matter, and at this point this is more of a ‘wish’ than a deduction. Independently of AMS, we know that dark matter exists in a halo that surrounds our galaxy. The simplest dark matter models also assume that when two dark matter particles find each other in this halo, they can annihilate into Standard Model particle–anti-particle pairs, such as electrons and positrons—the latter potentially yielding the rising positron fraction signal seen by AMS.
From a particle physics perspective, this would be the most exciting possibility. The ‘smoking gun’ signature of such a scenario would be a steep drop in the positron fraction at the mass of the dark matter particle. This is because the annihilation occurs at low velocities so that the energy of the annihilation products is set by the dark matter mass. This is why the observation of a maximum in the positron fraction is interesting: the dark matter interpretation of this excess hinges on how steeply the fraction drops off.
There are, however, reasons to be skeptical.
One attractive feature of dark matter annihilations is thermal freeze out: the observation that the annihilation rate determines how much dark matter exists today after being in thermal equilibrium in the early universe. The AMS excess is suggestive of heavy (~TeV scale) dark matter with an annihilation rate three orders of magnitude larger than the rate required for thermal freeze out.
A study of the types of spectra one expects from dark matter annihilation shows fits that are somewhat in conflict with the combined observations of the positron fraction, total electron/positron flux, and the anti-proton flux (see 0809.2409). The anti-proton flux, in particular, does not have any known excess that would otherwise be predicted by dark matter annihilation into quarks.
There are ways around these issues, such as invoking mechanisms to enhance the present day annihilation rate, perhaps with the annihilation only creating leptons and not quarks. However, these are additional bells and whistles that model-builders must impose on the dark matter sector. It is also important to consider alternate explanations of the Pamela/FERMI/AMS positron fraction excess due to astrophysical phenomena. There are at least two very plausible candidates:
Pulsars are neutron stars that are known to emit “primary” electron/positron pairs. A nearby pulsar may be responsible for the observed rising positron fraction. See 1304.1791 for a recent discussion.
Alternately, supernova remnants may also generate a “secondary” spectrum of positrons from acceleration along shock waves (0909.4060, 0903.2794, 1402.0855).
Both of these scenarios are plausible and should temper the optimism that the rising positron fraction represents a measurement of dark matter. One useful handle to disfavor the astrophysical interpretations is to note that they would be anisotropic (not constant over all directions) whereas the dark matter signal would be isotropic. See 1405.4884 for a recent discussion. At the moment, the AMS measurements do not measure any anisotropy but are not yet sensitive enough to rule out astrophysical interpretations.
Finally, let us also point out an alternate approach to understanding the positron fraction. The reason why it’s so difficult to study cosmic rays is that the complex magnetic fields in the galaxy are intractable to measure and, hence, make the trajectory of charged particles hopeless to trace backwards to their sources. Instead, the authors of 0907.1686 and 1305.1324 take an alternate approach: while we can’t determine the cosmic ray origins, we can look at the behavior of heavier cosmic ray particles and compare them to the positrons. This is because, as mentioned above, the bending of a charged particle in a magnetic field is determined by its mass and charge—quantities that are known for the various cosmic ray particles. Based on this, the authors are able to predict an upper bound for the positron fraction when one assumes that the positrons are secondaries (e.g. in the case of supernova remnant acceleration):
We see that the AMS-02 spectrum is just under the authors’ upper bound, and that the reported downturn is consistent with (even predicted from) the upper bound. The authors’ analysis then suggests a non-dark matter explanation for the positron excess. See this post from Resonaances for a discussion of this point and an updated version of the above plot from the authors.
With that in mind, there are at least three things to look forward to in the future from AMS:
A corresponding upturn in the anti-proton flux is predicted in many types of dark matter annihilation models for the rising positron fraction. Thus far AMS-02 has not released anti-proton data due to the lower numbers of anti-protons.
Further sensitivity to the (an)isotropy of the excess is a critical test of the dark matter interpretation.
The shape of the drop-off with energy is also critical: a gradual drop-off is unlikely to come from dark matter whereas a steep drop off is considered to be a smoking gun for dark matter.
Only time will tell; though Ting suggested that new results would be presented at the upcoming AMS meeting at CERN in 2 months.
The recent Sackler Symposium on the Nature of Dark matter included three talks on various aspects of the Pamela/FERMI/AMS-02 rising positron fraction. You can view the videos here: Linden (pulsars), Galli (dark matter), Blum (upper bound on secondaries).
Neutrinoless double beta decay is a theorized process that, if observed, would provide evidence that the neutrino is its own antiparticle. The relatively recent discovery of neutrino mass from oscillation experiments makes this search particularly relevant, since the Majorana mechanism, which requires the neutrino to be its own antiparticle, can also provide this mass. A variety of experiments based on different techniques hope to observe this process. Before providing an experimental overview, we first discuss the theory itself.
Beta decay occurs when a nucleus emits an electron or positron along with a corresponding antineutrino or neutrino. Double beta decay is simply the simultaneous beta decay of two neutrons in a nucleus. “Neutrinoless,” of course, means that this decay occurs without the accompanying neutrinos; in this case, the neutrino emitted by one decaying neutron is absorbed by the other, which is only possible if the neutrino is self-conjugate. Figures 1 and 2 demonstrate the process by formula and image, respectively.
The lack of accompanying neutrinos in such a decay violates lepton number, meaning this process is forbidden unless neutrinos are Majorana fermions. Without delving into a full explanation, this simply means that a particle is its own antiparticle (though more information is given in the references.) To see why this matters, picture the decay as two neutrons becoming two protons and two electrons (to conserve charge), with the antineutrino emitted at one vertex absorbed as a neutrino at the other. This internal exchange is only possible if the neutrino and the antineutrino are the same particle.
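The lepton-number bookkeeping is easy to make explicit. Assigning L = +1 to the electron and neutrino, L = −1 to their antiparticles, and L = 0 to nucleons, a toy tally shows why the neutrinoless mode violates lepton number by two units while ordinary double beta decay does not:

```python
# lepton number assignments
L = {"n": 0, "p": 0, "e-": +1, "nubar": -1}

initial = 2 * L["n"]  # two neutrons in the nucleus

# ordinary double beta decay: 2n -> 2p + 2e- + 2 anti-neutrinos
final_2nbb = 2 * L["p"] + 2 * L["e-"] + 2 * L["nubar"]
print(final_2nbb - initial)  # 0: lepton number conserved

# neutrinoless double beta decay: 2n -> 2p + 2e-
final_0nbb = 2 * L["p"] + 2 * L["e-"]
print(final_0nbb - initial)  # 2: lepton number violated by two units
```

This ΔL = 2 signature is exactly what a Majorana mass term allows and a purely Dirac neutrino forbids.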
The experiments currently searching for neutrinoless double beta decay can be classified according to the material used for detection. A partial list of active and future experiments is provided below.
1. EXO (Enriched Xenon Observatory): New Mexico, USA. The detector is filled with liquid ¹³⁶Xe, which provides worse energy resolution than gaseous xenon, a drawback compensated for by reading out both scintillation and ionization signals. The collaboration finds no statistically significant evidence for 0νββ decay, and places a lower limit on the half life of 1.1 × 10²⁵ years at 90% confidence.
2. KamLAND-Zen: Kamioka underground neutrino observatory near Toyama, Japan. Like EXO, the experiment uses ¹³⁶Xe, here dissolved in liquid scintillator, but in the past has required purification due to radioactive contamination in the detector. They report a 90% CL lower limit on the 0νββ half life of 2.6 × 10²⁵ years. Figure 3 shows the energy spectra of candidate events with the best-fit background.
3. GERDA (Germanium Detector Array): Laboratori Nazionali del Gran Sasso, Italy. GERDA utilizes high-purity ⁷⁶Ge diodes, which provide excellent energy resolution but typically suffer from very large backgrounds. To prevent signal contamination, GERDA has ultra-pure shielding that protects measurements from environmental radiation backgrounds. The half life is bounded below at 90% confidence by 2.1 × 10²⁵ years.
4. MAJORANA: South Dakota, USA. This experiment is under construction, but a prototype is expected to begin running in 2014. If results from GERDA and MAJORANA look good, there is talk of building a next-generation germanium experiment that combines diodes from each detector.
5. CUORE: Laboratori Nazionali del Gran Sasso, Italy. CUORE is a ¹³⁰Te bolometric direct-detection experiment, meaning the detector has two layers: a crystal absorber that releases heat when struck, and a sensor that registers the induced temperature change. The experiment is currently under construction, so there are no results yet, but it expects to begin taking data in 2015.
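To get a feel for what a half-life limit of order 10²⁵ years means in practice, one can convert it into an expected number of decays for a given exposure. The sketch below uses hypothetical numbers (100 kg of isotope for one year), chosen only to show the scale:

```python
import math

def expected_decays(halflife_yr, mass_kg, atomic_mass, exposure_yr):
    """Expected number of decays if the half life equals halflife_yr.
    Valid for exposures much shorter than the half life, where the
    decay rate is approximately ln(2) * N_atoms / T_half."""
    N_A = 6.022e23  # Avogadro's number [atoms/mol]
    n_atoms = mass_kg * 1e3 / atomic_mass * N_A
    return math.log(2) * n_atoms * exposure_yr / halflife_yr

# Hypothetical: 100 kg of 136Xe observed for one year, with the half life
# sitting exactly at EXO's 90% CL limit of 1.1e25 years.
print(round(expected_decays(1.1e25, 100, 136, 1.0), 1))  # ~28 decays
```

Even at these enormous half lives, a ~100 kg detector would see tens of decays per year if the process occurs at the current limit, which is why background suppression, rather than raw rate, is the central experimental challenge.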
While these results show no evidence for 0νββ decay so far, such an observation would demonstrate the existence of Majorana fermions and give an estimate of the absolute neutrino mass scale. A persistent non-observation would be significant in its own right, since it would increasingly disfavor the hypothesis that the neutrino is its own antiparticle. To push the half-life limits further, more advanced detector technologies are necessary; it will be interesting to see whether MAJORANA and CUORE achieve better sensitivity to this process.
The hydrogen atom is one of the primary examples studied in a typical introductory quantum mechanics course. Recent measurements indicate that this simple system may still have surprises for us. Could this be a hint of new physics? This post is based on the following papers:
“Muonic hydrogen and MeV forces” by D. Tucker-Smith and I. Yavin [1011.4922], Phys. Rev. D83 (2011) 101702
“Proton size anomaly” by V. Barger, C. Chiang, W. Keung, D. Marfatia [1011.3519], Phys. Rev. Lett. 106 (2011) 153001
“The Size of the Proton” by Pohl et al. in Nature 466 (2010) 213
Quantum mechanically, the proton is an object whose electric charge is smeared out over a small region. Experiments that scatter electrons off protons can probe this spatial extent and recent measurements indicate an effective proton charge radius of 0.877(7) femtometers.
Muons are heavy copies of electrons and can similarly form muonic hydrogen: an atom made of a proton and a muon. Because muons are heavier, they orbit closer to the nucleus and are more sensitive to the spatial extent of the proton charge: the effective Coulomb force is reduced as one dips into the charge distribution, in the same way that the gravitational force decreases as one digs toward the center of the Earth.
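The "closer orbit" claim follows from the Bohr model: the Bohr radius scales inversely with the reduced mass of the orbiting particle. A quick estimate using standard particle masses (no QED corrections):

```python
# Bohr-model estimate of how much closer the muon orbits than the electron.
# The Bohr radius scales as 1/(reduced mass), so muonic hydrogen is far
# more sensitive to the proton's finite size. Masses in MeV/c^2.
m_e, m_mu, m_p = 0.511, 105.66, 938.27

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

ratio = reduced_mass(m_mu, m_p) / reduced_mass(m_e, m_p)
print(round(ratio))  # ~186: the muon sits ~186 times closer to the proton
```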
By ‘tickling’ the muon into a higher energy level with a laser and then measuring the resulting X-ray emission, one can deduce the proton radius. Since lasers can be tuned to very precise frequencies, one can make a very precise measurement of the Lamb shift in the muonic hydrogen energy levels. This, in turn, can be converted into a measurement of the proton radius because the energy levels are sensitive to the overlap of the muon and proton probability distributions. Intuitively, when the muon is inside the proton charge radius, it experiences a weaker Coulomb potential due to screening.
The big surprise is that the muonic hydrogen measurement gives a radius of 0.84184(67) femtometers; this is more than five standard deviations smaller than the expected result based on regular hydrogen!
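The size of the tension can be checked directly from the published values: the electronic determination 0.8768(69) fm and the Pohl et al. muonic result 0.84184(67) fm, with uncertainties combined in quadrature:

```python
import math

# Tension between the two proton radius determinations, in standard
# deviations. Central values and uncertainties are the published ones
# (electronic: 0.8768(69) fm; muonic: 0.84184(67) fm).
r_e, dr_e = 0.8768, 0.0069
r_mu, dr_mu = 0.84184, 0.00067

tension = (r_e - r_mu) / math.hypot(dr_e, dr_mu)
print(round(tension, 1))  # ~5.0 standard deviations
```

Note that the muonic uncertainty is about ten times smaller than the electronic one, which is why such a precise spectroscopic measurement turns a 4% shift in the radius into a five-sigma discrepancy.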
This discrepancy remains an open question despite several proposed solutions based on more precise theoretical calculations to relate the Lamb shift to the proton radius. One optimistic approach is to entertain the possibility that this is an indicator of new fundamental physics, such as a heretofore undiscovered force that tugs on the muon and electron differently. It turns out that these types of models are difficult to construct. One of the main constraints is actually nearly 40 years old and comes from the effect of such a new force on neutron–lead scattering.
Meanwhile, a new set of experiments to probe the proton radius anomaly is already underway. One of these is the Muon-Proton Scattering Experiment (MUSE), which would directly test whether the discrepancy originates in the two different proton radius measurement techniques described above: scattering for electrons versus spectroscopy for muons.
1301.0905: a recent review covering theoretical and experimental aspects of the proton radius problem
Title: “Search for physics beyond the standard model in events with two leptons, jets, and missing transverse energy in pp collisions at sqrt(s) = 8 TeV”
Author: CMS Collaboration
Published: CMS Public Physics Results, SUS-12-019
The CMS Collaboration, one of the two main groups working on multipurpose experiments at the Large Hadron Collider, has recently reported an excess of events with an estimated significance of 2.6σ. As a reminder, discoveries in particle physics are typically declared at 5σ. While this excess is small enough that it may not be related to new physics at all, it is also large enough to generate some discussion.
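For context, these significance conventions translate into tail probabilities via a standard one-sided Gaussian conversion (not specific to this analysis):

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a significance of `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# The 2.6 sigma excess versus the 5 sigma discovery convention:
print(f"{one_sided_p(2.6):.2e}")  # ~4.7e-03: roughly 1 in 200 by chance
print(f"{one_sided_p(5.0):.2e}")  # ~2.9e-07: roughly 1 in 3.5 million
```

A fluctuation at the 1-in-200 level is common when many analyses and mass windows are examined, which is why the community waits for 5σ before declaring a discovery.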
The excess occurs at a dilepton invariant mass of 20–70 GeV in events with two leptons plus missing transverse energy (MET). Some theorists claim that this may be a signature of supersymmetry. The analysis searches for kinematic ‘edges’, an example of which can be seen in Figure 1. These shapes are typical of the decays of new particles predicted by supersymmetry.
The edge shape comes from the reconstructed invariant mass of the two leptons; in the diagram, these correspond to particles C and D. In models that conserve R-parity, the quantum number that distinguishes SUSY particles from Standard Model particles, a SUSY particle decays by emitting an SM particle and a lighter SUSY particle. In this case, two leptons are emitted in the chain. Reconstructing the full invariant mass of the event is impossible because of the invisible massive particle at the end of the chain. However, the invariant mass of the lepton pair can take any value up to a maximum set by the mass difference between the initial and final SUSY particles, as enforced by energy conservation. This maximum gives a hard cutoff, or ‘edge’, in the invariant mass distribution, as shown in the right side of Figure 1. Since the location of this cutoff depends on the masses in the decay chain, these features can be very useful in extracting information about such decays.
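The edge location can be computed directly from the particle masses. The sketch below uses made-up masses, not values from the CMS fit; it shows the three-body endpoint described above, plus the standard endpoint formula for the case where the decay proceeds in two steps through an on-shell intermediate slepton:

```python
import math

# Dilepton edge endpoints for a SUSY cascade (illustrative masses only).

# Three-body decay chi2 -> chi1 + l+ + l-: the dilepton invariant mass
# runs up to the mass difference between the two neutralinos.
def edge_three_body(m_chi2, m_chi1):
    return m_chi2 - m_chi1

# Two-step decay chi2 -> l + slepton, slepton -> l + chi1, with the
# slepton on shell; this is the textbook endpoint formula.
def edge_two_step(m_chi2, m_slepton, m_chi1):
    return math.sqrt(
        (m_chi2**2 - m_slepton**2) * (m_slepton**2 - m_chi1**2)
    ) / m_slepton

# Hypothetical masses in GeV, chosen only to illustrate the formulas:
print(edge_three_body(170.0, 100.0))                   # 70.0 GeV
print(round(edge_two_step(170.0, 130.0, 100.0), 1))    # also ~70 GeV
```

In both topologies the endpoint depends only on the masses, so measuring the edge position constrains combinations of superparticle masses even though the invisible particle escapes detection.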
Figure 2 shows generated Monte Carlo for a new particle decaying to a two-lepton final state. The red and blue lines show sources of background, while the green is the simulated signal. If the model were a good description of the data, these three colored lines would sum to the observed distribution. Figure 3 shows the actual data distribution, with the relative significance of the excess around 20–70 GeV.
This excess is encouraging for physicists hoping to find stronger evidence for supersymmetry (or, more generally, new physics) in Run II. However, 2.6σ is not especially high, and historically excesses of this size come and go all the time. Both CMS and ATLAS will certainly be watching this region in the 13 TeV data arriving in 2015, to see whether the excess grows into something more significant or simply fades into the background.
Title: Results on low mass WIMPs using an upgraded CRESST-II detector
Author: G. Angloher, A. Bento, C. Bucci, L. Canonica, A. Erb, F. v. Feilitzsch, N. Ferreiro Iachellini, P. Gorla, A. Gütlein, D. Hauff, P. Huff, J. Jochum, M. Kiefer, C. Kister, H. Kluck, H. Kraus, J.-C. Lanfranchi, J. Loebell, A. Münster, F. Petricca, W. Potzel, F. Pröbst, F. Reindl, S. Roth, K. Rottler, C. Sailer, K. Schäffner, J. Schieck, J. Schmaler, S. Scholl, S. Schönert, W. Seidel, M. v. Sivers, L. Stodolsky, C. Strandhagen, R. Strauss, A. Tanzke, M. Uffinger, A. Ulrich, I. Usherov, M. Wüstrich, S. Wawoczny, M. Willers, and A. Zöller
CRESST-II (Cryogenic Rare Event Search with Superconducting Thermometers) is a dark matter search experiment located at the Laboratori Nazionali del Gran Sasso in Italy. It is primarily involved with the search for WIMPs, or Weakly Interacting Massive Particles, which play a key role in both particle physics and astrophysics as a potential candidate for dark matter. If you are not yet intrigued enough about dark matter, see the list of references at the bottom of this post for more information. As dark matter candidates, WIMPs interact only via the gravitational and weak forces, making them extremely difficult to detect.
CRESST-II attempts to detect WIMPs via their elastic scattering off nuclei in scintillating CaWO₄ crystals. This is known as direct detection, where scientists search for evidence of the WIMP itself; indirect detection instead searches for WIMP annihilation or decay products. There are many challenges to direct detection, including the very small recoil energies deposited in such scattering. An additional issue is the extremely high background, dominated by beta and gamma radioactivity in and around the detector. Overall, the experiment expects to obtain only a few tens of events per kilogram-year.
In 2011, CRESST-II reported a small excess of events above the predicted background levels. The statistical analysis uses a maximum likelihood function, which parameterizes each primary background to compute a total number of expected events. The results of this likelihood fit can be seen in Figure 1, where M1 and M2 are different mass hypotheses. From these values, CRESST-II reported a statistical significance of 4.7σ for M1 and 4.2σ for M2. Since a discovery is generally accepted to require a significance of 5σ, these numbers presented a pretty big cause for excitement.
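As an aside on how significances of this kind are typically quantified, a simple counting-experiment approximation is the profile-likelihood (Asimov) formula. The signal and background counts below are invented for illustration and are not CRESST's actual fit values:

```python
import math

# Approximate significance of an excess of s signal events over an
# expected background of b events: Z = sqrt(2((s+b) ln(1 + s/b) - s)).
def asimov_significance(s, b):
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

# Hypothetical counts, for illustration only:
print(round(asimov_significance(20, 15), 1))  # ~4.4 sigma
```

The formula makes clear why background modeling matters so much: the same excess of events over a larger (or less certain) background gives a much smaller significance.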
In July of 2014, CRESST-II released a follow-up paper: after detector upgrades and further background reduction, these tantalizingly high significances have been revised away, ruling out both mass hypotheses. The event excess was likely due to an unidentified e⁻/γ background, which was reduced by a factor of 2–10 via the improved CaWO₄ crystals used in this run. The elimination of these high signal significances is in agreement with other dark matter searches, which have also ruled out WIMP masses on the order of 20 GeV.
Figure 2 shows the most recent exclusion curve, which gives the upper limit on the WIMP–nucleon scattering cross section as a function of WIMP mass. The contour reported in the 2011 paper is shown in light blue. The 90% confidence limit from the 2014 paper is given in solid red, alongside the expected sensitivity from the background model in light red. All other curves are from other experiments; see the paper cited for more information.
Though this particular excess was ultimately not confirmed, these results overall present an optimistic picture for the dark matter search. Comparing the limits from 2011 and 2014 shows much greater sensitivity for WIMP masses below 3 GeV, which were previously unprobed by other experiments. Additional detector improvements may result in even more stringent limits, shaping the dark matter search for future experiments.