QCD, CP, PQ, and the Axion

Figure 1: Axions– exciting new elementary particles, or a detergent? (credit to The Big Blog Theory, [5])

Before we dig into all the physics behind these acronyms (beyond SM physics! dark matter!), let’s start by breaking down the title.

QCD, or quantum chromodynamics, is the study of how quarks and gluons interact. CP is the combined operation of charge conjugation and parity: it swaps a particle for its antiparticle, then switches left and right. CP symmetry states that applying both operations should leave the laws of physics invariant, which is true for electromagnetism. Interestingly, CP is violated by the weak force (this gives rise to the problem of matter-antimatter asymmetry [1]). But more importantly for us, the strong force appears to maintain CP symmetry. In fact, that’s exactly the problem.

CP violation in QCD would give an electric dipole moment to the neutron. Experimentally, physicists have constrained this value pretty tightly around zero. But our QCD Lagrangian has a more complicated vacuum than first thought, giving it a term (Figure 2) with a phase parameter that would break CP [2]. Basically, our issue is that the theory predicts some degree of CP violation, but experimentally we just don’t see it. This is known as the strong CP problem.

Figure 2: QCD Lagrangian term allowing for CP violation. Current experimental constraints place θ_eff ≤ 10⁻¹⁰.
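For reference, the term in question has the standard form quoted throughout the literature (reproduced here for completeness, with g_s the strong coupling, G the gluon field strength, and the tilde denoting its dual):

```latex
\mathcal{L}_{\theta} \;=\; \theta_{\text{eff}}\,\frac{g_s^2}{32\pi^2}\, G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu}
```

A nonzero θ_eff would violate CP, so the neutron electric dipole moment bounds translate directly into the tiny limit quoted in the caption.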

Naturally, physicists want to find a fix for this problem, bringing us to the rest of the article title. The most recognized solution is the Peccei-Quinn (PQ) theory. The idea is that the phase parameter is not a constant at all, but is instead tied to a new symmetry added to the Standard Model. This symmetry, called U(1)_PQ, is spontaneously broken, meaning that the laws of the theory respect the symmetry but the ground state does not.

This may sound a bit similar to the Higgs mechanism, because it is. In both cases, we get a non-zero vacuum expectation value and an extra boson, called a Goldstone boson, which is massless if the broken symmetry is exact. But very few things are exact in physics: U(1)_PQ is only an approximate symmetry, so our Goldstone boson gains a tiny bit of mass after all. This new particle predicted by PQ theory is called the axion. The axion effectively steps into the role of the phase parameter, allowing its value to relax to 0.

Is it reasonable to imagine some extra massive particle bouncing around that we haven’t detected yet? Sure. Perhaps the axion is so heavy that we haven’t yet probed the necessary energy range at the LHC. Or maybe it interacts so rarely that we’ve been looking in the right places and just haven’t had the statistics. But any undiscovered massive particle floating around should make you think about dark matter. In fact, the axion is one of the few remaining viable candidates for dark matter, and lots of people are looking pretty hard for it.

One of the largest collaborations is ADMX at the University of Washington, which uses an RF cavity in a superconducting magnet to detect the very rare conversion of a dark matter axion into a microwave photon [3]. In order to be a good dark matter candidate, the axion would have to be fairly light, and some theories place its mass below 1 eV (for reference, neutrino masses are constrained to be around 0.3 eV or less). ADMX has eliminated possible masses on the micro-eV order. However, theorists are clever, and there’s a lot of model tuning available that can pin the axion mass practically anywhere you want it to be.
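To make that tuning concrete: in the benchmark QCD axion, the mass is inversely proportional to the PQ symmetry-breaking scale f_a, with a commonly quoted form of the relation being

```latex
m_a \;\simeq\; 5.7\,\mu\text{eV}\times\left(\frac{10^{12}\ \text{GeV}}{f_a}\right),
```

so sliding f_a up or down moves the predicted mass across many orders of magnitude.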

Now is the time to factor in the recent buzz about a diphoton excess at 750 GeV (see the January 2016 ParticleBites post to catch up on this). Recent papers are trying to place the axion at this mass, since that resonance has yet to be explained by Standard Model processes.

Figure 3: Plot of data for the recent diphoton excess observed at the LHC.

For example, one can consider aligned QCD axion models, in which there are multiple axions with decay constants around the weak scale, in line with the dark matter relic abundance [4]. The models get pretty diverse from here; suffice it to say that there are many possibilities. Though this excess is still far from confirmed, it is always exciting to speculate about what we don’t know and how we can figure it out. Because of strong CP and these recent model developments, the axion has earned a place pretty high up on this speculation list.

 

References

  1. “The Mystery of CP Violation”, Gabriella Sciolla, MIT
  2. “TASI Lectures on the Strong CP Problem”
  3. Axion Dark Matter Experiment (ADMX)
  4. “Quality of the Peccei-Quinn symmetry in the Aligned QCD Axion and Cosmological Implications”, arXiv:1603.0209 [hep-ph]
  5. The Big Blog Theory on axions

 

Monojet Dark Matter Searches at the LHC

Now is a good time to be a dark matter experiment. The astrophysical evidence for its existence is almost undeniable (gravitational lensing and the cosmic microwave background, for example; see the “Further Reading” list if you want to know more). Physicists are pulling out all the stops trying to pin DM down by any means necessary.

However, by its very nature, it is extremely difficult to detect; dark matter is called dark because it has no known electromagnetic interactions, meaning it doesn’t couple to the photon. It does, however, have very noticeable gravitational effects, and some theories allow for the possibility of weak interactions as well.

While there are a wide variety of experiments searching for dark matter right now, the scope of this post will be a bit narrower, focusing on a common technique used to look for dark matter at the LHC, known as ‘monojets’. We rely on the fact that a quark-quark interaction could actually produce dark matter particle candidates, known as weakly interacting massive particles (WIMPs), through some unknown process. Most likely, the dark matter would then pass through the detector without any interactions, kind of like neutrinos. But if it doesn’t have any interactions, how do we expect to actually see anything? Figure 1 shows the overall Feynman diagram of the interaction; I’ll explain how and why each of these particles comes into the picture.

Figure 1: Feynman diagram for the dark matter production process.

The answer is a pretty useful metric used by particle physicists to measure things that don’t interact, known as ‘missing transverse energy’, or MET. When two protons are accelerated down the beam line, their initial momentum in the transverse plane is necessarily zero. The final state can have all kinds of decay products in that plane, but by conservation of momentum, their transverse momenta have to sum to zero in the end. So if you add up all the momentum in the transverse plane and get a non-zero value, you know the remainder was carried away by non-interacting particles. In our case, dark matter is going to be the missing piece of the puzzle.
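As a toy illustration (not real analysis code; the event content below is invented), MET is just the magnitude of the vector needed to balance the visible transverse momenta:

```python
import math

def missing_et(visible_objects):
    """Missing transverse energy from a list of visible objects,
    each given as (pt, phi) with pt in GeV and phi in radians."""
    # Vector sum of the visible transverse momenta.
    px = sum(pt * math.cos(phi) for pt, phi in visible_objects)
    py = sum(pt * math.sin(phi) for pt, phi in visible_objects)
    # MET is whatever is needed to balance that sum back to zero.
    return math.hypot(px, py)

# A monojet-like toy event: one hard jet and nothing else visible,
# so the entire jet pt shows up as MET recoiling against it.
print(missing_et([(250.0, 0.0)]))  # -> 250.0 (GeV)
```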

Figure 2: Event display for one of the monojet candidates in the ATLAS 7 TeV data.

Now our search method is to collide protons and look for… well, nothing. That’s not an easy thing to do. So let’s add another particle to the final state: a single jet radiated off one of the initial protons. This is a pretty common occurrence in LHC collisions, so we’re not ruining our statistics. But now we have an extra handle for selecting these events, since that radiated jet will recoil against the missing energy in the final state.

An actual event display from the ATLAS detector is shown in Figure 2 (where the single jet is shown in yellow in the transverse plane of the detector).

No results have been released yet from the monojet groups with the 13 and 14 TeV data. However, the same method was used on the 2012-2013 LHC data, and it has provided some results that can be compared to current knowledge. Figure 3 shows the WIMP-nucleon cross section as a function of WIMP mass from CMS at the LHC (EPJC 75 (2015) 235), overlaid with exclusions from a variety of other experiments. Anything above and to the right of these curves is the excluded region.

From here we can see that the LHC can provide better sensitivity to low mass regions with spin dependent couplings to DM. It’s worth giving the brief caveat that these comparisons are extremely model dependent and require a lot of effective field theory; notes on this are also given in the Further Reading list. The current results look pretty thorough, and a large region of the WIMP mass seems to have been excluded. Interestingly, some searches observe slight excesses in regions that other experiments have ruled out; in this way, these ‘exclusions’ are not necessarily as cut and dry as they may seem. The dark matter mystery is still far from a resolution, but the LHC may be able to get us a little bit closer.

Figure 3: WIMP-nucleon cross section as a function of WIMP mass from CMS (EPJC 75 (2015) 235), overlaid with exclusions from other experiments.

With all this incoming data and such a wide variety of searches ongoing, it’s likely that dark matter will remain a hot topic in physics for decades to come, with or without a discovery. In the words of dark matter pioneer Vera Rubin, “We have peered into a new world, and have seen that it is more mysterious and more complex than we had imagined. Still more mysteries of the universe remain hidden. Their discovery awaits the adventurous scientists of the future. I like it this way.”

 

References & Further Reading:

  • Links to the CMS and ATLAS 8 TeV monojet analyses
  • “Dark Matter: A Primer”, arXiv:1006.2483 [hep-ph]
  • Effective Field Theory notes
  • “Simplified Models for Dark Matter Searches at the LHC”, arXiv:1506.03116 [hep-ph]
  • “Search for dark matter at the LHC using missing transverse energy”, Sarah Malik (CMS Collaboration), Moriond talk

 

How to Turn On a Supercollider

Figure 1: CERN Control Centre excitement on June 5. Image from home.web.cern.ch.

After two years of slumber, the world’s biggest particle accelerator has come back to life. This marks the official beginning of Run 2 of the LHC, which will collide protons at nearly twice the energies achieved in Run 1. Results from this data were already presented at the recently concluded European Physical Society (EPS) Conference on High Energy Physics. And after the LHC achieved fame in 2012 through the observation of the Higgs boson, it’s no surprise that the scientific community is waiting with bated breath to see what it will do next.

The first official 13 TeV stable beam physics data arrived on June 5th. One of the first events recorded by the CMS detector is shown in Figure 2. But as it turns out, you can’t just walk up to the LHC, plug it back into the wall, and press the on switch (crazy, I know). It takes an immense amount of work, planning, and coordination to even get the thing running.

Figure 2: Event display from one of the first Run 2 collisions.

The machine testing begins with the magnets. Since the LHC dipole magnets are superconducting, they need to be cooled to about 1.9 K in order to function, which can take weeks. Each dipole circuit must then be tested to ensure the functionality of the quench protection circuit, which safely dissipates the magnet’s stored energy in the event of a sudden loss of superconductivity. This process occurred between July and December of 2014.

Once the magnets are set, it’s time to start actually making beam. Immediately before entering the LHC, protons circle around the Super Proton Synchrotron (SPS), which acts as a pre-accelerator. Getting beam from the SPS to the LHC requires synchronization, a functional injection system, a beam dump procedure, and a whole lot of other processes that are re-awoken and carefully tested. By April, beam commissioning was officially underway, meaning that protons were injected and circulating, and a mere 8 weeks later there were successful collisions at 6.5 TeV per beam. As of right now, the CMS detector is reporting 84 pb⁻¹ of total integrated luminosity; a day-by-day breakdown can be seen in Figure 3.

Figure 3: CMS total integrated luminosity per day, from Ref 4.

But just having collisions does not mean that the LHC is up and fully functional. Sometimes things go wrong right when you least expect it. For example, the CMS magnet has been off to a bit of a rough start: an issue with its cooling system kept the magnetic field off, meaning that charged particles would not bend. The LHC has also been taking the occasional week off for ‘scrubbing’, in which lots of protons are circulated to burn off electron clouds in the beam pipes.

This is all leading up to the next technical stop, when the CERN engineers get to go fix things that have broken and improve things that don’t work perfectly. So it’s a slow process, sure. But all the caution and extra steps and procedures are what make the LHC a one-of-a-kind experiment that has big sights set for the rest of Run 2. More posts to follow when more physics results arrive!

 

References:

  1. LHC Commissioning site
  2. Cryogenics & Magnets at the LHC
  3. CERN collisions announcement
  4. CMS Public Luminosity results

Prospects for the International Linear Collider

Title: “Physics Case for the International Linear Collider”
Author: Linear Collider Collaboration (LCC) Physics Working Group
Published: arXiv:1506.05992 [hep-ex]

For several years, rumors have been flying around the particle physics community about an entirely new accelerator facility, one that can take over for the LHC during its more extensive upgrades and can give physicists a different window into the complex world of the Standard Model and beyond. Through a few setbacks and moments of indecision, the project seems to have more momentum now than ever, so let’s go ahead and talk about the International Linear Collider: what it is, why we want it, and whether or not it will ever actually get off the ground.

The ILC is a proposed linear accelerator that will collide electrons and positrons, in comparison to the circular Large Hadron Collider ring that collides protons. So why make these design differences? Hasn’t the LHC done a lot for us? In two words: precision measurements!

Of course, the LHC got us the Higgs, and that’s great. But there are certain processes that physicists really want to look at now that occupy much higher fractions of the electron-positron cross section. In addition, the messiness associated with strong interactions is entirely gone with a lepton collider, leaving only a very well-defined initial state and easily calculable backgrounds. Let’s look specifically at what particular physical processes are motivating this design.

Figure 1: Higgs to fermion couplings, from the CMS experiment (left) and projected for the ILC (right).

1. The Higgs. Everything always comes back to the Higgs, doesn’t it? We know that it’s out there, but beyond that, there are still many questions left unanswered. Physicists still want to determine whether the Higgs is composite, or whether it perhaps fits into a supersymmetric model of some kind. Additionally, we’re still uncertain about the couplings of the Higgs, both to the massive fermions and to itself. Figure 1 shows the current best estimate of the Higgs couplings, which we expect to be proportional to the fermion mass (a worked version of that relation follows this list), in comparison to how the precision of these measurements should improve with the ILC.

2. The Top Quark. Another particle that we’ve already discovered, though we still want to know more about its characteristics and behaviors. We know that the Higgs field takes on a symmetry-breaking value in all of space, due to the observed split of the electromagnetic and weak forces. As it turns out, it is the coupling of the Higgs to the top that provides this value, making the top a key player in the Standard Model game.

3. New Physics. And of course, there’s always the discovery potential. Since electron and positron beams can be polarized, we would be able to measure backgrounds with a whole new level of precision, providing a better image of possible decay chains that include dark matter or other beyond-the-SM particles.
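As promised above, here is the relation behind the Higgs-coupling expectation in item 1: at tree level in the Standard Model, the Yukawa coupling of a fermion grows linearly with its mass (a standard textbook result, quoted here for orientation):

```latex
y_f \;=\; \frac{\sqrt{2}\,m_f}{v}, \qquad v \simeq 246\ \text{GeV},
\qquad y_t \simeq \frac{\sqrt{2}\,(173\ \text{GeV})}{246\ \text{GeV}} \approx 0.99 .
```

Deviations of the measured couplings from this straight line are exactly what the ILC’s improved precision would be hunting for.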

Figure 2: ILC design blueprint, from the ILC home page/Form One.

Let’s move on to the actual design prospects for the ILC. Figure 2 shows the most recent blueprint of what such an accelerator would look like. The ILC would have 2 separate detectors, and would be able to accelerate electrons/positrons to an energy of 500 GeV, with an option to upgrade to 1 TeV at a later point. The entire tunnel would be 31 km long, with two damping rings shown at the center. A linear collider is needed because, at these energies, synchrotron radiation losses in a ring become crippling for electrons; the energy radiated per turn grows as the fourth power of the relativistic gamma factor. For example, the Large Electron-Positron Collider synchrotron at CERN accelerated electrons to 50 GeV, giving them a relativistic gamma factor of about 98,000. Compare that to a proton of 50 GeV in the same ring, which has a gamma of about 54. That high gamma means that an electron requires an insane amount of energy to offset its synchrotron radiation, making a linear collider a more reasonable and cost-effective choice.
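A quick back-of-the-envelope check of those numbers (a sketch using only the definition γ = E/mc² and the standard γ⁴ scaling of energy loss per turn at fixed bending radius):

```python
# Back-of-the-envelope: relativistic gamma and synchrotron-loss scaling.
# Energy radiated per turn in a ring scales as gamma^4 / radius, so at
# fixed radius the electron/proton loss ratio is (gamma_e / gamma_p)^4.

M_ELECTRON = 0.000511  # GeV
M_PROTON = 0.938       # GeV

def gamma(energy_gev, mass_gev):
    """Relativistic gamma factor, gamma = E / (m c^2)."""
    return energy_gev / mass_gev

g_e = gamma(50.0, M_ELECTRON)  # ~98,000 for a 50 GeV electron
g_p = gamma(50.0, M_PROTON)    # ~54 for a 50 GeV proton

print(f"electron gamma: {g_e:,.0f}")
print(f"proton gamma:   {g_p:,.0f}")
print(f"loss ratio (g_e/g_p)^4: {(g_e / g_p) ** 4:.2e}")
# ~1e13: same ring, same beam energy, about ten trillion times
# more radiation from the electron.
```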

 

Figure 3: Possible sites for the ILC in Japan.

In any large (read: expensive) experiment such as this, a lot of politics are going to come into play. The current highest bidder for the accelerator seems to be Japan, with possible construction sites in its mountain ranges (see Figure 3). The Japanese government is pretty eager to contribute a lot of funding to the project, something that other contenders have been reluctant to do (though such funding promises can very easily go awry, as the poor SSC shows us). The Reference Design Reports estimate the cost at $6.7 billion, though U.S. Department of Energy officials have placed it closer to $20 billion. But the benefits of such a collaboration are immense. The infrastructure of such an accelerator could lead to the creation of a “new CERN”, one that could have as far-reaching an influence in the future as CERN has enjoyed in the past few decades. Bringing together about 1000 scientists from more than 20 countries, the ILC truly has the potential to do great things for future international scientific collaboration, making it one of the most exciting prospects on the horizon of particle physics.

 

Further Reading:

  1. The International Linear Collider site: all things ILC
  2. ILC Reference Design Reports (RDR), for the very ambitious reader

A Quark Gluon Plasma Primer

Figure 1: Artist’s rendition of a proton breaking down into free quarks above a critical temperature. Image credit Lawrence Berkeley National Laboratory.

Quark gluon plasma, affectionately known as QGP or “quark soup”, is a big deal, attracting attention from particle, nuclear, and astrophysicists alike. In fact, scrolling through past ParticleBites, I was amazed to see that it hadn’t been covered yet! So consider this a QGP primer of sorts, including what exactly is predicted, why it matters, and what the landscape looks like in current experiments.

To understand why quark gluon plasma is important, we first have to talk about quarks themselves, and the laws that explain how they interact, otherwise known as quantum chromodynamics. In our observable universe, quarks are needy little socialites who can’t bear to exist by themselves. We know them as constituent particles in hadronic color-neutral matter, where the individual color charge of a single quark is either cancelled by its anticolor (as in mesons) or by two other differently colored quarks (as with baryons). But theory predicts that at a high enough temperature and density, the quarks can rip free of the strong force that binds them and become deconfined. This resulting matter is thus composed entirely of free quarks and gluons, and we expect it to behave as an almost perfect fluid. Physicists believe that in the first few fleeting moments after the Big Bang, all matter was in this state due to the extremely high temperatures. In this way, understanding QGP and how particles behave at the highest possible temperatures will give us a new insight into the creation and evolution of the universe.

The experimental history of QGP begins in the 1980s at CERN with the Super Proton Synchrotron (which is now used as the final injector into the LHC). Two decades in, CERN announced in 2000 that it had evidence for a ‘new state of matter’; see Further Reading #2 for more information. Since then, the LHC and the Brookhaven Relativistic Heavy Ion Collider (RHIC) have taken up the search, colliding heavy lead or gold ions and producing temperatures on the order of trillions of kelvin. Both experiments have since released results claiming to have produced QGP; see Figure 2 for a phase diagram that shows where QGP lives in experimental space.
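To get a feel for that scale: lattice QCD calculations commonly quote a critical temperature of around 160 MeV (a rough benchmark, not a result from these particular experiments), and converting with the Boltzmann constant indeed lands in the trillions of kelvin:

```latex
T_c \;\approx\; \frac{160\ \text{MeV}}{k_B}
= \frac{1.6\times 10^{8}\ \text{eV}}{8.62\times 10^{-5}\ \text{eV/K}}
\;\approx\; 1.9\times 10^{12}\ \text{K}.
```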

Figure 2: Phases of QCD and the energy scales probed by experiment.

All this being said, the QGP story is not over just yet. Physicists still want a better understanding of how this new state of matter behaves; evidence seems to indicate that it acts almost like a perfect fluid (but when has “almost” ever satisfied a physicist?). Furthermore, experiments want to know more about how QGP transitions back into a regular hadronic state of matter, as shown in the phase diagram. These questions draw in other kinds of physics, including statistical mechanics, to examine how bubble formation or ‘cavitation’ occurs when the chemical potential or pressure is altered during QGP evolution (see Further Reading #5). In this sense, observation of a QGP-like state is just the beginning, and heavy ion collision experiments will surely be releasing new results in the future.

 

Further Reading:

  1. “The Quark Gluon Plasma: A Short Introduction”, arXiv:1101.3937 [hep-ph]
  2. “Evidence for a New State of Matter”, CERN press release
  3. “Hot stuff: CERN physicists create record-breaking subatomic soup”, Nature blog
  4. “The QGP Discovered at RHIC”, arXiv:nucl-th/0403032
  5. “Cavitation in a quark gluon plasma with finite chemical potential and several transport coefficients”, arXiv:1505.06335 [hep-ph]

Cosmic Microwave Background: The Role of Particles in Astrophysics

Over the past decade, a new trend has been emerging in physics, one that is motivated by several key questions: what do we know about the origin of our universe? What do we know about its composition? And how will the universe evolve from here? To delve into these questions naturally requires a thorough examination of the universe via the astrophysics lens. But studying the universe on a large scale alone does not provide a complete picture. In fact, it is just as important to see the universe on the smallest possible scales, necessitating the trendy and (fairly) new hybrid field of particle astrophysics. In this post, we will look specifically at the cosmic microwave background (CMB), classically known as a pillar of astrophysics, within the context of particle physics, providing a better understanding of the broader questions that encompass both fields.

Essentially, the CMB is just what we see when we look into the sky and we aren’t looking at anything else. Okay, fine. But if we’re not looking at something in particular, why do we see anything at all? The answer requires us to jump back a few billion years to the very early universe.

Figure 1: Particle interactions shown up to the point of recombination, after which photon paths are unchanged.

Immediately after the Big Bang, it was impossible for particles to form atoms without immediately being broken apart by constant bombardment from stray photons. About 380,000 years after the Big Bang, the universe expanded and cooled to a temperature of about 3,000 K, allowing the first formation of stable hydrogen atoms. Since hydrogen is electrically neutral, the leftover photons could no longer interact, meaning that from that point on their paths remained unaltered indefinitely. These are the photons we observe as the CMB; Figure 1 shows this idea diagrammatically. From our present observation point, we measure the CMB to have a temperature of about 2.73 K.
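These two temperatures are tied together by cosmological redshift: the photon temperature falls as the universe expands, which gives the standard textbook estimate of the redshift of recombination (using the rounded temperatures above):

```latex
T_0 = \frac{T_{\text{rec}}}{1+z_{\text{rec}}}
\quad\Rightarrow\quad
1+z_{\text{rec}} = \frac{3000\ \text{K}}{2.73\ \text{K}} \approx 1100 .
```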

Since this radiation has been unimpeded since that specific point (known as the point of ‘recombination’), we can think of the CMB as a snapshot of the very early universe. It is interesting, then, to examine the regularity of the spectrum; the CMB is naturally not perfectly uniform, and the slight temperature variations can provide a lot of information about how the universe formed. In the early primordial-soup universe, slight random density fluctuations exerted a greater gravitational pull on their surroundings, since they had slightly more mass. This process continued, producing very large dense patches in an otherwise uniform space and heating up the photons in those regions accordingly. The Planck satellite, launched in 2009, provides some beautiful images of the temperature anisotropies of the universe, as seen in Figure 2. Some of these variations can be quite severe, as in the recently released results about a supervoid aligned with an especially cold spot in the CMB (see Further Reading, item 4).

 

Figure 2: Planck satellite heat map images of the CMB.

 

Figure 3: Composition of the universe by percent.

So what does this all have to do with particles? We’ve talked about a lot of astrophysics so far, so let’s tie it all together. The big connection here is dark matter. The CMB has given us strong evidence that our universe has a flat geometry, and through general relativity, this places restrictions on the mass, energy, and density of the universe. In this way, we know that atomic matter can constitute only 5% of the universe, and analysis of the peaks in the CMB gives an estimate of 26% for the total dark matter presence. The rest of the universe is believed to be dark energy (see Figure 3).

Both dark matter and dark energy are huge questions in particle physics that could be the subject of a whole other post. But the CMB plays a big role in making our questions a bit more precise. The CMB is one of several pieces of strong evidence that require the existence of dark matter and dark energy to justify what we observe in the universe. Some potential dark matter candidates include weakly interacting massive particles (WIMPs), sterile neutrinos, or the lightest supersymmetric particle, all of which bring us back to particle physics for experimentation. Dark energy is not as well understood, and there are still a wide variety of disparate theories to explain its true identity. But it is clear that the future of particle physics will likely be closely tied to astrophysics, so as a particle physicist it’s wise to keep an eye out for new developments in both fields!

 

Further Reading: 

  1. “The Cosmic Cocktail: Three Parts Dark Matter”, Katherine Freese
  2. “Physics of the cosmic microwave background anisotropy”, arXiv:astro-ph
  3. Summary of dark matter vs. dark energy and other resources from NASA
  4. Summary of the supervoid aligned with a cold spot in the CMB, Royal Astronomical Society Monthly Notices

LHC Run II: What To Look Out For

The Large Hadron Collider is the world’s largest proton collider, and in a mere five years of active data acquisition, it has already achieved fame for the discovery of the elusive Higgs boson in 2012. Though the LHC is currently off to allow for a series of repairs and upgrades, it is scheduled to begin running again within the month, this time with a proton collision energy of 13 TeV. This is nearly double the previous run energy of 8 TeV, opening the door to a host of new particle productions and processes. Many physicists are keeping their fingers crossed that another big discovery is right around the corner. Here are a few specific things that will be important in Run II.

 

1. Luminosity scaling

Though this is a very general category, it is a huge component of the Run II excitement. This is simply due to the scaling of parton luminosity with collision energy, which translates into a remarkable increase in discovery potential for the higher-energy run.

If you’re not familiar, luminosity is the number of events per unit time and cross-sectional area. Integrated luminosity sums this instantaneous value over time, giving a metric in units of 1/area.

L = (1/σ) dN/dt,    L_int = ∫ L dt

In the particle physics world, luminosities are measured in inverse femtobarns, where 1 fb⁻¹ = 1/(10⁻⁴³ m²). Each of the two main detectors at CERN, ATLAS and CMS, collected 30 fb⁻¹ by the end of 2012. The main point is that more luminosity means more events in which to search for new physics.
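That statement is just N = σ × L_int in code form (a trivial sketch; the cross section below is invented for illustration):

```python
# Expected event count: N = sigma * integrated luminosity.
# Consistent units matter: with sigma in fb, luminosity goes in fb^-1.

def expected_events(sigma_fb, lumi_fb_inv):
    """Expected number of events for a process with cross section
    sigma_fb (in femtobarns) given lumi_fb_inv (in inverse fb)."""
    return sigma_fb * lumi_fb_inv

# A hypothetical 1 pb (= 1000 fb) process over the ~30 fb^-1
# each experiment recorded by the end of 2012:
print(expected_events(1000.0, 30.0))  # -> 30000.0 events
```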

Figure 1 shows the ratios of LHC parton luminosities for 7 vs. 8 TeV, and again for 13 vs. 8 TeV. Since the y axis of the plot is in log scale, it’s easy to tell that the 13 to 8 TeV ratio is very large. In fact, for some high-mass processes, 100 fb⁻¹ at 8 TeV is the equivalent of just 1 fb⁻¹ at 13 TeV. So increasing the energy by a factor of less than 2 increases the effective reach of a given dataset by up to a factor of 100! This means that even in the first few months of running at 13 TeV, there will be a huge amount of data available for analysis, leading to the likely release of many analyses shortly after the beginning of data acquisition.

Figure 1: Parton luminosity ratios, from J. Stirling at Imperial College London (see references).

 

2. Supersymmetry

Supersymmetry theory proposes the existence of a superpartner for every particle in the Standard Model, effectively doubling the number of fundamental particles in the universe. This helps to answer many questions in particle physics, most notably why the Higgs mass is so much smaller than the Planck scale, known as the ‘hierarchy’ problem (see the further reading list for some good explanations).

Current mass limits on many supersymmetric particles are getting pretty high, leaving some physicists concerned about the feasibility of finding evidence for SUSY. Many of these particles have already been excluded for masses below the order of a TeV, making it very difficult to create them with the LHC as is. While there is talk of another LHC upgrade to achieve energies even higher than 14 TeV, for now the SUSY searches will have to make use of the energy that is available.

Figure 2: Cross sections for the case of equal degenerate squark and gluino masses as a function of mass at √s = 13 TeV, from arXiv:1407.5066. Here q stands for squark, g for gluino, and t for stop.

 

Figure 2 shows the cross sections for the pair production of various supersymmetric particles, including squarks (the superpartners of the quarks) and gluinos (the superpartners of the gluons). Given the luminosity scaling described previously, these cross sections tell us that with only 1 fb⁻¹, physicists will be able to surpass the existing sensitivity for these supersymmetric processes. As a result, there will be a rush of searches being performed in a very short time after the run begins.

 

3. Dark Matter

Dark matter is one of the greatest mysteries in particle physics to date (see past particlebites posts for more information). It is also one of the most difficult mysteries to solve, since dark matter candidate particles are by definition very weakly interacting. In the LHC, potential dark matter creation is detected as missing transverse energy (MET) in the detector, since the particles do not leave tracks or deposit energy.

One of the best ways to ‘see’ dark matter at the LHC is in events with mono-jet or mono-photon signatures: jets or photons that do not occur in pairs, but rather singly, as a result of radiation off the initial state. Typically these events have a very high transverse momentum (pT) jet, giving a good primary vertex, and large amounts of MET, making them easier to observe. Figure 3 shows a Feynman diagram of such a process, with the MET recoiling against a jet or a photon.

Figure 3: Feynman diagram of mono-X searches for dark matter, from “Hunting for the Invisible.”

 

Though the topics in this post will certainly be popular in the next few years at the LHC, they do not even begin to span the huge volume of physics analyses that we can expect to see emerging from Run II data. The next year alone has the potential to be a groundbreaking one, so stay tuned!

 

References: 

Further Reading:

 

 

Neutrinoless Double Beta Decay Experiments

Title: Neutrinoless Double Beta Decay Experiments
Author: Alberto Garfagnini
Published: arXiv:1408.2455 [hep-ex]

Neutrinoless double beta decay is a theorized process that, if observed, would provide evidence that the neutrino is its own antiparticle. The relatively recent discovery of neutrino mass from oscillation experiments makes this search particularly relevant, since the Majorana mechanism, which requires the neutrino to be its own antiparticle, can also generate neutrino masses. A variety of experiments based on different techniques hope to observe this process. Before providing an experimental overview, we first discuss the theory itself.

Figure 1: Neutrinoless double beta decay.

Beta decay occurs when a nucleus emits an electron or positron along with a corresponding neutrino. Double beta decay is simply the simultaneous beta decay of two neutrons in a nucleus. “Neutrinoless,” of course, means that this decay occurs without the accompanying neutrinos; in this picture, the two neutrinos in the double beta decay annihilate with one another, which is only possible if they are self-conjugate. Figures 1 and 2 demonstrate the process by formula and image, respectively.

Figure 2: Double beta decay & neutrinoless double beta decay, from particlecentral.com/neutrinos_page.html.
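Written out, the two processes for a nucleus with mass number A and atomic number Z are (standard notation; the second, neutrinoless mode is the hypothesized one):

```latex
2\nu\beta\beta:\ (A,Z) \to (A,Z+2) + 2e^- + 2\bar{\nu}_e,
\qquad
0\nu\beta\beta:\ (A,Z) \to (A,Z+2) + 2e^- .
```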

The lack of accompanying neutrinos in such a decay violates lepton number, meaning this process is forbidden unless neutrinos are Majorana fermions. Without delving into a full explanation, this simply means that the particle is its own antiparticle (more information is given in the references). The importance lies in the lepton number of the neutrino. Equivalently, one can picture neutrinoless double beta decay as a nucleus absorbing two neutrinos, then decaying into two protons and two electrons (to conserve charge). The only way in which this process does not violate lepton number is if the lepton number is the same for a neutrino and an antineutrino; in other words, if they are the same particle.

The experiments currently searching for neutrinoless double beta decay can be classified according to the material used for detection. A partial list of active and future experiments is provided below.

1. EXO (Enriched Xenon Observatory): New Mexico, USA. The detector is filled with liquid ¹³⁶Xe, which provides worse energy resolution than gaseous xenon, but compensates through the use of both scintillation and ionization signals. The collaboration finds no statistically significant evidence for 0νββ decay, and places a lower limit on the half-life of 1.1 × 10²⁵ years at 90% confidence.

2. KamLAND-Zen: Kamioka underground neutrino observatory near Toyama, Japan. Like EXO, the experiment uses liquid xenon, but in the past it has required purification due to aluminum contamination in the detector. They report a 90% CL lower limit on the 0νββ half-life of 2.6 × 10²⁵ years. Figure 3 shows the energy spectra of candidate events with the best-fit background.

Figure 3: KamLAND-Zen energy spectra of selected candidate events, together with the best-fit backgrounds and 2νββ decays.

3. GERDA (Germanium Detector Array): Laboratori Nazionali del Gran Sasso, Italy. GERDA utilizes high-purity ⁷⁶Ge diodes, which provide excellent energy resolution but typically come with very large backgrounds. To prevent signal contamination, GERDA has ultra-pure shielding that protects measurements from environmental radiation background sources. The half-life is bounded below at 90% confidence by 2.1 × 10²⁵ years.

4. MAJORANA: South Dakota, USA. This experiment is under construction, but a prototype is expected to begin running in 2014. If results from GERDA and MAJORANA look good, there is talk of building a next-generation germanium experiment that combines diodes from each detector.

5. CUORE: Laboratori Nazionali del Gran Sasso, Italy. CUORE is a ¹³⁰Te bolometric direct detector, meaning that it has two layers: an absorber made of crystal that releases energy when struck, and a sensor that detects the induced temperature changes. The experiment is currently under construction, so there are no definite results yet, but it expects to begin taking data in 2015.

While these results do not yet show the existence of 0νββ decay, such an observation would demonstrate the existence of Majorana fermions and give an estimate of the absolute neutrino mass scale. However, a continued non-observation would be just as significant in the role of scientific discovery, since it would suggest that the neutrino is not in fact its own antiparticle. To push the limit on the half-life further, more advanced detector technologies are necessary; it will be interesting to see whether MAJORANA and CUORE will have better sensitivity to this process.

 

Further Reading:

 

CMS evidence of a possible SUSY decay chain

Title: “Search for physics beyond the standard model in events with two leptons, jets, and missing transverse energy in pp collisions at sqrt(s)=8 TeV.”
Author: CMS Collaboration
Published: CMS Public: Physics Results SUS12019

The CMS Collaboration, one of the two main groups working on multipurpose experiments at the Large Hadron Collider, has recently reported an excess of events with an estimated significance of 2.6σ. As a reminder, discoveries in particle physics are typically declared at 5σ. While this excess is small enough that it may not be related to new physics at all, it is also large enough to generate some discussion.

The excess occurs at an invariant mass of 20-70 GeV in dilepton + missing transverse energy (MET) events. Some theorists claim that this may be a signature of supersymmetry. The analysis was completed using kinematic ‘edges’, an example of which can be seen in Figure 1. These shapes are typical of the decays of new particles predicted by supersymmetry.

 

Figure 1: Diagram of kinematic ‘edge’ effects in decay chains, from “Search for an ‘edge’ with CMS”. On the left, A, B, C, and D represent particles decaying. On the right, the invariant mass of final state particles C and D is shown, where the y axis represents the number of events.

The edge shape comes from the reconstructed invariant mass of the two leptons; in the diagram, these correspond to particles C and D. In models that conserve R-parity, the quantum number that distinguishes SUSY particles from Standard Model particles, a SUSY particle decays by emitting an SM particle and a lighter SUSY particle. In this case, two leptons are emitted along the chain. Fully reconstructing the invariant mass of the event is impossible because of the invisible massive particle at the end of the chain. However, the invariant mass of the lepton pair can take any value up to a maximum set by the mass differences between the initial and final states, as enforced by energy conservation. This maximum gives a hard cutoff, or ‘edge’, in the invariant mass distribution, as shown in the right side of Figure 1. Since the location of this cutoff depends on the masses of the sparticles in the chain, these features can be very useful in obtaining information about such decays.
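As a concrete example, in the classic slepton-mediated chain often used to interpret such edges (a standard textbook result, not necessarily the exact model targeted by this analysis), χ̃₂⁰ → ℓℓ̃ → ℓℓ χ̃₁⁰, the edge position is fixed entirely by the three masses:

```latex
\left(m_{\ell\ell}^{\text{edge}}\right)^{2}
= \frac{\left(m_{\tilde{\chi}_2^0}^{2}-m_{\tilde{\ell}}^{2}\right)
        \left(m_{\tilde{\ell}}^{2}-m_{\tilde{\chi}_1^0}^{2}\right)}
       {m_{\tilde{\ell}}^{2}} .
```

Measuring the edge therefore constrains a combination of the sparticle masses even though the χ̃₁⁰ itself escapes undetected.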

 

Figure 2 shows generated Monte Carlo for a new particle decaying to a two-lepton final state. The red and blue lines show sources of background, while the green shows the simulated signal. If the model were a good description of the data, these three colored lines would sum to the distribution observed in data. Figure 3 shows the actual data distribution, with the excess around 20-70 GeV and its estimated significance.

Figure 2: Monte Carlo invariant mass distribution of paired electrons or muons; signal shown in green with characteristic edge.
Figure 3: Invariant mass data distribution for paired leptons; excess between 20 and 70 GeV constitutes an estimated 2.6σ significance. 


This excess is encouraging for physicists hoping to find stronger evidence for supersymmetry (or more generally, new physics) in Run II. However, 2.6σ is not especially high, and historically these excesses come and go all the time. Both CMS and ATLAS will certainly be watching this resonance in the 2015 13 TeV data, to see whether it grows into something more significant or simply fades into the background.

 

Further reading:

New Results from the CRESST-II Dark Matter Experiment

  • Title: Results on low mass WIMPs using an upgraded CRESST-II detector
  • Author: G. Angloher, A. Bento, C. Bucci, L. Canonica, A. Erb, F. v. Feilitzsch, N. Ferreiro Iachellini, P. Gorla, A. Gütlein, D. Hauff, P. Huff, J. Jochum, M. Kiefer, C. Kister, H. Kluck, H. Kraus,  J.-C. Lanfranchi, J. Loebell, A. Münster, F. Petricca, W. Potzel, F. Pröbst, F. Reindl, S. Roth, K. Rottler, C. Sailer, K. Schäffner, J. Schieck, J. Schmaler, S. Scholl, S. Schönert, W. Seidel, M. v. Sivers, L. Stodolsky, C. Strandhagen, R. Strauss, A. Tanzke, M. Uffinger, A. Ulrich, I. Usherov, M. Wüstrich, S. Wawoczny, M. Willers, and A. Zöller
  • Published: arXiv:1407.3146 [astro-ph.CO]

CRESST-II (Cryogenic Rare Event Search with Superconducting Thermometers) is a dark matter search experiment located at the Laboratori Nazionali del Gran Sasso in Italy. It is primarily involved with the search for WIMPs, or Weakly Interacting Massive Particles, which play a key role in both particle physics and astrophysics as a potential candidate for dark matter. If you are not yet intrigued enough about dark matter, see the list of references at the bottom of this post for more information. As dark matter candidates, WIMPs interact only via the gravitational and weak forces, making them extremely difficult to detect.

CRESST-II attempts to detect WIMPs via their elastic scattering off nuclei in scintillating CaWO₄ crystals. This is a process known as direct detection, where scientists search for evidence of the WIMP itself; indirect detection instead searches for WIMP annihilation or decay products. There are many challenges to direct detection, including the relatively low recoil energy present in such scattering. An additional issue is the extremely high background, which is dominated by beta and gamma radiation from the nuclei. Overall, the experiment expects to obtain a few tens of events per kilogram-year.

Figure 1: Expected number of events for background and signal in the 2011 CRESST-II run, from arXiv:1109.0702.

 

In 2011, CRESST-II reported a small excess of events above the predicted background levels. The statistical analysis makes use of a maximum likelihood function, which parameterizes each primary background to compute a total number of expected events. The results of this likelihood fit can be seen in Figure 1, where M1 and M2 are different WIMP mass hypotheses. From these values, CRESST-II reported a statistical significance of 4.7σ for M1 and 4.2σ for M2. Since a discovery is generally accepted to require a significance of 5σ, these numbers presented a pretty big cause for excitement.
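To give a flavor of the method, here is a minimal single-bin sketch of a maximum-likelihood fit (illustrative only: CRESST-II’s real fit parameterizes the full energy and light-yield distributions of each background, and all numbers below are made up):

```python
import math

from scipy.optimize import minimize_scalar

def nll(signal, n_observed, background):
    """Negative log-likelihood for a Poisson counting experiment."""
    expected = signal + background
    # Poisson log-likelihood, dropping the constant log(n!) term.
    return expected - n_observed * math.log(expected)

n_obs = 67.0   # hypothetical observed count
b_exp = 45.0   # hypothetical expected background

# The best-fit signal maximizes the likelihood (minimizes the NLL).
fit = minimize_scalar(nll, bounds=(0.0, 100.0), args=(n_obs, b_exp),
                      method="bounded")
s_hat = fit.x

# Approximate significance from the likelihood ratio against the
# background-only hypothesis: Z ~ sqrt(2 * (NLL_bkg - NLL_best)).
z = math.sqrt(2.0 * (nll(0.0, n_obs, b_exp) - nll(s_hat, n_obs, b_exp)))
print(f"best-fit signal: {s_hat:.1f} events, significance ~ {z:.1f} sigma")
```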


In July of 2014, CRESST-II released a follow-up paper: after some detector upgrades and further background reduction, those tantalizingly high significances have been revised, ruling out both mass hypotheses. The event excess was likely due to unidentified e/γ background, which was reduced by a factor of 2-10 via the improved CaWO₄ crystals used in this run. The elimination of these high signal significances is in agreement with other dark matter searches, which have also ruled out WIMP masses on the order of 20 GeV.

Figure 2 shows the most recent exclusion curve for the WIMP, which gives the upper limit on the scattering cross section as a function of possible WIMP mass. The contour reported in the 2011 paper is shown in light blue. The 90% confidence limit from the 2014 paper is given in solid red, alongside the expected sensitivity from the background model in light red. All other curves are due to data from other experiments; see the cited paper for more information.

Figure 2: WIMP parameter space for spin-independent WIMP-nucleon scattering, from arXiv:1407.3146.

Though this particular excess was ultimately not confirmed, these results overall present an optimistic picture for the dark matter search. Comparison between the limits from 2011 and 2014 shows a much greater sensitivity for WIMP masses below 3 GeV, which were previously unprobed by other experiments. Additional detector improvements may result in even more stringent limit setting, shaping the dark matter search for future experiments.

 

Further Reading