
Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of it really is general, and can’t be put into any more specific place. But most of it, including this, falls into another category: things arXiv’s moderators think are fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There probably are legitimate papers in there too…but for every paper in there, you can guarantee that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
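To see where the imaginary number comes from (a standard textbook relation, not something from the paper itself), write down the relativistic energy of a moving particle:

```latex
E = \frac{m c^2}{\sqrt{1 - v^2/c^2}}
```

For v > c, the quantity under the square root is negative, so the denominator is imaginary. The only way for the energy E to stay a real number is for the mass m to be imaginary as well.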

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.

Assuming that someone is a republic serial villain is a good example.

Why is that? It has to do with the nature of mass.

In quantum field theory, what we observe as particles arise as ripples in quantum fields, extending across space and time. The harder it is to make the field ripple, the higher the particle’s mass.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!
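In slightly more mathematical terms (a textbook sketch, not specific to this paper): a particle's mass-squared is the curvature of its field's potential energy at the field's default value. Take the standard quartic potential,

```latex
V(\phi) = \tfrac{1}{2} m^2 \phi^2 + \tfrac{1}{4} \lambda \phi^4 .
```

If m² < 0 (imaginary mass), the point φ = 0 is a maximum rather than a minimum, so any ripple grows. The true minima sit at φ² = −m²/λ, and expanding around one of them gives curvature V″ = −2m² > 0: an ordinary, non-imaginary mass, exactly the "new default state" described above.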

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars, fields that can be defined by a single number at each point, sort of like a temperature. The particles this article is describing aren’t scalars, though, they’re fermions, the type of particle that includes everyday matter like electrons. Those sorts of particles can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why this is, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.

Misleading Headlines and Tacky Physics, Oh My!

It’s been making the rounds on the blogosphere (despite having come out three months ago). It has probably shown up on your Facebook feed. It’s the news that (apparently) one of the biggest discoveries of recent years may have been premature. It’s…

The Huffington Post writing a misleading headline to drum up clicks!

The article linked above is titled “Scientists Raise Doubts About Higgs Boson Discovery, Say It Could Be Another Particle”. And while that is indeed technically all true, it’s more than a little misleading.

When the various teams at the Large Hadron Collider announced their discovery of the Higgs, they didn’t say it was exactly the Higgs predicted by the Standard Model. In fact, it probably shouldn’t be: most of the options for extending the Standard Model, like supersymmetry, predict a Higgs boson with slightly different properties. Until the Higgs is measured more precisely, these slightly different versions won’t be ruled out.

Of course, “not ruled out” is not exactly newsworthy, which is the main problem with this article. The Huffington Post quotes a paper that argues, not that there is new evidence for an alternative to the Higgs, but simply that one particular alternative that the authors like hasn’t been ruled out yet.

Also, it’s probably the tackiest alternative out there.

The theory in question is called Technicolor, and if you’re imagining a certain coat then you may have an idea of how tacky we’re talking.

Any Higgs will do…

To describe technicolor, let’s take a brief aside and talk about the colors of quarks.

Rather than having one type of charge going from plus to minus like electromagnetism, the strong nuclear force has three types of charge, called red, green, and blue. Quarks are charged under the strong force, and can be red, green, or blue, while the antimatter partners of quarks have the equivalent of negative charges: anti-red, anti-green, and anti-blue. The strong force binds quarks together into protons and neutrons. The strong force is also charged under itself, which means that not only does it bind quarks together, it also binds itself together, so that it only acts at very, very short range.

In combination, these two facts have one rather surprising consequence. A proton contains three quarks, but a proton’s mass is over a hundred times the total mass of three quarks. The same is true of neutrons.

The reason why is that most of the mass isn’t coming from the quarks, it’s coming from the strength of the strong force. Mass, contrary to what you might think, isn’t fundamental “stuff”. It’s just a handy way of talking about energy that isn’t due to something we can easily see. Particles have energy because they move, but they also have energy due to internal interactions, as well as interactions with other fields like the Higgs field. While a lone quark’s mass is due to its interaction with the Higgs field, the quarks inside a proton are also interacting with each other, gaining enormous amounts of energy from the strong force trapped within. That energy, largely invisible from an outside view, contributes most of what we see as the mass of the proton.
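A quick back-of-the-envelope check, using approximate quark masses (the exact values depend on how you define them; take these numbers as illustrative):

```python
# Approximate quark masses, in MeV (illustrative values)
m_up = 2.2
m_down = 4.7

# A proton is made of two up quarks and one down quark
quark_total = 2 * m_up + m_down  # about 9 MeV

# Measured proton mass, in MeV
m_proton = 938.3

# The quarks' masses account for only about 1% of the proton's mass;
# the rest is energy stored in the strong force
print(m_proton / quark_total)  # a factor of over 100
```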

Technicolor asks the following: what if it’s not just protons and neutrons? What if the mass of everything, quarks and electrons and the W and Z bosons, was due not truly to the Higgs, but to another force, like the strong force but even stronger? The Higgs we think we saw at the LHC would not be fundamental, but merely a composite, made up of two “techni-quarks” with “technicolor” charges. [Edited to remove confusion with Preon Theory]

It’s…an idea. But it’s never been a very popular one.

Part of the problem is that the simpler versions of technicolor have been ruled out, so theorists are having to invoke increasingly baroque models to try to make it work. But that, to some extent, is also true of supersymmetry.

A bigger problem is that technicolor is just kind of…tacky.

Technicolor doesn’t say anything deep about the way the universe works. It doesn’t propose new [types of] symmetries, and it doesn’t say anything about what happens at the very highest energies. It’s not really tied in to any of the other lines of speculation in physics, and it doesn’t lead to a lot of discussion between researchers. It doesn’t require an end, a fundamental lowest level with truly fundamental particles: you could potentially keep adding new levels of technicolor, new things made up of other things made up of other things, ad infinitum.

And the fleas that bite ’em, presumably.

[Note: to clarify, technicolor theories don’t actually keep going like this, their extra particles don’t require another layer of technicolor to gain their masses. That would be an actual problem with the concept itself, not a reason it’s tacky. It’s tacky because, in a world where most physicists feel like we’ve really gotten down to the fundamental particles, adding new composite objects seems baroque and unnecessary, like adding epicycles. Fleas upon fleas as it were.]

In a word, it’s not sexy.

Does that mean it’s wrong? No, of course not. As the paper linked by Huffington Post points out, technicolor hasn’t been ruled out yet.

Does that mean I think people shouldn’t study it? Again, no. If you really find technicolor meaningful and interesting, go for it! Maybe you’ll be the kick it needs to prove itself!

But good grief, until you manage that, please don’t spread your tacky, un-sexy theory all over Facebook. A theory like technicolor should get press when it’s got a good reason, and “we haven’t been ruled out yet” is never, ever, a good reason.

 

[Edit: Esben on Facebook is more well-informed about technicolor than I am, and pointed out some issues with this post. Some of them are due to me conflating technicolor with another old and tacky theory, while some were places where my description was misleading. Corrections in bold.]

What’s an Amplitude? Just about everything.

I am an Amplitudeologist. In other words, I study scattering amplitudes. I’ve explained bits and pieces of what scattering amplitudes are in other posts, but I ought to give a short definition here so everyone’s on the same page:

A scattering amplitude is the formula used to calculate the probability that some collection of particles will “scatter”, emerging as some (possibly different) collection of particles.

Note that I’m using some weasel words here. The scattering amplitude is not a probability itself, but “the formula used to calculate the probability”. For those familiar with the mathematics of waves, the scattering amplitude gives the amplitude of a “probability wave” that must be squared to get the probability. (Those familiar with waves might also ask: “If this is the amplitude, what about the phase?” The truth is that because scattering amplitudes are calculated using complex numbers, what we call the “amplitude” also contains information about the wave’s “phase”. It may seem like an inconsistent way to name things from the perspective of a beginning student, but it is actually consistent with the terminology in a large chunk of physics.)
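Here is a toy illustration of that “squared” step, with a made-up complex number standing in for an amplitude (the value means nothing; the operations are the point):

```python
import cmath

# A hypothetical scattering amplitude: just a complex number
amplitude = 0.3 + 0.4j

# The probability is the squared magnitude of the amplitude
probability = abs(amplitude) ** 2  # about 0.25

# The extra information the complex number carries, alongside its magnitude
phase = cmath.phase(amplitude)

print(probability, phase)
```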

In some of the simplest scattering amplitudes, particles literally “scatter”, with two particles “colliding” and emerging traveling in different directions.

A scattering amplitude can also describe a more complicated situation, though. At particle colliders like the Large Hadron Collider, two particles (a pair of protons for the LHC) are accelerated fast enough that when they collide they release a whole slew of new particles. Since it still fits the “some particles go in, some particles go out” template, this is still described by a scattering amplitude.

It goes even further than that, though, because “some particles” could also just be “one particle”. If you’re dealing with something unstable (the particle equivalent of radioactive, essentially) then one particle can decay into two or more particles. There’s a whole slew of questions that require that sort of calculation. For example, if unstable particles were produced in the early universe, how many of them would be left around today? If dark matter is unstable (and some possible candidates are), when it decays it might release particles we could detect. In general, this sort of scattering amplitude is often of interest to astrophysicists when they happen to get involved in particle physics.

You can even use scattering amplitudes to describe situations that, at first glance, don’t sound like collisions of particles at all. If you want to find the effect of a magnetic field on an electron to high accuracy, the calculation also involves a scattering amplitude. A magnetic field can be thought of in terms of photons, particles of light, because light is a vibration in the electromagnetic field. This means that the effect of a magnetic field on an electron can be calculated by “scattering” an electron and a photon.

[Diagram: an electron “scattering” off a photon. If this looks familiar, check the handbook section.]

In fact, doing the calculation in this way leads to what is possibly the most accurately predicted number in all of science.
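That number is the electron’s anomalous magnetic moment. The first correction, computed by Julian Schwinger from essentially this kind of electron-photon diagram, fits on one line (the values below are approximate):

```python
import math

# Fine-structure constant (approximate measured value)
alpha = 1 / 137.035999

# Schwinger's one-loop result for the electron's anomalous magnetic moment:
# a_e = (g - 2) / 2 = alpha / (2 * pi), the first term of the full calculation
a_e = alpha / (2 * math.pi)

print(a_e)  # about 0.00116, already close to the measured 0.00115965...
```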

Scattering amplitudes show up all over the place, from particle physics at the Large Hadron Collider to astrophysics to delicate experiments on electrons in magnetic fields. That said, there are plenty of things people calculate in theoretical physics that don’t use scattering amplitudes, either because they involve questions that are difficult to answer from the scattering amplitude point of view, or because they invoke different formulas altogether. Still, scattering amplitudes are central to the work of a large number of physicists. They really do cover just about everything.

“China” plans super collider

When I saw the headline, I was excited.

“China plans super collider” says Nature News.

There’s been a lot of worry about what may happen if the Large Hadron Collider finishes its run without discovering anything truly new. If that happens, finding new particles might require a much bigger machine…and since even that machine has no guarantee of finding anything at all, world governments may be understandably reluctant to fund it.

As such, several prominent people in the physics community have put their hopes on China. The country’s somewhat autocratic nature means that getting funding for a collider is a matter of convincing a few powerful people, not a whole fractious gaggle of legislators. It’s a cynical choice, but if it keeps the field alive so be it.

If China was planning a super collider, then, that would be great news!

Too bad it’s not.

Buried eight paragraphs into Nature’s article, we find the following:

The Chinese government is yet to agree on any funding, but growing economic confidence in the country has led its scientists to believe that the political climate is ripe, says Nick Walker, an accelerator physicist at DESY, Germany’s high-energy physics laboratory in Hamburg. Although some technical issues remain, such as keeping down the power demands of an energy-hungry ring, none are major, he adds.

The Chinese government is yet to agree on any funding. China, if by China you mean the Chinese government, is not planning a super collider.

So who is?

Someone must have drawn these diagrams, after all.

Reading the article, the most obvious answer is Beijing’s Institute of High Energy Physics (IHEP). While this is true, the article leaves out any mention of a more recently founded site, the Center for Future High Energy Physics (CFHEP).

This is a bit odd, given that CFHEP’s whole purpose is to compose a plan for the next generation of colliders, and persuade China’s government to implement it. They were founded, with heavy involvement from non-Chinese physicists including their director Nima Arkani-Hamed, with that express purpose in mind. And since several of the quotes in the article come from Yifang Wang, director of IHEP and member of the advisory board of CFHEP, it’s highly unlikely that this isn’t CFHEP’s plan.

So what’s going on here? On one level, it could be a problem on the journalists’ side. News editors love to rewrite headlines to be more misleading and click-bait-y, and claiming that China is definitely going to build a collider draws much more attention than pointing out the plans of a specialized think tank. I hope that it’s just something like that, and not the sort of casual racism that likes to think of China as a single united will. Similarly, I hope that the journalists involved just didn’t dig deep enough to hear about CFHEP, or left it out to simplify things, because there is a somewhat darker alternative.

CFHEP’s goal is to convince the Chinese government to build a collider, and what better way to do that than to present them with a fait accompli? If the public thinks that this is “China’s” plan, that wheels are already in motion, wouldn’t it benefit the Chinese government to play along? Throw in a few sweet words about the merits of international collaboration (a big part of the strategy of CFHEP is to bring international scientists to China to show the sort of community a collider could attract) and you’ve got a winning argument, or at least enough plausibility to get US and European funding agencies in a competitive mood.

This…is probably more cynical than what’s actually going on. For one, I don’t even know whether this sort of tactic would work.

Do these guys look like devious manipulators?

Indeed, it might just be a journalistic omission, part of a wider tendency of science journalists to focus on big projects and ignore the interesting part, the nitty-gritty things that people do to push them forward. It’s a shame, because people are what drive the news forward, and as long as science is viewed as something apart from real human beings people are going to continue to mistrust and misunderstand it.

Either way, one thing is clear. The public deserves to hear a lot more about CFHEP.

Look what I made!

In a few weeks, I’ll be giving a talk for Stony Brook’s Graduate Awards Colloquium, to an audience of social science grad students and their parents.

One of the most useful tools when talking to people in other fields is a shared image. You want something from your field that they’ve seen, that they’re used to, that they’ll recognize. Building off of that kind of thing can be a great way to communicate.

If there’s one particle physics image that lots and lots of people have seen, it’s the Standard Model. Generally, it’s organized into charts like this:

[Image: the standard chart of the Standard Model of Elementary Particles]

I thought that if people saw a chart like that, but for N=4 super Yang-Mills, it might make the theory seem a bit more familiar. N=4 super Yang-Mills has a particle much like the Standard Model’s gluon with spin 1, paired with four gluinos, particles that are sort of but not really like quarks with spin 1/2, and six scalars, particles whose closest analogue in the Standard Model is the Higgs with spin 0.

In N=4 super Yang-Mills, none of these particles have any mass, since if supersymmetry isn’t “broken” all particles have the same mass. So where mass is written in the Standard Model table, I can just put zero. The table I linked also gives the electric charge of each particle. That doesn’t really mean anything for N=4 super Yang-Mills. It isn’t a theory that tries to describe the real world, so there’s no direct equivalent to a real-world force like electromagnetism. Since everything in the theory has to have the same charge, again due to supersymmetry, I can just list all of their “electric charges” as zero.

Putting it all together, I get the diagram below. The theory has eleven particles in total, so it won’t fit into a nice neat square. Still, this should be more familiar than most of the ways I could present things.

[Image: the particle content of N=4 super Yang-Mills, arranged like the Standard Model chart]
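The chart’s contents can also be written out as a short list, using only the facts above (all masses and electric charges are zero, thanks to unbroken supersymmetry):

```python
# The eleven particles of N=4 super Yang-Mills, as described above
particles = (
    [("gluon", 1)]                                  # one spin-1 particle
    + [(f"gluino {i}", 0.5) for i in range(1, 5)]   # four spin-1/2 gluinos
    + [(f"scalar {i}", 0) for i in range(1, 7)]     # six spin-0 scalars
)

print(len(particles))  # 11
```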

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of sorts of particles out there, at least if you judge by science fiction. Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects, you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs Boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why, gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word “super” in front of something almost always works. If that fails, it’s time to go for more specific conventions: to find the partner of an existing particle, if the new particle is a boson, just add “s-” (for “scalar”, apparently, not “super”) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add “-ino” to the end, getting something like a gluino if you start with a gluon. If you’ve heard of neutrinos, you may know that neutrino means “little neutral one”. You might perfectly rationally expect that gluino means “little gluon”, if you had any belief that physicists name things logically. We don’t. A gluino is called a gluino because it’s a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what “neutrino” actually means.

Pictured: the superpartner of Nidoran?

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probabilities using brackets of two quantum states, \langle a | b\rangle. What if you need to separate the two states, \langle a| and |b\rangle? Then you’ve got a “bra” and a “ket”!

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for “Three quarks for Muster Mark!” While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent and b) this isn’t exactly the most important or memorable line in the book. So Gell-Mann wasn’t so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The final, worst category, though, don’t even have good stories going for them. They are the names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.

A Wild Infinity Appears! Or, Renormalization

Back when Numberphile’s silly video about the zeta function came up, I wrote a post explaining the process of regularization, where physicists take an incorrect infinite result and patch it over to get something finite. At the end of that post I mentioned a particular variant of regularization, called renormalization, which was especially important in quantum field theory.

Renormalization has to do with how we do calculations and make predictions in particle physics. If you haven’t read my post “What’s so hard about Quantum Field Theory anyway?” you should read it before trying to tackle this one. The important concepts there are that probabilities in particle physics are calculated using Feynman Diagrams, that those diagrams consist of lines representing particles and points representing the ways they interact, that each line and point in the diagram gives a number that must be plugged in to the calculation, and that to do the full calculation you have to add up all the possible diagrams you can draw.

Let’s say you’re interested in finding out the mass of a particle. How about the Higgs?

You can’t weigh it, or otherwise see how gravity affects it: it’s much too light, and decays into other particles much too fast. Luckily, there is another way. As I mentioned in this post, a particle’s mass and its kinetic energy (energy of motion) both contribute to its total energy, which in turn affects what particles it can turn into if it decays. So if you want to find a particle’s mass, you need the relationship between its motion and its energy.
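The relationship in question is E² = p²c² + (mc²)², or, in units where c = 1, m² = E² − p². Given the energies and momenta of the decay products, you can reconstruct the parent’s mass. Here’s a toy reconstruction with made-up numbers, loosely mimicking a Higgs decaying to two photons:

```python
import math

# Made-up decay products: two photons, back to back along z, 62.5 GeV each.
# Each entry is (energy, px, py, pz) in GeV, in units where c = 1.
photon1 = (62.5, 0.0, 0.0, 62.5)
photon2 = (62.5, 0.0, 0.0, -62.5)

# Total energy and momentum of the pair
E  = photon1[0] + photon2[0]
px = photon1[1] + photon2[1]
py = photon1[2] + photon2[2]
pz = photon1[3] + photon2[3]

# Invariant mass of whatever decayed: m^2 = E^2 - p^2
mass = math.sqrt(E**2 - px**2 - py**2 - pz**2)

print(mass)  # 125.0 GeV, roughly the measured Higgs mass
```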

Suppose we’ve got a Higgs particle moving along. We know it was created out of some collision, and we know what it decays into at the end. With that, we can figure out its mass.

[Diagram: a Higgs created in a collision, traveling along, and then decaying]

There’s a problem here, though: we only know what happens at the beginning and the end of this diagram. We can’t be certain what happens in the middle. That means we need to add in all of the other diagrams, every possible diagram with that beginning and that end.

Just to look at one example, suppose the Higgs particle splits into a quark and an anti-quark (the antimatter version of the quark). If they come back together later into a Higgs, the process would look the same from the outside. Here’s the diagram for it:

[Diagram: the Higgs splitting into a quark and an anti-quark, which recombine into a Higgs]

When we’re “measuring the Higgs mass”, what we’re actually measuring is the sum of every single diagram that begins with the creation of a Higgs and ends with it decaying.

Surprisingly, that’s not the problem!

The problem comes when you try to calculate the number that comes out of that diagram, when the Higgs splits into a quark-antiquark pair. According to the rules of quantum field theory, those quarks don’t have to obey the normal relationship between total energy, kinetic energy, and mass. They can have any kinetic energy at all, from zero all the way up to infinity. And because it’s quantum field theory, you have to add up all of those possible kinetic energies, all the way up. In this case, the diagram actually gives you infinity.

(Note that not every diagram with unlimited kinetic energy is going to be infinite. The first time theorists calculated infinite diagrams, they were surprised.

For those of you who know calculus, the problem here comes when you integrate over momentum. The two quarks each give a factor of one over the momentum, and then you integrate over the four components of momentum (three dimensions of space plus time), which gives an infinite result. If you had different particles arranged in a different way you might divide by more factors of momentum and get a finite value.)
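Schematically, ignoring all numerical factors and the finer structure of the propagators, with Λ as an upper limit on the momentum k, the divergence looks like this:

```latex
\int^{\Lambda} d^4 k \; \frac{1}{k} \cdot \frac{1}{k}
\;\sim\; \int^{\Lambda} \frac{k^3 \, dk}{k^2}
\;=\; \int^{\Lambda} k \, dk
\;\sim\; \Lambda^2 \;\longrightarrow\; \infty
\quad \text{as } \Lambda \to \infty .
```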

The modern understanding of infinite results like this is that they arise from our ignorance. The mass of the Higgs isn’t actually infinity, because we can’t just add up every kinetic energy up to infinity. Instead, at some point before we get to infinity “something else” happens.

We don’t know what that “something else” is. It might be supersymmetry, it might be something else altogether. Whatever it is, we don’t know enough about it now to include it in the calculations as anything more than a cutoff, a point beyond which “something” happens. A theory with a cutoff like this, one that is only “effective” below a certain energy, is called an Effective Field Theory.
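As a toy illustration of cutoff dependence (my own schematic integrand, not the actual Standard Model calculation), you can numerically integrate something that grows the way the quark loop does and watch the answer blow up as the cutoff is raised:

```python
def toy_loop(cutoff, mass=1.0, steps=200_000):
    """Midpoint-rule integral of k**3 / (k**2 + mass**2) from 0 to cutoff.

    A toy stand-in for the quadratically growing Higgs-mass diagram:
    the answer scales like cutoff**2 instead of settling down.
    (The real integrand is four-dimensional and has spinor structure.)
    """
    dk = cutoff / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk  # midpoint of each integration slice
        total += k**3 / (k**2 + mass**2) * dk
    return total

# Doubling the cutoff roughly quadruples the result:
ratio = toy_loop(200.0) / toy_loop(100.0)
print(ratio)  # close to 4 -- the signature of quadratic growth
```

Raising the cutoff never stabilizes the answer, which is exactly why a finite measured mass tells us "something else" must intervene before infinity.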

While we don’t know what happens at higher energies, we still need a way to complete our calculations if we want to use them in the real world. That’s where renormalization comes in.

When we use renormalization, we bring in experimental observations. We know that, no matter what is contributing to the Higgs particle’s mass, what we observe in the real world is finite. “Something” must be canceling the divergence, so we simply assume that “something” does, and that the final result agrees with the experiment!

“Something”

In order to do this, we accepted the experimental result for the mass of the Higgs. That means that we’ve lost any ability to predict the mass from our theory. This is a general rule for renormalization: we trade ignorance (of the “something” that happens at high energy) for a loss of predictability.
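In symbols (a schematic sketch with made-up labels, not the full Standard Model formula), the trade looks like:

```latex
m_{\text{observed}}^2 \;=\; m_{\text{bare}}^2 + \delta m^2(\Lambda)
```

Here δm²(Λ) is the cutoff-dependent contribution from diagrams like the quark loop. Renormalization amounts to choosing m²_bare so that the sum matches the measured mass no matter what Λ is; the measured mass becomes an input to the theory rather than an output.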

If we had to do this for every calculation, we couldn’t predict anything at all. Luckily, for many theories (called renormalizable theories) there are theorems proving that you only need to do this a few times to fix the entire theory. You give up the ability to predict the results of a few experiments, but you gain the ability to predict the rest.

Luckily for us, the Standard Model is a renormalizable theory. Unfortunately, some important theories are not. In particular, quantum gravity is non-renormalizable. In order to fix the infinities in quantum gravity, you need to do the renormalization trick an infinite number of times, losing an infinite amount of predictability. Thus, while making a theory of quantum gravity is not difficult in principle, in practice the most obvious way to create the theory results in a “theory” that can never make any predictions.

One of the biggest virtues of string theory (some would say its greatest virtue) is that these infinities never appear. You never need to renormalize string theory in this way, which is what lets it work as a theory of quantum gravity. N=8 supergravity, the gravity cousin of N=4 super Yang-Mills, might also have this handy property, which is why many people are so eager to study it.

Amplitudes on Paperscape

Paperscape is a very cool tool developed by Damien George and Rob Knegjens. It analyzes papers from arXiv, the paper repository where almost all physics and math papers live these days. By putting papers that cite each other closer together and pushing papers that don’t cite each other further apart, Paperscape creates a map of all the papers on arXiv, arranged into “continents” based on the links between them. Papers with more citations are shown larger, newer papers are shown brighter, and subject categories are indicated by color-coding.

Here’s a zoomed-out view:

[Image: zoomed-out Paperscape map of arXiv]

Already you can see several distinct continents, corresponding to different arXiv categories like high energy theory and astrophysics.

If you want to find amplitudes on this map, just zoom in between the purple continent (high energy theory, much of which is string theory) and the green one (high energy lattice, nuclear experiment, high energy experiment, and high energy phenomenology, broadly speaking these are all particle physics).

[Image: Paperscape map between the high energy theory and particle physics continents]

When you zoom in, Paperscape shows words that commonly appear in a given region of papers. Zoomed in this far, you can see amplitudes!

Amplitudeologists like me live on an island between particle physics and string theory. We’re connected on both sides by bridges of citations and shared terms, linking us on one side to people who study quarks and gluons, and on the other to people who study strings and geometry. Think of us like Manhattan, an island between two shores, densely networked into its surroundings.

[Image: Paperscape map zoomed in on the amplitudes region]

Zoom in further, and you can see common keywords for individual papers. Exploring around here shows not only which topics are getting talked about, but which subfields they belong to. You can see by the color-coding that many papers in amplitudes are published as hep-th, or high energy theory, but there’s a fair number of papers from hep-ph (phenomenology) and from nuclear physics as well.

There are a lot of interesting things you can do with Paperscape. You can search for individual authors, or look at individual papers to see who they cite and who cites them. Try it out!

High Energy? What does that mean?

I am a high energy physicist who uses the high energy and low energy limits of a theory that, while valid up to high energies, is also a low-energy description of what at high energies ends up being string theory (string theorists, of course, being high energy physicists as well).

If all of that makes no sense to you, congratulations, you’ve stumbled upon one of the worst-kept secrets of theoretical physics: we really could use a thesaurus.

“High energy” means different things in different parts of physics. In general, “high” versus “low” energy classifies what sort of physics you look at: “high” energy physics corresponds to the very small, while “low” energies encompass larger structures.

Many people explain this via quantum mechanics: the uncertainty principle says that the more certain you are of a particle’s position, the less certain you can be of how fast it is going, which implies that a particle highly restricted in location might have very high energy. You can also understand it without quantum mechanics, though: if two things are held close together, it is generally by a powerful force, so the bond between them contains more energy.

Another perspective is in terms of light. Physicists will occasionally use “IR”, or infrared, to mean “low energy” and “UV”, or ultraviolet, to mean “high energy”. Infrared light has long wavelengths and low energy photons, while ultraviolet light has short wavelengths and high energy photons, so the analogy is apt. However, the analogy only goes so far, since “UV physics” is often at energies much greater than those of UV light (and the same sort of situation applies for IR).
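To put rough numbers on the light analogy, a photon’s energy follows E = hc/λ. A quick sketch (using the standard conversion hc ≈ 1239.84 eV·nm):

```python
# Photon energy in electron-volts from wavelength in nanometers,
# using hc ~ 1239.84 eV*nm (standard conversion constant).
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

print(photon_energy_ev(1000))  # near-infrared: about 1.2 eV
print(photon_energy_ev(100))   # ultraviolet: about 12.4 eV
```

So even the most energetic UV photons sit at tens of electron-volts, far below the MeV-and-up energies that “UV physics” usually refers to.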

So what does “low energy” or “high energy” mean? Well…

The IR limit: Lowest of the “low energy” points, this refers to the limit of infinitely low energy. While you might compare it to “absolute zero”, really it just refers to energy so low that, compared to the other energies in your calculation, it might as well be zero. This is the “low energy limit” I mentioned in the opening sentence.

Low energy physics: Not “high energy physics”. Low energy physics covers everything from absolute zero up to atoms. Once you get up to high enough energy to break up the nucleus of an atom, you enter…

High energy physics: Also known as “particle physics”, high energy physics refers to the study of the subatomic realm, which also includes objects that aren’t technically particles, like strings and “branes”. If you exclude nuclear physics itself, high energy physics generally refers to energies of a mega-electron-volt and up. For comparison, the electrons in atoms are bound by energies of around an electron-volt, which is the characteristic energy of chemistry, so high energy physics is at least a million times more energetic. That said, high energy physicists are often interested in low energy consequences of their theories, including all the way down to the IR limit. Interestingly, by this point we’ve already passed both infrared light (from a thousandth of an electron-volt to a single electron-volt) and ultraviolet light (several electron-volts to a hundred or so). Compared to UV light, mega-electron-volt scale physics is quite high energy.

The TeV scale: If you’re operating a collider though, mega-electron-volts (or MeV) are low-energy physics. Often, calculations for colliders will assume that quarks, whose masses are around the MeV scale, actually have no mass at all! Instead, high energy for particle colliders means giga (billion) or tera (trillion) electron volt processes. The LHC, for example, operates at around 7 TeV now, with 14 TeV planned. This is the range of scales where many had hoped to see supersymmetry, but as time has gone on results have pushed speculation up to higher and higher energies. Of course, these are all still low energy from the perspective of…

The string scale: Strings are flexible, but under enormous tension that keeps them very very short. Typically, strings are posited to have a length close to the Planck length, the characteristic length at which quantum effects become relevant for gravity. This enormously small length corresponds to the enormously large Planck energy, which is on the order of 10^28 electron-volts. That’s about ten to the sixteen times the energies of the particles at the LHC, or ten to the twenty-two times the MeV scale that I called “high energy” earlier. For comparison, there are about ten to the twenty-two atoms in a milliliter of water. When extra dimensions in string theory are curled up, they’re usually curled up at this scale. This means that from a string theory perspective, going to the TeV scale means ignoring the high energy physics and focusing on low energy consequences, which is why even the highest mass supersymmetric particles are thought of as low energy physics when approached from string theory.
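The scale comparisons above are easy to check on the back of an envelope (order-of-magnitude values only, with the Planck energy taken as roughly 1.22 × 10^28 eV):

```python
# Rough energy scales, in electron-volts (order-of-magnitude values):
PLANCK_EV = 1.22e28   # Planck energy, ~1.22 x 10^28 eV
TEV = 1.0e12          # the TeV scale probed at the LHC
MEV = 1.0e6           # the MeV scale called "high energy" above

print(f"Planck / TeV: {PLANCK_EV / TEV:.1e}")  # roughly ten to the sixteen
print(f"Planck / MeV: {PLANCK_EV / MEV:.1e}")  # roughly ten to the twenty-two
```

Seeing the ratios laid out makes it clear why, from the string scale, even TeV-scale colliders count as low energy physics.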

The UV limit: Much as the IR limit is that of infinitely low energy, the UV limit is the formal limit of infinitely high energy. Again, it’s not so much an actual destination, as a comparative point where the energy you’re considering is much higher than the energy of anything else in your calculation.

These are the definitions of “high energy” and “low energy”, “UV” and “IR” that one encounters most often in theoretical particle physics and string theory. Other parts of physics have their own idea of what constitutes high or low energy, and I encourage you to ask people who study those parts of physics if you’re curious.