Astrophysics, the Impossible Science

Last week, Nobel Laureate Martinus Veltman gave a talk at the Simons Center. After the talk, a number of people asked him questions about several things he didn’t know much about, including supersymmetry and dark matter. After deflecting a few such questions, he went on a brief rant against astrophysics, professing suspicion of a field that can’t do experiments and making fun of an astrophysicist colleague’s imprecise data. The rant was a rather memorable feat of curmudgeonliness, and apparently typical Veltman behavior. It left several of my astrophysicist friends fuming. For my part, it inspired me to write a positive piece on astrophysics, highlighting something I don’t think is brought up enough.

The thing about astrophysics, see, is that astrophysics is impossible.

Imagine, if you will, an astrophysical object. As an example, picture a black hole swallowing a star.

Are you picturing it?

Now think about where you’re looking from. Chances are, you’re at some point up above the black hole, watching the star swirl around, seeing something like this:

Where are you in this situation? On a spaceship? Looking through a camera on some probe?

Astrophysicists don’t have spaceships that can go visit black holes. Even the longest-ranging probes have barely left the solar system. If an astrophysicist wants to study a black hole swallowing a star, they can’t just look at a view like that. Instead, they look at something like this:

The image on the right is an artist’s idea of what a black hole looks like. The three on the left? They’re what the astrophysicist actually sees. And even that has been cleaned up a bit; the raw output can be even more opaque.

A black hole swallowing a star? Just a few blobs of light, pixels on a screen. You can measure brightness and dimness, filter by color from gamma rays to radio waves, and watch how things change with time. You don’t even get a whole lot of pixels for distant objects. You can’t do experiments, either; you just have to wait for something interesting to happen and try to learn from the results.

It’s like staring at the static on a TV screen, day after day, looking for patterns, until you map out worlds and chart out new laws of physics and infer a space orders of magnitude larger than anything anyone’s ever experienced.

And naively, that’s just completely and utterly impossible.

And yet…and yet…and yet…it works!

Crazy people staring at a screen can’t successfully make predictions about what another part of the screen will look like. They can’t compare results and hone their findings. They can’t demonstrate principles (like General Relativity) that change technology here on Earth. Astrophysics builds on itself, discovery by discovery, in a way that can only be explained by accepting that it really does work (a theme that I’ve had occasion to harp on before).

Physics began with astrophysics. Trying to explain the motion of dots in a telescope and objects on the ground with the same rules led to everything we now know about the world. Astrophysics is hard, arguably impossible…but impossible or not, there are people who spend their lives successfully making it work.

What does Copernicus have to say about String Theory?

Putting aside some highly controversial exceptions, string theory has made no testable predictions. Conceivably, a world governed by string theory and a world governed by conventional particle physics would be indistinguishable to every test we could perform today. Furthermore, it’s not even possible to say that string theory predicts the same things with fewer fudge factors, as string theory descriptions of our world seem to have dramatically more free parameters than conventional ones.

Critics of string theory point to this as a reason why string theory should be excluded from science, sent off to the chilly arctic wasteland of the math department. (No offense to mathematicians, I’m sure your department is actually quite warm and toasty.) What these critics are missing is an important feature of the scientific process: before scientists are able to make predictions, they propose explanations.

To explain what I mean by that, let’s go back to the beginning of the 16th century.

At the time, the authority on astronomy was still Ptolemy’s Syntaxis Mathematica, a book so renowned that it is better known by the Arabic-derived superlative Almagest, “the greatest”. Ptolemy modeled the motions of the planets and stars as a series of interlocking crystal spheres with the Earth at the center, and did so well enough that in the centuries since, only minor improvements to the model had been made.

This is much trickier than it sounds, because even in Ptolemy’s day astronomers could tell that the planets did not move in simple circles around the Earth. There were major distortions from circular motion, the most dramatic being the phenomenon of retrograde motion.

If the planets really were moving in simple circles around the Earth, you would expect them to keep moving in the same direction. However, ancient astronomers saw that sometimes, some of the planets moved backwards. The planet would slow down, turn around, go backwards a bit, then come to a stop and turn again.

Thus sparking the invention of the spirograph.

In order to take this into account, Ptolemy introduced epicycles, extra circles of motion for the planets. The epicycle would move on the planet’s primary circle, or deferent, and the planet would rotate around the epicycle, like so:

French Wikipedia had a better picture.
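
In its simplest version (a rough sketch, not Ptolemy’s full machinery), the planet’s position is just the sum of two circular motions, one for the deferent and one for the epicycle:

x(t) = R\cos(\Omega t) + r\cos(\omega t), \qquad y(t) = R\sin(\Omega t) + r\sin(\omega t)

Here R and \Omega are the radius and angular speed of the deferent, while r and \omega are those of the epicycle. With suitable choices of those four numbers, the combined path can loop back on itself, which is how the model reproduces retrograde motion.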

These epicycles weren’t just for retrograde motion, though. They allowed Ptolemy to model all sorts of irregularities in the planets’ motions. Any deviation from a circle could conceivably be plotted out by adding another epicycle (though Ptolemy had other methods to model this sort of thing, among them something called an equant). Enter Copernicus.

Enter Copernicus’s hair.

Copernicus didn’t like Ptolemy’s model. He didn’t like equants, and what’s more, he didn’t like the idea that the Earth was the center of the universe. Like the ancient Pythagoreans, he preferred the idea that the center of the universe was a divine fire, a source of heat and light like the Sun. He decided to put together a model of the planets with the Sun in the center. And what he found, when he did, was an explanation for retrograde motion.

In Copernicus’s model, the planets always go in one direction around the Sun, never turning back. However, some of the planets move faster than the Earth, and some more slowly. If a planet is slower than the Earth, then as the Earth overtakes it, the planet will appear to move backwards against the background stars. This is tricky to visualize, but hopefully the picture below will help: Mars starts out ahead of Earth in its orbit, then falls behind, making it appear to move backwards.
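
If you’d rather see the numbers than the picture, here’s a minimal sketch (using rough textbook values for the two orbits, and ignoring their eccentricity and tilt) of how Mars’s apparent direction, as seen from Earth, reverses when the Earth overtakes it:

```python
import numpy as np

# Toy Copernican model: Earth and Mars on circular, coplanar orbits around the Sun.
# Radii in AU and periods in years are rough textbook values.
r_earth, r_mars = 1.0, 1.52
T_earth, T_mars = 1.0, 1.88

t = np.linspace(0.0, 1.0, 400)                      # one Earth year, starting near opposition
earth = r_earth * np.exp(2j * np.pi * t / T_earth)  # positions as complex numbers
mars = r_mars * np.exp(2j * np.pi * t / T_mars)

# Direction of Mars as seen from Earth, measured against the fixed stars.
apparent_angle = np.unwrap(np.angle(mars - earth))

# Wherever that angle decreases, Mars appears to move backwards: retrograde motion.
retrograde = np.diff(apparent_angle) < 0
print(f"Mars appears to move backwards for {retrograde.mean():.0%} of this stretch.")
```

Neither planet ever actually turns around; the apparent reversal comes entirely from the faster-moving Earth passing the slower-moving Mars.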

Despite this simplification, Copernicus still needed epicycles. The planets’ motions simply aren’t perfect circles, even around the Sun. After getting rid of the equants from Ptolemy’s theory, Copernicus’s model ended up having just as many epicycles as Ptolemy’s!

Copernicus’s model wasn’t any better at making predictions (in fact, due to some technical lapses in its presentation, it was even a little bit worse). It didn’t have fewer “fudge factors”, as it had about the same number of epicycles. If you lived in the 16th century, you would have been completely justified in believing that the Earth was the center of the universe, and not the Sun. Copernicus had failed to establish his model as scientific truth.

However, Copernicus had still done something Ptolemy didn’t: he had explained retrograde motion. Retrograde motion was a unique, qualitative phenomenon, and while Ptolemy could include it in his math, only Copernicus gave you a reason why it happened.

That’s not enough to become the reigning scientific truth, but it’s a damn good reason to pay attention. It was justification for astronomers to dedicate years of their lives to improving the model, to working with it and trying to get unique predictions out of it. It was enough that, over half a century later, Kepler could take it and turn it into a theory that did make better predictions than Ptolemy’s, that did have fewer fudge factors.

String theory as a model of the universe doesn’t make novel predictions, and it doesn’t have fewer fudge factors. What it does is explain: it explains spectra of particles in terms of shapes of space and time, the existence of gravity and light in terms of closed and open strings, and the temperature of black holes in terms of what’s going on inside them (this last really ought to be the subject of its own post; it’s one of the big triumphs of string theory). You don’t need to accept it as scientific truth. As with Copernicus’s model in his day, we don’t have the evidence for that yet. But you should understand that, as a powerful explanation, the idea of string theory as a model of the universe is worth spending time on.

Of course, string theory is useful for many things that aren’t modeling the universe. But that’s the subject of another post.

Update on the Amplituhedron

A while back I wrote a post on the Amplituhedron, a type of mathematical object found by Nima Arkani-Hamed and Jaroslav Trnka that can be used to do calculations of scattering amplitudes in planar N=4 super Yang-Mills theory. (Scattering amplitudes are formulas used to calculate probabilities in particle physics, from the probability that an unstable particle will decay to the probability that a new particle could be produced by a collider.) Since then, they have published two papers on the topic, the most recent of which came out the day before New Year’s Eve. These papers laid out the amplituhedron concept in some detail, and answered a few lingering questions. The latest paper focused on one particular formula, the probability that two particles bounce off each other. In discussing this case, the paper serves two purposes:

1. Demonstrating that Arkani-Hamed and Trnka did their homework.

2. Showing some advantages of the amplituhedron setup.

Let’s talk about them one at a time.

Doing their homework

There’s already a lot known about N=4 super Yang-Mills theory. In order to propose a new framework like the amplituhedron, Arkani-Hamed and Trnka need to show that the new framework can reproduce the old knowledge. Most of the paper is dedicated to doing just that. In several sections Arkani-Hamed and Trnka show that the amplituhedron reproduces known properties of the amplitude, like the behavior of its logarithm, its collinear limit (the situation when two momenta in the calculation become parallel), and, of course, unitarity.

What, you heard the amplituhedron “removes” unitarity? How did unitarity get back in here?

This is something that has confused several commenters, both here and on Ars Technica, so it bears some explanation.

Unitarity is the principle that enforces the laws of probability. In its simplest form, unitarity requires that all probabilities for all possible events add up to one. If this seems like a pretty basic and essential principle, it is! However, it and locality (the idea that there is no true “action at a distance”, that particles must meet to interact) can be problematic, causing paradoxes for some approaches to quantum gravity. Paradoxes like these inspired Arkani-Hamed to look for ways to calculate scattering amplitudes that don’t rely on locality and unitarity, and with the amplituhedron he succeeded.
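
If you want to see the principle in its simplest form, here’s a toy illustration (nothing to do with the amplituhedron itself, just ordinary quantum mechanics with a finite number of states): a unitary matrix acting on a normalized state leaves the total probability equal to one.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5

# Build a random unitary matrix to play the role of a toy "S-matrix":
# the QR decomposition of a random complex matrix gives a unitary Q.
S, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

# A normalized state: its probabilities sum to one.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

psi_out = S @ psi
print(np.sum(np.abs(psi_out) ** 2))  # still 1.0, up to rounding error
```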

However, just because the amplituhedron doesn’t rely on unitarity and locality doesn’t mean it violates them. The amplituhedron, for all its novelty, still calculates quantities in N=4 super Yang-Mills. N=4 super Yang-Mills is well understood; it’s well-behaved and cuddly, and it obeys locality and unitarity.

This is why the amplituhedron is not nearly as exciting as a non-physicist might think. The amplituhedron, unlike most older methods, isn’t based on unitarity and locality. However, the final product still has to obey unitarity and locality, because it’s the same final product that others calculate through other means. So it’s not as if we’ve completely given up on basic principles of physics.

Not relying on unitarity and locality is still valuable, though. For those of us who research scattering amplitudes, it has often been useful to try to “eliminate” one principle or another from our calculations. Twenty years ago, avoiding Feynman diagrams was the key to finding dramatic simplifications. Now, many different approaches try to sidestep different principles. (For example, while the amplituhedron calculates an integrand and leaves a final integral to be done, I’m working on approaches that never employ an integrand.)

If we can avoid relying on some “basic” principle, that’s usually good evidence that the principle might be a consequence of something even more basic. By showing how unitarity can arise from the amplituhedron, Arkani-Hamed and Trnka have shown that a seemingly basic principle can come out of a theory that doesn’t impose it.

Advantages of the Amplituhedron

Not all of the paper compares to old results and principles, though. A few sections instead investigate novel territory, and in doing so show some of the advantages and disadvantages of the amplituhedron.

Last time I wrote on this topic, I was unclear on whether the amplituhedron was more efficient than existing methods. At this point, it appears that it is not. While the formula that the amplituhedron computes has been found by other methods up to seven loops, the amplituhedron itself can only get up to three loops or so in practical cases. (Loops are a way that calculations are classified in particle physics. More loops means a more complex calculation, and a more precise final result.)

The amplituhedron’s primary advantage is not efficiency, but rather the fact that its mathematical setup makes it straightforward to derive interesting properties for any number of loops desired. As Trnka occasionally puts it, the central accomplishment of the amplituhedron is to find “the question to which the amplitude is the answer”. Being able to phrase this “question” mathematically lets one be very general, which makes it possible to discover several properties that should hold no matter how complex the rest of the calculation becomes. It also has another implication: if this mathematical question has a complete mathematical answer, that answer could give the amplitude for any number of loops. So while the amplituhedron is not more efficient than other methods now, it has the potential to be dramatically more efficient if it can be fully understood.

All that said, it’s important to remember that the amplituhedron is still limited in scope. Currently, it applies to a particular theory, one that doesn’t (and isn’t meant to) describe the real world. It’s still too early to tell whether similar concepts can be defined for more realistic theories. If they can, though, it won’t depend on supersymmetry or string theory. One of the most powerful techniques for making predictions for the Large Hadron Collider, the technique of generalized unitarity, was first applied to N=4 super Yang-Mills. While the amplituhedron is limited now, I would not be surprised if it (and its competitors) give rise to practical techniques ten or twenty years down the line. It’s happened before, after all.

Four Gravitons and a…Postdoc?

As a few of you already know, it’s looking increasingly certain that I will be receiving my Ph.D. in the spring. I’ll graduate, ceasing to be a grad student and becoming that most mysterious of academic entities, a postdoc.

When describing graduate school before, I compared it to an apprenticeship. (I expanded on that analogy more here.) Let’s keep pursuing that analogy. If a graduate student is like an apprentice, then a Postdoctoral Scholar, or Postdoc, is like a journeyman.

In Medieval Europe, once an apprenticeship was completed the apprentice was permitted to work independently, earning a wage for their own labors. However, they still would not have their own shop. Instead, they would work for a master craftsman. Such a person was called a journeyman, after the French word journée, meaning a day’s work.

Similarly, once a graduate student gets their Ph.D., they are able to do scientific research independently. However, most graduate students are not ready to be professors when fresh out of their Ph.D. Instead, they become postdocs, working in an established professor’s group. Like a journeyman, a postdoc is nominally independent, but in practice works under loose supervision from the more mature members of their field.

Another similarity between postdocs and journeymen is their tendency to travel. Historically, a journeyman would spend several years traveling, studying in the workshops of several masters. Similarly, a postdoc will often (especially in today’s interconnected world) travel far from where they began in order to broaden their capabilities.

A postdoctoral job generally lasts two or three years (or just one, for particularly short positions). Most scientists will go through at least one postdoctoral position after earning their Ph.D. In some fields (theoretical physics in particular), a scientist will have two or three such positions in different places before finding a job as a professor. Postdocs are paid significantly better than grad students, but generally significantly worse than professors. They don’t (typically) teach, but depending on the institution and field they may do some TA work.

Since I’m still a grad student, my blog is titled “4 gravitons and a grad student”. That could change, though. Once I become a postdoc, I have three options:

  1. Keep the old title. Keeping the same title and domain name makes it easier for people to find the blog. It also maintains the alliteration, which I think is fun. On the other hand, it would be hard to justify, and I’d likely have to write something silly about taking a grad student perspective or the like.
  2. Change to “4 gravitons and a postdoc”. I’d lose the fun alliteration, but the title would accurately represent my current state. However, I might lose a few readers who don’t expect the change.
  3. Cut it down to “4 gravitons”. This matches the blog’s Twitter handle (@4gravitons). It’s quick, it’s recognizable, and it keeps the memorable part of the old title without adding anything new to remember. However, it would be less unique in Google searches.

What do you folks think? I’ve still got a while to decide, and I’d love to hear your opinions!

Amplitudes on Paperscape

Paperscape is a very cool tool developed by Damien George and Rob Knegjens. It analyzes papers from arXiv, the paper repository where almost all physics and math papers live these days. By putting papers that cite each other closer together and pushing papers that don’t cite each other further apart, Paperscape creates a map of all the papers on arXiv, arranged into “continents” based on the links between them. Papers with more citations are shown larger, newer papers are shown brighter, and subject categories are indicated by color-coding.
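
I haven’t looked at Paperscape’s actual code, but the basic idea of a citation-driven layout can be sketched in a few lines: papers linked by citations pull together, every pair of papers pushes apart, and you iterate until the map settles down. Something like this toy version, with made-up papers and links:

```python
import numpy as np

papers = ["A", "B", "C", "D", "E"]
citations = [(0, 1), (1, 2), (0, 2), (3, 4)]  # hypothetical citation links

rng = np.random.default_rng(1)
pos = rng.normal(size=(len(papers), 2))       # random starting positions

for _ in range(500):
    forces = np.zeros_like(pos)
    # Every pair of papers repels, more strongly the closer they are.
    for i in range(len(papers)):
        for j in range(len(papers)):
            if i != j:
                d = pos[i] - pos[j]
                forces[i] += d / (np.linalg.norm(d) ** 2 + 1e-6)
    # Papers joined by a citation attract, like a spring.
    for i, j in citations:
        d = pos[j] - pos[i]
        forces[i] += 0.5 * d
        forces[j] -= 0.5 * d
    pos += 0.05 * forces

print(pos)  # clusters of mutually-citing papers end up near each other
```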

Here’s a zoomed-out view:

PaperscapeFullMap

Already you can see several distinct continents, corresponding to different arXiv categories like high energy theory and astrophysics.

If you want to find amplitudes on this map, just zoom in between the purple continent (high energy theory, much of which is string theory) and the green one (high energy lattice, nuclear experiment, high energy experiment, and high energy phenomenology; broadly speaking, these are all particle physics).

PaperscapeAmplitudesMap

When you zoom in, Paperscape shows words that commonly appear in a given region of papers. Zoomed in this far, you can see amplitudes!

Amplitudeologists like me live on an island between particle physics and string theory. We’re connected on both sides by bridges of citations and shared terms, linking us to people who study quarks and gluons on one side and to people who study strings and geometry on the other. Think of us like Manhattan, an island between two shores, densely networked into its surroundings.

PaperscapeZoomedMap

Zoom in further, and you can see common keywords for individual papers. Exploring around here shows not only what is getting talked about, but where it’s coming from as well. You can see from the color-coding that many papers in amplitudes are published as hep-th, or high energy theory, but there are also a fair number of papers from hep-ph (phenomenology) and from nuclear physics.

There are a lot of interesting things you can do with Paperscape. You can search for individuals, or look at individual papers, seeing who they cite and who cites them. Try it out!

The Amplitudes Revolution Will Not Be Televised (But It Will Be Streamed)

I’ve been at the Simons Center’s workshop on the Geometry and Physics of Scattering Amplitudes all week, so I don’t have time for a long post. There have been a lot of great talks from a lot of great amplitudes-folks (including one on Tuesday by Lance Dixon discussing this work, and one on the same day explaining the much-hyped amplituhedron). Curious folks can follow the conference link above to find videos and slides for each of the talks, arranged by the talk schedule.

I’ve made some great contacts, picked up a couple running jokes (check out Rutger Boels’s talk on Monday and Lance’s talk on Tuesday), heard the phrase “only seven loops” stated in relative seriousness, and heard the story of why the conference ended up choosing an artist’s conception of the amplituhedron for the workshop poster, which I can relate if folks are especially curious.

Elegance, Not So Mysterious

You’ll often hear theoretical physicists in the media referring to one theory or another as “elegant”. String theory in particular seems to get this moniker fairly frequently.

It may often seem like mathematical elegance is some sort of mysterious sixth sense theorists possess, as inexplicable to the average person as color to a blind person. What’s “elegant” about string theory, after all?

Before explaining elegance, I should take a bit of time to say what it’s not. Elegance isn’t Occam’s razor. It isn’t naturalness, either. Both of those concepts have their own technical definitions.

Elegance, by contrast, is a much hazier, and yet much simpler, notion. It’s hazy enough that any definition could provoke arguments, but I can at least give you an approximate idea by telling you that an elegant theory is simple to describe, if you know the right terms. Often, it is simpler than the phenomenon that it explains.

How does this apply to something like string theory? String theory seems to be incredibly complicated: ten dimensions, curled up in a truly vast number of different ways, giving rise to whole spectra of particles.

That said, the basic idea is quite simple. String theory asks the question: what if, in addition to fundamental point-particles (zero dimensional objects), there were fundamental objects of other dimensions? That idea leads to complicated consequences: if your theory is going to produce all the particles of the real world then you need the ten dimensions and the supersymmetry and yadda yadda. But the basic idea is simple to describe. An elegant theory can have very complicated consequences, but still be simple to describe.

This, broadly, is the sort of explanation theoretical physicists look for. Math is the kind of field where the same basic systems can describe very complex phenomena. Since theoretical physics is about describing the world in terms of math, the right explanation is usually the most elegant.

This can occasionally trip physicists up when they migrate to other careers. In biology, for example, the elegant solution is often not the right one, because evolution doesn’t care about elegance: evolution just grabs whatever is within reach. Financial systems and economics occasionally have similar problems. All this is to say that while elegance is an important thing for a physicist to strive for, sometimes we have to be careful about it.

The Parke-Taylor Amplitudes: Why Quantum Field Theory Might Not Be So Hard, After All

If you’ve been following my blog for a while, you know that Quantum Field Theory is hard work. To calculate anything, you have to draw an ever-increasing number of diagrams, translate them into formulas involving the momentum and energy of your particles, and add all those formulas up to get your final result, the amplitude of the process you’re interested in.

As I said in that post, my area of research involves trying to find patterns in the results of these calculations, patterns that make doing the calculation simpler. With that in mind, you might wonder why we expect to find any patterns in the first place. If Quantum Field Theory is so complicated, what assurance do we have that it can be made simpler? Where does the motivation come from?

Our motivation comes from a series of discoveries that show that things really do simplify, often in unexpected ways. I won’t go through all of these discoveries here, but I want to tell you about one of the first discoveries that showed amplitudes researchers that they were on the right track.

Let’s try to calculate a comparatively simple process. Say that we’ve got two gluons (force carrying bosons for the strong force, an example of a Yang-Mills field). Suppose the two gluons collide, and some number of gluons emerge. It could be two again, or it could be three, or more.

For now, let’s just think about diagrams at tree level, that is, diagrams with no loops. The particles can travel from place to place in the diagram, but they can’t form closed loops on the inside.

Gluons have two types of interactions, places where particle lines can come together. You can either have three lines meeting at one point, or four.

If two gluons come in and two come out, we have four possible diagrams:

4ptMHV

Note that while the last diagram looks like it has a loop in it (in the form of the triangle in the middle), actually that triangle just represents that two particles are passing each other without colliding, so that their lines cross.

The number of diagrams increases substantially as you increase the number of outgoing particles. With two particles going to three particles, you get fifteen diagrams. Here are three examples:

5ptMHV

Since the number of diagrams just keeps increasing, you’d expect the final amplitude to become more and more complicated as well. However, Steven Parke and Tomasz Taylor found in 1986 that for a particular arrangement of the spins of the particles (for technical people: this is the Maximally Helicity Violating configuration, or two particles with negative helicity and all the rest with positive helicity) the answer simplifies dramatically. In the sort of variables we use these days, the result can be expressed in an incredibly simple form:

\frac{\langle 1 | 2 \rangle^4}{ \langle 1 | 2 \rangle\langle 2 | 3 \rangle\langle 3 | 4 \rangle \ldots \langle n-1 | n \rangle\langle n | 1 \rangle}

Here the angle brackets are built from the momenta of the incoming (for 1 and 2) and outgoing (all the other numbers) particles, with n being the total number of particles (two going in, and however many going out). (Technically, these are spinor-helicity variables, and those interested in the technical details should check out chapter 3 of this or chapter 2 of this.)
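
If you’re curious what those angle brackets actually are under the hood, here’s a small numerical sketch. It builds the brackets from massless four-momenta in one common convention (sign and phase conventions vary, and I’m glossing over the distinction between incoming and outgoing particles and momentum conservation), checks them against ordinary dot products, and assembles the Parke-Taylor ratio:

```python
import numpy as np

def angle_spinor(p):
    """Angle spinor for a massless momentum p = (E, px, py, pz), assuming E + pz > 0."""
    E, px, py, pz = p
    return np.array([np.sqrt(E + pz), (px + 1j * py) / np.sqrt(E + pz)])

def angle_bracket(pi, pj):
    """The spinor product <i j> built from two massless momenta."""
    li, lj = angle_spinor(pi), angle_spinor(pj)
    return li[0] * lj[1] - li[1] * lj[0]

def parke_taylor(momenta):
    """|numerator / denominator| of the Parke-Taylor formula, with particles 1 and 2
    carrying the negative helicities; couplings and overall phases are dropped."""
    n = len(momenta)
    numer = angle_bracket(momenta[0], momenta[1]) ** 4
    denom = np.prod([angle_bracket(momenta[i], momenta[(i + 1) % n]) for i in range(n)])
    return abs(numer / denom)

# Sanity check: |<i j>|^2 equals |2 p_i . p_j| for massless momenta.
p1 = np.array([1.0, 1.0, 0.0, 0.0])
p2 = np.array([1.0, 0.0, 1.0, 0.0])
dot = p1[0] * p2[0] - np.dot(p1[1:], p2[1:])
assert np.isclose(abs(angle_bracket(p1, p2)) ** 2, abs(2 * dot))
```

The point isn’t the conventions; it’s that a sum over four (or fifteen, or thousands of) diagrams collapses to a single ratio of these brackets.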

Nowadays, we know why this amplitude looks so simple, in terms of something called BCFW recursion. At the time though, it was quite extraordinary.

This is the sort of simplification we keep running into when studying amplitudes. Almost always, it means that there is some deeper principle that we don’t yet understand, something that would let us do our calculations much faster and more efficiently. It indicates that Quantum Field Theory might not be so hard after all.

Where are the Amplitudeologists?

As I’ve mentioned a couple of times before, I’m part of a sub-field of theoretical physics called Amplitudeology.

Amplitudeology in its modern incarnation is relatively new, and concentrated in a few specific centers. I thought it might be interesting to visualize which universities have amplitudeologists, so I took a look at the attendee lists of two recent conferences and put their affiliations into Google Maps. In an attempt to balance things, one of the conferences was held in North America and the other in Europe. Here is the result:

The West Coast of the US has two major centers, Stanford/SLAC and UCLA, focused around Lance Dixon and Zvi Bern respectively. The Northeast has a fair assortment, including places that have essentially everything, like the Perimeter Institute and the Institute for Advanced Study, and places known especially for their amplitudes work, like Brown.

Europe has quite a large number of places. There are many universities in Europe with a long history of technical research into quantum field theory. When amplitudes began to become more prominent as its own sub-field, many of these places slotted right in. In particular, there are many locations in Germany, a decent number in the UK, a few in the vicinity of CERN, and a variety of places of some importance elsewhere.

Outside of Europe and North America, there’s much less amplitudes research going on. Physics in general is a very international enterprise, and many sub-fields have a lot of participation from researchers in China, India, Japan, and Korea. Amplitudes, for the most part, hasn’t caught on in those places yet.

This map is just a result of looking at two conferences. More data would yield many places that were left out of this setup, including a longstanding community in Russia. Still, it gives you a rough idea of where to find amplitudeologists, should you have need of one.

High Energy? What does that mean?

I am a high energy physicist who uses the high energy and low energy limits of a theory that, while valid up to high energies, is also a low-energy description of what at high energies ends up being string theory (string theorists, of course, being high energy physicists as well).

If all of that makes no sense to you, congratulations, you’ve stumbled upon one of the worst-kept secrets of theoretical physics: we really could use a thesaurus.

“High energy” means different things in different parts of physics. In general, “high” versus “low” energy classifies what sort of physics you look at. “High” energy physics corresponds to the very small, while “low” energies encompass larger structures. Many people explain this via quantum mechanics: the uncertainty principle says that the more certain you are of a particle’s position, the less certain you can be of how fast it is going, which would imply that a particle that is highly restricted in location might have very high energy. You can also understand it without quantum mechanics, though: if two things are held close together, it generally has to be by a powerful force, so the bond between them will contain more energy.

Another perspective is in terms of light. Physicists will occasionally use “IR”, or infrared, to mean “low energy” and “UV”, or ultraviolet, to mean “high energy”. Infrared light has long wavelengths and low energy photons, while ultraviolet light has short wavelengths and high energy photons, so the analogy is apt. However, the analogy only goes so far, since “UV physics” is often at energies much greater than those of UV light (and the same sort of situation applies for IR).
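
To put rough numbers on that analogy, a photon’s energy is just Planck’s constant times the speed of light divided by its wavelength, which works out to about 1240 electron-volt-nanometers. A quick sketch:

```python
# Photon energy in electron-volts from wavelength in nanometers: E = hc / wavelength.
def photon_energy_eV(wavelength_nm):
    return 1239.84 / wavelength_nm  # hc is roughly 1239.84 eV*nm

print(photon_energy_eV(10_000))  # infrared (10 microns): about 0.1 eV
print(photon_energy_eV(500))     # visible green: about 2.5 eV
print(photon_energy_eV(100))     # ultraviolet: about 12 eV
```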

So what does “low energy” or “high energy” mean? Well…

The IR limit: Lowest of the “low energy” points, this refers to the limit of infinitely low energy. While you might compare it to “absolute zero”, really it just refers to energy so low that, compared to the other energies in your calculation, it might as well be zero. This is the “low energy limit” I mentioned in the opening sentence.

Low energy physics: Not “high energy physics”. Low energy physics covers everything from absolute zero up to atoms. Once you get up to high enough energy to break up the nucleus of an atom, you enter…

High energy physics: Also known as “particle physics”, high energy physics refers to the study of the subatomic realm, which also includes objects which aren’t technically particles like strings and “branes”. If you exclude nuclear physics itself, high energy physics generally refers to energies of a mega-electron-volt and up. For comparison, the electrons in atoms are bound by energies of around an electron-volt, which is the characteristic energy of chemistry, so high energy physics is at least a million times more energetic. That said, high energy physicists are often interested in low energy consequences of their theories, including all the way down to the IR limit. Interestingly, by this point we’ve already passed both infrared light (from a thousandth of an electron-volt to a single electron volt) and ultraviolet light (several electron-volts to a hundred or so). Compared to UV light, mega-electron volt scale physics is quite high energy.

The TeV scale: If you’re operating a collider though, mega-electron-volts (or MeV) are low-energy physics. Often, calculations for colliders will assume that quarks, whose masses are around the MeV scale, actually have no mass at all! Instead, high energy for particle colliders means giga (billion) or tera (trillion) electron volt processes. The LHC, for example, operates at around 7 TeV now, with 14 TeV planned. This is the range of scales where many had hoped to see supersymmetry, but as time has gone on results have pushed speculation up to higher and higher energies. Of course, these are all still low energy from the perspective of…

The string scale: Strings are flexible, but under enormous tension that keeps them very very short. Typically, strings are posited to be of length close to the Planck length, the characteristic length at which quantum effects become relevant for gravity. This enormously small length corresponds to the enormously large Planck energy, which is on the order of 10^28 electron-volts. That’s about ten to the sixteen times the energies of the particles at the LHC, or ten to the twenty-two times the MeV scale that I called “high energy” earlier. For comparison, there are about ten to the twenty-two atoms in a milliliter of water. When extra dimensions in string theory are curled up, they’re usually curled up at this scale. This means that from a string theory perspective, going to the TeV scale means ignoring the high energy physics and focusing on low energy consequences, which is why even the highest-mass supersymmetric particles are thought of as low energy physics when approached from string theory.
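
If you want to check that number yourself, the Planck energy is just a combination of Planck’s constant, the speed of light, and Newton’s gravitational constant; a quick back-of-the-envelope calculation (not a string theory result) gets you there:

```python
import math

hbar = 1.055e-34   # J s
c = 2.998e8        # m / s
G = 6.674e-11      # m^3 / (kg s^2)
eV = 1.602e-19     # J per electron-volt

# Planck energy: E = sqrt(hbar * c^5 / G)
E_planck = math.sqrt(hbar * c**5 / G)
print(E_planck / eV)  # roughly 1.2e28 electron-volts
```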

The UV limit: Much as the IR limit is that of infinitely low energy, the UV limit is the formal limit of infinitely high energy. Again, it’s not so much an actual destination as a comparative point, where the energy you’re considering is much higher than the energy of anything else in your calculation.

These are the definitions of “high energy” and “low energy”, “UV” and “IR” that one encounters most often in theoretical particle physics and string theory. Other parts of physics have their own idea of what constitutes high or low energy, and I encourage you to ask people who study those parts of physics if you’re curious.