Monthly Archives: November 2013

The Parke-Taylor Amplitudes: Why Quantum Field Theory Might Not Be So Hard, After All

If you’ve been following my blog for a while, you know that Quantum Field Theory is hard work. To calculate anything, you have to draw an ever-increasing number of diagrams, translate them into formulas involving the momentum and energy of your particles, and add all those formulas up to get your final result, the amplitude of the process you’re interested in.

As I said in that post, my area of research involves trying to find patterns in the results of these calculations, patterns that make doing the calculation simpler. With that in mind, you might wonder why we expect to find any patterns in the first place. If Quantum Field Theory is so complicated, what assurance do we have that it can be made simpler? Where does the motivation come from?

Our motivation comes from a series of discoveries that show that things really do simplify, often in unexpected ways. I won’t go through all of these discoveries here, but I want to tell you about one of the first discoveries that showed amplitudes researchers that they were on the right track.

Let’s try to calculate a comparatively simple process. Say that we’ve got two gluons (force-carrying bosons for the strong force, an example of a Yang-Mills field). Suppose the two gluons collide, and some number of gluons emerge. It could be two again, or it could be three, or more.

For now, let’s just think about diagrams at tree level, that is, diagrams with no loops. The particles can travel from place to place in the diagram, but they can’t form closed loops on the inside.

Gluons have two types of interactions, places where particle lines can come together. You can either have three lines meeting at one point, or four.

If two gluons come in and two come out, we have four possible diagrams:

[Image: the four tree-level diagrams for two gluons going to two gluons]

Note that while the last diagram looks like it has a loop in it (in the form of the triangle in the middle), actually that triangle just represents that two particles are passing each other without colliding, so that their lines cross.

The number of diagrams increases substantially as you increase the number of outgoing particles. With two particles going to three particles, you get fifteen diagrams. Here are three examples:

[Image: three of the fifteen tree-level diagrams for two gluons going to three gluons]

Since the number of diagrams just keeps increasing, you’d expect the final amplitude to become more and more complicated as well. However, Stephen Parke and Tomasz Taylor found in 1986 that for a particular arrangement of the spins of the particles (for technical people: the Maximally Helicity Violating configuration, with two particles of negative helicity and all the rest positive) the answer simplifies dramatically. In the sort of variables we use these days, the result can be expressed in an incredibly simple form:

\frac{\langle 1 | 2 \rangle^4}{ \langle 1 | 2 \rangle\langle 2 | 3 \rangle\langle 3 | 4 \rangle \ldots \langle n-1 | n \rangle\langle n | 1 \rangle}

Here the angle brackets are variables built from the momenta of the incoming (for 1 and 2) and outgoing (all the other numbers) particles, with n being the total number of particles (two going in, and however many going out). (Technically, these are spinor-helicity variables, and those interested in the technical details should check out chapter 3 of this or chapter 2 of this.)
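To make the formula concrete, here is a short Python sketch (my own illustration, not code from Parke and Taylor’s paper) that builds angle brackets from lightlike momenta and evaluates the Parke-Taylor ratio. The spinor convention used here, based on p⁺ = E + pz, is one common choice among several, and the helper names are made up for this example.

```python
import cmath

def angle_spinor(p):
    """Angle spinor for a lightlike momentum p = (E, px, py, pz) with E = |p|.
    Hypothetical helper using one common convention: lambda = (sqrt(p+), p_perp/sqrt(p+))."""
    E, px, py, pz = p
    pplus = E + pz
    if pplus == 0:  # a real implementation would rotate the frame for momenta along -z
        raise ValueError("p^+ = 0: rotate the frame first")
    root = cmath.sqrt(pplus)
    return (root, (px + 1j * py) / root)

def ab(li, lj):
    """Angle bracket <i j>: the 2x2 determinant of the two spinors."""
    return li[0] * lj[1] - li[1] * lj[0]

def parke_taylor(momenta):
    """Parke-Taylor MHV amplitude with negative helicity on legs 1 and 2,
    stripped of coupling and color factors: <12>^4 / (<12><23>...<n1>)."""
    lam = [angle_spinor(p) for p in momenta]
    n = len(lam)
    numerator = ab(lam[0], lam[1]) ** 4
    denominator = 1
    for i in range(n):
        denominator *= ab(lam[i], lam[(i + 1) % n])  # cyclic product around the legs
    return numerator / denominator
```

A useful sanity check on the convention: the squared magnitude of an angle bracket satisfies |⟨ij⟩|² = 2 pᵢ·pⱼ, so the brackets really are just repackaged momenta.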

Nowadays, we know why this amplitude looks so simple, in terms of something called BCFW recursion. At the time, though, it was quite extraordinary.

This is the sort of simplification we keep running into when studying amplitudes. Almost always, it means that there is some deeper principle that we don’t yet understand, something that would let us do our calculations much faster and more efficiently. It indicates that Quantum Field Theory might not be so hard after all.

Where are the Amplitudeologists?

As I’ve mentioned a couple of times before, I’m part of a sub-field of theoretical physics called Amplitudeology.

Amplitudeology in its modern incarnation is relatively new, and concentrated in a few specific centers. I thought it might be interesting to visualize which universities have amplitudeologists, so I took a look at the attendee lists of two recent conferences and put their affiliations into Google Maps. To balance things, one of the conferences was in North America and the other in Europe. Here is the result:

The West Coast of the US has two major centers, Stanford/SLAC and UCLA, focused around Lance Dixon and Zvi Bern respectively. The Northeast has a fair assortment, including places that cover essentially everything, like the Perimeter Institute and the Institute for Advanced Study, and places known especially for their amplitudes work, like Brown.

Europe has quite a large number of places. There are many universities in Europe with a long history of technical research into quantum field theory, and when amplitudes began to emerge as its own sub-field, many of these places slotted right in. In particular, there are many locations in Germany, a decent number in the UK, a few in the vicinity of CERN, and a scattering of notable places elsewhere.

Outside of Europe and North America, there’s much less amplitudes research going on. Physics in general is a very international enterprise, and many sub-fields have a lot of participation from researchers in China, India, Japan, and Korea. Amplitudes, for the most part, hasn’t caught on in those places yet.

This map is just the result of looking at two conferences. More data would reveal many places left out of this snapshot, including a longstanding community in Russia. Still, it gives you a rough idea of where to find amplitudeologists, should you have need of one.

High Energy? What does that mean?

I am a high energy physicist who uses the high energy and low energy limits of a theory that, while valid up to high energies, is also a low-energy description of what at high energies ends up being string theory (string theorists, of course, being high energy physicists as well).

If all of that makes no sense to you, congratulations, you’ve stumbled upon one of the worst-kept secrets of theoretical physics: we really could use a thesaurus.

“High energy” means different things in different parts of physics. In general, “high” versus “low” energy classifies what sort of physics you look at: “high” energy physics corresponds to the very small, while “low” energies encompass larger structures.

Many people explain this via quantum mechanics: the uncertainty principle says that the more certain you are of a particle’s position, the less certain you can be of how fast it is going, which implies that a particle confined to a very small region can have very high energy. You can also understand it without quantum mechanics, though: if two things are held close together, it generally has to be by a powerful force, so the bond between them will contain more energy.

Another perspective is in terms of light. Physicists will occasionally use “IR”, or infrared, to mean “low energy” and “UV”, or ultraviolet, to mean “high energy”. Infrared light has long wavelengths and low-energy photons, while ultraviolet light has short wavelengths and high-energy photons, so the analogy is apt. However, the analogy only goes so far, since “UV physics” is often at energies much greater than those of UV light (and the same sort of situation applies for IR).
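To put numbers on the light analogy: a photon’s energy is E = hc/λ. A quick sketch (the constant and function name are my own, for illustration), using hc ≈ 1239.84 eV·nm:

```python
HC_EV_NM = 1239.84  # Planck constant times the speed of light, in eV * nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts for a wavelength in nanometers: E = hc / lambda."""
    return HC_EV_NM / wavelength_nm

# Infrared light (~1000 nm) carries roughly 1.2 eV per photon,
# while ultraviolet light (~250 nm) carries roughly 5 eV.
```

Those numbers line up with the ranges quoted below for infrared and ultraviolet light, and make clear just how far “UV physics” at mega-electron-volt scales sits above actual ultraviolet photons.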

So what does “low energy” or “high energy” mean? Well…

The IR limit: Lowest of the “low energy” points, this refers to the limit of infinitely low energy. While you might compare it to “absolute zero”, really it just refers to energy that’s so low that compared to the other energies you’re calculating with it might as well be zero. This is the “low energy limit” I mentioned in the opening sentence.

Low energy physics: Not “high energy physics”. Low energy physics covers everything from absolute zero up to atoms. Once you get up to high enough energy to break up the nucleus of an atom, you enter…

High energy physics: Also known as “particle physics”, high energy physics refers to the study of the subatomic realm, which also includes objects that aren’t technically particles, like strings and “branes”. If you exclude nuclear physics itself, high energy physics generally refers to energies of a mega-electron-volt and up. For comparison, the electrons in atoms are bound by energies of around an electron-volt, the characteristic energy of chemistry, so high energy physics is at least a million times more energetic. That said, high energy physicists are often interested in low energy consequences of their theories, all the way down to the IR limit. Interestingly, by this point we’ve already passed both infrared light (from a thousandth of an electron-volt to a single electron-volt) and ultraviolet light (several electron-volts to a hundred or so). Compared to UV light, mega-electron-volt physics is quite high energy.

The TeV scale: If you’re operating a collider, though, mega-electron-volts (or MeV) are low-energy physics. Often, calculations for colliders will assume that quarks, whose masses are around the MeV scale, have no mass at all! Instead, high energy for particle colliders means giga- (billion) or tera- (trillion) electron-volt processes. The LHC, for example, operates at around 7 TeV now, with 14 TeV planned. This is the range of scales where many had hoped to see supersymmetry, but as time has gone on, results have pushed speculation up to higher and higher energies. Of course, these are all still low energy from the perspective of…

The string scale: Strings are flexible, but under enormous tension that keeps them very, very short. Typically, strings are posited to have a length close to the Planck length, the characteristic length at which quantum effects become relevant for gravity. This enormously small length corresponds to the enormously large Planck energy, which is on the order of 10^28 electron-volts. That’s about ten to the sixteen times the energies of the particles at the LHC, or ten to the twenty-two times the MeV scale that I called “high energy” earlier. For comparison, there are about ten to the twenty-two atoms in a milliliter of water. When extra dimensions in string theory are curled up, they’re usually curled up at this scale. This means that from a string theory perspective, going to the TeV scale means ignoring the high energy physics and focusing on low energy consequences, which is why even the heaviest supersymmetric particles are thought of as low energy physics when approached from string theory.
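The ladder of scales above spans a huge range, and a few lines of Python make the orders of magnitude explicit. (The Planck energy value, roughly 1.22 × 10^28 eV, is the standard figure; the exact digits don’t matter for these order-of-magnitude ratios.)

```python
import math

EV = 1.0
MEV = 1e6 * EV          # where "high energy physics" roughly begins
TEV = 1e12 * EV         # the collider scale
PLANCK = 1.22e28 * EV   # the string / Planck scale

# Orders of magnitude separating the scales:
print(round(math.log10(TEV / MEV)))      # collider vs. MeV scale: 6
print(round(math.log10(PLANCK / TEV)))   # Planck vs. collider: 16
print(round(math.log10(PLANCK / MEV)))   # Planck vs. MeV scale: 22
```

So a TeV collider sits sixteen orders of magnitude below the string scale, which is why string theorists can call even the LHC “low energy” with a straight face.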

The UV limit: Much as the IR limit is that of infinitely low energy, the UV limit is the formal limit of infinitely high energy. Again, it’s not so much an actual destination, as a comparative point where the energy you’re considering is much higher than the energy of anything else in your calculation.

These are the definitions of “high energy” and “low energy”, “UV” and “IR” that one encounters most often in theoretical particle physics and string theory. Other parts of physics have their own idea of what constitutes high or low energy, and I encourage you to ask people who study those parts of physics if you’re curious.

What’s up with arXiv?

First of all, I wanted to take a moment to say that this is the one-year anniversary of this blog. I’ve been posting every week (almost always on Friday) since I was first motivated to start blogging back in November 2012. It’s been a fun ride, through ups and downs, Ars Technica and Amplituhedra, and I hope it’s been fun for you, the reader, as well!

I’ve been giving links to arXiv since my very first post, but I haven’t gone into detail about what arXiv is. Since arXiv is a unique phenomenon, it could use a fuller description.


The word arXiv is pronounced much like the ordinary word “archive”: just read the capital X as the Greek letter chi.

Much as the name would suggest, arXiv is an archive, specifically a preprint archive. A preprint is, in a sense, a paper before it becomes a paper: more precisely, a scientific paper that has not yet been published in a journal. In the past, such preprints would be kept by individual universities, or passed between interested individuals. Now arXiv puts all the preprints for an increasing range of fields (first physics and mathematics, now also computer science, quantitative biology, quantitative finance, and statistics) in one easily accessible, free-to-read place.

Different fields have different conventions when it comes to using arXiv. As a theoretical physicist, I can only really speak to how we use the system.

When theoretical physicists write a paper, it is often not immediately clear which journal we should submit it to. Different journals have different standards, and a paper that will gather more interest can be published in a more prestigious journal. In order to gauge how much interest a paper will raise, most theoretical physicists will put their papers up on arXiv as preprints first, letting them sit there for a few months to drum up attention and get feedback before formally submitting the paper to a journal.

The arXiv isn’t just for preprints, though. Once a paper is published in a journal, a copy of the paper remains on arXiv. Often, the copy on arXiv will be updated when the paper is updated, changed to the journal’s preferred format and labeled with the correct journal reference. So arXiv, ultimately, contains almost all of the papers published in theoretical physics in the last decade or two, all free to read.

But it’s not just papers! The digital format of arXiv makes it much easier to post other files alongside a paper, so that many people upload not just their results, but the computer code they used to generate them, or their raw data in long files. You can also post papers too long or unwieldy to publish in a journal, making arXiv an excellent dropping-off point for information in whatever format you think is best.

We stand at the edge of a new age of freely accessible science. As more and more disciplines start to use arXiv and similar services, we’ll have more flexibility to get more information to more people, while still keeping the advantage of peer review for publication in actual journals. It’s going to be very interesting to see where things go from here.

Braaains…Boltzmann Braaaains…

In honor of Halloween yesterday, let me tell you a scary physics story:

Sarah was an ordinary college student, in an ordinary dorm room, ordinary bean bag chairs strewn around an ordinary bed with ordinary pink sheets. If she concentrated, she could imagine her ordinary parents back home in ordinary Minnesota. In her ordinary physics textbook on her ordinary desk, ordinary laws of physics were written, described as the result of centuries of experimentation.

Unbeknownst to Sarah, the universe was much more chaotic and random than she realized, and also much more vast. Arbitrary collections of matter formed and dissipated, and over the universe’s long history, any imaginable combination might come to be.

Combinations like Sarah.

You see, Sarah too was a random combination, a chance arrangement of particles formed only a bare few moments ago. In truth, she had no ordinary parents, nor was she surrounded by an ordinary college, and the laws of physics that her textbook asserted were discovered through centuries of experimentation were just a moment’s distribution of ink on a page.

And as she got up to open the door into the vast dark of the outside, her world dissipated, and she ceased to exist.

That’s the life of a Boltzmann Brain. If a universe is random and old enough, it is inevitable that such minds exist. They might have memories of an extended, orderly world, but these would just be illusions, chance arrangements of their momentary neurons. What’s more, they may think they know the laws of physics through careful experiment and reasoning, but such knowledge would be illusory as well. And most frightening of all, if the universe is truly ancient and unimaginably vast, there would be many orders of magnitude more Boltzmann Brains than real humans…so many, that it would almost certainly be the case that you are in fact a Boltzmann Brain right now!

This is legitimately worrying to some physicists. The situation gets a bit more interesting when you remember that, as a Boltzmann Brain, anything you know about physics may well be a lie, since the history of research you remember might never have happened. The problem is, if you manage to prove that you are probably a Boltzmann Brain, you had to use physics to do it. But your physics is probably wrong!

This, as Sean Carroll argues, is why the concept of a Boltzmann Brain is self-defeating. It is, in a way, a logical impossibility. And if a universe of Boltzmann Brains is logically impossible, then any physics that makes Boltzmann Brains more likely than normal humans must similarly be wrong. That’s Carroll’s argument, one he uses to draw specific physical conclusions about the real world, namely a proposal about the properties of the Higgs boson.

It might seem philosophically illegitimate to use such a paradox to argue about the real world. However, philosophers make a similar argument when it comes to such “reality is a lie” scenarios. In general, modern philosophers point out that any argument proving that all of our knowledge is false or meaningless by necessity also proves itself false or meaningless. This is what allows analytic philosophy to carry forward and make progress, even if it can’t reject the idea that reality is an illusion by more objective means.

With that said, there seems to be a difference between simply rejecting arguments that “show” that the world is an illusion or that we are all Boltzmann Brains, and using those arguments to draw conclusions about other parts of the world. I would be curious if there are similar arguments to Carroll’s in philosophy, arguments that draw conclusions more specific than “we exist and can know things”. Any philosopher readers should feel welcome to chime in in the comments!

And for the rest of you, you probably aren’t a Boltzmann Brain. But if the outside world looks a little too dark tonight…