Bras and Kets, Trading off Instincts

Some physics notation is a joke, but that doesn’t mean it shouldn’t be taken seriously.

Take bras and kets. On the surface, as silly a physics name as any. If you want to find the probability that a state in quantum mechanics turns into another state, you write down a “bracket” between the two states:

\langle a | b\rangle

This leads, with typical physics logic, to the notation for the individual states: separate out the two parts, into a “bra” and a “ket”:

\langle a | \quad | b \rangle

It’s kind of a dumb joke, and it annoys the heck out of mathematicians. Not for the joke, of course; mathematicians probably have worse.

Mathematicians are annoyed when we use complicated, weird notation for something that looks like a simple, universal concept. Here, we’re essentially just taking inner products of vectors, something mathematicians have been doing in one form or another for centuries. Yet rather than use their time-tested notation we use our own silly setup.

There’s a method to the madness, though. Bras and kets are handy for our purposes because they allow us to leverage one of the most powerful instincts of programmers: the need to close parentheses.

In programming, various forms of parentheses and brackets allow you to isolate parts of code for different purposes. One set of lines might only activate under certain circumstances; another set of brackets might make text bold. But in essentially every language, you never want to leave an open parenthesis. Doing so is almost always a mistake, one that traps the rest of your code inside whatever isolated region you were trying to create.
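
This instinct is even mechanized: checking for balanced brackets is one of the first things parsers and linters do. A minimal sketch of such a checker, as a toy example rather than any particular tool’s implementation:

```python
# Toy stack-based bracket checker, the kind of logic editors and
# linters use to flag an unclosed parenthesis.
def balanced(code: str) -> bool:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in code:
        if ch in "([{":
            stack.append(ch)          # remember every opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # closer with no matching opener
    return not stack                  # leftover openers also fail

print(balanced("f(x[0]) + {y}"))  # True
print(balanced("f(x[0]"))         # False: the parenthesis never closes
```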

Open parentheses make programmers nervous, and that’s exactly what “bras” and “kets” are for. As it turns out, the states represented by “bras” and “kets” are in a certain sense un-measurable: the only things we can measure are the brackets between them. When people say that in quantum mechanics we can only predict probabilities, that’s a big part of what they mean: the states themselves mean nothing without being assembled into probability-calculating brackets.

This ends up making “bras” and “kets” very useful. If you’re calculating something in the real world and your formula ends up with a free “bra” or a “ket”, you know you’ve done something wrong. Only when all of your bras and kets are assembled into brackets will you have something physically meaningful. Since most physicists have done some programming, the programmer’s instinct to always close parentheses comes to the rescue, nagging until you turn your formula into something that can be measured.
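
For the curious, here is the math a closed bracket computes, as a hedged sketch: the bracket ⟨a|b⟩ is a complex inner product, and the measurable probability is its squared magnitude (the Born rule). The two-component states below are illustrative, not tied to any particular experiment.

```python
# Sketch: a bracket <a|b> as a complex inner product, and the
# transition probability as its squared magnitude (the Born rule).
def bracket(bra, ket):
    """<a|b>: conjugate the bra's components, multiply, and sum."""
    return sum(a.conjugate() * b for a, b in zip(bra, ket))

def probability(a, b):
    """Only this, |<a|b>|^2, is measurable -- never a lone bra or ket."""
    return abs(bracket(a, b)) ** 2

up = [1 + 0j, 0 + 0j]                 # spin-up along z
right = [2**-0.5 + 0j, 2**-0.5 + 0j]  # spin-up along x

print(probability(up, right))  # ~0.5: a fair coin flip
```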

So while our notation may be weird, it does serve a purpose: it makes our instincts fit the counter-intuitive world of quantum mechanics.

Scooped Is a Spectrum

I kind of got scooped recently.

I say kind of, because as I’ve been realizing being scooped isn’t quite the all-or-nothing thing you’d think it would be. Rather, being scooped is a spectrum.

Go ahead and scoop up a spectrum as you’re reading this.

By the way, I’m going to be a bit cagey about what exactly I got scooped on. As you’ll see, there are still a few things my collaborator and I need to figure out, and in the meantime I don’t want to put my foot in my mouth. Those of you who follow what’s going on in amplitudes might have some guesses. In case you’re worried, it has nothing to do with my work on Hexagon Functions.

When I heard about the paper that scooped us, my first reaction was to assume the project I’d been working on for a few weeks was now a dead end. When another group publishes the same thing you’ve been working on, and does it first, there doesn’t seem to be much you can do besides shake hands and move on.

As it turns out, though, things are a bit more complicated. The risk of publishing fast, after all, is making mistakes. In this case, it’s starting to look like a few of the obstructions that were holding us back weren’t solved by the other group, and in fact that they may have ignored those obstructions altogether in their rush to get something publishable.

This creates an interesting situation. It’s pretty clear the other group is beyond us in certain respects: they published first for a (good) reason. On the other hand, precisely because we’ve been slower, we’ve caught problems that it looks like the other group didn’t notice. Rather than rendering our work useless, this makes it that much more useful: complementing the other group’s work rather than competing with it.

Being scooped is a spectrum. If two groups are working on very similar things, then whoever publishes first usually wins. But if the work is different enough, then a whole range of roles opens up, from corrections and objections to extensions and completions. Being scooped doesn’t have to be the end of the world; in fact, it can be the beginning.

A Tale of Two CMB Measurements

While trying to decide what to blog about this week, I happened to run across this article by Matthew Francis on Ars Technica.

Apparently, researchers have managed to use Planck’s measurement of the Cosmic Microwave Background to indirectly measure a more obscure phenomenon, the Cosmic Neutrino Background.

The Cosmic Microwave Background, or CMB, is often described as the light of the Big Bang, dimmed and spread to the present day. More precisely, it’s the light released from the first time the universe became transparent. When electrons and protons joined to form the first atoms, light no longer spent all its time being absorbed and released by electrical charges, and was free to travel in a mostly-neutral universe.

This means that the CMB is less like a view of the Big Bang, and more like a screen separating us from it. Light and charged particles from before the CMB was formed will never be observable to us, because they would have been absorbed by the early universe. If we want to see beyond this screen, we need something with no electric charge.

That’s where the Cosmic Neutrino Background comes in. Much as the CMB consists of light from the first time the universe became transparent, the CNB consists of neutrinos from the first time the universe was cool enough for them to travel freely. Since this happened a bit before the universe was transparent to light, the CNB gives information about an earlier stage in the universe’s history.

Unfortunately, neutrinos are very difficult to detect, the low-energy ones left over from the CNB even more so. Rather than being detected directly, the CNB has to be observed through its indirect effects on the CMB, and that’s exactly what these researchers did.

Now does all of this sound just a little bit familiar?

Gravitational waves are also hard to detect, hard enough that we haven’t directly detected any yet. They’re also electrically neutral, so they can also give us information from behind the screen of the CMB, letting us learn about the very early universe. And when the team at BICEP2 purported to measure these primordial gravitational waves indirectly, by measuring the CMB, the press went crazy about it.

This time, though? That Ars Technica article is the most prominent I could find. There’s nothing in major news outlets at all.

I don’t think that this is just a case of people learning from past mistakes. I also don’t think that BICEP2’s results were just that much more interesting: they were making a claim about cosmic inflation rather than just buttressing the standard Big Bang model, but (outside of certain contrarians here at Perimeter) inflation is not actually all that controversial. It really looks like hype is the main difference here, and that’s kind of sad. The difference between a big (premature) announcement that got me to write four distinct posts and an article I almost didn’t notice is just one of how the authors chose to make their work known.

Don’t Watch the Star, Watch the Crowd

I didn’t comment last week on Hawking’s proposed solution of the black hole firewall problem. The media buzz around it was a bit less rabid than the last time he weighed in on this topic, but there was still a lot more heat than light.

The impression I get from the experts is that Hawking’s proposal (this time made in collaboration with Andrew Strominger and Malcolm Perry, the former of whom is famous for, among other things, figuring out how string theory can explain the entropy of black holes) resembles some earlier suggestions, with enough new elements to make it potentially interesting but potentially just confusing. It’s a development worth paying attention to for specialists, but it’s probably not the sort of long-awaited answer the media seems to be presenting it as.

This raises a question: how, as a non-specialist, are you supposed to tell the difference? Sure, you can just read blogs like mine, but I can’t report on everything.

I may have a pretty solid grounding in physics, but I know almost nothing about music. I definitely can’t tell what makes a song good. About the best I can do is see if I can dance to it, but that doesn’t seem to be a reliable indicator of quality music. Instead, my best bet is usually to watch the crowd.

Lasers may make this difficult.

Ask the star of a show if they’re doing good work, and they’re unlikely to be modest. Ask the average music fan, though, and you get a better idea. Watch music fans as a group, and you get even more information.

When a song starts playing everywhere you go, when people start pulling it out at parties and making their own imitations of it, then maybe it’s important. That might not mean it’s good, but it does mean it’s worth knowing about.

When Hawking or Strominger or Witten or anyone whose name you’ve heard of says they’ve solved the puzzle of the century, be cautious. If it really is worth your attention, chances are it won’t be the last you’ll hear about it. Other physicists will build off of it, discuss it, even spin off a new sub-field around it. If it’s worth it, you won’t have to trust what the stars of the physics world say: you’ll be able to listen to the crowd.

Romeo and Juliet, through a Wormhole

Perimeter is hosting this year’s Mathematica Summer School on Theoretical Physics. The school is a mix of lectures on a topic in physics (this year, the phenomenon of quantum entanglement) and tips and tricks for using the symbolic calculation program Mathematica.

Juan Maldacena is one of the lecturers, which gave me a chance to hear his Romeo and Juliet-based explanation of the properties of wormholes. While I’ve criticized some of Maldacena’s science popularization work in the past, this one is pretty solid, so I thought I’d share it with you guys.

You probably think of wormholes as “shortcuts” to travel between two widely separated places. As it turns out, this isn’t really accurate: while “normal” wormholes do connect distant locations, they don’t do it in a way that allows astronauts to travel between them, Interstellar-style. This can be illustrated with something called a Penrose diagram:

Static “Greyish Black” Diagram

In the traditional Penrose diagram, time goes upward, while space goes from side to side. In order to measure both in the same units, we use the speed of light, so one year on the time axis corresponds to one light-year on the space axis. This means that if you’re traveling along a 45-degree line on the diagram, you’re going at the speed of light. Any shallower angle is impossible, while any steeper angle means you’re going slower.

If we start in “our universe” in the diagram, can we get to the “other universe”?

Pretty clearly, the answer is no. As long as we go slower than the speed of light, when we pass the event horizon of the wormhole we will end up, not in the “other universe”, but at the part of the diagram labeled Future Singularity, the singularity at the center of the black hole. Even going at the speed of light only keeps us orbiting the event horizon for all eternity, at best.

What use could such a wormhole be? Well, imagine you’re Romeo or Juliet.

Romeo has been banished from Verona, but he took one end of a wormhole with him, while the other end was left with Juliet. He can’t go through and visit her, she can’t go through and visit him. But if they’re already considering taking poison, there’s an easier way. If they both jump into the wormhole, they’ll fall into the singularity. Crucially, though, it’s the same singularity, so once they’re past the event horizon they can meet inside the black hole, spending some time together before the end.

Depicted here for more typical quantum protagonists, Alice and Bob.

This explains what wormholes really are: two black holes that share a center.

Why was Maldacena talking about this at a school on entanglement? Maldacena has recently conjectured that quantum entanglement and wormholes are two sides of the same phenomenon, that pairs of entangled particles are actually connected by wormholes. Crucially, these wormholes need to have the properties described above: you can’t use a pair of entangled particles to communicate information faster than light, and you can’t use a wormhole to travel faster than light. However, it is the “shared” singularity that ends up particularly useful, as it suggests a solution to the problem of black hole firewalls.

Firewalls were originally proposed as a way of getting around a particular paradox relating three states connected by quantum entanglement: a particle inside a black hole, radiation just outside the black hole, and radiation far away from the black hole. The way the paradox is set up, it appears that these three states must all be connected. As it turns out, though, this is prohibited by quantum mechanics, which only allows two states to be entangled at a time. The original solution proposed for this was a “firewall”, a situation in which anyone trying to observe all three states would “burn up” when crossing the event horizon, thus avoiding any observed contradiction.

Maldacena’s conjecture suggests another way: if someone interacts with the far-away radiation, they have an effect on the black hole’s interior, because the two are connected by a wormhole! This ends up getting rid of the contradiction, allowing the observer to view the black hole and distant radiation as two different descriptions of the same state, and it depends crucially on the fact that a wormhole involves a shared singularity.

There’s still a lot of detail to be worked out; part of the reason Maldacena presented this research here was to inspire more investigation from students. But it does seem encouraging that Romeo and Juliet might not have to face a wall of fire before being reunited.

The Theorist Exclusion Principle

There are a lot of people who think theoretical physics has gone off-track, though very few of them agree on exactly how. Some think that string theory as a whole is a waste of time, others that the field just needs to pay more attention to their preferred idea. Some think we aren’t paying enough attention to the big questions, or that we’re too focused on “safe” ideas like supersymmetry, even when they aren’t working out. Some think the field needs less focus on mathematics, while others think it needs even more.

Usually, people act on these opinions by writing strongly worded articles and blog posts. Sometimes, they have more power, and act with money, creating grants and prizes that only go to their preferred areas of research.

Let’s put the question of whether the field actually needs to change aside for the moment. Even if it does, I’m skeptical that this sort of thing will have any real effect. While grants and blogs may be very good at swaying experimentalists, theorists are likely to be harder to shift, due to what I’m going to call the Theorist Exclusion Principle.

The Pauli Exclusion Principle is a rule from quantum mechanics that states that two fermions (particles with half-integer spin) can’t occupy the same state. Fermions include electrons, quarks, protons…essentially, all the particles that make up matter. Many people learn about the Pauli Exclusion Principle first in a chemistry class, where it explains why electrons fall into different energy levels in atoms: once one energy level “fills up”, no more electrons can occupy the same state, and any additional electrons are “excluded” and must occupy a different energy level.

Those 1s electrons are such a clique!

In contrast, bosons (like photons, or the Higgs) can all occupy the same state. It’s what allows for things like lasers, and it’s why all the matter we’re used to is made out of fermions: because fermions can’t occupy the same state as each other, as you add more fermions the structures they form have to become more and more complicated.

Experimentalists are a little like bosons. While you can’t stuff two experimentalists into the same quantum state, you can get them working on very similar projects. They can form large collaborations, with each additional researcher making the experiment that much easier. They can replicate each other’s work, making sure it was accurate. They can take some physical phenomenon and subject it to a battery of tests, so that someone is bound to learn something.

Theorists, on the other hand, are much more like fermions. In theory, there’s very little reason to work on something that someone else is already doing. Replication doesn’t mean very much: the purest theory involves mathematical proofs, where replication is essentially pointless. Theorists do form collaborations, but they don’t have the same need for armies of technicians and grad students that experimentalists do. With no physical objects to work on, there’s a limit to how much can be done pursuing one particular problem, and if there really are a lot of options they can be pursued by one person with a cluster.

Like fermions, then, theorists expand to fill the projects available. If an idea is viable, someone will probably work on it, and once they do, there isn’t much reason for someone else to do the same thing.

This makes theory a lot harder to influence than experiment. You can write the most beautiful thinkpiece possible to persuade theorists to study the deep questions of the universe, but if there aren’t any real calculations available nothing will change. Contrary to public perception, theoretical physicists aren’t paid to just sit around thinking all day: we calculate, compute, and publish, and if a topic doesn’t lend itself to that then we won’t get much mileage out of it. And no matter what you try to preferentially fund with grants, mostly you’ll just get people re-branding what they’re already doing, shifting a few superficial details to qualify.

Theorists won’t occupy the same states, so if you want to influence theorists you need to make sure there are open states where you’re trying to get them to go. Historically, theorists have shifted when new states have opened up: new data from experiment that needed a novel explanation, new mathematical concepts that opened up new types of calculations. You want there to be fewer string theorists, or more focus on the deep questions? Give us something concrete to do, and I guarantee you’ll get theorists flooding in.

Want to Open up Your Work? Try a Data Mine!

Have you heard of the Open Science movement?

The general idea is to make scientists’ work openly accessible, both to the general public and to other scientists. This doesn’t just include published results, but the raw data as well. The goal is to make it possible for anyone, in principle, to check the validity of important results.

I’m of the opinion that this sort of thing isn’t always feasible, but when it is it’s usually a great thing to do. And in my field, the best way to do this sort of thing is to build a data mine.

I’m thinking in particular of Blümlein, Broadhurst, and Vermaseren’s Multiple Zeta Value Data Mine. Multiple zeta values generalize the Riemann zeta function to nested sums; equivalently, they’re what you get when you evaluate multiple polylogarithms at one. They’re transcendental numbers, and there are complicated relations between them. Finding all those relations, even for a restricted subset of them, can be a significant task. Usually, there aren’t published programs for this sort of thing; like most things in physics, we have to jury-rig our own code. What makes the folks behind the multiple zeta value data mine unique is that when they had to do this, they didn’t just keep the code to themselves. Instead, they polished it up and put it online.
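
As a toy illustration of the kind of relation such a data mine catalogs (the real code works symbolically, to far higher weight), the simplest multiple zeta value relation is Euler’s ζ(2,1) = ζ(3), which we can check numerically:

```python
# Numerically check Euler's relation zeta(2,1) = zeta(3), the simplest
# relation between multiple zeta values. (A toy sketch; the actual
# Data Mine derives such relations symbolically.)
def zeta(s, N=100_000):
    """Truncated Riemann zeta value zeta(s) = sum of 1/n^s."""
    return sum(1.0 / n**s for n in range(1, N + 1))

def zeta21(N=100_000):
    """Truncated zeta(2,1) = sum over m > n >= 1 of 1/(m^2 * n)."""
    total, harmonic = 0.0, 0.0
    for m in range(2, N + 1):
        harmonic += 1.0 / (m - 1)   # H_{m-1} = sum of 1/n for n < m
        total += harmonic / m**2
    return total

print(zeta21(), zeta(3))  # the two agree to roughly four decimal places
```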

That’s the general principle behind building a data mine. By putting your tools online, you make them available to others, so other researchers can use them as a jumping-off point for their own work. This can speed up the field, bringing everyone up to the same starting point, and has the side benefit of gathering heaps of citations from people who use your tools.

My collaborators already have a site with some of the data from our research into hexagon functions. Originally, it was just a place to house extra-large files that couldn’t be included with the original paper. For our next paper, we’re planning on expanding it into a true data mine, and including enough technology for someone else to build off of our results.

Historic Montreal

I’m at a conference in Montreal this week, so it’s going to be a short post. The University of Montreal’s Centre de Recherches Mathématiques has been holding a program on the various hidden symmetries of N=4 super Yang-Mills since the beginning of the summer. This week is the amplitudes-focused part of the program, so they’ve brought in a bunch of amplitudes-folks from around the world, myself included.

It’s been great hanging out with fellow members of my sub-field, as always, both at the conference and at dinner afterwards. Over possibly too much wine I heard stories of the heady days of 2007, when James Drummond and Johannes Henn first discovered one of the most powerful symmetries of N=4 super Yang-Mills (a duality called dual conformal invariance) and Andrew Hodges showed off the power of a set of funky variables called twistors. It’s amazing to me how fast the field moves, sometimes: by the time I started doing amplitudes work in 2011 these ideas were the bedrock of the field. History operates on different scales, and in amplitudes a few decades have played host to an enormous amount of progress.

History in the real world can move surprisingly fast too. After seeing cathedrals in Zurich that date back to the medieval era, I was surprised when the majestic basilica overlooking Montreal turned out to be less than a century old.

In retrospect the light-up cross should have made it obvious.

Amplitudes Megapost

If you met me on a plane and asked me what I do, I’d probably lead with something like this:

“I come up with mathematical tricks to make particle physics calculations easier.”

People like me, who research these tricks, are sometimes known as Amplitudeologists. We study scattering amplitudes: mathematical formulas used to calculate the probabilities of different things happening when sub-atomic particles collide.

Why do we want to make calculations easier? Because particle physics is hard!

More specifically, calculations in particle physics can be hard for three broad reasons: lots of loops, lots of legs, or more complicated theories.

Loops measure precision. They’re called loops because more complicated Feynman diagrams contain “loops” of particles, while the simplest, with no loops at all, are called “trees”. The more loops you include, the more precise your calculation becomes, but it also becomes more complicated.
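
In symbols, the loop expansion is just a perturbative series in the coupling g (schematically; conventions for the expansion parameter vary):

\mathcal{A} = \mathcal{A}_\text{tree} + g^2 \, \mathcal{A}_\text{1-loop} + g^4 \, \mathcal{A}_\text{2-loop} + \cdots

Each term is suppressed by more powers of the (small) coupling, which is why more loops buy more precision at the cost of more work.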

Legs are the number of particles involved. If two particles collide and bounce off each other, then there are a total of four legs: two from the incoming particles, two from the outgoing ones. Calculations with more legs are almost always more complicated than calculations with fewer.

Most of the time, our end-goal is to calculate things that are relevant to the real world. Usually, this means QCD, or Quantum Chromodynamics, the theory of quarks and gluons. QCD is very complicated, though. Often, we work to hone our techniques on simpler theories first. N=4 super Yang-Mills has been called the simplest quantum field theory, particularly the further simplified, planar version. If you want a basic overview of it, check out the Handy Handbooks tab at the top of my blog. Often, progress in amplitudeology involves adapting tricks from planar N=4 super Yang-Mills to more complicated, and more realistic, theories.

I should point out that our goal in amplitudeology isn’t always to do more complicated calculations. Sometimes, it’s about doing a calculation we already know how to do, but in a way that’s more insightful. This lets us learn more about the theories we’re studying, as well as gaining insights about larger problems like the nature of space and time.

So what sorts of tricks do we use to do all this? Well, there are a few broad categories…

Generalized Unitarity

The prizewinning idea that started it all, generalized unitarity came out of the collaboration of Zvi Bern, Lance Dixon, and David Kosower, starting in the 90’s. The core of the idea is difficult to describe in a quick sentence, but it essentially boils down to noticing that, rather than thinking about every single multi-loop Feynman diagram independently, you can think of loop diagrams as what you get when you sew trees together.

This is a very powerful idea. These days, pretty much everyone who studies amplitudeology learns it, and it’s proven pivotal for a wide array of applications.

In planar N=4 super Yang-Mills it’s one of the techniques that can go to exceptionally high loop order, to six or seven loops. If you drop the “planar” condition, it’s still quite powerful. If you do things right, as Zvi Bern, John Joseph Carrasco, and Henrik Johansson found, you can get results in N=8 supergravity “for free”. This raises what has ended up being one of the big questions of our sub-field: does N=8 supergravity behave like most attempts at theories of quantum gravity, with pesky infinite results that we don’t know how to deal with, or does it behave like N=4 super Yang-Mills, which has no pesky infinities at all? Answering this question requires a dizzying seven-loop calculation, the mystique of which got me in to the field in the first place. Unfortunately, despite diligent efforts from Bern and collaborators, they’ve been stuck at four loops for quite some time. In the meantime they’ve been extending things in all the other amplitudes-directions: more legs, more complicated theories (in this case, supergravity with less supersymmetry), and more insight. Recently, it looks like they may have found a way around this hurdle, so the mystery at seven loops may not be so far away after all.

Generalized Unitarity is also one of the most powerful amplitudes tricks for real-world theories, in particular QCD. In this case, its main virtue is in legs, not loops, going up to seven particles at one loop for practical, LHC-relevant calculations. There’s also a major effort to push this to two loops, with some success.

BCFW Recursion

If generalized unitarity was the trick that got experimentalists to sit up and take notice, BCFW is the one that got the attention of the pure theorists. In the mid-2000s Ruth Britto, Freddy Cachazo, and Bo Feng (later joined by theoretical physics superstar Ed Witten) figured out a way to build up tree amplitudes to any number of legs recursively for any theory, starting with three particles and working their way up. Their method was both fairly efficient and extremely insightful, and it’s another trick that’s made its way into every amplitudeologist’s arsenal. Further developments led to a recursive procedure that could work up to any number of loops in planar N=4 super Yang-Mills, which while not especially efficient did lead to…

The Positive Grassmannian, and the Amplituhedron

The work of Nima Arkani-Hamed, Jacob Bourjaily, Freddy Cachazo, Alexander Goncharov, Alexander Postnikov, and Jaroslav Trnka on the Positive Grassmannian (and more recently the Amplituhedron) has pushed the “more insight” direction impressively far. The Amplituhedron in particular captured the public’s imagination, as well as that of mathematicians, by packaging the all-loop amplitude into a particularly clean, mathematically meaningful form. Now they’re working on pushing this deep understanding to non-planar N=4 super Yang-Mills.

Integration Tricks

Generalized unitarity and the Amplituhedron have one thing in common: neither gives the full result. Calculating scattering amplitudes traditionally is a two-step process: first, add up all possible Feynman diagrams, then add up (integrate) all possible momenta. Generalized unitarity and the Amplituhedron let you skip the diagrams, but in both cases you still need to integrate. There’s a whole lore of integration techniques, from breaking things up into a basis of known “master” integrals (an example paper on this theme here), to attacking the integrations numerically via a process known as sector decomposition (one of the better programs that does this here). Higher-loop integrations are typically quite tough, even with these techniques.

Polylogarithms

These integrals will usually result in a class of mathematical functions called polylogarithms, a type of transcendental function. Understanding these functions has led to an enormous amount of progress (and I’m not just saying that because it’s what I work on 😉 ).

It all started when Alexander Goncharov, Mark Spradlin, Cristian Vergu, and Anastasia Volovich figured out how to write a laboriously calculated seventeen-page two-loop six-particle amplitude in just two lines. To do this, they used mathematical properties of polylogarithms that were previously largely unknown to physicists. Their success inspired Lance Dixon, James Drummond, and Johannes Henn to use these methods to guess the correct answer at three loops, work that was completed with my involvement.

Since then, both groups have made a lot of progress. In general, Spradlin, Volovich, and collaborators have been pushing things farther in terms of legs and insight, while Dixon and collaborators have made progress at higher loops. So far we’ve gotten to four loops (here, plus unpublished work), while the others have proposals for any number of particles at two loops and substantial progress for seven particles at three loops.

All of this is still for planar N=4 super Yang-Mills. Using these tricks for more complicated theories is trickier. However, while you usually can’t just guess the answer like you can for N=4, a good understanding of the properties of polylogarithms can still take you quite far.

Integrability

Why did the polylogarithm folks start with six particles? Wouldn’t four or five have been easier?

As it turns out, four and five particle amplitudes are indeed easier, so much so that for planar N=4 super Yang-Mills they’re known up to any loop order. And while a number of elements went into that result, one that really filled in the details was integrability.

Integrability is tough to describe in a short sentence, but essentially it involves describing highly symmetric systems all in one go, without having to use the step-by-step approximations of perturbation theory. For our purposes, this means bypassing the loop-by-loop perspective altogether.

Integrability is a substantial field in its own right, probably bigger than amplitudeology. There’s a lot going on, and only some of it touches on amplitudes-related topics. When it does, though, it’s quite impressive, with the flagship example being the work of Benjamin Basso, Amit Sever, and Pedro Vieira. They are able to compute amplitudes in planar N=4 super Yang-Mills at any and all loops, making their approximation instead in the particle momenta. These days, they’re working on making their method more complete and robust, while building up understanding of other structures that might eventually allow them to say something about the non-planar case.

CHY and the Ambitwistor String

Ed Witten’s involvement in BCFW didn’t come completely out of left field. He had shown interest in N=4 super Yang-Mills earlier, with the invention of the twistor string. The twistor string calculates tree amplitudes in N=4 super Yang-Mills as the result of a string-theory-like framework. The advantage to such a framework is that, while normal quantum field theory involves large numbers of different diagrams, string theory only has one diagram “shape” for each loop.

This advantage has been thrust back into the spotlight recently via the work of Freddy Cachazo, Song He, and Ellis Ye Yuan. Their CHY formula works not just for N=4 super Yang-Mills, but for a wide (and growing) variety of other theories, allowing them to examine those theories’ properties in a particularly powerful way. Meanwhile, Lionel Mason and David Skinner have given the CHY formula a more solid theoretical grounding in the form of their ambitwistor string, which they have recently been able to generalize to a loop-level proposal.

Amplitudeology is a large and growing field, and there are definitely important people I haven’t mentioned. Some, like Henriette Elvang and Yu-tin Huang, have been involved with many different things over the years, so there wasn’t a clear place to put them. Others are part of the European community, where there’s a lot of work on string theory amplitudes and on pushing the boundaries of polylogarithms. Still others were left out simply because I ran out of room. I’ve only covered a small part of the field here, but I hope that small part gives you an idea of the richness of the whole.

Journalists Are Terrible at Quasiparticles

No, they haven’t, and no, that’s not what they found, and no, that doesn’t make sense.

Quantum field theory is how we understand particle physics. Each fundamental particle comes from a quantum field, a law of nature in its own right extending across space and time. That’s why it’s so momentous when we detect a fundamental particle, like the Higgs, for the first time, why it’s not just like discovering a new species of plant.

That’s not the only thing quantum field theory is used for, though. Quantum field theory is also enormously important in condensed matter and solid state physics, the study of properties of materials.

When studying materials, you generally don’t want to start with fundamental particles. Instead, you usually want to think about overall properties: the ways the whole material can move and change. If you want to understand the quantum properties of these changes, you end up describing them the same way particle physicists talk about fundamental fields: you use quantum field theory.

In particle physics, particles come from vibrations in fields. In condensed matter, your fields are general properties of the material, but they can also vibrate, and these vibrations give rise to quasiparticles.

Probably the simplest examples of quasiparticles are the “holes” in semiconductors. Semiconductors are materials used to make transistors. They can be “doped” with extra slots for electrons. Electrons in the semiconductor will move around from slot to slot. When an electron moves, though, you can just as easily think about it as a “hole”, an empty slot, that “moved” backwards. As it turns out, thinking about electrons and holes independently makes understanding semiconductors a lot easier, and the same applies to other types of quasiparticles in other materials.
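The bookkeeping behind holes can be made concrete with a toy sketch. This is a cartoon of the idea, not real semiconductor physics: a row of slots, one empty, where moving an electron one way is literally the same event as moving the hole the other way.

```python
# Toy bookkeeping for electrons vs. holes in a one-dimensional row of slots.
# 1 = slot occupied by an electron, 0 = empty slot (the "hole").
# This is a cartoon of the accounting, not a model of a real semiconductor.

def hole_position(slots):
    """Index of the single empty slot, i.e. where the 'hole' sits."""
    return slots.index(0)

slots = [1, 1, 0, 1, 1]
print("hole at", hole_position(slots))  # hole at 2

# The electron in slot 3 hops left into the empty slot...
slots[2], slots[3] = slots[3], slots[2]

# ...and that exact same event can be described as the hole hopping right.
print("hole at", hole_position(slots))  # hole at 3
```

The point of the example is that nothing new was added to the system: tracking one moving vacancy is just a simpler description of many electrons shuffling around.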

Unfortunately, the article I linked above is pretty impressively terrible, and communicates precisely none of that.

The problem starts in the headline:

Scientists have finally discovered massless particles, and they could revolutionise electronics

Scientists have finally discovered massless particles, eh? So we haven’t seen any massless particles before? You can’t think of even one?

After 85 years of searching, researchers have confirmed the existence of a massless particle called the Weyl fermion for the first time ever. With the unique ability to behave as both matter and anti-matter inside a crystal, this strange particle can create electrons that have no mass.

Ah, so it’s a massless fermion, I see. Well indeed, there are no known fundamental massless fermions, not since we discovered neutrinos have mass anyway. The statement that these things “create electrons” of any sort is utter nonsense, however, let alone that they create electrons that themselves have no mass.

Electrons are the backbone of today’s electronics, and while they carry charge pretty well, they also have the tendency to bounce into each other and scatter, losing energy and producing heat. But back in 1929, a German physicist called Hermann Weyl theorised that a massless fermion must exist, that could carry charge far more efficiently than regular electrons.

Ok, no. Just no.

The problem here is that this particular journalist doesn’t understand the difference between pure theory and phenomenology. Weyl didn’t theorize that a massless fermion “must exist”, nor did he say anything about their ability to carry charge. Weyl described, mathematically, how a massless fermion could behave. Weyl fermions aren’t some proposed new fundamental particle, like the Higgs boson: they’re a general type of particle. For a while, people thought that neutrinos were Weyl fermions, before it was discovered that they had mass. What we’re seeing here isn’t some ultimate experimental vindication of Weyl, it’s just an old mathematical structure that’s been duplicated in a new material.

What’s particularly cool about the discovery is that the researchers found the Weyl fermion in a synthetic crystal in the lab, unlike most other particle discoveries, such as the famous Higgs boson, which are only observed in the aftermath of particle collisions. This means that the research is easily reproducible, and scientists will be able to immediately begin figuring out how to use the Weyl fermion in electronics.

Arrgh!

Fundamental particles from particle physics, like the Higgs boson, and quasiparticles, like this particular Weyl fermion, are completely different things! Comparing them like this, as if this is some new efficient trick that could have been used to discover the Higgs, just needlessly confuses people.

Weyl fermions are what’s known as quasiparticles, which means they can only exist in a solid such as a crystal, and not as standalone particles. But further research will help scientists work out just how useful they could be. “The physics of the Weyl fermion are so strange, there could be many things that arise from this particle that we’re just not capable of imagining now,” said Hasan.

In the very last paragraph, the author finally mentions quasiparticles. There’s no mention of the fact that they’re more like waves in the material than like fundamental particles, though. This description makes it sound like they’re just particles that happen to chill inside crystals, like they’re agoraphobic or something.

What the scientists involved here actually discovered is probably quite interesting. They’ve discovered a new sort of ripple in the material they studied. The ripple can carry charge, and because it can behave like a massless particle it can carry charge much faster than electrons can. (To get a basic idea as to how this works, think about waves in the ocean. You can have a wave that goes much faster than the ocean’s current. As the wave travels, no actual water molecules travel from one side to the other. Instead, it is the motion that travels, the energy pushing the wave up and down being transferred along.)
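The ocean-wave analogy can be sketched numerically. In the toy traveling wave below (my own illustrative choice of formula and numbers, not anything from the paper being discussed), each point of the "medium" only oscillates in place, yet the crest moves along at a definite speed ω/k.

```python
import math

# A traveling wave y(x, t) = sin(k*x - omega*t).
# Each point x just oscillates up and down, but the crest
# (the peak of the wave) moves at speed omega / k.
# The numbers here are arbitrary, chosen only for illustration.
k, omega = 1.0, 2.0

def displacement(x, t):
    return math.sin(k * x - omega * t)

def crest_position(t, xs):
    """The grid point where the displacement is largest at time t."""
    return max(xs, key=lambda x: displacement(x, t))

xs = [i * 0.01 for i in range(1000)]  # grid of positions from 0 to 10
print(crest_position(0.0, xs))  # crest near pi/2, about 1.57
print(crest_position(1.0, xs))  # crest has moved about omega/k = 2 to the right
```

No "water molecule" at any position x ever travels anywhere; only the pattern of displacement does, which is the sense in which a quasiparticle is motion in the material rather than a thing sitting in it.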

There’s no reason to compare this to particle physics, to make it sound like another Higgs boson. This sort of thing dilutes the excitement of actual particle discoveries, perpetuating the misconception of particles as just more species to find and catalog. Furthermore, it’s just completely unnecessary: condensed matter is a very exciting field, one that the majority of physicists work on. It doesn’t need to ride on the coat-tails of particle physics rhetoric in order to capture people’s attention. I’ve seen journalists do this kind of thing before, comparing new quasiparticles and composite particles with fundamental particles like the Higgs, and every time I cringe. Don’t you have any respect for the subject you’re writing about?