Tag Archives: theoretical physics

What I Was Not Saying in My Last Post

Science communication is a gradual process. Anything we say is incomplete, prone to causing misunderstanding. Luckily, we can keep talking, giving a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles aren’t: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of fields, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: None of that post was quantum.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality: things we thought were waves, like light, also behave like particles, and things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: I am not saying that.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field concentrated at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
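To make the contrast concrete, here’s a toy sketch in Python (my own illustration, not a real quantum field: just a number at each point of a one-dimensional grid). The same kind of object, a field value at every point, can be shaped like a localized bump or a spread-out oscillation:

```python
import math

# Toy "field": one number at each point of a 1D grid (not real physics,
# just an illustration of localized vs. spread-out configurations).
xs = [i * 0.1 for i in range(200)]

# "Particle-like" configuration: the field is concentrated near x = 10.
particle_like = [math.exp(-((x - 10.0) ** 2)) for x in xs]

# "Wave-like" configuration: spread out, with a fixed wavelength of 2.
wave_like = [math.cos(2 * math.pi * x / 2.0) for x in xs]

# Same ingredients, different shapes: the bump is negligible away from
# its center, while the wave oscillates across the whole grid.
```

Both lists describe the same sort of thing, a field configuration; only the shape differs, which is the sense in which particles and waves are two faces of one field.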

The second misunderstanding: This isn’t about on-shell vs. off-shell.

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; normally, quantum calculations involve “off-shell” particles that violate those equations.
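In formula form, the condition in question is the “mass shell”: in units where c = 1, a particle of mass m with energy E and momentum p satisfies E² − |p|² = m². A minimal sketch of the bookkeeping, assuming nothing beyond that one formula (the function name is mine):

```python
import math

# On-shell condition in units where c = 1: E^2 - |p|^2 = m^2.
# This returns how badly a four-momentum violates it (0 means on-shell).
def mass_shell_violation(E, px, py, pz, m):
    return E**2 - (px**2 + py**2 + pz**2) - m**2

m, p = 1.0, 3.0
E_on = math.sqrt(m**2 + p**2)   # energy that puts the particle on-shell

print(mass_shell_violation(E_on, p, 0.0, 0.0, m))        # ~0: on-shell
print(mass_shell_violation(E_on + 0.5, p, 0.0, 0.0, m))  # nonzero: off-shell
```

Internal lines in a quantum calculation can carry any E and p, so the violation above can be anything; on-shell methods restrict attention to momenta where it vanishes.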

To again be clear: I’m not arguing with Nima here.

Nima (and other people in our field) will sometimes talk about on-shell vs. off-shell as if it were about particles vs. fields. Normally, physicists will write down a general field and let it be off-shell; we instead try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.

These Ain’t Democritus’s Particles

Physicists talk a lot about fundamental particles. But what do we mean by fundamental?

The Ancient Greek philosopher Democritus thought the world was composed of fundamental indivisible objects, constantly in motion. He called these objects “atoms”, and believed they could never be created or destroyed, with every other phenomenon explained by different types of interlocking atoms.

The things we call atoms today aren’t really like this, as you probably know. Atoms aren’t indivisible: their electrons can be split from their nuclei, and with more energy their nuclei can be split into protons and neutrons. More energy yet, and protons and neutrons can in turn be split into quarks. Still, at this point you might wonder: could quarks be Democritus’s atoms?

In a word, no. Nonetheless, quarks are, as far as we know, fundamental particles. As it turns out, our “fundamental” is very different from Democritus’s. Our fundamental particles can transform.

Think about beta decay. You might be used to thinking of it in terms of protons and neutrons: an unstable neutron decays, becoming a proton, an electron, and an (electron-anti-)neutrino. You might think that when the neutron decays, it literally “decays”, falling apart into smaller pieces.

But when you look at the quarks, the neutron’s smallest pieces, that isn’t the picture at all. In beta decay, a down quark in the neutron changes, turning into an up quark and an unstable W boson. The W boson then decays into an electron and a neutrino, while the up quark becomes part of the new proton. Even looking at the most fundamental particles we know, Democritus’s picture of unchanging atoms just isn’t true.
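One way to see that the quark doesn’t “fall apart” into pre-existing pieces is that only conserved quantities, like electric charge, have to balance at each step. Here’s a little bookkeeping sketch (charges in units of the proton’s charge; the particle labels are just names I’ve chosen):

```python
from fractions import Fraction as F

# Electric charges, in units of the proton's charge.
charge = {
    "d": F(-1, 3),     # down quark
    "u": F(2, 3),      # up quark
    "W-": F(-1),       # W boson
    "e-": F(-1),       # electron
    "nu_e_bar": F(0),  # electron anti-neutrino
}

def charge_balances(before, after):
    return sum(charge[p] for p in before) == sum(charge[p] for p in after)

# Step 1 of beta decay: a down quark becomes an up quark and a W boson.
print(charge_balances(["d"], ["u", "W-"]))           # True
# Step 2: the W boson decays to an electron and an anti-neutrino.
print(charge_balances(["W-"], ["e-", "nu_e_bar"]))   # True
```

The down quark’s charge of −1/3 matches the 2/3 − 1 on the other side. Nothing “inside” the quark survives the transformation; only the conserved totals carry through.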

Could there be some even lower level of reality that works the way Democritus imagined? It’s not impossible. But the key insight of modern particle physics is that there doesn’t need to be.

As far as we know, up quarks and down quarks are both fundamental. Neither is “made of” the other, or “made of” anything else. But they also aren’t little round indestructible balls. They’re manifestations of quantum fields, “ripples” that slosh from one sort to another in complicated ways.

When we ask which particles are fundamental, we’re asking what quantum fields we need to describe reality. We’re asking for the simplest explanation, the simplest mathematical model, that’s consistent with everything we could observe. So “fundamental” doesn’t end up meaning indivisible, or unchanging. It’s fundamental like an axiom: used to derive the rest.

You Can’t Anticipate a Breakthrough

As a scientist, you’re surrounded by puzzles. For every test and every answer, ten new questions pop up. You can spend a lifetime on question after question, never getting bored.

But which questions matter? If you want to change the world, if you want to discover something deep, which questions should you focus on? And which should you ignore?

Last year, my collaborators and I completed a long, complicated project. We were calculating the chance fundamental particles bounce off each other in a toy model of nuclear forces, pushing to very high levels of precision. We managed to figure out a lot, but as always, there were many questions left unanswered in the end.

The deepest of these questions came from number theory. We had noticed surprising patterns in the numbers that showed up in our calculation, reminiscent of the fancifully-named Cosmic Galois Theory. Certain kinds of numbers never showed up, while others appeared again and again. In order to see these patterns, though, we needed an unusual fudge factor: an unexplained number multiplying our result. It was clear that there was some principle at work, a part of the physics intimately tied to particular types of numbers.

There were also questions that seemed less deep. In order to compute our result, we compared to predictions from other methods: specific situations where the question becomes simpler and there are other ways of calculating the answer. As we finished writing the paper, we realized we could do more with some of these predictions. There were situations we didn’t use that nonetheless simplified things, and more predictions that it looked like we could make. By the time we saw these, we were quite close to publishing, so most of us didn’t have the patience to follow these new leads. We just wanted to get the paper out.

At the time, I expected the new predictions would lead, at best, to more efficiency. Maybe we could have gotten our result faster, or cleaned it up a bit. They didn’t seem essential, and they didn’t seem deep.

Fast forward to this year, and some of my collaborators (specifically, Lance Dixon and Georgios Papathanasiou, along with Benjamin Basso) have a new paper up: “The Origin of the Six-Gluon Amplitude in Planar N=4 SYM”. The “origin” in their title refers to one of those situations: when the variables in the problem are small, and you’re close to the “origin” of a plot in those variables. But the paper also sheds light on the origin of our calculation’s mysterious “Cosmic Galois” behavior.

It turns out that the origin (of the plot) can be related to another situation, when the paths of two particles in our calculation almost line up. There, the calculation can be done with another method, called the Pentagon Operator Product Expansion, or POPE. By relating the two, Basso, Dixon, and Papathanasiou were able to predict not only how our result should have behaved near the origin, but how more complicated as-yet un-calculated results should behave.

The biggest surprise, though, lurked in the details. Building their predictions from the POPE method, they found their calculation separated into two pieces: one which described the physics of the particles involved, and a “normalization”. This normalization, predicted by the POPE method, involved some rather special numbers…the same as the “fudge factor” we had introduced earlier! Somehow, the POPE’s physics-based setup “knows” about Cosmic Galois Theory!

It seems that, by studying predictions in this specific situation, Basso, Dixon, and Papathanasiou have accomplished something much deeper: a strong hint of where our mysterious number patterns come from. It’s rather humbling to realize that, were I in their place, I never would have found this: I had assumed “the origin” was just a leftover detail, perhaps useful but not deep.

I’m still digesting their result. For now, it’s a reminder that I shouldn’t try to pre-judge questions. If you want to learn something deep, it isn’t enough to sit thinking about it, just focusing on that one problem. You have to follow every lead you have, work on every problem you can, do solid calculation after solid calculation. Sometimes, you’ll just make incremental progress, just fill in the details. But occasionally, you’ll have a breakthrough, something that justifies the whole adventure and opens your door to something strange and new. And I’m starting to think that when it comes to breakthroughs, that’s always been the only way there.

What Do Theorists Do at Work?

Picture a scientist at work. You’re probably picturing an experiment, test tubes and beakers bubbling away. But not all scientists do experiments. Theoretical physicists work on the mathematical side of the field, making predictions and trying to understand how to make them better. So what does it look like when a theoretical physicist is working?

A theoretical physicist, at work in the equation mines

The first thing you might imagine is that we just sit and think. While that happens sometimes, we don’t actually do that very often. It’s better, and easier, to think by doing something.

Sometimes, this means working with pen and paper. This should be at least a little familiar to anyone who has done math homework. We’ll do short calculations and draw quick diagrams to test ideas, and do a more detailed, organized, “show your work” calculation if we’re trying to figure out something more complicated. Sometimes very short calculations are done on a blackboard instead; it can help us visualize what we’re doing.

Sometimes, we use computers instead. There are computer algebra packages, like Mathematica, Maple, or Sage, that let us do roughly what we would do with pen and paper, but with the speed and efficiency of a computer. Others program in more conventional languages: C++, Python, even Fortran, writing programs that calculate whatever they’re interested in.
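As a small taste of what computer algebra looks like, here’s a sketch using SymPy, a Python library that plays a similar role to the packages above (the specific examples are mine, not from any real calculation):

```python
import sympy as sp

x, a = sp.symbols("x a", positive=True)

# Expand a product symbolically, the way we would on paper.
expr = sp.expand((x + a)**3)

# Do an integral exactly, keeping the parameter a symbolic.
integral = sp.integrate(sp.exp(-a * x), (x, 0, sp.oo))

print(expr)
print(integral)  # 1/a
```

The point is that the answer comes out as a formula in a, not a number, just as it would with pen and paper.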

Sometimes we read. With most of our field’s papers available for free on arXiv.org, we spend time reading up on what our colleagues have done, trying to understand their work and use it to improve ours.

Sometimes we talk. A paper can only communicate so much, and sometimes it’s better to just walk down the hall and ask a question. Conversations are also a good way to quickly rule out bad ideas, and narrow down to the promising ones. Some people find it easier to think clearly about something if they talk to a colleague about it, even (sometimes especially) if the colleague isn’t understanding much.

And sometimes, of course, we do all the other stuff. We write up our papers, making the diagrams nice and the formulas clean. We teach students. We go to meetings. We write grant applications.

It’s been said that a theoretical physicist can work anywhere. That’s kind of true. Some places are more comfortable, and everyone has different preferences: a busy office, a quiet room, a cafe. But with pen and paper, a computer, and people to talk to, we can do quite a lot.

The Road to Reality

I build tools, mathematical tools to be specific, and I want those tools to be useful. I want them to be used to study the real world. But when I build those tools, most of the time, I don’t test them on the real world. I use toy models, simpler cases, theories that don’t describe reality and weren’t intended to.

I do this, in part, because it lets me stay one step ahead. I can do more with those toy models, answer more complicated questions with greater precision, than I can for the real world. I can do more ambitious calculations, and still get an answer. And by doing those calculations, I can start to anticipate problems that will crop up for the real world too. Even if we can’t do a calculation yet for the real world, if it requires too much precision or too many particles, we can still study it in a toy model. Then when we’re ready to do those calculations in the real world, we know better what to expect. The toy model will have shown us some of the key challenges, and how to tackle them.

There’s a risk, working with simpler toy models. The risk is that their simplicity misleads you. When you solve a problem in a toy model, could you solve it only because the toy model is easy? Or would a similar solution work in the real world? What features of the toy model did you need, and which are extra?

The only way around this risk is to be careful. You have to keep track of how your toy model differs from the real world. You must keep in mind difficulties that come up on the road to reality: the twists and turns and potholes that real-world theories will give you. You can’t plan around all of them; that’s why you’re working with a toy model in the first place. But for a few key, important ones, you should keep your eye on the horizon. You should keep in mind that, eventually, the simplifications of the toy model will go away. And you should have ideas, perhaps not full plans but at least ideas, for how to handle some of those difficulties. If you put the work in, you stand a good chance of building something that’s useful, not just for toy models, but for explaining the real world.

Why You Might Want to Bootstrap

A few weeks back, Quanta Magazine had an article about attempts to “bootstrap” the laws of physics, starting from simple physical principles and pulling out a full theory “by its own bootstraps”. This kind of work is a cornerstone of my field, a shared philosophy that motivates a lot of what we do. Building on deep older results, people in my field have found that just a few simple principles are enough to pick out specific physical theories.

There are limits to this. These principles pick out broad traits of theories: gravity versus the strong force versus the Higgs boson. As far as we know they don’t separate more closely related forces, like the strong nuclear force and the weak nuclear force. (Originally, the Quanta article accidentally made it sound like we know why there are four fundamental forces: we don’t, and the article’s phrasing was corrected.) More generally, a bootstrap method isn’t going to tell you which principles are the right ones. For any set of principles, you can always ask “why?”

With that in mind, why would you want to bootstrap?

First, it can make your life simpler. Those simple physical principles may be clear at the end, but they aren’t always obvious at the start of a calculation. If you don’t make good use of them, you might find you’re calculating many things that violate those principles, things that in the end all add up to zero. Bootstrapping can let you skip that part of the calculation, and sometimes go straight to the answer.

Second, it can suggest possibilities you hadn’t considered. Sometimes, your simple physical principles don’t select a unique theory. Some of the options will be theories you’ve heard of, but some might be theories that never would have come up, or even theories that are entirely new. Trying to understand the new theories, to see whether they make sense and are useful, can lead to discovering new principles as well.

Finally, even if you don’t know which principles are the right ones, some principles are better than others. If there is an ultimate theory that describes the real world, it can’t be logically inconsistent. That’s a start, but it’s quite a weak requirement. There are principles that aren’t required by logic itself, but that still seem important in making the world “make sense”. Often, we appreciate these principles only after we’ve seen them at work in the real world.

The best example I can think of is relativity: while Newtonian mechanics is logically consistent, it requires a preferred reference frame, a fixed notion of which things are moving and which things are still. This seemed reasonable for a long time, but now that we understand relativity the idea of a preferred reference frame seems like it should have been obviously wrong. It introduces something arbitrary into the laws of the universe, a “why is it that way?” question that doesn’t have an answer. That doesn’t mean it’s logically inconsistent, or impossible, but it does make it suspect in a way other ideas aren’t.

Part of the hope of these kinds of bootstrap methods is that they uncover principles like that, principles that aren’t mandatory but that are still in some sense “obvious”. Hopefully, enough principles like that really do specify the laws of physics. And if they don’t, we’ll at least have learned how to calculate better.

Calculating the Hard Way, for Science!

I had a new paper out last week, with Jacob Bourjaily and Matthias Volk. We’re calculating the probability that particles bounce off each other in our favorite toy model, N=4 super Yang-Mills. And this time, we’re doing it the hard way.

The “easy way” we didn’t take is one I have a lot of experience with. Almost as long as I’ve been writing this blog, I’ve been calculating these particle probabilities by “guesswork”: starting with a plausible answer, then honing it down until I can be confident it’s right. This might sound reckless, but it works remarkably well, letting us calculate things we could never have hoped for with other methods. The catch is that “guessing” is much easier when we know what we’re looking for: in particular, it works much better in toy models than in the real world.

Over the last few years, though, I’ve been using a much more “normal” method, one that so far has a better track record in the real world. This method, too, works better than you would expect, and we’ve managed some quite complicated calculations.

So we have an “easy way”, and a “hard way”. Which one is better? Is the hard way actually harder?

To test that, you need to do the same calculation both ways, and see which is easier. You want it to be a fair test: if “guessing” only works in the toy model, then you should do the “hard” version in the toy model as well. And you don’t want to give “guessing” any unfair advantages. In particular, the “guess” method works best when we know a lot about the result we’re looking for: what it’s made of, what symmetries it has. In order to do a fair test, we must use that knowledge to its fullest to improve the “hard way” as well.

We picked an example in the middle: not too easy, and not too hard, a calculation that was done a few years back “the easy way” but not yet done “the hard way”. We plugged in all the modern tricks we could, trying to use as much of what we knew as possible. We trained a grad student: Matthias Volk, who did the lion’s share of the calculation and learned a lot in the process. We worked through the calculation, and did it properly the hard way.

Which method won?

In the end, the hard way was indeed harder…but not by that much! Most of the calculation went quite smoothly, with only a few difficulties at the end. Just five years ago, when the calculation was done “the easy way”, I doubt anyone would have expected the hard way to be viable. But with modern tricks it wasn’t actually that hard.

This is encouraging. It tells us that the “hard way” has potential, that it’s almost good enough to compete at this kind of calculation. It tells us that the “easy way” is still quite powerful. And it reminds us that the more we know, and the more we apply our knowledge, the more we can do.