
Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline: challenging QCD calculations that required powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that Lorenzo Tancredi and collaborators had previously represented as elliptic, but that turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.
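For a taste of the integration-by-parts step (a textbook one-loop example, not one from the talks): the identities come from the fact that the integral of a total derivative vanishes in dimensional regularization. For the one-loop tadpole, a single such identity reduces every power of the propagator to one “master” integral:

```latex
% One-loop tadpole: I(a) = \int d^D k \, (k^2 - m^2)^{-a}.
% The integral of a total derivative vanishes in dimensional regularization:
\[
0 \;=\; \int d^D k \,\frac{\partial}{\partial k^\mu}
        \left[\frac{k^\mu}{(k^2 - m^2)^{a}}\right]
  \;=\; (D - 2a)\, I(a) \;-\; 2a\, m^2\, I(a+1) ,
\]
% so every I(a) reduces to the single master integral I(1):
\[
I(a+1) \;=\; \frac{D - 2a}{2a\, m^2}\, I(a) .
\]
```

Real two-loop QCD integrals work the same way in principle, just with far more identities and many more masters, which is why the reduction step is a bottleneck worth replacing.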

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on giving a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka: overall, an impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account at the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. Overall, I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always, I compensated by accidentally going under-time. I think folks had fun regardless.

Though I don’t know how many people recognized this guy

Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to have useful calculations to compare to the LHC. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up its calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they’re doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things, they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!

The Black Box Theory of Everything

What is science? What makes a theory scientific?

There’s a picture we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.

In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words, you aren’t doing science; you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).

Seems reasonable, right? Let’s try a thought experiment.

In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.

Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s-era physicists.

Suppose you wrote a computer program that combined the best of modern QCD: BlackHat and more from the amplitudes side, plus the best jet algorithms and lattice QCD code. Suppose this program could reproduce any calculation in QCD that anyone can do today. Further, suppose you don’t care about silly things like making your code readable. Since I began the list above with BlackHat, we’ll call the combined box of different codes BlackBox.

Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.

Behold, your theory

Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as…well, as well as QCD does today.

(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment; we can fudge things a bit.)

Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!

But is this “quark theory” scientific?

“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.

And in practice? I doubt that many scientists would care.

“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.

Why am I thinking about this? For two reasons:

First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.

Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).

If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.

What if we do the experiment first, though?

If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.

I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.

My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.

The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?

That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified, you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.
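Here’s a minimal sketch of the idea as code (hypothetical, of course: no such theory exists as software). The whole “theory” is just a lookup table with one entry per experiment:

```python
# The Black Box Theory of Everything as a data structure: one free
# parameter per experiment, each fixed only by doing that experiment.

class BlackBoxTheory:
    def __init__(self):
        self.parameters = {}  # experiment name -> measured result

    def fix_parameter(self, experiment, result):
        """Do the experiment, then record the result as a new parameter."""
        self.parameters[experiment] = result

    def predict(self, experiment):
        """Only 'predicts' experiments whose parameter is already fixed."""
        if experiment not in self.parameters:
            raise ValueError("No prediction: do the experiment first.")
        return self.parameters[experiment]

theory = BlackBoxTheory()
theory.fix_parameter("Higgs boson mass (GeV)", 125.0)
print(theory.predict("Higgs boson mass (GeV)"))  # a postdiction, never a prediction
```

Nothing here can ever be falsified, because nothing is ever predicted in advance.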

What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)

It’s not just that it can’t make predictions. You could make it a Black Box All But One Theory instead, one that predicts one experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, one that predicts everything we can predict now and just leaves out everything we’re still confused by.

The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, it’s a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.


Made of Quarks Versus Made of Strings

When you learn physics in school, you learn it in terms of building blocks.

First, you learn about atoms. Indivisible elements, as the Greeks foretold…until you learn that they aren’t indivisible. You learn that atoms are made of electrons, protons, and neutrons. Then you learn that protons and neutrons aren’t indivisible either, they’re made of quarks. They’re what physicists call composite particles, particles made of other particles stuck together.

Hearing this story, you notice a pattern. Each time physicists find a more fundamental theory, they find that what they thought were indivisible particles are actually composite. So when you hear physicists talking about the next, more fundamental theory, you might guess it has to work the same way. If quarks are made of, for example, strings, then each quark is made of many strings, right?

Nope! As it turns out, there are two different things physicists can mean when they say a particle is “made of” a more fundamental particle. Sometimes they mean the particle is composite, like the proton is made of quarks. But sometimes, like when they say particles are “made of strings”, they mean something different.

To understand what this “something different” is, let’s go back to quarks for a moment. You might have heard there are six types, or flavors, of quarks: up and down, strange and charm, top and bottom. The different types have different mass and electric charge. You might have also heard that quarks come in different colors: red, green, and blue. You might wonder, then: aren’t there really eighteen types of quark? Red up quarks, green top quarks, and so forth?

Physicists don’t think about it that way. Unlike the different flavors, the different colors of quark have a more unified mathematical description. Changing the color of a quark doesn’t change its mass or electric charge. All it changes is how the quark interacts with other particles via the strong nuclear force. Know how one color works, and you know how the other colors work. Different colors can also “mix” together, similarly to how different situations can mix together in quantum mechanics: just as Schrödinger’s cat can be both alive and dead, a quark can be both red and green.
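In symbols, the color part of a quark’s state is just a quantum superposition, something like this schematic state (ignoring flavor, spin, and everything else):

```latex
\[
|\text{quark}\rangle \;=\; \alpha\,|r\rangle + \beta\,|g\rangle + \gamma\,|b\rangle,
\qquad |\alpha|^2 + |\beta|^2 + |\gamma|^2 = 1 .
\]
```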

This same kind of thing is involved in another example, electroweak unification. You might have heard that electromagnetism and the weak nuclear force are secretly the same thing. Each force has corresponding particles: the familiar photon for electromagnetism, and W and Z bosons for the weak nuclear force. Unlike the different colors of quarks, photons and W and Z bosons have different masses from each other. It turns out, though, that they still come from a unified mathematical description: they’re “mixtures” (in the same Schrödinger’s-cat-esque sense) of the particles from two more fundamental forces (sometimes called “weak isospin” and “weak hypercharge”). The reason they have different masses isn’t their own fault, but the fault of the Higgs: the Higgs field we have in our universe interacts with different parts of this unified force differently, so the corresponding particles end up with different masses.
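Concretely (a standard textbook formula): the photon field A and the Z boson field are the two mixtures of the weak hypercharge field B and the third weak isospin field W³, rotated into each other by the weak mixing angle θ_W:

```latex
\[
A_\mu \;=\; \cos\theta_W\, B_\mu + \sin\theta_W\, W^3_\mu, \qquad
Z_\mu \;=\; -\sin\theta_W\, B_\mu + \cos\theta_W\, W^3_\mu .
\]
```

The Higgs gives a mass to one combination (the Z) and leaves the orthogonal combination (the photon) massless.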

A physicist might say that electromagnetism and the weak force are “made of” weak isospin and weak hypercharge. And it’s that kind of thing that physicists mean when they say that quarks might be made of strings, or the like: not that quarks are composite, but that quarks and other particles might have a unified mathematical description, and look different only because they’re interacting differently with something else.

This isn’t to say that quarks and electrons can’t be composite as well. They might be; we don’t know for sure. If they are, the forces binding them together must be very strong, strong enough that our most powerful colliders can’t make them wiggle even a little out of shape. The tricky part is that composite particles get mass from the energy holding them together. A particle held together by very powerful forces would normally be very massive; if you want it to end up lighter, you have to construct your theory carefully to do that. So while occasionally people will suggest theories where quarks or electrons are composite, these theories aren’t common. Most of the time, if a physicist says that quarks or electrons are “made of” something else, they mean something more like “particles are made of strings” than like “protons are made of quarks”.

The Rippling Pond Universe

[Background: Someone told me they couldn’t imagine popularizing Quantum Field Theory in the same flashy way people popularize String Theory. Naturally I took this as a challenge. Please don’t take any statements about what “really exists” here too seriously, this isn’t intended as metaphysics, just metaphor.]

 

You probably learned about atoms in school.

Your teacher would have explained that these aren’t the same atoms the ancient Greeks imagined. Democritus thought of atoms as indivisible, unchanging spheres, the fundamental constituents of matter. We know, though, that atoms aren’t indivisible. They’re clouds of electrons, buzzing in their orbits around a nucleus of protons and neutrons. Chemists can divide the electrons from the rest, nuclear physicists can break the nucleus. The atom is not indivisible.

And perhaps your teacher remarked on how amazing it is, that the nucleus is such a tiny part of the atom, that the atom, and thus all solid matter, is mostly empty space.

 

You might have learned that protons and neutrons, too, are not indivisible. That each proton, and each neutron, is composed of three particles called quarks, particles which can be briefly freed by powerful particle colliders.

And you might have wondered, then, even if you didn’t think to ask: are quarks atoms? The real atoms, the Greek atoms, solid indestructible balls of fundamental matter?

 

They aren’t, by the way.

 

You might have gotten an inkling of this, learning about beta decay. In beta decay, a neutron transforms, becoming a proton, an electron, and a neutrino. Look for an electron inside a neutron, and you won’t find one. Even if you look at the quarks, you see the same transformation: a down quark becomes an up quark, plus an electron, plus a neutrino. If quarks were atoms, indivisible and unchanging, this couldn’t happen. There’s nowhere for the electron to hide.
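You can check the bookkeeping yourself: electric charge balances on both sides, with no electron needing to hide anywhere. (Strictly speaking, the neutral particle that ripples away with the electron is an antineutrino, balancing lepton number.)

```latex
\[
d \;\to\; u + e^- + \bar{\nu}_e :
\qquad
-\tfrac{1}{3} \;=\; +\tfrac{2}{3} \;+\; (-1) \;+\; 0 .
\]
```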

 

In fact, there are no atoms, not the way the Greeks imagined. Just ripples.


Picture the universe as a pond. This isn’t a still pond: something has disturbed it, setting ripples and whirlpools in motion. These ripples and whirlpools skim along the surface of the pond, eddying together and scattering apart.

Our universe is not a simple pond, and so these are not simple ripples. They shine and shimmer, each with their own bright hue, colors beyond our ordinary experience that mix in unfamiliar ways. The different-colored ripples interact, merge and split, and the pond glows with their light.

Stand back far enough, and you notice patterns. See that red ripple, that stays together and keeps its shape, that meets other ripples and interacts in predictable ways. You might imagine the red ripple is an atom, truly indivisible…until it splits, transforms, into ripples of new colors. The quark has changed, down to up, an electron and a neutrino rippling away.

All of our world is encoded in the colors of these ripples, each kind of charge its own kind of hue. With a wink (like your teacher’s, telling you of empty atoms), I can tell you that distance itself is just a kind of ripple, one that links other ripples together. The pond’s very nature as a place is defined by the ripples on it.

 

This is Quantum Field Theory, the universe of ripples. Democritus said that in truth there are only atoms and the void, but he was wrong. There are no atoms. There is only the void. It ripples and shimmers, and each of us lives as a collection of whirlpools, skimming the surface, seeming concrete and real and vital…until the ripples dissolve, and a new pattern comes.

Amplitudes Papers I Haven’t Had Time to Read

Interesting amplitudes papers seem to come in groups. Several interesting papers went up this week, and I’ve been too busy to read any of them!

Well, that’s not quite true: I did manage to read this paper, by James Drummond, Jack Foster, and Ömer Gürdoğan. At six pages long, it wasn’t hard to fit in, and the result could be quite useful. The way my collaborators and I calculate amplitudes involves building up a mathematical object called a symbol, described in terms of a string of “letters”. What James and collaborators have found is a restriction on which “letters” can appear next to each other, based on the properties of a mathematical object called a cluster algebra. Oddly, the restriction seems to have the same effect as a more physics-based condition we’d been using earlier. This suggests that the abstract mathematical restriction and the physics-based restriction are somehow connected, but we don’t yet understand how. It could also be useful for letting us calculate amplitudes with more particles: previously we thought the number of “letters” we’d have to consider there was going to be infinite, but with James’s restriction we’d only need to consider a finite number.
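To give a flavor of what such a restriction looks like in practice, here’s a toy version in code. The alphabet and the adjacency table are invented for illustration; the real condition comes from the cluster algebra, not from a hand-written table like this:

```python
# Toy model: a "symbol" is a word in an alphabet of letters, and an
# adjacency rule restricts which letters may sit next to each other.
# This alphabet and table are made up; they are not the real condition.

ALLOWED_NEIGHBORS = {
    "a": {"a", "b"},
    "b": {"a", "b", "c"},
    "c": {"b", "c"},
}

def is_admissible(word):
    """Check every adjacent pair of letters against the adjacency table."""
    return all(
        nxt in ALLOWED_NEIGHBORS[cur]
        for cur, nxt in zip(word, word[1:])
    )

print(is_admissible(["a", "b", "c"]))  # True: each step is allowed
print(is_admissible(["a", "c", "b"]))  # False: "c" may not follow "a"
```

Throwing out inadmissible words before you start can shrink an unmanageably large guess down to something finite, which is exactly the hope for amplitudes with more particles.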

I didn’t get a chance to read David Dunbar, John Godwin, Guy Jehu, and Warren Perkins’s paper. They’re computing amplitudes in QCD (which unlike N=4 super Yang-Mills actually describes the real world!) and doing so for fairly complicated arrangements of particles. They claim to get remarkably simple expressions: since that sort of claim was what jump-started our investigations into N=4, I should probably read this if only to see if there’s something there in the real world amenable to our technique.

I also haven’t read Rutger Boels and Hui Luo’s paper yet. From the abstract, I’m still not clear which parts of what they’re describing are new, or how much it improves on existing methods. It will probably take a more thorough reading to find out.

I really ought to read Burkhard Eden, Yunfeng Jiang, Dennis le Plat, and Alessandro Sfondrini’s paper. They’re working on a method referred to as the Hexagon Operator Product Expansion, or HOPE. It’s related to an older method, the Pentagon Operator Product Expansion (POPE), but applicable to trickier cases. I’ve been keeping an eye on the HOPE in part because my collaborators have found the POPE very useful, and the HOPE might enable something similar. It will be interesting to find out how Eden et al.’s paper modifies the HOPE story.

Finally, I’ll probably find the time to read my former colleague Sebastian Mizera’s paper. He’s found a connection between the string-theory-like CHY picture of scattering amplitudes and some unusual mathematical structures. I’m not sure what to make of it until I get a better idea of what those structures are.

Bootstrapping in the Real World

I’ll be at Amplitudes, my subfield’s big yearly conference, next week, so I don’t have a lot to talk about. That said, I wanted to give a shout-out to my collaborator and future colleague Andrew McLeod, who is a co-author (along with Øyvind Almelid, Claude Duhr, Einan Gardi, and Chris White) on a rather cool paper that went up on arXiv this week.

Andrew and I work on “bootstrapping” calculations in quantum field theory. In particular, we start with a guess for what the result will be based on a specific set of mathematical functions (in my case, “hexagon functions” involving interactions of six particles). We then narrow things down, using other calculations that by themselves only predict part of the result, until we know the right answer. The metaphor here is that we’re “pulling ourselves up by our own bootstraps”, skipping a long calculation by essentially just guessing the answer.
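As a cartoon of the method (with a made-up basis of functions and made-up constraints, vastly simpler than the real hexagon-function setup), the whole game is: write an ansatz with unknown coefficients, then solve the linear conditions the answer has to satisfy:

```python
# Cartoon of "bootstrapping": guess the answer as a linear combination
# of basis functions with unknown coefficients, then pin the coefficients
# down with constraints. The basis and constraints here are invented.

import sympy as sp

x = sp.symbols("x")
c1, c2, c3 = sp.symbols("c1 c2 c3")
basis = [sp.log(x), sp.log(x) ** 2, sp.log(1 - x)]
ansatz = c1 * basis[0] + c2 * basis[1] + c3 * basis[2]

constraints = [
    sp.Eq(ansatz.subs(x, sp.Rational(1, 2)), 0),              # toy condition at a special point
    sp.Eq(sp.diff(ansatz, x).subs(x, sp.Rational(1, 2)), 0),  # toy condition on the derivative
]
print(sp.solve(constraints, [c1, c2, c3], dict=True))
# Two conditions fix two of the three coefficients in terms of the third;
# in a real bootstrap, more constraints pin the answer down completely.
```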

This method has worked pretty well…in a toy model anyway. The calculations I’ve done with it use N=4 super Yang-Mills, a simpler cousin of the theories that describe the real world. There, fewer functions can show up, so our guess is much less unwieldy than it would be otherwise.

What’s impressive about Andrew and co.’s new paper is that they apply this method, not to N=4 super Yang-Mills, but to QCD, the theory that describes quarks and gluons in the real world. This is exactly the sort of thing I’ve been hoping to see more of, these methods built into something that can help with real, useful calculations.

Currently, what they can do is still fairly limited. For the particular problem they’re looking at, the functions required ended up being relatively simple, involving interactions between at most four particles. So far, they’ve just reproduced a calculation done by other means. Going further (more “loops”) would involve interactions between more particles, as well as mixing different types of functions (different “transcendental weight”), either of which make the problem much more complicated.

That said, the simplicity of their current calculation is also a reason to be optimistic.  Their starting “guess” had just thirteen parameters, while the one Andrew and I are working on right now (in N=4 super Yang-Mills) has over a thousand. Even if things get a lot more complicated for them at the next loop, we’ve shown that “a lot more complicated” can still be quite doable.

So overall, I’m excited. It looks like there are contexts in which one really can “bootstrap” up calculations in a realistic theory, and that’s a method that could end up really useful.