Tag Archives: particle physics

Cabinet of Curiosities: The Coaction

I had two more papers out this week, continuing my cabinet of curiosities. I’ll talk about one of them today, and the other in (probably) two weeks.

This week, I’m talking about a paper I wrote with an excellent Master’s student, Andreas Forum. Andreas came to me looking for a project on the mathematical side. At first I had a rather nice idea for his project: explaining a proof in an old math paper so that physicists could use it.

Unfortunately, the proof I sent him off to explain didn’t actually exist. Fortunately, by the time we figured this out Andreas had learned quite a bit of math, so he was ready for his next project: a coaction for Calabi-Yau Feynman diagrams.

We chose to focus on one particular diagram, called a sunrise diagram for its resemblance to a sun rising over the sea:

This diagram

Feynman diagrams depict paths traveled by particles. The paths are a metaphor, or organizing tool, for more complicated calculations: computations of the chances that fundamental particles behave in different ways. Each diagram encodes a complicated integral. This one shows one particle splitting into many, then those many particles reuniting into one.

Do the integrals in Feynman diagrams, and you get a variety of different mathematical functions. Many of them integrate to functions called polylogarithms, and we’ve gotten really really good at working with them. We can integrate them up, simplify them, and sometimes we can guess them so well we don’t have to do the integrals at all! We can do all of that because we know how to break polylogarithm functions apart, with a mathematical operation called a coaction. The coaction chops polylogarithms up into simpler parts, parts that are easier to work with.
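To give a flavor of what this chopping looks like (in the conventions most common in the amplitudes literature, though signs and orderings vary from paper to paper), the simplest examples are:

```latex
\Delta(\log z) = \log z \otimes 1 + 1 \otimes \log z ,
\qquad
\Delta\big(\mathrm{Li}_2(z)\big) = \mathrm{Li}_2(z) \otimes 1 + 1 \otimes \mathrm{Li}_2(z) + \mathrm{Li}_1(z) \otimes \log z ,
```

where Li₁(z) = −log(1−z). The point is that the weight-two dilogarithm gets broken into pieces of lower weight, and those lower-weight pieces are much easier to work with.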

More complicated Feynman diagrams give more complicated functions, though. Some of them give what are called elliptic functions. You can think of these functions as involving a geometrical shape, in this case a torus.

Other functions involve more complicated geometrical shapes, in some cases very complicated. For example, some involve the Calabi-Yau manifolds studied by string theorists. These sunrise diagrams are some of the simplest to involve such complicated geometry.

Other researchers had proposed a coaction for elliptic functions back in 2018. When they derived it, though, they left a recipe for something more general. Follow the instructions in the paper, and you could in principle find a coaction for other diagrams, even the Calabi-Yau ones, if you set it up right.

I had an idea for how to set it up right, and in the grand tradition of supervisors everywhere I got Andreas to do the dirty work of applying it. Despite the delay of our false start and despite the fact that this was probably in retrospect too big a project for a normal Master’s thesis, Andreas made it work!

Our result, though, is a bit weird. The coaction is a powerful tool for polylogarithms because it chops them up finely: keep chopping, and you get down to very simple functions. Our coaction isn’t quite so fine: we don’t chop our functions into as many parts, and the parts are more mysterious, more difficult to handle.

We think these are temporary problems though. The recipe we applied turns out to be a recipe with a lot of choices to make, less like Julia Child and more like one of those books where you mix-and-match recipes. We believe the community can play with the parameters of this recipe, finding new versions of the coaction for new uses.

This is one of the shiniest of the curiosities in my cabinet this year; I hope it gets put to good use.

Why the Antipode Was Supposed to Be Useless

A few weeks back, Quanta Magazine had an article about a new discovery in my field, called antipodal duality.

Some background: I’m a theoretical physicist, and I work on finding better ways to make predictions in particle physics. Folks in my field make these predictions with formulas called “scattering amplitudes” that encode the probability that particles bounce, or scatter, in particular ways. One trick we’ve found is that these formulas can often be written as “words” in a kind of “alphabet”. If we know the alphabet, we can make our formulas much simpler, or even guess formulas we could never have calculated any other way.

Quanta’s article describes how a few friends of mine (Lance Dixon, Ömer Gürdoğan, Andrew McLeod, and Matthias Wilhelm) noticed a weird pattern in two of these formulas, from two different calculations. If you flip the “words” around, back to front (an operation called the antipode), you go from a formula describing one collision of particles to a formula for totally different particles. Somehow, the two calculations are “dual”: two different-seeming descriptions that secretly mean the same thing.
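As a cartoon of the flip itself (just the reversal; the real antipodal duality also transforms the individual letters, which this toy ignores), the antipode on a “word” in a shuffle-type algebra reverses the letters and tacks on a sign:

```python
def antipode(word):
    """Toy antipode on a 'word' of letters: reverse the word,
    with a sign (-1)^length, as in a shuffle Hopf algebra.
    (The real duality also substitutes letters, ignored here.)"""
    sign = (-1) ** len(word)
    return sign, word[::-1]

sign, flipped = antipode(["a", "b", "c"])
print(sign, flipped)  # -1 ['c', 'b', 'a']
```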

Quanta quoted me for their article, and I was (pleasantly) baffled. See, the antipode was supposed to be useless. The mathematicians told us it was something the math allows us to do, like you’re allowed to order pineapple on pizza. But just like pineapple on pizza, we couldn’t imagine a situation where we actually wanted to do it.

What Quanta didn’t say was why we thought the antipode was useless. That’s a hard story to tell, one that wouldn’t fit in a piece like that.

It fits here, though. So in the rest of this post, I’d like to explain why flipping around words is such a strange, seemingly useless thing to do. It’s strange because it swaps two things that in physics we thought should be independent: branch cuts and derivatives, or particles and symmetries.

Let’s start with the first things in each pair: branch cuts, and particles.

The first few letters of our “word” tell us something mathematical, and they tell us something physical. Mathematically, they tell us ways that our formula can change suddenly, and discontinuously.

Take a logarithm, the inverse of e^x. You’re probably used to plugging in positive numbers, and getting out something reasonable, that changes in a smooth and regular way: after all, e^x is always positive, right? But in mathematics, you don’t have to just use positive numbers. You can use negative numbers. Even more interestingly, you can use complex numbers. And if you take the logarithm of a complex number, and look at the imaginary part, it looks like this:

Mostly, this complex logarithm still seems to be doing what it’s supposed to, changing in a nice slow way. But there is a weird “cut” in the graph for negative numbers: a sudden jump, from π to −π. That jump is called a “branch cut”.
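You can see the jump yourself with Python’s cmath, sneaking up on the negative real axis from either side (a quick numerical check, nothing specific to any physics calculation):

```python
import cmath

# Imaginary part of log(z) just above and just below the negative real axis:
above = cmath.log(complex(-1.0, 1e-12)).imag   # just above the cut
below = cmath.log(complex(-1.0, -1e-12)).imag  # just below the cut

print(above, below)  # approximately pi and -pi: the sudden jump
```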

As physicists, we usually don’t like our formulas to make sudden changes. A change like this is an infinitely fast jump, and we don’t like infinities much either. But we do have one good use for a formula like this, because sometimes our formulas do change suddenly: when we have enough energy to make a new particle.

Imagine colliding two protons together, like at the LHC. Colliding particles doesn’t just break the protons into pieces: due to Einstein’s famous E=mc^2, it can create new particles as well. But to create a new particle, you need enough energy: mc^2 worth of energy. So as you dial up the energy of your protons, you’ll notice a sudden change: you couldn’t create, say, a Higgs boson, and now you can. Our formulas represent some of those kinds of sudden changes with branch cuts.

So the beginning of our “words” represent branch cuts, and particles. The end represents derivatives and symmetries.

Derivatives come from the land of calculus, a place spooky to those with traumatic math class memories. Derivatives shouldn’t be so spooky though. They’re just ways we measure change. If we have a formula that is smoothly changing as we change some input, we can describe that change with a derivative.

The ending of our “words” tell us what happens when we take a derivative. They tell us which ways our formulas can smoothly change, and what happens when they do.

In doing so, they tell us about something some physicists make sound spooky, called symmetries. Symmetries are changes we can make that don’t really change what’s important. For example, you could imagine lifting up the entire Large Hadron Collider and (carefully!) carrying it across the ocean, from France to the US. We’d expect that, once all the scared scientists return and turn it back on, it would start getting exactly the same results. Physics has “translation symmetry”: you can move, or “translate” an experiment, and the important stuff stays the same.

These symmetries are closely connected to derivatives. If changing something doesn’t change anything important, that should be reflected in our formulas: they shouldn’t change either, so their derivatives should be zero. If instead the symmetry isn’t quite true, if it’s what we call “broken”, then by knowing how it was “broken” we know what the derivative should be.
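Here’s a toy numerical version of that statement (the potential V below is made up purely for illustration): an energy that depends only on separations doesn’t change when you translate everything, so its derivative with respect to the translation is zero.

```python
def V(x1, x2):
    # A toy interaction energy that depends only on the separation:
    return (x1 - x2) ** 2

def shifted_energy(a, x1=1.0, x2=3.0):
    # Translate the whole "experiment" by a:
    return V(x1 + a, x2 + a)

# Numerical derivative with respect to the translation:
eps = 1e-6
dV = (shifted_energy(eps) - shifted_energy(-eps)) / (2 * eps)
print(dV)  # 0.0: translation symmetry means a vanishing derivative
```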

So branch cuts tell us about particles, derivatives tell us about symmetries. The weird thing about the antipode, the un-physical bizarre thing, is that it swaps them. It makes the particles of one calculation determine the symmetries of another.

(And lest you’ve heard about particles with symmetries, like gluons and SU(3)…this is a different kind of thing. I don’t have enough room to explain why here, but it’s completely unrelated.)

Why the heck does this duality exist?

A commenter on the last post asked me to speculate. I said there that I have no clue, and that’s most of the answer.

If I had to speculate, though, my answer might be disappointing.

Most of the things in physics we call “dualities” have fairly deep physical meanings, linked to twisting spacetime in complicated ways. AdS/CFT isn’t fully explained, but it seems to be related to something called the holographic principle, the idea that gravity ties together the inside of space with the boundary around it. T duality, an older concept in string theory, is explained: a consequence of how strings “see” the world in terms of things to wrap around and things to spin around. In my field, one of our favorite dualities links back to this as well: the duality between amplitudes and Wilson loops is explained by fermionic T-duality.

The antipode doesn’t twist spacetime, it twists the mathematics. And it may be it matters only because the mathematics is so constrained that it’s forced to happen.

The trick that Lance Dixon and co. used to discover antipodal duality is the same trick I used with Lance to calculate complicated scattering amplitudes. It relies on taking a general guess of words in the right “alphabet”, and constraining it: using mathematical and physical principles it must obey and throwing out every illegal answer until there’s only one answer left.

Currently, there are some hints that the principles used for the different calculations linked by antipodal duality are “antipodal mirrors” of each other: that different principles have the same implication when the duality “flips” them around. If so, then it could be this duality is in some sense just a coincidence: not a coincidence limited to a few calculations, but a coincidence limited to a few principles. Thought of in this way, it might not tell us a lot about other situations, it might not really be “deep”.

Of course, I could be wrong about this. It could be much more general, could mean much more. But in that context, I really have no clue what to speculate. The antipode is weird: it links things that really should not be physically linked. We’ll have to see what that actually means.

Amplitudes 2022 Retrospective

I’m back from Amplitudes 2022 with more time to write, and (besides the several papers I’m working on) that means writing about the conference! Casual readers be warned, there’s no way around this being a technical post, I don’t have the space to explain everything!

I mostly said all I wanted about the way the conference was set up in last week’s post, but one thing I didn’t say much about was the conference dinner. Most conference dinners are the same aside from the occasional cool location or haggis speech. This one did have a cool location, and a cool performance by a blind pianist, but the thing I really wanted to comment on was the setup. Typically, the conference dinner at Amplitudes is a sit-down affair: people sit at tables in one big room, maybe getting up occasionally to pick up food, and eventually someone gives an after-dinner speech. This time the tables were standing tables, spread across several rooms. This was a bit tiring on a hot day, but it did have the advantage that it naturally mixed people around. Rather than mostly talking to “your table”, you’d wander, ending up at a new table every time you picked up new food or drinks. It was a good way to meet new people, a surprising number of whom, in my case, apparently read this blog. It did make it harder to do an after-dinner speech, so instead Lance gave an after-conference speech, complete with the now-well-established running joke where Greta Thunberg tries to get us to fly less.

(In another semi-running joke, the organizers tried to figure out who had attended the most of the yearly Amplitudes conferences over the years. Weirdly, no-one has attended all twelve.)

In terms of the content, and things that stood out:

Nima is getting close to publishing his newest ‘hedron, the surfacehedron, and correspondingly was able to give a lot more technical detail about it. (For his first and most famous amplituhedron, see here.) He still didn’t have enough time to explain why he has to use category theory to do it, but at least he was concrete enough that it was reasonably clear where the category theory was showing up. (I wasn’t there for his eight-hour lecture at the school the week before, maybe the students who stuck around until 2am learned some category theory there.) Just from listening in on side discussions, I got the impression that some of the ideas here actually may have near-term applications to computing Feynman diagrams: this hasn’t been a feature of previous ‘hedra and it’s an encouraging development.

Alex Edison talked about progress towards this blog’s namesake problem, the question of whether N=8 supergravity diverges at seven loops. Currently they’re working at six loops on the N=4 super Yang-Mills side, not yet in a form it can be “double-copied” to supergravity. The tools they’re using are increasingly sophisticated, including various slick tricks from algebraic geometry. They are looking to the future: if their methods are going to reach seven loops, they first have to make six loops a breeze.

Xi Yin approached a puzzle with methods from String Field Theory, prompting the heretical-for-us title “on-shell bad, off-shell good”. A colleague reminded me of a local tradition for dealing with heretics.

While Nima was talking about a new ‘hedron, other talks focused on the original amplituhedron. Paul Heslop found that the amplituhedron is not literally a positive geometry, despite slogans to the contrary, but what it is is nonetheless an interesting generalization of the concept. Livia Ferro has made more progress on her group’s momentum amplituhedron: previously only valid at tree level, they now have a picture that can accommodate loops. I wasn’t sure this would be possible, there are a lot of things that work at tree level and not for loops, so I’m quite encouraged that this one made the leap successfully.

Sebastian Mizera, Andrew McLeod, and Hofie Hannesdottir all had talks that could be roughly summarized as “deep principles made surprisingly useful”. Each took topics that were explored in the 60’s and translated them into concrete techniques that could be applied to modern problems. There were surprisingly few talks on the completely concrete end, on direct applications to collider physics. I think Simone Zoia’s was the only one to actually feature collider data with error bars, which might explain why I singled him out to ask about those error bars later.

Likewise, Matthias Wilhelm’s talk was the only one on functions beyond polylogarithms, the elliptic functions I’ve also worked on recently. I wonder if the under-representation of some of these topics is due to the existence of independent conferences: in a year when in-person conferences are packed in after being postponed across the pandemic, when there are already dedicated conferences for elliptics and practical collider calculations, maybe people are just a bit too tired to go to Amplitudes as well.

Talks on gravitational waves seem to have stabilized at roughly a day’s worth, which seems reasonable. While the subfield’s capabilities continue to be impressive, it’s also interesting how often new conceptual challenges appear. It seems like every time a challenge to their results or methods is resolved, a new one shows up. I don’t know whether the field will ever get to a stage of “business as usual”, or whether it will be novel qualitative questions “all the way up”.

I haven’t said much about the variety of talks bounding EFTs and investigating their structure, though this continues to be an important topic. And I haven’t mentioned Lance Dixon’s talk on antipodal duality, largely because I’m planning a post on it later: Quanta Magazine had a good article on it, but there are some aspects even Quanta struggled to cover, and I think I might have a good way to do it.

At Bohr-100: Current Themes in Theoretical Physics

During the pandemic, some conferences went online. Others went dormant.

Every summer before the pandemic, the Niels Bohr International Academy hosted a conference called Current Themes in High Energy Physics and Cosmology. Current Themes is a small, cozy conference, a gathering of close friends some of whom happen to have Nobel prizes. Holding it online would be almost missing the point.

Instead, we waited. Now, at least in Denmark, the pandemic is quiet enough to hold this kind of gathering. And it’s a special year: the 100th anniversary of Niels Bohr’s Nobel, the 101st of the Niels Bohr Institute. So it seemed like the time for a particularly special Current Themes.

For one, it lets us use remarkably simple signs

A particularly special Current Themes means some unusually special guests. Our guests are usually pretty special already (Gerard ’t Hooft and David Gross are regulars, to just name the Nobelists), but this year we also had Alexander Polyakov. Polyakov’s talk had a magical air to it. In a quiet voice, broken by an impish grin when he surprised us with a joke, Polyakov began to lay out five unsolved problems he considered interesting. In the end, he only had time to present one, related to turbulence. When Gross asked him to name the remaining four, the second included a term most of us didn’t recognize (striction, known in a magnetic context, which he wanted to explore gravitationally); the discussion hung while he defined it, and we never did learn what the other three problems were.

At the big 100th anniversary celebration earlier in the spring, the Institute awarded a few years’ worth of its Niels Bohr Institute Medal of Honor. One of the recipients, Paul Steinhardt, couldn’t make it then, so he got his medal now. After the obligatory publicity photos were taken, Steinhardt entertained us all with a colloquium about his work on quasicrystals, including the many adventures involved in finding the first example “in the wild”. I can’t do the story justice in a short blog post, but if you don’t have the opportunity to watch him speak about it, I hear his book is good.

An anniversary conference should have some historical elements as well. For this one, these were ably provided by David Broadhurst, who gave an after-dinner speech cataloguing things he liked about Bohr. Some was based on public information, but the real draw were the anecdotes: his own reminiscences, and those of people he knew who knew Bohr well.

The other talks covered interesting ground: from deep approaches to quantum field theory, to new tools to understand black holes, to the implications of causality itself. One out of the ordinary talk was by Sabrina Pasterski, who advocated a new model of physics funding. I liked some elements (endowed organizations to further a subfield) and am more skeptical of others (mostly involving NFTs). Regardless, it and the rest of the conference spurred a lot of good debate.

The Folks With the Best Pictures

Sometimes I envy astronomers. Particle physicists can write books full of words and pages of colorful graphs and charts, and the public won’t retain any of it. Astronomers can mesmerize the world with a single picture.

NASA just released the first images from its James Webb Space Telescope. They’re impressive, and not merely visually: in twelve hours, they probe deeper than the Hubble Space Telescope managed in weeks on the same patch of sky, as well as gathering data that can show what kinds of molecules are present in the galaxies.

(If you’re curious how the James Webb images compare to Hubble ones, here’s a nice site comparing them.)

Images like this enter the popular imagination. The Hubble telescope’s deep field has appeared on essentially every artistic product one could imagine. As of writing this, searching for “Hubble” on Etsy gives almost 5,000 results. “JWST”, the acronym for the James Webb Space Telescope, already gives over 1,000, including several on the front page that already contain just-released images. Despite the Large Hadron Collider having operated for over a decade, searching “LHC” also leads to just around 1,000 results…and a few on the front page are actually pictures of the JWST!

It would be great as particle physicists to have that kind of impact…but I think we shouldn’t stress ourselves too much about it. Ultimately astronomers will always have this core advantage. Space is amazing, visually stunning and mind-bogglingly vast. It has always had a special place for human cultures, and I’m happy for astronomers to inherit that place.

Carving Out the Possible

If you imagine a particle physicist, you probably picture someone spending their whole day dreaming up new particles. They figure out how to test those particles in some big particle collider, and for a lucky few their particle gets discovered and they get a Nobel prize.

Occasionally, a wiseguy asks if we can’t just cut out the middleman. Instead of dreaming up particles to test, why don’t we just write down every possible particle and test for all of them? It would save the Nobel committee a lot of money at least!

It turns out, you can sort of do this, through something called Effective Field Theory. An Effective Field Theory is a type of particle physics theory that isn’t quite true: instead, it’s “effectively” true, meaning true as long as you don’t push it too far. If you test it at low energies and don’t “zoom in” too much then it’s fine. Crank up your collider energy high enough, though, and you expect the theory to “break down”, revealing new particles. An Effective Field Theory lets you “hide” unknown particles inside new interactions between the particles we already know.
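In formulas, the “zooming out” is the standard expansion of the heavy particle’s propagator (written schematically here, dropping factors of i and overall sign conventions): a particle of mass M exchanged with momentum p contributes

```latex
\frac{1}{p^2 - M^2} \;=\; -\frac{1}{M^2}\left(1 + \frac{p^2}{M^2} + \frac{p^4}{M^4} + \cdots\right),
\qquad p^2 \ll M^2 ,
```

so at energies far below M the exchange looks like a point-like interaction of strength 1/M², plus a tower of small corrections.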

To help you picture how this works, imagine that the pink and blue lines here represent familiar particles like electrons and quarks, while the dotted line is a new particle somebody dreamed up. (The picture is called a Feynman diagram, if you don’t know what that is check out this post.)

In an Effective Field Theory, we “zoom out”, until the diagram looks like this:

Now we’ve “hidden” the new particle. Instead, we have a new type of interaction between the particles we already know.

So instead of writing down every possible new particle we can imagine, we only have to write down every possible interaction between the particles we already know.

That’s not as hard as it sounds. In part, that’s because not every interaction actually makes sense. Some of the things you could write down break some important rules. They might screw up cause and effect, letting something happen before its cause instead of after. They might screw up probability, giving you a formula for the chance something happens that comes out to a number greater than 100%.

Using these rules you can play a kind of game. You start out with a space representing all of the interactions you can imagine. You begin chipping at it, carving away parts that don’t obey the rules, and you see what shape is left over. You end up with plots that look a bit like carving a ham.
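As a cartoon of the game (the “rules” below are entirely made up for illustration, standing in for real causality and probability constraints, not any actual positivity bound):

```python
import random

random.seed(1)

def allowed(c1, c2):
    """Toy 'rules' on two interaction strengths: a positivity-like
    bound and a cap. Made up purely to illustrate the carving step."""
    return c2 >= 0 and c2 <= 1 - c1 ** 2

# Start with every interaction we can imagine, then carve away the illegal ones:
points = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(10000)]
kept = [p for p in points if allowed(*p)]

print(f"{len(kept)} of {len(points)} imagined theories survive the rules")
```

The shape traced out by the surviving points is the analogue of the “carved ham” plots: the space of theories that could conceivably be right.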

People in my subfield are getting good at this kind of game. It isn’t quite our standard fare: usually, we come up with tricks to make calculations with specific theories easier. Instead, many groups are starting to look at these general, effective theories. We’ve made friends with groups in related fields, building new collaborations. There still isn’t one clear best way to do this carving, so each group manages to find a way to chip a little farther. Out of the block of every theory we could imagine, we’re carving out a space of theories that make sense, theories that could conceivably be right. Theories that are worth testing.

The Most Anthropic of All Possible Worlds

Today, we’d call Leibniz a mathematician, a physicist, and a philosopher. As a mathematician, Leibniz turned calculus into something his contemporaries could actually use. As a physicist, he championed a doomed theory of gravity. In philosophy, he seems to be most remembered for extremely cheaty arguments.

Free will and determinism? Can’t it just be a coincidence?

I don’t blame him for this. Faced with a tricky philosophical problem, it’s enormously tempting to just blaze through with an answer that makes every subtlety irrelevant. It’s a temptation I’ve succumbed to time and time again. Faced with a genie, I would always wish for more wishes. On my high school debate team, I once forced everyone at a tournament to switch sides with some sneaky definitions. It’s all good fun, but people usually end up pretty annoyed with you afterwards.

People were annoyed with Leibniz too, especially with his solution to the problem of evil. If you believe in a benevolent, all-powerful god, as Leibniz did, why is the world full of suffering and misery? Leibniz’s answer was that even an all-powerful god is constrained by logic, so if the world contains evil, it must be logically impossible to make the world any better: indeed, we live in the best of all possible worlds. Voltaire famously made fun of this argument in Candide, dragging a Leibniz-esque Professor Pangloss through some of the most creative miseries the eighteenth century had to offer. It’s possibly the most famous satire of a philosopher, easily beating out Aristophanes’ The Clouds (which is also great).

Physicists can also get accused of cheaty arguments, and probably the most mocked is the idea of a multiverse. While it hasn’t had its own Candide, the multiverse has been criticized by everyone from bloggers to Nobel prizewinners. Leibniz wanted to explain the existence of evil, physicists want to explain “unnaturalness”: the fact that the kinds of theories we use to explain the world can’t seem to explain the mass of the Higgs boson. To explain it, these physicists suggest that there are really many different universes, separated widely in space or built in to the interpretation of quantum mechanics. Each universe has a different Higgs mass, and ours just happens to be the one we can live in. This kind of argument is called “anthropic” reasoning. Rather than the best of all possible worlds, it says we live in the world best-suited to life like ours.

I called Leibniz’s argument “cheaty”, and you might presume I think the same of the multiverse. But “cheaty” doesn’t mean “wrong”. It all depends what you’re trying to do.

Leibniz’s argument and the multiverse both work by dodging a problem. For Leibniz, the problem of evil becomes pointless: any evil might be necessary to secure a greater good. With a multiverse, naturalness becomes pointless: with many different laws of physics in different places, the existence of one like ours needs no explanation.

In both cases, though, the dodge isn’t perfect. To really explain any given evil, Leibniz would have to show why it is secretly necessary in the face of a greater good (and Pangloss spends Candide trying to do exactly that). To explain any given law of physics, the multiverse needs to use anthropic reasoning: it needs to show that that law needs to be the way it is to support human-like life.

This sounds like a strict requirement, but in both cases it’s not actually so useful. Leibniz could (and Pangloss does) come up with an explanation for pretty much anything. The problem is that no-one actually knows which aspects of the universe are essential and which aren’t. Without a reliable way to describe the best of all possible worlds, we can’t actually test whether our world is one.

The same problem holds for anthropic reasoning. We don’t actually know what conditions are required to give rise to people like us. “People like us” is very vague, and dramatically different universes might still contain something that can perceive and observe. While it might seem like there are clear requirements, so far nobody has pinned them down well enough to do very much with this type of reasoning.

However, for both Leibniz and most of the physicists who believe anthropic arguments, none of this really matters. That’s because the “best of all possible worlds” and “most anthropic of all possible worlds” aren’t really meant to be predictive theories. They’re meant to say that, once you are convinced of certain things, certain problems don’t matter anymore.

Leibniz, in particular, wasn’t trying to argue for the existence of his god. He began the argument convinced that a particular sort of god existed: one that was all-powerful and benevolent, and set in motion a deterministic universe bound by logic. His argument is meant to show that, if you believe in such a god, then the problem of evil can be ignored: no matter how bad the universe seems, it may still be the best possible world.

Similarly, the physicists convinced of the multiverse aren’t really getting there through naturalness. Rather, they’ve become convinced of a few key claims: that the universe is rapidly expanding, leading to a proliferating multiverse, and that the laws of physics in such a multiverse can vary from place to place, due to the huge landscape of possible laws of physics in string theory. If you already believe those things, then the naturalness problem can be ignored: we live in some randomly chosen part of the landscape hospitable to life, which can be anywhere it needs to be.

So despite their cheaty feel, both arguments are fine…provided you agree with their assumptions. Personally, I don’t agree with Leibniz. For the multiverse, I’m less sure. I’m not confident the universe expands fast enough to create a multiverse, I’m not even confident it’s speeding up its expansion now. I know there’s a lot of controversy about the math behind the string theory landscape, about whether the vast set of possible laws of physics are as consistent as they’re supposed to be…and of course, as anyone must admit, we don’t know whether string theory itself is true! I don’t think it’s impossible that the right argument comes around and convinces me of one or both claims, though. These kinds of arguments, “if assumptions, then conclusion”, are the kind of thing that seems useless for a while…until someone convinces you of the assumptions, and they matter once again.

So in the end, despite the similarity, I’m not sure the multiverse deserves its own Candide. I’m not even sure Leibniz deserved Candide. But hopefully by understanding one, you can understand the other just a bit better.

Trapped in the (S) Matrix

I’ve tried to convince you that you are a particle detector. You choose your experiment, what actions you take, and then observe the outcome. If you focus on that view of yourself, data out and data in, you start to wonder if the world outside really has any meaning. Maybe you’re just trapped in the Matrix.

From a physics perspective, you actually are trapped in a sort of a Matrix. We call it the S Matrix.

“S” stands for scattering. The S Matrix is a formula we use, a mathematical tool that tells us what happens when fundamental particles scatter: when they fly towards each other, colliding or bouncing off. For each action we could take, the S Matrix gives the probability of each outcome: for each pair of particles we collide, the chance we detect different particles at the end. You can imagine putting every possible action in a giant vector, and every possible observation in another giant vector. Arrange the probabilities for each action-observation pair in a big square grid, and that’s a matrix.

Actually, I lied a little bit. This is particle physics, and particle physics uses quantum mechanics. Because of that, the entries of the S Matrix aren’t probabilities: they’re complex numbers called probability amplitudes. You have to multiply each entry by its complex conjugate to get a probability out.

Ok, that probably seemed like a lot of detail. Why am I telling you all this?

What happens when you multiply the whole S Matrix by its complex conjugate? (Using matrix multiplication, naturally.) You can still pick your action, but now you’re adding up every possible outcome. You’re asking “suppose I take an action. What’s the chance that anything happens at all?”

The answer to that question is 1. There is a 100% chance that something happens, no matter what you do. That’s just how probability works.
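If you like, you can watch this bookkeeping happen in a few lines of code. Here’s a toy two-by-two S Matrix (the numbers are invented purely for illustration, not from any real scattering process): multiplying each entry by its complex conjugate gives probabilities, and summing over every outcome for a given action gives exactly one.

```python
import numpy as np

# A toy 2x2 S matrix. The angle is an invented number for
# illustration; what matters is that the matrix is unitary.
theta = 0.3
S = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])

# Entries are probability amplitudes: multiply each by its
# complex conjugate (i.e. take |S|^2) to get probabilities.
probabilities = np.abs(S) ** 2

# For each action (row), sum over every possible outcome (column).
# "What's the chance that anything happens at all?" Answer: 1.
print(probabilities.sum(axis=1))  # → [1. 1.]

# Equivalently, S times its own conjugate transpose is the
# identity matrix: that's unitarity.
print(np.allclose(S @ S.conj().T, np.eye(2)))  # → True
```

Any unitary matrix would do here; the cosine-and-sine form is just a simple way to guarantee the rows come out with total probability one.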

We call this property unitarity, the property of giving “unity”, or one. And while it may seem obvious, it isn’t always so easy. That’s because we don’t actually know the S Matrix formula most of the time. We have to approximate it with a partial formula, one that only works for some situations. And unitarity can tell us how much we can trust that formula.

Imagine doing an experiment trying to detect neutrinos, like the IceCube Neutrino Observatory. For you to detect the neutrinos, they must scatter off of electrons, kicking them off of their atoms or transforming them into another charged particle. You can then watch what happens as the energy of the neutrinos increases. If you do, you’ll notice the probability start to increase too: it gets more and more likely that a neutrino will scatter an electron. You might propose a formula for this, one that grows with energy. [EDIT: Example changed after a commenter pointed out an issue with it.]

If you keep increasing the energy, though, you run into a problem. Those probabilities you predict are going to keep increasing. Eventually, you’ll predict a probability greater than one.
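A toy version of the problem makes it vivid. The formula and the energy scale below are completely made up (this is not the real neutrino calculation), but any prediction that keeps growing with energy runs into the same wall:

```python
# A hypothetical low-energy formula: a probability that grows with
# energy as (E / scale)**2. Both the form and the scale are invented
# for illustration, not taken from the real neutrino calculation.
SCALE = 100.0  # an invented energy scale

def predicted_probability(energy):
    """Toy prediction that grows with energy."""
    return (energy / SCALE) ** 2

for energy in [10.0, 50.0, 100.0, 300.0]:
    p = predicted_probability(energy)
    flag = "  <-- violates unitarity!" if p > 1 else ""
    print(f"E = {energy:5.0f}: predicted probability {p:.2f}{flag}")
```

Below the scale the formula looks perfectly sensible; push the energy past it and the “probability” sails above one, which no amount of experimental cleverness can make meaningful.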

That tells you that your theory might have been fine before, but doesn’t work for every situation. There’s something you don’t know about, which will change your formula when the energy gets high. You’ve violated unitarity, and you need to fix your theory.

In this case, the fix is already known. Neutrinos and electrons interact due to another particle, called the W boson. If you include that particle, you fix the problem: your probabilities stop going up and up; instead, their growth slows, and they stay below one.
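In the toy language from before, a “fixed” theory looks like this. The functional form is invented for illustration (it is not the actual W-boson calculation), but it captures the behavior: growth at low energy that levels off below one.

```python
# A "fixed" version of a toy growing probability: including a new
# particle (as the W boson does for neutrino scattering) tames the
# growth. The formula here is invented for illustration only.
def fixed_probability(energy, scale=100.0):
    x = (energy / scale) ** 2
    return x / (1.0 + x)  # grows like x at low energy, levels off below one

for energy in [10.0, 100.0, 300.0, 1000.0]:
    print(f"E = {energy:6.0f}: probability {fixed_probability(energy):.3f}")
```

However high you push the energy, this version approaches one without ever crossing it, so unitarity is safe.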

For other theories, we don’t yet know the fix. Try to write down an S Matrix for colliding gravitational waves (or really, gravitons), and you meet the same kind of problem, a probability that just keeps growing. Currently, we don’t know how that problem should be solved: string theory is one answer, but may not be the only one.

So even if you’re trapped in an S Matrix, sending data out and data in, you can still use logic. You can still demand that probability makes sense, that your matrix never gives a chance greater than 100%. And you can learn something about physics when you do!

At New Ideas in Cosmology

The Niels Bohr Institute is hosting a conference this week on New Ideas in Cosmology. I’m no cosmologist, but it’s a pretty cool field, so as a local I’ve been sitting in on some of the talks. So far they’ve had a selection of really interesting speakers with quite a variety of interests, including a talk by Roger Penrose with his trademark hand-stippled drawings.

Including this old classic

One thing that has impressed me has been the “interdisciplinary” feel of the conference. By all rights this should be one “discipline”, cosmology. But in practice, each speaker came at the subject from a different direction. They all had a shared core of knowledge, common models of the universe they all compare to. But the knowledge they brought to the subject varied: some had deep knowledge of the mathematics of gravity, others worked with string theory, or particle physics, or numerical simulations. Each talk, aware of the varied audience, was a bit “colloquium-style”, introducing a framework before diving into the latest research. Each speaker knew enough to talk to the others, but not so much that they couldn’t learn from them. It’s been unexpectedly refreshing, a real interdisciplinary conference done right.

You Are a Particle Detector

I mean that literally. True, you aren’t a 7,000 ton assembly of wires and silicon, like the ATLAS experiment inside the Large Hadron Collider. You aren’t managed by thousands of scientists and engineers, trying to sift through data from a billion pairs of protons smashing into each other every second. Nonetheless, you are a particle detector. Your senses detect particles.

Like you, and not like you

Your ears take vibrations in the air and magnify them, vibrating the fluid of your inner ear. Tiny hairs communicate that vibration to your nerves, which signal your brain. Particle detectors, too, magnify signals: photomultipliers take a single particle of light (called a photon) and set off a cascade, multiplying the signal one hundred million times so it can be registered by a computer.

Your nose and tongue are sensitive to specific chemicals, recognizing particular shapes and ignoring others. A particle detector must also be picky. A detector like ATLAS measures far more particle collisions than it could ever record. Instead, it learns to recognize particular “shapes”, collisions that might hold evidence of something interesting. Only those collisions are recorded, passed along to computer centers around the world.

Your sense of touch tells you something about the energy of a collision: specifically, the energy things have when they collide with you. Particle detectors do this with calorimeters, which generate signals based on a particle’s energy. Different parts of your body are more sensitive than others: your mouth and hands are much more sensitive than your back and shoulders. Different parts of a particle detector have different calorimeters: an electromagnetic calorimeter for particles like electrons, and a less sensitive hadronic calorimeter that can catch particles like protons.

You are most like a particle detector, though, in your eyes. The cells of your eyes, rods and cones, detect light, and thus detect photons. Your eyes are more sensitive than you think: you are likely able to detect even a single photon. In an experiment, three people sat in darkness for forty minutes, then heard two sounds, one of which might come accompanied by a single photon of light flashed into their eye. The three didn’t notice the photons every time (that isn’t possible for such a faint sensation), but they did much better than a random guess.

(You can be even more literal than that. An older professor here told me stories of the early days of particle physics. To check that a machine was on, sometimes physicists would come close, and watch for flashes in the corner of their vision: a sign of electrons flying through their eyeballs!)

You are a particle detector, but you aren’t just a particle detector. A particle detector can’t move; its thousands of tons are fixed in place. That gives it blind spots: for example, the tube the particles travel through is kept clear, with no detectors in it, so the particles can get through. Physicists have to account for this, correcting for the missing space in their calculations. In contrast, if you have a blind spot, you can act: move, and see the world from a new point of view. You observe not merely a series of particles, but the results of your actions: what happens when you turn one way or another, when you make one choice or another.

So you aren’t merely a particle detector: you’re a particle experiment. You can learn a lot more than those big heaps of wires and silicon could on their own. You’re like the whole scientific effort: colliders and detectors, data centers and scientists around the world. May you learn as much in your life as the experiments do in theirs.