Why I Wasn’t Bothered by the “Science” in Avengers: Endgame

Avengers: Endgame has been out for a while, so I don’t have to worry about spoilers, right? Right?

Right?

Anyway, time travel. The spoiler is time travel. They bring back everyone who was eliminated in the previous movie, using time travel.

They also attempt to justify the time travel, using Ant Man-flavored quantum mechanics. This works about as plausibly as you’d expect for a superhero whose shrinking powers not only let him talk to ants, but also go to a “place” called “The Quantum Realm”. Along the way, they manage to throw in scattered references to a half-dozen almost-relevant scientific concepts. It’s the kind of thing that makes some physicists squirm.

And I enjoyed it.

Movies tend to treat time travel in one of two ways. The most reckless, and most common, lets characters rewrite history as they go, like Marty McFly almost erasing himself from existence in Back to the Future. This never makes much sense, and the characters in Avengers: Endgame make fun of it, listing a series of movies that do time travel this way (inexplicably including A Wrinkle in Time, which has no time travel at all).

In the other common model, time travel has to happen in self-consistent loops: you can’t change the past, but you can go back and be part of it. This is the model used, for example, in Harry Potter, where Potter is saved by a mysterious spell only to travel back in time and cast it himself. This at least makes logical sense; whether it’s physically possible is an open question.

Avengers: Endgame uses the model of self-consistent loops, but with a twist: if you don’t manage to make your loop self-consistent you instead spawn a parallel universe, doomed to suffer the consequences of your mistakes. This is a rarer setup, though not a unique one: the only other example I can think of at the moment is Homestuck.

Is there any physics justification for the Avengers: Endgame model? Maybe not. But you can at least guess what they were thinking.

The key clue is a quote from Tony Stark, rattling off a stream of movie-grade scientific gibberish:

“Quantum fluctuation messes with the Planck scale, which then triggers the Deutsch Proposition. Can we agree on that?”

From this quote, one can guess not only what scientific results inspired the writers of Avengers: Endgame, but possibly also which Wikipedia entry. David Deutsch is a physicist, and an advocate for the many-worlds interpretation of quantum mechanics. In 1991 he wrote a paper discussing what happens to quantum mechanics in the environment of a wormhole. In it he pointed out that you can make a self-consistent time travel loop, not just in classical physics, but out of a quantum superposition. This offers a weird solution to the classic grandfather paradox of time travel: instead of causing a paradox, you can form a superposition. As Scott Aaronson explains here, “you’re born with probability 1/2, therefore you kill your grandfather with probability 1/2, therefore you’re born with probability 1/2, and so on—everything is consistent.” If you believe in the many-worlds interpretation of quantum mechanics, a time traveler in this picture is traveling between two different branches of the wave-function of the universe: you start out in the branch where you were born, kill your grandfather, and end up in the branch where you weren’t born. This isn’t exactly how Avengers: Endgame handles time travel, but it’s close enough that it seems like a likely explanation.
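
If you want to see the logic of that 1/2-probability loop in action, here’s a minimal sketch (my own toy illustration in Python, not code from Deutsch’s paper): treat one trip around the time loop as a map on the probability of being born, and look for the fixed point Deutsch’s condition demands.

```python
# Toy sketch of Deutsch's self-consistency condition (my own
# illustration, not anything from the 1991 paper). One trip around the
# time loop maps the probability p of being born to 1 - p: born with
# probability p, you kill your grandfather with probability p, so
# you're born with probability 1 - p. Deutsch's condition asks for a
# state the loop maps to itself: a fixed point.

def loop(p: float) -> float:
    """One pass around the closed timelike curve."""
    return 1.0 - p

# Find the fixed point of loop(p) = p by bisection on loop(p) - p,
# which is positive at p = 0 and negative at p = 1.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if loop(mid) - mid > 0:
        lo = mid
    else:
        hi = mid

print(0.5 * (lo + hi))  # -> 0.5: Aaronson's "everything is consistent"
```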

David Deutsch’s argument uses a wormhole, but how do the Avengers make a wormhole in the first place? There we have less information, just vague references to quantum fluctuations at the Planck scale, the scale at which quantum gravity becomes important. There are a few things they could have had in mind, but one of them might have been physicists Leonard Susskind and Juan Maldacena’s conjecture that quantum entanglement is related to wormholes, a conjecture known as ER=EPR.

Long-time readers of the blog might remember I got annoyed a while back, when Caltech promoted ER=EPR using a different Disney franchise. The key difference here is that Avengers: Endgame isn’t pretending to be educational. Unlike Caltech’s ER=EPR piece, or even the movie Interstellar, Avengers: Endgame isn’t really about physics. It’s a superhero story, one that pairs the occasional scientific term with a character goofily bouncing around from childhood to old age while another character exclaims “you’re supposed to send him through time, not time through him!” The audience isn’t there to learn science, so they won’t come away with any incorrect assumptions.

A movie like Avengers: Endgame doesn’t teach science, or even advertise it. It does celebrate it, though.

That’s why, despite the silly half-correct science, I enjoyed Avengers: Endgame. It’s also why I don’t think it’s inappropriate, as some people do, to classify movies like Star Wars as science fiction. Star Wars and Avengers aren’t really about exploring the consequences of science or technology; they aren’t science fiction in that sense. But they do build off science’s role in the wider culture. They take our world and look at the advances on the horizon, robots and space travel and quantum speculations, and they let their optimism inform their storytelling. That’s not going to be scientifically accurate, and it doesn’t need to be, any more than the comic Abstruse Goose really believes Witten is from Mars. It’s about noticing we live in a scientific world, and having fun with it.

Two Loops, Five Particles

There’s a very long-term view of the amplitudes field that gets a lot of press. We’re supposed to be eliminating space and time, or rebuilding quantum field theory from scratch. We build castles in the clouds, seven-loop calculations and all-loop geometrical quantum jewels.

There’s a shorter-term problem, though, that gets much less press, despite arguably being a bigger part of the field right now. In amplitudes, we take theories and turn them into predictions, order by order and loop by loop. And when we want to compare those predictions to the real world, in most cases the best we can do is two loops and five particles.

Five particles here counts the particles coming in and going out: if two gluons collide and become three gluons, we count that as five particles, two in plus three out. Loops, meanwhile, measure the complexity of the calculation, the number of closed paths you can draw in a Feynman diagram. If you use more loops, you expect more precision: you’re approximating nature step by step.
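
To make that counting concrete, here’s the standard formula (a graph-theory fact I’m adding; it wasn’t spelled out in the post): a connected Feynman diagram with I internal lines and V vertices has

L = I − V + 1

independent loops. A tree diagram has none, and each extra internal line you sew in, beyond what’s needed to connect the diagram, adds one more.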

As a field we’re pretty good at one-loop calculations, enough to do them for pretty much any number of particles. As we try for more loops though, things rapidly get harder. Already for two loops, in many cases, we start struggling. We can do better if we dial down the number of particles: there are three-particle and two-particle calculations that get up to three, four, or even five loops. For more particles though, we can’t do as much. Thus the current state of the art, the field’s short term goal: two loops, five particles.

When you hear people like me talk about crazier calculations, we’ve usually got a trick up our sleeve. Often we’re looking at a much simpler theory, one that doesn’t describe the real world. For example, I like working with a planar theory, with lots of supersymmetry. Remove even one of those simplifications, and suddenly our life becomes a lot harder. Instead of seven loops and six particles, we get genuinely excited about, well, two loops, five particles.

Luckily, “two loops, five particles” is also about as good as the experiments can measure. As the Large Hadron Collider gathers more data, it measures physics to higher and higher precision. Currently, for five-particle processes, its precision is just starting to be comparable with two-loop calculations. The result has been a flurry of activity, applying everything from powerful numerical techniques to algebraic geometry to the problem, getting results that genuinely apply to the real world.

“Two loops, five particles” isn’t as cool a slogan as “space-time is doomed”. It doesn’t get much, or any, media attention. But, steadily and quietly, it’s become one of the hottest topics in the amplitudes field.

Things I’d Like to Know More About

This is an accountability post, of sorts.

As a kid, I wanted to know everything. Eventually, I realized this was a little unrealistic. Doomed to know some things and not others, I picked physics as a kind of triage. Other fields I could learn as an outsider: not well enough to compete with the experts, but enough to at least appreciate what they were doing. After watching a few string theory documentaries, I realized this wasn’t the case for physics: if I was ever going to understand what those string theorists were up to, I would have to go to grad school in string theory.

Over time, this goal lost focus. I’ve become a very specialized creature, an “amplitudeologist”. I no longer have time or energy for my old questions. In an irony that will surprise no-one, a career as a physicist doesn’t leave much time for curiosity about physics.

One of the great things about this blog is how you guys remind me of those old questions, bringing me out of my overspecialized comfort zone. In that spirit, in this post I’m going to list a few things in physics that I really want to understand better. The idea is to make a public commitment: within a year, I want to understand one of these topics at least well enough to write a decent blog post on it.

Wilsonian Quantum Field Theory:

When you first learn quantum field theory as a physicist, you learn how unsightly infinite results get covered up via an ad-hoc-looking process called renormalization. Eventually you learn a more modern perspective, that these infinite results show up because we’re ignorant of the complete theory at high energies. You learn that you can think of theories at a particular scale, and characterize them by what happens when you “zoom” in and out, in an approach codified by the physicist Kenneth Wilson.

While I understand the basics of Wilson’s approach, the courses I took in grad school skipped the deeper implications. This includes the idea of theories that are defined at all energies, “flowing” from an otherwise scale-invariant theory perturbed with extra pieces. Other physicists are much more comfortable thinking in these terms, and the topic is important for quite a few deep questions, including what it means to properly define a theory and where laws of nature “live”. If I’m going to have an informed opinion on any of those topics, I’ll need to go back and learn the Wilsonian approach properly.
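
To give a flavor of what “flowing” means, here’s a minimal sketch (my own toy example in Python, assuming a QCD-like one-loop beta function dg/d ln μ = −b g³; nothing specific to Wilson’s own papers) of a coupling running with energy scale:

```python
# Minimal sketch of Wilsonian "flow" (my own toy example): a theory is
# characterized by how its couplings change as you zoom in energy
# scale. Here, a QCD-like one-loop beta function,
#   d g / d ln(mu) = -b * g**3  with b > 0,
# integrated with simple Euler steps.

import math

b = 0.05            # toy one-loop coefficient (positive: asymptotic freedom)
g = 1.0             # coupling at the starting scale mu = 1
mu_max = 1e3        # scale we flow up to
steps = 10_000
dlnmu = math.log(mu_max) / steps

for _ in range(steps):
    g += -b * g**3 * dlnmu  # the coupling weakens as mu grows

print(f"g(mu = {mu_max:.0f}) = {g:.4f}")
# Smaller than g(1): the theory looks more weakly coupled when you zoom
# to high energies, and more strongly coupled when you flow back down.
# That scale-by-scale picture is Wilson's "flow" in miniature.
```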

Wormholes:

If you’re a fan of science fiction, you probably know that wormholes are the most realistic option for faster-than-light travel, something that is at least allowed by the equations of general relativity. “Most realistic” isn’t the same as “realistic”, though. Opening a wormhole and keeping it stable requires some kind of “exotic matter”, and that matter needs to violate a set of restrictions, called “energy conditions”, that normal matter obeys. Some of these energy conditions are just conjectures, some we even know how to violate, while others are proven to hold for certain types of theories. Some energy conditions don’t rule out wormholes, but instead restrict their usefulness: you can have non-traversable wormholes (basically, two inescapable black holes that happen to meet in the middle), or traversable wormholes where the distance through the wormhole is always longer than the distance outside.
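
To pin down at least one of these (a standard fact I’m adding for concreteness, not something from the original post): the simplest restriction, the null energy condition, demands

T_{\mu\nu} k^\mu k^\nu \geq 0 \quad \text{for every null vector } k^\mu,

where T_{\mu\nu} is the stress-energy tensor of the matter. Ordinary classical matter obeys it, and keeping a wormhole open and traversable requires matter that violates it.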

I’ve seen a few talks on this topic, but I’m still confused about the big picture: which conditions have been proven, what assumptions were needed, and what do they all imply? I haven’t found a publicly-accessible account that covers everything. I owe it to myself as a kid, not to mention everyone who’s a kid now, to get a satisfactory answer.

Quantum Foundations:

Quantum Foundations is a field that many physicists think is a waste of time. It deals with the questions that troubled Einstein and Bohr, questions about what quantum mechanics really means, or why the rules of quantum mechanics are the way they are. These tend to be quite philosophical questions, where it’s hard to tell if people are making progress or just arguing in circles.

I’m more optimistic about philosophy than most physicists, at least when it’s pursued with enough analytic rigor. I’d like to at least understand the leading arguments for the different interpretations, the constraints those interpretations face, and the main loopholes. That way, if I end up concluding the field is a waste of time, at least I’d be making an informed decision.

Look Ma, No Ads!

You might notice a change on the site this week: the ads are gone!

When I started this blog back in 2012, it was just a class project. I didn’t want to spend money on it, so I chose WordPress.com’s free hosting option. A consequence of that option is that WordPress got to post ads. These were pretty mild to begin with, I think most of the early posts didn’t even have ads. It seemed like a reasonable deal.

Over the years, WordPress has quietly been adding more ads, and worse ones. I mostly hadn’t noticed: I use an adblocker. For those who don’t, though, the blog began to look increasingly unprofessional, plastered with the kind of shitty, borderline-scam ads that fill certain parts of the internet. Thanks to everyone who let me know this was happening; I don’t think I would have noticed otherwise. To clarify, I never made any money from these ads: all of the revenue went to WordPress.

As of this week I’ve switched the site to a paid hosting plan. The move is long overdue: the plan is actually pretty cheap, and is the sort of thing I could have easily afforded by myself. As it happens I don’t have to afford it by myself: the grant that funds me, a Marie Curie Individual Fellowship, also funds outreach activities. I already had a message thanking them on my About page, but somehow I hadn’t considered actually using their funding here.

The site’s new plan also comes with a free domain, so you can now reach this site at a new, simpler address: 4gravitons.com. The old 4gravitons.wordpress.com address should still work as well; if there are any glitches, please let me know!

Research Rooms, Collaboration Spaces

Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.

I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.

Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.

That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.

Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.

The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.

The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.

I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.

The Black Box Theory of Everything

What is science? What makes a theory scientific?

There’s a picture we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.

In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words, you aren’t doing science, you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).

Seems reasonable, right? Let’s try a thought experiment.

In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.

Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s-era physicists.

Suppose you wrote a computer program that combined the best of QCD in the modern world: BlackHat and more from the amplitudes side, the best jet algorithms and lattice QCD code, a program that could reproduce any calculation in QCD that anyone can do today. Further, suppose you don’t care about silly things like making your code readable. Since I began the list above with BlackHat, we’ll call the combined box of different codes BlackBox.

Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.

Behold, your theory

Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as… well, as well as QCD does today.

(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment, we can fudge things a bit.)

Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!

But is this “quark theory” scientific?

“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.

And in practice? I doubt that many scientists would care.

“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.

Why am I thinking about this? For two reasons:

First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.

Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).

If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.

What if we do the experiment first, though?

If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.

I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.

My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.

The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?

That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified, you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.
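
In fact, the Black Box Theory is simple enough to write down in full. Here’s a minimal sketch (my own illustration of the thought experiment, with hypothetical names throughout, not anyone’s real code):

```python
# The "Black Box Theory of Everything" in a few lines of Python: one
# free parameter per experiment. The theory memorizes every result,
# so it can never be falsified -- and never predicts.

class BlackBoxTheory:
    def __init__(self):
        self.parameters = {}  # one parameter for each experiment ever done

    def fit(self, experiment: str, result: float) -> None:
        """Do the experiment, then 'fix' its parameter to the outcome."""
        self.parameters[experiment] = result

    def predict(self, experiment: str) -> float:
        """Only ever 'postdicts': it is silent on experiments not yet done."""
        if experiment not in self.parameters:
            raise ValueError("No prediction: do the experiment first.")
        return self.parameters[experiment]

theory = BlackBoxTheory()
theory.fit("Higgs boson mass (GeV)", 125.1)      # measured, then "explained"
print(theory.predict("Higgs boson mass (GeV)"))  # matches observation perfectly
try:
    theory.predict("result of the next collider run")
except ValueError as err:
    print(err)  # the theory has nothing to say until the data is in
```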

What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)

It’s not just that it can’t make predictions. You could make a Black Box All But One Theory instead, one that predicts a single experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, one that predicts everything we can predict now and just leaves out everything we’re still confused by.

The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.


Still Traveling, and a Black Hole

I’m still at the conference in Natal this week, so I don’t have time for a long post. The big news this week was the Event Horizon Telescope’s close-up of the black hole at the center of galaxy M87. If you’re hungry for coverage of that, Matt Strassler has some of his trademark exceptionally clear posts on the topic, while Katie Mack has a nice twitter thread.

Pictured: Not a black hole