Tag Archives: quantum mechanics

Light and Lens, Collider and Detector

Why do particle physicists need those enormous colliders? Why does it take a big, expensive, atom-smashing machine to discover what happens on the smallest scales?

A machine like the Large Hadron Collider seems pretty complicated. But at its heart, it’s basically just a huge microscope.

Familiar, right?

If you’ve ever used a microscope in school, you probably had one with a light switch. Forget to turn on the light, and you spend a while confused about why you can’t see anything before you finally remember to flick the switch. Just like seeing something normally, seeing something with a microscope means that light is bouncing off that thing and hitting your eyes. Because of this, microscopes are limited by the wavelength of the light that they use. Try to look at something much smaller than that wavelength and the image will be too blurry to understand.

To see smaller details then, people use light with smaller wavelengths. Using massive X-ray producing machines called synchrotrons, scientists can study matter on the sub-nanometer scale. To go further, scientists can take advantage of wave-particle duality, and use electrons instead of light. The higher the energy of the electrons, the smaller their wavelength. The best electron microscopes can see objects measured in angstroms, not just nanometers.

Less familiar?

A particle collider pushes this even further. The Large Hadron Collider accelerates protons until they have 6.5 Tera-electron-Volts of energy. That might be an unfamiliar type of unit, but if you’ve seen it before you can run the numbers, and estimate that this means the LHC can see details below the attometer scale. That’s a quintillionth of a meter, or a hundred million times smaller than an atom.
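
If you’re curious how that estimate goes, here’s a back-of-the-envelope sketch of my own (the inputs are standard numbers, but the code is just an illustration, not an official LHC figure): treat each 6.5 TeV proton as a wave and compute its wavelength from λ ≈ hc/E.

```python
# Back-of-the-envelope estimate (my own sketch, not an official LHC number):
# a 6.5 TeV proton, treated as a wave, has wavelength lambda ~ h*c / E.

h_times_c = 1239.84      # Planck's constant times the speed of light, in eV*nm
energy = 6.5e12          # 6.5 TeV per proton, in electron-volts

wavelength_nm = h_times_c / energy
wavelength_m = wavelength_nm * 1e-9

print(f"wavelength ~ {wavelength_m:.1e} m")              # about 1.9e-19 m
print(f"that's {wavelength_m / 1e-18:.2f} attometers")   # below the attometer scale
```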

A microscope isn’t just light, though, and a collider isn’t just high-energy protons. If it were, we could just wait and look at the sky. So-called cosmic rays are protons and other particles that travel to us from outer space. These can have very high energy: protons with similar energy to those in the LHC hit our atmosphere every day, and rays have been detected that were millions of times more powerful.

People sometimes ask why we can’t just use these cosmic rays to study particle physics. While we can certainly learn some things from cosmic rays, they have a big limitation. They have the “light” part of a microscope, but not the “lens”!

A microscope lens magnifies what you see. Starting from a tiny image, the lens blows it up until it’s big enough that you can see it with your own eyes. Particle colliders have similar technology, using their particle detectors. When two protons collide inside the LHC, they emit a flurry of other particles: photons and electrons, muons and mesons. Each of these particles is too small to see, let alone distinguish with the naked eye. But close to the collision there are detector machines that absorb these particles and magnify their signal. A single electron hitting one of these machines triggers a cascade of more and more electrons, in proportion to the energy of the electron that entered the machine. In the end, you get a strong electrical signal, which you can record with a computer. There are two big machines that do this at the Large Hadron Collider, each with its own independent scientific collaboration to run it. They’re called ATLAS and CMS.

The different layers of the CMS detector, magnifying signals from different types of particles.

So studying small scales needs two things: the right kind of “probe”, like light or protons, and a way to magnify the signal, like a lens or a particle detector. That’s hard to do without a big expensive machine…unless nature is unusually convenient. One interesting possibility is to try to learn about particle physics via astronomy. In the Big Bang particles collided with very high energy, and as the universe has expanded since then those details have been magnified across the sky. That kind of “cosmological collider” has the potential to teach us about physics at much smaller scales than any normal collider could reach. A downside is that, unlike in a collider, we can’t run the experiment over and over again: our “cosmological collider” only ran once. Still, if we want to learn about the very smallest scales, some day that may be our best option.

Who Is, and Isn’t, Counting Angels on a Pinhead

How many angels can dance on the head of a pin?

It’s a question famous for its sheer pointlessness. While probably no-one ever had that exact debate, “how many angels fit on a pin” has become a metaphor, first for a host of old theology debates that went nowhere, and later for any academic study that seems like a waste of time. Occasionally, physicists get accused of doing this: typically string theorists, but also people who debate interpretations of quantum mechanics.

Are those accusations fair? Sometimes yes, sometimes no. In order to tell the difference, we should think about what’s wrong, exactly, with counting angels on the head of a pin.

One obvious answer is that knowing the number of angels that fit on a needle’s point is useless. Wikipedia suggests that was the origin of the metaphor in the first place, a pun on “needle’s point” and “needless point”. But this answer is a little too simple, because this would still be a useful debate if angels were real and we could interact with them. “How many angels fit on the head of a pin” is really a question about whether angels take up space, whether two angels can be at the same place at the same time. Asking that question about particles led physicists to bosons and fermions, which among other things led us to invent the laser. If angelology worked, perhaps we would have angel lasers as well.

Be not afraid of my angel laser

“If angelology worked” is key here, though. Angelology didn’t work: it didn’t lead to angel-based technology. And while Medieval people couldn’t have known that for certain, maybe they could have guessed. When people accuse academics of “counting angels on the head of a pin”, they’re saying those academics should be able to guess that their work is destined for uselessness.

How do you guess something like that?

Well, one problem with counting angels is that nobody doing the counting had ever seen an angel. Counting angels on the head of a pin implies debating something you can’t test or observe. That can steer you off-course pretty easily, into conclusions that are either useless or just plain wrong.

This can’t be the whole of the problem though, because of mathematics. We rarely accuse mathematicians of counting angels on the head of a pin, but the whole point of math is to proceed by pure logic, without an experiment in sight. Mathematical conclusions can sometimes be useless (though we can never be sure, some ideas are just ahead of their time), but we don’t expect them to be wrong.

The key difference is that mathematics has clear rules. When two mathematicians disagree, they can look at the details of their arguments, make sure every definition is as clear as possible, and discover which one made a mistake. Working this way, what they build is reliable. Even if it isn’t useful yet, the result is still true, and so may well be useful later.

In contrast, when you imagine Medieval monks debating angels, you probably don’t imagine them with clear rules. They might quote contradictory bible passages, argue over the everyday meanings of words, and win based more on who was poetic and authoritative than on who had the better argument. Picturing a debate over how many angels can fit on the head of a pin, it seems more like Calvinball than like mathematics.

This then, is the heart of the accusation. Saying someone is just debating how many angels can dance on a pin isn’t merely saying they’re debating the invisible. It’s saying they’re debating in a way that won’t go anywhere, a debate without solid basis or reliable conclusions. It’s saying, not just that the debate is useless now, but that it will likely always be useless.

As an outsider, you can’t just dismiss a field because it can’t do experiments. What you can and should do, is dismiss a field that can’t produce reliable knowledge. This can be hard to judge, but a key sign is to look for these kinds of Calvinball-style debates. Do people in the field seem to argue the same things with each other, over and over? Or do they make progress and open up new questions? Do the people talking seem to be just the famous ones? Or are there cases of young and unknown researchers who happen upon something important enough to make an impact? Do people just list prior work in order to state their counter-arguments? Or do they build on it, finding consequences of others’ trusted conclusions?

A few corners of string theory do have this Calvinball feel, as do a few of the debates about the fundamentals of quantum mechanics. But if you look past the headlines and blogs, most of each of these fields seems more reliable. Rather than interminable back-and-forth about angels and pinheads, these fields are quietly accumulating results that, one way or another, will give people something to build on.

Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate; I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn by their observations just as we learn by our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables was unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
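
To make “something you can do algebra with” slightly more concrete, here’s a minimal sketch of my own (a single qubit in Python, not anything the post itself discusses): observables become matrices, and the relationships between them, like the commutators behind uncertainty principles, are things you can compute.

```python
import numpy as np

# Toy "algebra of observables" for a single qubit -- my own illustration.
# Observables are Hermitian matrices: you can add them, multiply them,
# and check relations between them.

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)   # spin along x
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # spin along z

# The commutator [A, B] = AB - BA encodes an uncertainty-style relationship:
# when it's nonzero, the two observables can't both be sharply known at once.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(commutator)          # nonzero: x-spin and z-spin don't commute

# Products of observables stay inside the algebra -- that's what makes it
# "something you can do algebra with".
print(sigma_x @ sigma_x)   # the identity matrix
```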

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline, you find paradoxes, only to resolve them when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we typically have done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

The Multiverse You Can Visit Is Not the True Multiverse

I don’t want to be the kind of science blogger who constantly complains about science fiction, but sometimes I can’t help myself.

When I blogged about zero-point energy a few weeks back, there was a particular book that set me off. Ian McDonald’s River of Gods depicts the interactions of human and AI agents in a fragmented 2047 India. One subplot deals with a power company pursuing zero-point energy, using an imagined completion of M theory called M* theory. This post contains spoilers for that subplot.

What frustrated me about River of Gods is that the physics in it almost makes sense. It isn’t just an excuse for magic, or a standard set of tropes. Even the name “M* theory” is extremely plausible, the sort of term that could get used for technical reasons in a few papers and get accidentally stuck as the name of our fundamental theory of nature. But because so much of the presentation makes sense, it’s actively frustrating when it doesn’t.

The problem is the role the landscape of M* theory plays in the story. The string theory (or M theory) landscape is the space of all consistent vacua, a list of every consistent “default” state the world could have. In the story, one of the AIs is trying to make a portal to somewhere else in the landscape, a world of pure code where AIs can live in peace without competing with humans.

The problem is that the landscape is not actually a real place in string theory. It’s a metaphorical mathematical space, a list organized by some handy coordinates. The other vacua, the other “default states”, aren’t places you can travel to; they’re just other ways the world could have been.

Ok, but what about the multiverse?

There are physicists out there who like to talk about multiple worlds. Some think they’re hypothetical, others argue they must exist. Sometimes they’ll talk about the string theory landscape. But to get a multiverse out of the string theory landscape, you need something else as well.

Two options for that “something else” exist. One is called eternal inflation, the other is the many-worlds interpretation of quantum mechanics. And neither lets you travel around the multiverse.

In eternal inflation, the universe is expanding faster and faster. It’s expanding so fast that, in most places, there isn’t enough time for anything complicated to form. Occasionally, though, due to quantum randomness, a small part of the universe expands a bit more slowly: slow enough for stars, planets, and maybe life. Each small part like that is its own little “Big Bang”, potentially with a different “default” state, a different vacuum from the string landscape. If eternal inflation is true then you can get multiple worlds, but they’re very far apart, and getting farther every second: not easy to visit.

The many-worlds interpretation is a way to think about quantum mechanics. One way to think about quantum mechanics is to say that quantum states are undetermined until you measure them: a particle could be spinning left or right, Schrödinger’s cat could be alive or dead, and only when measured is their state certain. The many-worlds interpretation offers a different way: by doing away with measurement, it instead keeps the universe in the initial “undetermined” state. The universe only looks determined to us because of our place in it: our states become entangled with those of particles and cats, so that our experiences only correspond to one determined outcome, the “cat alive branch” or the “cat dead branch”. Combine this with the string landscape, and our universe might have split into different “branches” for each possible stable state, each possible vacuum. But you can’t travel to those places, your experiences are still “just on one branch”. If they weren’t, many-worlds wouldn’t be an interpretation, it would just be obviously wrong.

In River of Gods, the AI manipulates a power company into using a particle accelerator to make a bubble of a different vacuum in the landscape. Surprisingly, that isn’t impossible. Making a bubble like that is a bit like what the Large Hadron Collider does, but on a much larger scale. When the Large Hadron Collider detected a Higgs boson, it had created a small ripple in the Higgs field, a small deviation from its default state. One could imagine a bigger ripple doing more: with vastly more energy, maybe you could force the Higgs all the way to a different default, a new vacuum in its landscape of possibilities.

Doing that doesn’t create a portal to another world, though. It destroys our world.

That bubble of a different vacuum isn’t another branch of quantum many-worlds, and it isn’t a far-off big bang from eternal inflation. It’s a part of our own universe, one with a different “default state” where the particles we’re made of can’t exist. And typically, a bubble like that spreads at the speed of light.

In the story, they have a way to stabilize the bubble, stop it from growing or shrinking. That’s at least vaguely believable. But it means that their “portal to another world” is just a little bubble in the middle of a big expensive device. Maybe the AI can live there happily…until the humans pull the plug.

Or maybe they can’t stabilize it, and the bubble spreads and spreads at the speed of light, destroying everything. That would certainly be another way for the AI to live without human interference. It’s a bit less peaceful than advertised, though.

Particles vs Waves, Particles vs Strings

On my “Who Am I?” page, I open with my background, calling myself a string theorist, then clarify: “in practice I’m more of a Particle Theorist, describing the world not in terms of short lengths of string but rather with particles that each occupy a single point in space”.

When I wrote that I didn’t think it would confuse people. Now that I’m older and wiser, I know people can be confused in a variety of ways. And since I recently saw someone confused about this particular phrase (yes I’m vagueblogging, but I suspect you’re reading this and know who you are 😉 ), I figured I’d explain it.

If you’ve learned a few things about quantum mechanics, maybe you have this slogan in mind:

“What we used to think of as particles are really waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”

With that in mind, my talk of “particles that each occupy a single point” doesn’t make sense. Doesn’t the slogan mean that particles don’t exist?

Here’s the thing: that’s the wrong slogan. The right slogan is just a bit different:

“What we used to think of as particles are ALSO waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”

The principle you were remembering is often called “wave-particle duality”. That doesn’t mean “particles don’t exist”. It means “waves and particles are the same thing”.

This matters, because just as wave-like properties are important, particle-like properties are important. And while it’s true that you can never know exactly where you will measure a particle, it’s also true that it’s useful, and even necessary, to think of it as occupying a single point.

That’s because particles can only affect each other when they’re at the same point. Physicists call this the principle of locality, the idea that there is no real “action at a distance”, everything happens because of something traveling from point A to point B. Wave-particle duality doesn’t change that, it just makes the specific point uncertain. It means you have to add up over every specific point where the particles could have interacted, but each term in your sum has to still involve a specific point: quantum mechanics doesn’t let particles affect each other non-locally.
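
Here’s a cartoon of that “add up over every specific point” idea, a toy sketch of my own rather than a real quantum field theory calculation: the total amplitude is a sum of terms, and each term pins the interaction to one definite location.

```python
import numpy as np

# Toy cartoon, not a real QFT calculation: the amplitude to get from A to B
# via one interaction is a sum over every point x where that interaction
# could have happened. Each term is local -- it involves one specific x --
# even though the sum ranges over all of them.

def hop(x1, x2):
    """Made-up amplitude for traveling between two points on a line."""
    return np.exp(1j * abs(x2 - x1)) / (1.0 + abs(x2 - x1))

A, B = -3.0, 4.0
candidate_points = np.linspace(-20.0, 20.0, 4001)

total_amplitude = sum(hop(A, x) * hop(x, B) for x in candidate_points)
print(abs(total_amplitude))
```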

Strings, in turn, are a little bit different. Strings have length, particles don’t. Particles interact at a point, strings can interact anywhere along the string. Strings introduce a teeny bit of non-locality.

When you compare particles and waves, you’re thinking pre-quantum mechanics, two classical things neither of which is the full picture. When you compare particles and strings, both are quantum, both are also waves. But in a meaningful sense one occupies a single point, and the other doesn’t.

The Wolfram Physics Project Makes Me Queasy

Stephen Wolfram is…Stephen Wolfram.

Once a wunderkind student of Feynman, Wolfram is now best known for his software, Mathematica, a tool used by everyone from scientists to lazy college students. Almost all of my work is coded in Mathematica, and while it has some flaws (can someone please speed up the linear solver? Maple’s is so much better!) it still tends to be the best tool for the job.

Wolfram is also known for being a very strange person. There’s his tendency to name, or rename, things after himself. (There’s a type of Mathematica file that used to be called “.m”. Now by default they’re “.wl”, “Wolfram Language” files.) There are his live-streamed meetings. And then there’s his physics.

In 2002, Wolfram wrote a book, “A New Kind of Science”, arguing that computational systems called cellular automata were going to revolutionize science. A few days ago, he released an update: a sprawling website for “The Wolfram Physics Project”. In it, he claims to have found a potential “theory of everything”, unifying general relativity and quantum physics in a cellular automata-like form.

If that gets your crackpot klaxons blaring, yeah, me too. But Wolfram was once a very promising physicist. And he has collaborators this time, who are currently promising physicists. So I should probably give him a fair reading.

On the other hand, his introduction for a technical audience is 448 pages long. I may have more time now due to COVID-19, but I still have a job, and it isn’t reading that.

So I compromised. I didn’t read his 448-page technical introduction. I read his 90-ish page blog post. The post is written for a non-technical audience, so I know it isn’t 100% accurate. But by seeing how someone chooses to promote their work, I can at least get an idea of what they value.

I started out optimistic, or at least trying to be. Wolfram starts with simple mathematical rules, and sees what kinds of structures they create. That’s not an unheard of strategy in theoretical physics, including in my own field. And the specific structures he’s looking at look weirdly familiar, a bit like a generalization of cluster algebras.

Reading along, though, I got more and more uneasy. That unease peaked when I saw him describe how his structures give rise to mass.

Wolfram had already argued that his structures obey special relativity. (For a critique of this claim, see this twitter thread.) He found a way to define energy and momentum in his system, as “fluxes of causal edges”. He picks out a particular “flux of causal edges”, one that corresponds to “just going forward in time”, and defines it as mass. Then he “derives” E=mc^2, saying,

Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.

In “the standard formalism of physics”, E=mc^2 means “mass is the energy of an object at rest”. It means “mass is the energy of an object just going forward in time”. If the “standard formalism of physics” “just defines” E=mc^2, so does Wolfram.
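
For comparison, here’s the textbook statement of “mass is the energy of an object at rest” (standard special relativity, nothing specific to Wolfram’s model): the energy of a particle with momentum p and mass m satisfies

```latex
E^2 = (pc)^2 + (mc^2)^2 ,
\qquad\text{so for an object at rest } (p = 0):\quad E = mc^2 .
```

Setting the momentum to zero is exactly the “just going forward in time” step, which is why the relation reads as much like a definition of rest energy as a derivation.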

I haven’t read his technical summary. Maybe this isn’t really how his “derivation” works, maybe it’s just how he decided to summarize it. But it’s a pretty misleading summary, one that gives the reader entirely the wrong idea about some rather basic physics. It worries me, because both as a physicist and a blogger, he really should know better. I’m left wondering whether he meant to mislead, or whether instead he’s misleading himself.

That feeling kept recurring as I kept reading. There was nothing else as extreme as that passage, but a lot of pieces that felt like they were making a big deal about the wrong things, and ignoring what a physicist would find the most important questions.

I was tempted to get snarkier in this post, to throw in a reference to Lewis’s trilemma or some variant of the old quip that “what is new is not good; and what is good is not new”. For now, I’ll just say that I probably shouldn’t have read a 90 page pop physics treatise before lunch, and end the post with that.

What I Was Not Saying in My Last Post

Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles aren’t: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of fields, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: None of that post was quantum.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality. Things we thought were waves, like light, also behave like particles; things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: I am not saying that.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.

The second misunderstanding: This isn’t about on-shell vs. off-shell.

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; normally, quantum calculations involve “off-shell” particles that violate those equations.
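
Concretely (a standard textbook statement, not Nima’s formulation specifically): a particle of mass m is “on-shell” when its energy and momentum sit on the mass shell,

```latex
p^\mu p_\mu \;=\; E^2 - |\vec{p}\,|^2 \;=\; m^2
\qquad\text{(in units with } c = 1\text{)},
```

and the “off-shell” particles inside quantum calculations are the ones that don’t satisfy this relation.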

To again be clear: I’m not arguing with Nima here.

Nima (and other people in our field) will sometimes talk about on-shell vs off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.

The Teaching Heuristic for Non-Empirical Science

Science is by definition empirical. We discover how the world works not by sitting and thinking, but by going out and observing the world. But sometimes, all the observing we can do can’t possibly answer a question. In those situations, we might need “non-empirical science”.

The blog Slate Star Codex had a series of posts on this topic recently. Its author hangs out with a crowd that supports the many-worlds interpretation of quantum mechanics: the idea that quantum events are not truly random, but instead that all outcomes happen, the universe metaphorically splitting into different possible worlds. These metaphorical universes can’t be observed, so no empirical test can tell the difference between this and other interpretations of quantum mechanics: if we could ever know the difference, it would have to be for “non-empirical” reasons.

What reasons are those? Slate Star Codex teases out a few possible intuitions. He points out that we reject theories that have “unnecessary” ideas. He imagines a world where chemists believe that mixing an acid and a base also causes a distant star to go supernova, and a creationist world where paleontologists believe fossils are placed by the devil. In both cases, there might be no observable difference between their theories and ours, but because their theories have “extra pieces” (the distant star, the devil), we reject them for non-empirical reasons. Slate Star Codex asks if this supports many-worlds: without the extra assumption that quantum events randomly choose one outcome, isn’t quantum mechanics simpler?

I agree with some of this. Science really does use non-empirical reasoning. Without it, there’s no reason not to treat the world as a black box, a series of experiments with no mechanism behind it. But while we do reject theories with unnecessary ideas, that isn’t our only standard. We also need our theories to teach us about the world.

Ultimately, we trust science because it allows us to do things. If we understand the world, we can interact with it: we can build technology, design new experiments, and propose new theories. With this in mind, we can judge scientific theories by how well they help us do these things. A good scientific theory is one that gives us more power to interact with the world. It can do this by making correct predictions, but it can also do this by explaining things, making it easier for us to reason about them. Beyond empiricism, we can judge science by how well it teaches us.

This gives us an objection to the “supernova theory” of Slate Star Codex’s imagined chemists: it’s much more confusing to teach. To teach chemistry in that world you also have to teach the entire life cycle of stars, a subject that students won’t use in any other part of the course. The creationists’ “devil theory” of paleontology has the same problem: if their theory really makes the right predictions they’d have to teach students everything our paleontologists do: every era of geologic history, every theory of dinosaur evolution, plus an extra course in devil psychology. They end up with a mix that only makes it harder to understand the subject.

Many-worlds may seem simpler than other interpretations of quantum mechanics, but that doesn’t make it more useful, or easier to teach. You still need to teach students how to predict the results of experiments, and those results will still be random. If you teach them many-worlds, you need to add more discussion much earlier on, advanced topics like self-localizing uncertainty and decoherence. You need a quite extensive set of ideas, many of which won’t be used again, to justify rules another interpretation could have introduced much more simply. This would be fine if those ideas made additional predictions, but they don’t: like every interpretation of quantum mechanics, you end up doing the same experiments and building the same technology in the end.

I’m not saying I know many-worlds is false, or that I know another interpretation is true. All I’m saying is that, when physicists criticize many-worlds, they’re not just blindly insisting on empiricism. They’re rejecting many-worlds, in part, because all it does is make their work harder. And that, more than elegance or simplicity, is how we judge theories.

Facts About Our Capabilities Are Facts About the World

A paper leaked from Google last week claimed that their researchers had achieved “quantum supremacy”, the milestone at which a quantum computer performs a calculation faster than any existing classical computer. Scott Aaronson has a great explainer about this. The upshot is that Google’s computer is much too small to crack all our encryptions (only 53 qubits, the equivalent of bits for quantum computers), but it still appears to be a genuine quantum computer doing a genuine quantum computation that is genuinely not feasible otherwise.

How impressed should we be about this?

On one hand, the practical benefits of a 53-qubit computer are pretty minimal. Scott discusses some applications: you can generate random numbers, distributed in a way that will let others verify that they are truly random, the kind of thing it’s occasionally handy to do in cryptography. Still, by itself this won’t change the world, and compared to the quantum computing hype I can understand if people find this underwhelming.

On the other hand, as Scott says, this falsifies the Extended Church-Turing Thesis! And that sounds pretty impressive, right?

Ok, I’m actually just re-phrasing what I said before. The Extended Church-Turing Thesis proposes that a classical computer (more specifically, a probabilistic Turing machine) can efficiently simulate any reasonable computation. Falsifying it means finding something that a classical computer cannot compute efficiently but another sort of computer (say, a quantum computer) can. If the calculation Google did truly can’t be done efficiently on a classical computer (this is not proven, though experts seem to expect it to be true) then yes, that’s what Google claims to have done.
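
To get a rough feel for why 53 qubits sits near the edge of what classical machines can handle, here’s a back-of-the-envelope sketch of my own (brute force isn’t the only classical strategy, which is part of why the question isn’t fully proven): simulating n qubits directly means storing 2^n complex amplitudes.

```python
# Back-of-the-envelope sketch (my own, not from Google's paper): brute-force
# simulation of an n-qubit quantum state keeps 2**n complex amplitudes in
# memory, at 16 bytes each (two 64-bit floats).

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (20, 30, 40, 53):
    print(f"{n:2d} qubits: {statevector_bytes(n):,} bytes")

# 53 qubits works out to roughly 1.4e17 bytes (~144 petabytes), far beyond any
# single classical machine -- though smarter-than-brute-force algorithms are
# exactly what the "is this really infeasible?" question hinges on.
```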

So we get back to the real question: should we be impressed by quantum supremacy?

Well, should we have been impressed by the Higgs?

The detection of the Higgs boson in 2012 hasn’t led to any new Higgs-based technology. No-one expected it to. It did teach us something about the world: that the Higgs boson exists, and that it has a particular mass. I think most people accept that that’s important: that it’s worth knowing how the world works on a fundamental level.

Google may have detected the first-known violation of the Extended Church-Turing Thesis. This could eventually lead to some revolutionary technology. For now, though, it hasn’t. Instead, it teaches us something about the world.

It may not seem like it, at first. Unlike the Higgs boson, “Extended Church-Turing is false” isn’t a law of physics. Instead, it’s a fact about our capabilities. It’s a statement about the kinds of computers we can and cannot build, about the kinds of algorithms we can and cannot implement, the calculations we can and cannot do.

Facts about our capabilities are still facts about the world. They’re still worth knowing, for the same reasons that facts about the world are still worth knowing. They still give us a clearer picture of how the world works, which tells us in turn what we can and cannot do. According to the leaked paper, Google has taught us a new fact about the world, a deep fact about our capabilities. If that’s true we should be impressed, even without new technology.

Why I Wasn’t Bothered by the “Science” in Avengers: Endgame

Avengers: Endgame has been out for a while, so I don’t have to worry about spoilers right? Right?

Right?

Anyway, time travel. The spoiler is time travel. They bring back everyone who was eliminated in the previous movie, using time travel.

They also attempt to justify the time travel, using Ant Man-flavored quantum mechanics. This works about as plausibly as you’d expect for a superhero whose shrinking powers not only let him talk to ants, but also go to a “place” called “The Quantum Realm”. Along the way, they manage to throw in splintered references to a half-dozen almost-relevant scientific concepts. It’s the kind of thing that makes some physicists squirm.

And I enjoyed it.

Movies tend to treat time travel in one of two ways. The most reckless, and most common, let their characters rewrite history as they go, like Marty McFly almost erasing himself from existence in Back to the Future. This never makes much sense, and the characters in Avengers: Endgame make fun of it, listing a series of movies that do time travel this way (inexplicably including Wrinkle In Time, which has no time travel at all).

In the other common model, time travel has to happen in self-consistent loops: you can’t change the past, but you can go back and be part of it. This is the model used, for example, in Harry Potter, where Potter is saved by a mysterious spell only to travel back in time and cast it himself. This at least makes logical sense; whether it’s possible physically is an open question.

Avengers: Endgame uses the model of self-consistent loops, but with a twist: if you don’t manage to make your loop self-consistent you instead spawn a parallel universe, doomed to suffer the consequences of your mistakes. This is a rarer setup, but not a unique one, though the only other example I can think of at the moment is Homestuck.

Is there any physics justification for the Avengers: Endgame model? Maybe not. But you can at least guess what they were thinking.

The key clue is a quote from Tony Stark, rattling off a stream of movie-grade scientific gibberish:

“Quantum fluctuation messes with the Planck scale, which then triggers the Deutsch Proposition. Can we agree on that?”

From this quote, one can guess not only what scientific results inspired the writers of Avengers: Endgame, but possibly also which Wikipedia entry. David Deutsch is a physicist, and an advocate for the many-worlds interpretation of quantum mechanics. In 1991 he wrote a paper discussing what happens to quantum mechanics in the environment of a wormhole. In it he pointed out that you can make a self-consistent time travel loop, not just in classical physics, but out of a quantum superposition. This offers a weird solution to the classic grandfather paradox of time travel: instead of causing a paradox, you can form a superposition. As Scott Aaronson explains here, “you’re born with probability 1/2, therefore you kill your grandfather with probability 1/2, therefore you’re born with probability 1/2, and so on—everything is consistent.” If you believe in the many-worlds interpretation of quantum mechanics, a time traveler in this picture is traveling between two different branches of the wave-function of the universe: you start out in the branch where you were born, kill your grandfather, and end up in the branch where you weren’t born. This isn’t exactly how Avengers: Endgame handles time travel, but it’s close enough that it seems like a likely explanation.
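
Aaronson’s consistency argument is short enough to state as an equation (his example, in notation of my own): if p is the probability you’re born, and being born means you go back and kill your grandfather, then being born requires that attempt to have failed, so

```latex
p \;=\; 1 - p \quad\Longrightarrow\quad p = \tfrac{1}{2},
```

the self-consistent superposition in which every step of the loop happens with probability one half.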

David Deutsch’s argument uses a wormhole, but how do the Avengers make a wormhole in the first place? There we have less information, just vague references to quantum fluctuations at the Planck scale, the scale at which quantum gravity becomes important. There are a few things they could have had in mind, but one of them might have been physicists Leonard Susskind and Juan Maldacena’s conjecture that quantum entanglement is related to wormholes, a conjecture known as ER=EPR.

Long-time readers of the blog might remember I got annoyed a while back, when Caltech promoted ER=EPR using a different Disney franchise. The key difference here is that Avengers: Endgame isn’t pretending to be educational. Unlike Caltech’s ER=EPR piece, or even the movie Interstellar, Avengers: Endgame isn’t really about physics. It’s a superhero story, one that pairs the occasional scientific term with a character goofily bouncing around from childhood to old age while another character exclaims “you’re supposed to send him through time, not time through him!” The audience isn’t there to learn science, so they won’t come away with any incorrect assumptions.

A movie like Avengers: Endgame doesn’t teach science, or even advertise it. It does celebrate it, though.

That’s why, despite the silly half-correct science, I enjoyed Avengers: Endgame. It’s also why I don’t think it’s inappropriate, as some people do, to classify movies like Star Wars as science fiction. Star Wars and Avengers aren’t really about exploring the consequences of science or technology, they aren’t science fiction in that sense. But they do build off science’s role in the wider culture. They take our world and look at the advances on the horizon, robots and space travel and quantum speculations, and they let their optimism inform their storytelling. That’s not going to be scientifically accurate, and it doesn’t need to be, any more than the comic Abstruse Goose really believes Witten is from Mars. It’s about noticing we live in a scientific world, and having fun with it.