Tag Archives: PublicPerception

What Tells Your Story

I watched Hamilton on Disney+ recently. With GIFs and songs from the show all over social media for the last few years, there weren’t many surprises. One thing that nonetheless struck me was the focus on historical evidence. The musical Hamilton is based on Ron Chernow’s biography of Alexander Hamilton, and it preserves a surprising amount of the historian’s care for how we know what we know, hidden within the show’s other themes. From the refrain of “who tells your story”, to the importance of Eliza burning her letters with Hamilton (not just the emotional gesture but the “gap in the narrative” it created for historians), to the song “The Room Where It Happens” (which looked from GIFsets like it was about Burr’s desire for power, but is mostly about how much of history is hidden in conversations we can only partly reconstruct), the show keeps the puzzle of reasoning from incomplete evidence front-and-center.

Any time we try to reason about the past, we are faced with these kinds of questions. They don’t just apply to history, but to the so-called historical sciences as well, sciences that study the past. Instead of asking “who” tells the story, such scientists must keep in mind “what” is telling the story. For example, paleontologists reason from fossils, and thus are limited by what does and doesn’t get preserved. As a result, despite more than a century of studying dinosaurs, it only became clear in the last twenty years that many of them had feathers.

Astronomy, too, is a historical science. Whenever astronomers look out at distant stars, they are looking at the past. And just like historians and paleontologists, they are limited by what evidence happened to be preserved, and what part of that evidence they can access.

These limitations lead to mysteries, and often controversies. Before LIGO, astronomers had an idea of what the typical mass of a black hole was. After LIGO, a new slate of black holes has been observed, with much higher mass. It’s still unclear why.

Try to reason about the whole universe, and you end up asking similar questions. When we see the movement of “standard candle” stars, is that because the universe’s expansion is accelerating, or are the stars moving as a group?

Push far enough back and the evidence doesn’t just lead to controversy, but to hard limits on what we can know. No matter how good our telescopes are, we won’t see light older than the cosmic microwave background: before that background was emitted the universe was filled with plasma, which would have absorbed any earlier light, erasing anything we could learn from it. Gravitational waves may one day let us probe earlier, and make discoveries as surprising as feathered dinosaurs. But there is yet a stronger limit to how far back we can go, beyond which any evidence has been so diluted that it is indistinguishable from random noise. We can never quite see into “the room where it happened”.

It’s gratifying to see questions of historical evidence in a Broadway musical, in the same way it was gratifying to hear fractals mentioned in a Disney movie. It’s important to think about who, and what, is telling the stories we learn. Spreading that lesson helps all of us reason better.

Science and Its Customers

In most jobs, you know who you’re working for.

A chef cooks food, and people eat it. A tailor makes clothes, and people wear them. An artist has an audience, an engineer has end users, a teacher has students. Someone out there benefits directly from what you do. Make them happy, and they’ll let you know. Piss them off, and they’ll stop hiring you.

Science benefits people too…but most of its benefits are long-term. The first person to magnetize a needle couldn’t have imagined worldwide electronic communication, and the scientists who uncovered quantum mechanics couldn’t have foreseen transistors, or personal computers. The world benefits just by having more expertise in it, more people who spend their lives understanding difficult things, and train others to understand difficult things. But those benefits aren’t easy to see for each individual scientist. As a scientist, you typically don’t know who your work will help, or how much. You might not know for years, or even decades, what impact your work will have. Even then, it will be difficult to tease out your contribution from the other scientists of your time.

We can’t ask the customers of the future to pay for the scientists of today. (At least, not straightforwardly.) In practice, scientists are paid by governments and foundations, groups trying on some level to make the future a better place. Instead of feedback from customers we get feedback from each other. If our ideas get other scientists excited, maybe they’ll matter down the road.

This is a risky thing to do, of course. Governments, foundations, and scientists can’t tell the future. They can try to act in the interests of future generations, but they might just act for themselves instead. Trying to plan ahead like this makes us prey to all the cognitive biases that flesh is heir to.

But we don’t really have an alternative. If we want to have a future at all, if we want a happier and more successful world, we need science. And if we want science, we can’t ask its real customers, the future generations, to choose whether to pay for it. We need to work for the smiles on our colleagues’ faces and the checks from government grant agencies. And we need to do it carefully enough that at the end of the day, we still make a positive difference.

Truth Doesn’t Have to Break the (Word) Budget

Imagine you saw this headline:

Scientists Say They’ve Found the Missing 40 Percent of the Universe’s Matter

It probably sounds like they’re talking about dark matter, right? And if scientists found dark matter, that could be a huge discovery: figuring out what dark matter is made of is one of the biggest outstanding mysteries in physics. Still, maybe that 40% number makes you a bit suspicious…

Now, read this headline instead:

Astronomers Have Finally Found Most of The Universe’s Missing Visible Matter

Visible matter! Ah, what a difference a single word makes!

These are two articles, the first from this year and the second from 2017, talking about the same thing. Leave out dark matter and dark energy, and the rest of the universe is made of ordinary protons, neutrons, and electrons. We sometimes call that “visible matter”, but that doesn’t mean it’s easy to spot. Much of it lingers in threads of gas and dust between galaxies, making it difficult to detect. These two articles are about astronomers who managed to detect this matter in different ways. But while the articles cover the same sort of matter, one headline is a lot more misleading.

Now, I know science writing is hard work. You can’t avoid misleading your readers, if only a little, because you can never include every detail. Introduce too many new words and you’ll use up your “vocabulary budget” and lose your audience. I also know that headlines get tweaked by editors at the last minute to maximize “clicks”, and that news that doesn’t get enough “clicks” dies out, replaced by news that does.

But that second headline? It’s shorter than the first. They were able to fit that crucial word “visible” in, without breaking the budget. And while I don’t have the data, I doubt the first headline was that much more viral. They could have afforded to get this right, if they wanted to.

Read each article further, and you see the same pattern. The 2020 article does mention visible matter in the first sentence at least, so they don’t screw that one up completely. But another important detail never gets mentioned.

See, you might be wondering, if one of these articles is from 2017 and the other is from 2020, how are they talking about the same thing? If astronomers found this matter already in 2017, how did they find it again in 2020?

There’s a key detail that the 2017 article mentions and the 2020 article leaves out. Here’s a quote from the 2017 article, emphasis mine:

We now have our first solid piece of evidence that this matter has been hiding in the delicate threads of cosmic webbing bridging neighbouring galaxies, right where the models predicted.

This “missing” matter was expected to exist, was predicted by models to exist. It just hadn’t been observed yet. In 2017, astronomers detected some of this matter indirectly, through its effect on the Cosmic Microwave Background. In 2020, they found it more directly, through X-rays shot out from the gases themselves.

Once again, the difference is just a short phrase. By saying “right where the models predicted”, the 2017 article clears up an important point, that this matter wasn’t a surprise. And all it took was five words.

These little words and phrases make a big difference. If you’re writing about science, you will always face misunderstandings. But if you’re careful and clever, you can clear up the most obvious ones. With just a few well-chosen words, you can have a much better piece.

Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

I’ll start with an example, neutrino oscillation.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
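The oscillation itself follows a simple formula. In a two-flavor simplification (an assumption for illustration; the real three-flavor case just has more mixing angles and phases), the chance of detecting a different flavor after traveling a distance L is a standard textbook expression, sketched here with a hypothetical function name and placeholder parameter values:

```python
import math

def oscillation_probability(theta, delta_m2, length_km, energy_gev):
    """Two-flavor oscillation probability, in the standard approximation:

        P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)

    with theta the mixing angle, delta_m2 the mass-squared splitting in eV^2,
    length_km the distance in km, and energy_gev the neutrino energy in GeV.
    """
    return (math.sin(2 * theta) ** 2
            * math.sin(1.27 * delta_m2 * length_km / energy_gev) ** 2)

# At zero distance, nothing has had a chance to oscillate yet:
print(oscillation_probability(0.6, 2.5e-3, 0.0, 1.0))  # → 0.0
```

The probability rises and falls with distance, which is why experiments put detectors at carefully chosen baselines.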

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos and let it interact with an electron, or a muon, or a tau, it suddenly behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3D animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.

Kicking Students Out of Their Homes During a Pandemic: A Bad Idea

I avoid talking politics on this blog. There are a few issues, though, where I feel not just able, but duty-bound, to speak out. Those are issues affecting graduate students.

This week, US Immigration and Customs Enforcement (ICE) announced that, if a university switched to online courses as a response to COVID-19, international students would have to return to their home countries or transfer to a school that still teaches in-person.

This is already pretty unreasonable for many undergrads. But think about PhD students.

Suppose you’re a foreign PhD student at a US university. Maybe your school is already planning to have classes online this fall, like Harvard is. Maybe your school is planning to have classes in person, but will change its mind a few weeks in, when so many students and professors are infected that it’s clearly unreasonable to continue. Maybe your school never changes its mind, but your state does, and the school has to lock down anyway.

As a PhD student, you likely don’t live in the dorms. More likely you live in a shared house, or an apartment. You’re an independent adult. Your parents aren’t paying for you to go to school. Your school is itself a full-time job, one that pays (as little as the university thinks it can get away with).

What happens when your school goes online? When you need to leave the country?

You’d have to find some way out of your lease, or keep paying for it. You’d have to find a flight on short notice. You’d have to pack up all your belongings, ship or sell anything you can’t store, or find friends to hold on to it.

You’d have to find somewhere to stay in your “home country”. Some could move in with their parents temporarily, many can’t. Some of those who could in other circumstances, shouldn’t if they’re fleeing from an outbreak: their parents are likely older, and vulnerable to the virus. So you have to find a hotel, eventually perhaps a new apartment, far from what was until recently your home.

Reminder: you’re doing all of this on a shoestring budget, because the university pays you peanuts.

Can you transfer instead? In a word, no.

PhD students are specialists. They’re learning very specific things from very specific people. Academics aren’t the sort of omnidisciplinary scientists you see in movies. Bruce Banner or Tony Stark could pick up a new line of research on a whim, real people can’t. This is why, while international students may be a welcome addition at the undergraduate level, they’re absolutely necessary for PhDs. When only three people in the world study the thing you want to study, you don’t have the luxury of staying in your birth country. And you can’t just transfer schools when yours goes online.

It feels like the people who made this decision didn’t think about any of this. That they don’t think grad students matter, or forgot they exist altogether. It seems frustratingly common for policy that affects grad students to be made by people who know nothing about grad students, and that baffles me. PhDs are a vital part of the academic career, without them universities in their current form wouldn’t even exist. Ignoring them is like hospital policy ignoring residencies.

I hope that this policy gets reversed, or halted, or schools find some way around it. At the moment, anyone starting school in the US this fall is in a very tricky position. And anyone already there is in a worse one.

As usual, I’m going to ask that the comments don’t get too directly political. As a partial measure to tone things down, I’d like to ask you to please avoid mentioning any specific politicians, political parties, or political ideologies. Feel free to talk instead about your own experiences: how this policy is likely to affect you, or your loved ones. Please also feel free to talk more technically on the policy/legal side. I’d like to know what universities can do to work around this, and whether there are plausible paths to change or halt the policy. Please be civil, and be kind to your fellow commenters.

In Defense of Shitty Code

Scientific programming was in the news lately, when doubts were raised about a coronavirus simulation by researchers at Imperial College London. While the doubts appear to have been put to rest, doing so involved digging through some seriously messy code. The whole situation seems to have gotten a lot of people worried. If these people are that bad at coding, why should we trust their science?

I don’t know much about coronavirus simulations, my knowledge there begins and ends with a talk I saw last month. But I know a thing or two about bad scientific code, because I write it. My code is atrocious. And I’ve seen published code that’s worse.

Why do scientists write bad code?

In part, it’s a matter of training. Some scientists have formal coding training, but most don’t. I took two CS courses in college and that was it. Despite that lack of training, we’re expected and encouraged to code. Before I took those courses, I spent a summer working in a particle physics lab, where I was expected to pick up the C++-based interface pretty much on the fly. I don’t think there’s another community out there that has as much reason to code as scientists do, and as little training for it.

Would it be useful for scientists to have more of the tools of a trained coder? Sometimes, yeah. Version control is a big one, I’ve collaborated on papers that used Git and papers that didn’t, and there’s a big difference. There are coding habits that would speed up our work and lead to fewer dead ends, and they’re worth picking up when we have the time.

But there’s a reason we don’t prioritize “proper coding”. It’s because the things we’re trying to do, from a coding perspective, are really easy.

What, code-wise, is a coronavirus simulation? A vector of “people”, really just simple labels, all randomly infecting each other and recovering, with a few parameters describing how likely they are to do so and how long it takes. What do I do, code-wise? Mostly, giant piles of linear algebra.
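To make that concrete, here’s roughly what such a simulation boils down to, as a deliberately crude sketch (my own toy version with made-up parameter values, not the Imperial College code):

```python
import random

def toy_epidemic(n_people=1000, days=120, p_contact_infect=0.0002,
                 p_recover=0.1, seed=42):
    """A crude epidemic toy: a vector of people, really just simple labels,
    randomly infecting each other and recovering."""
    rng = random.Random(seed)
    # States: "S"usceptible, "I"nfected, "R"ecovered. One initial case.
    people = ["I"] + ["S"] * (n_people - 1)
    for _ in range(days):
        n_infected = people.count("I")
        # Chance a susceptible person escapes every infected contact today:
        p_escape = (1 - p_contact_infect) ** n_infected
        for i, state in enumerate(people):
            if state == "S" and rng.random() > p_escape:
                people[i] = "I"
            elif state == "I" and rng.random() < p_recover:
                people[i] = "R"
    return {s: people.count(s) for s in ("S", "I", "R")}
```

That’s the whole program: a list of labels and a couple of random draws per person per day. Getting the epidemiological assumptions right is the hard part, not the code.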

These are not some sort of cutting-edge programming tasks. These are things people have been able to do since the dawn of computers. These are things that, when you screw them up, become quite obvious quite quickly.

Compared to that, the everyday tasks of software developers, like making a reliable interface for users, or efficient graphics, are much more difficult. They’re tasks that really require good coding practices, that just can’t function without them.

For us, the important part is not the coding itself, but what we’re doing with it. Whatever bugs are in a coronavirus simulation, they will have much less impact than, for example, the way in which the simulation includes superspreaders. Bugs in my code give me obviously wrong answers, bad scientific assumptions are much harder for me to root out.

There’s an exception that proves the rule here, and it’s that, when the coding task is actually difficult, scientists step up and write better code. Scientists who want to run efficiently on supercomputers, who are afraid of numerical error or need to simulate on many scales at once, these people learn how to code properly. The code behind the LHC still might be jury-rigged by industry standards, but it’s light-years better than typical scientific code.

I get the furor around the Imperial group’s code. I get that, when a government makes a critical decision, you hope that their every input is as professional as possible. But without getting too political for this blog, let me just say that whatever your politics are, if any of it is based on science, it comes from code like this. Psychology studies, economic modeling, polling…they’re using code, and it’s jury-rigged to hell. Scientists just have more important things to worry about.

Socratic Grilling, Crackpots, and Trolls

The blog Slate Star Codex had an interesting post last month, titled Socratic Grilling. The post started with a dialogue, a student arguing with a teacher about germ theory.

Student: Hey, wait. If germs are spread from person to person on touch, why doesn’t the government just mandate one week when nobody is allowed to touch anyone else? Then all the germs will die and we’ll never have to worry about germs again.

Out of context, the student looks like a crackpot. But in context, the student is just trying to learn, practicing a more aggressive version of Socratic questioning which the post dubbed “Socratic grilling”.

The post argued that Socratic grilling is normal and unavoidable, and that experts treat it with far more hostility than they should. Experts often reject this kind of questioning as arrogant, unless the non-expert doing the grilling is hilariously deferential. (The post’s example: “I know I am but a mere student, and nowhere near smart enough to actually challenge you, so I’m sure I’m just misunderstanding this, but the thing you just said seems really confusing to me, and I’m not saying it’s not true, but I can’t figure out how it possibly could be true, which is my fault and not yours, but could you please try to explain it differently?”)

The post made me think a bit about my own relationship with crackpots. I’d like to say that when a non-expert challenges me I listen to them regardless of their tone, that you don’t need to be so deferential around me. In practice, though…well, it certainly helps.

What I want (or at least what I want to want) is not humility, but intellectual humility. You shouldn’t have to talk about how inexperienced you are to get me to listen to you. But you should make clear what you know, how you know it, and what the limits of that evidence are. If I’m right, it helps me understand what you’re misunderstanding. If you’re right, it helps me get why your argument works.

I’ve referred to both non-experts and crackpots in this post. To be clear, I think of one as a subgroup of the other. When I refer to crackpots, I’m thinking of a specific sort of non-expert: one with a very detailed idea they have invested a lot of time and passion into, which the mainstream considers impossible. If you’re just skeptical of general relativity or quantum mechanics, you’re not a crackpot. But if you’ve come up with your own replacement to general relativity or quantum mechanics, you probably are. Note also that, no matter how dumb their ideas, I don’t think of experts in a topic as crackpots on that topic. Garrett Lisi is silly, and probably wrong, but he’s not a crackpot.

A result of this is that crackpots (as I define them) rarely do actual Socratic grilling. For a non-expert who hasn’t developed their own theory, Socratic grilling can be a good way to figure out what the heck those experts are thinking. But for a crackpot, the work they have invested in their ideas means they’re often much less interested in what the experts have to say.

This isn’t always the case. I’ve had some perfectly nice conversations with crackpots. I remember an email exchange with a guy who had drawn what he thought were Feynman diagrams without really knowing what they were, and wanted me to calculate them. While I quit that conversation out of frustration, it was my fault, not his.

Sometimes, though, it’s clear from the tactics that someone isn’t trying to learn. There’s a guy who has tried to post variations of the same comment on this blog sixteen times. He picks a post that mentions math, and uses that as an excuse to bring up his formula for the Hubble constant (“you think you’re so good at math, then explain this!”). He says absolutely nothing about the actual post, and concludes by mentioning that his book is available on Kindle.

It’s pretty clear that spammers like that aren’t trying to learn. They aren’t doing Socratic grilling, they’re just trying (and failing) to get people to buy their book.

It’s less clear how to distinguish Socratic grilling from trolling. Sometimes, someone asks an aggressive series of questions because they think you’re wrong, and want to clarify why. Sometimes, though, someone asks an aggressive series of questions because they want to annoy you.

How can you tell if someone is just trolling? Inconsistency is one way. A Socratic grill-er will have a specific position in mind, even if you can’t quite tell what it is. A troll will say whatever they need to to keep arguing. If it becomes clear that there isn’t any consistent picture behind what the other person is saying, they’re probably just a troll.

In the end, no-one is a perfect teacher. If you aren’t making headway explaining something, if an argument just keeps going in circles, then you probably shouldn’t continue. You may be dealing with a troll, or it might just be honest Socratic grilling, but either way it doesn’t matter: if you’re stuck, you’re stuck, and it’s more productive to back off than to get in a screaming match.

That’s been my philosophy anyway. I engage with Socratic grilling as long as it’s productive, whether or not you’re a crackpot. But if you spam, I’ll block your comments, while if I think you’re trolling or not listening I’ll just stop responding. It’s not worth my time at that point, and it’s not worth yours either.

The Real E=mc^2

It’s the most famous equation in all of physics, written on thousands of chalkboard stock photos. Part of its charm is its simplicity: E for energy, m for mass, c for the speed of light, just a few simple symbols in a one-line equation. Despite its simplicity, E=mc^2 is deep and important enough that there are books dedicated to explaining it.

What does E=mc^2 mean?

Some will tell you it means mass can be converted to energy, enabling nuclear power and the atomic bomb. This is a useful picture for chemists, who like to think about balancing ingredients: this much mass on one side, this much energy on the other. It’s not the best picture for physicists, though. It makes it sound like energy is some form of “stuff” you can pour into your chemistry set flask, and energy really isn’t like that.

There’s another story you might have heard, in older books. In that story, E=mc^2 tells you that in relativity mass, like distance and time, is relative. The more energy you have, the more mass you have. Those books will tell you that this is why you can’t go faster than light: the faster you go, the greater your mass, and the harder it is to speed up.

Modern physicists don’t talk about it that way. In fact, we don’t even write E=mc^2 that way. We’re more likely to write:

E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}

“v” here stands for the velocity, how fast the mass is moving. The faster the mass moves, the more energy it has. Take v to zero, and you get back the familiar E=mc^2.
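In fact, expanding the square root for small v shows how the familiar physics of motion hides inside this formula: the first correction to E=mc^2 is just Newton’s kinetic energy.

E=mc^2\left(1+\frac{1}{2}\frac{v^2}{c^2}+\cdots\right)=mc^2+\frac{1}{2}mv^2+\cdots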

The older books weren’t lying to you, but they were thinking about a different notion of mass: “relativistic mass” m_r instead of “rest mass” m_0, related like this:

m_r=\frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}

which explains the difference in how we write E=mc^2.

Why the change? In part, it’s because of particle physics. In particle physics, we care about the rest mass of particles. Different particles have different rest mass: each electron has one rest mass, each top quark has another, regardless of how fast they’re going. They still get more energy, and harder to speed up, the faster they go, but we don’t describe it as a change in mass. Our equations match the old books, we just talk about them differently.

Of course, you can dig deeper, and things get stranger. You might hear that mass does change with energy, but in a very different way. You might hear that mass is energy, that they’re just two perspectives on the same thing. But those are stories for another day.

I titled this post “The Real E=mc^2”, but to clarify, none of these explanations are more “real” than the others. They’re words, useful in different situations and for different people. “The Real E=mc^2” isn’t the E=mc^2 of nuclear chemists, or old books, or modern physicists. It’s the theory itself, the mathematical rules and principles that all the rest are just trying to describe.

Why I Wasn’t Bothered by the “Science” in Avengers: Endgame

Avengers: Endgame has been out for a while, so I don’t have to worry about spoilers right? Right?

Right?

Anyway, time travel. The spoiler is time travel. They bring back everyone who was eliminated in the previous movie, using time travel.

They also attempt to justify the time travel, using Ant Man-flavored quantum mechanics. This works about as plausibly as you’d expect for a superhero whose shrinking powers not only let him talk to ants, but also go to a “place” called “The Quantum Realm”. Along the way, they manage to throw in splintered references to a half-dozen almost-relevant scientific concepts. It’s the kind of thing that makes some physicists squirm.

And I enjoyed it.

Movies tend to treat time travel in one of two ways. The most reckless, and most common, let their characters rewrite history as they go, like Marty McFly almost erasing himself from existence in Back to the Future. This never makes much sense, and the characters in Avengers: Endgame make fun of it, listing a series of movies that do time travel this way (inexplicably including Wrinkle In Time, which has no time travel at all).

In the other common model, time travel has to happen in self-consistent loops: you can’t change the past, but you can go back and be part of it. This is the model used, for example, in Harry Potter, where Potter is saved by a mysterious spell only to travel back in time and cast it himself. This at least makes logical sense, whether it’s possible physically is an open question.

Avengers: Endgame uses the model of self-consistent loops, but with a twist: if you don’t manage to make your loop self-consistent you instead spawn a parallel universe, doomed to suffer the consequences of your mistakes. This is a rarer setup, but not a unique one: the only other example I can think of at the moment is Homestuck.

Is there any physics justification for the Avengers: Endgame model? Maybe not. But you can at least guess what they were thinking.

The key clue is a quote from Tony Stark, rattling off a stream of movie-grade scientific gibberish:

“Quantum fluctuation messes with the Planck scale, which then triggers the Deutsch Proposition. Can we agree on that?”

From this quote, one can guess not only what scientific results inspired the writers of Avengers: Endgame, but possibly also which Wikipedia entry. David Deutsch is a physicist, and an advocate for the many-worlds interpretation of quantum mechanics. In 1991 he wrote a paper discussing what happens to quantum mechanics in the environment of a wormhole. In it he pointed out that you can make a self-consistent time travel loop, not just in classical physics, but out of a quantum superposition. This offers a weird solution to the classic grandfather paradox of time travel: instead of causing a paradox, you can form a superposition. As Scott Aaronson explains here, “you’re born with probability 1/2, therefore you kill your grandfather with probability 1/2, therefore you’re born with probability 1/2, and so on—everything is consistent.” If you believe in the many-worlds interpretation of quantum mechanics, a time traveler in this picture is traveling between two different branches of the wave-function of the universe: you start out in the branch where you were born, kill your grandfather, and end up in the branch where you weren’t born. This isn’t exactly how Avengers: Endgame handles time travel, but it’s close enough that it seems like a likely explanation.
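Aaronson’s probability-1/2 answer is just the fixed point of a self-consistency condition, something you can check in a few lines. This is a toy algebraic sketch of my own, not Deutsch’s actual density-matrix formalism, and the function name is made up:

```python
def born_probability(p_kill_if_born=1.0):
    """Fixed point of the grandfather paradox's consistency condition.

    Being born leads you to kill your grandfather with probability
    p_kill_if_born, which in turn prevents your birth, so:
        p_born = 1 - p_kill_if_born * p_born
    Solving for p_born gives 1 / (1 + p_kill_if_born).
    """
    return 1.0 / (1.0 + p_kill_if_born)

print(born_probability())     # → 0.5: "born with probability 1/2"
print(born_probability(0.0))  # → 1.0: never kill him, and you're certainly born
```

With a guaranteed kill, the only consistent answer is the 1/2 in Aaronson’s description; everything else contradicts itself.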

David Deutsch’s argument uses a wormhole, but how do the Avengers make a wormhole in the first place? There we have less information, just vague references to quantum fluctuations at the Planck scale, the scale at which quantum gravity becomes important. There are a few things they could have had in mind, but one of them might have been physicists Leonard Susskind and Juan Maldacena’s conjecture that quantum entanglement is related to wormholes, a conjecture known as ER=EPR.

Long-time readers of the blog might remember I got annoyed a while back, when Caltech promoted ER=EPR using a different Disney franchise. The key difference here is that Avengers: Endgame isn’t pretending to be educational. Unlike Caltech’s ER=EPR piece, or even the movie Interstellar, Avengers: Endgame isn’t really about physics. It’s a superhero story, one that pairs the occasional scientific term with a character goofily bouncing around from childhood to old age while another character exclaims “you’re supposed to send him through time, not time through him!” The audience isn’t there to learn science, so they won’t come away with any incorrect assumptions.

A movie like Avengers: Endgame doesn’t teach science, or even advertise it. It does celebrate it, though.

That’s why, despite the silly half-correct science, I enjoyed Avengers: Endgame. It’s also why I don’t think it’s inappropriate, as some people do, to classify movies like Star Wars as science fiction. Star Wars and Avengers aren’t really about exploring the consequences of science or technology; they aren’t science fiction in that sense. But they do build off science’s role in the wider culture. They take our world and look at the advances on the horizon, robots and space travel and quantum speculations, and they let their optimism inform their storytelling. That’s not going to be scientifically accurate, and it doesn’t need to be, any more than the comic Abstruse Goose really believes Witten is from Mars. It’s about noticing we live in a scientific world, and having fun with it.

The Particle Physics Curse of Knowledge

There’s a debate raging right now in particle physics, about whether and how to build the next big collider. CERN’s Future Circular Collider group has been studying different options, some more expensive and some less (Peter Woit has a nice summary of these here). This year, the European particle physics community will debate these proposals, deciding whether to include them in an updated European Strategy for Particle Physics. After that, it will be up to the various countries that are members of CERN to decide whether to fund the proposal. With the costs of the more expensive options hovering around $20 billion, this has led to substantial controversy.

I’m not going to offer an opinion here one way or another. Weighing this kind of thing requires knowing the alternatives: what else the European particle physics community might lobby for in the next few years, and once they decide, what other budget priorities each individual country has. I know almost nothing about either.

Instead of an opinion, I have an observation:

Imagine that primatologists had proposed a $20 billion primate center, able to observe gorillas in greater detail than ever before. The proposal might be criticized in any number of ways: there could be much cheaper ways to accomplish the same thing, the project might fail, it might be that we simply don’t care enough about primate behavior to spend $20 billion on it.

What you wouldn’t expect is the claim that a $20 billion primate center would teach us nothing new.

It probably wouldn’t teach us “$20 billion worth of science”, whatever that means. But a center like that would be guaranteed to discover something. That’s because we don’t expect primatologists’ theories to be exact. Even if gorillas behaved roughly as primatologists expected, the center would still see new behaviors, just as a consequence of looking at a new level of detail.

To pick a physics example, consider the gravitational wave telescope LIGO. Before its observation of two merging black holes, announced in 2016, LIGO faced substantial criticism. After their initial experiments didn’t detect anything, many physicists thought that the project was doomed to fail: that it would never be sensitive enough to detect the faint signals of gravitational waves past the messy vibrations of everyday life on Earth.

When it finally worked, though, LIGO did teach us something new. Not the existence of gravitational waves, we already knew about them. Rather, LIGO taught us new things about the kinds of black holes that exist. LIGO observed much bigger black holes than astronomers expected, a surprise big enough that it left some people skeptical. Even if it hadn’t, though, we still would almost certainly have observed something new: there’s no reason to expect astronomers to perfectly predict the size of the universe’s black holes.

Particle physics is different.

I don’t want to dismiss the work that goes into collider physics (far too many people have dismissed it recently). Much, perhaps most, of the work on the LHC is dedicated not to detecting new particles, but to confirming and measuring the Standard Model. A new collider would bring heroic scientific effort. We’d learn revolutionary new things about how to build colliders, how to analyze data from colliders, and how to use the Standard Model to make predictions for colliders.

In the end, though, we expect those predictions to work. And not just to work reasonably well, but to work perfectly. While we might see something beyond the Standard Model, the default expectation is that we won’t, that after doing the experiments and analyzing the data and comparing to predictions we’ll get results that are statistically indistinguishable from an equation we can fit on a T-shirt. We’ll fix the constants on that T-shirt to an unprecedented level of precision, yes, but the form of the equation may well stay completely the same.
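What “fixing the constants to unprecedented precision” means can be sketched in a few lines of Python. (This is an invented toy simulation, not real collider data: the coupling value, the spread, and the event counts are all arbitrary illustrative numbers.) With more events, the statistical error on a measured constant shrinks, while the underlying equation never changes:

```python
import random
import statistics

random.seed(42)

# Illustrative stand-in for a constant on the T-shirt equation
# (the value is chosen arbitrarily for this sketch).
TRUE_COUPLING = 0.1181

def measure(n_events, spread=0.01):
    """Simulate n_events noisy measurements of the coupling;
    return the sample mean and its standard error."""
    samples = [random.gauss(TRUE_COUPLING, spread) for _ in range(n_events)]
    mean = statistics.fmean(samples)
    std_err = statistics.stdev(samples) / n_events ** 0.5
    return mean, std_err

# More data: same central value, smaller error bar. Precision improves
# even though the "theory" being measured stays exactly the same.
for n in (100, 10_000):
    mean, err = measure(n)
    print(f"{n:>6} events: {mean:.5f} +/- {err:.5f}")
```

The point of the sketch is what doesn’t change: collecting a hundred times more events tightens the error bar by roughly a factor of ten, but the value being measured, like the form of the Standard Model, stays put.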

I don’t think there’s another field where that’s even an option. Nowhere else in all of science could we observe the world in unprecedented detail, capturing phenomena that had never been seen before…and end up perfectly matching our existing theory. There’s no other science where anyone would even expect that to happen.

That makes the argument here different from any argument we’ve faced before. It forces people to consider their deep priorities, to think not just about the best way to carry out this test or that but about what science is supposed to be for. I don’t think there are any easy answers. We’re in what may well be a genuinely new situation, and we have to figure out how to navigate it together.

Postscript: I still don’t want to give an opinion, but given that I didn’t have room for this above let me give a fragment of an opinion: Higgs triple couplings!!!