Tag Archives: PublicPerception

Why I Wasn’t Bothered by the “Science” in Avengers: Endgame

Avengers: Endgame has been out for a while, so I don’t have to worry about spoilers, right? Right?


Anyway, time travel. The spoiler is time travel. They bring back everyone who was eliminated in the previous movie, using time travel.

They also attempt to justify the time travel, using Ant-Man-flavored quantum mechanics. This works about as plausibly as you’d expect for a superhero whose shrinking powers not only let him talk to ants but also take him to a “place” called “The Quantum Realm”. Along the way, they manage to throw in splintered references to a half-dozen almost-relevant scientific concepts. It’s the kind of thing that makes some physicists squirm.

And I enjoyed it.

Movies tend to treat time travel in one of two ways. The most reckless, and most common, lets characters rewrite history as they go, like Marty McFly almost erasing himself from existence in Back to the Future. This never makes much sense, and the characters in Avengers: Endgame make fun of it, listing a series of movies that do time travel this way (inexplicably including A Wrinkle in Time, which has no time travel at all).

In the other common model, time travel has to happen in self-consistent loops: you can’t change the past, but you can go back and be part of it. This is the model used, for example, in Harry Potter, where Potter is saved by a mysterious spell only to travel back in time and cast it himself. This at least makes logical sense; whether it’s physically possible is an open question.

Avengers: Endgame uses the model of self-consistent loops, but with a twist: if you don’t manage to make your loop self-consistent you instead spawn a parallel universe, doomed to suffer the consequences of your mistakes. This is a rarer setup, but not a unique one, though the only other example I can think of at the moment is Homestuck.

Is there any physics justification for the Avengers: Endgame model? Maybe not. But you can at least guess what they were thinking.

The key clue is a quote from Tony Stark, rattling off a stream of movie-grade scientific gibberish:

“Quantum fluctuation messes with the Planck scale, which then triggers the Deutsch Proposition. Can we agree on that?”

From this quote, one can guess not only which scientific results inspired the writers of Avengers: Endgame, but possibly also which Wikipedia entry. David Deutsch is a physicist, and an advocate for the many-worlds interpretation of quantum mechanics. In 1991 he wrote a paper discussing what happens to quantum mechanics in the environment of a wormhole. In it he pointed out that you can make a self-consistent time travel loop, not just in classical physics, but out of a quantum superposition. This offers a weird solution to the classic grandfather paradox of time travel: instead of causing a paradox, you can form a superposition. As Scott Aaronson explains here, “you’re born with probability 1/2, therefore you kill your grandfather with probability 1/2, therefore you’re born with probability 1/2, and so on—everything is consistent.”

If you believe in the many-worlds interpretation of quantum mechanics, a time traveler in this picture is traveling between two different branches of the wave-function of the universe: you start out in the branch where you were born, kill your grandfather, and end up in the branch where you weren’t born. This isn’t exactly how Avengers: Endgame handles time travel, but it’s close enough that it seems like a likely explanation.
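Aaronson’s consistency condition can be sketched as a toy computation. This is my own illustration, not Deutsch’s actual density-matrix formalism: treat “the traveler is born” as a probability p, note that one trip around the loop sends p to 1 − p (born means the grandfather dies; not born means he lives), and look for the value of p that survives the loop unchanged.

```python
# Toy version of Deutsch's self-consistency condition for the
# grandfather paradox. One pass around the time loop maps the
# probability of being born, p, to 1 - p. A consistent loop
# requires p = 1 - p, i.e. p = 1/2.

def consistent_birth_probability(tolerance=1e-12):
    """Find the self-consistent p by averaging iterates of the loop.

    The raw map p -> 1 - p just oscillates between 0 and 1, but its
    running average converges to the fixed point of the condition.
    """
    p = 1.0   # start by assuming the traveler is definitely born
    avg = p
    for n in range(2, 10_000):
        p = 1.0 - p            # one trip around the time loop
        avg += (p - avg) / n   # running (Cesaro) average of the iterates
        if abs(avg - (1.0 - avg)) < tolerance:
            break              # avg is now self-consistent
    return avg

print(consistent_birth_probability())  # -> 0.5
```

The answer, p = 1/2, is exactly Aaronson’s “born with probability 1/2, kill your grandfather with probability 1/2”: the only assignment of probabilities that doesn’t contradict itself.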

David Deutsch’s argument uses a wormhole, but how do the Avengers make a wormhole in the first place? There we have less information, just vague references to quantum fluctuations at the Planck scale, the scale at which quantum gravity becomes important. There are a few things they could have had in mind, but one of them might have been physicists Leonard Susskind and Juan Maldacena’s conjecture that quantum entanglement is related to wormholes, a conjecture known as ER=EPR.

Long-time readers of the blog might remember I got annoyed a while back, when Caltech promoted ER=EPR using a different Disney franchise. The key difference here is that Avengers: Endgame isn’t pretending to be educational. Unlike Caltech’s ER=EPR piece, or even the movie Interstellar, Avengers: Endgame isn’t really about physics. It’s a superhero story, one that pairs the occasional scientific term with a character goofily bouncing around from childhood to old age while another character exclaims “you’re supposed to send him through time, not time through him!” The audience isn’t there to learn science, so they won’t come away with any incorrect assumptions.

A movie like Avengers: Endgame doesn’t teach science, or even advertise it. It does celebrate it, though.

That’s why, despite the silly half-correct science, I enjoyed Avengers: Endgame. It’s also why I don’t think it’s inappropriate, as some people do, to classify movies like Star Wars as science fiction. Star Wars and Avengers aren’t really about exploring the consequences of science or technology; they aren’t science fiction in that sense. But they do build off science’s role in the wider culture. They take our world and look at the advances on the horizon, robots and space travel and quantum speculations, and they let their optimism inform their storytelling. That’s not going to be scientifically accurate, and it doesn’t need to be, any more than the comic Abstruse Goose really believes Witten is from Mars. It’s about noticing we live in a scientific world, and having fun with it.

The Particle Physics Curse of Knowledge

There’s a debate raging right now in particle physics, about whether and how to build the next big collider. CERN’s Future Circular Collider group has been studying different options, some more expensive and some less (Peter Woit has a nice summary of these here). This year, the European particle physics community will debate these proposals, deciding whether to include them in an updated European Strategy for Particle Physics. After that, it will be up to the various countries that are members of CERN to decide whether to fund the proposal. With the costs of the more expensive options hovering around $20 billion, this has led to substantial controversy.

I’m not going to offer an opinion here one way or another. Weighing this kind of thing requires knowing the alternatives: what else the European particle physics community might lobby for in the next few years, and once they decide, what other budget priorities each individual country has. I know almost nothing about either.

Instead of an opinion, I have an observation:

Imagine that primatologists had proposed a $20 billion primate center, able to observe gorillas in greater detail than ever before. The proposal might be criticized in any number of ways: there could be much cheaper ways to accomplish the same thing, the project might fail, it might be that we simply don’t care enough about primate behavior to spend $20 billion on it.

What you wouldn’t expect is the claim that a $20 billion primate center would teach us nothing new.

It probably wouldn’t teach us “$20 billion worth of science”, whatever that means. But a center like that would be guaranteed to discover something. That’s because we don’t expect primatologists’ theories to be exact. Even if gorillas behaved roughly as primatologists expected, the center would still see new behaviors, just as a consequence of looking at a new level of detail.

To pick a physics example, consider the gravitational wave observatory LIGO. Before its 2015 observation of two black holes merging (announced in early 2016), LIGO faced substantial criticism. After the initial experiments didn’t detect anything, many physicists thought that the project was doomed to fail: that it would never be sensitive enough to detect the faint signals of gravitational waves past the messy vibrations of everyday life on Earth.

When it finally worked, though, LIGO did teach us something new. Not the existence of gravitational waves, we already knew about them. Rather, LIGO taught us new things about the kinds of black holes that exist. LIGO observed much bigger black holes than astronomers expected, a surprise big enough that it left some people skeptical. Even if it hadn’t been, though, we would almost certainly still have observed something new: there’s no reason to expect astronomers to perfectly predict the sizes of the universe’s black holes.

Particle physics is different.

I don’t want to dismiss the work that goes into collider physics (far too many people have dismissed it recently). Much, perhaps most, of the work on the LHC is dedicated not to detecting new particles, but to confirming and measuring the Standard Model. A new collider would bring heroic scientific effort. We’d learn revolutionary new things about how to build colliders, how to analyze data from colliders, and how to use the Standard Model to make predictions for colliders.

In the end, though, we expect those predictions to work. And not just to work reasonably well, but to work perfectly. While we might see something beyond the Standard Model, the default expectation is that we won’t, that after doing the experiments and analyzing the data and comparing to predictions we’ll get results that are statistically indistinguishable from an equation we can fit on a T-shirt. We’ll fix the constants on that T-shirt to an unprecedented level of precision, yes, but the form of the equation may well stay completely the same.

I don’t think there’s another field where that’s even an option. Nowhere else in all of science could we observe the world in unprecedented detail, capturing phenomena that had never been seen before…and end up perfectly matching our existing theory. There’s no other science where anyone would even expect that to happen.

That makes the argument here different from any argument we’ve faced before. It forces people to consider their deep priorities, to think not just about the best way to carry out this test or that but about what science is supposed to be for. I don’t think there are any easy answers. We’re in what may well be a genuinely new situation, and we have to figure out how to navigate it together.

Postscript: I still don’t want to give an opinion, but given that I didn’t have room for this above let me give a fragment of an opinion: Higgs triple couplings!!!

Pan Narrans Scientificus

As scientists, we want to describe the world as objectively as possible. We try to focus on what we can establish conclusively, to leave out excessive speculation and stick to cold, hard facts.

Then we have to write application letters.

Stick to the raw, unembellished facts, and an application letter would just be a list: these papers in these journals, these talks and awards. Though we may sometimes wish applications worked that way, we don’t live in that kind of world. To apply for a job or a grant, we can’t just stick to the most easily measured facts. We have to tell a story.

The author Terry Pratchett called humans Pan Narrans, the Storytelling Ape. Stories aren’t just for fun, they’re how we see the world, how we organize our perceptions and actions. Without a story, the world doesn’t make sense. And that applies even to scientists.

Applications work best when they tell a story: how did you get here, and where are you going? Scientific papers, similarly, require some sort of narrative: what did you do, and why did you do it? When teaching or writing about science, we almost never just present the facts. We try to fit them into a story, one that presents the facts but also makes sense, in that deliciously human way. A story, more than mere facts, lets us project to the future, anticipating what you’ll do with that grant money or how others will take your research in new directions.

It’s important to remember, though, that stories aren’t actually facts. You can’t get too attached to one story, you have to be willing to shift as new facts come in. Those facts can be scientific measurements, but they can also be steps in your career. You aren’t going to tell the same story when applying to grad school as when you’re trying for tenure, and that’s not just because you’ll have more to tell. The facts of your life will be organized in new ways, rearranging in importance as the story shifts.

Keep your stories in mind as you write or do science. Think about your narrative, the story you’re using to understand the world. Think about what it predicts, how the next step in the story should go. And be ready to start a new story when you need to.

When You Shouldn’t Listen to a Distinguished but Elderly Scientist

Of science fiction author Arthur C. Clarke’s sayings, the most famous is “Clarke’s third law”, that “Any sufficiently advanced technology is indistinguishable from magic.” Almost as famous, though, is his first law:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Recently Michael Atiyah, an extremely distinguished but also rather elderly mathematician, claimed that something was possible: specifically, he claimed it was possible that he had proved the Riemann hypothesis, one of the longest-standing and most difficult puzzles in mathematics. I won’t go into the details here, but people are, well, skeptical.

This post isn’t really about Atiyah. I’m not close enough to that situation to comment. Instead, it’s about a more general problem.

See, the public seems to mostly agree with Clarke’s law. They trust distinguished, elderly scientists, at least when they’re saying something optimistic. Other scientists know better. We know that scientists are human, that humans age…and that sometimes scientific minds don’t age gracefully.

Some of the time, that means Alzheimer’s, or another form of dementia. Other times, it’s nothing so extreme, just a mind slowing down with age, opinions calcifying and logic getting just a bit more fuzzy.

And the thing is, watching from the sidelines, you aren’t going to know the details. Other scientists in the field will, but this kind of thing is almost never discussed with the wider public. Even here, though specific physicists come to mind as I write this, I’m not going to name them. It feels rude, to point out that kind of all-too-human weakness in someone who accomplished so much. But I think it’s important for the public to keep in mind that these people exist. When an elderly Nobelist claims to have solved a problem that baffles mainstream science, the news won’t tell you they’re mentally ill. All you can do is keep your eyes open, and watch for warning signs:

Be wary of scientists who isolate themselves. Scientists who still actively collaborate and mentor almost never have this kind of problem. There’s a nasty feedback loop when those contacts start to diminish. Being regularly challenged is crucial to test scientific ideas, but it’s also important for mental health, especially in the elderly. As a scientist thinks less clearly, they won’t be able to keep up with their collaborators as much, worsening the situation.

Similarly, beware those famous enough to surround themselves with yes-men. With Nobel prizewinners in particular, many of the worst cases involve someone treated with so much reverence that they forget to question their own ideas. This is especially risky when commenting on an unfamiliar field: often, the Nobelist’s contacts in the new field have a vested interest in holding on to their big-name support, and ignoring signs of mental illness.

Finally, as always, bigger claims require better evidence. If everything someone works on is supposed to revolutionize science as we know it, then likely none of it will. The signs that indicate crackpots apply here as well: heavily invoking historical scientists, emphasis on notation over content, a lack of engagement with the existing literature. Be especially wary if the argument seems easy: deep problems are rarely so simple to solve.

Keep this in mind, and the next time a distinguished but elderly scientist states that something is possible, don’t trust them blindly. Ultimately, we’re still human beings. We don’t last forever.

Underdetermination of Theory by Metaphor

Sometimes I explain science in unconventional ways. I’ll talk about quantum mechanics without ever using the word “measurement”, or write the action of the Standard Model in legos.

Whenever I do this, someone asks me why. Why use a weird, unfamiliar explanation? Why not just stick to the tried and true, metaphors that have been tested and honed in generations of popular science books?

It’s not that I have a problem with the popular explanations, most of the time. It’s that, even when the popular explanation does a fine job, there can be good reason to invent a new metaphor. To demonstrate my point, here’s a new metaphor to explain why:

In science, we sometimes talk about underdetermination of a theory by the data. We want to find a theory whose math matches the experimental results, but sometimes the experiments just don’t tell us enough. If multiple theories match the data, we say that the theory is underdetermined, and we go looking for more data to resolve the problem.

What if you’re not a scientist, though? Often, that means you hear about theories secondhand, from some science popularizer. You’re not hearing the full math of the theory, you’re not seeing the data. You’re hearing metaphors and putting together your own picture of the theory. Metaphors are your data, in some sense. And just as scientists can find their theories underdetermined by the experimental data, you can find them underdetermined by the metaphors.

This can happen if a metaphor is consistent with two very different interpretations. If you hear that time runs faster in lower gravity, maybe you picture space and time as curved…or maybe you think low gravity makes you skip ahead, so you end up in the “wrong timeline”. Even if the popularizer you heard it from was perfectly careful, you base your understanding of the theory on the metaphor, and you can end up with the wrong understanding.
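The “time runs faster in lower gravity” metaphor does correspond to a concrete formula: in general relativity, a clock at radius r from a mass M ticks slower than a distant clock by a factor of √(1 − 2GM/rc²). Here’s a back-of-the-envelope sketch (my own illustration with round textbook constants, not something from the metaphor itself) recovering the well-known result that GPS satellite clocks gain roughly 45 microseconds per day from gravity alone:

```python
import math

# Gravitational time dilation: a clock at radius r from a mass with
# gravitational parameter GM ticks slower than a far-away clock by
# sqrt(1 - 2*GM/(r*c^2)) (Schwarzschild metric, outside the mass).

C = 2.998e8          # speed of light, m/s
GM_EARTH = 3.986e14  # Earth's standard gravitational parameter, m^3/s^2

def dilation_factor(r):
    """Clock rate at radius r, relative to a clock far from Earth."""
    return math.sqrt(1.0 - 2.0 * GM_EARTH / (r * C**2))

# Extra time a GPS satellite clock accumulates per day relative to a
# ground clock, from gravity alone (ignoring the competing
# special-relativistic slowdown from the satellite's orbital speed).
r_ground = 6.371e6   # Earth's surface radius, m
r_gps = 2.656e7      # GPS orbital radius, m
seconds_per_day = 86400.0

extra = (dilation_factor(r_gps) - dilation_factor(r_ground)) * seconds_per_day
print(f"GPS clock gains ~{extra * 1e6:.0f} microseconds per day")
```

The number that comes out, about 45 microseconds per day, is the famous correction GPS engineers actually have to build in. Neither the curved-spacetime picture nor the “wrong timeline” picture is needed to run this calculation, which is exactly the point: the metaphor underdetermines the theory, but the math doesn’t.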

In science, the only way out of underdetermination of a theory is new, independent data. In science popularization, it’s new, independent metaphors. New metaphors shake you out of your comfort zone. If you misunderstood the old metaphor, now you’ll try to fit that misunderstanding with the new metaphor too. Often, that won’t work: different metaphors lead to different misunderstandings. With enough different metaphors, your picture of the theory won’t be underdetermined anymore: there will be only one picture, one understanding, that’s consistent with every metaphor.

That’s why I experiment with metaphors, why I try new, weird explanations. I want to wake you up, to make sure you aren’t sticking to the wrong understanding. I want to give you more data to determine your theory.

Adversarial Collaborations for Physics

Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, where every question at a talk becomes a screaming match until you simply stop going to the same conferences.

Now, imagine writing a paper with those people.

Adversarial collaborations, subject of a recent contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.

This won’t work for the most vicious debates, when one (or both) sides isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.

One advantage of adversarial collaborations is in preventing accusations of bias. The debate between dark matter and MOND-like proposals is filled with these kinds of accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. Adversarial collaboration prevents these kinds of accusations: whatever comes out of an adversarial collaboration, both sides would make sure the other side didn’t bias it.

Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of moving the goalposts. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, I get the impression that many debates about interpretations of quantum mechanics are bogged down by one side claiming they’ve closed off a loophole with a new experiment, only for the other to claim it wasn’t the loophole they were actually using, something that could be avoided if both sides were involved in the experiment from the beginning.

It’s possible, even likely, that no one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!

Journalists Need to Adapt to Preprints, Not Ignore Them

Nature has an article making the rounds this week, decrying the dangers of preprints.

On the surface, this is a bit like an article by foxes decrying the dangers of henhouses. There’s a pretty big conflict of interest when a journal like Nature, which makes huge amounts of money off of research that scientists would be happy to publish for free, gets snippy about scientists sharing their work elsewhere. I was expecting an article about how “important” the peer review process is, how we can’t just “let anyone” publish, and the like.

Instead, I was pleasantly surprised. The article is about a real challenge, the weakening of journalistic embargoes. While this is still a problem I think journalists can think their way around, it’s a bit subtler than the usual argument.

For the record, peer review is usually presented as much more important than it actually is. When a scientific article gets submitted to a journal, it gets sent to two or three experts in the field for comment. In the best cases, these experts read the paper carefully and send criticism back. They don’t replicate the experiments, they don’t even (except for a few heroic souls) reproduce the calculations. That kind of careful reading is important, but it’s hardly unique: it’s something scientists do on their own when they want to build off of someone else’s paper, and it’s what good journalists get when they send a paper to experts for comments before writing an article. If peer review in a journal is important, it’s to ensure that this careful reading happens at least once, a sort of minimal evidence that the paper is good enough to appear on a scientist’s CV.

The Nature article points out that peer review serves another purpose, specifically one of delay. While a journal is preparing to publish an article they can send it out to journalists, after making them sign an agreement (an embargo) that they won’t tell the public until the journal publishes. This gives the journalists a bit of lead time, so the more responsible ones can research and fact-check before publishing.

Open-access preprints cut out the lead time. If the paper just appears online with no warning and no embargoes, journalists can write about it immediately. The unethical journalists can skip fact-checking and publish first, and the ethical ones have to follow soon after, or risk publishing “old news”. Nobody gets the time to properly vet, or understand, a new paper.

There’s a simple solution I’ve seen from a few folks on Twitter: “Don’t be an unethical journalist!” That doesn’t actually solve the problem though. The question is, if you’re an ethical journalist, but other people are unethical journalists, what do you do?

Apparently, what some ethical journalists do is to carry on as if preprints didn’t exist. The Nature article describes journalists who, after a preprint has been covered extensively by others, wait until a journal publishes it and then cover it as if nothing had happened. The article frames this as virtuous, but doomed: journalists sticking to their ethics even if it means publishing “old news”.

To be 100% clear here, this is not virtuous. If you present a paper’s publication in a journal as news, when it was already released as a preprint, you are actively misleading the public. I can’t count the number of times I’ve gotten messages from readers, confused because they saw a scientific result covered again months later and thought it was new. It leads to a sort of mental “double-counting”, where the public assumes that the scientific result was found twice, and therefore that it’s more solid. Unless the publication itself is unexpected (something that wasn’t expected to pass peer review, or something controversial like Mochizuki’s proof of the ABC conjecture) mere publication in a journal of an already-public result is not news.

What science journalists need to do here is to step back, and think about how their colleagues cover stories. Current events these days don’t have embargoes, they aren’t fed through carefully managed press releases. There’s a flurry of initial coverage, and it gets things wrong and misses details and misleads people, because science isn’t the only field that’s complicated, real life is complicated. Journalists have adapted to this schedule, mostly, by specializing. Some journalists and news outlets cover breaking news as it happens, others cover it later with more in-depth analysis. Crucially, the latter journalists don’t present the topic as new. They write explicitly in the light of previous news, as a response to existing discussion. That way, the public isn’t misled, and their existing misunderstandings can be corrected.

The Nature article brings up public health, and other topics where misunderstandings can do lasting damage, as areas where embargoes are useful. While I agree, I would hope many of these areas would figure out embargoes on their own. My field certainly does: the big results of scientific collaborations aren’t just put online as preprints, they’re released only after the collaboration sets up its own journalistic embargoes, and prepares its own press releases. In a world of preprints, this sort of practice needs to happen for important controversial public health and environmental results as well. Unethical scientists might still release too fast, to keep journalists from fact-checking, but they could do that anyway, without preprints. You don’t need a preprint to call a journalist on the phone and claim you cured cancer.

As open-access preprints become the norm, journalists will have to adapt. I’m confident they will be able to, but only if they stop treating science journalism as unique, and start treating it as news. Science journalism isn’t teaching, you’re not just passing down facts someone else has vetted. You’re asking the same questions as any other journalist: who did what? And what really happened? If you can do that, preprints shouldn’t be scary.