Monthly Archives: December 2016

What’s in a Conjecture? An ER=EPR Example

A few weeks back, Caltech’s Institute of Quantum Information and Matter released a short film titled Quantum is Calling. It’s the second in what looks like it will become a series of pieces featuring Hollywood actors popularizing ideas in physics. The first used the game of Quantum Chess to talk about superposition and entanglement. This one, featuring Zoe Saldana, is about a conjecture by Juan Maldacena and Leonard Susskind called ER=EPR. The conjecture speculates that pairs of entangled particles (as investigated by Einstein, Podolsky, and Rosen) are in some sense secretly connected by wormholes (or Einstein-Rosen bridges).

The film is fun, but I’m not sure ER=EPR is established well enough to deserve this kind of treatment.

At this point, some of you are nodding your heads for the wrong reason. You’re thinking I’m saying this because ER=EPR is a conjecture.

I’m not saying that.

The fact of the matter is, conjectures play a very important role in theoretical physics, and “conjecture” covers a wide range. Some conjectures are supported by incredibly strong evidence, just short of mathematical proof. Others are wild speculations, “wouldn’t it be convenient if…” ER=EPR is, well…somewhere in the middle.

Most popularizers don’t spend much effort distinguishing things in this middle ground. I’d like to talk a bit about the different sorts of evidence conjectures can have, using ER=EPR as an example.

[Image: Our friendly neighborhood space octopus]

The first level of evidence is motivation.

At its weakest, motivation is the “wouldn’t it be convenient if…” line of reasoning. Some conjectures never get past this point. Hawking’s chronology protection conjecture, for instance, points out that physics (and to some extent logic) has a hard time dealing with time travel, and wouldn’t it be convenient if time travel were impossible?

For ER=EPR, this kind of motivation comes from the black hole firewall paradox. Without going into it in detail, arguments suggested that the event horizons of older black holes would resemble walls of fire, incinerating anything that fell in, in contrast with Einstein’s picture in which passing the horizon has no obvious effect at the time. ER=EPR provides one way to avoid this argument, making event horizons subtle and smooth once more.

Motivation isn’t just “wouldn’t it be convenient if…” though. It can also include stronger arguments: suggestive comparisons that, while they could be coincidental, when put together draw a stronger picture.

In ER=EPR, this comes from certain similarities between the type of wormhole Maldacena and Susskind were considering, and pairs of entangled particles. Both connect two different places, but both do so in an unusually limited way. The wormholes of ER=EPR are non-traversable: you cannot travel through them. Entangled particles can’t be traveled through (as you would expect), but more generally can’t be communicated through: there are theorems to prove it. This is the kind of suggestive similarity that can begin to motivate a conjecture.
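To make that concrete, the textbook example of an entangled pair is a Bell state of two qubits,

$$|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B\right).$$

Measure either qubit and you instantly know the other, but the no-communication theorem says that nothing Alice does to her half changes the statistics Bob sees on his: his reduced state $\rho_B = \mathrm{Tr}_A\,|\Phi^+\rangle\langle\Phi^+|$ is untouched by her operations, so no message gets through.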

(Amusingly, the plot of the film breaks this in both directions. Keanu Reeves can neither steal your cat through a wormhole, nor send you coded messages with entangled particles.)

[Image: Nor live forever as the portrait in his attic withers away]

Motivation is a good reason to investigate something, but a bad reason to believe it. Luckily, conjectures can have stronger forms of evidence. Many of the strongest conjectures are correspondences, supported by a wealth of non-trivial examples.

In science, the gold standard has always been experimental evidence. There’s a reason for that: when you do an experiment, you’re taking a risk. Doing an experiment gives reality a chance to prove you wrong. In a good experiment (a non-trivial one) the result isn’t obvious from the beginning, so that success or failure tells you something new about the universe.

In theoretical physics, there are things we can’t test with experiments, either because they’re far beyond our capabilities or because the claims are mathematical. Despite this, the overall philosophy of experiments is still relevant, especially when we’re studying a correspondence.

“Correspondence” is a word we use to refer to situations where two different theories are unexpectedly computing the same thing. Often, these are very different theories, living in different dimensions with different sorts of particles. With the right “dictionary”, though, you can translate between them, doing a calculation in one theory that matches a calculation in the other one.

Even when we can’t do non-trivial experiments, then, we can still have non-trivial examples. When the result of a calculation isn’t obvious from the beginning, showing that it matches on both sides of a correspondence takes the same sort of risk as doing an experiment, and gives the same sort of evidence.

Some of the best-supported conjectures in theoretical physics have this form. AdS/CFT is technically a conjecture: a correspondence between string theory in a hyperbola-shaped space (anti-de Sitter space) and my favorite theory, N=4 super Yang-Mills. Despite being a conjecture, the wealth of nontrivial examples is so great that it would be extremely surprising if it turned out to be false.

ER=EPR is also a correspondence, between entangled particles on the one hand and wormholes on the other. Does it have nontrivial examples?

Some, but not enough. Originally, it was based on one core example, an entangled state that could be cleanly matched to the simplest wormhole. Now, new examples have been added, covering wormholes with electric fields and higher spins. The full “dictionary” is still unclear, with some pairs of entangled particles being harder to describe in terms of wormholes. So while this kind of evidence is being built, it isn’t as solid as our best conjectures yet.
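For the curious, that core example is worth spelling out: it is the thermofield double state, an entangled state of two copies of a system,

$$|\mathrm{TFD}\rangle = \frac{1}{\sqrt{Z(\beta)}} \sum_n e^{-\beta E_n/2}\, |n\rangle_L |n\rangle_R\,,$$

which Maldacena and Susskind identified with the eternal, two-sided black hole: the “simplest wormhole” referred to above.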

I’m fine with people popularizing this kind of conjecture. It deserves blog posts and press articles, and it’s a fine idea to have fun with. I wouldn’t be uncomfortable with the Bohemian Gravity guy doing a piece on it, for example. But for the second installment of a star-studded series like the one Caltech is doing…it’s not really there yet, and putting it there gives people the wrong idea.

I hope I’ve given you a better idea of the different types of conjectures, from the fuzziest to those just shy of certain. I’d like to do this kind of piece more often, though in future I’ll probably stick with topics in my sub-field (where I actually know what I’m talking about 😉 ). If there’s a particular conjecture you’re curious about, ask in the comments!

A Tale of Two Archives

When it comes to articles about theoretical physics, I have a pet peeve, one made all the more annoying by the fact that it appears even in pieces that are otherwise well written. It involves the following disclaimer:

“This article has not been peer-reviewed.”

Here’s the thing: if you’re dealing with experiments, peer review is very important. Plenty of experiments have subtle problems with their methods, enough that it’s important to have a group of experts who can check them. In experimental fields, you really shouldn’t trust things that haven’t been through a journal yet: there’s just a lot that can go wrong.

In theoretical physics, though, peer review is important for different reasons. Most papers are mathematically rigorous enough that they’re not going to be wrong per se, and most of the ways they could be wrong won’t be caught by peer review. While peer review sometimes does catch mistakes, much more often it’s about assessing the significance of a result. Peer review determines whether a result gets into a prestigious journal or a less prestigious one, which in turn matters for job and grant applications.

As such, it doesn’t really make sense for a journalist to point out that a theoretical physics paper hasn’t been peer reviewed yet. If you think it’s important enough to write an article about, then you’ve already decided it’s significant: peer review wasn’t going to tell you anything else.

We physicists post our papers to arXiv, a free-to-access paper repository, before submitting them to journals. While arXiv does have some moderation, it’s not much: pretty much anyone in the field can post whatever they want.

This leaves a lot of people confused. In that sort of system, how do we know which papers to trust?

Let’s compare to another archive: Archive of Our Own, or AO3 for short.

Unlike arXiv, AO3 hosts not physics, but fanfiction. However, like arXiv it’s quite lightly moderated and free to access. On arXiv you want papers you can trust; on AO3 you want stories you enjoy. In each case, if anyone can post, how do you find them?

The first step is filtering. AO3 and arXiv both have systems of tags and subject headings. The headings on arXiv are simpler and more heavily moderated than those on AO3, but they both serve the purpose of letting people filter for the subjects, whether scientific or fictional, that they find interesting. If you’re interested in astrophysics, try astro-ph on arXiv. If you want Harry Potter fanfiction, try the “Harry Potter – J.K. Rowling” tag on AO3.
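If you’d rather do that filtering in code, arXiv also exposes its subject headings through a public API. Here’s a minimal sketch (mine, using only Python’s standard library, with the astro-ph.CO subcategory as a stand-in; swap in whatever category you like):

```python
# A minimal sketch: list recent titles under one arXiv subject heading,
# via arXiv's public Atom API, using only the Python standard library.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def latest_titles(category="astro-ph.CO", count=5):
    """Fetch the most recently updated paper titles in a given arXiv category."""
    query = urllib.parse.urlencode({
        "search_query": f"cat:{category}",
        "sortBy": "lastUpdatedDate",
        "sortOrder": "descending",
        "max_results": count,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as response:
        feed = ET.parse(response).getroot()
    # Each <entry> in the Atom feed is one paper; grab its <title>.
    return [entry.find(ATOM + "title").text.strip()
            for entry in feed.findall(ATOM + "entry")]

if __name__ == "__main__":
    for title in latest_titles():
        print(title)
```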

Beyond that, it helps to pay attention to authors. When an author has written something you like, it’s worth it not only to keep up with other things they write, but to see which other authors they like and pay attention to them as well. That’s true whether the author is Juan Maldacena or your favorite source of Twilight fanfic.

Even if you follow all of this, you can’t trust every paper you find on arXiv. You also won’t enjoy everything you dig up on AO3. Either way, publication (in journals or books) won’t solve your problem: both are an additional filter, but not an infallible one. Judgement is still necessary.

This is all to say that “this article has not been peer-reviewed” can be a useful warning, but often isn’t. In theoretical physics, knowing who wrote an article and what it’s about will often tell you much more than whether or not it’s been peer-reviewed yet.

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). Among explanations that go beyond the standard popular accounts, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:

[Comic panel: the exchange about putting “complex numbers in our ontologies”]

I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.
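To spell out one example: the state of a single qubit is written with complex amplitudes,

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad \alpha, \beta \in \mathbb{C}, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

and the relative phase between $\alpha$ and $\beta$ has physical consequences: it is what shows up in interference experiments. That is already a precise statement about the world, made with complex numbers rather than with a sentence.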

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools for making precise statements, you’re handicapping yourself, making the problem harder than it needs to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

Words, Words, Words

If there’s one thing the Center for Communicating Science drummed into me at Stony Brook, it’s to be careful with words. You can teach your audience new words, but only a few: effectively, you have a vocabulary budget.

Sometimes, the risk is that your audience will misunderstand you. If you’re a biologist who talks about treating disease in a model, be careful: the public is more likely to think of mannequins than mice.

[Image: NOT what you’re talking about]

Sometimes, though, the risk is subtler. Even if the audience understands you, you might still be using up your vocabulary budget.

Recently, Perimeter’s monthly Public Lecture was given by an expert on regenerative medicine. When talking about trying to heal eye tissue, she mentioned looking for a “pupillary response”.

Now, “pupillary response” isn’t exactly hard to decipher. It’s pretty clearly a response by the pupil of the eye. From there, you can think about how eyes respond to bright light, or to darkness, and have an idea of what she’s talking about.

So nobody is going to misunderstand “pupillary response”. Nonetheless, that chain of reasoning? It takes time, and it takes effort. People do have to stop and think, if only for a moment, to know what you mean.

That adds up. Every time your audience has to take a moment to think back and figure out what you just said? That eats into your vocabulary budget. Enough moments like that, and your audience won’t have the energy to follow what you’re saying: you’ll lose them.

The last few Public Lectures haven’t had as much online engagement as they used to. Lots of people still watch them, but fewer have been asking questions on Twitter, for example. I have a few guesses about why this is…but I wonder if this kind of thing is part of it. The last few speakers have been more free with technical terms, more lax with their vocabulary budget. I worry that, while people still show up for the experience, they aren’t going away with any understanding.

We don’t need to dumb things down to be understood. (Or not very much anyway.) We do need to be careful with our words. Use our vocabulary budget sparingly, and we can really teach people. Spend it too fast…and we lose them.

arXiv vs. snarXiv: Can You Tell the Difference?

Have you ever played arXiv vs snarXiv?

arXiv is a preprint repository: it’s where we physicists put our papers before they’re published in journals.

snarXiv is…well…sound it out.

A creation of David Simmons-Duffin, snarXiv randomly generates titles and abstracts out of trendy arXiv buzzwords. It’s designed so that the papers on it look almost plausible…until you take a closer look, anyway.
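The real snarXiv is much cleverer than this, but if you’re curious what “randomly generates titles out of buzzwords” can look like in practice, here’s a toy sketch of my own (emphatically not David’s actual code):

```python
# A toy illustration, not the actual snarXiv: glue trendy buzzwords
# into paper-title templates at random.
import random

ADJECTIVES = ["Holographic", "Supersymmetric", "Non-perturbative", "Anomalous"]
OBJECTS = ["Black Holes", "Wilson Loops", "Instantons", "Entanglement Entropy"]
SETTINGS = ["AdS_5 x S^5", "the Conformal Bootstrap", "M-theory", "de Sitter Space"]

TEMPLATES = [
    "{adj} {obj} in {setting}",
    "On {adj} Corrections to {obj} in {setting}",
    "{obj} as a Probe of {adj} Dynamics in {setting}",
]

def fake_title():
    """Assemble one random, vaguely plausible-looking title."""
    return random.choice(TEMPLATES).format(
        adj=random.choice(ADJECTIVES),
        obj=random.choice(OBJECTS),
        setting=random.choice(SETTINGS),
    )

if __name__ == "__main__":
    for _ in range(3):
        print(fake_title())
```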

Hence the game, arXiv vs snarXiv. Given just the titles of two papers, can you figure out which one is real, and which is fake?

I played arXiv vs snarXiv for a bit today, waiting for some code to run. Out of twenty questions, I only got two wrong.

Sometimes, it was fairly clear which paper was fake because snarXiv overreached. By trying to pile on too many buzzwords, it ended up with a title that repeated itself, or didn’t quite work grammatically.

Other times, I had to use some actual physics knowledge. Usually, this meant noticing when a title tied together unrelated areas in an implausible way. When a title claims to tie obscure mathematical concepts from string theory to a concrete problem in astronomy, it’s pretty clearly snarXiv talking.

The toughest questions, including the ones I got wrong, were when snarXiv went for something subtle. For short enough titles, the telltale signs of snarXiv were suppressed. There just weren’t enough buzzwords for a mistake to show up. I’m not sure there’s a way to distinguish titles like that, even for people in the relevant sub-field.

How well do you do at arXiv vs snarXiv? Any tips?