Tag Archives: PublicPerception

What Space Can Tell Us about Fundamental Physics

Back when LIGO announced its detection of gravitational waves, there was one question people kept asking me: “what does this say about quantum gravity?”

The answer, each time, was “nothing”. LIGO’s success told us nothing about quantum gravity, and very likely LIGO will never tell us anything about quantum gravity.

The sheer volume of questions made me think, though. Astronomy, astrophysics, and cosmology fascinate people. They capture the public’s imagination in a way that makes them expect breakthroughs about fundamental questions. Especially now, with the LHC so far seeing nothing new since the Higgs, people are turning to space for answers.

Is that a fair expectation? Well, yes and no.

Most astrophysicists aren’t concerned with finding new fundamental laws of nature. They’re interested in big systems like stars and galaxies, where we know most of the basic rules but can’t possibly calculate all their consequences. Like most physicists, they’re doing the vital work of “physics of decimals”.

At the same time, there’s a decent chunk of astrophysics and cosmology that does matter for fundamental physics. Just not all of it. Here are some of the key areas where space has something important to say about the fundamental rules that govern our world:

 

1. Dark Matter:

Galaxies rotate at different speeds than their stars would alone. Clusters of galaxies bend light that passes by, and do so more than their visible mass would suggest. And when scientists try to model the evolution of the universe, from early images to its current form, the models require an additional piece: extra matter that cannot interact with light. All of this suggests that there is some extra “dark” matter in the universe, not described by our standard model of particle physics.
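To see why rotation speeds are such strong evidence, it helps to run the Newtonian numbers. Here's a rough sketch; the mass and radii below are illustrative values I've assumed, not measurements:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19       # one kiloparsec in meters
M_VISIBLE = 2e41     # kg -- rough visible mass of a Milky-Way-like galaxy (assumed)

def keplerian_speed(r_m):
    """Orbital speed at radius r if essentially all the visible mass sits inside r."""
    return math.sqrt(G * M_VISIBLE / r_m)

# Outside the visible disk, orbital speeds "should" fall off as 1/sqrt(r)...
for r_kpc in (10, 20, 40):
    v_kms = keplerian_speed(r_kpc * KPC) / 1000
    print(f"{r_kpc:3d} kpc: {v_kms:5.0f} km/s")
# ...but measured rotation curves stay roughly flat out to large radii,
# which is what points to extra unseen mass.
```

The point of the exercise: doubling the radius should cut the speed by a factor of √2, but real galaxies don't slow down like that, suggesting the enclosed mass keeps growing with radius.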

If we want to understand this dark matter, we need to know more about its properties, and much of that can be learned from astronomy. If it turns out dark matter isn’t really matter after all, if it can be explained by a modification of gravity or better calculations of gravity’s effects, then it still will have important implications for fundamental physics, and astronomical evidence will still be key to finding those implications.

2. Dark Energy (/Cosmological Constant/Inflation/…):

The universe is expanding, and its expansion appears to be accelerating. It also seems more smooth and uniform than expected, suggesting that it had a period of much greater acceleration early on. Both of these suggest some extra quantity: a changing acceleration, a “dark energy”, the sort of thing that can often be explained by a new scalar field like the Higgs.
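The link between accelerated expansion and a new ingredient with negative pressure can be made precise with the second Friedmann equation (standard textbook cosmology, summarized here for context, with c = 1):

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)
```

For a component with equation of state $p = w\rho$, acceleration ($\ddot{a} > 0$) requires $w < -1/3$. A cosmological constant has $w = -1$ exactly, and a slowly rolling scalar field can come close, which is why both are candidate explanations for dark energy and inflation.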

Again, the specifics matter: how (and perhaps whether) the universe is expanding now, and what kinds of early expansion (if any) the universe’s shape suggests, will almost certainly have implications for fundamental physics.

3. Limits on stable stuff:

Let’s say you have a new proposal for particle physics. You’ve predicted a new particle, but it can’t interact with anything else, or interacts so weakly we’d never detect it. If your new particle is stable, then you can still say something about it, because its mass would have an effect on the early universe. Too many such particles and they would throw off cosmologists’ models, ruling them out.
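The back-of-the-envelope version of this constraint compares the particle's contribution to the cosmic energy budget against the critical density. A sketch with made-up inputs (the mass and number density are placeholders a real model would have to predict):

```python
import math

G = 6.674e-11                        # m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22                 # Hubble constant in s^-1 (70 km/s/Mpc)
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density of the universe, kg/m^3

# Hypothetical stable particle -- mass and present-day number density are
# invented inputs that a real proposal would need to predict.
m_X = 1.8e-25    # kg, roughly 100 GeV
n_X = 1e-3       # particles per cubic meter today

omega_X = m_X * n_X / rho_crit       # fraction of critical density
print(f"Omega_X = {omega_X:.3f}")
# If Omega_X came out much larger than the observed matter fraction (~0.3),
# the proposal would conflict with cosmological models.
```

This is only the crudest version of the argument; real analyses track how the particle's abundance evolves with the expanding universe, but the final comparison works the same way.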

Alternatively, you might predict something that could be detected, but hasn’t, like a magnetic monopole. Then cosmologists can tell you how many such particles would have been produced in the early universe, and thus how likely we would be to detect them today. If you predict too many particles and we don’t see them, then that becomes evidence against your proposal.

4. “Cosmological Collider Physics”:

A few years back, Nima Arkani-Hamed and Juan Maldacena suggested that the early universe could be viewed as an extremely high energy particle collider. While this collider performed only one experiment, the results from that experiment are spread across the sky, and observed patterns in the early universe should tell us something about the particles produced by the cosmic collider.

People are still teasing out the implications of this idea, but it looks promising, and could mean we have a lot more to learn from examining the structure of the universe.

5. Big Weird Space Stuff:

If you suspect we live in a multiverse, you might want to look for signs of other universes brushing up against our own. If your model of the early universe predicts vast cosmic strings, maybe a gravitational wave detector like LIGO will be able to see them.

6. Unexpected weirdness:

In all likelihood, nothing visibly “quantum” happens at the event horizons of astrophysical black holes. If you think there’s something to see though, the Event Horizon Telescope might be able to see it. There’s a grab bag of other predictions like this: situations where we probably won’t see anything, but where at least one person thinks there’s a question worth asking.

 

I’ve probably left something out here, but this should give you a general idea. There is a lot that fundamental physics can learn from astronomy, from the overall structure and origins of the universe to unexplained phenomena like dark matter. But not everything in astronomy has these sorts of implications: for the most part, astronomy is interesting not because it tells us something about the fundamental laws of nature, but because it tells us how the vast space above us actually happens to work.

Popularization as News, Popularization as Signpost

Lubos Motl has responded to my post from last week about the recent Caltech short, Quantum is Calling. His response is pretty much exactly what you’d expect, including the cameos by Salma Hayek and Kaley Cuoco.

The only surprise was his lack of concern for accuracy. Quantum is Calling got the conjecture it was trying to popularize almost precisely backwards. I was expecting that to bother him, at least a little.

Should it bother you?

That depends on what you think Quantum is Calling is trying to do.

Science popularization, even good science popularization, tends to get things wrong. Some of that is inevitable, a result of translating complex concepts to a wider audience.

Sometimes, though, you can’t really chalk it up to translation. Interstellar had some extremely accurate visualizations of black holes, but it also had an extremely silly love-powered tesseract. That wasn’t their attempt to convey some subtle scientific truth, it was just meant to sound cool.

And the thing is, that’s not a bad thing to do. For a certain kind of piece, sounding cool really is the point.

Imagine being an explorer. You travel out into the wilderness and find a beautiful waterfall.

[Image: South Falls, Silver Falls State Park]

Example:

How do you tell people about it?

One option is the press. The news can cover your travels, so people can stay up to date with the latest in waterfall discoveries. In general, you’d prefer this sort of thing to be fairly accurate: the goal here is to inform people, to give them a better idea of the world around them.

Alternatively, you can advertise. You put signposts up around town pointing toward the waterfall, complete with vivid pictures. Here, accuracy matters a lot less: you’re trying to get people excited, knowing that as they get closer they can get more detailed information.

In science popularization, the “news” here isn’t just news. It’s also blog posts, press releases, and public lectures. It’s the part of science popularization that’s supposed to keep people informed, and it’s one that we hope is mostly accurate, at least as far as possible.

The “signposts”, meanwhile, are things like Interstellar. Their audience is as wide as it can possibly be, and we don’t expect them to get things right. They’re meant to excite people, to get them interested in science. The expectation is that a few students will find the imagery interesting enough to go further, at which point they can learn the full story and clear up any remaining misconceptions.

Quantum is Calling is pretty clearly meant to be a signpost. The inaccuracy is one way to tell, but it should be clear just from the context. We’re talking about a piece with Hollywood stars here. The relative stardom of Zoe Saldana and Keanu Reeves doesn’t matter; the presence of any mainstream film stars whatsoever means they’re going for the broadest possible audience.

(Of course, the fact that it’s set up to look like an official tie-in to the Star Trek films doesn’t hurt matters either.)

They’re also quite explicit about their goals. The piece’s predecessor has Keanu Reeves send a message back in time, with the goal of inspiring a generation of young scientists to build a future paradise. They’re not subtle about this.

Ok, so what’s the problem? Signposts are allowed to be inaccurate, so the inaccuracy shouldn’t matter. Eventually people will climb up to the waterfall and see it for themselves, right?

What if the waterfall isn’t there?

[Image: a dried-up waterfall at Wonder Mountain]

Like so:

The evidence for ER=EPR (the conjecture that Quantum is Calling is popularizing) isn’t like seeing a waterfall. It’s more like finding it via surveying. By looking at the slope of nearby terrain and following the rivers, you can get fairly confident that there should be a waterfall there, even if you can’t yet see it over the next ridge. You can then start sending scouts, laying in supplies, and getting ready for a push to the waterfall. You can alert the news, telling journalists of the magnificent waterfall you expect to find, so the public can appreciate the majesty of your achievement.

What you probably shouldn’t do is put up a sign for tourists.

As I hope I made clear in my last post, ER=EPR has some decent evidence. It hasn’t shown that it can handle “foot traffic”, though. The number of researchers working on it is still small. (For a fun but not especially rigorous exercise, try typing “ER=EPR” and “AdS/CFT” into physics database INSPIRE.) Conjectures at this stage are frequently successful, but they often fail, and ER=EPR still has a decent chance of doing so. Tying your inspiring signpost to something that may well not be there risks sending tourists up to an empty waterfall. They won’t come down happy.

As such, I’m fine with “news-style” popularizations of ER=EPR. And I’m fine with “signposts” for conjectures that have shown they can handle some foot traffic. (A piece that sends Zoe Saldana to the holodeck to learn about holography could be fun, for example.) But making this sort of high-profile signpost for ER=EPR feels irresponsible and premature. There will be plenty of time for a Star Trek tie-in to ER=EPR once it’s clear the idea is here to stay.

What’s in a Conjecture? An ER=EPR Example

A few weeks back, Caltech’s Institute of Quantum Information and Matter released a short film titled Quantum is Calling. It’s the second in what looks likely to become a series of pieces featuring Hollywood actors popularizing ideas in physics. The first used the game of Quantum Chess to talk about superposition and entanglement. This one, featuring Zoe Saldana, is about a conjecture by Juan Maldacena and Leonard Susskind called ER=EPR. The conjecture speculates that pairs of entangled particles (as investigated by Einstein, Podolsky, and Rosen) are in some sense secretly connected by wormholes (or Einstein-Rosen bridges).

The film is fun, but I’m not sure ER=EPR is established well enough to deserve this kind of treatment.

At this point, some of you are nodding your heads for the wrong reason. You’re thinking I’m saying this because ER=EPR is a conjecture.

I’m not saying that.

The fact of the matter is, conjectures play a very important role in theoretical physics, and “conjecture” covers a wide range. Some conjectures are supported by incredibly strong evidence, just short of mathematical proof. Others are wild speculations, of the “wouldn’t it be convenient if…” variety. ER=EPR is, well…somewhere in the middle.

Most popularizers don’t spend much effort distinguishing things in this middle ground. I’d like to talk a bit about the different sorts of evidence conjectures can have, using ER=EPR as an example.

[Image: an octopus-wormhole illustration]

Our friendly neighborhood space octopus

The first level of evidence is motivation.

At its weakest, motivation is the “wouldn’t it be convenient if…” line of reasoning. Some conjectures never get past this point. Hawking’s chronology protection conjecture, for instance, points out that physics (and to some extent logic) has a hard time dealing with time travel, and wouldn’t it be convenient if time travel was impossible?

For ER=EPR, this kind of motivation comes from the black hole firewall paradox. Without going into it in detail, arguments suggested that the event horizons of older black holes would resemble walls of fire, incinerating anything that fell in, in contrast with Einstein’s picture in which passing the horizon has no obvious effect at the time. ER=EPR provides one way to avoid this argument, making event horizons subtle and smooth once more.

Motivation isn’t just “wouldn’t it be convenient if…” though. It can also include stronger arguments: suggestive comparisons that, while they could be coincidental, when put together draw a stronger picture.

In ER=EPR, this comes from certain similarities between the type of wormhole Maldacena and Susskind were considering, and pairs of entangled particles. Both connect two different places, but both do so in an unusually limited way. The wormholes of ER=EPR are non-traversable: you cannot travel through them. Entangled particles can’t be traveled through (as you would expect), but more generally can’t be communicated through: there are theorems to prove it. This is the kind of suggestive similarity that can begin to motivate a conjecture.

(Amusingly, the plot of the film breaks this in both directions. Keanu Reeves can neither steal your cat through a wormhole, nor send you coded messages with entangled particles.)
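That last point, that entanglement can't carry messages, can even be checked numerically: whatever operation one party applies to their half of an entangled pair, the other half's statistics are unchanged. A minimal sketch with NumPy (the Bell state and the gates below are my own illustrative choices, not anything from the film):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), as a 4-component vector
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())          # two-qubit density matrix

def bobs_state(U):
    """Bob's reduced density matrix after Alice applies unitary U to her qubit."""
    full = np.kron(U, np.eye(2))         # U acts on Alice's qubit only
    rho2 = full @ rho @ full.conj().T
    # partial trace over Alice's qubit (sum over her repeated index)
    return np.einsum('abad->bd', rho2.reshape(2, 2, 2, 2))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
T = np.diag([1, np.exp(1j * np.pi / 4)])         # phase gate

# Whatever Alice does, Bob sees the maximally mixed state I/2:
for U in (np.eye(2), H, T, H @ T):
    assert np.allclose(bobs_state(U), np.eye(2) / 2)
print("Bob's state is I/2 for every choice Alice makes: no signal.")
```

Since Bob's half always looks like a fair coin no matter what Alice does, no choice of hers can encode a message, which is the content of the no-communication theorem.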

[Image: Dorian Gray’s portrait]

Nor live forever as the portrait in his attic withers away

Motivation is a good reason to investigate something, but a bad reason to believe it. Luckily, conjectures can have stronger forms of evidence. Many of the strongest conjectures are correspondences, supported by a wealth of non-trivial examples.

In science, the gold standard has always been experimental evidence. There’s a reason for that: when you do an experiment, you’re taking a risk. Doing an experiment gives reality a chance to prove you wrong. In a good experiment (a non-trivial one) the result isn’t obvious from the beginning, so that success or failure tells you something new about the universe.

In theoretical physics, there are things we can’t test with experiments, either because they’re far beyond our capabilities or because the claims are mathematical. Despite this, the overall philosophy of experiments is still relevant, especially when we’re studying a correspondence.

“Correspondence” is a word we use to refer to situations where two different theories are unexpectedly computing the same thing. Often, these are very different theories, living in different dimensions with different sorts of particles. With the right “dictionary”, though, you can translate between them, doing a calculation in one theory that matches a calculation in the other one.

Even when we can’t do non-trivial experiments, then, we can still have non-trivial examples. When the result of a calculation isn’t obvious from the beginning, showing that it matches on both sides of a correspondence takes the same sort of risk as doing an experiment, and gives the same sort of evidence.

Some of the best-supported conjectures in theoretical physics have this form. AdS/CFT is technically a conjecture: a correspondence between string theory in a hyperbola-shaped space and my favorite theory, N=4 super Yang-Mills. Despite being a conjecture, the wealth of nontrivial examples is so strong that it would be extremely surprising if it turned out to be false.

ER=EPR is also a correspondence, between entangled particles on the one hand and wormholes on the other. Does it have nontrivial examples?

Some, but not enough. Originally, it was based on one core example, an entangled state that could be cleanly matched to the simplest wormhole. Now, new examples have been added, covering wormholes with electric fields and higher spins. The full “dictionary” is still unclear, with some pairs of entangled particles being harder to describe in terms of wormholes. So while this kind of evidence is being built, it isn’t as solid as our best conjectures yet.

I’m fine with people popularizing this kind of conjecture. It deserves blog posts and press articles, and it’s a fine idea to have fun with. I wouldn’t be uncomfortable with the Bohemian Gravity guy doing a piece on it, for example. But for the second installment of a star-studded series like the one Caltech is doing…it’s not really there yet, and putting it there gives people the wrong idea.

I hope I’ve given you a better idea of the different types of conjectures, from the most fuzzy to those just shy of certain. I’d like to do this kind of piece more often, though in future I’ll probably stick with topics in my sub-field (where I actually know what I’m talking about 😉 ). If there’s a particular conjecture you’re curious about, ask in the comments!

A Tale of Two Archives

When it comes to articles about theoretical physics, I have a pet peeve, one made all the more annoying by the fact that it appears even in pieces that are otherwise well written. It involves the following disclaimer:

“This article has not been peer-reviewed.”

Here’s the thing: if you’re dealing with experiments, peer review is very important. Plenty of experiments have subtle problems with their methods, enough that it’s important to have a group of experts who can check them. In experimental fields, you really shouldn’t trust things that haven’t been through a journal yet: there’s just a lot that can go wrong.

In theoretical physics, though, peer review is important for different reasons. Most papers are mathematically rigorous enough that they’re not going to be wrong per se, and most of the ways they could be wrong won’t be caught by peer review. While peer review sometimes does catch mistakes, much more often it’s about assessing the significance of a result. Peer review determines whether a result gets into a prestigious journal or a less prestigious one, which in turn matters for job and grant applications.

As such, it doesn’t really make sense for a journalist to point out that a theoretical physics paper hasn’t been peer reviewed yet. If you think it’s important enough to write an article about, then you’ve already decided it’s significant: peer review wasn’t going to tell you anything else.

We physicists post our papers to arXiv, a free-to-access paper repository, before submitting them to journals. While arXiv does have some moderation, it’s not much: pretty much anyone in the field can post whatever they want.

This leaves a lot of people confused. In that sort of system, how do we know which papers to trust?

Let’s compare to another archive: Archive of Our Own, or AO3 for short.

Unlike arXiv, AO3 hosts not physics, but fanfiction. However, like arXiv it’s quite lightly moderated and free to access. On arXiv you want papers you can trust, on AO3 you want stories you enjoy. In each case, if anyone can post, how do you find them?

The first step is filtering. AO3 and arXiv both have systems of tags and subject headings. The headings on arXiv are simpler and more heavily moderated than those on AO3, but they both serve the purpose of letting people filter out the subjects, whether scientific or fictional, that they find interesting. If you’re interested in astrophysics, try astro-ph on arXiv. If you want Harry Potter fanfiction, try the “Harry Potter – J.K. Rowling” tag on AO3.
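For arXiv, this filtering can even be scripted. Here's a sketch using the public arXiv API; I'm assuming its `cat:` query syntax, with the `astro-ph.GA` category as an example:

```python
from urllib.parse import urlencode

# Build a query URL for the public arXiv API (export.arxiv.org/api).
# The cat: filter restricts results to one subject heading -- here
# galaxy-scale astrophysics, chosen as an example.
params = {
    "search_query": "cat:astro-ph.GA",
    "start": 0,
    "max_results": 5,
}
url = "http://export.arxiv.org/api/query?" + urlencode(params)
print(url)
# Fetching this URL returns an Atom feed of matching papers.
```

AO3's tag search works along similar lines, just with fandoms instead of subject classes.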

Beyond that, it helps to pay attention to authors. When an author has written something you like, it’s worth it not only to keep up with other things they write, but to see which other authors they like and pay attention to them as well. That’s true whether the author is Juan Maldacena or your favorite source of Twilight fanfic.

Even if you follow all of this, you can’t trust every paper you find on arXiv. You also won’t enjoy everything you dig up on AO3. Either way, publication (in journals or books) won’t solve your problem: both are an additional filter, but not an infallible one. Judgement is still necessary.

This is all to say that “this article has not been peer-reviewed” can be a useful warning, but often isn’t. In theoretical physics, knowing who wrote an article and what it’s about will often tell you much more than whether or not it’s been peer-reviewed yet.

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:

[Image: comic panel about putting complex numbers in our ontologies]

I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools to make precise statements, you’re handicapping yourself, making the problem harder than it needed to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

Words, Words, Words

If there’s one thing the Center for Communicating Science drummed into me at Stony Brook, it’s to be careful with words. You can teach your audience new words, but only a few: effectively, you have a vocabulary budget.

Sometimes, the risk is that your audience will misunderstand you. If you’re a biologist who talks about treating disease in a model, be careful: the public is more likely to think of mannequins than mice.

[Image: a medical training mannequin]

NOT what you’re talking about

Sometimes, though, the risk is subtler. Even if the audience understands you, you might still be using up your vocabulary budget.

Recently, Perimeter’s monthly Public Lecture was given by an expert on regenerative medicine. When talking about trying to heal eye tissue, she mentioned looking for a “pupillary response”.

Now, “pupillary response” isn’t exactly hard to decipher. It’s pretty clearly a response by the pupil of the eye. From there, you can think about how eyes respond to bright light, or to darkness, and have an idea of what she’s talking about.

So nobody is going to misunderstand “pupillary response”. Nonetheless, that chain of reasoning? It takes time, and it takes effort. People do have to stop and think, if only for a moment, to know what you mean.

That adds up. Every time your audience has to take a moment to think back and figure out what you just said? That eats into your vocabulary budget. Enough moments like that, and your audience won’t have the energy to follow what you’re saying: you’ll lose them.

The last few Public Lectures haven’t had as much online engagement as they used to. Lots of people still watch them, but fewer have been asking questions on Twitter, for example. I have a few guesses about why this is…but I wonder if this kind of thing is part of it. The last few speakers have been more free with technical terms, more lax with their vocabulary budget. I worry that, while people still show up for the experience, they aren’t going away with any understanding.

We don’t need to dumb things down to be understood. (Or not very much anyway.) We do need to be careful with our words. Use our vocabulary budget sparingly, and we can really teach people. Spend it too fast…and we lose them.

Wait, How Do Academics Make Money?

I’ve been working on submitting one of my papers to a journal, which reminded me of the existence of publication fees. That in turn reminded me of a conversation I saw on tumblr a while back:

[Image: screenshot of the tumblr conversation]

“beatonna” here is Kate Beaton, of the history-themed webcomic Hark! A Vagrant. She’s about as academia-adjacent as a non-academic gets, but even she thought that the academic database JSTOR paid academics for their contributions, presumably on some kind of royalty system.

In fact, academics don’t get paid by databases, journals, or anyone else that publishes or hosts our work. In the case of journals, we’re often the ones who pay publication fees. Those who write textbooks get royalties, but that’s about it on that front.

Kate Beaton’s confusion here is part of a more general confusion: in my experience, most people don’t know how academics are paid.

The first assumption is usually that we’re paid to teach. I can’t count the number of times I’ve heard people respond to a student of physics or math with the question “Oh, so you’re going to teach?”

This one is at least sort of true. Most academics work at universities, and usually have teaching duties. Often, part of an academic’s salary is explicitly related to teaching.

Still, it’s a bit misleading to think of academics as paid to teach: at a big research university, teaching often doesn’t get much emphasis. The extent to which the quality of teaching determines a professor’s funding or career prospects is often quite minimal. Academics teach, but their job isn’t “teacher”.

From there, the next assumption is the one Kate Beaton made. If academics aren’t paid to teach, are they paid to write?

Academia is often described as publish-or-perish, and research doesn’t really “count” until it’s made it to a journal. It would be reasonable to assume that academics are like writers, paid when someone buys our content. As mentioned, though, that’s just not how it works: if anything, sometimes we are the ones who pay the publishers!

It’s probably more accurate (though still not the full story) to say that academics are paid to research.

Research universities expect professors not only to teach, but to do novel and interesting research. Publications are important not because we get paid to write them, but because they give universities an idea of how productive we are. Promotions and the like, at least at research universities, are mostly based on those sorts of metrics.

Professors get some of their money from their universities, for teaching and research. The rest comes from grants. Usually, these come from governments, though private donors are a longstanding and increasingly important group. In both cases, someone decides that a certain general sort of research ought to be done and solicits applications from people interested in doing it. Different people apply with specific proposals, which are assessed with a wide range of esoteric criteria (but yes publications are important), and some people get funding. That funding includes not just equipment, but contributions to salaries as well. Academics really are, in many cases, paid by grants.

This is really pretty dramatically different from any other job. There’s no “customer” in the normal sense, and even the people in charge of paying us are more concerned that a certain sort of work be done than that they have control over it. It’s completely understandable that the public rounds that off to “teaching” or “writing”. It’s certainly more familiar.

 

What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time, now, our field has centered around particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for brand-new physics at such a collider isn’t as strong as it was for the LHC, there are properties of the Higgs that the LHC won’t be able to measure, things it’s important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and alter their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, shambling around in something that is already dead but can’t admit it.

[Image: a field of zombies, from Z Nation]

Well I had to use some Halloween imagery

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field, I haven’t seen it face these kinds of challenges before. And so, I worry.

“Maybe” Isn’t News

It’s been published in several places, but you’ve probably seen the recent headline claiming that the universe isn’t expanding at an accelerated rate after all.

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the ’90s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
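(If you’re curious where those “one in 3.5 million” and “one in a thousand” numbers come from, they’re the tail probabilities of a normal distribution. Here’s a quick sketch using only Python’s standard library; the numbers in the post correspond to one-sided tail probabilities.)

```python
import math

def tail_probability(sigma: float) -> float:
    """One-sided upper-tail probability of a standard normal distribution:
    the chance of a fluctuation at least `sigma` standard deviations
    above the mean, computed via the complementary error function."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# "Five sigma": about one chance in 3.5 million
print(f"5 sigma: 1 in {1 / tail_probability(5):,.0f}")

# "Three sigma": roughly one chance in a thousand (about 1 in 740)
print(f"3 sigma: 1 in {1 / tail_probability(3):,.0f}")
```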

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe were new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, and there’s nothing wrong with that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance, so they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

The Parable of the Entanglers and the Bootstrappers

There’s been some buzz around a recent Quanta article by K. C. Cole, The Strange Second Life of String Theory. I found it a bit too simplistic a take on the topic, so I thought I’d offer a different one.

String theory has been called the particle physicist’s approach to quantum gravity. Other approaches use the discovery of general relativity as a model: they’re looking for a big conceptual break from older theories. String theory, in contrast, starts out with a technical problem (naive quantum gravity calculations that give infinity), proposes physical objects that could solve the problem (strings, branes), and figures out which theories of these objects are consistent with existing data (originally the five superstring theories, now all understood as parts of M theory).

That approach worked. It didn’t work all the way, because regardless of whether there are indirect tests that can shed light on quantum gravity, particle physics-style tests are far beyond our capabilities. But in some sense, it went as far as it can: we’ve got a potential solution to the problem, and (apart from some controversy about the cosmological constant) it looks consistent with observations. Until actual evidence surfaces, that’s the end of that particular story.

When people talk about the failure of string theory, they’re usually talking about its aspirations as a “theory of everything”. String theory requires the world to have eleven dimensions, with seven curled up small enough that we can’t observe them. Different arrangements of those dimensions lead to different four-dimensional particles. For a time, it was thought that there would be only a few possible arrangements: few enough that people could find the one that describes the world and use it to predict undiscovered particles.

That particular dream didn’t work out. Instead, it became apparent that there were a truly vast number of different arrangements of dimensions, with no unique prediction likely to surface.

By the time I took my first string theory course in grad school, all of this was well established. I was entering a field shaped by these two facts: string theory’s success as a particle-physics style solution to quantum gravity, and its failure as a uniquely predictive theory of everything.

The quirky thing about science: sociologically, success and failure look pretty similar. Either way, it’s time to find a new project.

A colleague of mine recently said that we’re all either entanglers or bootstrappers. It was a joke, based on two massive grants from the Simons Foundation. But it’s also a good way to summarize two different ways string theory has moved on, from its success and from its failure.

The entanglers start from string theory’s success and say, what’s next?

As it turns out, a particle-physics style understanding of quantum gravity doesn’t tell you everything you need to know. Some of the big conceptual questions the more general relativity-esque approaches were interested in are still worth asking. Luckily, string theory provides tools to answer them.

Many of those answers come from AdS/CFT, the discovery that string theory in a particular warped space-time is dual (secretly the same theory) to a more particle-physics style theory on the edge of that space-time. With that discovery, people could start understanding properties of gravity in terms of properties of particle-physics style theories. They could use concepts like information, complexity, and quantum entanglement (hence “entanglers”) to ask deeper questions about the structure of space-time and the nature of black holes.

The bootstrappers, meanwhile, start from string theory’s failure and ask, what can we do with it?

Twisting up the dimensions of string theory yields a vast number of different arrangements of particles. Rather than viewing this as a problem, why not draw on it as a resource?

“Bootstrappers” explore this space of particle-physics style theories, using ones with interesting properties to find powerful calculation tricks. The name comes from the conformal bootstrap, a technique that finds conformal theories (roughly: theories that are the same at every scale) by “pulling itself up by its own bootstraps”, using nothing but a kind of self-consistency.

Many accounts, including Cole’s, trace the bootstrappers back to AdS/CFT as well, crediting it with inspiring string theorists to take a closer look at particle physics-style theories. That may be true in some cases, but I don’t think it’s the whole story: my subfield is bootstrappy, and while it has drawn on AdS/CFT, that wasn’t what got it started. Overall, I think it’s more the case that the tools of string theory’s “particle physics-esque approach”, like conformal theories and supersymmetry, ended up (perhaps unsurprisingly) useful for understanding particle physics-style theories.

Not everyone is a “bootstrapper” or an “entangler”, even in the broad sense I’m using the words. The two groups also sometimes overlap. Nevertheless, it’s a good way to think about what string theorists are doing these days. Both of these groups start out learning string theory: it’s the only way to learn about AdS/CFT, and it introduces the bootstrappers to a bunch of powerful particle physics tools all in one course. Where they go from there varies, and can be more or less “stringy”. But it’s research that wouldn’t have existed without string theory to get it started.