Tag Archives: science communication

What’s in a Conjecture? An ER=EPR Example

A few weeks back, Caltech’s Institute of Quantum Information and Matter released a short film titled Quantum is Calling. It’s the second in what looks likely to become a series of pieces featuring Hollywood actors popularizing ideas in physics. The first used the game of Quantum Chess to talk about superposition and entanglement. This one, featuring Zoe Saldana, is about a conjecture by Juan Maldacena and Leonard Susskind called ER=EPR. The conjecture speculates that pairs of entangled particles (as investigated by Einstein, Podolsky, and Rosen) are in some sense secretly connected by wormholes (or Einstein-Rosen bridges).

The film is fun, but I’m not sure ER=EPR is established well enough to deserve this kind of treatment.

At this point, some of you are nodding your heads for the wrong reason. You’re thinking I’m saying this because ER=EPR is a conjecture.

I’m not saying that.

The fact of the matter is, conjectures play a very important role in theoretical physics, and “conjecture” covers a wide range. Some conjectures are supported by incredibly strong evidence, just short of mathematical proof. Others are wild speculations, “wouldn’t it be convenient if…” ER=EPR is, well…somewhere in the middle.

Most popularizers don’t spend much effort distinguishing things in this middle ground. I’d like to talk a bit about the different sorts of evidence conjectures can have, using ER=EPR as an example.

Our friendly neighborhood space octopus

The first level of evidence is motivation.

At its weakest, motivation is the “wouldn’t it be convenient if…” line of reasoning. Some conjectures never get past this point. Hawking’s chronology protection conjecture, for instance, points out that physics (and to some extent logic) has a hard time dealing with time travel, and wouldn’t it be convenient if time travel was impossible?

For ER=EPR, this kind of motivation comes from the black hole firewall paradox. Without going into it in detail, arguments suggested that the event horizons of older black holes would resemble walls of fire, incinerating anything that fell in, in contrast with Einstein’s picture in which passing the horizon has no obvious effect at the time. ER=EPR provides one way to avoid this argument, making event horizons subtle and smooth once more.

Motivation isn’t just “wouldn’t it be convenient if…” though. It can also include stronger arguments: suggestive comparisons that, while they could be coincidental, when put together draw a stronger picture.

In ER=EPR, this comes from certain similarities between the type of wormhole Maldacena and Susskind were considering, and pairs of entangled particles. Both connect two different places, but both do so in an unusually limited way. The wormholes of ER=EPR are non-traversable: you cannot travel through them. Entangled particles can’t be traveled through (as you would expect), but more generally can’t be communicated through: there are theorems to prove it. This is the kind of suggestive similarity that can begin to motivate a conjecture.

(Amusingly, the plot of the film breaks this in both directions. Keanu Reeves can neither steal your cat through a wormhole, nor send you coded messages with entangled particles.)

Nor live forever as the portrait in his attic withers away

Motivation is a good reason to investigate something, but a bad reason to believe it. Luckily, conjectures can have stronger forms of evidence. Many of the strongest conjectures are correspondences, supported by a wealth of non-trivial examples.

In science, the gold standard has always been experimental evidence. There’s a reason for that: when you do an experiment, you’re taking a risk. Doing an experiment gives reality a chance to prove you wrong. In a good experiment (a non-trivial one) the result isn’t obvious from the beginning, so that success or failure tells you something new about the universe.

In theoretical physics, there are things we can’t test with experiments, either because they’re far beyond our capabilities or because the claims are mathematical. Despite this, the overall philosophy of experiments is still relevant, especially when we’re studying a correspondence.

“Correspondence” is a word we use to refer to situations where two different theories are unexpectedly computing the same thing. Often, these are very different theories, living in different dimensions with different sorts of particles. With the right “dictionary”, though, you can translate between them, doing a calculation in one theory that matches a calculation in the other one.

Even when we can’t do non-trivial experiments, then, we can still have non-trivial examples. When the result of a calculation isn’t obvious from the beginning, showing that it matches on both sides of a correspondence takes the same sort of risk as doing an experiment, and gives the same sort of evidence.

Some of the best-supported conjectures in theoretical physics have this form. AdS/CFT is technically a conjecture: a correspondence between string theory in a hyperbola-shaped space and my favorite theory, N=4 super Yang-Mills. Despite being a conjecture, the wealth of nontrivial examples is so strong that it would be extremely surprising if it turned out to be false.

ER=EPR is also a correspondence, between entangled particles on the one hand and wormholes on the other. Does it have nontrivial examples?

Some, but not enough. Originally, it was based on one core example, an entangled state that could be cleanly matched to the simplest wormhole. Now, new examples have been added, covering wormholes with electric fields and higher spins. The full “dictionary” is still unclear, with some pairs of entangled particles being harder to describe in terms of wormholes. So while this kind of evidence is being built, it isn’t as solid as our best conjectures yet.

I’m fine with people popularizing this kind of conjecture. It deserves blog posts and press articles, and it’s a fine idea to have fun with. I wouldn’t be uncomfortable with the Bohemian Gravity guy doing a piece on it, for example. But for the second installment of a star-studded series like the one Caltech is doing…it’s not really there yet, and putting it there gives people the wrong idea.

I hope I’ve given you a better idea of the different types of conjectures, from the most fuzzy to those just shy of certain. I’d like to do this kind of piece more often, though in future I’ll probably stick with topics in my sub-field (where I actually know what I’m talking about 😉 ). If there’s a particular conjecture you’re curious about, ask in the comments!

Have You Given Your Kids “The Talk”?

If you haven’t seen it yet, I recommend reading this delightful collaboration between Scott Aaronson (of Shtetl-Optimized) and Zach Weinersmith (of Saturday Morning Breakfast Cereal). As explanations of a concept beyond the standard popular accounts go, this one is pretty high quality, correcting some common misconceptions about quantum computing.

I especially liked the following exchange:

[Comic excerpt: the exchange about putting complex numbers in our ontologies]

I’ve complained before about people trying to apply ontology to physics, and I think this gets at the root of one of my objections.

People tend to think that the world should be describable with words. From that perspective, mathematics is just a particular tool, a system we’ve created. If you look at the world in that way, mathematics looks unreasonably effective: its ability to describe the real world seems like a miraculous coincidence.

Mathematics isn’t just one tool though, or just one system. It’s all of them: not just numbers and equations, but knots and logic and everything else. Deep down, mathematics is just a collection of all the ways we’ve found to state things precisely.

Because of that, it shouldn’t surprise you that we “put complex numbers in our ontologies”. Complex numbers are just one way we’ve found to make precise statements about the world, one that comes in handy when talking about quantum mechanics. There doesn’t need to be a “correct” description in words: the math is already stating things as precisely as we know how.
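
As a toy illustration of that last point (my own example, with made-up numbers, not something from the comic): in quantum mechanics you add complex amplitudes before squaring, which lets two possibilities cancel each other out, something a list of ordinary probabilities can’t express.

```python
import cmath

# Two ways for a particle to reach the same detector, each assigned a
# complex amplitude. The numbers are made up purely for illustration.
amp_path_1 = 1 / cmath.sqrt(2)
amp_path_2 = cmath.exp(1j * cmath.pi) / cmath.sqrt(2)  # same size, opposite phase

# Quantum rule: add the amplitudes first, then take the squared magnitude.
prob_both_paths = abs(amp_path_1 + amp_path_2) ** 2    # ~0: the two paths cancel

# Adding ordinary probabilities for each path can never produce that cancellation.
sum_of_probs = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2  # 1.0

print(prob_both_paths, sum_of_probs)
```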

That doesn’t mean that ontology is a useless project. It’s worthwhile to develop new ways of talking about things. I can understand the goal of building up a philosophical language powerful enough to describe the world in terms of words, and if such a language was successful it might well inspire us to ask new scientific questions.

But it’s crucial to remember that there’s real work to be done there. There’s no guarantee that the project will work, that words will end up sufficient. When you put aside our best tools to make precise statements, you’re handicapping yourself, making the problem harder than it needed to be. It’s your responsibility to make sure you’re getting something worthwhile out of it.

Words, Words, Words

If there’s one thing the Center for Communicating Science drummed into me at Stony Brook, it’s to be careful with words. You can teach your audience new words, but only a few: effectively, you have a vocabulary budget.

Sometimes, the risk is that your audience will misunderstand you. If you’re a biologist who talks about treating disease in a model, be careful: the public is more likely to think of mannequins than mice.

NOT what you’re talking about

Sometimes, though, the risk is subtler. Even if the audience understands you, you might still be using up your vocabulary budget.

Recently, Perimeter’s monthly Public Lecture was given by an expert on regenerative medicine. When talking about trying to heal eye tissue, she mentioned looking for a “pupillary response”.

Now, “pupillary response” isn’t exactly hard to decipher. It’s pretty clearly a response by the pupil of the eye. From there, you can think about how eyes respond to bright light, or to darkness, and have an idea of what she’s talking about.

So nobody is going to misunderstand “pupillary response”. Nonetheless, that chain of reasoning? It takes time, and it takes effort. People do have to stop and think, if only for a moment, to know what you mean.

That adds up. Every time your audience has to take a moment to think back and figure out what you just said? That eats into your vocabulary budget. Enough moments like that, and your audience won’t have the energy to follow what you’re saying: you’ll lose them.

The last few Public Lectures haven’t had as much online engagement as they used to. Lots of people still watch them, but fewer have been asking questions on twitter, for example. I have a few guesses about why this is…but I wonder if this kind of thing is part of it. The last few speakers have been more free with technical terms, more lax with their vocabulary budget. I worry that, while people still show up for the experience, they aren’t going away with any understanding.

We don’t need to dumb things down to be understood. (Or not very much anyway.) We do need to be careful with our words. Use our vocabulary budget sparingly, and we can really teach people. Spend it too fast…and we lose them.

“Maybe” Isn’t News

It’s been published in several places, but you’ve probably seen this headline:

If you’ve been following me for a while, you know where this is going:

No, these physicists haven’t actually shown that the Universe isn’t expanding at an accelerated rate.

What they did show is that the original type of data used to discover that the universe was accelerating back in the 90’s, measurements of supernovae, doesn’t live up to the rigorous standards that we physicists use to evaluate discoveries. We typically only call something a discovery if the evidence is good enough that, in a world where the discovery wasn’t actually true, we’d only have a one in 3.5 million chance of getting the same evidence (“five sigma” evidence). In their paper, Nielsen, Guffanti, and Sarkar argue that looking at a bigger collection of supernovae leads to a hazier picture: the chance that we could get the same evidence in a universe that isn’t accelerating is closer to one in a thousand, giving “three sigma” evidence.
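
If you want to check those conversions yourself, here’s a minimal sketch, assuming the usual one-sided Gaussian convention and that you have scipy available:

```python
from scipy.stats import norm

# Translate "sigma" levels into the chance of a fluctuation at least this large
# in a world where the effect isn't real (a one-sided tail probability).
for sigma in (3, 5):
    p = norm.sf(sigma)  # survival function: probability of fluctuating past sigma
    print(f"{sigma} sigma: roughly 1 in {1 / p:,.0f}")

# Prints roughly 1 in 740 for 3 sigma and roughly 1 in 3.5 million for 5 sigma.
```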

This might sound like statistical quibbling: one in a thousand is still pretty unlikely, after all. But a one in a thousand chance still happens once in a thousand times, and there’s a long history of three sigma evidence turning out to just be random noise. If the discovery of the accelerating universe was new, this would be an important objection, a reason to hold back and wait for more data before announcing a discovery.

The trouble is, the discovery isn’t new. In the twenty years since it was discovered that the universe was accelerating, people have built that discovery into the standard model of cosmology. They’ve used that model to make other predictions, explaining a wide range of other observations. People have built on the discovery, and their success in doing so is its own kind of evidence.

So the objection, that one source of evidence isn’t as strong as people thought, doesn’t kill cosmic acceleration. What it is is a “maybe”, showing that there is at least room in some of the data for a non-accelerating universe.

People publish “maybes” all the time, nothing bad about that. There’s a real debate to be had about how strong the evidence is, and how much it really establishes. (And there are already voices on the other side of that debate.)

But a “maybe” isn’t news. It just isn’t.

Science journalists (and university press offices) have a habit of trying to turn “maybes” into stories. I’ve lost track of the times I’ve seen ideas that were proposed a long time ago (technicolor, MOND, SUSY) get new headlines not for new evidence or new ideas, but just because they haven’t been ruled out yet. “SUSY hasn’t been ruled out yet” is an opinion piece, perhaps a worthwhile one, but it’s no news article.

The thing is, I can understand why journalists do this. So much of science is building on these kinds of “maybes”, working towards the tipping point where a “maybe” becomes a “yes” (or a “no”). And journalists (and university press offices, and to some extent the scientists themselves) can’t just take time off and wait for something legitimately newsworthy. They’ve got pages to fill and careers to advance, they need to say something.

I post once a week. As a consequence, a meaningful fraction of my posts are garbage. I’m sure that if I posted every day, most of my posts would be garbage.

Many science news sites post multiple times a day. They’ve got multiple writers, sure, and wider coverage…but they still don’t have the luxury of skipping a “maybe” when someone hands it to them.

I don’t know if there’s a way out of this. Maybe we need a new model for science journalism, something that doesn’t try to ape the pace of the rest of the news cycle. For the moment, though, it’s publish or perish, and that means lots and lots of “maybes”.

EDIT: More arguments against the paper in question, pointing out that they made some fairly dodgy assumptions.

EDIT: The paper’s authors respond here.

Ingredients of a Good Talk

It’s one of the hazards of physics that occasionally we have to attend talks about other people’s sub-fields.

Physics is a pretty heavily specialized field. It’s specialized enough that an otherwise perfectly reasonable talk can be totally incomprehensible to someone just a few sub-fields over.

I went to a talk this week on someone else’s sub-field, and was pleasantly surprised by how much I could follow. I thought I’d say a bit about what made it work.

In my experience, a good talk tells me why I should care, what was done, and what we know now.

Most talks start with a Motivation section, covering the why I should care part. If a talk doesn’t provide any motivation, it’s assuming that everyone finds the point of the research self-evident, and that’s a risky assumption.

Even for talks with a Motivation section, though, there’s a lot of variety. I’ve been to plenty of talks where the motivation presented is very sketchy: “this sort of thing is important in general, so we’re going to calculate one”. While that’s technically a motivation, all it does for an outsider is to tell them which sub-field you’re part of. Ideally, a motivation section does more: for a good talk, the motivation should not only say why you’re doing the work, but what question you’re asking and how your work can answer it.

The bulk of any talk covers what was done, but here there’s also varying quality. Bad talks often make it unclear how much was done by the presenter versus how much was done before. This is important not just to make sure the right people get credit, but because it can be hard to tell how much progress has been made. A good talk makes it clear not only what was done, but why it wasn’t done before. The whole point of a talk is to show off something new, so it should be clear what the new thing is.

If those two parts are done well, it becomes a lot easier to explain what we know now. If you’re clear on what question you were asking and what you did to answer it, then you’ve already framed things in those terms, and the rest is just summarizing. If not, you have to build it up from scratch, ending up with the important information packed in to the last few minutes.

This isn’t everything you need for a good talk, but it’s important, and far too many people neglect it. I’ll be giving a few talks next week, and I plan to keep this structure in mind.

Science Is a Collection of Projects, Not a Collection of Beliefs

Read a textbook, and you’ll be confronted by a set of beliefs about the world.

(If it’s a half-decent textbook, it will give justifications for those beliefs, and they will be true, putting you well on the way to knowledge.)

The same is true of most science popularization. In either case, you’ll be instructed that a certain set of statements about the world (or about math, or anything else) are true.

If most of your experience with science comes from popularizations and textbooks, you might think that all of science is like this. In particular, you might think of scientific controversies as matters of contrasting beliefs. Some scientists “believe in” supersymmetry, some don’t. Some “believe in” string theory, some don’t. Some “believe in” a multiverse, some don’t.

In practice, though, only settled science takes the form of beliefs. The rest, science as it is actually practiced, is better understood as a collection of projects.

Scientists spend most of their time working on projects. (Well, or procrastinating in my case.) Those projects, not our beliefs about the world, are how we influence other scientists, because projects build off each other. Any time we successfully do a calculation or make a measurement, we’re opening up new calculations and measurements for others to do. We all need to keep working and publishing, so anything that gives people something concrete to do is going to be influential.

The beliefs that matter come later. They come once projects have been so successful, and so widespread, that their success itself is evidence for beliefs. They’re the beliefs that serve as foundational assumptions for future projects. If you’re going to worry that some scientists are behaving unscientifically, these are the sorts of beliefs you want to worry about. Even then, things are often constrained by viable projects: in many fields, you can’t have a textbook without problem sets.

Far too many people seem to miss this distinction. I’ve seen philosophers focus on scientists’ public statements instead of their projects when trying to understand the implications of their science. I’ve seen bloggers and journalists who mostly describe conflicts of beliefs, what scientists expect and hope to be true rather than what they actually work on.

Do scientists have beliefs about controversial topics? Absolutely. Do those beliefs influence what they work on? Sure. But only so far as there’s actually something there to work on.

That’s why you see quite a few high-profile physicists endorsing some form of multiverse, but barely any actual journal articles about it. The belief in a multiverse may or may not be true, but regardless, there just isn’t much that one can do with the idea right now, and it’s what scientists are doing, not what they believe, that constitutes the health of science.

Different fields seem to understand this to different extents. I’m reminded of a story I heard in grad school, of two dueling psychologists. One of them believed that conversation was inherently cooperative, and showed that, unless unusually stressed or busy, people would put in the effort to understand the other person’s perspective. The other believed that conversation was inherently egocentric, and showed that the more stressed or busy people are, the more they assume that everyone else has the same perspective they do.

Strip off the “beliefs”, and these two worked on the exact same thing, with the same results. With their beliefs included, though, they were bitter rivals who bristled if their grad students so much as mentioned the other scientist.

We need to avoid this kind of mistake. The skills we have, the kind of work we do, these are important, these are part of science. The way we talk about it to reporters, the ideas we champion when we debate, those are sidelines. They have some influence, dragging people one way or another. But they’re not what science is, because on the front lines, science is about projects, not beliefs.

Physics Is about Legos

There’s a summer camp going on at Waterloo’s Institute for Quantum Computing called QCSYS, the Quantum Cryptography School for Young Students. A lot of these kids are interested in physics in general, not just quantum computing, so they give them a tour of Perimeter. While they’re here, they get a talk from a local postdoc, and this year that postdoc was me.

There’s an image that Perimeter has tossed around a lot recently, All Known Physics in One Equation. This article has an example from a talk given by Neil Turok. I thought it would be fun to explain that equation in terms a (bright, recently taught about quantum mechanics) high school student could understand. To do that, I’d have to explain what the equation is made of: spinors and vectors and tensors and the like.

The last time I had to explain that kind of thing here, I used a video game metaphor. For this talk, I came up with a better metaphor: legos.

Vectors are legos. Spinors are legos. Tensors are legos. They’re legos because they can be connected up together, but only in certain ways. Their “bumps” have to line up properly. And their nature as legos determines what you can build with them.

If you’re interested, here’s my presentation. Experts be warned: there’s a handwaving warning early in this talk, and it applies to a lot of it. In particular, the discussion of gauge group indices leaves out a lot. My goal in this talk was to give a vague idea of what the Standard Model Lagrangian is “made of”, and from the questions I got I think I succeeded.

The (but I’m Not a) Crackpot Style Guide

Ok, ok, I believe you. You’re not a crackpot. You’re just an outsider, one with a brilliant new idea that would overturn the accepted paradigms of physics, if only someone would just listen.

Here’s the problem: you’re not alone. There are plenty of actual crackpots. We get contacted by them fairly regularly. And most of the time, they’re frustrating and unpleasant to deal with.

If you want physicists to listen to you, you need to show us you’re not one of those people. Otherwise, most of us won’t bother.

I can’t give you a foolproof way to do that. But I can give some suggestions that will hopefully make the process a little less frustrating for everyone involved.

Don’t spam:

Nobody likes spam. Nobody reads spam. If you send a mass email to every physicist whose email address you can find, none of them will read it. If you repeatedly post the same thing in a comment thread, nobody will read it. If you want people to listen to you, you have to show that you care about what they have to say, and in order to do that you have to tailor your message. This leads into the next point,

Ask the right people:

Before you start reaching out, you should try to get an idea of who to talk to. Physics is quite specialized, so if you’re taking your ideas seriously you should try to contact people with a relevant specialization.

Now, I know what you’re thinking: your ideas are unique, no-one in physics is working on anything similar.

Here, it’s important to distinguish the problem you’re trying to solve from how you’re trying to solve it. Chances are, no-one else is working on your specific idea…but plenty of people are interested in the same problems.

Think quantum mechanics is built on shoddy assumptions? There are people who spend their lives trying to modify quantum mechanics. Have a beef against general relativity? There’s a whole sub-field of people who modify gravity.

These people are a valuable resource for you, because they know what doesn’t work. They’ve been trying to change the system, and they know just how hard it is to change, and just what evidence you need to be consistent with.

Contacting someone whose work just uses quantum mechanics or relativity won’t work. If you’re making elementary mistakes, we can put you on the right track…but if you think you’re making elementary mistakes, you should start out by asking for help from a forum or the like, not contacting a professional. If you think you’ve really got a viable replacement for an established idea, you need to contact people who work on overturning established ideas, since they’re most aware of the complicated webs of implications involved. Relatedly,

Take ownership of your work:

I don’t know how many times someone has “corrected” something in the comments, and several posts later admitted that the “correction” comes from their own theory. If you’re arguing from your own work, own it! If you don’t, people will assume you’re trying to argue from an established theory, and are just confused about how that theory works. This is a special case of a broader principle,

Epistemic humility:

I’m not saying you need to be humble in general, but if you want to talk productively you need to be epistemically humble. That means being clear about why you know what you know. Did you get it from a mathematical proof? A philosophical argument? Reading pop science pieces? Something you remember from high school? Being clear about your sources makes it easier for people to figure out where you’re coming from, and avoids putting your foot in your mouth if it turns out your source is incomplete.

Context is crucial:

If you’re commenting on a blog like this one, pay attention to context. Your comment needs to be relevant enough that people won’t parse it as spam.

If all a post does is mention something like string theory, crowing about how your theory is a better explanation for quantum gravity isn’t relevant. Ditto for if all it does is mention a scientific concept that you think is mistaken.

What if the post is promoting something that you’ve found to be incorrect, though? What if someone is wrong on the internet?

In that case, it’s important to keep in mind the above principles. A popularization piece will usually try to present the establishment view, and merits a different response than a scientific piece arguing something new. In both cases, own your own ideas and be specific about how you know what you know. Be clear on whether you’re talking about something that’s controversial, or something that’s broadly agreed on.

You can get an idea of what works and what doesn’t by looking at comments on this blog. When I post about dark matter, or cosmic inflation, there are people who object, and the best ones are straightforward about why. Rather than opening with “you’re wrong”, they point out which ideas are controversial. They’re specific about whose ideas they’re referencing, and are clear about what is pedagogy and what is science.

Those comments tend to get much better responses than the ones that begin with cryptic condemnations, follow with links, and make absolute statements without backing them up.

On the internet, it’s easy for misunderstandings to devolve into arguments. Want to avoid that? Be direct, be clear, be relevant.

Fun with Misunderstandings

Perimeter had its last Public Lecture of the season this week, with Mario Livio giving some highlights from his book Brilliant Blunders. The lecture should be accessible online, either here or on Perimeter’s YouTube page.

These lectures tend to attract a crowd of curious science-fans. To give them something to do while they’re waiting, a few local researchers walk around with T-shirts that say “Ask me, I’m a scientist!” Sometimes we get questions about the upcoming lecture, but more often people just ask us what they’re curious about.

Long-time readers will know that I find this one of the most fun parts of the job. In particular, there’s a unique challenge in figuring out just why someone asked a question. Often, there’s a hidden misunderstanding they haven’t recognized.

The fun thing about these misunderstandings is that they usually make sense, provided you’re working from the person in question’s sources. They heard a bit of this and a bit of that, and they come to the most reasonable conclusion they can given what’s available. For those of us who have heard a more complete story, this often leads to misunderstandings we would never have thought of, but that in retrospect are completely understandable.

One of the simpler ones I ran into was someone who was confused by people claiming that we were running out of water. How could there be a water shortage, he asked, if the Earth is basically a closed system? Where could the water go?

The answer is that when people are talking about a water shortage, they’re not talking about water itself running out. Rather, they’re talking about a lack of safe drinking water. Maybe the water is polluted, or stuck in the ocean without expensive desalinization. This seems like the sort of thing that would be extremely obvious, but if you just hear people complaining that water is running out without the right context then you might just not end up hearing that part of the story.

A more involved question had to do with time dilation in general relativity. The guy had heard that atomic clocks run faster if you’re higher up, and that this was because time itself runs faster in lower gravity.

Given that, he asked, what happens if someone travels to an area of low gravity and then comes back? If more time has passed for them, then they’d be in the future, so wouldn’t they be at the “wrong time” compared to other people? Would they even be able to interact with them?

This guy’s misunderstanding came from hearing what happens, but not why. While he got that time passes faster in lower gravity, he was still thinking of time as universal: there is some past, and some future, and if time passes faster for one person and slower for another that just means that one person is “skipping ahead” into the other person’s future.

What he was missing was the explanation that time dilation comes from space and time bending. Rather than “skipping ahead”, a person for whom time passes faster just experiences more time getting to the same place, because they’re traveling on a curved path through space-time.

As usual, this is easier to visualize in space than in time. I ended up drawing a picture like this:

[Sketch: two concentric circles, with B on the inner circle and A farther out on the outer one]

Imagine person A and person B live on a circle. If person B stays the same distance from the center while person A goes out further, they can both travel the same angle around the circle and end up in the same place, but A will have traveled further, even ignoring the trips up and down.
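
Here’s the same point as a few lines of arithmetic; the radii and angle are made up purely for illustration:

```python
import math

# A and B sweep through the same angle around the center, but at different radii.
angle = math.pi / 2      # a quarter turn for both of them
radius_B = 1.0           # B stays close to the center
radius_A = 3.0           # A first moves farther out

path_B = radius_B * angle  # distance B covers along the inner circle
path_A = radius_A * angle  # distance A covers, ignoring the trips out and back

print(path_A, path_B)  # A travels three times as far to reach the same meeting point
```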

What’s completely intuitive in space ends up quite a bit harder to visualize in time. But if you at least know what you’re trying to think about, that there’s bending involved, then it’s easier to avoid this guy’s kind of misunderstanding. Run into the wrong account, though, and even if it’s perfectly correct (this guy had heard some of Hawking’s popularization work on the subject), you can come away with the wrong impression if it doesn’t emphasize the right aspects.

Misunderstandings are interesting because they reveal how people learn. They’re windows into different thought processes, into what happens when you only have partial evidence. And because of that, they’re one of the most fascinating parts of science popularization.

Particles Aren’t Vibrations (at Least, Not the Ones You Think)

You’ve probably heard this story before, likely from Brian Greene.

In string theory, the fundamental particles of nature are actually short lengths of string. These strings can vibrate, and like a string on a violin, that vibration is arranged into harmonics. The more energy in the string, the more complex the vibration. In string theory, each of these vibrations corresponds to a different particle, explaining how the zoo of particles we observe can come out of a single type of fundamental string.

Particles. Probably.

It’s a nice story. It’s even partly true. But it gives a completely wrong idea of where the particles we’re used to come from.

Making a string vibrate takes energy, and that energy is determined by the tension of the string. It’s a lot harder to wiggle a thick rubber band than a thin one, if you’re holding both tightly.

String theory’s strings are under a lot of tension, so it takes a lot of energy to make them vibrate. From our perspective, that energy looks like mass, so the more complicated harmonics on a string correspond to extremely massive particles, close to the Planck mass!
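
Schematically, and sweeping conventions and numerical factors under the rug, the mass of the N-th harmonic looks something like this (alpha-prime is the standard symbol for the inverse string tension):

```latex
m_N^2 \;\sim\; \frac{N}{\alpha'}, \qquad \text{with } \frac{1}{\sqrt{\alpha'}} \text{ (the string scale) traditionally taken near the Planck mass}
```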

Those aren’t the particles you’re used to. They’re not electrons, they’re not dark matter. They’re particles we haven’t observed, and may never observe. They’re not how string theory explains the fundamental particles of nature.

So how does string theory go from one fundamental type of string to all of the particles in the universe, if not through these vibrations? As it turns out, there are several different ways it can happen, tricks that allow the lightest and simplest vibrations to give us all the particles we’ve observed.* I’ll describe a few.

The first and most important trick here is supersymmetry. Supersymmetry relates different types of particles to each other. In string theory, it means that along with vibrations that go higher and higher, there are also low-energy vibrations that behave like different sorts of particles. In a sense, string theory sticks a quantum field theory inside another quantum field theory, in a way that would make Xzibit proud.

Even with supersymmetry, string theory doesn’t give rise to all of the right sorts of particles. You need something else, like compactifications or branes.

The strings of string theory live in ten dimensions; it’s the only place they’re mathematically consistent. Since our world looks four-dimensional, something has to happen to the other six dimensions. They have to be curled up, in a process called compactification. There are lots and lots (and lots) of ways to do this compactification, and different ways of curling up the extra dimensions give different places for strings to move. These new options make the strings look different in our four-dimensional world: a string curled around a donut hole looks very different from one that moves freely. Each new way the string can move or vibrate can give rise to a new particle.
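
For the simplest toy example, a single extra dimension curled into a circle of radius R, the new options show up directly in a schematic mass formula (factors and conventions omitted): n counts units of momentum around the circle, w counts how many times the string winds around it, and each choice of n and w behaves like a distinct particle in the remaining dimensions.

```latex
m_{n,w}^2 \;\sim\; \left(\frac{n}{R}\right)^2 + \left(\frac{w\,R}{\alpha'}\right)^2 + \text{(oscillator contributions)}
```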

Another option to introduce diversity in particles is to use branes. Branes (short for membranes) are surfaces that strings can end on. If two strings end on the same brane, those ends can meet up and interact. If they end on different branes though, then they can’t. By cleverly arranging branes, then, you can have different sets of strings that interact with each other in different ways, reproducing the different interactions of the particles we’re familiar with.

In string theory, the particles we’re used to aren’t just higher harmonics, or vibrations with more and more energy. They come from supersymmetry, from compactifications and from branes. The higher harmonics are still important: there are theorems showing that you can’t fix quantum gravity with a finite number of extra particles, so the infinite tower of vibrations allows string theory to exploit a key loophole. They just don’t happen to be how string theory gets the particles of the Standard Model. The idea that every particle is just a higher vibration is a common misconception, and I hope I’ve given you a better idea of how string theory actually works.

 

*But aren’t these lightest vibrations still close to the Planck mass? Nope! See the discussion with TE in the comments for details.