Tag Archives: PublicPerception

Why we Physics

There are a lot of good reasons to study theories in theoretical physics, even the ones that aren’t true. They teach us how to do calculations in other theories, including those that do describe reality, which lets us find out fundamental facts about nature. They let us hone our techniques, developing novel methods that often find use later, in some cases even yielding spinoff technology. (Mathematica came out of the theoretical physics community, while experimental high energy physics led to the birth of the modern internet.)

Of course, none of this is why physicists actually do physics. Sure, Nima Arkani-Hamed might need to tell himself that space-time is doomed in order to get up in the morning, but for a lot of us, it isn’t about proving any wide-ranging point about the universe. It’s not even all about the awesome, as some would have it: most of what we do on a day-to-day basis isn’t especially awesome. It goes a bit deeper than that.

Science, in the end, is about solving puzzles. And solving puzzles is immensely satisfying, on a deep, fundamental level.

There’s a unique feeling that you get when all the pieces come together, when you’re calculating something and everything cancels and you’re left with a simple answer, and for some people that’s the best thing in existence.

It’s especially true when you’re working with an ansatz or using some other method where you fix parameters and fill in uncertainties, one by one. You can see how close you are to the answer, which means each step gives you that little thrill of getting just that much closer. One of my colleagues describes the calculations he does in supergravity as not tedious but “delightful” for precisely this reason: a calculation where every step puts another piece in the right place just feels good.

Theoretical physicists are the kind of people who would get a Lego set for their birthday, build it up to completion, and then never play with it again (unless it was to take it apart and make something else). We do it for the pure joy of seeing something come together and become complete. Save what it’s “for” for the grant committees, we’ve got a different rush in mind.

Editors, Please Stop Misquoting Hawking

If you’ve been following science news recently, you’ve probably heard the apparently alarming news that Stephen Hawking has turned his back on black holes, or that black holes can actually be escaped, or…how about I just show you some headlines:

[Headline screenshots from Fox News, Nature, and Yahoo]

Now, Hawking didn’t actually say that black holes don’t exist, but while there are a few good pieces on the topic, in many cases the real message has gotten lost in the noise.

From Hawking’s paper:

[Screenshot of the relevant passage from Hawking’s paper]

What Hawking is proposing is that the “event horizon” around a black hole, rather than being an absolute permanent boundary from which nothing can escape, is a more temporary “apparent” horizon, the properties of which he goes on to describe in detail.

Why is he proposing this? It all has to do with the debate over black hole firewalls.

Starting with a paper by Polchinski and colleagues a year and a half ago, the black hole firewall paradox centers on contradictory predictions from general relativity and quantum mechanics. General relativity predicts that an astronaut falling past a black hole’s event horizon will notice nothing particularly odd about the surrounding space, but that once past the event horizon none of the “information” that specifies the astronaut’s properties can escape to the outside world. Quantum mechanics on the other hand predicts that information cannot be truly lost. The combination appears to suggest something radical, a “firewall” of high energy radiation around the event horizon carrying information from everything that fell into the black hole in the past, so powerful that it would burn our hypothetical astronaut to a crisp.

Since then, a wide variety of people have made one proposal or another, either attempting to avoid the seemingly preposterous firewall or to justify and further explain it. The reason the debate is so popular is because it touches on some of the fundamental principles of quantum mechanics.

Now, as I have pointed out before, I’m not a good person to ask about the fundamental principles of quantum mechanics. (Incidentally, I’d love it if some of the more quantum information or general relativity-focused bloggers would take a more substantial crack at this! Carroll, Preskill, anyone?) What I can talk about, though, is hype.

All of the headlines I listed take Hawking’s quote out of context, but not all of the articles do. The problem isn’t so much the journalists as the editors.

One of an editor’s responsibilities is to take articles and give them titles that draw in readers. The editor wants a title that will get people excited, make them curious, and most importantly, get them to click. While a journalist won’t have any particular incentive to improve ad revenue, the same cannot be said for an editor. Thus, editors will often rephrase the title of an article in a way that makes the whole story seem more shocking.

Now that, in itself, isn’t a problem. I’ve used titles like that myself. The problem comes when the title isn’t just shocking, but misleading.

When I call astrophysics “impossible”, nobody is going to think I mean it literally. The title is petulant and ridiculous enough that no-one would take it at face value, but still odd enough to make people curious. By contrast, when you say that Hawking has “changed his mind” about black holes or said that “black holes do not exist”, there are people who will take that at face value as supporting their existing beliefs, as the Borowitz Report humorously points out. These people will go off thinking that Hawking really has given up on black holes. If the title confirms their beliefs enough, people might not even bother to read the article. Thus, by using an actively misleading title, you may actually be decreasing clicks!

It’s not that hard to write a title that’s both enough of a hook to draw people in and won’t mislead. Editors of the world, you’re well-trained writers, certainly much better than me. I’m sure you can manage it.

There really is some interesting news here, if people had bothered to look into it. The firewall debate has been going on for a year and a half, and while Hawking isn’t the universal genius the media occasionally depicts he’s still the world’s foremost expert on the quantum properties of black holes. Why did he take so long to weigh in? Is what he’s proposing even particularly new? I seem to remember people discussing eliminating the horizon in one way or another (even “naked” singularities) much earlier in the firewall debate…what makes Hawking’s proposal novel and different?

This is the sort of thing you can use to draw in interest, editors of the world. Don’t just write titles that cause ignorant people to roll their eyes and move on, instead, get people curious about what’s really going on in science! More ad revenue for you, more science awareness for us, sounds like a win-win!

How (Not) to Sum the Natural Numbers: Zeta Function Regularization

1+2+3+4+5+6+\ldots=-\frac{1}{12}

If you follow Numberphile on YouTube or Bad Astronomy on Slate you’ve already seen this counter-intuitive sum written out. Similarly, if you follow those people or Scientopia’s Good Math, Bad Math, you’re aware that the way that sum was presented by Numberphile in that video was seriously flawed.

There is a real sense in which adding up all of the natural numbers (the numbers 1, 2, 3…) really does give you minus one twelfth, despite all the reasons this should be impossible. However, there is also a real sense in which it does not, and cannot, do any such thing. To explain this, I’m going to introduce two concepts: complex analysis and regularization.

This discussion is not going to be mathematically rigorous, but it should give an authentic and accurate view of where these results come from. If you’re interested in the full mathematical details, a later discussion by Numberphile should help, and the mathematically confident should read Terence Tao’s treatment from back in 2010.

With that said, let’s talk about sums! Well, one sum in particular:

\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+\frac{1}{6^s}+\ldots = \zeta(s)

If s is greater than one, then each term in this infinite sum gets smaller and smaller fast enough that you can add them all up and get a number. That number is referred to as \zeta(s), the Riemann Zeta Function.

So what if s is smaller than one?

The infinite sum that I described doesn’t converge for s less than one. Add it up in any reasonable way, and it just approaches infinity. Put another way, the sum is not properly defined. But despite this, \zeta(s) is not infinite for s less than one!
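Both behaviors are easy to see numerically. Here’s a quick sketch in Python (the particular cutoffs, 1000 and 100000 terms, are just arbitrary choices for illustration):

```python
def partial_zeta(s, terms):
    """Partial sum 1/1^s + 1/2^s + ... + 1/terms^s."""
    return sum(n ** (-s) for n in range(1, terms + 1))

# s = 2: the partial sums settle down near pi^2/6 = 1.6449...
print(partial_zeta(2.0, 1000), partial_zeta(2.0, 100000))

# s = 0.5: the partial sums just keep growing without bound
print(partial_zeta(0.5, 1000), partial_zeta(0.5, 100000))
```

For s = 2 the two numbers agree to several decimal places; for s = 0.5 the second is about ten times the first, and it only gets worse the further you go.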

Now, you might object: we only defined the Riemann Zeta Function for s greater than one. How do we know anything at all about it for s less than one?

That is where complex analysis comes in. Complex analysis sounds like a made-up term for something unreasonably complicated, but it’s quite a bit more approachable when you know what it means. Analysis is the type of mathematics that deals with functions, infinite series, and the basis of calculus. It’s often contrasted with Algebra, which usually considers mathematical concepts that are discrete rather than smooth (this definition is a huge simplification, but it’s not very relevant to this post). Complex means that complex analysis deals with functions, not of everyday real numbers, but of complex numbers, or numbers with an imaginary part.

So what does complex analysis say about the Riemann Zeta Function?

One of the most impressive results of complex analysis is the discovery that if a function of a complex number is sufficiently smooth (the technical term is analytic) then it is very highly constrained. In particular, if you know how the function behaves over an area (technical term: open set), then you know how it behaves everywhere else!

If you’re expecting me to explain why this is true, you’ll be disappointed. This is serious mathematics, and serious mathematics isn’t the sort of thing you can give the derivation for in a few lines. It takes as much effort and knowledge to replicate a mathematical result as it does to replicate many lab results in science.

What I can tell you is that this sort of approach crops up in many places, and is part of a general theme. There is a lot you can tell about a mathematical function just by looking at its behavior in some limited area, because mathematics is often much more constrained than it appears. It’s the same sort of principle behind the work I’ve been doing recently.
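The simplest example of this rigidity doesn’t involve the zeta function at all, so take it as an analogy rather than the actual construction. The geometric series

1+x+x^2+x^3+\ldots = \frac{1}{1-x}

only converges when x is between -1 and 1, but the right-hand side makes sense for any x other than 1, and it is the unique analytic function agreeing with the series wherever the series converges. Plug in x=2, and the continuation “assigns” the divergent sum 1+2+4+8+\ldots the value -1, in the same spirit as what happens with the zeta function below.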

In the case of the Riemann Zeta Function, we have a definition for s greater than one. As it turns out, this definition still works if s is a complex number, as long as the real part of s is greater than one. Since this gives us the value of the Riemann Zeta Function over a large area (half of the complex plane), complex analysis tells us its value at every other number. In particular, it tells us this:

\zeta(-1)= -\frac{1}{12}

If the Riemann Zeta Function is consistently defined for every complex number, then it must have this value when s is minus one.
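You can even compute the continued value directly. There are several series for the zeta function that converge for every s other than 1; here’s a sketch in Python using one of them, a globally convergent series due to Hasse (one choice among several, picked because it fits in a few lines):

```python
from math import comb

def zeta(s, terms=40):
    """Riemann zeta function via Hasse's globally convergent series,
    valid for any s != 1 (not just s > 1)."""
    total = 0.0
    for n in range(terms):
        # n-th forward difference of (k+1)^(-s), weighted by 2^-(n+1)
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta(2))   # ~1.6449, i.e. pi^2/6, matching the convergent sum
print(zeta(-1))  # ~-0.0833, i.e. -1/12
```

At s = 2 it reproduces the honest convergent sum; at s = -1 it gives -1/12, without ever adding up 1+2+3+… .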

If we still trusted the sum definition for this value of s, we could plug in -1 and get

 1+2+3+4+5+6+\ldots=-\frac{1}{12}

Does that make this statement true? Sort of. It all boils down to a concept from physics called regularization.

In physics, we know that in general there is no such thing as infinity. With a few exceptions, nothing in nature should be infinite, and finite evidence (without mathematical trickery) should never lead us to an infinite conclusion.

Despite this, occasionally calculations in physics will give infinite results. Almost always, this is evidence that we are doing something wrong: we are not thinking hard enough about what’s really going on, or there is something we don’t know or aren’t taking into account.

Doing physics research isn’t like taking a physics class: sometimes, nobody knows how to do the problem correctly! In many cases where we find infinities, we don’t know enough about “what’s really going on” to correct them. That’s where regularization comes in handy.

Regularization is the process by which an infinite result is replaced with a finite result (made “regular”), in a way so that it keeps the same properties. These finite results can then be used to do calculations and make predictions, and so long as the final predictions are regularization independent (that is, the same if you had done a different regularization trick instead) then they are legitimate.
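As a toy illustration of regularization independence (my own example, not one drawn from string theory): damp the divergent sum 1+2+3+… with a smooth cutoff e^{-n\epsilon}. The damped sum splits into a divergent 1/\epsilon^2 piece plus a finite piece, and that finite piece is -1/12, the same answer zeta function regularization gives.

```python
import math

def damped_sum(eps, terms=10000):
    """1*e^-eps + 2*e^-2eps + 3*e^-3eps + ...: the divergent sum
    1+2+3+... tamed by a smooth exponential cutoff."""
    return sum(n * math.exp(-n * eps) for n in range(1, terms + 1))

eps = 0.01
# Subtract the divergent 1/eps^2 piece; what's left approaches -1/12
finite_part = damped_sum(eps) - 1 / eps ** 2
print(finite_part)  # ~-0.0833, approaching -1/12 as eps -> 0
```

Two completely different procedures, one finite answer: that agreement is what “regularization independent” means in practice.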

In string theory, one way to compute the required dimensions of space and time ends up giving you an infinite sum, a sum that goes 1+2+3+4+5+…. In context, this result is obviously wrong, so we regularize it. In particular, we say that what we’re really calculating is the Riemann Zeta Function, which we happen to be evaluating at -1. Then we replace 1+2+3+4+5+… with -1/12.

Now remember when I said that getting infinities is a sign that you’re doing something wrong? These days, we have a more rigorous way to do this same calculation in string theory, one that never forces us to take an infinite sum. As expected, it gives the same result as the old method, showing that the old calculation was indeed regularization independent.

Sometimes we don’t have a better way of doing the calculation, and that’s when regularization techniques come in most handy. A particular family of tricks called renormalization is quite important, and I’ll almost certainly discuss it in a future post.

So can you really add up all the natural numbers and get -1/12? No. But if a calculation tells you to add up all the natural numbers, and it’s obvious that the result can’t be infinite, then it may secretly be asking you to calculate the Riemann Zeta Function at -1. And that, as we know from complex analysis, is indeed -1/12.

Astrophysics, the Impossible Science

Last week, Nobel Laureate Martinus Veltman gave a talk at the Simons Center. After the talk, a number of people asked him questions about several things he didn’t know much about, including supersymmetry and dark matter. After deflecting a few such questions, he proceeded to go on a brief rant against astrophysics, professing suspicion of the field’s inability to do experiments and making fun of an astrophysicist colleague’s imprecise data. The rant was a rather memorable feat of curmudgeonliness, and apparently typical Veltman behavior. It left several of my astrophysicist friends fuming. For my part, it inspired me to write a positive piece on astrophysics, highlighting something I don’t think is brought up enough.

The thing about astrophysics, see, is that astrophysics is impossible.

Imagine, if you will, an astrophysical object. As an example, picture a black hole swallowing a star.

Are you picturing it?

Now think about where you’re looking from. Chances are, you’re at some point up above the black hole, watching the star swirl around, seeing something like this:

Where are you in this situation? On a spaceship? Looking through a camera on some probe?

Astrophysicists don’t have spaceships that can go visit black holes. Even the longest-ranging probes have barely left the solar system. If an astrophysicist wants to study a black hole swallowing a star, they can’t just look at a view like that. Instead, they look at something like this:

The image on the right is an artist’s idea of what a black hole looks like. The three on the left? They’re what the astrophysicist actually sees. And even that is cleaned up a bit; the raw output can be even more opaque.

A black hole swallowing a star? Just a few blobs of light, pixels on screen. You can measure brightness and dimness, filter by color from gamma rays to radio waves, and watch how things change with time. You don’t even get a whole lot of pixels for distant objects. You can’t do experiments, either, you just have to wait for something interesting to happen and try to learn from the results.

It’s like staring at the static on a TV screen, day after day, looking for patterns, until you map out worlds and chart out new laws of physics and infer a space orders of magnitude larger than anything anyone’s ever experienced.

And naively, that’s just completely and utterly impossible.

And yet…and yet…and yet…it works!

Crazy people staring at a screen can’t successfully make predictions about what another part of the screen will look like. They can’t compare results and hone their findings. They can’t demonstrate principles (like General Relativity) that change technology here on Earth. Astrophysics builds on itself, discovery by discovery, in a way that can only be explained by accepting that it really does work (a theme that I’ve had occasion to harp on before).

Physics began with astrophysics. Trying to explain the motion of dots in a telescope and objects on the ground with the same rules led to everything we now know about the world. Astrophysics is hard, arguably impossible…but impossible or not, there are people who spend their lives successfully making it work.

The Amplituhedron and Other Excellently Silly Words

Nima Arkani-Hamed recently gave a talk at the Simons Center on the topic of what he and Jaroslav Trnka are calling the Amplituhedron.

There’s an article on it in Quanta Magazine. The article starts out a bit hype-y for my taste (too much language of importance, essentially), but it has several very solid descriptions of the history of the situation. I particularly like how the author concisely describes the Feynman diagram picture in the space of a single paragraph, and I would recommend reading that part even if you don’t have time to read the whole article. In general it’s worth it to get a picture of what’s going on.

That said, I obviously think I can clear a few things up, otherwise I wouldn’t be writing about it, so here I go!

“The” Amplituhedron

Nima’s new construction, the Amplituhedron, encodes amplitudes (building blocks of probabilities in particle physics) in N=4 super Yang-Mills as the “area” of a multi-dimensional analog of a polyhedron (hence, Amplitu-hedron).

Now, I’m a big supporter of silly-sounding words with amplitu- at the beginning (amplitudeologist, anyone?), and this is no exception. Anyway, the word Amplitu-hedron isn’t what’s confusing people. What’s confusing people is the word the.

When the Quanta article says that Nima has found “the” Amplituhedron, it makes it sound like he has discovered one central formula that somehow contains the whole universe. If you read the comments, many readers went away with that impression.

In case you needed me to say it, that’s not what is going on. The problem is in the use of the word “the”.

Suppose it was 1886, and I told you that a fellow named Carl Benz had invented “the Automobile”, a marvelous machine that can get everyone to work on time (as well as become the dominant form of life on Long Island).

My use of “the” might make you imagine that Benz invented some single, giant machine that would roam across the country, picking people up and somehow transporting everyone to work. You’d be skeptical of this, of course, expecting that long queues to use this gigantic, wondrous machine would swiftly ruin any speed advantage it might possess…

The Automobile, here to take you to work.

Or, you could view “the” in another light, as indicating a type of thing.

Much like “the Automobile” is a concept, manifested in many different cars and trucks across the country, “the Amplituhedron” is a concept, manifested in many different amplituhedra, each corresponding to a particular calculation that we might attempt.

Advantages…

Each amplituhedron has to do with an amplitude involving a specific number of particles, with a particular number of internal loops. (The Quanta article has a pretty good explanation of loops, here’s mine if you’d rather read that). Based on the problem you’re trying to solve, there are a set of rules that you use to construct the particular amplituhedron you need. The “area” of this amplituhedron (in quotation marks because I mean the area in an abstract, mathematical sense) is the amplitude for the process, which lets you calculate the probability that whatever particle physics situation you’re describing will happen.

Now, we already have many methods to calculate these probabilities. The amplituhedron’s advantage is that it makes these calculations much simpler. Nima claims that what was once quite a laborious and complicated four-loop calculation can be done by hand using amplituhedra. I didn’t get a chance to ask whether the same efficiency improvement holds true at six loops, but Nima’s description made it sound like it would at least speed things up.

[Edit: Some of my fellow amplitudeologists have reminded me of two things. First, that paper I linked above paved the way to more modern methods for calculating these things, which also let you do the four-loop calculation by hand. (You need only six or so diagrams). Second, even back then the calculation wasn’t exactly “laborious”, there were some pretty slick tricks that sped things up. With that in mind, I’m not sure Nima’s method is faster per se. But it is a fast method that has the other advantages described below.]

The amplituhedron has another, more sociological advantage. By describing the amplitude in terms of a geometrical object rather than in terms of our usual terminology, we phrase things in a way that mathematicians are more likely to understand. By making things more accessible to mathematicians (and the more math-headed physicists), we invite them to help us solve our problems, so that together we can come up with more powerful methods of calculation.

Nima and the Quanta article both make a big deal about how the amplituhedron gets rid of the principles of locality and unitarity, two foundational principles of quantum field theory. I’m a bit more impressed by this than Woit is. The fine distinction that needs to be made here is that the amplituhedron isn’t simply “throwing out” locality and unitarity. Rather, it’s written in such a way that it doesn’t need locality and unitarity to function. In the end, the formulas it computes still obey both principles. Nima’s hope is that, now that we are able to write amplitudes without needing locality and unitarity, if we end up having to throw out either of those principles to make a new theory we will be able to do so. That’s legitimately quite a handy advantage to have, it just doesn’t mean that locality and unitarity must be thrown out right now.

…and Disadvantages

It’s important to remember that this whole story is limited to N=4 super Yang-Mills. Nima doesn’t know how to apply it to other theories, and nobody else seems to have any good ideas either. In addition, this only applies to the planar part of the theory. I’m not going to explain what that term means here; for now just be aware that while there are tricks that let you “square” a calculation in super Yang-Mills to get a similar calculation in quantum gravity, those tricks rely on having non-planar data, or information beyond the planar part of the theory. So at this point, this doesn’t give us any new hints about quantum gravity. It’s conceivable that physicists will find ways around both of these limits, but for now this result, though impressive, is quite limited.

Nima hasn’t found some sort of singular “jewel at the heart of physics”. Rather, he’s found a very slick, very elegant, quite efficient way to make calculations within one particular theory. This is profound, because it expresses things in terms that mathematicians can address, and because it shows that we can write down formulas without relying on what are traditionally some of the most fundamental principles of quantum field theory. Only time will tell whether Nima or others can generalize this picture, taking it beyond planar N=4 super Yang-Mills and into the tougher theories that still await this sort of understanding.

Hype versus Miscommunication, or the Language of Importance

A fellow amplitudes-person was complaining to me recently about the hype surrounding the debate over whether black holes have “firewalls”. New York Times coverage seems somewhat excessive for what is, in the end, a fairly technical debate, and its enthusiasm was (rightly?) mocked in several places.

There’s an attitude I often run into among other physicists. The idea is that when hype like this happens, it’s because senior physicists are, at worst, cynically manipulating the press to further their positions or, at best, so naïve that they really see what they’re working on as so important that it deserves hype-y coverage. Occasionally, the blame will instead be put on the journalists, with largely the same ascribed motivations: cynical need for more page views, or naïve acceptance of whatever story they’re handed.

In my opinion, what’s going on there is a bit deeper, and not so easily traceable to any particular person.

In the articles on the (2, 0) theory I put up in the last few weeks, I made some disparaging comments about the tone of this Scientific American blog post. After exchanging a few tweets with the author, I think I have a better idea of what went down.

The problem here is that when you ask a scientist about something they’re excited about, they’re going to tell you why they’re excited about it. That’s what happened here when Nima Arkani-Hamed was interviewed for the above article: he was asked about the (2, 0) theory, and he seems to have tried to convey his enthusiasm with a metaphor that explained how the situation felt to him.

The reason this went wrong and led to a title as off-base and hype-sounding as “the Ultimate Ultimate Theory of Physics” was that we (scientists and science journalists) are taught to express enthusiasm in the language of importance.

There has been an enormous resurgence in science communication in recent years, but it has come with a very us-vs.-them mentality. The prevailing attitude is that the public will only pay attention to a scientific development if they are told that it is important. As such, both scientists and journalists try to make whatever they’re trying to communicate sound central, either to daily life or to our understanding of the universe. When both sides of the conversation are operating under this attitude, it creates an echo chamber where a concept’s importance is blown up many times greater than it really deserves, without either side doing anything other than communicating science in the only way they know.

We all have to step back and realize that most of the time, science isn’t interesting because of its absolute “importance”. Rather, a puzzle is often interesting simply because it is a puzzle. That’s what’s going on with the (2, 0) theory, or with firewalls: they’re hard to figure out, and that’s why we care.

Being honest about this is not going to lose us public backing, or funding. It’s not just scientists who value interesting things because they are challenging. People choose the path of their lives not based on some absolute relevance to the universe at large, but because things make sense in context. You don’t fall in love because the target of your affections is the most perfect person in the universe, you fall in love because they’re someone who can constantly surprise you.

Scientists are in love with what they do. We need to make sure that that, and not some abstract sense of importance, is what we’re communicating. If we do that, if we calm down and make a bit more effort to be understood, maybe we can win back some of the trust that we’ve lost by appearing to promote Ultimate Ultimate Theories of Everything.

Blackboards

As a college student, I already knew that theoretical physicists weren’t like how they were portrayed in movies. They didn’t wear lab coats, or have universally frizzy, unkempt white hair. I knew they didn’t have labs, or plot to take over the world. And I was pretty sure that they didn’t constantly use blackboards.

After all, blackboards are a teaching tool. They’re nice for getting equations up so that the guy way in the back can see them. But if you were actually doing a real calculation, surely you’d prefer paper, or a computer, or some other method that doesn’t involve an unkempt scrawl and a heap of loose white dust all over your clothing.

Right?

Right?

Over the last few years I’ve come to appreciate the value of blackboards. Blackboards actually can be used for calculations. You don’t want to use them all the time, but there are times when it’s useful to have a lot of room on a page, to be able to make notes and structure the board around concepts. More importantly, though, there is a third function that I didn’t even consider back in college. Between calculation and teaching, there is collaboration.

Go to a physics or math department, and you’ll find blackboards on the walls. You’ll find them not just in classrooms, but in offices, and occasionally in corridors. Go to a high-class physics location like the Perimeter Institute or the Simons Center, and they’ll brag to you about how many blackboards they have strewn around their common areas.

The purpose of these blackboards is to facilitate conversation. If you want to explain your work to someone else and you aren’t using a blog post, you need space to write in a way that you can both see what you’re doing. Blackboards are ideal for that sort of conversation, and as such are essential for collaboration and communication among scientists.

What about whiteboards? Well, whiteboards are just evil, obviously.

Hawking vs. Witten: A Primer

Have you seen the episode of Star Trek where Data plays poker with Stephen Hawking? How about the times he appeared on Futurama or the Simpsons? Or the absurd number of times he has come up in one way or another on The Big Bang Theory?

Stephen Hawking is probably the most recognizable theoretical physicist to laymen. Wheelchair-bound and speaking through a voice synthesizer, Hawking presents a very distinct image, while his work on black holes and the big bang, along with his popular treatments of science in books like A Brief History of Time, has made him synonymous in the public’s mind with genius.

He is not, however, the most recognizable theoretical physicist when talking to physicists. If Sheldon from The Big Bang Theory were a real string theorist he wouldn’t be obsessed with Hawking. He might, however, be obsessed with Edward Witten.

Edward Witten is tall and has an awkwardly high voice (for a sample, listen to the clip here). He’s also smart, smart enough to dabble in basically every subfield of theoretical physics and manage to make important contributions while doing so. He has a knack for digging up ideas from old papers and dredging out the solution to current questions of interest.

And far more than Hawking, he represents a clear target for parody, at least when that parody is crafted by physicists and mathematicians. Abstruse Goose has a nice take on his role in theoretical physics, while his collaboration with another physicist named Seiberg on what came to be known as Seiberg-Witten theory gave rise to the cyber-Witten pun.

If you would look into the mouth of physics-parody madness, let this link be your guide…

So why hasn’t this guy appeared on Futurama? (After all, his dog does!)

Witten is famous among theorists, but he hasn’t done as much as Hawking to endear himself to the general public. He hasn’t written popular science books, and he almost never gives public talks. So when a well-researched show like The Big Bang Theory wants to mention a famous physicist, they go to Hawking, not to Witten, because people know about him. And unless Witten starts interfacing more with the public (or blog posts like this become more common), that’s not about to change.

Sound Bite Management; or the Merits of Shock and Awe

First off, for the small demographic who haven’t seen it already (and aren’t reading this because of it), I wrote an article for Ars Technica. Go read it.

After the article went up, a professor from my department told me that he and several others were concerned about the title.

Now before I go on, I’d like to clarify that this isn’t going to be a story about the department trying to “shut me down” or anything paranoid like that. The professor in question was expressing a valid concern in a friendly way, and it deserves some thought.

The concern was the following: isn’t a title like “Earning a PhD by studying a theory that we know is wrong” bad publicity for the field? Regardless of whether the article rebuts the idea that “wrong” is a meaningful descriptor for this sort of theory, doesn’t a title like that give fuel to the fire, “sharpening the cleavers” of the field’s detractors, as one commenter put it? In other words, even if it’s a good article, isn’t it a bad sound bite?

It’s worryingly easy for a catchy sound bite to eclipse everything else about a piece. As one commenter pointed out, that’s roughly what happened with Palin’s fruit fly comment. And with that in mind, the claim that people are earning PhDs based on “false” theories definitely sounds like the sort of sound bite that could get out of hand in a hurry if the wrong community picked it up.

There is, at least, one major difference between my sound bite and Palin’s. In the political climate of 2008 it was easy to believe that Sarah Palin didn’t understand the concept of fruit fly research. On the other hand, it’s quite a bit less plausible that Ars would air a piece calling most work in theoretical physics useless.

In operation here is the old, powerful technique of using a shocking, dissonant headline to lure people in. A sufficiently out-of-character statement won’t be taken at face value; rather, it will inspire readers to dig in to the full article to figure out what they’re missing. This is the principle behind provocateurs in many fields, and while there are always risks, often this is the only way to get people to think about complex issues (Peter Singer often seems to exemplify the risks and rewards of this tactic, just to give an example).

What’s the alternative here? In referring to the theory I study as “wrong”, I’m attempting to bring readers face to face with a common misconception: the idea that every theory in physics is designed to approximate some part of the real world. For the physicists in the audience, this is the public perception that everything in theoretical physics is phenomenology. If we don’t bring this perception to light and challenge it, then we’re sweeping a substantial amount of theoretical physics under the rug for the sake of a simpler message. And that’s risky, because if people don’t understand what physics really is then they’re likely to balk when they glimpse what they think is “illegitimate” physics.

In my view, shocking people by describing my type of physics as not “true” is the best way to teach people about what physicists actually do. But it is risky, and it could easily give people the wrong impression. Only time will tell.

In Defense of Pure Theory

I’d like to preface this by saying that this post will be a bit more controversial than usual. I have somewhat unconventional opinions about the nature and purpose of science, and what I say below shouldn’t be taken as representative of the field in general.

A bit more than a week ago, Not Even Wrong had a post on the Fundamental Physics Prize. Peter Woit is often…I’m going to say annoying…and this post was no exception.

The Fundamental Physics Prize, for those not in the know, is a fairly recently established prize for physicists, mostly theoretical physicists. Clocking in at three million dollars, it is larger than the Nobel Prize, and is currently the largest prize of its sort. Woit has several objections to the current choice of recipient (Alexander Polyakov). I sympathize with some of these objections, in particular the snarky observation that a large number of the awardees are from Princeton’s Institute for Advanced Study. But there is one objection in particular that I feel the need to rebut, if only due to its wording: the gripe that “Viewers of the part I saw would have no idea that string theory is not tested, settled science.”

There are two problems with this statement. The first is something that Woit is likely aware of, but it probably isn’t obvious to everyone reading this. To be clear, the fact that a theory has not been experimentally tested is not a barrier to its consideration for the Fundamental Physics Prize. Far from it: the prize exists precisely to honor powerful insights in theoretical physics that have not yet been experimentally verified. It was created, in part, to remedy what was perceived as unfairness in the awarding of the Nobel Prize, which is only given to theorists after their theories have received experimental confirmation. Griping that the Fundamental Physics Prize is awarded to untested theories, then, is a bit like griping that Oscars aren’t awarded to scientists, or objecting that viewers of the Oscars would have no idea that the winners haven’t done anything especially amazing for humanity. If you’re watching the ceremony, you probably know what it’s for.

Has this been experimentally verified?

The other problem is a difference of philosophy. When Woit says that string theory is not “tested, settled science”, he is implying that in order to be “settled science”, a theory must be tested, and while I can’t be sure of his intent, I’m guessing he means tested experimentally. It is this latter implication I want to address: whether or not Woit is making it here, it serves to underscore an important point about the structure of physics as an institution.

Past readers will be aware that a theory can be valuable even if it doesn’t correspond to the real world because of what it can teach us about theories that do correspond to the real world. And while that is an important point, the point I’d like to make here is a bit more controversial. I would like to argue that pure theory, theory unconnected with experiment, can be important and valuable and “settled science” in and of itself.

First off, let’s talk about how such a theory can be science, and in particular how it can be physics. Plenty of people do work that doesn’t correspond to the experimentally accessible real world. Mathematicians are the clearest example, but the point arguably applies to fields like literary analysis as well. Physics is ostensibly supposed to be special, though: as part of science, we expect it to concern itself with the real world; otherwise, one could argue, it would simply be mathematics. However, as I have argued before, the difference between mathematics and physics is not one of subject matter but of methods. This makes sense, provided you think of physics not as some fixed school of thought, but as an institution. Physicists train new physicists, and as such physicists learn methods common to other physicists. That which physicists like to do, then, is physics, which means that physics is defined much more by the methods used to do it than by its object of study.

How can such a theory be settled, then? After all, if reality is out, what possible criteria could there be for deciding what is or is not a “good” theory?

The thing about physics as an institution is that physics is done by physicists, and physicists have careers. Over the course of those careers, those physicists need to publish papers, which need to catch the attention and approval of other physicists. They also need to have projects for grad students to do, so as to produce more physicists. Because of this, a “good” theory cannot be worked on alone. It has to be a theory with many implications, a theory that can be worked on and understood consistently by different people. It also needs to constrain further progress, to make sure that not just anyone can create novel results: this is what allows papers to catch the attention of other physicists! If you have all that, you have all of the relevant advantages of reality.

String theory has not been experimentally tested, but it meets all of these criteria. String theory has been a major force in theoretical physics for the past thirty years because it can fuel careers and lead to discussion in a way that nothing else on the table can. It has been tested mathematically in numerous ways, ways which demonstrate its robustness as a theory of quantum gravity. In this sense, string theory is a prime example of tested, settled science.