
Editors, Please Stop Misquoting Hawking

If you’ve been following science news recently, you’ve probably heard the apparently alarming news that Stephen Hawking has turned his back on black holes, or that black holes can actually be escaped, or…how about I just show you some headlines:

[Screenshots of headlines from Fox News, Nature, and Yahoo]

Now, Hawking didn’t actually say that black holes don’t exist. There are a few good pieces on the topic, but in many cases the real message has gotten lost in the noise.

From Hawking’s paper:

[Excerpt from Hawking’s paper]

What Hawking is proposing is that the “event horizon” around a black hole, rather than being an absolute permanent boundary from which nothing can escape, is a more temporary “apparent” horizon, the properties of which he goes on to describe in detail.

Why is he proposing this? It all has to do with the debate over black hole firewalls.

Starting with a paper by Polchinski and colleagues a year and a half ago, the black hole firewall paradox centers on contradictory predictions from general relativity and quantum mechanics. General relativity predicts that an astronaut falling past a black hole’s event horizon will notice nothing particularly odd about the surrounding space, but that once past the event horizon none of the “information” that specifies the astronaut’s properties can escape to the outside world. Quantum mechanics on the other hand predicts that information cannot be truly lost. The combination appears to suggest something radical, a “firewall” of high energy radiation around the event horizon carrying information from everything that fell into the black hole in the past, so powerful that it would burn our hypothetical astronaut to a crisp.

Since then, a wide variety of people have made one proposal or another, either attempting to avoid the seemingly preposterous firewall or to justify and further explain it. The reason the debate is so popular is that it touches on some of the fundamental principles of quantum mechanics.

Now, as I have pointed out before, I’m not a good person to ask about the fundamental principles of quantum mechanics. (Incidentally, I’d love it if some of the more quantum information or general relativity-focused bloggers would take a more substantial crack at this! Carroll, Preskill, anyone?) What I can talk about, though, is hype.

All of the headlines I listed take Hawking’s quote out of context, but not all of the articles do. The problem isn’t so much the journalists, as the editors.

One of an editor’s responsibilities is to take articles and give them titles that draw in readers. The editor wants a title that will get people excited, make them curious, and most importantly, get them to click. While a journalist won’t have any particular incentive to improve ad revenue, the same cannot be said for an editor. Thus, editors will often rephrase the title of an article in a way that makes the whole story seem more shocking.

Now that, in itself, isn’t a problem. I’ve used titles like that myself. The problem comes when the title isn’t just shocking, but misleading.

When I call astrophysics “impossible”, nobody is going to think I mean it literally. The title is petulant and ridiculous enough that no one would take it at face value, but still odd enough to make people curious. By contrast, when you say that Hawking has “changed his mind” about black holes or said that “black holes do not exist”, there are people who will take that at face value as supporting their existing beliefs, as the Borowitz Report humorously points out. These people will go off thinking that Hawking really has given up on black holes. If the title confirms their beliefs enough, people might not even bother to read the article. Thus, by using an actively misleading title, you may actually be decreasing clicks!

It’s not that hard to write a title that’s both enough of a hook to draw people in and won’t mislead. Editors of the world, you’re well-trained writers, certainly much better than me. I’m sure you can manage it.

There really is some interesting news here, if people had bothered to look into it. The firewall debate has been going on for a year and a half, and while Hawking isn’t the universal genius the media occasionally depicts he’s still the world’s foremost expert on the quantum properties of black holes. Why did he take so long to weigh in? Is what he’s proposing even particularly new? I seem to remember people discussing eliminating the horizon in one way or another (even “naked” singularities) much earlier in the firewall debate…what makes Hawking’s proposal novel and different?

This is the sort of thing you can use to draw in interest, editors of the world. Don’t just write titles that cause ignorant people to roll their eyes and move on, instead, get people curious about what’s really going on in science! More ad revenue for you, more science awareness for us, sounds like a win-win!

How (Not) to Sum the Natural Numbers: Zeta Function Regularization

1+2+3+4+5+6+\ldots=-\frac{1}{12}

If you follow Numberphile on YouTube or Bad Astronomy on Slate you’ve already seen this counter-intuitive sum written out. Similarly, if you follow those people or Scientopia’s Good Math, Bad Math, you’re aware that the way that sum was presented by Numberphile in that video was seriously flawed.

There is a real sense in which adding up all of the natural numbers (numbers 1, 2, 3…) really does give you minus one twelfth, despite all the reasons this should be impossible. However, there is also a real sense in which it does not, and cannot, do any such thing. To explain this, I’m going to introduce two concepts: complex analysis and regularization.

This discussion is not going to be mathematically rigorous, but it should give an authentic and accurate view of where these results come from. If you’re interested in the full mathematical details, a later discussion by Numberphile should help, and the mathematically confident should read Terence Tao’s treatment from back in 2010.

With that said, let’s talk about sums! Well, one sum in particular:

\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+\frac{1}{6^s}+\ldots = \zeta(s)

If s is greater than one, then each term in this infinite sum gets smaller and smaller fast enough that you can add them all up and get a number. That number is referred to as \zeta(s), the Riemann Zeta Function.
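As a quick numerical sanity check (my own sketch, not part of the original discussion, using nothing but Python’s standard library), you can watch the partial sums settle down for s = 2:

import math

def zeta_partial_sum(s, terms):
    # Add up 1/1^s + 1/2^s + ... + 1/terms^s
    return sum(1.0 / n**s for n in range(1, terms + 1))

for terms in (10, 1000, 100000):
    print(terms, zeta_partial_sum(2, terms))

print("zeta(2) should be pi^2/6 =", math.pi**2 / 6)

The partial sums creep up toward about 1.6449, the known value of \zeta(2); try s = 1 or s = 0.5 instead and they just keep growing.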

So what if s is smaller than one?

The infinite sum that I described doesn’t converge for s less than one. Add it up in any reasonable way, and it just approaches infinity. Put another way, the sum is not properly defined. But despite this, \zeta(s) is not infinite for s less than one!

Now, you might object that we only defined the Riemann Zeta Function for s greater than one. How do we know anything at all about it for s less than one?

That is where complex analysis comes in. Complex analysis sounds like a made-up term for something unreasonably complicated, but it’s quite a bit more approachable when you know what it means. Analysis is the type of mathematics that deals with functions, infinite series, and the basis of calculus. It’s often contrasted with Algebra, which usually considers mathematical concepts that are discrete rather than smooth (this definition is a huge simplification, but it’s not very relevant to this post). Complex means that complex analysis deals with functions, not of everyday real numbers, but of complex numbers, or numbers with an imaginary part.

So what does complex analysis say about the Riemann Zeta Function?

One of the most impressive results of complex analysis is the discovery that if a function of a complex number is sufficiently smooth (the technical term is analytic) then it is very highly constrained. In particular, if you know how the function behaves over an area (technical term: open set), then you know how it behaves everywhere else!

If you’re expecting me to explain why this is true, you’ll be disappointed. This is serious mathematics, and serious mathematics isn’t the sort of thing you can give the derivation for in a few lines. It takes as much effort and knowledge to replicate a mathematical result as it does to replicate many lab results in science.

What I can tell you is that this sort of approach crops up in many places, and is part of a general theme. There is a lot you can tell about a mathematical function just by looking at its behavior in some limited area, because mathematics is often much more constrained than it appears. It’s the same sort of principle behind the work I’ve been doing recently.
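A toy version of the same phenomenon, simpler than the zeta function (this example is my addition, not part of the original argument): the geometric series only converges when |x| is less than one, but the function it matches there is defined almost everywhere:

1+x+x^2+x^3+\ldots = \frac{1}{1-x} \quad \text{for } |x|<1

Plugging x = 2 into the left-hand side is nonsense, but the right-hand side happily gives -1. Assigning that value to the divergent series is the same kind of move we’re about to make with the zeta function, just without the machinery of complex analysis.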

In the case of the Riemann Zeta Function, we have a definition for s greater than one. As it turns out, this definition still works if s is a complex number, as long as the real part of s is greater than one. That gives us the value of the Riemann Zeta Function over a large area (half of the complex numbers), and from that, complex analysis tells us its value for every other number (apart from a single point, s = 1, where the function blows up). In particular, it tells us this:

\zeta(-1)= -\frac{1}{12}

If the Riemann Zeta Function is consistently defined for every complex number, then it must have this value when s is minus one.
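If you want to see this number come out of a computer (a sketch that assumes the mpmath library is installed; its zeta function implements the continuation for you), compare the continued function with the naive sum:

from mpmath import mp, zeta

mp.dps = 15                  # working precision, in decimal digits
print(zeta(-1))              # the analytically continued value
print(sum(range(1, 10**6)))  # the naive partial sum, which just keeps growing

The first print gives -0.0833…, which is -1/12; the second gives a huge number that only gets bigger the more terms you add.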

If we still trusted the sum definition for this value of s, we could plug in -1 and get

 1+2+3+4+5+6+\ldots=-\frac{1}{12}

Does that make this statement true? Sort of. It all boils down to a concept from physics called regularization.

In physics, we know that in general there is no such thing as infinity. With a few exceptions, nothing in nature should be infinite, and finite evidence (without mathematical trickery) should never lead us to an infinite conclusion.

Despite this, occasionally calculations in physics will give infinite results. Almost always, this is evidence that we are doing something wrong: we are not thinking hard enough about what’s really going on, or there is something we don’t know or aren’t taking into account.

Doing physics research isn’t like taking a physics class: sometimes, nobody knows how to do the problem correctly! In many cases where we find infinities, we don’t know enough about “what’s really going on” to correct them. That’s where regularization comes in handy.

Regularization is the process by which an infinite result is replaced with a finite result (made “regular”), in a way so that it keeps the same properties. These finite results can then be used to do calculations and make predictions, and so long as the final predictions are regularization independent (that is, the same if you had done a different regularization trick instead) then they are legitimate.

In string theory, one way to compute the required dimensions of space and time ends up giving you an infinite sum, a sum that goes 1+2+3+4+5+…. In context, this result is obviously wrong, so we regularize it. In particular, we say that what we’re really calculating is the Riemann Zeta Function, which we happen to be evaluating at -1. Then we replace 1+2+3+4+5+… with -1/12.
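To see in a sketch where the -1/12 comes from without invoking the zeta function (this is the standard smoothed-sum argument, not the string theory calculation itself), you can damp the sum with an exponential cutoff and expand for small \epsilon:

\sum_{n=1}^{\infty} n\, e^{-n\epsilon} = \frac{e^{-\epsilon}}{\left(1-e^{-\epsilon}\right)^2} = \frac{1}{\epsilon^2}-\frac{1}{12}+O(\epsilon^2)

The divergence is isolated in the 1/\epsilon^2 piece, while the finite constant left behind is the same -1/12 that the zeta function gives; getting the same constant from a completely different trick is the kind of regularization independence described above.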

Now remember when I said that getting infinities is a sign that you’re doing something wrong? These days, we have a more rigorous way to do this same calculation in string theory, one that never forces us to take an infinite sum. As expected, it gives the same result as the old method, showing that the old calculation was indeed regularization independent.

Sometimes we don’t have a better way of doing the calculation, and that’s when regularization techniques come in most handy. A particular family of tricks called renormalization is quite important, and I’ll almost certainly discuss it in a future post.

So can you really add up all the natural numbers and get -1/12? No. But if a calculation tells you to add up all the natural numbers, and it’s obvious that the result can’t be infinite, then it may secretly be asking you to calculate the Riemann Zeta Function at -1. And that, as we know from complex analysis, is indeed -1/12.

Astrophysics, the Impossible Science

Last week, Nobel Laureate Martinus Veltman gave a talk at the Simons Center. After the talk, a number of people asked him questions about several things he didn’t know much about, including supersymmetry and dark matter. After deflecting a few such questions, he proceeded to go on a brief rant against astrophysics, professing suspicion of the field’s inability to do experiments and making fun of an astrophysicist colleague’s imprecise data. The rant was a rather memorable feat of curmudgeonliness, and apparently typical Veltman behavior. It left several of my astrophysicist friends fuming. For my part, it inspired me to write a positive piece on astrophysics, highlighting something I don’t think is brought up enough.

The thing about astrophysics, see, is that astrophysics is impossible.

Imagine, if you will, an astrophysical object. As an example, picture a black hole swallowing a star.

Are you picturing it?

Now think about where you’re looking from. Chances are, you’re at some point up above the black hole, watching the star swirl around, seeing something like this:

Where are you in this situation? On a spaceship? Looking through a camera on some probe?

Astrophysicists don’t have spaceships that can go visit black holes. Even the longest-ranging probes have barely left the solar system. If an astrophysicist wants to study a black hole swallowing a star, they can’t just look at a view like that. Instead, they look at something like this:

The image on the right is an artist’s idea of what a black hole looks like. The three on the left? They’re what the astrophysicist actually sees. And even that is cleaned up a bit, the raw output can be even more opaque.

A black hole swallowing a star? Just a few blobs of light, pixels on screen. You can measure brightness and dimness, filter by color from gamma rays to radio waves, and watch how things change with time. You don’t even get a whole lot of pixels for distant objects. You can’t do experiments, either, you just have to wait for something interesting to happen and try to learn from the results.

It’s like staring at the static on a TV screen, day after day, looking for patterns, until you map out worlds and chart out new laws of physics and infer a space orders of magnitude larger than anything anyone’s ever experienced.

And naively, that’s just completely and utterly impossible.

And yet…and yet…and yet…it works!

Crazy people staring at a screen can’t successfully make predictions about what another part of the screen will look like. They can’t compare results and hone their findings. They can’t demonstrate principles (like General Relativity) that change technology here on Earth. Astrophysics builds on itself, discovery by discovery, in a way that can only be explained by accepting that it really does work (a theme that I’ve had occasion to harp on before).

Physics began with astrophysics. Trying to explain the motion of dots in a telescope and objects on the ground with the same rules led to everything we now know about the world. Astrophysics is hard, arguably impossible…but impossible or not, there are people who spend their lives successfully making it work.

What does Copernicus have to say about String Theory?

Putting aside some highly controversial exceptions, string theory has made no testable predictions. Conceivably, a world governed by string theory and a world governed by conventional particle physics would be indistinguishable to every test we could perform today. Furthermore, it’s not even possible to say that string theory predicts the same things with fewer fudge-factors, as string theory descriptions of our world seem to have dramatically more free parameters than conventional ones.

Critics of string theory point to this as a reason why string theory should be excluded from science, sent off to the chilly arctic wasteland of the math department. (No offense to mathematicians, I’m sure your department is actually quite warm and toasty.) What these critics are missing is an important feature of the scientific process: before scientists are able to make predictions, they propose explanations.

To explain what I mean by that, let’s go back to the beginning of the 16th century.

At the time, the authority on astronomy was still Ptolemy’s Syntaxis Mathematica, a book so renowned that it is better known by the Arabic-derived superlative Almagest, “the greatest”. Ptolemy modeled the motions of the planets and stars as a series of interlocking crystal spheres with the Earth at the center, and did so well enough that until that time only minor improvements on the model had been made.

This is much trickier than it sounds, because even in Ptolemy’s day astronomers could tell that the planets did not move in simple circles around the Earth. There were major distortions from circular motion, the most dramatic being the phenomenon of retrograde motion.

If the planets really were moving in simple circles around the Earth, you would expect them to keep moving in the same direction. However, ancient astronomers saw that sometimes, some of the planets moved backwards. The planet would slow down, turn around, go backwards a bit, then come to a stop and turn again.

Thus sparking the invention of the spirograph.

In order to take this into account, Ptolemy introduced epicycles, extra circles of motion for the planets. The epicycle would move on the planet’s primary circle, or deferent, and the planet would rotate around the epicycle, like so:

French Wikipedia had a better picture.
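In symbols (my own paraphrase of the construction, with R and \Omega for the deferent’s radius and rotation rate and r and \omega for the epicycle’s), the planet’s position relative to the Earth would be:

x(t) = R\cos(\Omega t)+r\cos(\omega t), \qquad y(t) = R\sin(\Omega t)+r\sin(\omega t)

Stack on more epicycles and you can absorb essentially any wiggle in the data, which is exactly what makes the scheme so flexible, and so hard to disprove.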

These epicycles weren’t just for retrograde motion, though. They allowed Ptolemy to model all sorts of irregularities in the planets’ motions. Any deviation from a circle could conceivably be plotted out by adding another epicycle (though Ptolemy had other methods to model this sort of thing, among them something called an equant). Enter Copernicus.

Enter Copernicus’s hair.

Copernicus didn’t like Ptolemy’s model. He didn’t like equants, and what’s more, he didn’t like the idea that the Earth was the center of the universe. Like Plato, he preferred the idea that the center of the universe was a divine fire, a source of heat and light like the Sun. He decided to put together a model of the planets with the Sun in the center. And what he found, when he did, was an explanation for retrograde motion.

In Copernicus’s model, the planets always go in one direction around the Sun, never turning back. However, some of the planets are faster than the Earth, and some are slower. When the Earth passes a slower planet, that planet will appear to move backwards, simply because of the Earth’s own speed. This is tricky to visualize, but hopefully the picture below will help: Mars starts out ahead of Earth in its orbit, then falls behind, making it appear to move backwards.
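To make this concrete, here is a small sketch of the geometry (my own illustration, with circular orbits and rounded values of 1 AU and 1 year for Earth, 1.52 AU and 1.88 years for Mars) that tracks the direction from Earth to Mars and adds up how long that direction drifts backwards:

import math

def position(radius, period, t):
    # Circular, coplanar orbit around the Sun
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def apparent_longitude(t):
    # Direction from Earth to Mars, as an angle on Earth's sky
    ex, ey = position(1.0, 1.0, t)      # Earth: 1 AU, 1 year
    mx, my = position(1.52, 1.88, t)    # Mars: 1.52 AU, 1.88 years
    return math.atan2(my - ey, mx - ex)

years, steps = 4.0, 2000
dt = years / steps
backwards = 0
prev = apparent_longitude(0.0)
for i in range(1, steps + 1):
    current = apparent_longitude(i * dt)
    change = (current - prev + math.pi) % (2 * math.pi) - math.pi
    if change < 0:  # Mars' position on the sky is drifting backwards
        backwards += 1
    prev = current

print("Mars appears to move backwards for %.2f of %.1f years" % (backwards * dt, years))

Neither planet ever reverses its actual orbit; the backwards drift shows up only in the direction from one to the other, which is the whole point of Copernicus’s explanation.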

Despite this simplification, Copernicus still needed epicycles. The planets’ motions simply aren’t perfect circles, even around the Sun. After getting rid of the equants from Ptolemy’s theory, Copernicus’s model ended up having just as many epicycles as Ptolemy’s!

Copernicus’s model wasn’t any better at making predictions (in fact, due to some technical lapses in its presentation, it was even a little bit worse). It didn’t have fewer “fudge factors”, as it had about the same number of epicycles. If you lived in the 16th century, you would have been completely justified in believing that the Earth was the center of the universe, and not the Sun. Copernicus had failed to establish his model as scientific truth.

However, Copernicus had still done something Ptolemy didn’t: he had explained retrograde motion. Retrograde motion was a unique, qualitative phenomenon, and while Ptolemy could include it in his math, only Copernicus gave you a reason why it happened.

That’s not enough to become the reigning scientific truth, but it’s a damn good reason to pay attention. It was justification for astronomers to dedicate years of their lives to improving the model, to working with it and trying to get unique predictions out of it. It was enough that, over half a century later, Kepler could take it and turn it into a theory that did make predictions better than Ptolemy, that did have fewer fudge-factors.

String theory as a model of the universe doesn’t make novel predictions, and it doesn’t have fewer fudge factors. What it does is explain: the spectra of particles in terms of shapes of space and time, the existence of gravity and light in terms of closed and open strings, the temperature of black holes in terms of what’s going on inside them (this last really ought to be the subject of its own post, it’s one of the big triumphs of string theory). You don’t need to accept it as scientific truth. Like Copernicus’s model in his day, we don’t have the evidence for that yet. But you should understand that, as a powerful explanation, the idea of string theory as a model of the universe is worth spending time on.

Of course, string theory is useful for many things that aren’t modeling the universe. But that’s the subject of another post.

Update on the Amplituhedron

A while back I wrote a post on the Amplituhedron, a type of mathematical object found by Nima Arkani-Hamed and Jaroslav Trnka that can be used to do calculations of scattering amplitudes in planar N=4 super Yang-Mills theory. (Scattering amplitudes are formulas used to calculate probabilities in particle physics, from the probability that an unstable particle will decay to the probability that a new particle could be produced by a collider.) Since then, they have published two papers on the topic, the most recent of which came out the day before New Year’s Eve. These papers laid out the amplituhedron concept in some detail, and answered a few lingering questions. The latest paper focuses on one particular formula, the probability that two particles bounce off each other. In discussing this case, the paper serves two purposes:

1. Demonstrating that Arkani-Hamed and Trnka did their homework.

2. Showing some advantages of the amplituhedron setup.

Let’s talk about them one at a time.

Doing their homework

There’s already a lot known about N=4 super Yang-Mills theory. In order to propose a new framework like the amplituhedron, Arkani-Hamed and Trnka need to show that the new framework can reproduce the old knowledge. Most of the paper is dedicated to doing just that. In several sections Arkani-Hamed and Trnka show that the amplituhedron reproduces known properties of the amplitude, like the behavior of its logarithm, its collinear limit (the situation when two momenta in the calculation become parallel), and, of course, unitarity.

What, you heard the amplituhedron “removes” unitarity? How did unitarity get back in here?

This is something that has confused several commenters, both here and on Ars Technica, so it bears some explanation.

Unitarity is the principle that enforces the laws of probability. In its simplest form, unitarity requires that all probabilities for all possible events add up to one. If this seems like a pretty basic and essential principle, it is! However, it and locality (the idea that there is no true “action at a distance”, that particles must meet to interact) can be problematic, causing paradoxes for some approaches to quantum gravity. Paradoxes like these inspired Arkani-Hamed to look for ways to calculate scattering amplitudes that don’t rely on locality and unitarity, and with the amplituhedron he succeeded.
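Written out (a standard statement, not something specific to the amplituhedron papers), unitarity says that the S-matrix, the object that maps initial states to final states, satisfies:

S^\dagger S = 1, \qquad \text{so that} \qquad \sum_f \left|\langle f|S|i\rangle\right|^2 = 1

In words: for any initial state i, summing over a complete set of final states f, the probabilities of everything that could possibly happen add up to one.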

However, just because the amplituhedron doesn’t rely on unitarity and locality, doesn’t mean it violates them. The amplituhedron, for all its novelty, still calculates quantities in N=4 super Yang-Mills. N=4 super Yang-Mills is well understood, it’s well-behaved and cuddly, and it obeys locality and unitarity.

This is why the amplituhedron is not nearly as exciting as a non-physicist might think. The amplituhedron, unlike most older methods, isn’t based on unitarity and locality. However, the final product still has to obey unitarity and locality, because it’s the same final product that others calculate through other means. So it’s not as if we’ve completely given up on basic principles of physics.

Not relying on unitarity and locality is valuable. For those who research scattering amplitudes, it has often been useful to try to “eliminate” one principle or another from our calculations. 20 years ago, avoiding Feynman diagrams was the key to finding dramatic simplifications. Now, many different approaches try to sidestep different principles. (For example, while the amplituhedron calculates an integrand and leaves a final integral to be done, I’m working on approaches that never employ an integrand.)

If we can avoid relying on some “basic” principle, that’s usually good evidence that the principle might be a consequence of something even more basic. By showing how unitarity can arise from the amplituhedron, Arkani-Hamed and Trnka have shown that a seemingly basic principle can come out of a theory that doesn’t impose it.

Advantages of the Amplituhedron

Not all of the paper compares to old results and principles, though. A few sections instead investigate novel territory, and in doing so show some of the advantages and disadvantages of the amplituhedron.

Last time I wrote on this topic, I was unclear on whether the amplituhedron was more efficient than existing methods. At this point, it appears that it is not. While the formula that the amplituhedron computes has been found by other methods up to seven loops, the amplituhedron itself can only get up to three loops or so in practical cases. (Loops are a way that calculations are classified in particle physics. More loops means a more complex calculation, and a more precise final result.)

The amplituhedron’s primary advantage is not in efficiency, but rather in the fact that its mathematical setup makes it straightforward to derive interesting properties for any number of loops desired. As Trnka occasionally puts it, the central accomplishment of the amplituhedron is to find “the question to which the amplitude is the answer”. Because this “question” can be phrased mathematically, it can be posed in full generality, which makes it possible to discover several properties that should hold no matter how complex the rest of the calculation becomes. It also has another implication: if this mathematical question has a complete mathematical answer, that answer could calculate the amplitude for any number of loops. So while the amplituhedron is not more efficient than other methods now, it has the potential to be dramatically more efficient if it can be fully understood.

All that said, it’s important to remember that the amplituhedron is still limited in scope. Currently, it applies to a particular theory, one that doesn’t (and isn’t meant to) describe the real world. It’s still too early to tell whether similar concepts can be defined for more realistic theories. If they can, though, it won’t depend on supersymmetry or string theory. One of the most powerful techniques for making predictions for the Large Hadron Collider, the technique of generalized unitarity, was first applied to N=4 super Yang-Mills. While the amplituhedron is limited now, I would not be surprised if it (and its competitors) give rise to practical techniques ten or twenty years down the line. It’s happened before, after all.