Tag Archives: science communication

No, Hawking didn’t say that a particle collider could destroy the universe

So apparently Hawking says that the Higgs could destroy the universe.

[Image: Hawking and the Higgs]

I’ve covered this already, right? No need to say anything more?

Ok, fine, I’ll write a real blog post.

The Higgs is a scalar field: a number, sort of like temperature, that can vary across space and time. In the case of the Higgs this number determines the mass of almost every fundamental particle (the jury is still somewhat out on neutrinos). The Higgs doesn’t vary much at all, in fact it takes an enormous (Large Hadron Collider-sized) amount of energy to get it to wobble even a little bit. That is because the Higgs is in a very very stable state.

Hawking was pointing out that, given our current model of the Higgs, there’s actually another possible state for the Higgs to be in, one that’s even more stable (because it takes less energy, essentially). In that state, the number the Higgs corresponds to is much larger, so everything would be much more massive, with potentially catastrophic results. (Matt Strassler goes into some detail about the assumptions behind this.)

For those who have been following my blog for a while, you may find these “stable states” familiar. They’re vacua, different possible ways to set up “empty” space. In that post, I may have given the impression that there’s no way to change from one stable state, one “vacuum”, to another. In the case of the Higgs, the state it’s in is so stable that vast amounts of energy (again, a Large Hadron Collider-worth) only serve to create a small, unstable fluctuation, the Higgs boson, which vanishes in a fraction of a second.

And that would be the full story, were it not for a curious phenomenon called quantum tunneling.

If you’ve heard someone else describe quantum tunneling, you’ve probably heard that quantum particles placed on one side of a wall have a very small chance of being found later on the other side of the wall, as if they had tunneled there.

Using their incredibly tiny shovels.

However, quantum tunneling applies to much more than just walls. In general, a particle in an otherwise stable state (whether stable because there are walls keeping it in place, or for other reasons) can tunnel into another state, provided that the new state is “more stable” (has lower energy).

The chance of doing this is small, and it gets smaller the more “stable” the particle’s initial state is. Still, if you apply that logic to the Higgs, you realize there’s a very very very small chance that one day the Higgs could just “tunnel” away from its current stable state, destroying the universe as we know it in the process.
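To get a feel for the numbers, the textbook WKB approximation says the tunneling probability falls off exponentially with the width of the barrier and the square root of its height. A minimal Python sketch, using an electron and an illustrative 1 eV, nanometer-wide barrier (my choices, nothing to do with the Higgs itself):

```python
import math

# A minimal WKB sketch of tunneling through a rectangular barrier:
# T ~ exp(-2 * width * sqrt(2 * m * (V - E)) / hbar).
# The electron and the 1 eV, nanometer-scale barrier below are
# illustrative choices, not numbers from the Higgs discussion.

HBAR = 1.054571817e-34         # reduced Planck constant, J*s
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19           # one electronvolt in joules

def wkb_tunneling_probability(mass, barrier_height, energy, width):
    """Crude WKB estimate of the chance of tunneling through a barrier."""
    if energy >= barrier_height:
        return 1.0  # classically allowed: no tunneling needed (ignoring reflection)
    kappa = math.sqrt(2 * mass * (barrier_height - energy)) / HBAR
    return math.exp(-2 * kappa * width)

# An electron with 0.5 eV of energy meets a 1 eV barrier one nanometer wide:
p = wkb_tunneling_probability(M_ELECTRON, 1.0 * EV, 0.5 * EV, 1e-9)
print(p)  # small, but not zero

# Doubling the width squares the already-small probability: a more
# "stable" starting state means exponentially less tunneling.
p_wide = wkb_tunneling_probability(M_ELECTRON, 1.0 * EV, 0.5 * EV, 2e-9)
print(p_wide)
```

That exponential suppression is the sense in which a very stable state, like the Higgs’s, only tunnels on absurdly long timescales.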

If that happened, everything we know would vanish at the speed of light, and we wouldn’t see it coming.

While that may sound scary, it’s also absurdly unlikely, to the extent that it probably won’t happen until the universe is many times older than it is now. It’s not the sort of thing anybody should worry about, at least on a personal level.

Is Hawking fear-mongering, then, by pointing this out? Hardly. He’s just explaining science. Pointing out the possibility that the Higgs could spontaneously change and end the universe is a great way to emphasize the sheer scale of physics, and it’s pretty common for science communicators to mention it. I seem to recall a section about it in Particle Fever, and Sean Carroll even argues that it’s a good thing, due to killing off spooky Boltzmann Brains.

What do particle colliders have to do with all this? Well, apart from quantum tunneling, just inputting enough energy in the right way can cause a transition from one stable state to another. Here “enough energy” means about a million times that produced by the Large Hadron Collider. As Hawking jokes, you’d need a particle collider the size of the Earth to get this effect. I don’t know whether he actually ran the numbers, but if anything I’d guess that a Large Earth Collider would actually be insufficient.

Either way, Hawking is just doing standard science popularization, which isn’t exactly newsworthy. Once again, “interpret something Hawking said in the most ridiculous way possible” seems to be the replacement du jour for good science writing.

“China” plans super collider

When I saw the headline, I was excited.

“China plans super collider” says Nature News.

There’s been a lot of worry about what may happen if the Large Hadron Collider finishes its run without discovering anything truly new. If that happens, finding new particles might require a much bigger machine…and since even that machine has no guarantee of finding anything at all, world governments may be understandably reluctant to fund it.

As such, several prominent people in the physics community have put their hopes on China. The country’s somewhat autocratic nature means that getting funding for a collider is a matter of convincing a few powerful people, not a whole fractious gaggle of legislators. It’s a cynical choice, but if it keeps the field alive so be it.

If China were planning a super collider, then, that would be great news!

Too bad it’s not.

Buried eight paragraphs into Nature’s article, we find the following:

The Chinese government is yet to agree on any funding, but growing economic confidence in the country has led its scientists to believe that the political climate is ripe, says Nick Walker, an accelerator physicist at DESY, Germany’s high-energy physics laboratory in Hamburg. Although some technical issues remain, such as keeping down the power demands of an energy-hungry ring, none are major, he adds.

The Chinese government is yet to agree on any funding. China, if by China you mean the Chinese government, is not planning a super collider.

So who is?

Someone must have drawn these diagrams, after all.

Reading the article, the most obvious answer is Beijing’s Institute of High Energy Physics (IHEP). While this is true, the article leaves out any mention of a more recently founded site, the Center for Future High Energy Physics (CFHEP).

This is a bit odd, given that CFHEP’s whole purpose is to compose a plan for the next generation of colliders and persuade China’s government to implement it. It was founded, with heavy involvement from non-Chinese physicists including its director Nima Arkani-Hamed, with that express purpose in mind. And since several of the quotes in the article come from Yifang Wang, director of IHEP and member of the advisory board of CFHEP, it’s highly unlikely that this isn’t CFHEP’s plan.

So what’s going on here? On one level, it could be a problem on the journalists’ side. News editors love to rewrite headlines to be more misleading and click-bait-y, and claiming that China is definitely going to build a collider draws much more attention than pointing out the plans of a specialized think tank. I hope that it’s just something like that, and not the sort of casual racism that likes to think of China as a single united will. Similarly, I hope that the journalists involved just didn’t dig deep enough to hear about CFHEP, or left it out to simplify things, because there is a somewhat darker alternative.

CFHEP’s goal is to convince the Chinese government to build a collider, and what better way to do that than to present them with a fait accompli? If the public thinks that this is “China’s” plan, that wheels are already in motion, wouldn’t it benefit the Chinese government to play along? Throw in a few sweet words about the merits of international collaboration (a big part of the strategy of CFHEP is to bring international scientists to China to show the sort of community a collider could attract) and you’ve got a winning argument, or at least enough plausibility to get US and European funding agencies in a competitive mood.

This…is probably more cynical than what’s actually going on. For one, I don’t even know whether this sort of tactic would work.

Do these guys look like devious manipulators?

Indeed, it might just be a journalistic omission, part of a wider tendency of science journalists to focus on big projects and ignore the interesting part, the nitty-gritty things that people do to push them forward. It’s a shame, because people are what drive the news forward, and as long as science is viewed as something apart from real human beings people are going to continue to mistrust and misunderstand it.

Either way, one thing is clear. The public deserves to hear a lot more about CFHEP.

Look what I made!

In a few weeks, I’ll be giving a talk for Stony Brook’s Graduate Awards Colloquium, to an audience of social science grad students and their parents.

One of the most useful tools when talking to people in other fields is a shared image. You want something from your field that they’ve seen, that they’re used to, that they’ll recognize. Building off of that kind of thing can be a great way to communicate.

If there’s one particle physics image that lots and lots of people have seen, it’s the Standard Model. Generally, it’s organized into charts like this:

[Image: the Standard Model of Elementary Particles chart]

I thought that if people saw a chart like that, but for N=4 super Yang-Mills, it might make the theory seem a bit more familiar. N=4 super Yang-Mills has a particle much like the Standard Model’s gluon with spin 1, paired with four gluinos, particles that are sort of but not really like quarks with spin 1/2, and six scalars, particles whose closest analogue in the Standard Model is the Higgs with spin 0.

In N=4 super Yang-Mills, none of these particles have any mass, since if supersymmetry isn’t “broken” all particles have the same mass. So where mass is written in the Standard Model table, I can just put zero. The table I linked also gives the electric charge of each particle. That doesn’t really mean anything for N=4 super Yang-Mills. It isn’t a theory that tries to describe the real world, so there’s no direct equivalent to a real-world force like electromagnetism. Since everything in the theory has to have the same charge, again due to supersymmetry, I can just list all of their “electric charges” as zero.

Putting it all together, I get the diagram below. The theory has eleven particles in total, so it won’t fit into a nice neat square. Still, this should be more familiar than most of the ways I could present things.

[Image: N=4 super Yang-Mills particle content chart]
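For the curious, the chart’s content is simple enough to write out as data. A minimal sketch, with labels of my own choosing:

```python
# The chart's content, written out as data: one gluon, four gluinos,
# six scalars, all massless and all with zero "electric charge".
# The labels here are my own, purely for illustration.

particles = (
    [{"name": "gluon", "spin": 1.0, "mass": 0, "charge": 0}]
    + [{"name": f"gluino {i}", "spin": 0.5, "mass": 0, "charge": 0}
       for i in range(1, 5)]
    + [{"name": f"scalar {i}", "spin": 0.0, "mass": 0, "charge": 0}
       for i in range(1, 7)]
)

assert len(particles) == 11  # 1 + 4 + 6: too many for a neat square

for entry in particles:
    print(f'{entry["name"]:10}  spin {entry["spin"]}  '
          f'mass {entry["mass"]}  charge {entry["charge"]}')
```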

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs Boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why, gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word “super” in front of something almost always works. If that fails, it’s time for more specific conventions: to name the partner of an existing particle, if the new particle is a boson, just add “s-” (for “super”, or “scalar”, apparently) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add “-ino” to the end, getting something like a gluino if you start with a gluon. If you’ve heard of neutrinos, you may know that “neutrino” means “little neutral one”. You might perfectly rationally expect that “gluino” means “little gluon”, if you had any belief that physicists name things logically. We don’t. A gluino is called a gluino because it’s a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what “neutrino” actually means.

Pictured: the superpartner of Nidoran?

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probabilities using brackets of two quantum states, \langle a | b\rangle. What if you need to separate the two states, \langle a| and |b\rangle? Then you’ve got a “bra” and a “ket”!

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for “Three quarks for Muster Mark!” While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent and b) this isn’t exactly the most important or memorable line in the book. So Gell-Mann wasn’t so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The names in the final, worst category, though, don’t even have good stories going for them. They are the names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.

A Question of Audience

I’ve been thinking a bit about science communication recently.

One of the most important parts of communicating science (or indeed, communicating anything) is knowing your audience. Much of the time if a piece is flawed, it’s flawed because the author didn’t have a clear idea of who they’re talking to.

A persistent worry among people who communicate science to the public is that we’re really just talking to ourselves. If all the people praising you for your clear language are scientists, then maybe it’s time to take a step back and think about whether you’re actually being understood by anyone else.

This blog’s goal has always been to communicate science to the general public, and most of my posts are written with as little background assumed as possible. That said, I sometimes wonder whether that’s actually the audience I’m reaching.
WordPress has a handy feature that lets me track which links people click on to get to my blog, which gives me a rough way to gauge my audience.

When a new post goes up, I get around ten to twenty clicks from Facebook. Those are people I know, which for the most part these days means physicists. I get a couple clicks from Twitter, where my followers are a mix of young scientists, science journalists, and amateurs interested in science. On WordPress, my followers are also a mix of specialists and enthusiasts. Most interesting, to me at least, are the followers who get to my blog via Google searches. Naturally, they come in regardless of whether I have a new post or not, adding an extra twenty-five or so views every day. Judging by the sites (google.fr, google.ca) these people come from all over the world, and judging by their queries they run from physics PhD students to people with no physics knowledge whatsoever.

Overall then, I think I’m doing a pretty good job getting the word out. As my site’s Google rankings improve, more non-physicists will read what I have to say. It’s a diverse audience, but I think I’m up to the challenge.

Editors, Please Stop Misquoting Hawking

If you’ve been following science news recently, you’ve probably heard the apparently alarming news that Stephen Hawking has turned his back on black holes, or that black holes can actually be escaped, or…how about I just show you some headlines:

[Image: Fox News headline]

[Image: Nature headline]

[Image: Yahoo headline]

Now, Hawking didn’t actually say that black holes don’t exist, but while there are a few good pieces on the topic, in many cases the real message has gotten lost in the noise.

From Hawking’s paper:

[Image: excerpt from Hawking’s paper]

What Hawking is proposing is that the “event horizon” around a black hole, rather than being an absolute permanent boundary from which nothing can escape, is a more temporary “apparent” horizon, the properties of which he goes on to describe in detail.

Why is he proposing this? It all has to do with the debate over black hole firewalls.

The black hole firewall paradox, raised a year and a half ago in a paper by Polchinski and colleagues, centers on contradictory predictions from general relativity and quantum mechanics. General relativity predicts that an astronaut falling past a black hole’s event horizon will notice nothing particularly odd about the surrounding space, but that once past the event horizon none of the “information” that specifies the astronaut’s properties can escape to the outside world. Quantum mechanics, on the other hand, predicts that information cannot be truly lost. The combination appears to suggest something radical: a “firewall” of high energy radiation around the event horizon, carrying information from everything that fell into the black hole in the past, so powerful that it would burn our hypothetical astronaut to a crisp.

Since then, a wide variety of people have made one proposal or another, either attempting to avoid the seemingly preposterous firewall or to justify and further explain it. The reason the debate is so popular is because it touches on some of the fundamental principles of quantum mechanics.

Now, as I have pointed out before, I’m not a good person to ask about the fundamental principles of quantum mechanics. (Incidentally, I’d love it if some of the more quantum information or general relativity-focused bloggers would take a more substantial crack at this! Carroll, Preskill, anyone?) What I can talk about, though, is hype.

All of the headlines I listed take Hawking’s quote out of context, but not all of the articles do. The problem isn’t so much the journalists, as the editors.

One of an editor’s responsibilities is to take articles and give them titles that draw in readers. The editor wants a title that will get people excited, make them curious, and most importantly, get them to click. While a journalist won’t have any particular incentive to improve ad revenue, the same cannot be said for an editor. Thus, editors will often rephrase the title of an article in a way that makes the whole story seem more shocking.

Now that, in itself, isn’t a problem. I’ve used titles like that myself. The problem comes when the title isn’t just shocking, but misleading.

When I call astrophysics “impossible”, nobody is going to think I mean it literally. The title is petulant and ridiculous enough that no-one would take it at face value, but still odd enough to make people curious. By contrast, when you say that Hawking has “changed his mind” about black holes or said that “black holes do not exist”, there are people who will take that at face value as supporting their existing beliefs, as the Borowitz Report humorously points out. These people will go off thinking that Hawking really has given up on black holes. If the title confirms their beliefs enough, people might not even bother to read the article. Thus, by using an actively misleading title, you may actually be decreasing clicks!

It’s not that hard to write a title that’s both enough of a hook to draw people in and won’t mislead. Editors of the world, you’re well-trained writers, certainly much better than me. I’m sure you can manage it.

There really is some interesting news here, if people had bothered to look into it. The firewall debate has been going on for a year and a half, and while Hawking isn’t the universal genius the media occasionally depicts he’s still the world’s foremost expert on the quantum properties of black holes. Why did he take so long to weigh in? Is what he’s proposing even particularly new? I seem to remember people discussing eliminating the horizon in one way or another (even “naked” singularities) much earlier in the firewall debate…what makes Hawking’s proposal novel and different?

This is the sort of thing you can use to draw in interest, editors of the world. Don’t just write titles that cause ignorant people to roll their eyes and move on, instead, get people curious about what’s really going on in science! More ad revenue for you, more science awareness for us, sounds like a win-win!

How (Not) to Sum the Natural Numbers: Zeta Function Regularization

1+2+3+4+5+6+\ldots=-\frac{1}{12}

If you follow Numberphile on YouTube or Bad Astronomy on Slate you’ve already seen this counter-intuitive sum written out. Similarly, if you follow those people or Scientopia’s Good Math, Bad Math, you’re aware that the way that sum was presented by Numberphile in that video was seriously flawed.

There is a real sense in which adding up all of the natural numbers (the numbers 1, 2, 3…) really does give you minus one twelfth, despite all the reasons this should be impossible. However, there is also a real sense in which it does not, and cannot, do any such thing. To explain this, I’m going to introduce two concepts: complex analysis and regularization.

This discussion is not going to be mathematically rigorous, but it should give an authentic and accurate view of where these results come from. If you’re interested in the full mathematical details, a later discussion by Numberphile should help, and the mathematically confident should read Terence Tao’s treatment from back in 2010.

With that said, let’s talk about sums! Well, one sum in particular:

\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+\frac{1}{6^s}+\ldots = \zeta(s)

If s is greater than one, then each term in this infinite sum gets smaller and smaller fast enough that you can add them all up and get a number. That number is referred to as \zeta(s), the Riemann Zeta Function.

So what if s is smaller than one?

The infinite sum that I described doesn’t converge for s less than one. Add it up in any reasonable way, and it just approaches infinity. Put another way, the sum is not properly defined. But despite this, \zeta(s) is not infinite for s less than one!
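Both behaviors are easy to check numerically. A quick sketch (the helper name here is mine):

```python
import math

# Partial sums of 1/1**s + 1/2**s + ... for two values of s.  For s = 2
# the sums settle toward zeta(2) = pi**2 / 6, while for s = -1 (which is
# just 1 + 2 + 3 + ...) they grow without bound.

def partial_sum(s, n_terms):
    return sum(n ** (-s) for n in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_sum(2, n))   # creeps up toward pi**2 / 6 = 1.6449...

for n in (10, 100, 1000):
    print(n, partial_sum(-1, n))  # 55, 5050, 500500: no convergence in sight
```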

Now, you might object: we only defined the Riemann Zeta Function for s greater than one. How do we know anything at all about it for s less than one?

That is where complex analysis comes in. Complex analysis sounds like a made-up term for something unreasonably complicated, but it’s quite a bit more approachable when you know what it means. Analysis is the type of mathematics that deals with functions, infinite series, and the basis of calculus. It’s often contrasted with Algebra, which usually considers mathematical concepts that are discrete rather than smooth (this definition is a huge simplification, but it’s not very relevant to this post). Complex means that complex analysis deals with functions, not of everyday real numbers, but of complex numbers, or numbers with an imaginary part.

So what does complex analysis say about the Riemann Zeta Function?

One of the most impressive results of complex analysis is the discovery that if a function of a complex number is sufficiently smooth (the technical term is analytic) then it is very highly constrained. In particular, if you know how the function behaves over an area (technical term: open set), then you know how it behaves everywhere else!

If you’re expecting me to explain why this is true, you’ll be disappointed. This is serious mathematics, and serious mathematics isn’t the sort of thing you can give the derivation for in a few lines. It takes as much effort and knowledge to replicate a mathematical result as it does to replicate many lab results in science.

What I can tell you is that this sort of approach crops up in many places, and is part of a general theme. There is a lot you can tell about a mathematical function just by looking at its behavior in some limited area, because mathematics is often much more constrained than it appears. It’s the same sort of principle behind the work I’ve been doing recently.

In the case of the Riemann Zeta Function, we have a definition for s greater than one. As it turns out, this definition still works if s is a complex number, as long as the real part of s is greater than one. From this information alone (the value of the Riemann Zeta Function over a large area, half of the complex plane), complex analysis tells us its value for every other number. In particular, it tells us this:

\zeta(-1)= -\frac{1}{12}

If the Riemann Zeta Function is consistently defined for every complex number, then it must have this value when s is minus one.
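If you’d like numerical evidence for this value, one textbook route (a sketch of the idea, not the full complex-analysis argument) goes through the alternating cousin of the sum:

```python
# One textbook route to zeta(-1) = -1/12, checkable numerically.  The
# alternating series 1 - 2 + 3 - 4 + ... can be "Abel summed" by damping
# it with a factor x**n and letting x approach 1, where it tends to 1/4.
# The identity (1 - 2**(1 - s)) * zeta(s) = 1/1**s - 1/2**s + 1/3**s - ...
# then relates that alternating value to zeta itself: at s = -1 the
# prefactor is 1 - 4 = -3, so zeta(-1) = (1/4) / (-3) = -1/12.

def abel_sum_alternating(x, n_terms=100_000):
    """Partial sum of (-1)**(n+1) * n * x**n, convergent for 0 <= x < 1."""
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, n_terms + 1))

for x in (0.9, 0.99, 0.999):
    print(x, abel_sum_alternating(x))  # closed form x / (1 + x)**2, tending to 1/4

eta_at_minus_one = 0.25  # the Abel-summed value of 1 - 2 + 3 - 4 + ...
zeta_at_minus_one = eta_at_minus_one / (1 - 2 ** (1 - (-1)))
print(zeta_at_minus_one)  # -1/12 = -0.0833...
```

Neither step here is the definition complex analysis actually uses, but together they give a concrete reason why the value comes out to -1/12 and not something else.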

If we still trusted the sum definition for this value of s, we could plug in -1 and get

 1+2+3+4+5+6+\ldots=-\frac{1}{12}

Does that make this statement true? Sort of. It all boils down to a concept from physics called regularization.

In physics, we know that in general there is no such thing as infinity. With a few exceptions, nothing in nature should be infinite, and finite evidence (without mathematical trickery) should never lead us to an infinite conclusion.

Despite this, occasionally calculations in physics will give infinite results. Almost always, this is evidence that we are doing something wrong: we are not thinking hard enough about what’s really going on, or there is something we don’t know or aren’t taking into account.

Doing physics research isn’t like taking a physics class: sometimes, nobody knows how to do the problem correctly! In many cases where we find infinities, we don’t know enough about “what’s really going on” to correct them. That’s where regularization comes in handy.

Regularization is the process by which an infinite result is replaced with a finite result (made “regular”), in a way so that it keeps the same properties. These finite results can then be used to do calculations and make predictions, and so long as the final predictions are regularization independent (that is, the same as if you had done a different regularization trick instead), they are legitimate.

In string theory, one way to compute the required dimensions of space and time ends up giving you an infinite sum, a sum that goes 1+2+3+4+5+…. In context, this result is obviously wrong, so we regularize it. In particular, we say that what we’re really calculating is the Riemann Zeta Function, which we happen to be evaluating at -1. Then we replace 1+2+3+4+5+… with -1/12.

Now remember when I said that getting infinities is a sign that you’re doing something wrong? These days, we have a more rigorous way to do this same calculation in string theory, one that never forces us to take an infinite sum. As expected, it gives the same result as the old method, showing that the old calculation was indeed regularization independent.
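You can even watch a different regularization produce the same answer yourself. The sketch below uses an exponential cutoff, one of the “smoothed sums” in Tao’s treatment mentioned earlier: damp the divergent sum, subtract off the divergent piece, and read off the finite remainder:

```python
import math

# An exponential-cutoff regularization: damp 1 + 2 + 3 + ... by a factor
# exp(-n * eps).  The damped sum behaves like 1/eps**2 - 1/12 + O(eps**2),
# so subtracting the divergent 1/eps**2 piece leaves the regularized
# value -1/12 as eps shrinks.

def damped_sum(eps, n_terms=200_000):
    return sum(n * math.exp(-n * eps) for n in range(1, n_terms + 1))

for eps in (0.1, 0.01):
    remainder = damped_sum(eps) - 1 / eps ** 2
    print(eps, remainder)  # settles toward -1/12 = -0.0833... as eps shrinks
```

The fact that this cutoff trick and the zeta-function route agree on -1/12 is a small example of regularization independence at work.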

Sometimes we don’t have a better way of doing the calculation, and that’s when regularization techniques come in most handy. A particular family of tricks called renormalization is quite important, and I’ll almost certainly discuss it in a future post.

So can you really add up all the natural numbers and get -1/12? No. But if a calculation tells you to add up all the natural numbers, and it’s obvious that the result can’t be infinite, then it may secretly be asking you to calculate the Riemann Zeta Function at -1. And that, as we know from complex analysis, is indeed -1/12.

Update on the Amplituhedron

A while back I wrote a post on the Amplituhedron, a type of mathematical object found by Nima Arkani-Hamed and Jaroslav Trnka that can be used to do calculations of scattering amplitudes in planar N=4 super Yang-Mills theory. (Scattering amplitudes are formulas used to calculate probabilities in particle physics, from the probability that an unstable particle will decay to the probability that a new particle could be produced by a collider.) Since then, they have published two papers on the topic, the most recent of which came out the day before New Year’s Eve. These papers laid out the amplituhedron concept in some detail, and answered a few lingering questions. The latest paper focused on one particular formula, the probability that two particles bounce off each other. In discussing this case, the paper serves two purposes:

1. Demonstrating that Arkani-Hamed and Trnka did their homework.

2. Showing some advantages of the amplituhedron setup.

Let’s talk about them one at a time.

Doing their homework

There’s already a lot known about N=4 super Yang-Mills theory. In order to propose a new framework like the amplituhedron, Arkani-Hamed and Trnka need to show that the new framework can reproduce the old knowledge. Most of the paper is dedicated to doing just that. In several sections Arkani-Hamed and Trnka show that the amplituhedron reproduces known properties of the amplitude, like the behavior of its logarithm, its collinear limit (the situation when two momenta in the calculation become parallel), and, of course, unitarity.

What, you heard the amplituhedron “removes” unitarity? How did unitarity get back in here?

This is something that has confused several commenters, both here and on Ars Technica, so it bears some explanation.

Unitarity is the principle that enforces the laws of probability. In its simplest form, unitarity requires that all probabilities for all possible events add up to one. If this seems like a pretty basic and essential principle, it is! However, it and locality (the idea that there is no true “action at a distance”, that particles must meet to interact) can be problematic, causing paradoxes for some approaches to quantum gravity. Paradoxes like these inspired Arkani-Hamed to look for ways to calculate scattering amplitudes that don’t rely on locality and unitarity, and with the amplituhedron he succeeded.

However, just because the amplituhedron doesn’t rely on unitarity and locality, doesn’t mean it violates them. The amplituhedron, for all its novelty, still calculates quantities in N=4 super Yang-Mills. N=4 super Yang-Mills is well understood, it’s well-behaved and cuddly, and it obeys locality and unitarity.

This is why the amplituhedron is not nearly as exciting as a non-physicist might think. The amplituhedron, unlike most older methods, isn’t based on unitarity and locality. However, the final product still has to obey unitarity and locality, because it’s the same final product that others calculate through other means. So it’s not as if we’ve completely given up on basic principles of physics.

Not relying on unitarity and locality is valuable. For those who research scattering amplitudes, it has often been useful to try to “eliminate” one principle or another from our calculations. 20 years ago, avoiding Feynman diagrams was the key to finding dramatic simplifications. Now, many different approaches try to sidestep different principles. (For example, while the amplituhedron calculates an integrand and leaves a final integral to be done, I’m working on approaches that never employ an integrand.)

If we can avoid relying on some “basic” principle, that’s usually good evidence that the principle might be a consequence of something even more basic. By demonstrating how unitarity can arise from the amplituhedron, Arkani-Hamed and Trnka have shown that a seemingly basic principle can come out of a theory that doesn’t impose it.

Advantages of the Amplituhedron

Not all of the paper is devoted to comparisons with old results and principles, though. A few sections instead investigate novel territory, and in doing so show some of the advantages and disadvantages of the amplituhedron.

Last time I wrote on this topic, I was unclear on whether the amplituhedron was more efficient than existing methods. At this point, it appears that it is not. While the formula that the amplituhedron computes has been found by other methods up to seven loops, the amplituhedron itself can only get up to three loops or so in practical cases. (Loops are a way that calculations are classified in particle physics. More loops means a more complex calculation, and a more precise final result.)
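For a rough sense of where loops come from (this is standard perturbation theory, not anything specific to the amplituhedron): an amplitude is computed as a series in the coupling constant, with each loop order contributing a further, smaller correction. Schematically (exact powers of the coupling depend on the process):

```latex
% Perturbative expansion of an amplitude A in the coupling g.
% A^{(L)} is the L-loop contribution: higher L means a harder calculation,
% but including it gives a more precise final answer.
A = A^{(0)} + g^2 A^{(1)} + g^4 A^{(2)} + \cdots
  = \sum_{L \ge 0} g^{2L} A^{(L)}
```

So “seven loops” means the other methods have reached the term $A^{(7)}$ in this kind of expansion, while the amplituhedron has only been pushed to around $A^{(3)}$ in practice.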

The amplituhedron’s primary advantage is not efficiency, but rather the fact that its mathematical setup makes it straightforward to derive interesting properties for any number of loops desired. As Trnka occasionally puts it, the central accomplishment of the amplituhedron is to find “the question to which the amplitude is the answer”. Being able to phrase this “question” mathematically lets one be very general, which allows one to discover several properties that should hold no matter how complex the rest of the calculation becomes. It also has another implication: if this mathematical question has a complete mathematical answer, that answer could calculate the amplitude for any number of loops. So while the amplituhedron is not more efficient than other methods now, it has the potential to be dramatically more efficient if it can be fully understood.

All that said, it’s important to remember that the amplituhedron is still limited in scope. Currently, it applies to a particular theory, one that doesn’t (and isn’t meant to) describe the real world. It’s still too early to tell whether similar concepts can be defined for more realistic theories. If they can be, though, the resulting techniques won’t need to rely on supersymmetry or string theory. One of the most powerful techniques for making predictions for the Large Hadron Collider, the technique of generalized unitarity, was first applied to N=4 super Yang-Mills. While the amplituhedron is limited now, I would not be surprised if it (and its competitors) gives rise to practical techniques ten or twenty years down the line. It’s happened before, after all.

Hype versus Miscommunication, or the Language of Importance

A fellow amplitudes-person was complaining to me recently about the hype surrounding the debate over whether black holes have “firewalls”. The New York Times coverage seemed somewhat excessive for what is, in the end, a fairly technical debate, and its enthusiasm was (rightly?) mocked in several places.

There’s an attitude I often run into among other physicists. The idea is that when hype like this happens, it’s because senior physicists are, at worst, cynically manipulating the press to further their positions or, at best, so naïve that they really see what they’re working on as so important that it deserves hype-y coverage. Occasionally, the blame will instead be put on the journalists, with largely the same ascribed motivations: cynical need for more page views, or naïve acceptance of whatever story they’re handed.

In my opinion, what’s going on there is a bit deeper, and not so easily traceable to any particular person.

In the articles on the (2, 0) theory I put up in the last few weeks, I made some disparaging comments about the tone of this Scientific American blog post. After exchanging a few tweets with the author, I think I have a better idea of what went down.

The problem here is that when you ask a scientist about something they’re excited about, they’re going to tell you why they’re excited about it. That’s what happened here when Nima Arkani-Hamed was interviewed for the above article: he was asked about the (2, 0) theory, and he seems to have tried to convey his enthusiasm with a metaphor that explained how the situation felt to him.

The reason this went wrong and led to a title as off-base and hype-sounding as “the Ultimate Ultimate Theory of Physics” was that we (scientists and science journalists) are taught to express enthusiasm in the language of importance.

There has been an enormous resurgence in science communication in recent years, but it has come with a very us-vs.-them mentality. The prevailing attitude is that the public will only pay attention to a scientific development if they are told that it is important. As such, both scientists and journalists try to make whatever they’re trying to communicate sound central, either to daily life or to our understanding of the universe. When both sides of the conversation are operating under this attitude, it creates an echo chamber where a concept’s importance is blown up many times greater than it really deserves, without either side doing anything other than communicating science in the only way they know.

We all have to step back and realize that most of the time, science isn’t interesting because of its absolute “importance”. Rather, a puzzle is often interesting simply because it is a puzzle. That’s what’s going on with the (2, 0) theory, or with firewalls: they’re hard to figure out, and that’s why we care.

Being honest about this is not going to lose us public backing, or funding. It’s not just scientists who value interesting things because they are challenging. People choose the path of their lives not based on some absolute relevance to the universe at large, but because things make sense in context. You don’t fall in love because the target of your affections is the most perfect person in the universe, you fall in love because they’re someone who can constantly surprise you.

Scientists are in love with what they do. We need to make sure that that, and not some abstract sense of importance, is what we’re communicating. If we do that, if we calm down and make a bit more effort to be understood, maybe we can win back some of the trust that we’ve lost by appearing to promote Ultimate Ultimate Theories of Everything.

Hawking vs. Witten: A Primer

Have you seen the episode of Star Trek where Data plays poker with Stephen Hawking? How about the times he appeared on Futurama or The Simpsons? Or the absurd number of times he has come up in one way or another on The Big Bang Theory?

Stephen Hawking is probably the most recognizable theoretical physicist to laymen. Wheelchair-bound and speaking through a voice synthesizer, Hawking presents a very distinct image, while his work on black holes and the big bang, along with his popular treatments of science in books like A Brief History of Time, has made him synonymous in the public’s mind with genius.

He is not, however, the theoretical physicist most recognizable to physicists themselves. If Sheldon from The Big Bang Theory were a real string theorist he wouldn’t be obsessed with Hawking. He might, however, be obsessed with Edward Witten.

Edward Witten is tall and has an awkwardly high voice (for a sample, listen to the clip here). He’s also smart, smart enough to dabble in basically every subfield of theoretical physics and manage to make important contributions while doing so. He has a knack for digging up ideas from old papers and dredging out the solution to current questions of interest.

And far more than Hawking, he represents a clear target for parody, at least when that parody is crafted by physicists and mathematicians. Abstruse Goose has a nice take on his role in theoretical physics, while his collaboration with another physicist named Seiberg on what came to be known as Seiberg-Witten theory gave rise to the cyber-Witten pun.

If you would look into the mouth of physics-parody madness, let this link be your guide…

So why hasn’t this guy appeared on Futurama? (After all, his dog does!)

Witten is famous among theorists, but he hasn’t done as much as Hawking to endear himself to the general public. He hasn’t written popular science books, and he almost never gives public talks. So when a well-researched show like The Big Bang Theory wants to mention a famous physicist, they go to Hawking, not to Witten, because people know about him. And unless Witten starts interfacing more with the public (or blog posts like this become more common), that’s not about to change.