Monthly Archives: October 2014

Why I Can’t Explain Ghosts: Or, a Review of a Popular Physics Piece

Since today is Halloween, I really wanted to write a post talking about the spookiest particles in physics, ghosts.

And their superpartners, ghost riders.

The problem is, in order to explain ghosts I’d have to explain something called gauge symmetry. And gauge symmetry is quite possibly the hardest topic in modern physics to explain to a general audience.

Deep down, gauge symmetry is the idea that irrelevant extra parts of how we represent things in physics should stay irrelevant. While that sounds obvious, it’s far from obvious how you can go from that to predicting new particles like the Higgs boson.
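
To give the standard textbook example (my illustration, not one from Maldacena’s piece): in electromagnetism, the electric and magnetic fields we actually measure are built out of a “potential”, and shifting that potential by the gradient of an arbitrary function changes nothing measurable at all. That shift is exactly the kind of irrelevant extra part I’m talking about.

```latex
% Electromagnetic gauge transformation: shift the potential A_mu by the
% gradient of an arbitrary function alpha(x)...
A_\mu(x) \;\longrightarrow\; A_\mu(x) + \partial_\mu \alpha(x)
% ...and the field strength, which encodes the measurable electric and
% magnetic fields, doesn't change at all:
F_{\mu\nu} \;=\; \partial_\mu A_\nu - \partial_\nu A_\mu
\;\longrightarrow\; F_{\mu\nu}
```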

Explaining this is tough! Tough enough that I haven’t thought of a good way to do it yet.

Which is why I was fairly stoked when a fellow postdoc pointed out a recent popular physics article by Juan Maldacena, explaining gauge symmetry.

Juan Maldacena is a Big Deal. He’s the guy who figured out the AdS/CFT correspondence, showing that string theory (in a particular hyperbola-shaped space called AdS) and everybody’s favorite N=4 super Yang-Mills theory are secretly the same, a discovery which led to a Big Blue Dot on Paperscape. So naturally, I was excited to see what he had to say.

Big Blue Dot pictured here.

The core analogy he makes is with currencies in different countries. Just like gauge symmetry, currencies aren’t measuring anything “real”: they’re arbitrary conventions, put in place because we don’t have a good way of buying things based on pure “value”. However, also like gauge symmetry, they can have real-life consequences: differences in exchange rates open the door to currency speculation, letting some people make money and others lose it. In Maldacena’s analogy the Higgs field works like a precious metal, making differences in exchange rates manifest as different prices for that metal in different countries.
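
To make the analogy concrete, here’s a toy version in Python (my own sketch, not code from Maldacena’s paper). The exchange rates play the role of the gauge field, every country redefining its currency unit plays the role of a gauge transformation, and the factor you’d gain trading around a closed loop of countries is the invariant quantity, the analogue of a magnetic flux. The country names and numbers are made up for illustration.

```python
# Toy version of the currency analogy (my own illustration, not code from
# Maldacena's paper). Exchange rates act like a gauge field: redefining each
# country's currency unit is a "gauge transformation", and the factor you gain
# (or lose) trading around a closed loop is the gauge-invariant quantity that
# speculators, and physicists, actually care about.
import math
import random

# Exchange rates on the "links" A->B, B->C, C->A (units of B per A, and so on).
rates = {("A", "B"): 1.10, ("B", "C"): 0.95, ("C", "A"): 0.98}


def loop_factor(rates, loop):
    """Multiply the exchange rates around a closed loop of countries."""
    product = 1.0
    for start, end in zip(loop, loop[1:] + loop[:1]):
        product *= rates[(start, end)]
    return product


def rescale_currencies(rates, scale):
    """Each country redefines its currency unit; a rate quoted as 'end' per
    'start' picks up a factor of scale[end] / scale[start]."""
    return {(s, e): r * scale[e] / scale[s] for (s, e), r in rates.items()}


loop = ["A", "B", "C"]
before = loop_factor(rates, loop)

# Arbitrary redefinitions of each currency (say, a 100-to-1 revaluation).
scale = {country: random.uniform(0.01, 100.0) for country in "ABC"}
after = loop_factor(rescale_currencies(rates, scale), loop)

print(f"loop factor before rescaling: {before:.6f}")
print(f"loop factor after  rescaling: {after:.6f}")
assert math.isclose(before, after)  # the "flux" doesn't care about the relabeling
```

However much each country relabels its money, the loop factor, the thing with real (financial or physical) consequences, stays exactly the same. That’s the sense in which the arbitrary convention is “irrelevant” while still leaving something measurable behind.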

It’s a solid analogy, and one that is quite close to the real mathematics of the problem (as the paper’s Appendix goes into detail to show). However, I have some reservations, both about the paper as a whole and about the core analogy.

In general, Maldacena doesn’t do a very good job of writing something publicly accessible. There’s a lot of stilted, academic language, and a lot of use of “we” to do things other than lead the reader through a thought experiment. There’s also a sprinkling of terms that I don’t think the average person will understand; for example, I doubt the average college student knows flux as anything other than a zany card game.

Regarding the analogy itself, I think Maldacena has fallen into the common physicist trap of making an analogy that explains things really well…if you already know the math.

This is a problem I see pretty frequently. I keep picking on this article, and I apologize for doing so, but it’s got a great example of this when it describes supersymmetry as involving “a whole new class of number that can be thought of as the square roots of zero”. That’s a really great analogy…if you’re a student learning about the math behind supersymmetry. If you’re not, it doesn’t tell you anything about what supersymmetry does, or how it works, or why anyone might study it. It relates something unfamiliar to something unfamiliar.
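
(For the curious, and as a gloss of my own rather than anything from that article: the “square roots of zero” are Grassmann numbers, quantities that anticommute with one another, which forces each one to square to zero.)

```latex
% Grassmann numbers: anticommuting quantities used to parametrize
% supersymmetry transformations. Swapping two of them picks up a minus sign...
\theta_1 \theta_2 = -\,\theta_2 \theta_1
% ...so any single one multiplied by itself must vanish:
\theta^2 = -\,\theta^2 \quad\Longrightarrow\quad \theta^2 = 0
```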

I’m worried that Maldacena is doing that in this paper. His setup is mathematically rigorous, but doesn’t say much about the why of things: why do physicists use something like this economic model to understand these forces? How does this lead to what we observe around us in the real world? What’s actually going on, physically? What do particles have to do with dimensionless constants? (If you’re curious about that last one, I like to think I have a good explanation here.)

It’s not that Maldacena ignores these questions; he definitely puts effort into answering them. The problem is that his analogy itself doesn’t really address them. They’re the trickiest part, the part that people need help picturing and framing, the part that would benefit the most from a good analogy. Instead, the core imagery of the piece is wasted on details that don’t really do much for a non-expert.

Maybe I’m wrong about this, and I welcome comments from non-physicists. Do you feel like Maldacena’s account gives you a satisfying idea of what gauge symmetry is?

The Hardest Audience Knows Just Enough to Be Dangerous

You’d think that it would be hard to explain physics to people who know absolutely nothing about physics.

And you might be right, if there were anyone these days who knew absolutely nothing about physics. If someone didn’t know what atoms were, or didn’t know what a physicist was, then yes, it would take quite a while to explain anything more than the basics. But most people know what atoms are, and know what physicists are, and at least have a basic idea that there are things called protons and neutrons and electrons.

And that’s often enough. Starting with a basis like that, I can talk people through the Large Hadron Collider, I can get them to picture Feynman Diagrams, I can explain, roughly, what it is I do.

On the other end, it’s not all that hard to explain what I do to people in my sub-field. Working on the same type of physics is like sharing a language: we have all sorts of terms that make explaining easier. While it’s still possible to trip up and explain too much or too little (a recent talk I gave left out the one part that one member of the audience needed…because everyone else would have gotten nothing out of it), you’re protected by a buffer of mutual understanding.

The hardest talks aren’t for the public, and they aren’t for fellow amplitudes-researchers. They’re for a general physics audience.

If you’re talking to physicists, you can’t start with protons and neutrons. Do that, and your audience is going to get annoyed with you rather quickly. You can’t lean on the basic picture of physics that the general public shares: besides making your audience feel like they’re being talked down to, you won’t manage to say anything substantial. You need to start at a higher level, so that when you do describe what you do, it’s in enough detail that your audience feels like they really understand it.

At the same time, you can’t start with the jargon of your sub-field. If you want to really explain something (and not just have fifteen minutes of background before everyone tunes out) you need to build off of a common understanding.

The tricky part is, that “common understanding” is more elusive than you might think. For example, pretty much every physicist has some familiarity with Quantum Field Theory…but that can mean anything from “uses it every day” to “saw it a couple times back in grad school”. Too much background, and half your audience is bored. Too little, and half your audience is lost. You have to strike the proper balance, giving everyone enough that they walk away satisfied.

There are tricks to make this easier. I’ve noticed that some of the best speakers begin with a clever and unique take on something everyone understands. That way, people in very different fields will still have something they recognize, while people in the same field will still be seeing something new. Of course, the tricky part is coming up with a new example in the first place!

In general, I need to get better at estimating where my audience is. Talking to you guys is fun, but I ought to also practice a “physics voice” for discussions with physicists (as well as grants and applications), and an “amplitudes voice” for fellow specialists. The key to communication, as always, is knowing your audience.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: red LEDs have been around since the ’60s and ’70s, but blue LEDs were only developed in the ’90s, and it took the blue ones to make bright, highly efficient LED-based white light sources possible. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.

Shiny, though

It took a conversation with another PI postdoc to show me one way I could comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized, I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand anything with quantum properties, not just fundamental particles but materials as well. These tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training physics grad students (and from having been physics grad students themselves). The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

Love It or Hate It, Don’t Fear the Multiverse

“In an infinite universe, anything is possible.”

A nice maxim for science fiction, perhaps. But it probably doesn’t sound like productive science.

A growing number of high profile scientists and science popularizers have come out in favor of the idea that there may exist a “multiverse” of multiple universes, and that this might explain some of the unusual properties of our universe. If there are multiple universes, each with different physical laws, then we must exist in one of the universes with laws capable of supporting us, no matter how rare or unlikely such a universe is. This sort of argument is called anthropic reasoning.

(If you’re picky about definitions and don’t like the idea of more than one universe, think instead of a large universe with many different regions, each one separated from the others. There are some decent physics-based reasons to suppose we live in such a universe.)

Not to mention continuity reasons.

Why is anyone in favor of this idea? It all goes back to the Higgs.

The Higgs field interacts with other particles, giving them mass. What most people don’t mention is that the effect, in some sense, goes both ways. Because the Higgs interacts with other particles, the mass of the Higgs is also altered. This alteration is large, much larger than the observed mass of the Higgs. (In fact, in a sense it’s infinite!)

In order for the Higgs to have the mass we observe, then, something has to cancel out these large corrections. That cancellation can either be a coincidence, or there can be a reason for it.
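
Schematically, the standard back-of-the-envelope way to frame the problem (a sketch, not a precise calculation) looks like this: the Higgs mass we measure is what’s left over after adding a huge quantum correction to a huge “bare” value, and without some mechanism relating the two, their near-perfect cancellation looks like an absurd coincidence. Here g stands for a generic coupling of the Higgs to heavy particles, and Lambda for the energy scale where new physics has to show up.

```latex
% Schematic form of the naturalness problem: the physical Higgs mass-squared
% is a bare value plus quantum corrections that grow with the cutoff scale
% Lambda.
m_H^2 \;=\; m_{\text{bare}}^2 \;+\; \delta m^2 ,
\qquad
\delta m^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda^2
% If Lambda is taken near the Planck scale, the two terms on the right have to
% cancel to something like one part in 10^{32} to leave the observed Higgs mass
% of about 125 GeV. A symmetry, like supersymmetry, would force the dangerous
% Lambda^2 pieces to cancel automatically.
```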

The trouble is, we’re running out of good reasons. One of the best was supersymmetry, the idea that each particle has a partner with tightly related properties. But if supersymmetry were going to save the day, we probably would have detected some of those partners at the Large Hadron Collider by now. More generally, it can be argued that almost all possible “good reasons” require some new particle to be found at the LHC.

If there are no good reasons, then we’re stuck with a coincidence. (This is often referred to as the Naturalness Problem in particle physics.) And it’s this uncomfortable coincidence that has driven prominent physicists to the arms of the multiverse.

There’s a substantial backlash, though. Many people view the multiverse as a cop-out. Some believe it to be even more toxic than that: if there’s a near-infinite number of possible universes then in principle any unusual feature of our universe could be explained by anthropic reasoning, which sounds like it could lead to the end of physics as we know it.

You can disdain the multiverse as a cop-out, but, as I’ll argue here, you shouldn’t fear it. Those who think the multiverse will destroy physics are fundamentally misunderstanding the way physics research works.

The key thing to keep in mind is that almost nobody out there prefers the multiverse. When a prominent physicist supports the multiverse, that doesn’t mean they’re putting aside productive work on other solutions to the problem. In general, it means they don’t have other solutions to the problem. Supporting the multiverse can’t suppress ideas they were never going to have in the first place.

And indeed, many of these people are quite supportive of alternatives to the multiverse. I’ve seen Nima Arkani-Hamed talk about the multiverse, and he generally lists a number of other approaches (some quite esoteric!) that he has worked on without making progress, and encourages the audience to look into them.

Physics isn’t a zero-sum game, nor is it ruled by a few prominent people. If a young person has a good idea about how to explain something without the multiverse, they’re going to have all the support and recognition that such an idea deserves.

What the multiverse adds is another track, another potentially worthwhile line of research. Surprising as it may seem, the multiverse doesn’t automatically answer every question. It might not even answer the question of the mass of the Higgs! All that the existence of a multiverse tells us is that we should find ourselves somewhere intelligent life can exist…but if intelligent life is more likely to exist in a universe very different from ours, then we’re back to square one. There’s a lot of research involved in figuring out just what the multiverse implies, research by people who wouldn’t have been working on this sort of problem if the idea of the multiverse hadn’t been proposed.

That’s the key take-away message here. The multiverse may be wrong, but just considering it isn’t going to destroy physics. Rather, it’s opened up new avenues of research, widening the community of those trying to solve the Naturalness Problem. It may well be a cop-out for individuals, but science as a whole doesn’t have cop-outs: there’s always room for someone with a good idea to sweep away the cobwebs and move things forward.