Monthly Archives: November 2015

Knowing Too Little, Knowing Too Much

(Commenter nueww has asked me to comment on the flurry of blog posts around an interview with Lance Dixon that recently went up on the SLAC website. I’m not going to comment on it until I have a chance to talk with Lance, beyond saying that this is a remarkable amount of attention paid to a fairly workaday organizational puff piece.)

I’ve been in Oregon this week, giving talks at Oregon State and at the University of Oregon. After my talk at Brown in front of some of the world’s top experts in my subfield, I’ve had to adapt quite a bit for these talks. Oregon State doesn’t have any particle theorists at all, while at the University of Oregon I gave a seminar for their Institute of Theoretical Science, which contains a mix of researchers ranging from particle theorists to theoretical chemists.

Guess which talk was harder to give?

If you guessed the UofO talk, you’re right. At Oregon State, I had a pretty good idea of everyone’s background. I knew these were people who would be pretty familiar with quantum mechanics, but probably wouldn’t have heard of Feynman diagrams. From that, I could build a strategy, and end up giving a pretty good talk.

At the University of Oregon, if I aimed for the particle physicists in the audience, I’d lose the chemists. So I should aim for the chemists, right?

That has its problems too. I've talked about some of them: the risk that the experts in your audience feel talked down to, or that you don't cover the more important parts of your work. But there's another problem, one that I noticed when I tried to prepare this talk: knowing too little can lead to misunderstandings, but so can knowing too much.

What would happen if I geared the talk completely to the chemists? Well, I’d end up being very vague about key details of what I did. And for the chemists, that would be fine: they’d get a flavor of what I do, and they’d understand not to read any more into it. People are pretty good at putting something in the “I don’t understand this completely” box, as long as it’s reasonably clearly labeled.

That vagueness, though, would be a disaster for the physicists in the audience. It’s not just that they wouldn’t get the full story: unless I was very careful, they’d end up actively misled. The same vague descriptions that the chemists would accept as “flavor”, the physicists would actively try to read for meaning. And with the relevant technical terms replaced with terms the chemists would recognize, they would end up with an understanding that would be actively wrong.

In the end, I gave a talk mostly geared to the physicists, but with some background and vagueness to give the chemists some value. I don't feel like I did as good a job as I would have liked, and neither group really got as much out of the talk as I wanted them to. Talking to a mixed audience is tricky, and it's something I'm still learning how to do.

Pi in the Sky Science Journalism

You’ve probably seen it somewhere on your Facebook feed, likely shared by a particularly wide-eyed friend: pi found hidden in the hydrogen atom!

[Screenshots of three headlines announcing the discovery of pi in the hydrogen atom]

From the headlines, this sounds like some sort of kabbalistic nonsense, like finding the golden ratio in random pictures.

Read the actual articles, and the story is a bit more reasonable. The last two I linked above seem to be decent takes on it; they’re just saddled with ridiculous headlines. As usual, I blame the editors. This time, they’ve obscured an interesting point about the link between physics and mathematics.

So what does “pi found hidden in the hydrogen atom” actually mean?

It doesn’t mean that there’s some deep importance to the number pi in nature, beyond its relevance in mathematics in general. The reason that pi is showing up here isn’t especially deep.

It isn’t trivial either, though. I’ve seen a few people whose first response to this article was “of course they found pi in the hydrogen atom, hydrogen atoms are spherical!” That’s not what’s going on here. The connection isn’t about the shape of the hydrogen atom, it’s about one particular technique for estimating its energy.

Carl Hagen, a physicist at the University of Rochester, was teaching a quantum mechanics class covering a well-known approximation technique called the variational principle. Specifically, he had his students apply the technique to the hydrogen atom. The nice thing about the hydrogen atom is that it’s one of the few atoms simple enough that its energy levels can be found exactly. The exact calculation can then be compared to the approximation.
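To give a flavor of the kind of comparison involved, here is a minimal sketch (my own illustration, not the actual classroom exercise): a variational estimate of the hydrogen ground-state energy using a Gaussian trial wavefunction, in atomic units, where the exact answer is -1/2 hartree. The formula for E(alpha) is the standard textbook result for this trial function.

```python
import math

def energy(alpha):
    """Variational energy E(alpha) for the trial wavefunction
    psi(r) = exp(-alpha * r^2) applied to hydrogen (atomic units):
    kinetic term 3*alpha/2, potential term -2*sqrt(2*alpha/pi)."""
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

# Crude grid search over alpha; the known optimum is alpha = 8/(9*pi).
best = min(energy(a / 1000.0) for a in range(1, 2000))

print(best)                       # about -0.4244, i.e. -4/(3*pi)
print(-4.0 / (3.0 * math.pi))     # the analytic minimum for this trial function

# The variational principle guarantees the estimate sits above the
# exact ground-state energy of -0.5 hartree.
assert best > -0.5
```

With this crude trial function the estimate is off by about 15%; the surprise Hagen noticed was how much better the method did than expected for higher-energy states.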

What Hagen noticed was that the approximation was surprisingly good, especially for high-energy states where it wasn’t expected to be. In the end, working with Rochester math professor Tamar Friedmann, he figured out that the variational principle was making use of a particular identity satisfied by a class of mathematical functions, called Gamma functions, that are quite common in physics. Using those Gamma functions, the two researchers were able to re-derive what turned out to be a 17th-century formula for pi, arriving at a much cleaner proof of that formula than had been known previously.
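The formula in question is the Wallis product of 1655. As a quick illustration (my own, not the researchers’ derivation), here is a sketch showing that the partial Wallis products, and an equivalent form written in terms of Gamma functions, both converge to pi/2:

```python
import math

def wallis_direct(n):
    """Partial Wallis product: prod_{k=1..n} (2k/(2k-1)) * (2k/(2k+1))."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2.0 * k) ** 2 / ((2.0 * k - 1) * (2.0 * k + 1))
    return p

def wallis_gamma(n):
    """The same partial product rewritten with Gamma functions:
    16^n * Gamma(n+1)^4 / (Gamma(2n+1)^2 * (2n+1)),
    evaluated via lgamma to avoid overflow at large n."""
    log_p = (n * math.log(16.0)
             + 4.0 * math.lgamma(n + 1)
             - 2.0 * math.lgamma(2 * n + 1)
             - math.log(2 * n + 1))
    return math.exp(log_p)

# Both forms converge (slowly) to pi/2, so twice either approaches pi.
print(2 * wallis_direct(100000))
print(2 * wallis_gamma(100000))
```

The Gamma-function form is the bridge: identities among Gamma functions show up constantly in physics calculations, which is how a formula for pi can fall out of a quantum mechanics exercise.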

So pi isn’t appearing here because “the hydrogen atom is a sphere”. It’s appearing because pi appears all over the place in physics, and because in general, the same sorts of structures appear again and again in mathematics.

Pi’s appearance in the hydrogen atom is thus not, in itself, very special. What is a little bit special is that, using the hydrogen atom, these folks were able to find a cleaner proof of an old formula for pi, one that mathematicians hadn’t found before.

That, if anything, is the interesting part of this news story, but it’s also part of a broader trend, one in which physicists provide “physics proofs” for mathematical results. One of the more famous accomplishments of string theory is a class of “physics proofs” of this sort, using a principle called mirror symmetry.

The existence of “physics proofs” doesn’t mean that mathematics is secretly constrained by the physical world. Rather, they’re a result of the fact that physicists are interested in different aspects of mathematics, and in general are a bit more reckless in using approximations that haven’t been mathematically vetted. A physicist can sometimes prove something in just a few lines that mathematicians would take many pages to prove, but usually they do this by invoking a structure that would take much longer for a mathematician to define. As physicists, we’re building on the shoulders of other physicists, using concepts that mathematicians usually don’t have much reason to bother with. That’s why it’s always interesting when we find something like the Amplituhedron, a clean mathematical concept hidden inside what would naively seem like a very messy construction. It’s also why “physics proofs” like this can happen: we’re dealing with things that mathematicians don’t naturally consider.

So please, ignore the pi-in-the-sky headlines. Some physicists found a trick, some mathematicians found it interesting, the hydrogen atom was (quite tangentially) involved…and no nonsense needs to be present.

Map Your Dead Ends

I’m at Brown this week, where I’ve been chatting with Mark Spradlin and Anastasia Volovich, two of the founding figures of my particular branch of amplitudeology. Back in 2010 they figured out how to turn this seventeen-page two-loop amplitude:

[Image caption: Why yes, this is one equation that covers seventeen pages. You're lucky I didn't post the eight-hundred page one.]

into a formula that takes up just two lines. This got everyone very excited; it inspired some of my collaborators to do work that would eventually give rise to the Hexagon Functions, my main research project for the past few years.

Unfortunately, when we tried to push this to higher loops, we didn’t get the sort of nice, clean-looking formulas that the Brown team did. Each “loop” is an additional layer of complexity, a series of approximations that get closer to the exact result. And so far, our answers look more like that first image than the second: hundreds of pages with no clear simplifications in sight.

At the time, people wondered whether some simple formula might be enough. As it turns out, you can write down a formula similar to the one found by Spradlin and Volovich, generalized to a higher number of loops. It’s clean, it’s symmetric, it makes sense…and it’s not the right answer.

That happens in science a lot more often than science fans might expect. When you hear about this sort of thing in the news, it always works: someone suggests a nice, simple answer, and it turns out to be correct, and everyone goes home happy. But for every nice simple guess that works, there are dozens that don’t: promising ideas that just lead to dead ends.

One of the postdocs here at Brown worked on this “wrong” formula, and while chatting with him here he asked a very interesting question: why is it wrong? Sure, we know that it’s wrong, we can check that it’s wrong…but what, specifically, is missing? Is it “part” of the right answer in some sense, with some predictable corrections?

As it turns out, this is a very interesting question! We’ve been looking into it, and the “wrong” answer has some interesting relationships with some of our Hexagon Functions. It may have been a “dead end”, but it still could turn out to be a useful one.

A good physics advisor will tell their students to document their work. This doesn’t just mean taking notes: most theoretical physicists will maintain files, in standard journal article format, with partial results. One reason to do this is that, if things work out, you’ll have some of your paper already written. But if something doesn’t work out, you’ll end up with a pdf on your hard drive carefully explaining an idea that didn’t quite work. Physicists often end up with dozens of these files squirreled away on their computers. Put together, they’re a map: a map of dead ends.

There’s a handy thing about having a map: it lets you retrace your steps. Any one of these paths may lead nowhere, but each one will contain some substantive work. And years later, often enough, you end up needing some of it: some piece of the calculation, some old idea. You follow the map, dig it up…and build it into something new.

Using Effective Language

Physicists like to use silly names for things, but sometimes it’s best to just use an everyday word. It can trigger useful intuitions, and it makes remembering concepts easier. What gets confusing, though, is when the everyday word you use has a meaning that’s not quite the same as the colloquial one.

“Realism” is a pretty classic example, where Bell’s elegant use of the term in quantum mechanics doesn’t quite match its common usage, leading to inevitable confusion whenever it’s brought up. “Theory” is such a useful word that multiple branches of science use it…with different meanings! In both cases, the naive meaning of the word is the basis of how it gets used scientifically…just not the full story.

There are two things to be wary of here. First, those of us who communicate science must be sure to point out when a word we use doesn’t match its everyday meaning, guiding readers’ intuitions away from first impressions and toward how the term is actually used in our field. Second, as a reader, you need to be on the lookout for hidden technical terms, especially when you’re reading technical work.

I remember making a particularly silly mistake along these lines. It was early on in grad school, back when I knew almost nothing about quantum field theory. One of our classes was a seminar, structured so that each student would give a talk on some topic that could be understood by the whole group. Unfortunately, some grad students with deeper backgrounds in theoretical physics hadn’t quite gotten the memo.

It was a particular phrase that set me off: “This theory isn’t an effective theory”.

My immediate response was to raise my hand. “What’s wrong with it? What about this theory makes it ineffective?”

The presenter boggled for a moment before responding. “Well, it’s complete up to high energies…it has no ultraviolet divergences…”

“Then shouldn’t that make it even more effective?”

After a bit more of this back-and-forth, we finally cleared things up. As it turns out, “effective field theory” is a technical term! An “effective field theory” is only “effectively” true, describing physics at low energies but not at high energies. As you can see, the word “effective” here is definitely pulling its weight, helping to make the concept understandable…but if you don’t recognize it as a technical term and interpret it literally, you’re going to leave everyone confused!

Over time, I’ve gotten better at identifying when something is a technical term. It really is a skill you can learn: there are different tones people use when speaking, different cadences when writing, a sense of uneasiness that can clue you in to a word being used in something other than its literal sense. Without that skill, you end up worried about mathematicians’ motives for their evil schemes. With it, you’re one step closer to what may be the most important skill in science: the ability to recognize something you don’t know yet.