Category Archives: Science Communication

The “Lies to Children” Model of Science Communication, and The “Amplitudes Are Weird” Model of Amplitudes

Let me tell you a secret.

Scattering amplitudes in N=4 super Yang-Mills don’t actually make sense.

Scattering amplitudes calculate the probability that particles “scatter”: coming in from far away, interacting in some fashion, and producing new particles that travel far away in turn. N=4 super Yang-Mills is my favorite theory to work with: a highly symmetric version of the theory that describes the strong nuclear force. In particular, N=4 super Yang-Mills has conformal symmetry: if you re-scale everything larger or smaller, you should end up with the same predictions.

You might already see the contradiction here: scattering amplitudes talk about particles coming in from very far away…but due to conformal symmetry, “far away” doesn’t mean anything, since we can always re-scale it until it’s not far away anymore!
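To make the tension concrete, here is the rescaling part of conformal symmetry written out (a schematic sketch of dilatations only, not the full conformal algebra):

```latex
% Dilatations: one piece of the conformal group
x^\mu \;\to\; \lambda\, x^\mu , \qquad \lambda > 0 .
% Any separation rescales accordingly:
|x - y| \;\to\; \lambda\, |x - y| .
% So "particles separated by a large distance" is not an invariant
% statement: a dilatation maps any separation to any other.
```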

So when I say that I study scattering amplitudes in N=4 super Yang-Mills, am I lying?

Well…yes. But it’s a useful type of lie.

There’s a concept in science writing called “lies to children”, first popularized in a fantasy novel.

[Image: the cover of The Science of Discworld]

This one.

When you explain science to the public, it’s almost always impossible to explain everything accurately. So much background is needed to really understand most of modern science that conveying even a fraction of it would bore the average audience to tears. Instead, you need to simplify, to skip steps, and even (to be honest) to lie.

The important thing to realize here is that “lies to children” aren’t meant to mislead. Rather, they’re chosen in such a way that they give roughly the right impression, even as they leave important details out. When they told you in school that energy is always conserved, that was a lie: conservation of energy is a consequence of a symmetry in time, and when that symmetry is broken energy doesn’t have to be conserved. But “energy is conserved” is a useful enough rule to let you understand most of everyday life.
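The truth behind this particular lie is Noether’s theorem. In a classical-mechanics sketch (symbols are the standard ones, not anything specific to this post):

```latex
% Noether's theorem (sketch): if the Lagrangian L(q, \dot{q}, t) has no
% explicit time dependence, the Hamiltonian (the energy) is conserved.
H \;=\; \sum_i p_i \dot{q}_i - L , \qquad
\frac{dH}{dt} \;=\; -\frac{\partial L}{\partial t} .
% If L depends explicitly on t (for example, fields in an expanding
% universe), \partial L/\partial t \neq 0 and energy need not be conserved.
```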

In this case, the “lie” that we’re calculating scattering amplitudes is fairly close to the truth. We’re using the same methods that people use to calculate scattering amplitudes in theories where they do make sense, like QCD. For a while, people thought these scattering amplitudes would have to be zero, since anything else “wouldn’t make sense”…but in practice, we found they were remarkably similar to scattering amplitudes in other theories. Now, we have more rigorous definitions for what we’re calculating that avoid this problem, involving objects called polygonal Wilson loops.

This illustrates another principle, one that hasn’t (yet) been popularized by a fantasy novel. I’d like to call it the “amplitudes are weird” principle. Time and again we amplitudes-folks will do a calculation that doesn’t really make sense, find unexpected structure, and go back to figure out what that structure actually means. It’s been one of the defining traits of the field, and we’ve got a pretty good track record with it.

A couple of weeks back, Lance Dixon gave an interview for the SLAC website, talking about his work on quantum gravity. This was immediately jumped on by Peter Woit and Lubos Motl as ammo for the long-simmering string wars. To one extent or another, both tried to read scientific arguments into the piece. This is in general a mistake: it is in the nature of a popularization piece to contain some volume of lies-to-children, and reading a piece aimed below your level can be just as confusing as reading one aimed above it.

In the remainder of this post, I’ll try to explain what Lance was talking about in a slightly higher-level way. There will still be lies-to-children involved; this is a popularization blog, after all. But I should be able to clear up a few misunderstandings. Lubos probably still won’t agree with the resulting argument, but it isn’t the self-evidently wrong one he seems to think it is.

Lance Dixon has done a lot of work on quantum gravity. Those of you who’ve read my old posts might remember that quantum gravity is not so difficult in principle: general relativity naturally leads you to particles called gravitons, which can be treated just like other particles. The catch is that the theory you get by doing this fails to be predictive: one reason why is that you get an infinite number of spurious infinite results, which have to be papered over with an infinite number of arbitrary constants.

Working with these non-predictive theories, however, can still yield interesting results. In the article, Lance mentions the work of Bern, Carrasco, and Johansson. BCJ (as they are abbreviated) have found that calculating a gravity amplitude often just amounts to calculating a (much easier to find) Yang-Mills amplitude and then squaring the right parts. This was originally found in the context of string theory by another three-letter group: Kawai, Lewellen, and Tye (or KLT). In string theory, it’s particularly easy to see how this works, as it’s a basic feature of how string theory represents gravity. However, the string theory relations don’t tell the whole story: in particular, they only show that this squaring procedure makes sense on a classical level. Once quantum corrections come in, there’s no known reason why this squaring trick should continue to work in non-string theories, and yet so far it has. It would be great if we had a good argument for why this trick should keep working, a proof based on string theory or otherwise: for one, it would allow us to be much more confident that our hard work trying to apply it will pay off! But at the moment, this falls solidly under the “amplitudes are weird” principle.
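Schematically, the “squaring” looks like this at low multiplicity (signs, couplings, and factors of i depend on conventions, so treat this as a sketch rather than a precise statement):

```latex
% Three points: the tree-level graviton amplitude is literally a square
% of Yang-Mills amplitudes (up to coupling constants):
M_3(1,2,3) \;=\; A_3(1,2,3)\,\tilde{A}_3(1,2,3) .
% Four points (KLT): a kinematic factor ties two orderings together:
M_4(1,2,3,4) \;=\; -i\, s_{12}\, A_4(1,2,3,4)\,\tilde{A}_4(1,2,4,3) ,
\qquad s_{12} = (p_1 + p_2)^2 .
```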

Using this trick, BCJ and collaborators (frequently including Lance Dixon) have been calculating amplitudes in N=8 supergravity, a highly symmetric version of those naive, non-predictive gravity theories. For this particular theory, the theory you “square” for the above trick is N=4 super Yang-Mills. N=4 super Yang-Mills is special for a number of reasons, but one is that the sorts of infinite results that lose you predictive power in most other quantum field theories never come up. Remarkably, the same appears to be true of N=8 supergravity. We’re still not sure; the relevant calculation is still a bit beyond what we’re capable of. But in example after example, N=8 supergravity seems to be behaving similarly to N=4 super Yang-Mills, and not like people would have predicted from its gravitational nature. Once again, amplitudes are weird, in a way that string theory helped us discover but by no means conclusively predicted.

If N=8 supergravity doesn’t lose predictive power in this way, does that mean it could describe our world?

In a word, no. I’m not claiming that, and Lance isn’t claiming that. N=8 supergravity simply doesn’t have the right sorts of freedom to give you something like the real world, no matter how you twist it. You need a broader toolset (string theory generally) to get something realistic. The reason why we’re interested in N=8 supergravity is not because it’s a candidate for a real-world theory of quantum gravity. Rather, it’s because it tells us something about where the sorts of dangerous infinities that appear in quantum gravity theories really come from.

That’s what’s going on in the more recent paper that Lance mentioned. There, they’re not working with a supersymmetric theory, but with the naive theory you’d get from just trying to do quantum gravity based on Einstein’s equations. What they found was that the infinity you get is in a certain sense arbitrary. You can’t get rid of it, but you can shift it around (infinity times some adjustable constant 😉 ) by changing the theory in ways that aren’t physically meaningful. What this suggests is that the infinite results naive gravity theories give you are arbitrary in a way that hadn’t previously been appreciated.

The inevitable question, though, is why would anyone muck around with this sort of thing when they could just use string theory? String theory never has any of these extra infinities, that’s one of its most important selling points. If we already have a perfectly good theory of quantum gravity, why mess with wrong ones?

Here, Lance’s answer dips into lies-to-children territory. In particular, Lance brings up the landscape problem: the fact that there are 10^500 configurations of string theory that might loosely resemble our world, and no clear way to sift through them to make predictions about the one we actually live in.

This is a real problem, but I wouldn’t think of it as the primary motivation here. Rather, it’s a story people have already heard, one that gestures at a broader issue: the feeling that string theory is excessive.

[Image: Princess Diana’s wedding dress]

Why does this have a Wikipedia article?

Think of string theory like an enormous piece of fabric, and quantum gravity like a dress. You can definitely wrap that fabric around, pin it in the right places, and get a dress. You can in fact get any number of dresses, elaborate trains and frilly togas and all sorts of things. You have to do something with the extra material, though, find some tricky but not impossible stitching that keeps it out of the way, and you have a fair number of choices of how to do this.

From this perspective, naive quantum gravity theories are things that don’t qualify as dresses at all, scarves and socks and so forth. You can try stretching them, but it’s going to be pretty obvious you’re not really wearing a dress.

What we amplitudes-folks are looking for is more like a pencil skirt. We’re trying to figure out the minimal theory that covers the divergences, the minimal dress that preserves modesty. It would be a dress that fits the form underneath it, so we need to understand that form: the infinities that quantum gravity “wants” to give rise to, and what it takes to cancel them out. A pencil skirt is still inconvenient (it’s hard to sit down in, for example), something that can be solved by adding extra material that allows it to bend more. Similarly, fixing these infinities is unlikely to be the full story: there are things called non-perturbative effects that probably won’t be cured. But finding the minimal pencil skirt is still going to tell us something that just pinning a vast stretch of fabric wouldn’t.

This is where “amplitudes are weird” comes in in full force. We’ve observed, repeatedly, that amplitudes in gravity theories have unexpected properties, traits that still aren’t straightforwardly explicable from the perspective of string theory. In our line of work, that’s usually a sign that we’re on the right track. If you’re a fan of the amplituhedron, the project here is along very similar lines: both are taking the results of plodding, not especially deep loop-by-loop calculations, observing novel simplifications, and asking the inevitable question: what does this mean?

That far-term perspective, looking off into the distance at possible insights about space and time, isn’t my style. (It isn’t usually Lance’s either.) But for the times that you want to tell that kind of story…well, this isn’t that outlandish of a story to tell. And unless your primary concern is whether a piece gives succor to the Woits of the world, it shouldn’t be an objectionable one.

Knowing Too Little, Knowing Too Much

(Commenter nueww has asked me to comment on the flurry of blog posts around an interview with Lance Dixon that recently went up on the SLAC website. I’m not going to comment on it until I have a chance to talk with Lance, beyond saying that this is a remarkable amount of attention paid to a fairly workaday organizational puff piece.)

I’ve been in Oregon this week, giving talks at Oregon State and at the University of Oregon. After my talk at Brown in front of some of the world’s top experts in my subfield, I’ve had to adapt quite a bit for these talks. Oregon State doesn’t have any particle theorists at all, while at the University of Oregon I gave a seminar for their Institute of Theoretical Science, which contains a mix of researchers ranging from particle theorists to theoretical chemists.

Guess which talk was harder to give?

If you guessed the UofO talk, you’re right. At Oregon State, I had a pretty good idea of everyone’s background. I knew these were people who would be pretty familiar with quantum mechanics, but probably wouldn’t have heard of Feynman diagrams. From that, I could build a strategy, and end up giving a pretty good talk.

At the University of Oregon, if I aimed for the particle physicists in the audience, I’d lose the chemists. So I should aim for the chemists, right?

That has its problems too. I’ve talked about some of them: the risk that the experts in your audience feel talked-down to, that you don’t cover the more important parts of your work. But there’s another problem, one that I noticed when I tried to prepare this talk: knowing too little can lead to misunderstandings, but so can knowing too much.

What would happen if I geared the talk completely to the chemists? Well, I’d end up being very vague about key details of what I did. And for the chemists, that would be fine: they’d get a flavor of what I do, and they’d understand not to read any more into it. People are pretty good at putting something in the “I don’t understand this completely” box, as long as it’s reasonably clearly labeled.

That vagueness, though, would be a disaster for the physicists in the audience. It’s not just that they wouldn’t get the full story: unless I was very careful, they’d end up actively misled. The same vague descriptions that the chemists would accept as “flavor”, the physicists would actively try to read for meaning. And with the relevant technical terms replaced with terms the chemists would recognize, they would end up with an understanding that would be actively wrong.

In the end, I ended up giving a talk mostly geared to the physicists, but with some background and vagueness to give the chemists some value. I don’t feel like I did as good of a job as I would like, and neither group really got as much out of the talk as I wanted them to. It’s tricky talking for a mixed audience, and it’s something I’m still learning how to do.

Pi in the Sky Science Journalism

You’ve probably seen it somewhere on your facebook feed, likely shared by a particularly wide-eyed friend: pi found hidden in the hydrogen atom!

[Images: three headlines announcing that pi was found hidden in the hydrogen atom]

From the headlines, this sounds like some sort of kabbalistic nonsense, like finding the golden ratio in random pictures.

Read the actual articles, and the story is a bit more reasonable. The last two I linked above seem to be decent takes on it; they’re just saddled with ridiculous headlines. As usual, I blame the editors. This time, they’ve obscured an interesting point about the link between physics and mathematics.

So what does “pi found hidden in the hydrogen atom” actually mean?

It doesn’t mean that there’s some deep importance to the number pi in nature, beyond its relevance in mathematics in general. The reason that pi is showing up here isn’t especially deep.

It isn’t trivial either, though. I’ve seen a few people whose first response to this article was “of course they found pi in the hydrogen atom, hydrogen atoms are spherical!” That’s not what’s going on here. The connection isn’t about the shape of the hydrogen atom, it’s about one particular technique for estimating its energy.

Carl Hagen is a physicist at the University of Rochester who was teaching a quantum mechanics class in which he taught a well-known approximation technique called the variational principle. Specifically, he had his students apply this technique to the hydrogen atom. The nice thing about the hydrogen atom is that it’s one of the few atoms simple enough that it’s possible to find its energy levels exactly. The exact calculation can then be compared to the approximation.

What Hagen noticed was that this approximation was surprisingly good, especially for high-energy states, where it wasn’t expected to be. In the end, working with Rochester math professor Tamar Friedmann, he figured out that the variational principle was making use of a particular identity involving a class of mathematical functions, called Gamma functions, that are quite common in physics. Using those Gamma functions, the two researchers were able to re-derive what turned out to be a 17th-century formula for pi, giving a much cleaner proof of that formula than had been known previously.
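The 17th-century formula in question is (to the best of my knowledge) the Wallis product from 1655. As a quick sanity check (a minimal numerical sketch, not the Friedmann-Hagen Gamma-function derivation itself), you can watch the product converge to pi:

```python
def wallis_pi(n_terms: int) -> float:
    """Approximate pi with the Wallis product (1655):
    pi/2 = product over n >= 1 of (2n / (2n - 1)) * (2n / (2n + 1))."""
    product = 1.0
    for n in range(1, n_terms + 1):
        product *= (2 * n) / (2 * n - 1) * ((2 * n) / (2 * n + 1))
    return 2.0 * product

# Convergence is slow (the error shrinks roughly like pi / (4 * n_terms)),
# which is part of why this is a mathematical curiosity rather than a
# practical way to compute pi.
print(wallis_pi(100))
print(wallis_pi(100_000))
```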

So pi isn’t appearing here because “the hydrogen atom is a sphere”. It’s appearing because pi appears all over the place in physics, and because in general, the same sorts of structures appear again and again in mathematics.

Pi’s appearance in the hydrogen atom, then, is not very special. What is a little bit special is the fact that, using the hydrogen atom, these folks were able to find a cleaner proof of an old formula for pi, one that mathematicians hadn’t found before.

That, if anything, is the interesting part of this news story, but it’s also part of a broader trend, one in which physicists provide “physics proofs” for mathematical results. One of the more famous accomplishments of string theory is a class of “physics proofs” of this sort, using a principle called mirror symmetry.

The existence of “physics proofs” doesn’t mean that mathematics is secretly constrained by the physical world. Rather, they’re a result of the fact that physicists are interested in different aspects of mathematics, and in general are a bit more reckless in using approximations that haven’t been mathematically vetted. A physicist can sometimes prove something in just a few lines that mathematicians would take many pages to prove, but usually they do this by invoking a structure that would take much longer for a mathematician to define. As physicists, we’re building on the shoulders of other physicists, using concepts that mathematicians usually don’t have much reason to bother with. That’s why it’s always interesting when we find something like the Amplituhedron, a clean mathematical concept hidden inside what would naively seem like a very messy construction. It’s also why “physics proofs” like this can happen: we’re dealing with things that mathematicians don’t naturally consider.

So please, ignore the pi-in-the-sky headlines. Some physicists found a trick, some mathematicians found it interesting, the hydrogen atom was (quite tangentially) involved…and no nonsense needs to be present.

Using Effective Language

Physicists like to use silly names for things, but sometimes it’s best to just use an everyday word. It can trigger useful intuitions, and it makes remembering concepts easier. What gets confusing, though, is when the everyday word you use has a meaning that’s not quite the same as the colloquial one.

“Realism” is a pretty classic example, where Bell’s elegant use of the term in quantum mechanics doesn’t quite match its common usage, leading to inevitable confusion whenever it’s brought up. “Theory” is such a useful word that multiple branches of science use it…with different meanings! In both cases, the naive meaning of the word is the basis of how it gets used scientifically…just not the full story.

There are two things to be wary of here. First, those of us who communicate science must be sure to point out when a word we use doesn’t match its everyday meaning, to guide readers’ intuitions away from first impressions to understand how the term is used in our field. Second, as a reader, you need to be on the look-out for hidden technical terms, especially when you’re reading technical work.

I remember making a particularly silly mistake along these lines. It was early on in grad school, back when I knew almost nothing about quantum field theory. One of our classes was a seminar, structured so that each student would give a talk on some topic that could be understood by the whole group. Unfortunately, some grad students with deeper backgrounds in theoretical physics hadn’t quite gotten the memo.

It was a particular phrase that set me off: “This theory isn’t an effective theory”.

My immediate response was to raise my hand. “What’s wrong with it? What about this theory makes it ineffective?”

The presenter boggled for a moment before responding. “Well, it’s complete up to high energies…it has no ultraviolet divergences…”

“Then shouldn’t that make it even more effective?”

After a bit more of this back-and-forth, we finally cleared things up. As it turns out, “effective field theory” is a technical term! An “effective field theory” is only “effectively” true, describing physics at low energies but not at high energies. As you can see, the word “effective” here is definitely pulling its weight, helping to make the concept understandable…but if you don’t recognize it as a technical term and interpret it literally, you’re going to leave everyone confused!
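In equations, an effective field theory is usually written as an expansion in inverse powers of some cutoff scale (a schematic form; conventions and symbols vary between references):

```latex
% Effective Lagrangian: the low-energy terms plus corrections suppressed
% by the cutoff scale \Lambda where the description breaks down.
\mathcal{L}_{\text{eff}} \;=\; \mathcal{L}_{0}
  \;+\; \sum_i \frac{c_i}{\Lambda^{\,\Delta_i - 4}}\, \mathcal{O}_i ,
\qquad \Delta_i > 4 .
% At energies E \ll \Lambda the extra terms are tiny corrections; near
% E \sim \Lambda they all become important, and the theory is only
% "effectively" true below that scale.
```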

Over time, I’ve gotten better at identifying when something is a technical term. It really is a skill you can learn: there are different tones people use when speaking, different cadences when writing, a sense of uneasiness that can clue you in to a word being used in something other than its literal sense. Without that skill, you end up worried about mathematicians’ motives for their evil schemes. With it, you’re one step closer to what may be the most important skill in science: the ability to recognize something you don’t know yet.

A Tale of Two CMB Measurements

While trying to decide what to blog about this week, I happened to run across this article by Matthew Francis on Ars Technica.

Apparently, researchers have managed to use Planck’s measurement of the Cosmic Microwave Background to indirectly measure a more obscure phenomenon, the Cosmic Neutrino Background.

The Cosmic Microwave Background, or CMB, is often described as the light of the Big Bang, dimmed and spread to the present day. More precisely, it’s the light released from the first time the universe became transparent. When electrons and protons joined to form the first atoms, light no longer spent all its time being absorbed and released by electrical charges, and was free to travel in a mostly-neutral universe.

This means that the CMB is less like a view of the Big Bang, and more like a screen separating us from it. Light and charged particles from before the CMB was formed will never be observable to us, because they would have been absorbed by the early universe. If we want to see beyond this screen, we need something with no electric charge.

That’s where the Cosmic Neutrino Background comes in. Much as the CMB consists of light from the first time the universe became transparent, the CNB consists of neutrinos from the first time the universe was cool enough for them to travel freely. Since this happened a bit before the universe was transparent to light, the CNB gives information about an earlier stage in the universe’s history.

Unfortunately, neutrinos are very difficult to detect, the low-energy ones left over from the CNB even more so. Rather than detecting the CNB directly, it has to be observed through its indirect effects on the CMB, and that’s exactly what these researchers did.

Now does all of this sound just a little bit familiar?

Gravitational waves are also hard to detect, hard enough that we haven’t directly detected any yet. They’re also electrically neutral, so they can also give us information from behind the screen of the CMB, letting us learn about the very early universe. And when the team at BICEP2 purported to measure these primordial gravitational waves indirectly, by measuring the CMB, the press went crazy about it.

This time, though? That Ars Technica article is the most prominent I could find. There’s nothing in major news outlets at all.

I don’t think that this is just a case of people learning from past mistakes. I also don’t think that BICEP2’s results were just that much more interesting: they were making a claim about cosmic inflation rather than just buttressing the standard Big Bang model, but (outside of certain contrarians here at Perimeter) inflation is not actually all that controversial. It really looks like hype is the main difference here, and that’s kind of sad. The difference between a big (premature) announcement that got me to write four distinct posts and an article I almost didn’t notice is just one of how the authors chose to make their work known.

Journalists Are Terrible at Quasiparticles

[Image: a terrible quasiparticle headline]

No, they haven’t, and no, that’s not what they found, and no, that doesn’t make sense.

Quantum field theory is how we understand particle physics. Each fundamental particle comes from a quantum field, a law of nature in its own right extending across space and time. That’s why it’s so momentous when we detect a fundamental particle, like the Higgs, for the first time, why it’s not just like discovering a new species of plant.

That’s not the only thing quantum field theory is used for, though. Quantum field theory is also enormously important in condensed matter and solid state physics, the study of properties of materials.

When studying materials, you generally don’t want to start with fundamental particles. Instead, you usually want to think about overall properties, ways the whole material can move and change overall. If you want to understand the quantum properties of these changes, you end up describing them the same way particle physicists talk about fundamental fields: you use quantum field theory.

In particle physics, particles come from vibrations in fields. In condensed matter, your fields are general properties of the material, but they can also vibrate, and these vibrations give rise to quasiparticles.

Probably the simplest examples of quasiparticles are the “holes” in semiconductors. Semiconductors are materials used to make transistors. They can be “doped” with extra slots for electrons. Electrons in the semiconductor will move around from slot to slot. When an electron moves, though, you can just as easily think about it as a “hole”, an empty slot, that “moved” backwards. As it turns out, thinking about electrons and holes independently makes understanding semiconductors a lot easier, and the same applies to other types of quasiparticles in other materials.
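To see the bookkeeping behind the electron/hole picture, here's a toy model (purely illustrative: a one-dimensional row of slots, not real band physics):

```python
# Toy 1D "semiconductor": True = slot holds an electron, False = empty slot.
def move_electron(slots, src, dst):
    """Hop an electron from occupied slot src to empty slot dst."""
    assert slots[src] and not slots[dst]
    new_slots = list(slots)
    new_slots[src], new_slots[dst] = False, True
    return new_slots

def hole_positions(slots):
    """The 'hole' description: just track where the empty slots are."""
    return [i for i, occupied in enumerate(slots) if not occupied]

band = [True, True, False, True]   # one hole, at index 2
band = move_electron(band, 3, 2)   # an electron hops one slot to the left...
print(hole_positions(band))        # ...which is the hole hopping right: [3]
```

Same physical state, two descriptions: tracking many electrons, or tracking one hole moving the opposite way. The hole description is the simpler one, which is why it earns the name quasiparticle.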

Unfortunately, the article I linked above is pretty impressively terrible, and communicates precisely none of that.

The problem starts in the headline:

Scientists have finally discovered massless particles, and they could revolutionise electronics

Scientists have finally discovered massless particles, eh? So we haven’t seen any massless particles before? You can’t think of even one?

After 85 years of searching, researchers have confirmed the existence of a massless particle called the Weyl fermion for the first time ever. With the unique ability to behave as both matter and anti-matter inside a crystal, this strange particle can create electrons that have no mass.

Ah, so it’s a massless fermion, I see. Well indeed, there are no known fundamental massless fermions, not since we discovered neutrinos have mass anyway. The statement that these things “create electrons” of any sort is utter nonsense, however, let alone that they create electrons that themselves have no mass.

Electrons are the backbone of today’s electronics, and while they carry charge pretty well, they also have the tendency to bounce into each other and scatter, losing energy and producing heat. But back in 1929, a German physicist called Hermann Weyl theorised that a massless fermion must exist, that could carry charge far more efficiently than regular electrons.

Ok, no. Just no.

The problem here is that this particular journalist doesn’t understand the difference between pure theory and phenomenology. Weyl didn’t theorize that a massless fermion “must exist”, nor did he say anything about their ability to carry charge. Weyl described, mathematically, how a massless fermion could behave. Weyl fermions aren’t some proposed new fundamental particle, like the Higgs boson: they’re a general type of particle. For a while, people thought that neutrinos were Weyl fermions, before it was discovered that they had mass. What we’re seeing here isn’t some ultimate experimental vindication of Weyl, it’s just an old mathematical structure that’s been duplicated in a new material.

What’s particularly cool about the discovery is that the researchers found the Weyl fermion in a synthetic crystal in the lab, unlike most other particle discoveries, such as the famous Higgs boson, which are only observed in the aftermath of particle collisions. This means that the research is easily reproducible, and scientists will be able to immediately begin figuring out how to use the Weyl fermion in electronics.

Arrgh!

Fundamental particles from particle physics, like the Higgs boson, and quasiparticles, like this particular Weyl fermion, are completely different things! Comparing them like this, as if this is some new efficient trick that could have been used to discover the Higgs, just needlessly confuses people.

Weyl fermions are what’s known as quasiparticles, which means they can only exist in a solid such as a crystal, and not as standalone particles. But further research will help scientists work out just how useful they could be. “The physics of the Weyl fermion are so strange, there could be many things that arise from this particle that we’re just not capable of imagining now,” said Hasan.

In the very last paragraph, the author finally mentions quasiparticles. There’s no mention of the fact that they’re more like waves in the material than like fundamental particles, though. This description makes it sound like they’re just particles that happen to chill inside crystals, like they’re agoraphobic or something.

What the scientists involved here actually discovered is probably quite interesting. They’ve discovered a new sort of ripple in the material they studied. The ripple can carry charge, and because it can behave like a massless particle it can carry charge much faster than electrons can. (To get a basic idea as to how this works, think about waves in the ocean. You can have a wave that goes much faster than the ocean’s current. As the wave travels, no actual water molecules travel from one side to the other. Instead, it is the motion that travels, the energy pushing the wave up and down being transferred along.)

There’s no reason to compare this to particle physics, to make it sound like another Higgs boson. This sort of thing dilutes the excitement of actual particle discoveries, perpetuating the misconception of particles as just more species to find and catalog. Furthermore, it’s just completely unnecessary: condensed matter is a very exciting field, one that the majority of physicists work on. It doesn’t need to ride on the coat-tails of particle physics rhetoric in order to capture people’s attention. I’ve seen journalists do this kind of thing before, comparing new quasiparticles and composite particles with fundamental particles like the Higgs, and every time I cringe. Don’t you have any respect for the subject you’re writing about?

No-One Can Tell You What They Don’t Understand

On Wednesday, Amanda Peet gave a Public Lecture at Perimeter on string theory and black holes, while I and other Perimeter-folk manned the online chat. If you missed it, it’s recorded online here.

We get a lot of questions in the online chat. Some are quite insightful, some are basic, and some…well, some are kind of strange. Like the person who asked us how holography could be compatible with irrational numbers.

In physics, holography is the idea that you can encode the physics of a wider space using only information on its boundary. If you remember the 90’s or read Buzzfeed a lot, you might remember holograms: weird rainbow-colored images that looked 3d when you turned your head.

On a computer screen, they instead just look awkward.

Holograms in physics are a lot like that, but rather than a 2d image looking like a 3d object, they can be other combinations of dimensions as well. The most famous, AdS/CFT, relates a ten-dimensional space full of strings to a four-dimensional space on its boundary, where the four-dimensional space contains everybody’s favorite theory, N=4 super Yang-Mills.

So from this explanation, it’s probably not obvious what holography has to do with irrational numbers. That’s because there is no connection: holography has nothing to do with irrational numbers.

Naturally, we were all a bit confused, so one of us asked this person what they meant. They responded by asking if we knew what holograms and irrational numbers were. After all, the problem should be obvious then, right?

In this sort of situation, it’s tempting to assume you’re being trolled. In reality, though, the problem was one of the most common in science communication: people can’t tell you what they don’t understand, because they don’t understand it.

When a teacher asks “any questions?”, they’re assuming students will know what they’re missing. But a deep enough misunderstanding doesn’t show itself that way. Misunderstand things enough, and you won’t know you’re missing anything. That’s why it takes real insight to communicate science: you have to anticipate ways that people might misunderstand you.

In this situation, I thought about what associations people have with holograms. While some might remember the rainbow holograms of old, there are other famous holograms that might catch people’s attention.

Please state the nature of the medical emergency.

In science fiction, holograms are 3d projections, ways that computers can create objects out of thin air. The connection to a 2d image isn’t immediately apparent, but the idea that holograms are digital images is central.

Digital images are the key, here. A computer has to express everything in a finite number of bits. It can’t express an irrational number, a number with a decimal expansion that goes on to infinity, at least not without tricks. So if you think that holography is about reality being digital, rather than lower-dimensional, then the question makes perfect sense: how could a digital reality contain irrational numbers?
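To make the finite-bits point concrete, here’s a small Python sketch (my own illustration, not anything from the chat): a computer’s floating-point value for √2 is really a finite binary fraction, a rational stand-in for the true irrational number.

```python
from math import sqrt
from fractions import Fraction

# The float sqrt(2) is the closest 64-bit binary fraction to the true
# irrational number, so squaring it misses 2 by a tiny amount.
approx = sqrt(2)
print(approx * approx == 2)   # False: the square is not exactly 2

# Every float is secretly a rational number with a power-of-two
# denominator; Fraction recovers that exact rational.
exact = Fraction(approx)
d = exact.denominator
print(d > 1 and d & (d - 1) == 0)   # True: denominator is a power of two
```

In other words, the machine never holds √2 itself, only a nearby fraction, which is exactly the intuition behind the question.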

This is the sort of thing we have to keep in mind when communicating science. It’s easy to misunderstand, to take some aspect of what someone said and read it through a different lens. We have to think about how others will read our words, and we have to be willing to poke and prod until we root out the source of the confusion. Because nobody is just going to tell us what they don’t get.

Outreach as the End Product of Science

Sabine Hossenfelder recently wrote a blog post about physics outreach. In it, she identifies two goals: inspiration, and education.

Inspiration outreach is all about making science seem cool. It’s the IFLScience side of things, stoking the science fandom and getting people excited.

Education outreach, by contrast, is about making sure people’s beliefs are accurate. It teaches the audience something about the world around them, giving them a better understanding of how the world works.

In both cases, though, Sabine finds it hard to convince other scientists that outreach is valuable. Maybe inspiration helps increase grant funding, maybe education makes people vote better on scientific issues like climate change…but there isn’t a lot of research that shows that outreach really accomplishes either.

Sabine has a number of good suggestions in her post for how to make outreach more effective, but I’d like to take a step back and suggest that maybe we as a community are thinking about outreach in the wrong way. And in order to do that, I’m going to do a little outreach myself, and talk about black holes.

The black hole of physics outreach.

Black holes are collapsed stars, crushed in on themselves by their own gravity so much that once you get close enough (past the event horizon), not even light can escape. This means that if you sent an astronaut past the event horizon, there would be no way for them to communicate with you: any signal they might try to send would travel, at most, at the speed of light.
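For a sense of scale, “close enough” means within the Schwarzschild radius, r = 2GM/c². Here’s a quick back-of-the-envelope estimate in Python (my own illustration, using standard values for the constants):

```python
# Schwarzschild radius r_s = 2 G M / c^2: the event horizon of a
# non-rotating black hole of mass M.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Radius inside which not even light can escape, in meters."""
    return 2 * G * mass_kg / c**2

# A black hole with the Sun's mass would have a horizon of roughly 3 km,
# compared to the Sun's actual radius of about 700,000 km.
print(schwarzschild_radius(M_SUN))   # roughly 2950 meters
```

That factor of hundreds of thousands in compression is what “crushed in on themselves” means in practice.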

Einstein’s equations keep working fine past the event horizon, but despite that there are some people who view any prediction of what happens inside to be outside the scope of science. If there’s no way to report back, then how could we ever test our predictions? And if we can’t test our predictions, aren’t we missing the cornerstone of science itself?

In a rather entertaining textbook, physicists Edwin F. Taylor and John Archibald Wheeler suggest a way around this: instead of sending just one astronaut, send multiple! Send a whole community! That way, while we might not be able to test our predictions about the inside of the event horizon, the scientific community that falls in certainly can. For them, those predictions aren’t just meaningless speculation, but testable science.

If something seems unsatisfying about this, congratulations: you now understand the purpose of outreach.

As long as scientific advances never get beyond a small community, we’re like Taylor and Wheeler’s astronauts inside the black hole. We can test our predictions among each other, verify them to our heart’s content…but if they never reach the wider mass of humanity, then what have we really accomplished? Have we really created knowledge, when only a few people will ever know it?

In my Who Am I? post, I express the hope that one day the science I blog about will be as well known as electrons and protons. That might sound farfetched, but I really do think it’s possible. In one hundred years, electrons and protons went from esoteric discoveries of a few specialists to something children learn about in grade school. If science is going to live up to its purpose, if we’re going to escape the black hole of our discipline, then in another hundred years quantum field theory needs to do the same. And by doing outreach work, each of us is taking steps in that direction.

What’s the Matter with Dark Matter, Matt?

It’s very rare that I disagree with Matt Strassler. That said, I can’t help but think that, when he criticizes the press for focusing their LHC stories on dark matter, he’s missing an important element.

From his perspective, when the media says that the goal of the new run of the LHC is to detect dark matter, they’re just being lazy. People have heard of dark matter. They might have read that it makes up 23% of the universe, more than regular matter at 4%. So when an LHC physicist wants to explain what they’re working on to a journalist, the easiest way is to talk about dark matter. And when the journalist wants to explain the LHC to the public, they do the same thing.

This explanation makes sense, but it’s a little glib. What Matt Strassler is missing is that, from the public’s perspective, dark matter really is a central part of the LHC’s justification.

Now, I’m not saying that the LHC’s main goal is to detect dark matter! Directly detecting dark matter is pretty low on the LHC’s list of priorities. Even if it detects a new particle with the right properties to be dark matter, it still wouldn’t be able to confirm that it really is dark matter without help from another experiment that actually observes some consequence of the new particle among the stars. I agree with Matt when he writes that the LHC’s priorities for the next run are

  1. studying the newly discovered Higgs particle in great detail, checking its properties very carefully against the predictions of the “Standard Model” (the equations that describe the known apparently-elementary particles and forces) to see whether our current understanding of the Higgs field is complete and correct, and

  2. trying to find particles or other phenomena that might resolve the naturalness puzzle of the Standard Model, a puzzle which makes many particle physicists suspicious that we are missing an important part of the story, and

  3. seeking either dark matter particles or particles that may be shown someday to be “associated” with dark matter.

Here’s the thing, though:

From the public’s perspective, why do we need to study the properties of the Higgs? Because we think it might be different than the Standard Model predicts.

Why do we think it might be different than the Standard Model predicts? More generally, why do we expect the world to be different from the Standard Model at all? Well there are a few reasons, but they generally boil down to two things: the naturalness puzzle, and the fact that the Standard Model doesn’t have anything that could account for dark matter.

Naturalness is a powerful motivation, but it’s hard to sell to the general public. Does the universe appear fine-tuned? Then maybe it just is fine-tuned! Maybe someone fine-tuned it!

These arguments miss the real problem with fine-tuning, but they’re hard to correct in a short article. Getting the public worried about naturalness is tough, tough enough that I don’t think we can demand it of the average journalist, or accuse them of being lazy if they fail to do it.

That leaves dark matter. And for all that naturalness is philosophically murky, dark matter is remarkably clear. We don’t know what 96% of the universe is made of! That’s huge, and not just in a “gee-whiz-cool” way. It shows, directly and intuitively, that physics still has something it needs to solve, that we still have particles to find. Unless you are a fan of (increasingly dubious) modifications to gravity like MOND, dark matter is the strongest possible justification for machines like the LHC.

The LHC won’t confirm dark matter on its own. It might not even directly detect it; that’s still quite up in the air. And even if it finds deviations from the Standard Model, it’s not likely they’ll be directly caused by dark matter, at least not in a simple way.

But the reason that the press is describing the LHC’s mission in terms of dark matter isn’t just laziness. It’s because, from the public’s perspective, dark matter is the only vaguely plausible reason to spend billions of dollars searching for new particles, especially when we’ve already found the Higgs. We’re lucky it’s such a good reason.

What Counts as a Fundamental Force?

I’m giving a presentation next Wednesday for Learning Unlimited, an organization that presents educational talks to seniors in Woodstock, Ontario. The talk introduces the fundamental forces and talks about Yang and Mills before moving on to introduce my work.

While practicing the talk today, someone from Perimeter’s outreach department pointed out a rather surprising missing element: I never mention gravity!

Most people know that there are four fundamental forces of nature. There’s Electromagnetism, there’s Gravity, there’s the Weak Nuclear Force, and there’s the Strong Nuclear Force.

Listed here by their most significant uses.

What ties these things together, though? What makes them all “fundamental forces”?

Mathematically, gravity is the odd one out here. Electromagnetism, the Weak Force, and the Strong Force all share a common description: they’re Yang-Mills forces. Gravity isn’t. While you can sort of think of it as a Yang-Mills force “squared”, it’s quite a bit more complicated than the Yang-Mills forces.

You might be objecting that the common trait of the fundamental forces is obvious: they’re forces! And indeed, you can write down a force law for gravity, and a force law for E&M, and umm…

[Mumble Mumble]

Ok, it’s not quite as bad as xkcd would have us believe. You can actually write down a force law for the weak force, if you really want to, and it’s at least sort of possible to talk about the force exerted by the strong interaction.

All that said, though, why are we thinking about this in terms of forces? Forces are a concept from classical mechanics. For a beginning physics student, they come up again and again, in free-body diagram after free-body diagram. But by the time a student learns quantum mechanics, and then quantum field theory, they’ve learned other ways of framing things in which forces aren’t mentioned at all. So while forces are familiar to people starting out, they don’t really map onto anything that most quantum field theorists work with, and it’s a bit odd to classify things that only really appear in quantum field theory (the Weak Nuclear Force, the Strong Nuclear Force) based on whether or not they’re forces.

Isn’t there some connection, though? After all, gravity, electromagnetism, the strong force, and the weak force may be different mathematically, but at least they all involve bosons.

Well, yes. And so does the Higgs.

The Higgs is usually left out of listings of the fundamental forces, because it’s not really a “force”. It doesn’t have a direction; instead, it works equally at every point in space. But if you include spin 2 gravity and spin 1 Yang-Mills forces, why not also include the spin 0 Higgs?

Well, if you’re doing that, why not include fermions as well? People often think of fermions as “matter” and bosons as “energy”, but in fact both have energy, and neither is made of it. Electrons and quarks are just as fundamental as photons and gluons and gravitons, just as central a part of how the universe works.

I’m still trying to decide whether my presentation about Yang-Mills forces should also include gravity. On the one hand, it would make everything more familiar. On the other…pretty much this entire post.