Tag Archives: theoretical physics

Science Never Forgets

I’ll just be doing a short post this week; I’ve been busy at a workshop on Flux Tubes here at Perimeter.

If you’ve ever heard someone tell the history of string theory, you’ve probably heard that it was first proposed not as a quantum theory of gravity, but as a way to describe the strong nuclear force. Colliders of the time had discovered particles, called mesons, that seemed to have a key role in the strong nuclear force that held protons and neutrons together. These mesons had an unusual property: the faster they spun, the higher their mass, following a very simple and regular pattern known as a Regge trajectory. Researchers found that they could predict this kind of behavior if, rather than particles, these mesons were short lengths of “string”, and with this discovery they invented string theory.
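The “simple and regular pattern” of a Regge trajectory can be sketched in a few lines: for an idealized trajectory, the spin J grows linearly with the mass squared, J = α₀ + α′M². The slope and intercept below are illustrative placeholders, not fits to real meson data.

```python
# Idealized linear Regge trajectory: spin J versus mass squared M^2.
# alpha_prime and alpha_0 are illustrative values, not fitted constants
# for any real meson family.

def regge_spin(mass_squared, alpha_prime=0.9, alpha_0=0.5):
    """Spin predicted by a linear Regge trajectory: J = alpha_0 + alpha' * M^2."""
    return alpha_0 + alpha_prime * mass_squared

# Equal steps in M^2 give equal steps in spin -- the regular pattern
# that suggested mesons might be short lengths of string.
spins = [regge_spin(m2) for m2 in (0.0, 1.0, 2.0, 3.0)]
print(spins)
```

Equal spacing in M² producing equal spacing in J is exactly what a spinning string predicts, which is what made the string picture so tempting.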

As it turned out, these early researchers were wrong. Mesons are not lengths of string; rather, they are pairs of quarks. The discovery of quarks explained how the strong force acted on protons and neutrons, each made of three quarks, and it also explained why mesons acted a bit like strings: in each meson, the two quarks are linked by a flux tube, a roughly cylindrical area filled with the gluons that carry the strong nuclear force. So rather than strings, mesons turned out to be more like bolas.

Leonin sold separately.

If you’ve heard this story before, you probably think it’s ancient history. We know about quarks and gluons now, and string theory has moved on to bigger and better things. You might be surprised to hear that at this week’s workshop, several presenters have been talking about modeling flux tubes between quarks in terms of string theory!

The thing is, science never forgets a good idea. String theory was superseded by quarks in describing the strong force, but it was only proposed in the first place because it matched the data fairly well. Now, with string theory-inspired techniques, people are calculating the first corrections to the string-like behavior of these flux tubes, comparing them with simulations of quarks and gluons, and finding surprisingly good agreement!

Science isn’t a linear story, where the past falls away to the shiny new theories of the future. It’s a marketplace. Some ideas are traded more widely, some less…but if a product works, even only sometimes, chances are someone out there will have a reason to buy it.

String Theorists Who Don’t Touch Strings

This week I’ve been busy, attending a workshop here at Perimeter on Superstring Perturbation Theory.

Superstrings are the supersymmetric strings that string theorists use to describe fundamental particles, while perturbation theory is the trick, common in almost every area of physics, of solving a problem by a series of increasingly precise approximations.

Based on that description, you’d think that superstring perturbation theory would be a central topic in string theory research. You wouldn’t expect it to be the sort of thing only a few people at the top of the field dabble in. You definitely wouldn’t expect one of the speakers at the workshop to mention that this might be the first conference on superstring perturbation theory he’s been to since the 1980s.

String perturbation theory is an important subject, but it’s not one many string theorists use. And the reason why is that, oddly enough, very few string theorists actually use strings.

Looking at arXiv as I’m writing this, I can see only one paper in the theoretical physics section that directly uses strings. Most of them use something else: either older concepts like black holes, quantum field theory, and supergravity, or newer ones like d-branes. If you talked to the people who wrote those papers, though, most of them would describe themselves as string theorists.

The reason for the disconnect is that string theory as a field is much more than just the study of strings. String theory is a ten-dimensional universe (or eleven with M theory), where different ways of twisting up some of the dimensions result in different apparent physics in the remaining ones. It’s got strings, but also higher-dimensional membranes (and in the eleven dimensions of M theory it only has membranes, not strings). It’s the recipe for a long list of exotic quantum field theories, and a list of possible relations between them. It’s a new way to look at geometry, to think about the intersection of the nature of space and the dynamics of what inhabits it.

If string theory were really just about strings, it likely wouldn’t have grown any bigger than its quantum gravity rivals, like Loop Quantum Gravity. String theory grew because it inspired research directions that went far afield, and far beyond its conceptual core.

That’s part of why most string theorists will be baffled if you insist that string theory needs proof, or that it’s not the right approach to quantum gravity. For most string theorists, it doesn’t matter whether we live in a stringy world, whether gravity might eventually be described by another model. For most string theorists, string theory is a tool, one that opened up fields of inquiry that don’t have much to do with predicting the output of the LHC or describing the early universe. Or, in many cases, actually using strings.

Only the Boring Kind of Parallel Universes

PARALLEL UNIVERSES AT THE LHC??

No. No. Bad journalist. See what happens when you…

Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length], can exist in a third dimension [height], parallel universes can also exist in higher dimensions.”

Bad physicist, bad! No biscuit for you!

Not nice at all!

For the technically-minded, Sabine Hossenfelder goes into thorough detail about what went wrong here. Not only do parallel universes have nothing to do with what Mir Faizal and collaborators have been studying, but the actual paper they’re hyping here is apparently riddled with holes.

BLACK holes! …no, actually, just logic holes.

But why did parallel universes even come up? If they have nothing to do with Faizal’s work, why did he mention them? Do parallel universes ever come up in real physics at all?

The answer to this last question is yes. There are real, viable ideas in physics that involve parallel universes. The universes involved, however, are usually boring ones.

The ideas are generally referred to as brane-world theories. If you’ve heard of string theory, you’ve probably heard that it proposes that the world is made of tiny strings. That’s all well and good, but it’s not the whole story. String theory has other sorts of objects in it too: higher dimensional generalizations of strings called membranes, branes for short. In fact, M theory, the theory of which every string theory is some low-energy limit, has no strings at all, just branes.

When these branes are one-dimensional, they’re strings. When they’re two-dimensional, they’re what you would normally picture as a membrane, a vibrating sheet, potentially infinite in size. When they’re three-dimensional, they fill three-dimensional space, again potentially up to infinity.

Filling three-dimensional space, out to infinity…well, that sure sounds a whole lot like what we’d normally call a universe.

In brane-world constructions, what we call our universe is precisely this sort of three-dimensional brane. It then lives in a higher-dimensional space, where its position in this space influences things like the strength of gravity, or the speed at which the universe expands.

Sometimes (not all the time!) these sorts of constructions include other branes, besides the one that contains our universe. These other branes behave in a similar way, and can have very important effects on our universe. They, if anything, are the parallel universes of theoretical physics.

It’s important to point out, though, that these aren’t the sort of sci-fi parallel universes you might imagine! You aren’t going to find a world where everyone has a goatee, or even a world with an empty earth full of teleporting apes.

Pratchett reference!

That’s because, in order for these extra branes to do useful physical work, they generally have to be very different from our world. They’re worlds where gravity is very strong, or worlds with dramatically different densities of energy and matter. In the end, this means they’re not even the sort of universes that produce interesting aliens, or where we could send an astronaut, or really anything that lends itself well to (non-mathematical) imagination. From a sci-fi perspective, they’re as boring as can be.

Faizal’s idea, though, doesn’t even involve the boring kind of parallel universe!

His idea involves extra dimensions, specifically what physicists refer to as “large” extra dimensions, in contrast with the small extra dimensions of string theory. Large extra dimensions can explain the weakness of gravity, and theories that use them often predict that it’s much easier to create microscopic black holes than it otherwise would be. So far, these models haven’t had much luck at the LHC, and while I get the impression that they haven’t been completely ruled out, they aren’t very popular anymore.

The thing is, extra dimensions don’t mean parallel universes.

In fiction, the two get used interchangeably a lot. People go to “another dimension”, vaguely described as traveling along another dimension of space, and find themselves in a strange new world. In reality, though, there’s no reason to think that traveling along an extra dimension would put you in any sort of “strange new world”. The whole reason that our world is limited to three dimensions is that it’s “bound” to something: a brane, in the string theory picture. If there’s not another brane to bind things to, traveling in an extra dimension won’t put you in a new universe, it will just put you in an empty space where none of the types of matter you’re made of even exist.

It’s really tempting, when talking to laypeople, to fall back on stories. If you mention parallel universes, their faces light up with the idea that this is something they get, if only from imaginary examples. It gives you that same sense of accomplishment as if you had actually taught them something real. But you haven’t. It’s wrong, and Mir Faizal shouldn’t have stooped to doing it.

What Can Pi Do for You?

Tomorrow is Pi Day!

And what a Pi Day! 3/14/15 (if you’re in the US, Belize, Micronesia, some parts of Canada, the Philippines, or Swahili-speaking Kenya), best celebrated at 9:26:53, if you’re up by then. Grab a slice of pie, or cake if you really must, and enjoy!

If you don’t have some of your own, download this one!

Pi is great not just because it’s fun to recite digits and eat pastries, but because it serves a very important role in physics. That’s because, often, pi is one of the most “natural” ways to get larger numbers.

Suppose you’re starting with some sort of “natural” theory. Here I don’t mean natural in the technical sense. Instead, I want you to imagine a theory that has very few free parameters, a theory that is almost entirely fixed by mathematics.

Many physicists hope that the world is ultimately described by this sort of theory, but it’s hard to see in the world we live in. There are so many different numbers, from the tiny mass of the electron to the much larger mass of the top quark, that would all have to come from a simple, overarching theory. Often, it’s easier to get these numbers when they’re made out of factors of pi.

Why is pi easy to get?

In general, pi shows up a lot in physics and mathematics, and its appearance can be mysterious to the uninitiated, as this joke related by Eugene Wigner in an essay I mentioned a few weeks ago demonstrates:

THERE IS A story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”
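The pi in the statistician’s formula really is there: the Gaussian distribution’s normalization constant is √(2π). A quick numerical check, using nothing but a plain midpoint Riemann sum (the interval and step count are just illustrative choices):

```python
import math

# Numerically integrate the Gaussian exp(-x^2/2) over a wide interval.
# The result approaches sqrt(2*pi) -- the normalization constant that
# smuggles the circle's pi into population statistics.
def gaussian_integral(lo=-10.0, hi=10.0, steps=100_000):
    dx = (hi - lo) / steps
    # Midpoint rule: evaluate the integrand at the center of each slice.
    return sum(math.exp(-(lo + (i + 0.5) * dx) ** 2 / 2) * dx
               for i in range(steps))

print(gaussian_integral())     # close to 2.5066...
print(math.sqrt(2 * math.pi))  # which is sqrt(2*pi)
```

The classmate’s incredulity aside, the circumference of the circle really does show up in population trends, through that square root.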

While it may sound silly, in a sense the population really is connected to the circumference of the circle. That’s because pi isn’t just about circles, pi is about volumes.

Take a bit to check out that link. Not just the area of a circle, but the volume of a sphere, and that of all sorts of higher-dimensional ball-shaped things, is calculated with the value of pi. It’s not just spheres, either: pi appears in the volume of many higher-dimensional shapes.
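The volume formulas behind that link fit in one line: an n-dimensional ball of radius r has volume πⁿᐟ² rⁿ / Γ(n/2 + 1). A minimal sketch:

```python
import math

# Volume of an n-dimensional ball of radius r:
#   V_n(r) = pi^(n/2) * r^n / Gamma(n/2 + 1)
# Factors of pi pile up as the dimension grows.
def ball_volume(n, r=1.0):
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

print(ball_volume(2))  # area of the unit circle: pi
print(ball_volume(3))  # volume of the unit sphere: 4*pi/3
print(ball_volume(4))  # pi^2 / 2 -- a pi squared, with no literal circle in sight
```

Notice the four-dimensional case: a factor of pi squared, from a shape nobody can picture. That’s the kind of pi that abstract spaces hand to physicists.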

Why does this matter for physics? Because you don’t need a literal shape to get a volume. Most of the time, there aren’t literal circles and spheres giving you factors of pi…but there are abstract spaces, and they contain circles and spheres. Electric and magnetic fields might not be shaped like circles, but the mathematics that describes them can still make good use of a circular space.

That’s why, when I describe the mathematical formulas I work with, formulas that often produce factors of pi, mathematicians will often ask if they’re the volume of some particular mathematical space. It’s why Nima Arkani-Hamed is trying to understand related formulas by thinking of them as the volume of some new sort of geometrical object.

All this is not to say you should go and plug factors of pi together until you get the physical constants you want. Throw in enough factors of pi and enough other numbers and you can match current observations, sure…but you could also match anything else in the same way. Instead, it’s better to think of pi as an assistant: waiting in the wings, ready to translate a pure mathematical theory into the complicated mess of the real world.

So have a Happy Pi Day, everyone, and be grateful to our favorite transcendental number. The universe would be a much more boring place without it.

Pics or It Didn’t Happen

I got a tumblr recently.

One thing I’ve noticed is that tumblr is a very visual medium. While some people can get away with massive text-dumps, they’re usually part of specialized communities. The content that’s most popular with a wide audience is, almost always, images. And that’s especially true for science-related content.

This isn’t limited to tumblr either. Most of my most successful posts have images. Most successful science posts in general involve images. Think of the most interesting science you’ve seen on the internet: chances are, it was something visual that made it memorable.

The problem is, I’m a theoretical physicist. I can’t show you pictures of nebulae in colorized glory, or images showing the behavior of individual atoms. I work with words, equations, and, when I’m lucky, diagrams.

Diagrams tend to work best, when they’re an option. I have no doubt that part of the Amplituhedron’s popularity with the press owes to Andy Gilmore’s beautiful illustration, as printed in Quanta Magazine’s piece:

Gotta get me an artist.

The problem is, the nicer one of these illustrations is, the less it actually means. For most people, the above is just a pretty picture. Sometimes it’s possible to do something more accurate, like a 3d model of one of string theory’s six-dimensional Calabi-Yau manifolds:

What, you expected a six-dimensional intrusion into our world *not* to look like Yog-Sothoth?

A lot of the time, though, we don’t even have a diagram!

In those sorts of situations, it’s tempting to show an equation. After all, equations are the real deal, the stuff we theorists are actually manipulating.

Unless you’ve got an especially obvious equation, though, there’s basically only one thing the general public will get out of it. Either the equation is surprisingly simple,

Isn’t it cute?

Or it’s unreasonably complicated,

Why yes, this is one equation that covers seventeen pages. You’re lucky I didn’t post the eight-hundred page one.

This is great for first impressions, but it’s not very repeatable. Show people one giant equation, and they’ll be impressed. Show them two, and they won’t have any idea what the difference is supposed to be.

If you’re not showing diagrams or equations, what else can you show?

The final option is, essentially, to draw a cartoon. Forget about showing what’s “really going on”, physically or mathematically. That’s what the article is for. For an image, just pick something cute and memorable that references the topic.

When I did an article for Ars Technica back in 2013, I didn’t have any diagrams to show, or any interesting equations. Their artist, undeterred, came up with a cute picture of sushi with an N=4 on it.

That sort of thing really helps! It doesn’t tell you anything technical, it doesn’t explain what’s going on…but it does mean that every time I think of the article, that image pops into my head. And in a world where nothing lasts without a picture to document it, that’s a job well done.

Explanations of Phenomena Are All Alike; Every Unexplained Phenomenon Is Unexplained in Its Own Way

Vladimir Kazakov began his talk at ICTP-SAIFR this week with a variant of Tolstoy’s famous opening to the novel Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way.” Kazakov flipped the order of the quote, stating that “un-solvable models are each un-solvable in their own way, solvable models are all alike.”

In talking about solvable and un-solvable models, Kazakov was referring to a concept called integrability, the idea that in certain quantum field theories it’s possible to avoid the messy approximations of perturbation theory and instead jump straight to the answer. Kazakov was observing that these integrable systems seem to have a deep kinship: the same basic methods appear to work to understand all of them.

I’d like to generalize Kazakov’s point, and talk about a broader trend in physics.

Much has been made over the years of the “unreasonable effectiveness of mathematics in the natural sciences”, most notably in physicist Eugene Wigner’s famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences. There’s a feeling among some people that mathematics is much better at explaining physical phenomena than one would expect, that the world appears to be “made of math” and that it didn’t have to be.

On the surface, this is a reasonable claim. Certain mathematical ideas, group theory for example, seem to pop up again and again in physics, sometimes in wildly different contexts. The history of fundamental physics has tended to see steady progress over the years, from clunkier mathematical concepts to more and more elegant ones.

Some physicists tend to be dismissive of this. Lee Smolin in particular seems to be under the impression that mathematics is just particularly good at providing useful approximations. This perspective links to his definition of mathematics as “the study of systems of evoked relationships inspired by observations of nature,” a definition to which Peter Woit vehemently objects. Woit argues what I think any mathematician would when presented by a statement like Smolin’s: that mathematics is much more than just a useful tool for approximating observations, and that contrary to physicists’ vanity most of mathematics goes on without any explicit interest in observing the natural world.

While it’s generally rude for physicists to propose definitions for mathematics, I’m going to do so anyway. I think the following definition is one mathematicians would be more comfortable with, though it may be overly broad: Mathematics is the study of simple rules with complex consequences.

We live in a complex world. The breadth of the periodic table, the vast diversity of life, the tangled webs of galaxies across the sky, these are things that display both vast variety and a sense of order. They are, in a rather direct way, the complex consequences of rules that are at heart very very simple.

Part of the wonder of modern mathematics is how interconnected it has become. Many sub-fields, once distinct, have discovered over the years that they are really studying different aspects of the same phenomena. That’s why when you see a proof of a three-hundred-year-old mathematical conjecture, it uses terms that seem to have nothing to do with the original problem. It’s why Woit, in an essay on this topic, quotes Edward Frenkel’s description of a particular recent program as a blueprint for a “Grand Unified Theory of Mathematics”. Increasingly, complex patterns are being shown to be not only consequences of simple rules, but consequences of the same simple rules.

Mathematics itself is “unreasonably effective”. That’s why, when faced with a complex world, we shouldn’t be surprised when the same simple rules pop up again and again to explain it. That’s what explaining something is: breaking down something complex into the simple rules that give rise to it. And as mathematics progresses, it becomes more and more clear that a few closely related types of simple rules lie behind any complex phenomenon. While each unexplained fact about the universe may seem unexplained in its own way, as things are explained bit by bit they show just how alike they really are.

Where do you get all those mathematical toys?

I’m at a conference at Caltech this week, so it’s going to be a shorter post than usual.

The conference is on something called the Positive Grassmannian, a precursor to Nima Arkani-Hamed’s much-hyped Amplituhedron. Both are variants of a central idea: take complicated calculations in physics and express them in terms of clean, well-defined mathematical objects.

Because of this, this conference is attended not just by physicists, but by mathematicians as well, and it’s been interesting watching how the two groups interact.

From a physics perspective, mathematicians are great because they give us so many useful tools! Many significant advances in my field happened because a physicist talked to a mathematician and learned that a problem that had stymied the physics world had already been solved in the math community.

This tends to lead to certain expectations among physicists. If a mathematician gives a talk at a physics conference, we expect them to present something we can use. Our ideal math talk is like when Q presents the gadgets at the beginning of a Bond movie: a ton of new toys with just enough explanation for us to use them to save the day in the second act.

Pictured: Mathematicians, through Physicist eyes

You may see the beginning of a problem here, once you realize that physicists are the James Bond in this analogy.

Physicists like to see themselves as the protagonists of their own stories. That’s true of every field, though, to some degree or another. And it’s certainly true of mathematicians.

Mathematicians don’t go to physics conferences just to be someone’s supporting cast. They do it because physics problems are interesting to them: by hearing what physicists are working on they hope to get inspiration for new mathematical structures, concepts jury-rigged together by physicists that represent corners that mathematics hasn’t yet explored. Their goal is to take home an idea that they can turn into something productive, gaining glory among their fellow mathematicians. And if that sounds familiar…

Pictured: Physicists, through Mathematician eyes

While it’s amusing to watch the different expectations go head-to-head, the best collaborations between physicists and mathematicians are those where both sides respect that the other is the protagonist of their own story. Allow for give-and-take, paying attention not just to what you find interesting but to what the other person does, without assuming a tired old movie script, and it’s possible to make great progress.

Of course, that’s true of life in general as well.

What Can Replace Space-Time?

Nima Arkani-Hamed is famous for believing that space-time is doomed, that as physicists we will have to abandon the concepts of space and time if we want to find the ultimate theory of the universe. He’s joked that this is what motivates him to get up in the morning. He tends to bring it up often in talks, both for physicists and for the general public.

The latter especially tend to be baffled by this idea. I’ve heard a lot of questions like “if space-time is doomed, what could replace it?”

In the past, Nima and I both tended to answer this question with a shrug. (Though a more elaborate shrug in his case.) This is the honest answer: we don’t know what replaces space-time, we’re still looking for a good solution. Nima’s Amplituhedron may eventually provide an answer, but it’s still not clear what that answer will look like. I’ve recently realized, though, that this way of responding to the question misses its real thrust.

When people ask me “what could replace space-time?” they’re not asking “what will replace space-time?” Rather, they’re asking “what could possibly replace space-time?” It’s not that they want to know the answer before we’ve found it, it’s that they don’t understand how any reasonable answer could possibly exist.

I don’t think this concern has been addressed much by physicists, and it’s a pity, because it’s not very hard to answer. You don’t even need advanced physics. All you need is some fairly old philosophy. Specifically we’ll use concepts from metaphysics, the branch of philosophy that deals with categories of being.

Think about your day yesterday. Maybe you had breakfast at home, drove to work, had a meeting, then went home and watched TV.

Each of those steps can be thought of as an event. Each event is something that happened that we want to pay attention to. You having breakfast was an event, as was you arriving at work.

These events are connected by relations. Here, each relation specifies the connection between two events. There might be a relation of cause-and-effect, for example, between you arriving at work late and meeting with your boss later in the day.

Space and time, then, can be seen as additional types of relations. Your breakfast is related to you arriving at work: it is before it in time, and some distance from it in space. Before and after, distant in one direction or another, these are all relations between the two events.

Using these relations, we can infer other relations between the events. For example, if we know the distance relating your breakfast and arriving at work, we can make a decent guess at another relation: the difference in the amount of gas in your car.

This way of viewing the world, events connected by relations, is already quite common in physics. With Einstein’s theory of relativity, it’s hard to say exactly when or where an event happened, but the overall relationship between two events (distance in space and time taken together) can be thought of much more precisely. As I’ve mentioned before, the curved space-time necessary for Einstein’s theory of gravity can be thought of equally well as a change in the way you measure distances between two points.
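That “overall relationship between two events” is the invariant interval of special relativity: observers disagree about the separate time and distance between two events, but they all compute the same combination (Δt)² − (Δx)². A toy check, in units where the speed of light is 1 (the event coordinates below are made up for illustration):

```python
import math

# Two events, each given as (t, x), in units where c = 1.
def interval_squared(event_a, event_b):
    """Invariant interval between two events: (delta t)^2 - (delta x)^2."""
    dt = event_b[0] - event_a[0]
    dx = event_b[1] - event_a[1]
    return dt ** 2 - dx ** 2

def boost(event, v):
    """View the same event from a frame moving at speed v (|v| < 1)."""
    t, x = event
    gamma = 1.0 / math.sqrt(1 - v ** 2)
    return (gamma * (t - v * x), gamma * (x - v * t))

breakfast = (0.0, 0.0)  # having breakfast at home
arrival = (1.0, 0.6)    # arriving at work, some time and distance later
# A moving observer assigns different times and places to each event,
# but the interval -- the relation between them -- comes out the same.
print(interval_squared(breakfast, arrival))
print(interval_squared(boost(breakfast, 0.3), boost(arrival, 0.3)))
```

The individual coordinates shift under the boost; the relation between the events doesn’t. That’s the sense in which relativity already treats relations, not positions, as the precise thing.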

So if space and time are relations between events, what would it mean for space-time to be doomed?

The key thing to realize here is that space and time are very specific relations between events, with very specific properties. Some of those properties are what cause problems for quantum gravity, problems which prompt people to suggest that space-time is doomed.

One of those properties is the fact that, when you multiply two distances together, it doesn’t matter which order you do it in. This probably sounds obvious, because you’re used to multiplying normal numbers, for which this is always true anyway. But even slightly more complicated mathematical objects, like matrices, don’t always obey this rule. If distances were this sort of mathematical object, then multiplying them in different orders could give slightly different results. If the difference were small enough, we wouldn’t be able to tell that it was happening in everyday life: distance would have given way to some more complicated concept, but it would still act like distance for us.
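The order-dependence of matrix multiplication is easy to see directly. Here is a minimal check with two 2×2 matrices (any generic pair would do):

```python
# Multiplying 2x2 matrices in different orders gives different answers --
# the property ordinary distances lack, but matrix-valued "distances"
# would have.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]
```

If the entries of A and B were nearly proportional to the identity matrix, the two products would differ only slightly, which is why matrix-like distances could hide behind ordinary-looking distance in everyday life.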

That specific idea isn’t generally suggested as a solution to the problems of space and time, but it’s a useful toy model that physicists have used to solve other problems.

It’s the general principle I want to get across: if you want to replace space and time, you need a relation between events. That relation should behave like space and time on the scales we’re used to, but it can be different on very small scales (Big Bang, inside of Black Holes) and on very large scales (long-term fate of the universe).

Space-time is doomed, and we don’t know yet what’s going to replace it. But whatever it is, whatever form it takes, we do know one thing: it’s going to be a relation between events.

Research or Conference? Can’t it be both?

“If you’re there for two months, for sure you’ll be doing research.”

I wanted to be snarky. I wanted to point out that, as a theoretical physicist, I do research wherever I go. I wanted to say that I even did research on the drive over. (This may not have been true, I think I mostly thought about Magic the Gathering cards.)

More than any of those, though, I wanted to get my travel visa. So instead I said,

“That’s fair.”

“Mmhmm, that’s fair.” Looking down at the invitation letter, she triumphantly pointed to the name of the inviting institution: “South American Institute for Fundamental Research.”

A bit of background: I’m going to Brazil this winter. Partly, this is because winter in Canada is not especially desirable, but it’s also because Sao Paulo’s International Center for Theoretical Physics is running a Program on Integrability, the arcane set of techniques that seeks to bypass the approximate perturbations we often use in particle physics and find full, exact results.

What do I mean by a Program? It’s not the sort of scientific program I’ve talked about before, though the ideas are related. When an institute holds a Program, they’re declaring a theme. For a certain length of time (generally from a few months to a whole semester), there will be a large number of talks at the institute focused on some particular scientific theme. The institute invites people from all over the world who work on that theme. Those people are there to give and attend talks, but they’re also there to share ideas with each other, to network and collaborate and do research.

This is where things get tricky. See, Brazil has multiple types of visas. A Tourist Visa can be used, among other things, for attending a scientific conference. On the other hand, someone coming to Brazil to do research uses Visa 1.

A Program is essentially a long conference…but it’s also an opportunity to do research. So are most short conferences, though! In theoretical physics we have workshops, short conferences explicitly focused on collaboration and research, but even if a conference isn’t a workshop you can bet that we’ll be doing some research there, for sure. We don’t need labs, and some of us don’t even need computers, research can happen whenever the inspiration strikes. The distinction between conferences and research, from our perspective, is an arbitrary one.

In physics, we like to cut through this sort of ambiguity by looking at what’s really important. I wanted to figure out what about research makes the Brazilian government use a different visa for it, whether it was about motivating people to enter the country for specific reasons or tracking certain sorts of activities. I wanted to understand that, because it would let me figure out whether my own research fell under those reasons, and thus figure out objectively which type of visa I ought to have.

I wanted to ask about all of this…but more than anything, I wanted to get my travel visa. So I applied for the visa they told me to, and left.

Why I Can’t Explain Ghosts: Or, a Review of a Popular Physics Piece

Since today is Halloween, I really wanted to write a post about the spookiest particles in physics: ghosts.

And their superpartners, ghost riders.

The problem is, in order to explain ghosts I’d have to explain something called gauge symmetry. And gauge symmetry is quite possibly the hardest topic in modern physics to explain to a general audience.

Deep down, gauge symmetry is the idea that irrelevant extra parts of how we represent things in physics should stay irrelevant. While that sounds obvious, it’s far from obvious how you can go from that to predicting new particles like the Higgs boson.

Explaining this is tough! Tough enough that I haven’t thought of a good way to do it yet.

Which is why I was fairly stoked when a fellow postdoc pointed out a recent popular physics article by Juan Maldacena, explaining gauge symmetry.

Juan Maldacena is a Big Deal. He’s the guy who figured out the AdS/CFT correspondence, showing that string theory (in a particular hyperboloid-shaped space called AdS) and everybody’s favorite N=4 super Yang-Mills theory are secretly the same, a discovery which led to a Big Blue Dot on Paperscape. So naturally, I was excited to see what he had to say.

Big Blue Dot pictured here.

The core analogy he makes is with currencies in different countries. Just like gauge symmetry, currencies aren’t measuring anything “real”: they’re arbitrary conventions put in place because we don’t have a good way of just buying things based on pure “value”. However, also like gauge symmetry, they can have real-life consequences, as differences in exchange rates can lead to currency speculation, letting some people make money and others lose money. In Maldacena’s analogy the Higgs field works like a precious metal, making differences in exchange rates manifest as different prices for that metal in different countries.
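To see how close the analogy sits to the actual math, here’s a toy version in code (my own sketch, not anything from Maldacena’s paper): exchange rates play the role of gauge link variables, redefining a country’s currency is a gauge transformation, and the product of rates around a closed loop is the gauge-invariant quantity that signals a speculation opportunity.

```python
import random

# Four countries sit on the corners of a square; each edge carries an
# exchange rate (a "link variable"). The product of rates around the
# closed loop is gauge invariant: if it differs from 1, a round trip
# through all four currencies makes (or loses) money.

rates = {("A", "B"): 1.2, ("B", "C"): 0.8, ("C", "D"): 1.5, ("D", "A"): 0.9}

def loop_product(rates):
    """Multiply exchange rates around the closed loop A->B->C->D->A."""
    p = 1.0
    for edge in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
        p *= rates[edge]
    return p

def gauge_transform(rates, scale):
    """Redefine each country's currency by an arbitrary factor.
    A rate from X to Y gets multiplied by scale[Y] / scale[X]."""
    return {(x, y): r * scale[y] / scale[x] for (x, y), r in rates.items()}

# Pick arbitrary currency redefinitions for each country.
scale = {c: random.uniform(0.5, 2.0) for c in "ABCD"}
new_rates = gauge_transform(rates, scale)

# The individual rates all change, but the round-trip product does not,
# because the scale factors cancel in pairs around the closed loop.
print(loop_product(rates), loop_product(new_rates))
```

The cancellation around the loop is the whole point: anything you can change by redefining a currency is “gauge”, and only the loop products are physical.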

It’s a solid analogy, and one that is quite close to the real mathematics of the problem (as the paper’s Appendix goes into detail to show). However, I have some reservations, both about the paper as a whole and about the core analogy.

In general, Maldacena doesn’t do a very good job of writing something publicly accessible. There’s a lot of stilted, academic language, and a lot of use of “we” to do things other than lead the reader through a thought experiment. There’s also a sprinkling of terms that I don’t think the average person will understand; for example, I doubt the average college student knows flux as anything other than a zany card game.

Regarding the analogy itself, I think Maldacena has fallen into the common physicist trap of making an analogy that explains things really well…if you already know the math.

This is a problem I see pretty frequently. I keep picking on this article, and I apologize for doing so, but it’s got a great example of this when it describes supersymmetry as involving “a whole new class of number that can be thought of as the square roots of zero”. That’s a really great analogy…if you’re a student learning about the math behind supersymmetry. If you’re not, it doesn’t tell you anything about what supersymmetry does, or how it works, or why anyone might study it. It relates something unfamiliar to something unfamiliar.
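For readers wondering what “square roots of zero” even refers to: the numbers in question (Grassmann numbers) anticommute, which forces each one to square to zero despite being nonzero. A minimal illustration of that behavior, using a nilpotent matrix as a stand-in (my own sketch, not from the article):

```python
# A nonzero object whose square is zero: the 2x2 matrix [[0,1],[0,0]].
# This mimics the defining property of a Grassmann number, eps**2 = 0.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

eps = [[0, 1], [0, 0]]  # nonzero...
print(matmul(eps, eps))  # ...yet its square is [[0, 0], [0, 0]]
```

Which, of course, only reinforces the point above: this is illuminating if you already know some linear algebra, and tells you nothing about what supersymmetry is for.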

I’m worried that Maldacena is doing that in this paper. His setup is mathematically rigorous, but doesn’t say much about the why of things: why do physicists use something like this economic model to understand these forces? How does this lead to what we observe around us in the real world? What’s actually going on, physically? What do particles have to do with dimensionless constants? (If you’re curious about that last one, I like to think I have a good explanation here.)

It’s not that Maldacena ignores these questions; he definitely puts effort into answering them. The problem is that his analogy itself doesn’t really address them. They’re the trickiest part, the part that people need help picturing and framing, the part that would benefit the most from a good analogy. Instead, the core imagery of the piece is wasted on details that don’t really do much for a non-expert.

Maybe I’m wrong about this, and I welcome comments from non-physicists. Do you feel like Maldacena’s account gives you a satisfying idea of what gauge symmetry is?