The Way You Think Everything Is Connected Isn’t the Way Everything Is Connected

I hear it from older people, mostly.

“Oh, I know about quantum physics, it’s about how everything is connected!”

“String theory: that’s the one that says everything is connected, right?”

“Carl Sagan said we are all stardust. So really, everything is connected.”

[Image: a Connect Four board]

It makes Connect Four a lot easier anyway

I always cringe a little when I hear this. There’s a misunderstanding here, but it’s not a nice clean one I can clear up in a few sentences. It’s a bunch of interconnected misunderstandings, mixing some real science with a lot of confusion.

To get it out of the way first, no, string theory is not about how “everything is connected”. String theory describes the world in terms of strings, yes, but don’t picture those strings as links connecting distant places: string theory’s proposed strings are very, very short, much smaller than the scales we can investigate with today’s experiments. The reason they’re thought to be strings isn’t because they connect distant things, it’s because being strings lets them wiggle (counteracting some troublesome wiggles in quantum gravity) and wind (curling up in six extra dimensions in a multitude of ways, giving us what looks like a lot of different particles).

(Also, for technical readers: yes, strings also connect branes, but that’s not the sort of connection these people are talking about.)

What about quantum mechanics?

Here’s where it gets trickier. In quantum mechanics, there’s a phenomenon called entanglement. Entanglement really does connect things in different places…for a very specific definition of “connect”. And there’s a real (but complicated) sense in which these connections end up connecting everything, which you can read about here. There’s even speculation that these sorts of “connections” in some sense give rise to space and time.

You really have to be careful here, though. These are connections of a very specific sort. Specifically, they’re the sort that you can’t do anything through.

Connect two cans with a length of string, and you can send messages between them. Connect two particles with entanglement, though, and you can’t send messages between them…at least not any faster than between two non-entangled particles. Even in a quantum world, physics still respects locality: the principle that you can only affect the world where you are, and that any changes you make can’t travel faster than the speed of light. Ansibles, science-fiction devices that communicate faster than light, can’t actually exist according to our current knowledge.

What kind of connection is entanglement, then? That’s a bit tricky to describe in a short post. One way to think about entanglement is as a connection of logic.

Imagine someone takes a coin and cuts it along the rim into a heads half and a tails half. They put the two halves in two envelopes, and randomly give you one. You don’t know whether you have heads or tails…but you know that if you open your envelope and it shows heads, the other envelope must have tails.

[Image: a nickel]

Unless they’re a spy. Then it could contain something else.

Entanglement starts out with connections like that. Instead of a coin, take a particle that isn’t spinning and “split” it into two particles spinning in different directions, “spin up” and “spin down”. Like the coin, the two particles are “logically connected”: you know if one of them is “spin up” the other is “spin down”.
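
For the mathematically inclined, the standard way to write this kind of two-particle state is the spin singlet (a schematic sketch; the details depend on the actual particles involved):

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\Big( |\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B \Big)
```

Measure particle A and find “spin up”, and particle B must give “spin down”, and vice versa: but the state itself commits to neither option in advance.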

What makes a quantum coin different from a classical coin is that there’s no way to figure out the result in advance. If you watch carefully, you can see which half of the coin gets put into which envelope, but no matter how carefully you look you can’t predict which particle will be spin up and which will be spin down. There’s no “hidden information” in the quantum case, nowhere nearby you can look to figure it out.

That makes the connection seem a lot weirder than a regular logical connection. It also has slightly different implications, weirdness in how it interacts with the rest of quantum mechanics, things you can exploit in various ways. But none of those ways, none of those connections, allow you to change the world faster than the speed of light. In a way, they’re connecting things in the same sense that “we are all stardust” is connecting things: tied together by logic and cause.
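
To make the “can’t send messages” point concrete, here’s a minimal Python sketch (my own illustration, not anything from the original post). It samples the standard quantum probabilities for the singlet state directly, so it isn’t a hidden-variable model; the point is just that Alice’s own statistics never budge, no matter what Bob chooses to measure:

```python
import math
import random

def measure_singlet(angle_a, angle_b):
    """Sample one pair of outcomes (+1 or -1) for Alice and Bob measuring
    a spin singlet along axes separated by the given angles (radians),
    using the textbook quantum probabilities."""
    outcome_a = 1 if random.random() < 0.5 else -1  # Alice's side: always 50/50
    # Given Alice's outcome, Bob agrees with probability sin^2((angle_a - angle_b)/2)
    p_same = math.sin((angle_a - angle_b) / 2) ** 2
    outcome_b = outcome_a if random.random() < p_same else -outcome_a
    return outcome_a, outcome_b

# Same axis: perfect anticorrelation, like the coin halves.
# Other axes: the correlations shift, but Alice's 50/50 never changes,
# so Bob can't signal to her by picking his measurement angle.
for angle_b in [0.0, math.pi / 4, math.pi / 2]:
    trials = [measure_singlet(0.0, angle_b) for _ in range(100_000)]
    alice_up = sum(1 for a, _ in trials if a == 1) / len(trials)
    agree = sum(1 for a, b in trials if a == b) / len(trials)
    print(f"Bob at {angle_b:.2f} rad: Alice sees +1 {alice_up:.3f} of the time, "
          f"outcomes agree {agree:.3f} of the time")
```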

So as long as this is all you mean by “everything is connected” then sure, everything is connected. But often, people seem to mean something else.

Sometimes, they mean something explicitly mystical. They’re people who believe in dowsing rods and astrology, in sympathetic magic, rituals you can do in one place to affect another. There is no support for any of this in physics. Nothing in quantum mechanics, string theory, or big bang cosmology suggests that you can alter the world with the power of your mind alone, or that the stars influence your day-to-day life. That’s just not the sort of connection we’re talking about.

Sometimes, “everything is connected” means something a bit more loose, the idea that someone’s desires guide their fate, that you could “know” something happened to your kids the instant it happens from miles away. This has the same problem, though, in that it’s imagining connections that let you act faster than light, where people play a special role. And once again, these just aren’t that sort of connection.

Sometimes, finally, it’s entirely poetic. “Everything is connected” might just mean a sense of awe at the deep physics in mundane matter, or a feeling that everyone in the world should get along. That’s fine: if you find inspiration in physics then I’m glad it brings you happiness. But poetry is personal, so don’t expect others to find the same inspiration. Your “everything is connected” might not be someone else’s.

Where Grants Go on the Ground

I’ve seen several recent debates about grant funding, arguments about whether this or that scientist’s work is “useless” and shouldn’t get funded. Wading into the specifics is a bit more political than I want to get on this blog right now, and if you’re looking for a general defense of basic science there are plenty to choose from. I’d like to focus on a different part, one where I think the sort of people who want to de-fund “useless” research are wildly overoptimistic.

People who call out “useless” research act as if government science funding works in a simple, straightforward way: scientists say what they want to work on, the government chooses which projects it thinks are worth funding, and the scientists the government chooses get paid.

This may be a (rough) picture of how grants are assigned. For big experiments and grants with very specific purposes, it’s reasonably accurate. But for the bulk of grants distributed among individual scientists, it ignores what happens to the money on the ground, after the scientists get it.

The simple fact of the matter is that what a grant is “for” doesn’t have all that much influence on what it gets spent on. In most cases, scientists work on what they want to, and find ways to pay for it.

Sometimes, this means getting grants for applied work, doing some of that, but also fitting in more abstract theoretical projects during downtime. Sometimes this means sharing grant money, if someone has a promising grad student they can’t fund at the moment and needs the extra help. (When I first got research funding as a grad student, I had to talk to the particle physics group’s secretary, and I’m still not 100% sure why.) Sometimes this means being funded to look into something specific and finding a promising spinoff that takes you in an entirely different direction. Sometimes you can get quite far by telling a good story, like a mathematician I know who gets defense funding to study big abstract mathematical systems because some related systems happen to have practical uses.

Is this unethical? Some of it, maybe. But from what I’ve seen of grant applications, it’s understandable.

The problem is that, if scientists are too loose with what they spend grant money on, grant agencies’ requests tend to be far too specific. I’ve heard of grants that ask you to give a timeline, over the next five years, of each discovery you’re planning to make. That sort of thing just isn’t possible in science: we can lay out a rough direction to go, but we don’t know what we’ll find.

The end result is a bit like complaints about job interviews, where everyone is expected to say they love the company even though no-one actually does. It creates an environment where everyone has to twist the truth just to keep up with everyone else.

The other thing to keep in mind is that there really isn’t any practical way to enforce any of this. Sure, you can require receipts for equipment and the like, but once you’re paying for scientists’ time you don’t have a good way to monitor how they spend it. The best you can do is have experts around to evaluate the scientists’ output…but if those experts understand enough to do that, they’re going to be part of the scientific community, like grant committees usually already are. They’ll have the same expectations as the scientists, and give similar leeway.

So if you want to kill off some “useless” area of research, you can’t do it by picking and choosing who gets grants for what. There are advocates of more drastic actions of course, trying to kill whole agencies or fields, and that’s beyond the scope of this post. But if you want science funding to keep working the way it does, and just have strong opinions about what scientists should do with it, then calling out “useless” research doesn’t do very much: if the scientists in question think it’s useful, they’ll find a way to keep working on it. You’ve slowed them down, but you’ll still end up paying for research you don’t like.

Final note: The rule against political discussion in the comments is still in effect. For this post, that means no specific accusations of one field or another as being useless, or one politician/political party/ideology or another of being the problem here. Abstract discussions and discussions of how the grant system works should be fine.

Movie Review: The Truth is in the Stars

Recently, Perimeter aired a showing of The Truth is in the Stars, a documentary about the influence of Star Trek on science and culture, with a panel discussion afterwards. The documentary follows William Shatner as he wanders around the world interviewing scientists and film industry people about how Star Trek inspired them. Along the way he learns a bit about physics, and collects questions to ask Stephen Hawking at the end.

I’ll start with the good: the piece is cute. They managed to capture some fun interactions with the interviewees, there are good (if occasionally silly) visuals, and the whole thing seems fairly well edited. If you’re looking for an hour of Star Trek nostalgia and platitudes about physics, this is the documentary for you.

That said, it doesn’t go much beyond cute, and it dances between topics in a way that felt unsatisfying.

The piece has a heavy focus on Shatner, especially early on, beginning with a clumsily shoehorned-in visit to his ranch to hear his thoughts on horses. For a while, the interviews are all about him: his jokes, his awkward questions, his worries about getting old. He has a habit of asking the scientists he talks to whether “everything is connected”, which to the scientists’ credit is usually met by a deft change of subject. All of this fades somewhat as the movie progresses, though: whether by a trick of editing, or because after talking to so many scientists he begins to pick up some humility.

(Incidentally, I really ought to have a blog post debunking the whole “everything is connected” thing. The tricky part is that it involves so many different misunderstandings, from confusion around entanglement, to the role of strings, to “we are all star-stuff”, that it’s hard to be comprehensive.)

Most of the scientific discussions are quite superficial, to the point that they’re more likely to confuse inexperienced viewers than to tell them something new (especially the people who hinted at dark energy-based technology…no, just no). While I don’t expect a documentary like this to cover the science in-depth, trying to touch on so many topics in this short a time mostly just fuels the “everything is connected” misunderstanding. One surprising element of the science coverage was the choice to have both Michio Kaku giving a passionate description of string theory and Neil Turok bluntly calling string theory “a mess”. While giving the public “both sides” like that isn’t unusual in other contexts, for some reason most science documentaries I’ve seen take one side or the other.

Of course, the point of the documentary isn’t really to teach science, it’s to show how Star Trek influenced science. Here too, though, the piece was disappointing. Most of the scientists interviewed could tell their usual story about the power of science fiction in their childhood, but didn’t have much to say about Star Trek specifically. It was the actors and producers who had the most to say about Star Trek, from Ben Stiller showing off his Gorn mask to Seth MacFarlane admiring the design of the Enterprise. The best of these was probably Whoopi Goldberg’s story of being inspired by Uhura, which has been covered better elsewhere (and might have been better as Mae Jemison’s similar story, which would at least have involved an astronaut rather than another actor). I did enjoy Neil deGrasse Tyson’s explanation of how as a kid he thought everything on Star Trek was plausible…except for the automatic doors.

Shatner’s meeting with Hawking is the finale, and is the documentary’s strongest section. Shatner is humbled, even devout, in Hawking’s presence, while Hawking seems to show genuine joy swapping jokes with Captain Kirk.

Overall, the piece felt more than a little disjointed. It’s not really about the science, but it didn’t have enough content to be really about Star Trek either. If it was “about” anything, it was Shatner’s journey: an aging actor getting to hang out and chat with interesting people around the world. If that sounds fun, you should watch it, but don’t expect anything much deeper than that.

You Can’t Smooth the Big Bang

As a kid, I was fascinated by cosmology. I wanted to know how the universe began, possibly disproving gods along the way, and I gobbled up anything that hinted at the answer.

At the time, I had to be content with vague slogans. As I learned more, I could match the slogans to the physics, to see what phrases like “the Big Bang” actually meant. A large part of why I went into string theory was to figure out what all those documentaries are actually about.

In the end, I didn’t go into cosmology, due to my ignorance of a few key facts while in college (mostly, who Vilenkin was). Thus, while I could match some of the old popularization stories to the science, there were a few I never really understood. In particular, there were two claims I never quite saw fleshed out: “The universe emerged from nothing via quantum tunneling” and “According to Hawking, the big bang was not a singularity, but a smooth change with no true beginning.”

As a result, I’m delighted that I’ve recently learned the physics behind these claims, in the context of a spirited take-down of both by Perimeter’s Director Neil Turok.

[Photo: Neil Turok. Credit: Jens Langen]

My boss

Neil held a surprise string group meeting this week to discuss the paper I linked above, “No smooth beginning for spacetime” with Job Feldbrugge and Jean-Luc Lehners, as well as earlier work with Steffen Gielen. In it, he talked about problems in the two proposals I mentioned: Hawking’s suggestion that the big bang was smooth with no true beginning (really, the Hartle-Hawking no boundary proposal) and the idea that the universe emerged from nothing via quantum tunneling (really, Vilenkin’s tunneling from nothing proposal).

In popularization-speak, these two proposals sound completely different. In reality, though, they’re quite similar (and as Neil argues, they end up amounting to the same thing). I’ll steal a picture from his paper to illustrate:

[Figure from the paper: three sketches of the no-boundary universe, unperturbed, with small wiggles, and with wiggles grown large]

The picture on the left depicts the universe under the Hartle-Hawking proposal, with time increasing upwards on the page. As the universe gets older, it looks like the expanding (de Sitter) universe we live in. At the beginning, though, there’s a cap, one on which time ends up being treated not in the usual way (Lorentzian space) but on the same footing as the other dimensions (Euclidean space). This lets space be smooth, rather than bunching up in a big bang singularity. After treating time in this way the result is reinterpreted (via a quantum field theory trick called Wick rotation) as part of normal space-time.
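
For the technically curious, Wick rotation is, schematically, the substitution t → −iτ, trading an oscillating Lorentzian path integral for a damped Euclidean one (this is just the standard cartoon, glossing over exactly the subtleties the argument below turns on):

```latex
\int \mathcal{D}g \; e^{\,i S_{\text{Lorentzian}}[g]/\hbar} \quad \xrightarrow{\; t \,\to\, -i\tau \;} \quad \int \mathcal{D}g \; e^{\,-S_{\text{Euclidean}}[g]/\hbar}
```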

What’s the connection to Vilenkin’s tunneling picture? Well, when we talk about quantum tunneling, we also end up describing it with Euclidean space. Saying that the universe tunneled from nothing and saying it has a Euclidean “cap” then end up being closely related claims.

Before Neil’s work these two proposals weren’t thought of as the same because they were thought to give different results. What Neil is arguing is that this is due to a fundamental mistake on Hartle and Hawking’s part. Specifically, Neil is arguing that the Wick rotation trick that Hartle and Hawking used doesn’t work in this context, when you’re trying to calculate small quantum corrections for gravity. In normal quantum field theory, it’s often easier to go to Euclidean space and use Wick rotation, but for quantum gravity Neil is arguing that this technique stops being rigorous. Instead, you should stay in Lorentzian space, and use a more powerful mathematical technique called Picard-Lefschetz theory.
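
Schematically (my gloss, not the paper’s notation), Picard-Lefschetz theory rewrites an oscillating integral, here over the lapse N, as a sum over steepest-descent contours (“Lefschetz thimbles”) through saddle points, where everything converges honestly:

```latex
\int_{\mathbb{R}} dN \; e^{\,i S(N)/\hbar} = \sum_{\sigma} n_{\sigma} \int_{\mathcal{J}_{\sigma}} dN \; e^{\,i S(N)/\hbar}
```

The integers n_σ record which saddle points actually contribute, which is where disagreements about the “right” result can enter.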

Using this technique, Neil found that Hartle and Hawking’s nicely behaved result was mistaken, and the real result of what Hartle and Hawking were proposing looks more like Vilenkin’s tunneling proposal.

Neil then tried to see what happens when there’s some small perturbation from a perfect de Sitter universe. In general in physics, if you want to trust a result, it ought to be stable: small changes should stay small. Otherwise, you’re not really starting from the right point, and you should instead be looking at wherever the changes end up taking you. What Neil found was that the Hartle-Hawking and Vilenkin proposals weren’t stable. If you start with a small wiggle in your no-boundary universe you get, not the purple middle drawing with small wiggles, but the red one, with wiggles that rapidly grow. The implication is that the Hartle-Hawking and Vilenkin proposals aren’t just secretly the same: neither of them can describe a stable universe.

Neil argues that this problem is quite general, and happens under the following conditions:

  1. A universe that begins smoothly and semi-classically (where quantum corrections are small) with no sharp boundary,
  2. with a positive cosmological constant (the de Sitter universe mentioned earlier),
  3. under which the universe expands many times, allowing the small fluctuations to grow large.

If the universe avoids one of those conditions (maybe the cosmological constant changes in the future and the universe stops expanding, for example) then you might be able to avoid Neil’s argument. But if not, you can’t have a smooth semi-classical beginning and still have a stable universe.

Now, no debate in physics ends just like that. Hartle (and collaborators) don’t disagree with Neil’s insistence on Picard-Lefschetz theory, but they argue there’s still a way to make their proposal work. Neil mentioned at the group meeting that he thinks even the new version of Hartle’s proposal doesn’t solve the problem; he’s been working out the calculation with his collaborators to make sure.

Often, one hears about an idea from science popularization and then it never gets mentioned again. The public hears about a zoo of proposals without ever knowing which ones worked out. I think child-me would appreciate hearing what happened to Hawking’s proposal for a universe with no boundary, and to Vilenkin’s proposal for a universe emerging from nothing. Adult-me certainly does. I hope you do too.

An Amplitudes Flurry

Now that we’re finally done with flurries of snow here in Canada, the arXiv has instead been hit with a flurry of amplitudes papers over the last week.

[Image: construction in Kitchener]

We’re also seeing a flurry of construction, but that’s less welcome.

Andrea Guerrieri, Yu-tin Huang, Zhizhong Li, and Congkao Wen have a paper on what are known as soft theorems. Most famously studied by Weinberg, soft theorems are proofs about what happens when a particle in an amplitude becomes “soft”, or when its momentum becomes very small. Recently, these theorems have gained renewed interest, as new amplitudes techniques have allowed researchers to go beyond Weinberg’s initial results (to “sub-leading” order) in a variety of theories.
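
As a reminder of what such a theorem looks like: Weinberg’s leading soft graviton theorem, schematically and with coupling constants suppressed, says that as the graviton’s momentum q goes to zero,

```latex
\mathcal{A}_{n+1}(p_1,\dots,p_n; q) \;\xrightarrow{\; q \to 0 \;}\; \left( \sum_{i=1}^{n} \frac{\varepsilon_{\mu\nu}\, p_i^{\mu} p_i^{\nu}}{p_i \cdot q} \right) \mathcal{A}_n(p_1,\dots,p_n) + \dots
```

The “sub-leading” orders mentioned above are the next terms in this expansion in q.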

Guerrieri, Huang, Li, and Wen’s contribution to the topic looks like it clarifies things quite a bit. Previously, most of the papers I’d seen about this had been isolated examples. This paper ties the various cases together in a very clean way, and does important work in making some older observations more rigorous.


Vittorio Del Duca, Claude Duhr, Robin Marzucca, and Bram Verbeek wrote about transcendental weight in something known as the multi-Regge limit. I’ve talked about transcendental weight before: loosely, it’s counting the power of pi that shows up in formulas. The multi-Regge limit concerns amplitudes with very high energies, in which we have a much better understanding of how the amplitudes should behave. I’ve used this limit before, to calculate amplitudes in N=4 super Yang-Mills.
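
To make “transcendental weight” slightly more concrete: logarithms have weight one, ζ-values have weight equal to their argument, and weights add when functions multiply,

```latex
w(\log x) = 1, \qquad w(\pi) = 1, \qquad w\big(\zeta(n)\big) = n, \qquad w(f\,g) = w(f) + w(g),
```

so, for instance, ζ(2) = π²/6 carries weight two.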

One slogan I love to repeat is that N=4 super Yang-Mills isn’t just a toy model, it’s the most transcendental part of QCD. I’m usually fairly vague about this, because it’s not always true: while often a calculation in N=4 super Yang-Mills will give the part of the same calculation in QCD with the highest power of pi, this isn’t always the case, and it’s hard to propose a systematic principle for when it should happen. Del Duca, Duhr, Marzucca, and Verbeek’s work is a big step in that direction. While some descriptions of the multi-Regge limit obey this property, others don’t, and in looking at the ones that don’t the authors gain a better understanding of what sorts of theories only have a “maximally transcendental part”. What they find is that even when such theories aren’t restricted to N=4 super Yang-Mills, they have shared properties, like supersymmetry and conformal symmetry. Somehow these properties are tied to the transcendentality of functions in the amplitude, in a way that’s still not fully understood.


My colleagues at Perimeter released two papers over the last week: one, by Freddy Cachazo and Alfredo Guevara, uses amplitudes techniques to look at classical gravity, while the other, by Sebastian Mizera and Guojun Zhang, looks at one of the “pieces” inside string theory amplitudes.

I worked with Freddy and Alfredo on an early version of their result, back at the PSI Winter School. While I was off lazing about in Santa Barbara, they were hard at work trying to understand how the quantum-looking “loops” one can use to make predictions for potential energy in classical gravity are secretly classical. What they ended up finding was a trick to figure out whether a given amplitude was going to have a classical part or be purely quantum. So far, the trick works for amplitudes with one loop, and a few special cases at higher loops. It’s still not clear if it works for the general case, and there’s a lot of work still to do to understand what it means, but it definitely seems like an idea with potential. (Pun mostly not intended.)

I’ve talked before about “Z theory”, the weird thing you get when you isolate the “stringy” part of string theory amplitudes. What Sebastian and Guojun have carved out isn’t quite the same piece, but it’s related. I’m still not sure of the significance of cutting string amplitudes up in this way; I’ll have to read the paper more thoroughly (or chat with the authors) to find out.

Shades of Translation

I was playing Codenames with some friends, a game where you give one-word clues to point your teammates at multiple answer words. I wanted to hint at “undertaker” and “march”, so I figured I’d do “funeral march”. Since that’s two words, I needed one word that meant something similar. I went with “dirge”, then immediately regretted it as my teammates spent the better part of two minutes trying to figure out what it meant. In the end they went with “slug”.

[Image: a slug]

A dirge in its natural habitat.

If I had gone for requiem instead, we would have won. Heck, if I had just used “funeral”, we would have had a fighting chance. I had assumed my team knew the same words I did: they were also native English speakers, also nerds, etc. But the words they knew were still a shade different from the words I knew, and that made the difference.

When communicating science, you have to adapt to your audience. Knowing this, it’s still tempting to go for a shortcut. You list a few possible audiences, like “physicists”, or “children”, and then just make a standard explanation for each. This works pretty well…until it doesn’t, and your audience assumes a “dirge” is a type of slug.

In reality, each audience is different. Rather than just memorizing “translations” for a few specific groups, you need to pay attention to the shades of understanding in between.

On Wednesdays, Perimeter holds an Interdisciplinary Lunch. They cover a table with brown paper (for writing on) and impose one rule: you can’t sit next to someone in the same field.

This week, I sat next to an older fellow I hadn’t met before. He asked me what I did, and I gave my “standard physicist explanation”. This tends to be pretty heavy on jargon: while I don’t go too deep into my sub-field’s lingo, I don’t want to risk “talking down” to a physicist I don’t know. The end result is that I have to notice those “shades” of understanding as I go, hoping to get enough questions to change course if I need to.

Then I asked him what he did, and he patiently walked me through it. His explanation was more gradual: less worried about talking down to me, he was able to build up the background around his work, and the history of who worked on what. It was a bit humbling, to see the sort of honed explanation a person can build after telling variations on the same story for years.

In the end, we both had to adapt to what the other understood, to change course when our story wasn’t getting through. Neither of us could stick with the “standard physicist explanation” all the way to the end. Both of us had to shift from one shade to another, improving our translation.

The Many Worlds of Condensed Matter

Physics is the science of the very big and the very small. We study the smallest scales, the fundamental particles that make up the universe, and the largest, stars on up to the universe as a whole.

We also study the world in between, though.

That’s the domain of condensed matter, the study of solids, liquids, and other medium-sized arrangements of stuff. And while it doesn’t make the news as often, it’s arguably the biggest field in physics today.

(In case you’d like some numbers, the American Physical Society has divisions dedicated to different sub-fields. Condensed Matter Physics is almost twice the size of the next biggest division, Particles & Fields. Add in other sub-fields that focus on medium-sized stuff, like solid state physics, optics, or biophysics, and you get a majority of physicists focused on the middle of the distance scale.)

When I started grad school, I didn’t pay much attention to condensed matter and related fields. Beyond the courses in quantum field theory and string theory, my “breadth” courses were on astrophysics and particle physics. But over and over again, from people in every sub-field, I kept hearing the same recommendation:

“You should take Solid State Physics. It’s a really great course!”

At the time, I never understood why. It was only later, once I had some research under my belt, that I realized:

Condensed matter uses quantum field theory!

The same basic framework, describing the world in terms of rippling quantum fields, doesn’t just work for fundamental particles. It also works for materials. Rather than describing the material in terms of its fundamental parts, condensed matter physicists “zoom out” and talk about overall properties, like sound waves and electric currents, treating them as if they were the particles of quantum field theory.
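
The classic textbook example (a generic sketch, not tied to any paper discussed here) is a chain of atoms of mass m joined by springs of stiffness K with spacing a. Its vibrations have the dispersion relation

```latex
\omega(k) = 2\sqrt{\frac{K}{m}} \left| \sin\frac{k a}{2} \right| \;\approx\; a\sqrt{\frac{K}{m}}\, |k| \quad \text{for small } k,
```

and that linear small-k behavior is the dispersion of a massless relativistic particle. Quantize these waves and you get phonons: “particles” of sound, handled with the same quantum field theory machinery as the fundamental kind.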

This tends to confuse the heck out of journalists. Not used to covering condensed matter (and sometimes egged on by hype from the physicists), they mix up the metaphorical particles of these systems with the sort of particles made by the LHC, with predictably dumb results.

Once you get past the clumsy journalism, though, this kind of analogy has a lot of value.

Occasionally, you’ll see an article about string theory providing useful tools for condensed matter. This happens, but it’s less widespread than some of the articles make it out to be: condensed matter is a huge and varied field, and string theory applications tend to be of interest to only a small piece of it.

It doesn’t get talked about much, but the dominant trend is actually in the other direction: increasingly, string theorists need to have at least a basic background in condensed matter.

String theory’s curse/triumph is that it can give rise not just to one quantum field theory, but many: a vast array of different worlds obtained by twisting extra dimensions in different ways. Particle physicists tend to study a fairly small range of such theories, looking for worlds close enough to ours that they still fit the evidence.

Condensed matter, in contrast, creates its own worlds. Pick the right material, take the right slice, and you get quantum field theories of almost any sort you like. While you can’t go to higher dimensions than our usual four, you can certainly look at lower ones, at the behavior of currents on a sheet of metal or atoms arranged in a line. This has led some condensed matter theorists to examine a wide range of quantum field theories with one strange behavior or another, theories that wouldn’t have occurred to particle physicists but that, in many cases, are part of the cornucopia of theories you can get out of string theory.

So if you want to explore the many worlds of string theory, the many worlds of condensed matter offer a useful guide. Increasingly, tools from that community, like integrability and tensor networks, are migrating over to ours.

It’s gotten to the point where I genuinely regret ignoring condensed matter in grad school. Parts of it are ubiquitous enough, and useful enough, that some of it is an expected part of a string theorist’s background. The many worlds of condensed matter, as it turned out, were well worth a look.

Pop Goes the Universe and Other Cosmic Microwave Background Games

(With apologies to whoever came up with this “book”.)

Back in February, Ijjas, Steinhardt, and Loeb wrote an article for Scientific American titled “Pop Goes the Universe” criticizing cosmic inflation, the proposal that the universe underwent a period of rapid expansion early in its life, smoothing it out to achieve the (mostly) uniform universe we see today. Recently, Scientific American published a response by Guth, Kaiser, Linde, Nomura, and 29 co-signers. This was followed by a counter-response, which is the usual number of steps for this sort of thing before it dissipates harmlessly into the blogosphere.

In general, string theory, supersymmetry, and inflation tend to be criticized in very similar ways. Each gets accused of being unverifiable, able to be tuned to match any possible experimental result. Each has been claimed to be unfairly dominant, its position as “default answer” more due to the bandwagon effect than the idea’s merits. All three tend to get discussed in association with the multiverse, and blamed for dooming physics as a result. And all are frequently defended with one refrain: “If you have a better idea, what is it?”

It’s probably tempting (on both sides) to view this as just another example of that argument. In reality, though, string theory, supersymmetry, and inflation are all in very different situations. The details matter. And I worry that in this case both sides are too ready to assume the other is just making the “standard argument”, and end up talking past each other.

When people say that string theory makes no predictions, they’re correct in a sense, but off topic: the majority of string theorists aren’t making the sort of claims that require successful predictions. When people say that inflation makes no predictions, if you assume they mean the same thing that people mean when they accuse string theory of making no predictions, then they’re flat-out wrong. Unlike string theorists, most people who work on inflation care a lot about experiment. They write papers filled with predictions, consequences for this or that model if this or that telescope sees something in the near future.

I don’t think Ijjas, Steinhardt, and Loeb were making that kind of argument.

When people say that supersymmetry makes no predictions, there’s some confusion of scope. (Low-energy) supersymmetry isn’t one specific proposal that needs defending on its own. It’s a class of different models, each with its own predictions. Given a specific proposal, one can see if it’s been ruled out by experiment, and predict what future experiments might say about it. Ruling out one model doesn’t rule out supersymmetry as a whole, but it doesn’t need to, because any given researcher isn’t arguing for supersymmetry as a whole: they’re arguing for their particular setup. The right “scope” is between specific supersymmetric models and specific non-supersymmetric models, not both as general principles.

Guth, Kaiser, Linde, and Nomura’s response follows similar lines in defending inflation. They point out that the wide variety of models are subject to being ruled out in the face of observation, and compare to the construction of the Standard Model in particle physics, with many possible parameters under the overall framework of Quantum Field Theory.

Ijjas, Steinhardt, and Loeb’s article certainly looked like it was making this sort of mistake. But as they clarify in the FAQ of their counter-response, they’ve got a more serious objection. They’re arguing that, unlike in the case of supersymmetry or the Standard Model, specific inflation models do not lead to specific predictions. They’re arguing that, because inflation typically leads to a multiverse, any specific model will in fact lead to a wide variety of possible observations. In effect, they’re arguing that the multitude of people busily making predictions based on inflationary models are missing a step in their calculations, underestimating their errors by a huge margin.

This is where I really regret that these arguments usually end after three steps (article, response, counter-response). Here Ijjas, Steinhardt, and Loeb are making what is essentially a technical claim, one that Guth, Kaiser, Linde, and Nomura could presumably respond to with a technical response, after which the rest of us would actually learn something. As-is, I certainly don’t have the background in inflation to know whether or not this point makes sense, and I’d love to hear from someone who does.

One aspect of this exchange that baffled me was the “accusation” that Ijjas, Steinhardt, and Loeb were just promoting their own work on bouncing cosmologies. (I put “accusation” in quotes because while Ijjas, Steinhardt, and Loeb seem to treat it as if it were an accusation, Guth, Kaiser, Linde, and Nomura don’t obviously mean it as one.)

“Bouncing cosmology” is Ijjas, Steinhardt, and Loeb’s answer to the standard “If you have a better idea, what is it?” response. It wasn’t the focus of their article, but while they seem to think this speaks well of them (hence their treatment of “promoting their own work” as if it were an accusation), I don’t. I read a lot of Scientific American growing up, and the best articles focused on explaining a positive vision: some cool new idea, mainstream or not, that could capture the public’s interest. That kind of article could still have included criticism of inflation; you’d want it in there to justify the use of a bouncing cosmology. But by going beyond that, it would have avoided falling into the standard back and forth these arguments tend to fall into, and maybe we would have actually learned from the exchange.

What Makes Light Move?

Light always moves at the speed of light.

It’s not alone in this: anything that lacks mass moves at the speed of light. Gluons, if they weren’t constantly interacting with each other, would move at the speed of light. Neutrinos, back when we thought they were massless, were thought to move at the speed of light. Gravitational waves, and by extension gravitons, move at the speed of light.

This is, on the face of it, a weird thing to say. If I say a jet moves at the speed of sound, I don’t mean that it always moves at the speed of sound. Find it in its hangar and hopefully it won’t be moving at all.

And so, people occasionally ask me, why can’t we find light in its hangar? Why does light never stand still? What makes light move?

(For the record, you can make light “stand still” in a material, but that’s because the material is absorbing and reflecting it, so it’s not the “same” light traveling through. Compare the speed of a wave of hands in a stadium versus the speed you could run past the seats.)

This is surprisingly tricky to explain without math. Some people point out that if you want to see light at rest you need to speed up to catch it, but you can’t accelerate enough unless you too are massless. This probably sounds a bit circular. Some people talk about how, from light’s perspective, no time passes at all. This is true, but it seems to confuse more than it helps. Some people say that light is “made of energy”, but I don’t like that metaphor. Nothing is “made of energy”, nor is anything “made of mass” either. Mass and energy are properties things can have.

I do like game metaphors though. So, imagine that each particle (including photons, particles of light) is a character in an RPG.

[Image: Light Yagami from Death Note]

For bonus points, play Light in an RPG.

You can think of energy as the particle’s “character points”. When the particle builds its character it gets a number of points determined by its energy. It can spend those points increasing its “stats”: mass and momentum, via the lesser-known big brother of E=mc^2, E^2=p^2c^2+m^2c^4.

Maybe the particle chooses to play something heavy, like a Higgs boson. Then they spend a lot of points on mass, and don’t have as much to spend on momentum. If they picked something lighter, like an electron, they’d have more to spend, so they could go faster. And if they spent nothing at all on mass, like light does, they could use all of their energy “points” boosting their speed.

Now, it turns out that these “energy points” don’t boost speed one for one, which is why low-energy light isn’t any slower than high-energy light. Instead, speed is determined by the ratio between energy and momentum. When they’re proportional to each other, when E^2=p^2c^2, then a particle is moving at the speed of light.

(Why this is the case is trickier to explain. You’ll have to trust me or Wikipedia that the math works out.)
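
If you’d like to play with the ratio yourself, here’s a minimal Python sketch (my own toy, using nothing beyond the formulas above): fix a budget of energy “points” and see how fast each choice of mass lets you move, via v/c = pc/E.

```python
import math

def speed_fraction(energy_gev, mass_gev):
    """v/c for a particle of total energy E and rest mass m, both in GeV
    (factors of c absorbed into the units).
    Uses E^2 = (pc)^2 + (mc^2)^2, so pc = sqrt(E^2 - m^2), and v/c = pc/E."""
    if mass_gev > energy_gev:
        raise ValueError("total energy can't be less than rest energy")
    pc = math.sqrt(energy_gev**2 - mass_gev**2)
    return pc / energy_gev

# Same "character points" (1 GeV of energy), different mass budgets:
for name, mass_gev in [("photon", 0.0), ("electron", 0.000511), ("proton", 0.938)]:
    print(f"{name}: v/c = {speed_fraction(1.0, mass_gev):.6f}")
```

Spend nothing on mass, like the photon, and you come out at exactly v/c = 1 for any energy at all.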

Some of you may be happy with this explanation, but others will accuse me of passing the buck. Ok, a photon with any energy will move at the speed of light. But why do photons have any energy at all? And even if they must move at the speed of light, what determines which direction?

Here I think part of the problem is an old physics metaphor, probably dating back to Newton, of a pool table.

[Image: racked pool balls]

A pool table is a decent metaphor for classical physics. You have moving objects following predictable paths, colliding off each other and the walls of the table.

Where people go wrong is in projecting this metaphor back to the beginning of the game. At the beginning of a game of pool, the balls are at rest, racked in the center. Then one of them is hit with the pool cue, and they’re set into motion.

In physics, we don’t tend to have such neat and tidy starting conditions. In particular, things don’t have to start at rest before something whacks them into motion.

A photon’s “start” might come from an unstable Higgs boson produced by the LHC. The Higgs decays, and turns into two photons. Since energy is conserved, these two each must have half of the energy of the original Higgs, including the energy that was “spent” on its mass. This process is quantum mechanical, and with no preferred direction the photons will emerge in a random one.
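
In the rest frame of the Higgs the bookkeeping is simple: with m_H ≈ 125 GeV, conserving energy and momentum gives

```latex
E_{\gamma} = \frac{m_H c^2}{2} \approx 62.5\ \text{GeV}, \qquad \vec{p}_{\gamma_1} = -\vec{p}_{\gamma_2},
```

two back-to-back photons, each carrying half the Higgs’s energy, headed in a quantum-mechanically random direction.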

Photons in the LHC may seem like an artificial example, but in general whenever light is produced it’s due to particles interacting, and conservation of energy and momentum will send the light off in one direction or another.

(For the experts, there is of course the possibility of very low energy soft photons, but that’s a story for another day.)

Not even the beginning of the universe resembles that racked set of billiard balls. The question of what “initial conditions” make sense for the whole universe is a tricky one, but there isn’t a way to set it up where you start with light at rest. It’s not just that it’s not the default option: it isn’t even an available option.

Light moves at the speed of light, no matter what. That isn’t because light started at rest, and something pushed it. It’s because light has energy, and a particle has to spend its “character points” on something.


KITP Conference Retrospective

I’m back from the conference in Santa Barbara, and I thought I’d share a few things I found interesting. (For my non-physicist readers: I know it’s been a bit more technical than usual recently, I promise I’ll get back to some general audience stuff soon!)

James Drummond talked about efforts to extend the hexagon function method I work on to amplitudes with seven (or more) particles. In general, the method involves starting with a guess for what an amplitude should look like, and honing that guess based on behavior in special cases where it’s easier to calculate. In one of those special cases (called the multi-Regge limit), I had thought it would be quite difficult to calculate for more than six particles, but James clarified for me that there’s really only one additional piece needed, and they’re pretty close to having a complete understanding of it.

There were a few talks about ways to think about amplitudes in quantum field theory as the output of a string theory-like setup. There’s been progress pushing to higher quantum-ness, and in understanding the weird web of interconnected theories this setup gives rise to. In the comments, Thoglu asked about one part of this web of theories called Z theory.

Z theory is weird. Most of the theories that come out of this “web” come from a consistent sort of logic: just like you can “square” Yang-Mills to get gravity, you can “square” other theories to get more unusual things. In possibly the oldest known example, you can “square” the part of string theory that looks like Yang-Mills at low energy (open strings) to get the part that looks like gravity (closed strings). Z theory asks: could the open string also come from “multiplying” two theories together? Weirdly enough, the answer is yes: it comes from “multiplying” normal Yang-Mills with a part that takes care of the “stringiness”, a part which Oliver Schlotterer is calling “Z theory”. It’s not clear whether this Z theory makes sense as a theory on its own (for the experts: it may not even be unitary) but it is somewhat surprising that you can isolate a “building block” that just takes care of stringiness.
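
For the curious, the original “square” is the tree-level KLT relation, which schematically (couplings and indices suppressed) reads

```latex
\mathcal{M}_{n}^{\text{gravity}} \;\sim\; \sum_{\alpha, \beta} K(\alpha, \beta) \, A_{n}^{\text{YM}}(\alpha) \, \tilde{A}_{n}^{\text{YM}}(\beta),
```

a sum over orderings α and β of the external particles, weighted by the KLT kernel K. Z theory’s claim, as I understand it, is the analogous product with one Yang-Mills factor swapped out for Z theory’s “stringy” amplitudes.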

Peter Young in the comments asked about the Correlahedron. Scattering amplitudes ask a specific sort of question: if some particles come in from very far away, what’s the chance they scatter off each other and some other particles end up very far away? Correlators ask a more general question, about the relationships of quantum fields at different places and times, of which amplitudes are a special case. Just as the Amplituhedron is a geometrical object that specifies scattering amplitudes (in a particular theory), the Correlahedron is supposed to represent correlators (in the same theory). In some sense (different from the sense above) it’s the “square” of the Amplituhedron, and the process that gets you from it to the Amplituhedron is a geometrical version of the process that gets you from the correlator to the amplitude.

For the Amplituhedron, there’s a reasonably smooth story of how to get the amplitude. News articles tended to say the amplitude was the “volume” of the Amplituhedron, but that’s not quite correct. In fact, to find the amplitude you need to add up, not the inside of the Amplituhedron, but something that goes infinite at the Amplituhedron’s boundaries. Finding this “something” can be done on a case by case basis, but it gets tricky in more complicated cases.
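
A baby example of this “something” (my own illustration, far simpler than the Amplituhedron itself): for the triangle x ≥ 0, y ≥ 0, x + y ≤ 1, the relevant object is not the area but the form

```latex
\Omega = \frac{dx \wedge dy}{x \, y \, (1 - x - y)},
```

which blows up precisely on the triangle’s three edges. The amplitude comes from the analogue of Ω for the Amplituhedron, and writing that analogue down is the tricky, case-by-case part.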

For the Correlahedron, this part of the story is missing: they don’t know how to define this “something”, the old recipe doesn’t work. Oddly enough, this actually makes me optimistic. This part of the story is something that people working on the Amplituhedron have been trying to avoid for a while, to find a shape where they can more honestly just take the volume. The fact that the old story doesn’t work for the Correlahedron suggests that it might provide some insight into how to build the Amplituhedron in a different way, that bypasses this problem.

There were several more talks by mathematicians trying to understand various aspects of the Amplituhedron. One of them was by Hugh Thomas, who as a fun coincidence actually went to high school with Nima Arkani-Hamed, one of the Amplituhedron’s inventors. He’s now teamed up with Nima and Jaroslav Trnka to try to understand what it means to be inside the Amplituhedron. In the original setup, they had a recipe to generate points inside the Amplituhedron, but they didn’t have a fully geometrical picture of what put them “inside”. Unlike with a normal shape, with the Amplituhedron you can’t just check which side of the wall you’re on. Instead, they can flatten the Amplituhedron, and observe that for points “inside” the Amplituhedron winds around them a specific number of times (hence “Unwinding the Amplituhedron”). Flatten it down to a line and you can read this off from the list of flips over your point, an on-off sequence like binary. If you’ve ever heard the buzzword “scattering amplitudes as binary code”, this is where that comes from.

They also have a better understanding of how supersymmetry shows up in the Amplituhedron, which Song He talked about in his talk. Previously, supersymmetry looked to be quite central, part of the basic geometric shape. Now, they can instead understand it in a different way, with the supersymmetric part coming from derivatives (for the specialists: differential forms) of the part in normal space and time. The encouraging thing is that you can include these sorts of derivatives even if your theory isn’t supersymmetric, to keep track of the various types of particles, and Song provided a few examples in his talk. This is important, because it opens up the possibility that something Amplituhedron-like could be found for a non-supersymmetric theory. Along those lines, Nima talked about ways that aspects of the “nice” description of space and time we use for the Amplituhedron can be generalized to other messier theories.

While he didn’t talk about it at the conference, Jake Bourjaily has a new paper out about a refinement of the generalized unitarity technique I talked about a few weeks back. Generalized unitarity involves matching a “cut up” version of an amplitude to a guess. What Jake is proposing is that in at least some cases you can start with a guess that’s as easy to work with as possible, where each piece of the guess matches up to just one of the “cuts” that you’re checking.  Think about it like a game of twenty questions where you’ve divided all possible answers into twenty individual boxes: for each box, you can just ask “is it in this box”?

Finally, I’ve already talked about the highlight of the conference, so I can direct you to that post for more details. I’ll just mention here that there’s still a fair bit of work to do for Zvi Bern and collaborators to get their result into a form they can check, since the initial output of their setup is quite messy. It’s led to worries about whether they’ll have enough computer power at higher loops, but I’m confident that they still have a few tricks up their sleeves.