Tag Archives: relativity

Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?) and used it as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

Still, the reasoning leading up to that concession contains a few misunderstandings. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs use physical principles and toy models to say fundamental things about quantum gravity, thinking of space and time as being built up from entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to put them there. The amplituhedron is not a theory of everything; it is not, at present, even a theory of quantum gravity. It’s a research direction that might shed new light on quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

Lessons From Neutrinos, Part II

Last week I talked about the history of neutrinos. Neutrinos come in three types, or “flavors”. Electron neutrinos are the easiest: they’re produced alongside electrons and positrons in the different types of beta decay. Electrons have more massive cousins, called muon and tau particles. As it turns out, each of these cousins has a corresponding flavor of neutrino: muon neutrinos, and tau neutrinos.

For quite some time, physicists thought that all of these neutrinos had zero mass.

(If the idea of a particle with zero mass confuses you, think about photons. A particle with zero mass travels, like a photon, at the speed of light. This doesn’t make them immune to gravity: just as no light can escape a black hole, neither can any other massless particle. It turns out that once you take into account Einstein’s general theory of relativity, gravity cares about energy, not just mass.)

Eventually, physicists realized they were wrong: neutrinos have a small non-zero mass after all. The evidence that convinced them might seem a bit strange, though. Physicists didn’t weigh the neutrinos, or measure their speed. Instead, they observed that different flavors of neutrinos transform into each other. We say that they oscillate: electron neutrinos oscillate into muon or tau neutrinos, which oscillate into the other flavors, and so on. Over time, a beam of electron neutrinos will become a beam of mostly muon and tau neutrinos, before becoming a beam of electron neutrinos again.

That might not sound like it has much to do with mass. To understand why it does, you’ll need to learn this post’s lesson:

Lesson 2: Mass Is Just How Particles Move

Oscillating particles seem like a weird sort of evidence for mass. What would be a more normal kind of evidence?

Those of you who’ve taken physics classes might remember the equation F=ma. Apply a known force to something, see how much it accelerates, and you can calculate its mass. If you’ve had a bit more physics, you’ll know that this isn’t quite the right equation to use for particles close to the speed of light, but that there are other equations we can use in a similar way. In particular, using relativity, we have E^2=p^2 c^2 + m^2 c^4. (At rest, p=0, and we have the famous E=mc^2). This lets us do the same kind of thing: give something a kick and see how it moves.
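
To make that concrete, here’s a toy version in Python. The function name and the numbers are my own made-up illustration, not anyone’s real analysis; I’m working in units where c=1 and everything is measured in GeV:

import math

def invariant_mass(energy, momentum):
    # Rest mass from E^2 = p^2 + m^2, in units where c = 1.
    return math.sqrt(energy**2 - momentum**2)

# A hypothetical measurement: a particle kicked to 1 GeV of momentum
# turns out to carry about 1.0056 GeV of energy.
print(invariant_mass(1.0056, 1.0))  # ~0.106 GeV, the mass of a muon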

So let’s say we do that: we give a particle a kick, and measure it later. I’ll visualize this with a tool physicists use called a Feynman diagram. The line represents a particle traveling from one side to the other, from “kick” to “measurement”:

Because we only measure the particle at the end, we might miss if something happens in between. For example, it might interact with another particle or field, like this:

If we don’t know about this other field, then when we try to measure the particle’s mass we will include interactions like this. As it turns out, this is how the Higgs boson works: the Higgs field interacts with particles like electrons and quarks, changing how they move, so that they appear to have mass.

Quantum particles can do other things too. You might have heard people talk about one particle turning into a pair of temporary “virtual particles”. When people say that, they usually have a diagram in mind like this:

In particle physics, we need to take into account every diagram of this kind, every possible thing that could happen in between “kick” and measurement. The final result isn’t one path or another, but a sum of all the different things that could have happened in between. So when we measure the mass of a particle, we’re including every diagram that’s allowed: everything that starts with our “kick” and ends with our measurement.

Now what if our particle can transform, from one flavor to another?

Now we have a new type of thing that can happen in between “kick” and measurement. And if it can happen once, it can happen more than once:

Remember that, when we measure mass, we’re measuring a sum of all the things that can happen in between. That means our particle could oscillate back and forth between different flavors many many times, and we need to take every possibility into account. Because of that, it doesn’t actually make sense to ask what the mass is for one flavor, for just electron neutrinos or just muon neutrinos. Instead, mass is for the thing that actually moves: an average (actually, a quantum superposition) over all the different flavors, oscillating back and forth any number of times.

When a process like beta decay produces an electron neutrino, the thing that actually moves is a mix (again, a superposition) of particles with these different masses. Because each of these masses responds to the initial “kick” in a different way, you see different proportions of them over time. Try to measure the flavors at the end, and you’ll find different ones depending on when and where you measure. That’s the oscillation effect, and that’s why it means that neutrinos have mass.

It’s a bit more complicated to work out the math behind this, but not unreasonably so: it’s simpler than a lot of other physics calculations. Working through the math, we find that by measuring how long it takes neutrinos to oscillate we can calculate the differences between (squares of) neutrino masses. What we can’t calculate are the masses themselves. We know they’re small: neutrinos travel at almost the speed of light, and our cosmological models of the universe have surprisingly little room for massive neutrinos: too much mass, and our universe would look very different than it does today. But we don’t know much more than that. We don’t even know the order of the masses: you might assume electron neutrinos are on average lighter than muon neutrinos, which are lighter than tau neutrinos…but it could easily be the other way around! We also don’t know whether neutrinos get their mass from the Higgs like other particles do, or if they work in a completely different way.
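
If you want to see the math in action, here is a minimal two-flavor sketch in Python. (Real neutrinos mix three flavors, and the mixing angle and mass-squared difference below are purely illustrative, not from any particular experiment.) The key feature is that only the difference of squared masses ever enters:

import math

def oscillation_probability(theta, dm2, length, energy):
    # Standard two-flavor formula: P = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E),
    # with dm2 in eV^2, length in km, and energy in GeV.
    return math.sin(2 * theta)**2 * math.sin(1.267 * dm2 * length / energy)**2

theta = math.pi / 4  # maximal mixing, purely for illustration
dm2 = 2.5e-3         # an illustrative mass-squared difference, in eV^2
for length in (250, 500, 1000):  # km
    print(length, oscillation_probability(theta, dm2, length, energy=1.0))

Notice that the individual masses never appear: measure oscillations as precisely as you like, and you only ever learn the mass-squared difference.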

Unlike other mysteries of physics, we’ll likely have the answer to some of these questions soon. People are already picking through the data from current experiments, seeing if it hints towards one ordering of the masses or the other, or towards one way or the other for neutrinos to get their mass. More experiments will start taking data this year, and others are expected to start later this decade. At some point, the textbooks may well have more “normal” mass numbers for each of the neutrinos. But until then, they serve as a nice illustration of what mass actually means in particle physics.

QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Ever since LIGO first announced a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included: combining two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately, using an old quantum field theory trick; finding the gravitational wave directly from amplitudes; and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even for some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that the double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

From there, a grab-bag of talks covered other advancements. There were talks from string theorists and ambitwistor string theorists, from Effective Field Theorists working on gravity and the Standard Model, and on calculations in N=4 super Yang-Mills, QCD, and scalar theories. Simon Caron-Huot delved into how causality constrains the theories we can write down, showing an interesting case where the common assumption that all parameters are close to one is actually justified. Nima Arkani-Hamed began his talk by saying he’d surprise us, which he certainly did (and not by keeping on time). It’s tricky to explain why his talk was exciting. Compared to his earlier discovery of the Amplituhedron, which worked for a toy model, this is a toy calculation in a toy model. While the Amplituhedron wasn’t based on Feynman diagrams, this can’t even be compared with Feynman diagrams. Instead of expanding in a small coupling constant, this expands in a parameter that by all rights should be equal to one. And instead of positivity conditions, there are negativity conditions. All I can say is that, with all of that in mind, it looks like real progress on an important and difficult problem from a totally unanticipated direction. In a speech summing up the conference, Zvi Bern mentioned a few exciting words from Nima’s talk: “nonplanar”, “integrated”, “nonperturbative”. I’d add “differential equations” and “infinite sums of ladder diagrams”. Nima and collaborators are trying to figure out what happens when you sum up all of the Feynman diagrams in a theory. I’ve made progress in the past on diagrams with one “direction”, a ladder that grows as you add more loops, but I didn’t know how to add “another direction” to the ladder. In very rough terms, Nima and collaborators figured out how to add that direction.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.

QCD Meets Gravity 2020

I’m at another Zoom conference this week, QCD Meets Gravity. This year it’s hosted by Northwestern.

The view of the campus from wonder.me

QCD Meets Gravity is a conference series focused on the often-surprising links between quantum chromodynamics on the one hand and gravity on the other. By thinking of gravity as the “square” of forces like the strong nuclear force, researchers have unlocked new calculation techniques and deep insights.

Last year’s conference was very focused on one particular topic, trying to predict the gravitational waves observed by LIGO and VIRGO. That’s still a core topic of the conference, but it feels like there is a bit more diversity in topics this year. We’ve seen a variety of talks on different “squares”: new theories that square to other theories, and new calculations that benefit from “squaring” (even surprising applications to the Navier-Stokes equation!). There are talks on subjects from String Theory to Effective Field Theory, and even a talk on a very different way that “QCD meets gravity”, in collisions of neutron stars.

With still a few more talks to go, expect me to say a bit more next week, probably discussing a few in more detail. (Several people presented exciting work in progress!) Until then, I should get back to watching!

The Wolfram Physics Project Makes Me Queasy

Stephen Wolfram is…Stephen Wolfram.

Once a wunderkind student of Feynman, Wolfram is now best known for his software, Mathematica, a tool used by everyone from scientists to lazy college students. Almost all of my work is coded in Mathematica, and while it has some flaws (can someone please speed up the linear solver? Maple’s is so much better!) it still tends to be the best tool for the job.

Wolfram is also known for being a very strange person. There’s his tendency to name, or rename, things after himself. (There’s a type of Mathematica file that used to be called “.m”. Now by default they’re “.wl”, “Wolfram Language” files.) There are his live-streamed meetings. And then there’s his physics.

In 2002, Wolfram wrote a book, “A New Kind of Science”, arguing that computational systems called cellular automata were going to revolutionize science. A few days ago, he released an update: a sprawling website for “The Wolfram Physics Project”. In it, he claims to have found a potential “theory of everything”, unifying general relativity and quantum physics in a cellular automata-like form.

If that gets your crackpot klaxons blaring, yeah, me too. But Wolfram was once a very promising physicist. And he has collaborators this time, who are currently promising physicists. So I should probably give him a fair reading.

On the other hand, his introduction for a technical audience is 448 pages long. I may have more time now due to COVID-19, but I still have a job, and it isn’t reading that.

So I compromised. I didn’t read his 448-page technical introduction. I read his 90-ish page blog post. The post is written for a non-technical audience, so I know it isn’t 100% accurate. But by seeing how someone chooses to promote their work, I can at least get an idea of what they value.

I started out optimistic, or at least trying to be. Wolfram starts with simple mathematical rules, and sees what kinds of structures they create. That’s not an unheard-of strategy in theoretical physics, including in my own field. And the specific structures he’s looking at look weirdly familiar, a bit like a generalization of cluster algebras.

Reading along, though, I got more and more uneasy. That unease peaked when I saw him describe how his structures give rise to mass.

Wolfram had already argued that his structures obey special relativity. (For a critique of this claim, see this twitter thread.) He found a way to define energy and momentum in his system, as “fluxes of causal edges”. He picks out a particular “flux of causal edges”, one that corresponds to “just going forward in time”, and defines it as mass. Then he “derives” E=mc^2, saying,

Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.

In “the standard formalism of physics”, E=mc^2 means “mass is the energy of an object at rest”. It means “mass is the energy of an object just going forward in time”. If the “standard formalism of physics” “just defines” E=mc^2, so does Wolfram.

I haven’t read his technical summary. Maybe this isn’t really how his “derivation” works, maybe it’s just how he decided to summarize it. But it’s a pretty misleading summary, one that gives the reader entirely the wrong idea about some rather basic physics. It worries me, because both as a physicist and a blogger, he really should know better. I’m left wondering whether he meant to mislead, or whether instead he’s misleading himself.

That feeling kept recurring as I kept reading. There was nothing else as extreme as that passage, but a lot of pieces that felt like they were making a big deal about the wrong things, and ignoring what a physicist would find the most important questions.

I was tempted to get snarkier in this post, to throw in a reference to Lewis’s trilemma or some variant of the old quip that “what is new is not good; and what is good is not new”. For now, I’ll just say that I probably shouldn’t have read a 90 page pop physics treatise before lunch, and end the post with that.

QCD Meets Gravity 2019

I’m at UCLA this week for QCD Meets Gravity, a conference about the surprising ways that gravity is “QCD squared”.

When I attended this conference two years ago, the community was branching out into a new direction: using tools from particle physics to understand the gravitational waves observed at LIGO.

At this year’s conference, gravitational waves have grown from a promising new direction to a large fraction of the talks. While there were still the usual talks about quantum field theory and string theory (everything from bootstrap methods to a surprising application of double field theory), gravitational waves have clearly become a major focus of this community.

This was highlighted before the first talk, when Zvi Bern brought up a recent paper by Thibault Damour. Bern and collaborators had recently used particle physics methods to push beyond the state of the art in gravitational wave calculations. Damour, an expert in the older methods, claims that Bern et al’s result is wrong, and in doing so also questions an earlier result by Amati, Ciafaloni, and Veneziano. More than that, Damour argued that the whole approach of using these kinds of particle physics tools for gravitational waves is misguided.

There was a lot of good-natured ribbing of Damour in the rest of the conference, as well as some serious attempts to confront his points. Damour’s argument so far is somewhat indirect, so there is hope that a more direct calculation (which Damour is currently pursuing) will resolve the matter. In the meantime, Julio Parra-Martinez described a reproduction of the older Amati/Ciafaloni/Veneziano result with more Damour-approved techniques, as well as additional indirect arguments that Bern et al got things right.

Before the QCD Meets Gravity community worked on gravitational waves, other groups had already built a strong track record in the area. One encouraging thing about this conference was how much the two communities are talking to each other. Several speakers came from the older community, and there were a lot of references in both groups’ talks to the other group’s work. This, more than even the content of the talks, felt like the strongest sign that something productive is happening here.

Many talks began by trying to motivate these gravitational calculations, usually to address the mysteries of astrophysics. Two talks were more direct, with Ramy Brustein and Pierre Vanhove speculating about new fundamental physics that could be uncovered by these calculations. I’m not the kind of physicist who does this kind of speculation, and I confess both talks struck me as rather strange. Vanhove in particular explicitly rejects the popular criterion of “naturalness”, making me wonder if his work is the kind of thing critics of naturalness have in mind.

The Real E=mc^2

It’s the most famous equation in all of physics, written on thousands of chalkboard stock photos. Part of its charm is its simplicity: E for energy, m for mass, c for the speed of light, just a few simple symbols in a one-line equation. Despite its simplicity, E=mc^2 is deep and important enough that there are books dedicated to explaining it.

What does E=mc^2 mean?

Some will tell you it means mass can be converted to energy, enabling nuclear power and the atomic bomb. This is a useful picture for chemists, who like to think about balancing ingredients: this much mass on one side, this much energy on the other. It’s not the best picture for physicists, though. It makes it sound like energy is some form of “stuff” you can pour into your chemistry set flask, and energy really isn’t like that.

There’s another story you might have heard, in older books. In that story, E=mc^2 tells you that in relativity mass, like distance and time, is relative. The more energy you have, the more mass you have. Those books will tell you that this is why you can’t go faster than light: the faster you go, the greater your mass, and the harder it is to speed up.

Modern physicists don’t talk about it that way. In fact, we don’t even write E=mc^2 that way. We’re more likely to write:

E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}

“v” here stands for the velocity, how fast the mass is moving. The faster the mass moves, the more energy it has. Take v to zero, and you get back the familiar E=mc^2.
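
In fact, if you expand that formula for speeds much smaller than light (a standard Taylor expansion, which I’ll just quote here), the first correction to E=mc^2 is the familiar kinetic energy of Newtonian physics:

E \approx mc^2 + \frac{1}{2}mv^2 + \ldots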

The older books weren’t lying to you, but they were thinking about a different notion of mass: “relativistic mass” m_r instead of “rest mass” m_0, related like this:

m_r=\frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}

which explains the difference in how we write E=mc^2.

Why the change? In part, it’s because of particle physics. In particle physics, we care about the rest mass of particles. Different particles have different rest masses: each electron has one rest mass, each top quark has another, regardless of how fast they’re going. They still get more energy, and harder to speed up, the faster they go, but we don’t describe it as a change in mass. Our equations match the ones in the old books; we just talk about them differently.

Of course, you can dig deeper, and things get stranger. You might hear that mass does change with energy, but in a very different way. You might hear that mass is energy, that they’re just two perspectives on the same thing. But those are stories for another day.

I titled this post “The Real E=mc^2”, but to clarify, none of these explanations are more “real” than the others. They’re words, useful in different situations and for different people. “The Real E=mc^2” isn’t the E=mc^2 of nuclear chemists, or old books, or modern physicists. It’s the theory itself, the mathematical rules and principles that all the rest are just trying to describe.

What Makes Light Move?

Light always moves at the speed of light.

It’s not alone in this: anything that lacks mass moves at the speed of light. Gluons, if they weren’t constantly interacting with each other, would move at the speed of light. Neutrinos, back when we thought they were massless, were thought to move at the speed of light. Gravitational waves, and by extension gravitons, move at the speed of light.

This is, on the face of it, a weird thing to say. If I say a jet moves at the speed of sound, I don’t mean that it always moves at the speed of sound. Find it in its hangar and hopefully it won’t be moving at all.

And so, people occasionally ask me, why can’t we find light in its hangar? Why does light never stand still? What makes light move?

(For the record, you can make light “stand still” in a material, but that’s because the material is absorbing and reflecting it, so it’s not the “same” light traveling through. Compare the speed of a wave of hands in a stadium versus the speed you could run past the seats.)

This is surprisingly tricky to explain without math. Some people point out that if you want to see light at rest you need to speed up to catch it, but you can’t accelerate enough unless you too are massless. This probably sounds a bit circular. Some people talk about how, from light’s perspective, no time passes at all. This is true, but it seems to confuse more than it helps. Some people say that light is “made of energy”, but I don’t like that metaphor. Nothing is “made of energy”, nor is anything “made of mass” either. Mass and energy are properties things can have.

I do like game metaphors though. So, imagine that each particle (including photons, particles of light) is a character in an RPG.


For bonus points, play Light in an RPG.

You can think of energy as the particle’s “character points”. When the particle builds its character it gets a number of points determined by its energy. It can spend those points increasing its “stats”: mass and momentum, via the lesser-known big brother of E=mc^2, E^2=p^2c^2+m^2c^4.

Maybe the particle chooses to play something heavy, like a Higgs boson. Then they spend a lot of points on mass, and don’t have as much to spend on momentum. If they picked something lighter, like an electron, they’d have more to spend, so they could go faster. And if they spent nothing at all on mass, like light does, they could use all of their energy “points” boosting their speed.

Now, it turns out that these “energy points” don’t boost speed one for one, which is why low-energy light isn’t any slower than high-energy light. Instead, speed is determined by the ratio between energy and momentum. When the two are exactly proportional, when E^2=p^2c^2, the particle is moving at the speed of light.

(Why this is true is trickier to explain. You’ll have to trust me, or Wikipedia, that the math works out.)
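
You can at least watch the ratio at work, though. Here’s a toy sketch in Python; the function name and the numbers are my own illustration, in units where c=1 with masses and momenta in GeV:

import math

def speed(momentum, mass):
    # Speed as a fraction of the speed of light: v = p/E, with E^2 = p^2 + m^2.
    energy = math.sqrt(momentum**2 + mass**2)
    return momentum / energy

print(speed(1.0, 0.0))       # massless, like a photon: exactly 1.0
print(speed(1.0, 0.000511))  # an electron: just barely below 1.0
print(speed(1.0, 125.0))     # a Higgs boson: a slow, heavy character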

Some of you may be happy with this explanation, but others will accuse me of passing the buck. Ok, a photon with any energy will move at the speed of light. But why do photons have any energy at all? And even if they must move at the speed of light, what determines which direction?

Here I think part of the problem is an old physics metaphor, probably dating back to Newton, of a pool table.


A pool table is a decent metaphor for classical physics. You have moving objects following predictable paths, colliding off each other and the walls of the table.

Where people go wrong is in projecting this metaphor back to the beginning of the game. At the beginning of a game of pool, the balls are at rest, racked in the center. Then one of them is hit with the pool cue, and they’re set into motion.

In physics, we don’t tend to have such neat and tidy starting conditions. In particular, things don’t have to start at rest before something whacks them into motion.

A photon’s “start” might come from an unstable Higgs boson produced by the LHC. The Higgs decays, and turns into two photons. Since energy is conserved, these two each must have half of the energy of the original Higgs, including the energy that was “spent” on its mass. This process is quantum mechanical, and with no preferred direction the photons will emerge in a random one.
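
To put a number on it (using the measured Higgs mass of about 125 GeV, and assuming the Higgs is at rest, so this is just the arithmetic of the paragraph above), each photon carries

E_\gamma = \frac{m_H c^2}{2} \approx 62.5 \textrm{ GeV}

Momentum is conserved too, so the two photons carry equal and opposite momenta, flying off back-to-back.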

Photons in the LHC may seem like an artificial example, but in general whenever light is produced it’s due to particles interacting, and conservation of energy and momentum will send the light off in one direction or another.

(For the experts, there is of course the possibility of very low energy soft photons, but that’s a story for another day.)

Not even the beginning of the universe resembles that racked set of billiard balls. The question of what “initial conditions” make sense for the whole universe is a tricky one, but there isn’t a way to set it up where you start with light at rest. It’s not just that it’s not the default option: it isn’t even an available option.

Light moves at the speed of light, no matter what. That isn’t because light started at rest, and something pushed it. It’s because light has energy, and a particle has to spend its “character points” on something.


Fun with Misunderstandings

Perimeter had its last Public Lecture of the season this week, with Mario Livio giving some highlights from his book Brilliant Blunders. The lecture should be accessible online, either here or on Perimeter’s YouTube page.

These lectures tend to attract a crowd of curious science-fans. To give them something to do while they’re waiting, a few local researchers walk around with T-shirts that say “Ask me, I’m a scientist!” Sometimes we get questions about the upcoming lecture, but more often people just ask us what they’re curious about.

Long-time readers will know that I find this one of the most fun parts of the job. In particular, there’s a unique challenge in figuring out just why someone asked a question. Often, there’s a hidden misunderstanding they haven’t recognized.

The fun thing about these misunderstandings is that they usually make sense, provided you’re working from the same sources as the person in question. They heard a bit of this and a bit of that, and they come to the most reasonable conclusion they can given what’s available. For those of us who have heard a more complete story, this often leads to misunderstandings we would never have thought of, but that in retrospect are completely understandable.

One of the simpler ones I ran into was someone who was confused by people claiming that we were running out of water. How could there be a water shortage, he asked, if the Earth is basically a closed system? Where could the water go?

The answer is that when people are talking about a water shortage, they’re not talking about water itself running out. Rather, they’re talking about a lack of safe drinking water. Maybe the water is polluted, or stuck in the ocean without expensive desalination. This seems like the sort of thing that would be extremely obvious, but if you only hear people complaining that water is running out, without the right context, you might never hear that part of the story.

A more involved question had to do with time dilation in general relativity. The guy had heard that atomic clocks run faster if you’re higher up, and that this was because time itself runs faster in lower gravity.

Given that, he asked, what happens if someone travels to an area of low gravity and then comes back? If more time has passed for them, then they’d be in the future, so wouldn’t they be at the “wrong time” compared to other people? Would they even be able to interact with them?

This guy’s misunderstanding came from hearing what happens, but not why. While he got that time passes faster in lower gravity, he was still thinking of time as universal: there is some past, and some future, and if time passes faster for one person and slower for another that just means that one person is “skipping ahead” into the other person’s future.

What he was missing was the explanation that time dilation comes from space and time bending. Rather than “skipping ahead”, a person for whom time passes faster just experiences more time getting to the same place, because they’re traveling on a curved path through space-time.

As usual, this is easier to visualize in space than in time. I ended up drawing a picture like this:


Imagine person A and person B live on a circle. If person B stays the same distance from the center while person A goes out further, they can both travel the same angle around the circle and end up in the same place, but A will have traveled further, even ignoring the trips up and down.
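
In symbols, this is just the arc-length formula from ordinary geometry (nothing relativistic yet): a path at radius r that sweeps through an angle \theta has length

s = r\theta

so for the same angle, a bigger radius means a longer path.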

What’s completely intuitive in space ends up quite a bit harder to visualize in time. But if you at least know what you’re trying to think about, that there’s bending involved, then it’s easier to avoid this guy’s kind of misunderstanding. Run into the wrong account, though, and even if it’s perfectly correct (this guy had heard some of Hawking’s popularization work on the subject), it can leave you with the wrong impression if it doesn’t emphasize the right aspects.

Misunderstandings are interesting because they reveal how people learn. They’re windows into different thought processes, into what happens when you only have partial evidence. And because of that, they’re one of the most fascinating parts of science popularization.

The Three Things Everyone Gets Wrong about the Big Bang

Ah, the Big Bang, our most science-y of creation myths. Everyone knows the story of how the universe and all its physical laws emerged from nothing in a massive explosion, growing from a singularity to the size of a breadbox until, over billions of years, it became the size it is today.


A hot dense state, if you know what I mean.

…actually, almost nothing in that paragraph is true. There are a lot of myths about the Big Bang, born from physicists giving sloppy explanations. Here are three things most people get wrong about the Big Bang:

1. A Massive Explosion:

When you picture the big bang, don’t you imagine that something went, well, bang?

In movies and TV shows, a time traveler visiting the big bang sees only an empty void. Suddenly, an explosion lights up the darkness, shooting out stars and galaxies until it has created the entire universe.

Astute readers might find this suspicious: if the entire universe was created by the big bang, then where does the “darkness” come from? What does the universe explode into?

The problem here is that, despite the name, the big bang was not actually an explosion.

In picturing the universe as an explosion, you’re imagining the universe as having finite size. But it’s quite likely that the universe is infinite. Even if it is finite, it’s finite like the surface of the Earth: as Magellan’s expedition (and many travelers since) demonstrated, you can’t get to the “edge” of the Earth no matter how far you go; eventually, you’ll just end up where you started. If the universe is truly finite, the same is true of it.

Rather than an explosion in one place, the big bang was an explosion everywhere at once. Every point in space was “exploding” at the same time. Each point was moving farther apart from every other point, and the whole universe was, as the song goes, hot and dense.

So what do physicists mean when they say that the universe at some specific time was the size of a breadbox, or a grapefruit?

It’s just sloppy language. When these physicists say “the universe”, what they mean is just the part of the universe we can see today, the Hubble Volume. It is that (enormously vast) space that, once upon a time, was merely the size of a grapefruit. But it was still adjacent to infinitely many other grapefruits of space, each one also experiencing the big bang.

2. It began with a Singularity:

This one isn’t so much definitely wrong as probably wrong.

If the universe obeys Einstein’s Theory of General Relativity perfectly, then we can make an educated guess about how it began. By tracking back the expansion of the universe to its earliest stages, we can infer that the universe was once as small as it can get: a single, zero-dimensional point, or a singularity. The laws of general relativity work the same backwards and forwards in time, so just as we could see a star collapsing and know that it is destined to form a black hole, we can see the universe’s expansion and know that if we traced it back it must have come from a single point.

This is all well and good, but there’s a problem with how it begins: “If the universe obeys Einstein’s Theory of General Relativity perfectly”.

In this situation, general relativity predicts an infinitely small, infinitely dense point. As I’ve talked about before, in physics an infinite result is almost never correct. When we encounter infinity, almost always it means we’re ignoring something about the nature of the universe.

In this case, we’re ignoring Quantum Mechanics. Quantum Mechanics naturally makes physics somewhat “fuzzy”: the Uncertainty Principle means that a quantum state can never be exactly in one specific place.

Combining quantum mechanics and general relativity is famously tricky, and the difficulty boils down to getting rid of pesky infinite results. However, several approaches exist to solving this problem, the most prominent of them being String Theory.

If you ask someone to list string theory’s successes, one thing you’ll always hear mentioned is string theory’s ability to understand black holes. In general relativity, black holes are singularities: infinitely small, and infinitely dense. In string theory, black holes are made up of combinations of fundamental objects: strings and membranes, curled up tight, but crucially not infinitely small. String theory smooths out singularities and tamps down infinities, and the same story applies to the infinity of the big bang.

String theory isn’t alone in this, though. Less popular approaches to quantum gravity, like Loop Quantum Gravity, also tend to “fuzz” out singularities. Whichever approach you favor, it’s pretty clear at this point that the big bang didn’t really begin with a true singularity, just a very compressed universe.

3. It created the laws of physics:

Physicists will occasionally say that the big bang determined the laws of physics. Fans of Anthropic Reasoning in particular will talk about different big bangs in different places in a vast multi-verse, each producing different physical laws.

I’ve met several people who were very confused by this. If the big bang created the laws of physics, then what laws governed the big bang? Don’t you need physics to get a big bang in the first place?

The problem here is that “laws of physics” doesn’t have a precise definition. Physicists use it to mean different things.

In one (important) sense, each fundamental particle is its own law of physics. Each one represents something that is true across all of space and time, a fact about the universe that we can test and confirm.

However, these aren’t the most fundamental laws possible. In string theory, the particles that exist in our four dimensions (three space dimensions, and one of time) change depending on how six “extra” dimensions are curled up. Even in ordinary particle physics, the value of the Higgs field determines the mass of the particles in our universe, including things that might feel “fundamental” like the difference between electromagnetism and the weak nuclear force. If the Higgs field had a different value (as it may have early in the life of the universe), these laws of physics would have been different. These sorts of laws can be truly said to have been created by the big bang.

The real fundamental laws, though, don’t change. Relativity is here to stay, no matter what particles exist in the universe. So is quantum mechanics. The big bang didn’t create those laws, it was a natural consequence of them. Rather than springing physics into existence from nothing, the big bang came out of the most fundamental laws of physics, then proceeded to fix the more contingent ones.

In fact, the big bang might not have even been the beginning of time! As I mentioned earlier in this article, most approaches to quantum gravity make singularities “fuzzy”. One thing these “fuzzy” singularities can do is “bounce”, going from a collapsing universe to an expanding universe. In Cyclic Models of the universe, the big bang was just the latest in a cycle of collapses and expansions, extending back into the distant past. Other approaches, like Eternal Inflation, instead think of the big bang as just a local event: our part of the universe happened to be dense enough to form a big bang, while other regions were expanding even more rapidly.

So if you picture the big bang, don’t just imagine an explosion. Imagine the entire universe expanding at once, changing and settling and cooling until it became the universe as we know it today, starting from a world of tangled strings or possibly an entirely different previous universe.

Sounds a bit more interesting to visit in your TARDIS, no?