Tag Archives: cosmology

At Bohr-100: Current Themes in Theoretical Physics

During the pandemic, some conferences went online. Others went dormant.

Every summer before the pandemic, the Niels Bohr International Academy hosted a conference called Current Themes in High Energy Physics and Cosmology. Current Themes is a small, cozy conference, a gathering of close friends, some of whom happen to have Nobel prizes. Holding it online would have almost missed the point.

Instead, we waited. Now, at least in Denmark, the pandemic is quiet enough to hold this kind of gathering. And it’s a special year: the 100th anniversary of Niels Bohr’s Nobel, the 101st of the Niels Bohr Institute. So it seemed like the time for a particularly special Current Themes.

For one, it lets us use remarkably simple signs

A particularly special Current Themes means some unusually special guests. Our guests are usually pretty special already (Gerard ’t Hooft and David Gross are regulars, to name just the Nobelists), but this year we also had Alexander Polyakov. Polyakov’s talk had a magical air to it. In a quiet voice, broken by an impish grin when he surprised us with a joke, Polyakov began to lay out five unsolved problems he considered interesting. In the end, he only had time to present one, related to turbulence. When Gross asked him to name the remaining four, the second involved a term most of us didn’t recognize (striction, familiar from magnetism, which he wanted to explore in a gravitational context); the discussion hung while he defined it, and we never did learn what the other three problems were.

At the big 100th anniversary celebration earlier in the spring, the Institute awarded a few years’ worth of its Niels Bohr Institute Medal of Honor. One of the recipients, Paul Steinhardt, couldn’t make it then, so he got his medal now. After the obligatory publicity photos were taken, Steinhardt entertained us all with a colloquium about his work on quasicrystals, including the many adventures involved in finding the first example “in the wild”. I can’t do the story justice in a short blog post, but if you don’t have the opportunity to watch him speak about it, I hear his book is good.

An anniversary conference should have some historical elements as well. For this one, these were ably provided by David Broadhurst, who gave an after-dinner speech cataloguing things he liked about Bohr. Some of it was based on public information, but the real draw was the anecdotes: his own reminiscences, and those of people he knew who knew Bohr well.

The other talks covered interesting ground: from deep approaches to quantum field theory, to new tools to understand black holes, to the implications of causality itself. One out-of-the-ordinary talk was by Sabrina Pasterski, who advocated a new model of physics funding. I liked some elements (endowed organizations to further a subfield) and am more skeptical of others (mostly involving NFTs). Regardless, it, and the rest of the conference more broadly, spurred a lot of good debate.

The Most Anthropic of All Possible Worlds

Today, we’d call Leibniz a mathematician, a physicist, and a philosopher. As a mathematician, Leibniz turned calculus into something his contemporaries could actually use. As a physicist, he championed a doomed theory of gravity. In philosophy, he seems to be most remembered for extremely cheaty arguments.

Free will and determinism? Can’t it just be a coincidence?

I don’t blame him for this. Faced with a tricky philosophical problem, it’s enormously tempting to just blaze through with an answer that makes every subtlety irrelevant. It’s a temptation I’ve succumbed to time and time again. Faced with a genie, I would always wish for more wishes. On my high school debate team, I once forced everyone at a tournament to switch sides with some sneaky definitions. It’s all good fun, but people usually end up pretty annoyed with you afterwards.

People were annoyed with Leibniz too, especially with his solution to the problem of evil. If you believe in a benevolent, all-powerful god, as Leibniz did, why is the world full of suffering and misery? Leibniz’s answer was that even an all-powerful god is constrained by logic, so if the world contains evil, it must be logically impossible to make the world any better: indeed, we live in the best of all possible worlds. Voltaire famously made fun of this argument in Candide, dragging a Leibniz-esque Professor Pangloss through some of the most creative miseries the eighteenth century had to offer. It’s possibly the most famous satire of a philosopher, easily beating out Aristophanes’ The Clouds (which is also great).

Physicists can also get accused of cheaty arguments, and probably the most mocked is the idea of a multiverse. While it hasn’t had its own Candide, the multiverse has been criticized by everyone from bloggers to Nobel prizewinners. Leibniz wanted to explain the existence of evil; physicists want to explain “unnaturalness”: the fact that the kinds of theories we use to explain the world can’t seem to explain the mass of the Higgs boson. To explain it, these physicists suggest that there are really many different universes, separated widely in space or built into the interpretation of quantum mechanics. Each universe has a different Higgs mass, and ours just happens to be the one we can live in. This kind of argument is called “anthropic” reasoning. Rather than the best of all possible worlds, it says we live in the world best-suited to life like ours.

I called Leibniz’s argument “cheaty”, and you might presume I think the same of the multiverse. But “cheaty” doesn’t mean “wrong”. It all depends what you’re trying to do.

Leibniz’s argument and the multiverse both work by dodging a problem. For Leibniz, the problem of evil becomes pointless: any evil might be necessary to secure a greater good. With a multiverse, naturalness becomes pointless: with many different laws of physics in different places, the existence of one like ours needs no explanation.

In both cases, though, the dodge isn’t perfect. To really explain any given evil, Leibniz would have to show why it is secretly necessary in the face of a greater good (and Pangloss spends Candide trying to do exactly that). To explain any given law of physics, the multiverse needs to use anthropic reasoning: it needs to show that that law has to be the way it is to support human-like life.

This sounds like a strict requirement, but in both cases it’s not actually so useful. Leibniz could (and Pangloss does) come up with an explanation for pretty much anything. The problem is that no-one actually knows which aspects of the universe are essential and which aren’t. Without a reliable way to describe the best of all possible worlds, we can’t actually test whether our world is one.

The same problem holds for anthropic reasoning. We don’t actually know what conditions are required to give rise to people like us. “People like us” is very vague, and dramatically different universes might still contain something that can perceive and observe. While it might seem like there are clear requirements, so far they haven’t been pinned down well enough for people to do very much with this type of reasoning.

However, for both Leibniz and most of the physicists who believe anthropic arguments, none of this really matters. That’s because the “best of all possible worlds” and “most anthropic of all possible worlds” aren’t really meant to be predictive theories. They’re meant to say that, once you are convinced of certain things, certain problems don’t matter anymore.

Leibniz, in particular, wasn’t trying to argue for the existence of his god. He began the argument convinced that a particular sort of god existed: one that was all-powerful and benevolent, and set in motion a deterministic universe bound by logic. His argument is meant to show that, if you believe in such a god, then the problem of evil can be ignored: no matter how bad the universe seems, it may still be the best possible world.

Similarly, the physicists convinced of the multiverse aren’t really getting there through naturalness. Rather, they’ve become convinced of a few key claims: that the universe is rapidly expanding, leading to a proliferating multiverse, and that the laws of physics in such a multiverse can vary from place to place, due to the huge landscape of possible laws of physics in string theory. If you already believe those things, then the naturalness problem can be ignored: we live in some randomly chosen part of the landscape hospitable to life, which can be anywhere it needs to be.

So despite their cheaty feel, both arguments are fine…provided you agree with their assumptions. Personally, I don’t agree with Leibniz. For the multiverse, I’m less sure. I’m not confident the universe expands fast enough to create a multiverse; I’m not even confident it’s speeding up its expansion now. I know there’s a lot of controversy about the math behind the string theory landscape, about whether the vast set of possible laws of physics are as consistent as they’re supposed to be…and of course, as anyone must admit, we don’t know whether string theory itself is true! I don’t think it’s impossible that the right argument comes around and convinces me of one or both claims, though. These kinds of arguments, “if assumptions, then conclusion”, are the kind of thing that seems useless for a while…until someone convinces you of the conclusion, and they matter once again.

So in the end, despite the similarity, I’m not sure the multiverse deserves its own Candide. I’m not even sure Leibniz deserved Candide. But hopefully by understanding one, you can understand the other just a bit better.

At New Ideas in Cosmology

The Niels Bohr Institute is hosting a conference this week on New Ideas in Cosmology. I’m no cosmologist, but it’s a pretty cool field, so as a local I’ve been sitting in on some of the talks. So far they’ve had a selection of really interesting speakers with quite a variety of interests, including a talk by Roger Penrose with his trademark hand-stippled drawings.

Including this old classic

One thing that has impressed me has been the “interdisciplinary” feel of the conference. By all rights this should be one “discipline”, cosmology. But in practice, each speaker came at the subject from a different direction. They all had a shared core of knowledge, common models of the universe they all compare to. But the knowledge they brought to the subject varied: some had deep knowledge of the mathematics of gravity, others worked with string theory, or particle physics, or numerical simulations. Each talk, aware of the varied audience, was a bit “colloquium-style”, introducing a framework before diving into the latest research. Each speaker knew enough to talk to the others, but not so much that they couldn’t learn from them. It’s been unexpectedly refreshing, a real interdisciplinary conference done right.

How Expert Is That Expert?

The blog Astral Codex Ten had an interesting post a while back, about when to trust experts. Rather than thinking of some experts as “trustworthy” and some as “untrustworthy”, the post suggests an approach of “bounded distrust”. Even if an expert is biased or a news source sometimes lies, there are certain things you can still expect them to tell the truth about. If you are familiar enough with their work, you can get an idea of which kinds of claims you can trust and which you can’t, in a consistent and reliable way. Knowing how to do this is a skill, one you can learn to get better at.

In my corner of science, I can’t think of anyone who outright lies. Nonetheless, some claims are worth more trust than others. Sometimes experts have solid backing for what they say, direct experience that’s hard to contradict. Other times they’re speaking mostly from general impressions, and bias could easily creep in. Luckily, it’s not so hard to tell the difference. In this post, I’ll try to teach you how.

For an example, I’ll use something I saw at a conference last week. A speaker gave a talk describing the current state of cosmology: the new tools we have to map the early universe, and the challenges in using them to their full potential. After the talk, I remember her answering three questions. In each case, she seemed to know what she was talking about, but for different reasons. If she was contradicted by a different expert, I’d use these reasons to figure out which one to trust.

First, sometimes an expert gives what is an informed opinion, but just an informed opinion. As scientists, we are expected to know a fairly broad range of background behind our work, and be able to say something informed about it. We see overview talks and hear our colleagues’ takes, and get informed opinions about topics we otherwise don’t work on. This speaker fielded a question about quantum gravity, and her answer made it clear that the topic falls into this category for her. Her answer didn’t go into much detail, mentioning a few terms but no specific scientific results, and linked back in the end to a different question closer to her expertise. That’s generally how we speak on this kind of topic: vaguely enough to show what we know without overstepping.

The second question came from a different kind of knowledge, which I might call journal club knowledge. Many scientists have what are called “journal clubs”. We meet on a regular basis, read recent papers, and talk about them. The papers go beyond what we work on day-to-day, but not by that much, because the goal is to keep an eye open for future research topics. We read papers in close-by areas, watching for elements that could be useful, answers to questions we have or questions we know how to answer. The kind of “journal club knowledge” we have covers a fair amount of detail: these aren’t topics we are working on right now, but if we spent more time on it they could be. Here, the speaker answered a question about the Hubble tension, a discrepancy between two different ways of measuring the expansion of the universe. The way she answered focused on particular results: someone did X, there was a paper showing Y, this telescope is planning to measure Z. That kind of answer is a good way to tell that someone is answering from “journal club knowledge”. It’s clearly an area she could get involved in if she wanted to, one where she knows the important questions and which papers to read, with some of her own work close enough to the question to give an important advantage. But it was also clear that she hadn’t developed a full argument on one “side” or the other, and as such there are others I’d trust a bit more on that aspect of the question.

Finally, experts are the most trustworthy when we speak about our own work. In this speaker’s case, the questions about machine learning were where her expertise clearly shone through. Her answers there were detailed in a different way than her answers about the Hubble tension: not just papers, but personal experience. They were full of phrases like “I tried that, but it doesn’t work…” or “when we do this, we prefer to do it this way”. They also had the most technical terms of any of her answers, terms that clearly drew distinctions relevant to those who work in the field. In general, when an expert talks about what they do in their own work, and uses a lot of appropriate technical terms, you have especially good reason to trust them.

These cues can help a lot when evaluating experts. An expert who makes a generic claim, like “no evidence for X”, might not know as much as an expert who cites specific papers, and in turn they might not know as much as an expert who describes what they do in their own research. The cues aren’t perfect: one risk is that someone may be an expert on their own work, but that work may be irrelevant to the question you’re asking. But they help: rather than distrusting everyone, they help you towards “bounded distrust”, knowing which claims you can trust and which are riskier.

The Big Bang: What We Know and How We Know It

When most people think of the Big Bang, they imagine a single moment: a whole universe emerging from nothing. That’s not really how it worked, though. The Big Bang refers not to one event, but to a whole scientific theory. Using Einstein’s equations and some simplifying assumptions, we physicists can lay out a timeline for the universe’s earliest history. Different parts of this timeline have different evidence: some are meticulously tested, others we even expect to be wrong! It’s worth talking through this timeline and discussing what we know about each piece, and how we know it.

We can see surprisingly far back in time. As we look out into the universe, we see each star as it was when the light we see left it: longer ago the further the star is from us. Looking back, we see changes in the types of stars and galaxies: stars formed without the metals that later stars produced, galaxies made of those early stars. We see the universe become denser and hotter, until eventually we reach the last thing we can see: the cosmic microwave background, a faint light that fills our view in every direction. This light represents a change in the universe, the emergence of the first atoms. Before this, there were ions: free nuclei and electrons, forming a hot plasma. That plasma constantly emitted and absorbed light. As the universe cooled, the ions merged into atoms, and light was free to travel. Because of this, we cannot see back beyond this point. Our model gives detailed predictions for this curtain of light: its temperature, and even the ways it varies in intensity from place to place, which in turn let us hone our model further.
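For a rough sense of scale (standard textbook numbers, nothing specific to this post): the temperature of that background light falls as the universe expands,

T(z) = T_0\,(1+z), \qquad T_0 \approx 2.725\ \mathrm{K},

so the light we see today at under three kelvin was released when the universe was roughly 1100 times more compressed and the plasma was around 3000 K, just cool enough for atoms to hold together.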

In principle, we could “see” a bit further. Light isn’t the only thing that travels freely through the universe. Neutrinos are almost massless, and pass through almost everything. Like the cosmic microwave background, the universe should have a cosmic neutrino background. This would come from much earlier, from an era when the universe was so dense that neutrinos regularly interacted with other matter. We haven’t detected this neutrino background yet, but future experiments might. Gravitational waves, meanwhile, can also pass through almost any obstacle. There should be gravitational wave backgrounds as well, from a variety of eras in the early universe. Once again these haven’t been detected yet, but more powerful gravitational wave telescopes may yet see them.

We have indirect evidence from a bit further back than we can see directly. In the heat of the early universe, the first protons and neutrons merged via nuclear fusion, becoming the first atomic nuclei: isotopes of hydrogen, helium, and lithium. Our model lets us predict the proportions of these, how much helium and lithium per hydrogen atom. We can then compare this to the oldest stars we see, and find that the proportions match. In this way, we know something about the universe from before we can “see” it.
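To put rough numbers on those proportions (standard textbook values, not tied to any particular survey): the model predicts that about a quarter of ordinary matter by mass ends up in helium-4,

Y_p \equiv \frac{4\,n_{\mathrm{He}}}{n_{\mathrm{H}} + 4\,n_{\mathrm{He}}} \approx 0.25, \qquad \frac{n_{\mathrm{He}}}{n_{\mathrm{H}}} \approx \frac{1}{12},

with lithium rarer still, only a few parts in ten billion relative to hydrogen. Those are the kinds of numbers we check against the oldest stars.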

We get surprised when we look at the universe on large scales, and compare widely separated regions. We find those regions are surprisingly similar, more than we would expect from randomness and the physics we know. Physicists have proposed different explanations for this. The most popular, cosmic inflation, suggests that the universe expanded very rapidly, accelerating so that a small region of similar matter was blown up much larger than the ordinary Big Bang model would have, projecting those similarities across the sky. While many think this proposal fits the data best, we still aren’t sure it’s the right one: there are alternate proposals, and it’s even controversial whether we should be surprised by the large-scale similarity in the first place.

We understand, in principle, how matter can come from “nothing”. This is sometimes presented as the most mysterious part of the Big Bang, the idea that matter could spontaneously emerge from an “empty” universe. But to a physicist, this isn’t very mysterious. Matter isn’t actually conserved, mass is just energy you haven’t met yet. Deep down, the universe is just a bunch of rippling quantum fields, with different ones more or less active at different times. Space-time itself is just another field, the gravitational field. When people say that in the Big Bang matter emerged from nothing, all they mean is that energy moved from the gravitational field to fields like the electron and quark, giving rise to particles. As we wind the model back, we can pretty well understand how this could happen.

If we extrapolate, winding Einstein’s equations back all the way, we reach a singularity: the whole universe, according to those equations, would have emerged from a single point, a time when everything was zero distance from everything else. This assumes, though, that Einstein’s equations keep working all the way back that far, and that’s probably wrong. Einstein’s equations don’t include the effect of quantum mechanics, which should be much more important when the universe is at its hottest and densest. We don’t have a complete theory of quantum gravity yet (at least, not one that can model this), so we can’t be certain how to correct these equations. But in general, quantum theories tend to “fuzz out” singularities, spreading a single point over a wider area. So it’s likely that the universe didn’t actually come from just a single point, and our various incomplete theories of quantum gravity tend to back this up.
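To see where that classical singularity comes from, here’s a rough sketch using the standard textbook equations (nothing specific to this post). For a uniform universe, Einstein’s equations reduce to the Friedmann equation,

\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho,

where a(t) is the overall scale of the universe and \rho its energy density. For a radiation-filled universe \rho \propto a^{-4}, which gives a(t) \propto t^{1/2}: wind the clock back to t = 0 and the scale factor shrinks to zero while the density blows up. That’s the singularity, and it sits exactly in the hot, dense regime where leaving out quantum mechanics stops being safe.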

So, starting from what we can see, we extrapolate back to what we can’t. We’re quite confident in some parts of the Big Bang theory: the emergence of the first galaxies, the first stars, the first atoms, and the first elements. Back far enough, though, and things get more mysterious: we have proposals, but no definite answers. And if you try to wind all the way back to the beginning, you find we still don’t have the right kind of theory to answer the question. That’s a task for the future.

What Tells Your Story

I watched Hamilton on Disney+ recently. With GIFs and songs from the show all over social media for the last few years, there weren’t many surprises. One thing that nonetheless struck me was the focus on historical evidence. The musical Hamilton is based on Ron Chernow’s biography of Alexander Hamilton, and it preserves a surprising amount of the historian’s care for how we know what we know, hidden within the show’s other themes. From the refrain of “who tells your story”, to the importance of Eliza burning her letters with Hamilton (not just the emotional gesture but the “gap in the narrative” it created for historians), to the song “The Room Where It Happens” (which looked from GIFsets like it was about Burr’s desire for power, but is mostly about how much of history is hidden in conversations we can only partly reconstruct), the show keeps the puzzle of reasoning from incomplete evidence front-and-center.

Any time we try to reason about the past, we are faced with these kinds of questions. They don’t just apply to history, but to the so-called historical sciences as well, sciences that study the past. Instead of asking “who” told the story, such scientists must keep in mind “what” is telling the story. For example, paleontologists reason from fossils, and thus are limited by what does and doesn’t get preserved. As a result, after a century of studying dinosaurs, only in the last twenty years did it become clear that they had feathers.

Astronomy, too, is a historical science. Whenever astronomers look out at distant stars, they are looking at the past. And just like historians and paleontologists, they are limited by what evidence happened to be preserved, and what part of that evidence they can access.

These limitations lead to mysteries, and often controversies. Before LIGO, astronomers had an idea of what the typical mass of a black hole was. After LIGO, a new slate of black holes has been observed, with much higher mass. It’s still unclear why.

Try to reason about the whole universe, and you end up asking similar questions. When we see the movement of “standard candle” stars, is that because the universe’s expansion is accelerating, or are the stars moving as a group?

Push far enough back and the evidence doesn’t just lead to controversy, but to hard limits on what we can know. No matter how good our telescopes are, we won’t see light older than the cosmic microwave background: before that background was emitted the universe was filled with plasma, which would have absorbed any earlier light, erasing anything we could learn from it. Gravitational waves may one day let us probe earlier, and make discoveries as surprising as feathered dinosaurs. But there is yet a stronger limit to how far back we can go, beyond which any evidence has been so diluted that it is indistinguishable from random noise. We can never quite see into “the room where it happened”.

It’s gratifying to see questions of historical evidence in a Broadway musical, in the same way it was gratifying to hear fractals mentioned in a Disney movie. It’s important to think about who, and what, is telling the stories we learn. Spreading that lesson helps all of us reason better.

Halloween Post: Superstimuli for Physicists

For Halloween, this blog has a tradition of covering “the spooky side” of physics. This year, I’m bringing in a concept from biology to ask a spooky physics “what if?”

In the 1950s, biologists discovered that birds were susceptible to a worryingly effective trick. Given artificial eggs larger and brighter than their own, the birds focused on the new eggs to the exclusion of the real ones. They couldn’t help trying to hatch the fake eggs, even when the eggs were so large that the birds fell off when they tried to sit on them. The effect, since observed in other species, became known as a supernormal stimulus, or superstimulus.

Can this happen to humans? Some think so. They worry about junk food we crave more than actual nutrients, or social media that eclipses our real relationships. Naturally, this idea inspires horror writers, who write about haunting music you can’t stop listening to, or holes in a wall that “fit” so well you’re compelled to climb in.

(And yes, it shows up in porn as well.)

But this is a physics blog, not a biology blog. What kind of superstimulus would work on physicists?

Abstruse Goose knows what’s up

Well for one, this sounds a lot like some criticisms of string theory. Instead of a theory that just unifies some forces, why not unify all the forces? Instead of just learning some advanced mathematics, why not learn more, and more? And if you can’t be falsified by any experiment, well, all that would do is spoil the fun, right?

But it’s not just string theory you could apply this logic to. Astrophysicists study not just one world but many. Cosmologists study the birth and death of the entire universe. Particle physicists study the fundamental pieces that make up the fundamental pieces. We all partake in the euphoria of problem-solving, a perpetual rush where each solution leads to yet another question.

Do I actually think that string theory is a superstimulus, that astrophysics or particle physics is a superstimulus? In a word, no. Much as it might look that way from the news coverage, most physicists don’t work on these big, flashy questions. Far from being lured in by irresistible super-scale problems, most physicists work with tabletop experiments and useful materials. For those of us who do look up at the sky or down at the roots of the world, we do it not just because it’s compelling but because it has a good track record: physics wouldn’t exist if Newton hadn’t cared about the orbits of the planets. We study extremes because they advance our understanding of everything else, because they give us steam engines and transistors and change everyone’s lives for the better.

Then again, if I had fallen victim to a superstimulus, I’d say that anyway, right?

*cue spooky music*

The Multiverse You Can Visit Is Not the True Multiverse

I don’t want to be the kind of science blogger who constantly complains about science fiction, but sometimes I can’t help myself.

When I blogged about zero-point energy a few weeks back, there was a particular book that set me off. Ian McDonald’s River of Gods depicts the interactions of human and AI agents in a fragmented 2047 India. One subplot deals with a power company pursuing zero-point energy, using an imagined completion of M theory called M* theory. This post contains spoilers for that subplot.

What frustrated me about River of Gods is that the physics in it almost makes sense. It isn’t just an excuse for magic, or a standard set of tropes. Even the name “M* theory” is extremely plausible, the sort of term that could get used for technical reasons in a few papers and get accidentally stuck as the name of our fundamental theory of nature. But because so much of the presentation makes sense, it’s actively frustrating when it doesn’t.

The problem is the role the landscape of M* theory plays in the story. The string theory (or M theory) landscape is the space of all consistent vacua, a list of every consistent “default” state the world could have. In the story, one of the AIs is trying to make a portal to somewhere else in the landscape, a world of pure code where AIs can live in peace without competing with humans.

The problem is that the landscape is not actually a real place in string theory. It’s a metaphorical mathematical space, a list organized by some handy coordinates. The other vacua, the other “default states”, aren’t places you can travel to; they’re just other ways the world could have been.

Ok, but what about the multiverse?

There are physicists out there who like to talk about multiple worlds. Some think they’re hypothetical, others argue they must exist. Sometimes they’ll talk about the string theory landscape. But to get a multiverse out of the string theory landscape, you need something else as well.

Two options for that “something else” exist. One is called eternal inflation, the other is the many-worlds interpretation of quantum mechanics. And neither lets you travel around the multiverse.

In eternal inflation, the universe is expanding faster and faster. It’s expanding so fast that, in most places, there isn’t enough time for anything complicated to form. Occasionally, though, due to quantum randomness, a small part of the universe expands a bit more slowly: slow enough for stars, planets, and maybe life. Each small part like that is its own little “Big Bang”, potentially with a different “default” state, a different vacuum from the string landscape. If eternal inflation is true then you can get multiple worlds, but they’re very far apart, and getting farther every second: not easy to visit.
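To get a feel for “not easy to visit”, consider the exponential expansion these models use (a generic sketch, not tied to any particular version of inflation): the distance between two points carried along by the expansion grows like

d(t) = d_0\, e^{H t},

doubling every fixed interval of time, so two regions quickly end up receding from each other faster than light can cross the growing gap between them.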

The many-worlds interpretation is a way to think about quantum mechanics. One way to think about quantum mechanics is to say that quantum states are undetermined until you measure them: a particle could be spinning left or right, Schrödinger’s cat could be alive or dead, and only when measured is their state certain. The many-worlds interpretation offers a different way: by doing away with measurement, it instead keeps the universe in the initial “undetermined” state. The universe only looks determined to us because of our place in it: our states become entangled with those of particles and cats, so that our experiences only correspond to one determined outcome, the “cat alive branch” or the “cat dead branch”. Combine this with the string landscape, and our universe might have split into different “branches” for each possible stable state, each possible vacuum. But you can’t travel to those places, your experiences are still “just on one branch”. If they weren’t, many-worlds wouldn’t be an interpretation, it would just be obviously wrong.

In River of Gods, the AI manipulates a power company into using a particle accelerator to make a bubble of a different vacuum in the landscape. Surprisingly, that isn’t impossible. Making a bubble like that is a bit like what the Large Hadron Collider does, but on a much larger scale. When the Large Hadron Collider detected a Higgs boson, it had created a small ripple in the Higgs field, a small deviation from its default state. One could imagine a bigger ripple doing more: with vastly more energy, maybe you could force the Higgs all the way to a different default, a new vacuum in its landscape of possibilities.

Doing that doesn’t create a portal to another world, though. It destroys our world.

That bubble of a different vacuum isn’t another branch of quantum many-worlds, and it isn’t a far-off big bang from eternal inflation. It’s a part of our own universe, one with a different “default state” where the particles we’re made of can’t exist. And typically, a bubble like that spreads at the speed of light.

In the story, they have a way to stabilize the bubble, stop it from growing or shrinking. That’s at least vaguely believable. But it means that their “portal to another world” is just a little bubble in the middle of a big expensive device. Maybe the AI can live there happily…until the humans pull the plug.

Or maybe they can’t stabilize it, and the bubble spreads and spreads at the speed of light destroying everything. That would certainly be another way for the AI to live without human interference. It’s a bit less peaceful than advertised, though.

Zero-Point Energy, Zero-Point Diagrams

Listen to a certain flavor of crackpot, or a certain kind of science fiction, and you’ll hear about zero-point energy. Limitless free energy drawn from quantum space-time itself, zero-point energy probably sounds like bullshit. Often it is. But lurking behind the pseudoscience and the fiction is a real physics concept, albeit one that doesn’t really work like those people imagine.

In quantum mechanics, the zero-point energy is the lowest energy a particular system can have. That number doesn’t actually have to be zero, even for empty space. People sometimes describe this in terms of so-called virtual particles, popping up from nothing in particle-antiparticle pairs only to annihilate each other again, contributing energy in the absence of any “real particles”. There’s a real force, the Casimir effect, that gets attributed to this, a force that pulls two metal plates together even with no charge or extra electromagnetic field. The same bubbling of pairs of virtual particles also gets used to explain the Hawking radiation of black holes.

I’d like to try explaining all of these things in a different way, one that might clear up some common misconceptions. To start, let’s talk about, not zero-point energy, but zero-point diagrams.

Feynman diagrams are a tool we use to study particle physics. We start with a question: if some specific particles come together and interact, what’s the chance that some (perhaps different) particles emerge? We then draw lines representing the particles going in and out, and connect them in every way allowed by our theory. Finally, we translate the diagrams to numbers, to get an estimate for the probability. In particle physics slang, the number of “points” is the total number of particles: particles in, plus particles out. For example, let’s say we want to know the chance that two electrons go in and two electrons come out. That gives us a “four-point” diagram: two in, plus two out. A zero-point diagram, then, means zero particles in, zero particles out.

A four-point diagram and a zero-point diagram

(Note that this isn’t why zero-point energy is called zero-point energy, as far as I can tell. Zero-point energy is an older term from before Feynman diagrams.)

Remember, each Feynman diagram answers a specific question, about the chance of particles behaving in a certain way. You might wonder, what question does a zero-point diagram answer? The chance that nothing goes to nothing? Why would you want to know that?

To answer, I’d like to bring up some friends of mine, who do something that might sound equally strange: they calculate one-point diagrams, one particle goes to none. This isn’t strange for them because they study theories with defects.

For some reason, they didn’t like my suggestion to use this stamp on their papers

Normally in particle physics, we think about our particles in an empty, featureless space. We don’t have to, though. One thing we can do is introduce features in this space, like walls and mirrors, and try to see what effect they have. We call these features “defects”.

If there’s a defect like that, then it makes sense to calculate a one-point diagram, because your one particle can interact with something that’s not a particle: it can interact with the defect.

A one-point diagram with a wall, or “defect”

You might see where this is going: let’s say you think there’s a force between two walls, that comes from quantum mechanics, and you want to calculate it. You could imagine it involves a diagram like this:

A “zero-point diagram” between two walls

Roughly speaking, this is the kind of thing you could use to calculate the Casimir effect, that mysterious quantum force between metal plates. And indeed, it involves a zero-point diagram.

Here’s the thing, though: metal plates aren’t just “defects”. They’re real physical objects, made of real physical particles. So while you can think of the Casimir effect with a “zero-point diagram” like that, you can also think of it with a normal diagram, more like the four-point diagram I showed you earlier: one that computes, not a force between defects, but a force between the actual electrons and protons that make up the two plates.

A lot of the time when physicists talk about pairs of virtual particles popping up out of the vacuum, they have in mind a picture like this. And often, you can do the same trick, and think about it instead as interactions between physical particles. There’s a story of roughly this kind for Hawking radiation: you can think of a black hole event horizon as “cutting in half” a zero-point diagram, and see pairs of particles going out from the black hole…but you can also do a calculation that looks more like particles interacting with a gravitational field.

This also might help you understand why, contra the crackpots and science fiction writers, zero-point energy isn’t a source of unlimited free energy. Yes, a force like the Casimir effect comes “from the vacuum” in some sense. But really, it’s a force between two particles. And just like the gravitational force between two particles, this doesn’t give you unlimited free power. You have to do the work to move the particles back over and over again, spending the same amount of energy you gained from the force to begin with. And unlike the forces you’re used to, these are typically very small effects, as usual for something that depends on quantum mechanics. So it’s even less useful than more everyday forces for this.
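For a sense of just how small: the textbook result for the Casimir pressure between two ideal parallel plates a distance d apart (a standard formula, not derived from the diagrams above) is

P = \frac{\pi^2 \hbar c}{240\, d^4},

attractive, and for plates a micrometer apart it works out to roughly a thousandth of a pascal, about a hundred-millionth of atmospheric pressure.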

Why do so many crackpots and authors expect zero-point energy to be a massive source of power? In part, this is due to mistakes physicists made early on.

Sometimes, when calculating a zero-point diagram (or any other diagram), we don’t get a sensible number. Instead, we get infinity. Physicists used to be baffled by this. Later, they understood the situation a bit better, and realized that those infinities were probably just due to our ignorance. We don’t know the ultimate high-energy theory, so it’s possible something happens at high energies to cancel those pesky infinities. Without knowing exactly what happened, physicists would estimate by using a “cutoff” energy where they expected things to change.

That kind of calculation led to an estimate you might have heard of: that the zero-point energy inside a single light bulb could boil all the world’s oceans. That estimate gives a pretty impressive mental image…but it’s also wrong.

This kind of estimate led to “the worst theoretical prediction in the history of physics”, that the cosmological constant, the force that speeds up the expansion of the universe, is 120 orders of magnitude higher than its actual value (if it isn’t just zero). If there really were energy enough inside each light bulb to boil the world’s oceans, the expansion of the universe would be quite different than what we observe.
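Here’s the rough shape of that infamous estimate (back-of-the-envelope numbers only): add up the zero-point energy of every mode of a field up to some cutoff momentum k_{\max}, and the energy density grows with the fourth power of the cutoff,

\rho_{\mathrm{vac}} \sim \int^{k_{\max}} \frac{d^3k}{(2\pi)^3}\, \frac{\hbar c\, k}{2} \sim \hbar c\, k_{\max}^4.

Put the cutoff at the Planck scale and this comes out around 10^{113} joules per cubic meter; the energy density actually associated with the accelerating expansion is closer to 10^{-9} joules per cubic meter. Hence the famous 120-or-so orders of magnitude.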

At this point, it’s pretty clear there is something wrong with these kinds of “cutoff” estimates. The only unclear part is whether that’s due to something subtle or something obvious. But either way, this particular estimate is just wrong, and you shouldn’t take it seriously. Zero-point energy exists, but it isn’t the magical untapped free energy you hear about in stories. It’s tiny quantum corrections to the forces between particles.

Guest Post: On the Real Inhomogeneous Universe and the Weirdness of ‘Dark Energy’

A few weeks ago, I mentioned a paper by a colleague of mine, Mohamed Rameez, that generated some discussion. Since I wasn’t up for commenting on the paper’s scientific content, I thought it would be good to give Rameez a chance to explain it in his own words, in a guest post. Here’s what he has to say:


In an earlier post, 4gravitons had contemplated the question of ‘when to trust the contrarians’, in the context of our about-to-be-published paper in which we argue that, accounting for the effects of the bulk flow in the local Universe, there is no evidence for any isotropic cosmic acceleration, which would be required to claim some sort of ‘dark energy’.

In the following I would like to emphasize that this is a reasonable view, and not a contrarian one. To do so I will examine the bulk flow of the local Universe and the historical evolution of what appears to be somewhat dodgy supernova data. I will present a trivial solution (from data) to the claimed ‘Hubble tension’.  I will then discuss inhomogeneous cosmology, and the 2011 Nobel prize in Physics. I will proceed to make predictions that can be falsified with future data. I will conclude with some questions that should be frequently asked.

Disclaimer: The views expressed here are not necessarily shared by my collaborators. 

The bulk flow of the local Universe:

The largest anisotropy in the Cosmic Microwave Background is the dipole, believed to be caused by our motion with respect to the ‘rest frame’ of the CMB with a velocity of ~369 km s^-1. Under this view, all matter in the local Universe appears to be flowing. At least out to ~300 Mpc, this flow continues to be directionally coherent, to within ~40 degrees of the CMB dipole, and the scale at which the average relative motion between matter and radiation converges to zero has so far not been found.

This is one of the most widely accepted results in modern cosmology, to the extent that SN1a data come pre-‘corrected’ for it.
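For scale (standard numbers), a velocity v with respect to the CMB rest frame produces a dipole temperature anisotropy of roughly

\frac{\Delta T}{T_0} \simeq \frac{v}{c} \;\Rightarrow\; \Delta T \approx \frac{369\ \mathrm{km\,s^{-1}}}{c}\times 2.725\ \mathrm{K} \approx 3.4\ \mathrm{mK},

the few-millikelvin dipole that is actually observed.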

Such a flow has covariant consequences under general relativity and this is what we set out to test.

Supernova data, directions in the sky and dodginess:

Both Riess et al 1998 and Perlmutter et al 1999 used samples of supernovae down to redshifts of 0.01, in which almost all SNe at redshifts below 0.1 were in the direction of the flow.

Subsequently in Astier et al 2006, Kowalsky et al 2008, Amanullah et al 2010 and Suzuki et al 2011, it is reported that a process of outlier rejection was adopted in which data points >3\sigma from the Hubble diagram were discarded. This was done using a highly questionable statistical method that involves adjusting an intrinsic dispersion term \sigma_{\textrm{int}} by hand until a \chi^2/\textrm{ndof} of 1 is obtained to the assumed \LambdaCDM model. The number of outliers rejected is however far in excess of 0.3% – which is the 3\sigma expectation. As the sky coverage became less skewed, supernovae with redshift less than ~0.023 were excluded for being outside the Hubble flow. While the Hubble diagram so far had been inferred from heliocentric redshifts and magnitudes, with the introduction of SDSS supernovae that happened to be in the direction opposite to the flow, peculiar velocity ‘corrections’ were adopted in the JLA catalogue and supernovae down to extremely low redshifts were reintroduced. While the early claims of a cosmological constant were stated as ‘high redshift supernovae were found to be dimmer (15% in flux) than the low redshift supernovae (compared to what would be expected in a \Lambda=0 universe)’, it is worth noting that the peculiar velocity corrections change the redshifts and fluxes of low redshift supernovae by up to ~20 %.
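For reference, the leading-order frame ‘correction’ at issue takes the form (the standard first-order approximation; the actual pipelines also include terms for the supernovae’s own peculiar motions)

z_{\mathrm{CMB}} \approx z_{\mathrm{hel}} + \frac{v_{\odot}}{c}\cos\theta, \qquad \frac{v_{\odot}}{c} \approx 1.2\times 10^{-3},

where \theta is the angle between the supernova and the apex of the solar motion inferred from the CMB dipole. At redshifts of ~0.01 this alone shifts z at the ten percent level.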

When it was observed that even with this ‘corrected’ sample of 740 SNe, any evidence for isotropic acceleration using a principled Maximum Likelihood Estimator is less than 3\sigma, it was claimed that by adding 12 additional parameters (to the 10 parameter model) to allow for redshift and sample dependence of the light curve fitting parameters, the evidence was greater than 4\sigma.

As we discuss in Colin et al. 2019, these corrections also appear to be arbitrary, and betray an ignorance of the fundamentals of both basic statistical analysis and relativity. With the Pantheon compilation, heliocentric observables were no longer public and these peculiar velocity corrections initially extended far beyond the range of any known flow model of the Local Universe. When this bug was eventually fixed, both the heliocentric redshifts and magnitudes of the SDSS SNe that filled in the ‘redshift desert’ between low and high redshift SNe were found to be alarmingly discrepant. The authors have so far not offered any clarification of these discrepancies.

Thus it seems to me that the latest generation of ‘publicly available’ supernova data are not aiding either open science or progress in cosmology.

A trivial solution to the ‘Hubble tension’?

The apparent tension between the Hubble parameter as inferred from the Cosmic Microwave Background and low redshift tracers has been widely discussed, and recent studies suggest that redshift errors as low as 0.0001 can have a significant impact. Redshift discrepancies as big as 0.1 have been reported. The shifts reported between JLA and Pantheon appear to be sufficient to lower the Hubble parameter from ~73 km s^-1 Mpc^-1 to ~68 km s^-1 Mpc^-1.
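As a rough illustration of the arithmetic (a low-redshift approximation only, not a substitute for the analyses referenced above): at low redshift the Hubble parameter is inferred from

H_0 \approx \frac{c\,z}{d},

so at fixed inferred distances, a systematic shift in the redshifts propagates proportionally into H_0, and the difference between ~73 and ~68 km s^-1 Mpc^-1 corresponds to shifts at the several-percent level.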

On General Relativity, cosmology, metric expansion and inhomogeneities:

In the maximally symmetric Friedmann-Lemaître-Robertson-Walker solution to general relativity, there is only one meaningful global notion of distance and it expands at the same rate everywhere. However, the late time Universe has structure on all scales, and one may only hope for statistical (not exact) homogeneity. The Universe is expected to be lumpy. A background FLRW metric is not expected to exist, and quantities analogous to the Hubble and deceleration parameters will vary across the sky. Peculiar velocities may be more precisely thought of as variations in the expansion rate of the Universe. At what rate does a real Universe with structure expand? The problems of defining a meaningful average notion of volume, its dynamical evolution, and connecting it to observations are all conceptually open.

On the 2011 Nobel Prize in Physics:

The Fitting Problem in cosmology was written in 1987. In the context of this work and the significant theoretical difficulties involved in inferring fundamental physics from the real Universe, any claims of having measured a cosmological constant from directionally skewed, sparse samples of intrinsically scattered observations should have been taken with a grain of salt.  By honouring this claim with a Nobel Prize, the Swedish Academy may have induced runaway prestige bias in favour of some of the least principled analyses in science, strengthening the confirmation bias that seems prevalent in cosmology.

This has resulted in the generation of a large body of misleading literature, while normalizing the practice of ‘massaging’ scientific data. In her recent video about gravitational waves, Sabine Hossenfelder says “We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data”. What about when the data were fitted (in 1998-1999) using a method that had been discredited in 1989, to a toy model that had been cautioned against in 1987, leading to a ‘discovery’ of profound significance to fundamental physics?

A prediction with future cosmological data:

With the advent of high statistics cosmological data in the future, such as from the Large Synoptic Survey Telescope, I predict that the Hubble and deceleration parameters inferred from supernovae in hemispheres towards and away from the CMB dipole will be found to be different in a statistically significant (>5\sigma) way. Depending upon the criterion for selection and blind analyses of data that can be agreed upon, I would be willing to bet a substantial amount of money on this prediction.

Concluding : on the amusing sociology of ‘Dark Energy’ and manufactured concordance:

Of the two authors of the well-known cosmology textbook ‘The Early Universe’, Edward Kolb writes these interesting papers questioning dark energy while Michael Turner is credited with coining the term ‘Dark Energy’.  Reasonable scientific perspectives have to be presented as ‘Dark Energy without dark energy’. Papers questioning the need to invoke such a mysterious content that makes up ‘68% of the Universe’ are quickly targeted by inane articles by non-experts or perhaps well-meant but still misleading YouTube videos. Much of this is nothing more than a spectacle.

In summary, while the theoretical debate about whether what has been observed as Dark Energy is the effect of inhomogeneities is ongoing, observers appear to have been actively using the most inhomogeneous feature of the local Universe through opaque corrections to data, to continue claiming that this ‘dark energy’ exists.

It is heartening to see that recent works lean toward a breaking of this manufactured concordance and speak of a crisis for cosmology.

Questions that should be frequently asked:

Q. Is there a Hubble frame in the late time Universe?

A. The Hubble frame is a property of the FLRW exact solution, and in the late time Universe in which galaxies and clusters have peculiar motions with respect to each other, an equivalent notion does not exist. While popular inference treats the frame in which the CMB dipole vanishes as the Hubble frame, the scale at which the bulk flow of the local Universe converges to that frame has never been found. We are tilted observers.

Q. I am about to perform blinded analyses on new cosmological data. Should I correct all my redshifts towards the CMB rest frame?

A. No. Correcting all your redshifts towards a frame that has never been found is a good way to end up with ‘dark energy’. It is worth noting that while the CMB dipole has been known since 1994, supernova data have been corrected towards the CMB rest frame only after 2010, for what appear to be independent reasons.

Q. Can I combine new data with existing Supernova data?

A. No. The current generation of publicly available supernova data suffer from the natural biases that are to be expected when data are compiled incrementally through a human mediated process. It would be better to start fresh with a new sample.

Q. Is ‘dark energy’ fundamental or new physics?

A. Given that general relativity is a 100+ year old theory and significant difficulties exist in describing the late time Universe with it, it is unnecessary to invoke new fundamental physics when confronting any apparent acceleration of the real Universe. All signs suggest that what has been ascribed to dark energy is the result of a community that is hell-bent on repeating what Einstein supposedly called his greatest mistake.

Digging deeper:

The inquisitive reader may explore the resources on inhomogeneous cosmology, as well as the works of George Ellis, Thomas Buchert and David Wiltshire.