Category Archives: Astrophysics/Cosmology

Bonus Info For “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe”

I had a piece in Quanta Magazine recently, about a tricky paradox that’s puzzling quantum gravity researchers and some early hints at its resolution.

The paradox comes from trying to describe “closed universes”, which are universes where it is impossible to reach the edge, even if you had infinite time to do it. This could be because the universe wraps around like a globe, or because the universe is expanding so fast no traveler could ever reach an edge. Recently, theoretical physicists have been trying to describe these closed universes, and have noticed a weird issue: each such universe appears to have only one possible quantum state. In general, quantum systems have more possible states the more complex they are, so for a whole universe to have only one possible state is a very strange thing, implying a bizarrely simple universe. Most worryingly, our universe may well be closed. Does that mean that secretly, the real world has only one possible state?

There is a possible solution that a few groups are playing around with. The argument that a closed universe has only one state depends on the fact that nothing inside a closed universe can reach the edge. But if nothing can reach the edge, then trying to observe the universe as a whole from outside would tell you nothing of use. Instead, any reasonable measurement would have to come from inside the universe. Such a measurement introduces a new kind of “edge of the universe”, this time not in the far distance, but close by: the edge between an observer and the rest of the world. And when you add that edge to the calculations, the universe stops being closed, and has all the many states it ought to.

This was an unusually tricky story for me to understand. I narrowly avoided several misconceptions, and I’m still not sure I managed to dodge all of them. Likewise, it was unusually tricky for the editors to understand, and I suspect it was especially tricky for Quanta’s social media team to understand.

It was also, quite clearly, tricky for the readers to understand. So I thought I would use this post to clear up a few misconceptions. I’ll say a bit more about what I learned investigating this piece, and try to clarify what the result does and does not mean.

Q: I’m confused about the math terms you’re using. Doesn’t a closed set contain its boundary?

A: Annoyingly, what physicists mean by a closed universe is a bit different from what mathematicians mean by a closed manifold, which is in turn more restrictive than what mathematicians mean by a closed set. One way to think about this that helped me is that in an open set you can take a limit that takes you out of the set (think of the open interval between 0 and 1, where the sequence 1/2, 1/4, 1/8, … converges to 0, a point outside the set), which is like being able to describe a (possibly infinite) path that takes you “out of the universe”. A closed set doesn’t have that: every path, no matter how long, still ends up inside the same universe.

Q: So a bunch of string theorists did a calculation and got a result that doesn’t make sense, a one-state universe. What if they’re just wrong?

A: Two things:

First, the people I talked to emphasized that it’s pretty hard to wiggle out of the conclusion. It’s not just a matter of saying you don’t believe in string theory and that’s that. The argument is based on pretty fundamental principles, and it’s not easy to propose a way out that doesn’t mess up something even more important.

That’s not to say it’s impossible. One of the people I interviewed, Henry Maxfield, thinks that some of the recent arguments are misunderstanding how to use one of their core techniques, in a way that accidentally presupposes the one-state universe.

But even he thinks that the bigger point, that closed universes have only one state, is probably true.

And that’s largely due to a second reason: there are older arguments that back the conclusion up.

One of the oldest dates back to John Wheeler, a physicist famous for both deep musings about the nature of space and time and coining evocative terms like “wormhole”. In the 1960’s, Wheeler argued that, in a theory where space and time can be curved, one should think of a system’s state as including every configuration it can evolve into over time, since it can be tricky to specify a moment “right now”. In a closed universe, you could expect a quantum system to explore every possible configuration…meaning that such a universe should be described by only one state.

Later, physicists studying holography ran into a similar conclusion. They kept noticing systems in quantum gravity where you can describe everything that happens inside by what happens on the edges. If there are no edges, that seems to suggest that in some sense there is nothing inside. Apparently, Lenny Susskind had a slide at the end of talks in the 90’s where he kept bringing up this point.

So even if the modern arguments are wrong, and even if string theory is wrong…it still looks like the overall conclusion is right.

Q: If a closed universe has only one state, does that make it deterministic, and thus classical?

A: Oh boy…

So, on the one hand, there is an idea, which I think also goes back to Wheeler, that asks: “if the universe as a whole has a wavefunction, how does it collapse?” One possibility is that the universe has only one state, so that nobody is needed to collapse the wavefunction: it is already in a definite state.

On the other hand, a universe with only one state does not actually look much like a classical universe. Our universe looks classical largely due to a process called decoherence, where small quantum systems interact with big quantum systems with many states, diluting quantum effects until the world looks classical. If there is only one state, there are no big systems to interact with, and the world has large quantum fluctuations that make it look very different from a classical universe.

Q: How, exactly, are you defining “observer”?

A: A few commenters helpfully chimed in to talk about how physics models observers as “witness” systems, objects that preserve some record of what happens to them. A simple example is a ball sitting next to a bowl: if you find the ball in the bowl later, it means something moved it. This process, preserving what happens and making it more obvious, is in essence how physicists think about observers.

However, this isn’t the whole story in this case. Here, different research groups introducing observers are doing it in different ways. That’s, in part, why none of them are confident they have the right answer.

One of the approaches describes an observer in terms of its path through space and time, its worldline. Instead of a detailed witness system with specific properties, all they do is pick out a line and say “the observer is there”. Identifying that line, and declaring it different from its surroundings, seems to be enough to recover the complexity the universe ought to have.

The other approach treats the witness system in a bit more detail. We usually treat an observer in quantum mechanics as infinitely large compared to the quantum systems they measure. This approach instead gives the observer a finite size, and uses that to estimate how far their experience will be from classical physics.

Crucially, neither approach is a matter of defining a physical object and looking for it in the theory. Given a collection of atoms, neither team can tell you what is an observer, and what isn’t. Instead, in each approach, the observer is arbitrary: a choice, made by us when we use quantum mechanics, of what to count as an observer and what to count as the rest of the world. That choice can be made in many different ways, and each approach tries to describe what happens when you change that choice.

This is part of what makes this approach uncomfortable to some more philosophically-minded physicists: it treats observers not as a predictable part of the physical world, but as a mathematical description used to make statements about the world.

Q: If these ideas come from AdS/CFT, which describes an open universe, how do you use them to describe a closed universe?

A: While more examples emerged later, initially theorists were thinking about two types of closed universes:

First, think about a black hole. You may have heard that when you fall into a black hole, you watch the whole universe age away before your eyes, due to the dramatic differences in the passage of time caused by the extreme gravity. Once you’ve seen the outside universe fade away, you are essentially in a closed universe of your own. The outside world will never affect you again, and you are isolated, with no path to the outside. These black hole interiors are one of the examples theorists looked at.

The other example involves so-called “baby universes”. When physicists use quantum mechanics to calculate the chance of something happening, they have to add up every possible series of events that could have happened in between. For quantum gravity, this includes every possible arrangement of space and time: arrangements with different shapes, including ones with tiny extra “baby universes” that branch off from the main universe and return. Universes with these “baby universes” were the other example theorists considered when trying to understand closed universes.

Q: So wait, are you actually saying the universe needs to be observed to exist? That’s ridiculous, didn’t the universe exist long before humans existed to observe it? Is this some sort of Copenhagen Interpretation thing, or that thing called QBism?

A: You’re starting to ask philosophical questions, and here’s the thing:

There are physicists who spend their time thinking about how to interpret quantum mechanics. They talk to philosophers, and try to figure out how to answer these kinds of questions in a consistent and systematic way, keeping track of all the potential pitfalls and implications. They’re part of a subfield called “quantum foundations”.

The physicists whose work I was talking about in that piece are not those people.

Of the people I interviewed, one of them, Rob Myers, probably has lunch with quantum foundations researchers on occasion. The others, based at places like MIT and the IAS, probably don’t even do that.

Instead, these are people trying to solve a technical problem, people whose first inclination is to put philosophy to the side, and “shut up and calculate”. These people did a calculation that ought to have worked, checking how many quantum states they could find in a closed universe, and found a weird and annoying answer: just one. Trying to solve the problem, they’ve done technical calculation work, introducing a path through the universe, or a boundary around an observer, and seeing what happens. While some of them may have their own philosophical leanings, they’re not writing works of philosophy. Their papers don’t talk through the philosophical implications of their ideas in all that much detail, and they may well have different thoughts as to what those implications are.

So while I suspect I know the answers they would give to some of these questions, I’m not sure.

Instead, how about I tell you what I think?

I’m not a philosopher; I can’t promise my views will be consistent, or that they won’t suffer from some pitfall. But unlike with other people’s views, I can at least tell you what my own are.

To start off: yes, the universe existed before humans. No, there is nothing special about our minds, we don’t have psychic powers to create the universe with our thoughts or anything dumb like that.

What I think is that, if we want to describe the world, we ought to take lessons from science.

Science works. It works for many reasons, but two important ones stand out.

Science works because it leads to technology, and it leads to technology because it guides actions. It lets us ask, if I do this, what will happen? What will I experience?

And science works because it lets people reach agreement. It lets people reach agreement because it lets us ask, if I observe this, what do I expect you to observe? And if we agree, we can agree on the science.

Ultimately, if we want to describe the world with the virtues of science, our descriptions need to obey this rule: they need to let us ask “what if?” questions about observations.

That means that science cannot avoid an observer. It can often hide the observer, place them far away and give them an infinite mind to behold what they see, so that one observer is essentially the same as another. But we shouldn’t expect to always be able to do this. Sometimes, we can’t avoid saying something about the observer: about where they are, or how big they are, for example.

These observers, though, don’t have to actually exist. We should be able to ask “what if” questions about others, and that means we should be able to dream up fictional observers, and ask, if they existed, what would they see? We can imagine observers swimming in the quark-gluon plasma after the Big Bang, or sitting inside a black hole’s event horizon, or outside our visible universe. The existence of the observer isn’t a physical requirement, but a methodological one: a restriction on how we can make useful, scientific statements about the world. Our theory doesn’t have to explain where observers “come from”, and can’t and shouldn’t do that. The observers aren’t part of the physical world being described, they’re a precondition for us to describe that world.

Is this the Copenhagen Interpretation? I’m not a historian, but I don’t think so. The impression I get is that there was no real Copenhagen Interpretation, that Bohr and Heisenberg, while more deeply interested in philosophy than many physicists today, didn’t actually think things through in enough depth to have a perspective you can name and argue with.

Is this QBism? I don’t think so. It aligns with some things QBists say, but they say a lot of silly things as well. It’s probably some kind of instrumentalism, for what that’s worth.

Is it logical positivism? I’ve been told logical positivists would argue that the world outside the visible universe does not exist. If that’s true, I’m not a logical positivist.

Is it pragmatism? Maybe? What I’ve seen of pragmatism definitely appeals to me, but I’ve seen my share of negative characterizations as well.

In the end, it’s an idea about what’s useful and what’s not, about what moves science forward and what doesn’t. It tries to avoid being preoccupied with unanswerable questions, and as much as possible to cash things out in testable statements. If I do this, what happens? What if I did that instead?

The results I covered for Quanta, to me, show that the observer matters on a deep level. That isn’t a physical statement, it isn’t a mystical statement. It’s a methodological statement: if we want to be scientists, we can’t give up on the observer.

Fear of the Dark, Physics Version

Happy Halloween! I’ve got a yearly tradition on this blog of talking about the spooky side of physics. This year, we’ll think about what happens…when you turn off the lights.

Over history, astronomy has given us larger and larger views of the universe. We started out thinking the planets, Sun, and Moon were human-like, just a short distance away. Measuring distances, we started to understand the size of the Earth, then the Sun, then realized how much farther still the stars were from us. Gradually, we came to understand that some of the stars were much farther away than others. Thinkers like Immanuel Kant speculated that “nebulae” were clouds of stars like our own Milky Way, and in the early 20th century better distance measurements confirmed it, showing that Andromeda was not a nearby cloud, but an entirely different galaxy. By the 1960’s, scientists had observed the universe’s cosmic microwave background, seeing as far out as it was possible to see.

But what if we stopped halfway?

Since the 1920’s, we’ve known the universe is expanding. Since the 1990’s, we’ve thought that that expansion is speeding up: faraway galaxies are getting farther and farther away from us. Space itself is expanding, carrying the galaxies apart…faster than light.
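To put a rough number on “faster than light” (my own ballpark arithmetic, glossing over exactly which notion of distance you use):

```python
# Rough arithmetic: with a Hubble constant of about 70 km/s per megaparsec,
# how far away does the expansion carry galaxies away from us faster than light?
c_km_s = 3.0e5          # speed of light, km/s
H0 = 70                 # Hubble constant, km/s per megaparsec (approximate)

hubble_distance_mpc = c_km_s / H0                            # ~4300 megaparsecs
hubble_distance_gly = hubble_distance_mpc * 3.26e6 / 1e9     # megaparsecs -> billions of light-years

print(f"Beyond roughly {hubble_distance_gly:.0f} billion light-years, galaxies recede faster than light")
```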

That ever-increasing speed has a consequence. It means that, eventually, each galaxy will fly beyond our view. One by one, the other galaxies will disappear, so far away that light will not have had enough time to reach us.

From our perspective, it will be as if the lights, one by one, started to go out. Each faraway light, each cloudy blur that hides a whirl of worlds, will wink out. The sky will get darker and darker, until to an astronomer from a distant future, the universe will appear a strangely limited place:

A single whirl of stars, in a deep, dark, void.

To Measure Something or to Test It

Black holes have been in the news a couple times recently.

On one end, there was the observation of an extremely large black hole in the early universe, at a time when no black holes of that kind were expected to exist. My understanding is that this is very much a “big if true” kind of claim, something that could have dramatic implications but may just be a misunderstanding. At the moment, I’m not going to try to work out which it is.

In between, you have a piece by me in Quanta Magazine a couple weeks ago, about tests of whether black holes deviate from general relativity. They don’t, by the way, according to the tests so far.

And on the other end, you have the coverage last week of a “confirmation” (or even “proof”) of the black hole area law.

The black hole area law states that the total area of the event horizons of all black holes will always increase. It’s also known as the second law of black hole thermodynamics, paralleling the second law of thermodynamics that entropy always increases. Hawking proved this as a theorem in 1971, assuming that general relativity holds true.

(That leaves out quantum effects, which indeed can make black holes shrink, as Hawking himself famously later argued.)

The black hole area law is supposed to hold even when two black holes collide and merge. While the combination may lose energy (leading to gravitational waves that carry energy to us), it will still have greater area, in the end, than the sum of the black holes that combined to make it.
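To make that concrete, here’s a quick back-of-the-envelope sketch (my own illustration, not anything from the LIGO paper), treating both black holes as non-spinning, so that horizon area simply grows with the square of the mass:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 / (kg s^2)
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def horizon_area(mass_kg):
    """Event-horizon area of a non-spinning (Schwarzschild) black hole."""
    r_s = 2 * G * mass_kg / c**2       # Schwarzschild radius
    return 4 * math.pi * r_s**2

# Two equal black holes merge (made-up masses, spins ignored).
m1 = m2 = 30 * M_sun

# The area law demands A_final >= A_1 + A_2.  Since A scales as M^2,
# the final mass must satisfy M_final >= sqrt(m1^2 + m2^2).
m_final_min = math.sqrt(m1**2 + m2**2)
max_radiated = (m1 + m2) - m_final_min

print(f"Total horizon area before:    {horizon_area(m1) + horizon_area(m2):.2e} m^2")
print(f"Smallest allowed area after:  {horizon_area(m_final_min):.2e} m^2")
print(f"At most {max_radiated / (m1 + m2):.0%} of the mass can leave as gravitational waves")
```

Real mergers radiate much less than that bound, a few percent of the total mass, so there is plenty of room for the total area to grow even as energy streams away in gravitational waves.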

Ok, so that’s the area law. What’s this paper that’s supposed to “finally prove” it?

The LIGO, Virgo, and KAGRA collaborations recently published a paper based on gravitational waves from one particularly clear collision of black holes, which they measured back in January. They compared their measurements to predictions from general relativity and checked two things: whether the measurements agreed with predictions based on the Kerr metric (how space-time around a rotating black hole is supposed to behave), and whether they obeyed the area law.

The first check isn’t so different in purpose from the work I wrote about in Quanta Magazine, just using different methods. In both studies, physicists are looking for deviations from the laws of general relativity, triggered by the highly curved environments around black holes. These deviations could show up in one way or another in any black hole collision, so while you would ideally look for them by scanning over many collisions (as the paper I reported on did), you could do a meaningful test even with just one collision. That kind of a check may not be very strenuous (if general relativity is wrong, it’s likely by a very small amount), but it’s still an opportunity, diligently sought, to be proven wrong.

The second check is the one that got the headlines. It also got first billing in the paper title, and a decent amount of verbiage in the paper itself. And if you think about it for more than five minutes, it doesn’t make a ton of sense as presented.

Suppose the black hole area law is wrong, and sometimes black holes lose area when they collide. Even if this happened sometimes, you wouldn’t expect it to happen every time. It’s not like anyone is pondering a reverse black hole area law, where black holes only shrink!

Because of that, I think it’s better to say that LIGO measured the black hole area law for this collision, while they tested whether black holes obey the Kerr metric. In one case, they’re just observing what happened in this one situation. In the other, they can try to draw implications for other collisions.

That doesn’t mean their work wasn’t impressive, but it was impressive for reasons that don’t seem to be getting emphasized. It’s impressive because, prior to this paper, they had not managed to measure the areas of colliding black holes well enough to confirm that they obeyed the area law! The previous collisions looked like they obeyed the law, but when you factor in the experimental error they couldn’t say it with confidence. The current measurement is better, and can. So the new measurement is interesting not because it confirms a fundamental law of the universe or anything like that…it’s interesting because previous measurements were so bad that they couldn’t even confirm this kind of fundamental law!

That, incidentally, feels like a “missing mood” in pop science. Some things are impressive not because of their amazing scale or awesome implications, but because they are unexpectedly, unintuitively, really really hard to do. These measurements shouldn’t be thought of, or billed, as tests of nature’s fundamental laws. Instead they’re interesting because they highlight what we’re capable of, and what we still need to accomplish.

Did the South Pole Telescope Just Rule Out Neutrino Masses? Not Exactly, Followed by My Speculations

Recently, the South Pole Telescope’s SPT-3G collaboration released new measurements of the cosmic microwave background, the leftover light from the formation of the first atoms. By measuring this light, cosmologists can infer the early universe’s “shape”: how it rippled on different scales as it expanded into the universe we know today. They compare this shape to mathematical models, equations and simulations which tie together everything we know about gravity and matter, and try to see what it implies for those models’ biggest unknowns.

Some of the most interesting such unknowns are neutrino masses. We know that neutrinos have mass because they transform as they move, from one type of neutrino to another. Those transformations let physicists measure the differences between (the squares of) the neutrino masses, but by themselves, they don’t say what the actual masses are. All we know from particle physics, at this point, is a minimum: in order for the neutrinos to differ in mass enough to transform in the way they do, the total mass of the three flavors of neutrino must be at least 0.06 electron-Volts.

(Divided by the speed of light squared to get the right units, if you’re picky about that sort of thing. Physicists aren’t.)
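If you’re curious where that 0.06 comes from, here’s a rough version of the arithmetic, using rounded, textbook-ish values for the measured mass-squared splittings (my numbers, not the collaboration’s; the two “hierarchies” are explained a bit further down):

```python
import math

# Approximate mass-squared splittings from oscillation experiments (in eV^2).
# These are rough, rounded values; the PDG has the current fits.
dm2_21 = 7.5e-5    # "solar" splitting
dm2_31 = 2.5e-3    # "atmospheric" splitting (magnitude)

# Normal hierarchy, with the lightest neutrino taken to be massless:
m1 = 0.0
m2 = math.sqrt(dm2_21)
m3 = math.sqrt(dm2_31)
print(f"Normal hierarchy minimum sum:   {m1 + m2 + m3:.3f} eV")     # ~0.06 eV

# Inverted hierarchy, with the lightest (here m3) taken to be massless:
m3_inv = 0.0
m1_inv = math.sqrt(dm2_31)
m2_inv = math.sqrt(dm2_31 + dm2_21)
print(f"Inverted hierarchy minimum sum: {m1_inv + m2_inv + m3_inv:.3f} eV")  # ~0.10 eV
```

The inverted version of the same exercise gives a minimum closer to 0.1 eV, which is part of why the cosmological bound squeezes it even harder, as we’ll see below.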

Neutrinos also influenced the early universe, shaping it in a noticeably different way than heavier particles that bind together into atoms, like electrons and protons, did. That effect, observed in the cosmic microwave background and in the distribution of galaxies in the universe today, lets cosmologists calculate a maximum: if neutrinos are more massive than a certain threshold, they could not have the effects cosmologists observe.

Over time as measurements improved, this maximum has decreased. Now, the South Pole Telescope has added more data to the pool, and combining it with prior measurements…well, I’ll quote their paper:

Ok, it’s probably pretty hard to understand what that means if you’re not a physicist. To explain:

  1. There are two different hypotheses for how neutrino masses work, called “hierarchies”. In the “normal” hierarchy, the neutrinos go in the same order as the particles they interact with via the weak nuclear force: electron neutrinos are lighter than muon neutrinos, which are lighter than tau neutrinos. In the “inverted” hierarchy, they come in the opposite order, and the electron neutrino is the heaviest. Both of these are consistent with the particle-physics data.
  2. Confidence is a statistics thing, which could take a lot of unpacking to define correctly. To give a short but likely tortured-sounding explanation: when you rule out a hypothesis at a certain confidence level, you’re saying that, if that hypothesis were true, there would only be a chance of 100% minus that confidence level that you would see something like what you actually observed.

So, what are the folks at the South Pole Telescope saying? They’re saying that if you put all the evidence together (that’s roughly what that pile of acronyms at the beginning means), then the result would be incredibly uncharacteristic for either hypothesis for neutrino masses. If you had “normal” neutrino masses, you’d only see these cosmological observations 2.1% of the time. And if you had inverted neutrino masses instead, you’d only see these observations 0.01% of the time!
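If you prefer the “sigma” language physicists usually quote, here’s a rough translation of those percentages (my own sketch, using a simple two-sided Gaussian convention; the collaboration’s statistics are more careful than this):

```python
from scipy.stats import norm

def two_sided_sigma(p):
    """Convert a probability into the equivalent number of standard
    deviations ("sigma") of a two-sided Gaussian test."""
    return norm.isf(p / 2)

# Rough numbers from the text: how often you'd see data like this
# if each hypothesis about neutrino masses were true.
print(f"Normal hierarchy   (p = 2.1%):  {two_sided_sigma(0.021):.1f} sigma")
print(f"Inverted hierarchy (p = 0.01%): {two_sided_sigma(1e-4):.1f} sigma")

# For comparison, the usual particle-physics discovery threshold:
print(f"Discovery standard (99.99994%): {two_sided_sigma(1 - 0.9999994):.1f} sigma")
```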

That sure makes it sound like neither hypothesis is correct, right? Does it actually mean that?

I mean, it could! But I don’t think so. Here I’ll start speculating on the possibilities, from least likely in my opinion to most likely. This is mostly my bias talking, and shouldn’t be taken too seriously.

5. Neutrinos are actually massless

This one is really unlikely. The evidence from particle physics isn’t just quantitative, but qualitative. I don’t know if it’s possible to write down a model that reproduces the results of neutrino oscillation experiments without massive neutrinos, and if it is, it would be a very bizarre model that would almost certainly break something else. This is essentially a non-starter.

4. This is a sign of interesting new physics

I mean, it would be nice, right? I’m sure there are many proposals at this point, tweaks that add a few extra fields with some hard-to-notice effects to explain the inconsistency. I can’t rule this out, and unlike the last point there isn’t anything about it that seems impossible. But we’ve had a lot of odd observations, and so far this hasn’t happened.

3. Someone did statistics wrong

This happens more often. Any argument like this is a statistical argument, and while physicists keep getting better at statistics, they’re not professional statisticians. Sometimes there’s a genuine misunderstanding that goes into testing a model, and once it gets resolved the problem goes away.

2. The issue will go away with more data

The problem could also just…go away. 97.9% confidence sounds huge…but in physics, the standards are higher: you need 99.99994% to announce a new discovery. Physicists do a lot of experiments and observations, and sometimes, they see weird things! As the measurements get more precise, we may well see the disagreement melt away, and cosmology and particle physics both point to the same range for neutrino masses. It’s happened to many other measurements before.

1. We’re reaching the limits of our current approach to cosmology

This is probably not actually the most likely possibility, but it’s my list, what are you going to do?

There are basic assumptions behind how most theoretical physicists do cosmology. These assumptions are reasonably plausible, and seem to be needed to do anything at all. But they can be relaxed. Our universe looks like it’s homogeneous on the largest scales: the same density on average, in every direction you look. But the way that gets enforced in the mathematical models is very direct, and it may be that a different, more indirect, approach has more flexibility. I’ll probably be writing about this more in future, hopefully somewhere journalistic. But there are some very cool ideas floating around, gradually getting fleshed out more and more. It may be that the answer to many of the mysteries of cosmology right now is not new physics, but new mathematics: a new approach to modeling the universe.

Lambda-CDM Is Not Like the Standard Model

A statistician will tell you that all models are wrong, but some are useful.

Particle physicists have an enormously successful model called the Standard Model, which describes the world in terms of seventeen quantum fields, giving rise to particles from the familiar electron to the challenging-to-measure Higgs boson. The model has nineteen parameters, numbers that aren’t predicted by the model itself but must be found by doing experiments and finding the best statistical fit. With those numbers as input, the model is extremely accurate, aside from the occasional weird discrepancy.

Cosmologists have their own very successful standard model that they use to model the universe as a whole. Called ΛCDM, it describes the universe in terms of three things: dark energy, denoted with a capital lambda (Λ), cold dark matter (CDM), and ordinary matter, all interacting with each other via gravity. The model has six parameters, which must be found by observing the universe and finding the best statistical fit. When those numbers are input, the model is extremely accurate, though there have recently been some high-profile discrepancies.
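For reference, here is roughly what those six parameters are in the usual baseline fit, with very rough values as I remember them from Planck-era results (the actual papers quote them with careful error bars):

```python
# The six free parameters of the baseline "flat Lambda-CDM" fit, with
# very rough, rounded values (see the Planck or SPT papers for real numbers):
lcdm_parameters = {
    "omega_b h^2":   0.0224,   # physical density of ordinary (baryonic) matter
    "omega_c h^2":   0.120,    # physical density of cold dark matter
    "100 theta_s":   1.041,    # angular size of the sound horizon (fixes the expansion rate)
    "tau":           0.054,    # optical depth to reionization
    "ln(10^10 A_s)": 3.04,     # amplitude of the primordial fluctuations
    "n_s":           0.965,    # tilt of the primordial fluctuation spectrum
}
# Everything else -- the dark energy density, the Hubble constant, the age
# of the universe -- is derived from these six plus the baked-in assumptions.
```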

These sound pretty similar. You model the world as a list of things, fix your parameters based on nature, and make predictions. Wikipedia has a nice graphic depicting the quantum fields of the Standard Model, and you could imagine a similar graphic for ΛCDM.

A graphic like that would be misleading, though.

ΛCDM doesn’t just propose a list of fields and let them interact freely. Instead, it tries to model the universe as a whole, which means it carries assumptions about how matter and energy are distributed, and how space-time is shaped. Some of this is controlled by its parameters, and by tweaking them one can model a universe that varies in different ways. But other assumptions are baked in. If the universe had a very different shape, caused by a very different distribution of matter and energy, then we would need a very different model to represent it. We couldn’t use ΛCDM.

The Standard Model isn’t like that. If you collide two protons together, you need a model of how quarks are distributed inside protons. But that model isn’t the Standard Model, it’s a separate model used for that particular type of experiment. The Standard Model is supposed to be the big picture, the stuff that exists and affects every experiment you can do.

That means the Standard Model is supported in a way that ΛCDM isn’t. The Standard Model describes many different experiments, and is supported by almost all of them. When an experiment disagrees, it has specific implications for part of the model only. For example, neutrinos have mass, which was not predicted in the Standard Model, but it proved easy for people to modify the model to fit. We know the Standard Model is not the full picture, but we also know that any deviations from it must be very small. Large deviations would contradict other experiments, or more basic principles like probabilities needing to be smaller than one.

In contrast, ΛCDM is really just supported by one experiment. We have one universe to observe. We can gather a lot of data, measuring it from its early history to the recent past. But we can’t run it over and over again under different conditions, and our many measurements are all measuring different aspects of the same thing. That’s why, unlike in the Standard Model, we can’t separate out assumptions about the shape of the universe from assumptions about what it contains. Dark energy and dark matter are on the same footing as the distribution of fluctuations, homogeneity, and all those shape-related properties: part of one model that gets fit together as a whole.

And so while both the Standard Model and ΛCDM are successful, that success means something different. It’s hard to imagine that we find new evidence and discover that electrons don’t exist, or quarks don’t exist. But we may well find out that dark energy doesn’t exist, or that the universe has a radically different shape. The statistical success of ΛCDM is impressive, and it means any alternative has a high bar to clear. But it doesn’t have to mean rethinking everything the way an alternative to the Standard Model would.

Cool Asteroid News

Did you hear about the asteroid?

Which one?

You might have heard that an asteroid named 2024 YR4 is going to come unusually close to the Earth in 2032. When it first made the news, astronomers estimated a non-negligible chance of it hitting us: about three percent. That’s small enough that they didn’t expect it to happen, but large enough to plan around it: people invest in startups with a smaller chance of succeeding. Still, people were fairly calm about this one, and there are a couple of good reasons:

  • First, this isn’t a “kill the dinosaurs” asteroid, it’s much smaller. This is a “Tunguska Event” asteroid. Still pretty bad if it happens near a populated area, but not the end of life as we know it.
  • We know about it far in advance, and space agencies have successfully deflected an asteroid before, for a test. If it did pose a risk, it’s quite likely they’d be able to change its path so it misses the Earth instead.
  • It’s tempting to think of that 3% chance as like a roll of a hundred-sided die: the asteroid is on a random path, roll 1 to 3 and it will hit the Earth, roll higher and it won’t, and nothing we do will change that. In reality, though, that 3% was a measure of our ignorance. As astronomers measure the asteroid more thoroughly, they’ll know more and more about its path, and each time they figure something out, they’ll update the number. (There’s a toy sketch of this kind of updating just after this list.)
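Here’s a toy version of that updating, just to make the idea concrete. The numbers are made up, and real impact monitoring tracks a full uncertainty region along the asteroid’s orbit rather than a one-dimensional bell curve, but the logic is the same: the “impact probability” is just how much of our uncertainty overlaps the Earth.

```python
import random

EARTH_RADIUS_KM = 6371  # call it a "hit" if the miss distance is within one Earth radius

def impact_probability(predicted_miss_km, uncertainty_km, samples=200_000):
    """Toy model: the asteroid's true miss distance is Gaussian around our
    best estimate.  The impact probability is the fraction of that
    uncertainty that overlaps the Earth."""
    hits = sum(
        abs(random.gauss(predicted_miss_km, uncertainty_km)) < EARTH_RADIUS_KM
        for _ in range(samples)
    )
    return hits / samples

# Made-up numbers, purely to illustrate the idea:
# early on, the best estimate misses the Earth but the error bars are huge...
print(f"Rough early orbit: {impact_probability(100_000, 90_000):.2%}")
# ...and as follow-up observations shrink the uncertainty, the probability
# collapses toward zero (or, if we were unlucky, toward one).
print(f"Refined orbit:     {impact_probability(100_000, 30_000):.2%}")
```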

And indeed, the number has been updated. In just the last few weeks, the estimated probability of impact has dropped from 3% to a few thousandths of a percent, as more precise observations clarified the asteroid’s path. There’s still a non-negligible chance it will hit the moon (about two percent at the moment), but the asteroid is far too small to do more than make a big flashy crater.

It’s kind of fun to think that there are people out there who systematically track these things, with a plan to deal with them. It feels like something out of a sci-fi novel.

But I find the other asteroid more fun.

In 2020, a probe sent by NASA visited an asteroid named Bennu, taking samples which it carefully packaged and brought back to Earth. Now, scientists have analyzed the samples, revealing several moderately complex chemicals that have an important role in life on Earth, like amino acids and the bases that make up RNA and DNA. Interestingly, while on Earth these molecules all have the same “handedness“, the molecules on Bennu are divided about 50/50. Something similar was seen on samples retrieved from another asteroid, so this reinforces the idea that amino acids and nucleotide bases in space do not have a preferred handedness.

I first got into physics for the big deep puzzles, the ones that figure into our collective creation story. Where did the universe come from? Why are its laws the way they are? Over the ten years since I got my PhD, it’s felt like the answers to these questions have gotten further and further away, with new results serving mostly to rule out possible explanations with greater and greater precision.

Biochemistry has its own deep puzzles figuring into our collective creation story, and the biggest one is abiogenesis: how life formed from non-life. What excites me about these observations from Bennu is that it represents real ongoing progress on that puzzle. By glimpsing a soup of ambidextrous molecules, Bennu tells us something about how our own molecules’ handedness could have developed, and rules out ways that it couldn’t have. In physics, if we could see an era of the universe when there were equal amounts of matter and antimatter, we’d be ecstatic: it would confirm that the imbalance between matter and antimatter is a real mystery, and show us where we need to look for the answer. I love that researchers on the origins of life have reason right now to be similarly excited.

At Ars Technica Last Week, With a Piece on How Wacky Ideas Become Big Experiments

I had a piece last week at Ars Technica about the path ideas in physics take to become full-fledged experiments.

My original idea for the story was a light-hearted short news piece. A physicist at the University of Kansas, Steven Prohira, had just posted a proposal for wiring up a forest to detect high-energy neutrinos, using the trees like giant antennas.

Chatting to experts, what at first seemed silly started feeling like a hook for something more. Prohira has a strong track record, and the experts I talked to took his idea seriously. They had significant doubts, but I was struck by how answerable those doubts were, how rather than dismissing the whole enterprise they had in mind a list of questions one could actually test. I wrote a blog post laying out that impression here.

The editor at Ars was interested, so I dug deeper. Prohira’s story became a window on a wider-ranging question: how do experiments happen? How does a scientist convince the community to work on a project, and the government to fund it? How do ideas get tested before these giant experiments get built?

I tracked down researchers from existing experiments and got their stories. They told me how detecting particles from space takes ingenuity, with wacky ideas involving the natural world being surprisingly common. They walked me through tales of prototypes and jury-rigging and feasibility studies and approval processes.

The highlights of those tales ended up in the piece, but there was a lot I couldn’t include. In particular, I had a long chat with Sunil Gupta about the twists and turns taken by the GRAPES experiment in India. Luckily for you, some of the most interesting stories have already been covered, for example their measurement of the voltage of a thunderstorm or repurposing used building materials to keep costs down. I haven’t yet found his story about stirring wavelength-shifting chemicals all night using a propeller mounted on a power drill, but I suspect it’s out there somewhere. If not, maybe it can be the start of a new piece!

A Tale of Two Experiments

Before I begin, two small announcements:

First: I am now on bluesky! Instead of having a separate link in the top menu for each social media account, I’ve changed the format so now there are social media buttons in the right-hand sidebar, right under the “Follow” button. Currently, they cover tumblr, twitter, and bluesky, but there may be more in future.

Second, I’ve put a bit more technical advice on my “Open Source Grant Proposal” post, so people interested in proposing similar research can have some ideas about how best to pitch it.

Now, on to the post:


Gravitational wave telescopes are possibly the most exciting research program in physics right now. Big, expensive machines with more on the way in the coming decades, gravitational wave telescopes need both precise theoretical predictions and high-quality data analysis. For some, gravitational wave telescopes have the potential to reveal genuinely new physics, to probe deviations from general relativity that might be related to phenomena like dark matter, though so far no such deviations have been conclusively observed. In the meantime, they’re teaching us new consequences of known physics. For example, the unusual population of black holes observed by LIGO has motivated those who model star clusters to consider processes in which the motion of three stars or black holes is related to each other, discovering that these processes are more important than expected.

Particle colliders are probably still exciting to the general public, but for many there is a growing sense of fatigue and disillusionment. Current machines like the LHC are big and expensive, and proposed future colliders would be even costlier and take decades to come online, in addition to requiring a huge amount of effort from the community in terms of precise theoretical predictions and data analysis. Some argue that colliders still might uncover genuinely new physics, deviations from the standard model that might explain phenomena like dark matter, but as no such deviations have yet been conclusively observed people are increasingly skeptical. In the meantime, most people working on collider physics are focused on learning new consequences of known physics. For example, by comparing observed results with theoretical approximations, people have found that certain high-energy processes usually left out of calculations are actually needed to get a good agreement with the data, showing that these processes are more important than expected.

…ok, you see what I did there, right? Was that fair?

There are a few key differences, with implications to keep in mind:

First, collider physics is significantly more expensive than gravitational wave physics. LIGO took about $300 million to build and spends about $50 million a year. The LHC took about $5 billion to build and costs $1 billion a year to run. That cost still puts both well below several other government expenses that you probably consider frivolous (please don’t start arguing about which ones in the comments!), but it does mean collider physics demands a bit of a stronger argument.

Second, the theoretical motivation to expect new fundamental physics out of LIGO is generally considered much weaker than for colliders. A large part of the theoretical physics community thought that they had a good argument why they should see something new at the LHC. In contrast, most theorists have been skeptical of the kinds of modified gravity theories that have dramatic enough effects that one could measure them with gravitational wave telescopes, with many of these theories having other pathologies or inconsistencies that made people wary.

Third, the general public finds astrophysics cooler than particle physics. Somehow, telling people “pairs of black holes collide more often than we thought because sometimes a third star in the neighborhood nudges them together” gets people much more excited than “pairs of quarks collide more often than we thought because we need to re-sum large logarithms differently”, even though I don’t think there’s a real “principled” difference between them. Neither reveals new laws of nature, both are upgrades to our ability to model how real physical objects behave, neither is useful to know for anybody living on Earth in the present day.

With all this in mind, my advice to gravitational wave physicists is to try, as much as possible, not to lean on stories about dark matter and modified gravity. You might learn something, and it’s worth occasionally mentioning that. But if you don’t, you run a serious risk of disappointing people. And you have such a big PR advantage when you lean on new consequences of bog-standard GR that those results really should get the bulk of the news coverage if you want to keep the public on your side.

Generalize

What’s the difference between a model and an explanation?

Suppose you cared about dark matter. You observe that things out there in the universe don’t quite move the way you would expect. There is something, a consistent something, that changes the orbits of galaxies and the bending of light, the shape of the early universe and the spiderweb of super-clusters. How do you think about that “something”?

One option is to try to model the something. You want to use as few parameters as possible, so that your model isn’t just an accident, but will actually work to predict new data. You want to describe how it changes gravity, on all the scales you care about. Your model might be very simple, like the original MOND, and just describe a modification to Newtonian gravity, since you typically only need Newtonian gravity to model many of these phenomena. (Though MOND itself can’t account for all the things attributed to dark matter, so it had to be modified.) You might have something slightly more complicated, proposing some “matter” but not going into much detail about what it is, just enough for your model to work.

If you were doing engineering, a model like that is a fine thing to have. If you were building a spaceship and wanted to figure out what its destination would look like after a long journey, you’d need a model of dark matter like this, one that predicted how galaxies move and light bends, to do the job.

But a model like that isn’t an explanation. And the reason why is that explanations generalize.

In practice, you often just need Newtonian gravity to model how galaxies move. But if you want to model more dramatic things, the movement of the whole universe or the area around a black hole, then you need general relativity as well. So to generalize to those areas, you can’t just modify Newtonian gravity. You need an explanation, one that tells you not just how Newton’s equations change, but how Einstein’s equations change.

In practice, you can get by with a simple model of dark matter, one that doesn’t tell you very much, and just adds a new type of matter. But if you want to model quantum gravity, you need to know how this new matter interacts, not just at baseline with gravity, but with everything else. You need to know how the new matter is produced, whether it gets its mass from the Higgs boson or from something else, whether it falls into the same symmetry groups as the Standard Model or totally new ones, how it arises from tangled-up strings and multi-dimensional membranes. You need not just a model, but an explanation, one that tells you not just roughly what kind of particle you need, but how it changes our models of particle physics overall.

Physics, at its best, generalizes. Newton’s genius wasn’t that he modeled gravity on Earth, but that he unified it with gravity in the solar system. By realizing that gravity was universal, he proposed an explanation that led to much more progress than the models of predecessors like Kepler. Later, Einstein’s work on general relativity led to similar progress.

We can’t always generalize. Sometimes, we simply don’t know enough. But if we’re not doing engineering, then a model alone isn’t what we’re after, and generalizing should, at least in the long run, be our guiding hope.

LHC Black Hole Reassurance: The Professional Version

A while back I wrote a post trying to reassure you that the Large Hadron Collider cannot create a black hole that could destroy the Earth. If you’re the kind of person who is worried about this kind of thing, you’ve probably heard a variety of arguments: that it hasn’t happened yet, despite the LHC running for quite some time, that it didn’t happen before the LHC with cosmic rays of comparable energy, and that a black hole that small would quickly decay due to Hawking radiation. I thought it would be nice to give a different sort of argument, a back-of-the-envelope calculation you can try out yourself, showing that even if a black hole was produced using all of the LHC’s energy and fell directly into the center of the Earth, and even if Hawking radiation didn’t exist, it would still take longer than the lifetime of the universe to cause any detectable damage. Modeling the black hole as falling through the Earth and just slurping up everything that falls into its event horizon, it wouldn’t even double in size before the stars burn out.
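If you want to try the envelope yourself, here’s roughly the kind of estimate I mean, with my own toy numbers: ordinary four-dimensional gravity, no Hawking radiation, and the Earth treated as a smooth fluid (exactly the simplification the rest of this post walks back):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 / (kg s^2)
c = 2.998e8            # speed of light, m/s
eV = 1.602e-19         # one electron-Volt in joules

# Pessimistic toy setup: a black hole carrying the full 14 TeV of an LHC
# collision, falling through the Earth and sweeping up everything that
# crosses its (ordinary, four-dimensional) event horizon.
m_bh = 14e12 * eV / c**2              # black hole mass, ~2.5e-23 kg
r_s = 2 * G * m_bh / c**2             # Schwarzschild radius, ~4e-50 m

rho_earth = 5500                      # mean density of the Earth, kg/m^3
v = 8000                              # rough speed as it passes through the Earth, m/s

accretion_rate = rho_earth * math.pi * r_s**2 * v        # kg swallowed per second
doubling_time_years = m_bh / accretion_rate / 3.15e7

print(f"Horizon radius:         {r_s:.1e} m")
print(f"Accretion rate:         {accretion_rate:.1e} kg/s")
print(f"Time to double in mass: {doubling_time_years:.1e} years")
print(f"Age of the universe:    1.4e10 years, for comparison")
```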

That calculation was extremely simple by physics standards. As it turns out, it was too simple. A friend of mine started thinking harder about the problem, and dug up this paper from 2008: Astrophysical implications of hypothetical stable TeV-scale black holes.

Before the LHC even turned on, the experts were hard at work studying precisely this question. The paper has two authors, Steve Giddings and Michelangelo Mangano. Giddings is an expert on the problem of quantum gravity, while Mangano is an expert on LHC physics, so the two are exactly the dream team you’d ask for to answer this question. Like me, they pretend that black holes don’t decay due to Hawking radiation, and pretend that one falls straight from the LHC to the center of the Earth, for the most pessimistic possible scenario.

Unlike me, but like my friend, they point out that the Earth is not actually a uniform sphere of matter. It’s made up of particles: quarks arranged into nucleons arranged into nuclei arranged into atoms. And a black hole that hits a nucleus will probably not just slurp up an event horizon-sized chunk of the nucleus: it will slurp up the whole nucleus.

This in turn means that the black hole starts out growing much faster. Eventually, it slows down again: once it’s bigger than an atom, it starts gobbling up atoms a few at a time, until eventually it is back to slurping up a cylinder of the Earth’s material as it passes through.

But an atom-sized black hole will grow faster than an LHC-energy-sized black hole. How much faster is estimated in the Giddings and Mangano paper, and it depends on the number of dimensions. For eight dimensions, we’re safe. For fewer, they need new arguments.

Wait a minute, you might ask, aren’t there only four dimensions? Is this some string theory nonsense?

Kind of, yes. In order for the LHC to produce black holes, gravity would need to have a much stronger effect than we expect on subatomic particles. That requires something weird, and the most plausible such weirdness people considered at the time were extra dimensions. With extra dimensions of the right size, the LHC might have produced black holes. It’s that kind of scenario that Giddings and Mangano are checking: they don’t know of a plausible way for black holes to be produced at the LHC if there are just four dimensions.

For fewer than eight dimensions, though, they have a problem: the back-of-the-envelope calculation suggests black holes could actually grow fast enough to cause real damage. Here, they fall back on the other type of argument: if this could happen, would it have happened already? They argue that, if the LHC could produce black holes in this way, then cosmic rays could produce black holes when they hit super-dense astronomical objects, such as white dwarfs and neutron stars. Those black holes would eat up the white dwarfs and neutron stars, in the same way one might be worried they could eat up the Earth. But we can observe that white dwarfs and neutron stars do in fact exist, and typically live much longer than they would if they were constantly being eaten by miniature black holes. So we can conclude that any black holes like this don’t exist, and we’re safe.

If you’ve got a smattering of physics knowledge, I encourage you to read through the paper. They consider a lot of different scenarios, much more than I can summarize in a post. I don’t know if you’ll find it reassuring, since they may not cover whatever you happen to be worried about. But it’s a lot of fun seeing how the experts handle the problem.