Tag Archives: philosophy of science

Bonus Info For “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe”

I had a piece in Quanta Magazine recently, about a tricky paradox that’s puzzling quantum gravity researchers, and some early hints at its resolution.

The paradox comes from trying to describe “closed universes”, which are universes where it is impossible to reach the edge, even if you had infinite time to do it. This could be because the universe wraps around like a globe, or because the universe is expanding so fast no traveler could ever reach an edge. Recently, theoretical physicists have been trying to describe these closed universes, and have noticed a weird issue: each such universe appears to have only one possible quantum state. In general, quantum systems have more possible states the more complex they are, so for a whole universe to have only one possible state is a very strange thing, implying a bizarrely simple universe. Most worryingly, our universe may well be closed. Does that mean that secretly, the real world has only one possible state?

There is a possible solution that a few groups are playing around with. The argument that a closed universe has only one state depends on the fact that nothing inside a closed universe can reach the edge. But if nothing can reach the edge, then trying to observe the universe as a whole from outside would tell you nothing of use. Instead, any reasonable measurement would have to come from inside the universe. Such a measurement introduces a new kind of “edge of the universe”, this time not in the far distance, but close by: the edge between an observer and the rest of the world. And when you add that edge to the calculations, the universe stops being closed, and has all the many states it ought to.

This was an unusually tricky story for me to understand. I narrowly avoided several misconceptions, and I’m still not sure I managed to dodge all of them. Likewise, it was unusually tricky for the editors to understand, and I suspect it was especially tricky for Quanta’s social media team to understand.

It was also, quite clearly, tricky for the readers to understand. So I thought I would use this post to clear up a few misconceptions. I’ll say a bit more about what I learned investigating this piece, and try to clarify what the result does and does not mean.

Q: I’m confused about the math terms you’re using. Doesn’t a closed set contain its boundary?

A: Annoyingly, what physicists mean by a closed universe is a bit different from what mathematicians mean by a closed manifold, which is in turn more restrictive than what mathematicians mean by a closed set. One way to think about this that helped me: in an open set, you can take a limit that takes you out of the set, which is like being able to describe a (possibly infinite) path that takes you “out of the universe”. A closed set doesn’t allow that: every path, no matter how long, still ends up inside the same universe.
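To make that limit intuition concrete, here's a textbook-style illustration (my own example, not one from the physics papers): in the open interval $(0,1)$, a sequence can converge to a point outside the set, while on the circle $S^1$, a closed manifold, limits have nowhere else to go.

```latex
% Open set: a convergent sequence can escape.
x_n = \tfrac{1}{n} \in (0,1), \qquad \lim_{n \to \infty} x_n = 0 \notin (0,1).
% Closed manifold: since S^1 is closed in \mathbb{R}^2, any limit of points
% on the circle, if it exists, is again on the circle:
\theta_n \in S^1,\quad \theta_n \to \theta \;\implies\; \theta \in S^1.
```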

Q: So a bunch of string theorists did a calculation and got a result that doesn’t make sense, a one-state universe. What if they’re just wrong?

A: Two things:

First, the people I talked to emphasized that it’s pretty hard to wiggle out of the conclusion. It’s not just a matter of saying you don’t believe in string theory and that’s that. The argument is based in pretty fundamental principles, and it’s not easy to propose a way out that doesn’t mess up something even more important.

That’s not to say it’s impossible. One of the people I interviewed, Henry Maxfield, thinks that some of the recent arguments are misunderstanding how to use one of their core techniques, in a way that accidentally presupposes the one-state universe.

But even he thinks that the bigger point, that closed universes have only one state, is probably true.

And that’s largely due to a second reason: there are older arguments that back the conclusion up.

One of the oldest dates back to John Wheeler, a physicist famous for both deep musings about the nature of space and time and coining evocative terms like “wormhole”. In the 1960’s, Wheeler argued that, in a theory where space and time can be curved, one should think of a system’s state as including every configuration it can evolve into over time, since it can be tricky to specify a moment “right now”. In a closed universe, you could expect a quantum system to explore every possible configuration…meaning that such a universe should be described by only one state.

Later, physicists studying holography ran into a similar conclusion. They kept noticing systems in quantum gravity where you can describe everything that happens inside by what happens on the edges. If there are no edges, that seems to suggest that in some sense there is nothing inside. Apparently, Lenny Susskind had a slide at the end of talks in the 90’s where he kept bringing up this point.

So even if the modern arguments are wrong, and even if string theory is wrong…it still looks like the overall conclusion is right.

Q: If a closed universe has only one state, does that make it deterministic, and thus classical?

A: Oh boy…

So, on the one hand, there is an idea, which I think also goes back to Wheeler, that asks: “if the universe as a whole has a wavefunction, how does it collapse?” One possibility is that the universe has only one state, so that nobody is needed to collapse the wavefunction: it is already in a definite state.

On the other hand, a universe with only one state does not actually look much like a classical universe. Our universe looks classical largely due to a process called decoherence, where small quantum systems interact with big quantum systems with many states, diluting quantum effects until the world looks classical. If there is only one state, there are no big systems to interact with, and the world has large quantum fluctuations that make it look very different from a classical universe.

Q: How, exactly, are you defining “observer”?

A: A few commenters helpfully chimed in to talk about how physics models observers as “witness” systems, objects that preserve some record of what happens to them. A simple example is a ball sitting next to a bowl: if you find the ball in the bowl later, it means something moved it. This process, preserving what happens and making it more obvious, is in essence how physicists think about observers.

However, this isn’t the whole story in this case. Here, different research groups introducing observers are doing it in different ways. That’s, in part, why none of them are confident they have the right answer.

One of the approaches describes an observer in terms of its path through space and time, its worldline. Instead of a detailed witness system with specific properties, all they do is pick out a line and say “the observer is there”. Identifying that line, and declaring it different from its surroundings, seems to be enough to recover the complexity the universe ought to have.

The other approach treats the witness system in a bit more detail. We usually treat an observer in quantum mechanics as infinitely large compared to the quantum systems they measure. This approach instead gives the observer a finite size, and uses that to estimate how far their experience will be from classical physics.

Crucially, neither approach is a matter of defining a physical object and looking for it in the theory. Given a collection of atoms, neither team can tell you what is an observer, and what isn’t. Instead, in each approach, the observer is arbitrary: a choice, made by us when we use quantum mechanics, of what to count as an observer and what to count as the rest of the world. That choice can be made in many different ways, and each approach tries to describe what happens when you change that choice.

This is part of what makes this approach uncomfortable to some more philosophically-minded physicists: it treats observers not as a predictable part of the physical world, but as a mathematical description used to make statements about the world.

Q: If these ideas come from AdS/CFT, which is an open universe, how do you use them to describe a closed universe?

A: While more examples emerged later, initially theorists were thinking about two types of closed universes:

First, think about a black hole. You may have heard that when you fall into a black hole, you watch the whole universe age away before your eyes, due to the dramatic differences in the passage of time caused by the extreme gravity. Once you’ve seen the outside universe fade away, you are essentially in a closed universe of your own. The outside world will never affect you again, and you are isolated, with no path to the outside. These black hole interiors are one of the examples theorists looked at.

The other example involves so-called “baby universes”. When physicists use quantum mechanics to calculate the chance of something happening, they have to add up every possible series of events that could have happened in between. For quantum gravity, this includes every possible arrangement of space and time. This includes arrangements with different shapes, including ones with tiny extra “baby universes” which branch off from the main universe and return. Universes with these “baby universes” are another example that theorists considered to understand closed universes.

Q: So wait, are you actually saying the universe needs to be observed to exist? That’s ridiculous, didn’t the universe exist long before humans existed to observe it? Is this some sort of Copenhagen Interpretation thing, or that thing called QBism?

A: You’re starting to ask philosophical questions, and here’s the thing:

There are physicists who spend their time thinking about how to interpret quantum mechanics. They talk to philosophers, and try to figure out how to answer these kinds of questions in a consistent and systematic way, keeping track of all the potential pitfalls and implications. They’re part of a subfield called “quantum foundations”.

The physicists whose work I was talking about in that piece are not those people.

Of the people I interviewed, one of them, Rob Myers, probably has lunch with quantum foundations researchers on occasion. The others, based at places like MIT and the IAS, probably don’t even do that.

Instead, these are people trying to solve a technical problem, people whose first inclination is to put philosophy to the side, and “shut up and calculate”. These people did a calculation that ought to have worked, checking how many quantum states they could find in a closed universe, and found a weird and annoying answer: just one. Trying to solve the problem, they’ve done technical calculation work, introducing a path through the universe, or a boundary around an observer, and seeing what happens. While some of them may have their own philosophical leanings, they’re not writing works of philosophy. Their papers don’t talk through the philosophical implications of their ideas in all that much detail, and they may well have different thoughts as to what those implications are.

So while I suspect I know the answers they would give to some of these questions, I’m not sure.

Instead, how about I tell you what I think?

I’m not a philosopher; I can’t promise my views will be consistent, or that they won’t suffer from some pitfall. But unlike other people’s views, I can tell you what my own views are.

To start off: yes, the universe existed before humans. No, there is nothing special about our minds, we don’t have psychic powers to create the universe with our thoughts or anything dumb like that.

What I think is that, if we want to describe the world, we ought to take lessons from science.

Science works. It works for many reasons, but two important ones stand out.

Science works because it leads to technology, and it leads to technology because it guides actions. It lets us ask, if I do this, what will happen? What will I experience?

And science works because it lets people reach agreement. It lets people reach agreement because it lets us ask, if I observe this, what do I expect you to observe? And if we agree, we can agree on the science.

Ultimately, if we want to describe the world with the virtues of science, our descriptions need to obey this rule: they need to let us ask “what if?” questions about observations.

That means that science cannot avoid an observer. It can often hide the observer, place them far away and give them an infinite mind to behold what they see, so that one observer is essentially the same as another. But we shouldn’t expect to always be able to do this. Sometimes, we can’t avoid saying something about the observer: about where they are, or how big they are, for example.

These observers, though, don’t have to actually exist. We should be able to ask “what if” questions about other observers, and that means we should be able to dream up fictional ones, and ask, if they existed, what would they see? We can imagine observers swimming in the quark-gluon plasma after the Big Bang, or sitting inside a black hole’s event horizon, or outside our visible universe. The existence of the observer isn’t a physical requirement, but a methodological one: a restriction on how we can make useful, scientific statements about the world. Our theory doesn’t have to explain where observers “come from”; indeed, it can’t and shouldn’t. The observers aren’t part of the physical world being described, they’re a precondition for us to describe that world.

Is this the Copenhagen Interpretation? I’m not a historian, but I don’t think so. The impression I get is that there was no real Copenhagen Interpretation, that Bohr and Heisenberg, while more deeply interested in philosophy than many physicists today, didn’t actually think things through in enough depth to have a perspective you can name and argue with.

Is this QBism? I don’t think so. It aligns with some things QBists say, but they say a lot of silly things as well. It’s probably some kind of instrumentalism, for what that’s worth.

Is it logical positivism? I’ve been told logical positivists would argue that the world outside the visible universe does not exist. If that’s true, I’m not a logical positivist.

Is it pragmatism? Maybe? What I’ve seen of pragmatism definitely appeals to me, but I’ve seen my share of negative characterizations as well.

In the end, it’s an idea about what’s useful and what’s not, about what moves science forward and what doesn’t. It tries to avoid being preoccupied with unanswerable questions, and as much as possible to cash things out in testable statements. If I do this, what happens? What if I did that instead?

The results I covered for Quanta, to me, show that the observer matters on a deep level. That isn’t a physical statement, it isn’t a mystical statement. It’s a methodological statement: if we want to be scientists, we can’t give up on the observer.

When Your Theory Is Already Dead

Occasionally, people try to give “even-handed” accounts of crackpot physics, like people who claim to have invented anti-gravity devices. These accounts don’t go so far as to say that the crackpots are right, and will freely point out plausible doubts about the experiments. But at the end of the day, they’ll conclude that we still don’t really know the answer, and perhaps the next experiment will go differently. More tests are needed.

For someone used to engineering, or to sciences without much theory behind them, this might sound pretty reasonable. Sure, any one test can be critiqued. But you can’t prove a negative: you can’t rule out a future test that might finally see the effect.

That’s all well and good…if you have no idea what you’re doing. But these people, just like anyone else who grapples with physics, aren’t just proposing experiments. They’re proposing theories: models of the world.

And once you’ve got a theory, you don’t just have to care about future experiments. You have to care about past experiments too. Some theories…are already dead.

The “You’re already dead” scene from the anime Fist of the North Star
Warning: this is a link to TVTropes; enter only if you have lots of time on your hands

To get a little more specific, let’s talk about antigravity proposals that use scalar fields.

Scalar fields seem to have some sort of mysticism attached to them in the antigravity crackpot community, but for physicists they’re just the simplest possible type of field, the most obvious thing anyone would have proposed once they were comfortable enough with the idea of fields in the first place. We know of one, the Higgs field, which gives rise to the Higgs boson.

We also know that if there are any more, they’re pretty subtle…and as a result, pretty useless.

We know this because of a wide variety of what are called “fifth-force experiments”, tests and astronomical observations looking for an undiscovered force that, like gravity, reaches out to long distances. Many of these experiments are quite general, the sort of thing that would pick up a wide variety of scalar fields. And so far, none of them have seen anything.

That “so far” doesn’t mean “wait and see”, though. Each time physicists run a fifth-force experiment, they establish a limit. They say, “a fifth force cannot be like this“. It can’t be this strong, it can’t operate on these scales, it can’t obey this model. Each experiment doesn’t just say “no fifth force yet”, it says “no fifth force of this kind, at all”.
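To make “this strong” and “these scales” concrete: fifth-force limits are conventionally quoted against a Yukawa-type modification of Newtonian gravity, with a strength relative to gravity and a range. The sketch below is my own illustration with made-up numbers, not taken from any particular experiment, but it shows why a short-range scalar force could be blatant in a lab and utterly invisible to astronomy.

```python
import math

# Fifth-force searches are typically phrased as limits on a Yukawa-type
# modification of the Newtonian potential:
#     V(r) = -(G * M * m / r) * (1 + alpha * exp(-r / lam))
# alpha is the new force's strength relative to gravity; lam is its range.

def yukawa_deviation(r, alpha, lam):
    """Fractional deviation from pure Newtonian gravity at separation r (meters)."""
    return alpha * math.exp(-r / lam)

# A hypothetical gravitational-strength scalar (alpha = 1) with a 1-meter range:
lab = yukawa_deviation(r=5.0, alpha=1.0, lam=1.0)      # tabletop distances
moon = yukawa_deviation(r=3.8e8, alpha=1.0, lam=1.0)   # Earth-Moon distance

print(f"at 5 m: {lab:.4f}")    # ~0.0067, a roughly 0.7% distortion
print(f"at the Moon: {moon}")  # exponentially suppressed, underflows to 0.0
```

Each null experiment rules out a region of the (alpha, lam) plane, and a newly proposed scalar-field model has to land outside every previously excluded region at once.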

When you write down a theory, if you’re not careful, you might find it has already been ruled out by one of these experiments. This happens to physicists all the time. Physicists want to use scalar fields to understand the expansion of the universe, and to think about dark matter. And frequently, a model one physicist proposed will be ruled out, not by new experiments, but by someone doing the math and realizing that the model is already contradicted by a pre-existing fifth-force experiment.

So can you prove a negative? Sort of.

If you never commit to a model, if you never propose an explanation, then you can never be disproven, you can always wait for the experiment of your dreams to come true. But if you have any model, any idea, any explanation at all, then your explanation will have implications. Those implications may kill your theory in a future experiment. Or, they may have already killed it.

Technology as Evidence

How much can you trust general relativity?

On the one hand, you can read through a lovely Wikipedia article full of tests, explaining just how far and how precisely scientists have pushed their knowledge of space and time. On the other hand, you can trust GPS satellites.

As many of you may know, GPS wouldn’t work if we didn’t know about general relativity. In order for the GPS in your phone to know where you are, it has to compare signals from different satellites, each giving the location and time the signal was sent. To get an accurate result, the times measured on those satellites have to be adjusted: because of the weaker gravity they experience, time moves more quickly for them than for us down on Earth.
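The size of that adjustment is easy to estimate with first-order formulas. This is a back-of-the-envelope sketch using rounded constants, not the full relativistic model real GPS systems use, but it lands on the famous answer: satellite clocks gain roughly 38 microseconds per day.

```python
# Estimate the daily clock drift of a GPS satellite relative to the ground:
# gravitational time dilation makes the higher clock run fast, while
# special-relativistic time dilation makes the moving clock run slow.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth radius, m
r_sat = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

# Fractional rate differences, to first order in the weak-field expansion:
grav = G * M / c**2 * (1 / R_earth - 1 / r_sat)  # satellite clock runs fast
v_squared = G * M / r_sat                        # circular orbit speed squared
kinematic = -v_squared / (2 * c**2)              # satellite clock runs slow

seconds_per_day = 86400
drift_us = (grav + kinematic) * seconds_per_day * 1e6
print(f"net drift: {drift_us:.1f} microseconds per day")  # roughly +38.5
```

Left uncorrected, a 38-microsecond timing error per day translates to position errors accumulating at several kilometers per day, which is why the satellite clocks are deliberately tuned to run slightly slow before launch.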

In a sense, general relativity gets tested every minute of every day, on every phone in the world. That’s pretty trustworthy! Any time that science is used in technology, it gets tested in this way. The ideas we can use are ideas that have shown they can perform, ideas which do what we expect again and again and again.

In another sense, though, GPS is a pretty bad test of general relativity. It tests one of general relativity’s simplest consequences, based on the Schwarzschild metric for how gravity behaves near a large massive object, and not to an incredibly high degree of precision. Gravity could still violate general relativity in a huge number of other ways, and GPS would still function. That’s why the other tests are valuable: if you want to be sure general relativity doesn’t break down, you need to test it under conditions that GPS doesn’t cover, and to higher precision.

Once you know to look for it, these layers of tests come up everywhere. You might see the occasional article talking about tests of quantum gravity. The tests they describe are very specific, testing a very general and basic question: does quantum mechanics make sense at all in a gravitational world? In contrast, most scientists who research quantum gravity don’t find that question very interesting: if gravity breaks quantum mechanics in a way those experiments could test, it’s hard to imagine it not leading to a huge suite of paradoxes. Instead, quantum gravity researchers tend to be interested in deeper problems with quantum gravity, distinctions between theories that don’t dramatically break with our existing ideas, but that because of that are much harder to test.

The easiest tests are important, especially when they come from technology: they tell us, on a basic level, what we can trust. But we need the hard tests too, because those are the tests that are most likely to reveal something new, and bring us to a new level of understanding.

Value in Formal Theory Land

What makes a physics theory valuable?

You may think that a theory’s job is to describe reality, to be true. If that’s the goal, we have a whole toolbox of ways to assess its value. We can check if it makes predictions and if those predictions are confirmed. We can assess whether the theory can cheat to avoid the consequences of its predictions (falsifiability) and whether its complexity is justified by the evidence (Occam’s razor, and statistical methods that follow from it).

But not every theory in physics can be assessed this way.

Some theories aren’t even trying to be true. Others may hope to have evidence some day, but are clearly not there yet, either because the tests are too hard or the theory hasn’t been fleshed out enough.

Some people specialize in theories like these. We sometimes say they’re doing “formal theory”, working with the form of theories rather than whether they describe the world.

Physics isn’t mathematics. Work in formal theory is still supposed to help describe the real world. But that help might take a long time to arrive. Until then, how can formal theorists know which theories are valuable?

One option is surprise. After years tinkering with theories, a formal theorist will have some idea of which sorts of theories are possible and which aren’t. Some of this is intuition and experience, but sometimes it comes in the form of an actual “no-go theorem”, a proof that a specific kind of theory cannot be consistent.

Intuition and experience can be wrong, though. Even no-go theorems are fallible, both because they have assumptions which can be evaded and because people often assume they go further than they do. So some of the most valuable theories are valuable because they are surprising: because they do something that many experienced theorists think is impossible.

Another option is usefulness. Here I’m not talking about technology: these are theories that may or may not describe the real world and can’t be tested in feasible experiments, they’re not being used for technology! But they can certainly be used by other theorists. They can show better ways to make predictions from other theories, or better ways to check other theories for contradictions. They can be a basis that other theories are built on.

I remember, back before my PhD, hearing about the consistent histories interpretation of quantum mechanics. I hadn’t heard much about it, but I did hear that it allowed calculations that other interpretations didn’t. At the time, I thought this was an obvious improvement: surely, if you can’t choose based on observations, you should at least choose an interpretation that is useful. In practice, it doesn’t quite live up to the hype. The things it allows you to calculate are things other interpretations would say don’t make sense to ask, questions like “what was the history of the universe” instead of observations you can test like “what will I see next?” But still, being able to ask new questions has proven useful to some, and kept a community interested.

Often, formal theories are judged on vaguer criteria. There’s a notion of explanatory power, of making disparate effects more intuitively part of the same whole. There’s elegance, or beauty, which is the theorist’s Occam’s razor, favoring ideas that do more with less. And there’s pure coolness, where a bunch of nerds are going to lean towards ideas that let them play with wormholes and multiverses.

But surprise, and usefulness, feel more solid to me. If you can find someone who says “I didn’t think this was possible”, then you’ve almost certainly done something valuable. And if you can’t do that, “I’d like to use this” is an excellent recommendation too.

There Is No Shortcut to Saying What You Mean

Blogger Andrew Oh-Willeke of Dispatches from Turtle Island pointed me to an editorial in Science about the phrase “scientific consensus”.

The editorial argues that by referring to conclusions like the existence of climate change or vaccine safety as “the scientific consensus”, communicators have inadvertently fanned the flames of distrust. By emphasizing agreement between scientists, the phrase “scientific consensus” leaves open the question of how that consensus was reached. More conspiracy-minded people imagine shady backroom deals and corrupt payouts, while the more realistic blame incentives and groupthink. If you disagree with “the scientific consensus”, you may thus decide the best way forward is to silence those pesky scientists.

(The link to current events is left as an exercise to the reader, to comment on elsewhere. As usual, please no explicit discussion of politics on this blog!)

Instead of “scientific consensus”, the editorial suggests another term, convergence of evidence. The idea is that by centering the evidence instead of the scientists, the phrase would make it clear that these conclusions are justified by something more than social pressures, and will remain even if the scientists promoting them are silenced.

Oh-Willeke pointed me to another blog post responding to the editorial, which has a nice discussion of how the terms were used historically, showing their popularity over time. “Convergence of evidence” was more popular in the 1950’s, with a small surge in the late 90’s and early 2000’s. “Scientific consensus” rose in the 1980’s and 90’s, lining up with a time when social scientists were skeptical about science’s objectivity and wanted to explore the social reasons why scientists come to agreement. It then fell around the year 2000, before rising again, this time used instead by professional groups of scientists to emphasize their agreement on issues like climate change.

(The blog post then goes on to try to motivate the word “consilience” instead, on the rather thin basis that “convergence of evidence” isn’t interdisciplinary enough, which seems like a pretty silly objection. “Convergence” implies coming in from multiple directions, it’s already interdisciplinary!)

I appreciate “convergence of evidence”, it seems like a useful phrase. But I think the editorial is working from the wrong perspective, in trying to argue for which terms “we should use” in the first place.

Sometimes, as a scientist or an organization or a journalist, you want to emphasize evidence. Is it “a preponderance of evidence”, most but not all? Is it “overwhelming evidence”, evidence so powerful it is unlikely to ever be defeated? Or is it a “convergence of evidence”, evidence that came in slowly from multiple paths, each independent route making a coincidence that much less likely?

But sometimes, you want to emphasize the judgement of the scientists themselves.

Sometimes when scientists agree, they’re working not from evidence but from personal experience: feelings of which kinds of research pan out and which don’t, or shared philosophies that sit deep in how they conceive their discipline. Describing physicists’ reasons for expecting supersymmetry before the LHC turned on as a convergence of evidence would be inaccurate. Describing it as having been a (not unanimous) consensus gets much closer to the truth.

Sometimes, scientists do have evidence, but as a journalist, you can’t evaluate its strength. You note some controversy, you can follow some of the arguments, but ultimately you have to be honest about how you got the information. And sometimes, that will be because it’s what most of the responsible scientists you talked to agreed on: scientific consensus.

As science communicators, we care about telling the truth (as much as we ever can, at any rate). As a result, we cannot adopt blanket rules of thumb. We cannot say, “we as a community are using this term now”. The only responsible thing we can do is to think about each individual word. We need to decide what we actually mean, to read widely and learn from experience, to find which words express our case in a way that is both convincing and accurate. There’s no shortcut to that, no formula where you just “use the right words” and everything turns out fine. You have to do the work, and hope it’s enough.

Experiments Should Be Surprising, but Not Too Surprising

People are talking about colliders again.

This year, the European particle physics community is updating its shared plan for the future, the European Strategy for Particle Physics. A raft of proposals at the end of March stirred up a wave of public debate, focused on asking what sort of new particle collider should be built, and discussing potential reasons why.

That discussion, in turn, has got me thinking about experiments, and how they’re justified.

The purpose of experiments, and of science in general, is to learn something new. The more sure we are of something, the less reason there is to test it. Scientists don’t check whether the Sun rises every day. Like everyone else, they assume it will rise, and use that knowledge to learn other things.

You want your experiment to surprise you. But to design an experiment to surprise you, you run into a contradiction.

Suppose that every morning, you check whether the Sun rises. If it doesn’t, you will really be surprised! You’ll have made the discovery of the century! That’s a really exciting payoff, grant agencies should be lining up to pay for…

Well, is that actually likely to happen, though?

The same reasons it would be surprising if the Sun stopped rising are reasons why we shouldn’t expect the Sun to stop rising. A sunrise-checking observatory has incredibly high potential scientific reward…but an absurdly low chance of giving that reward.

Ok, so you can re-frame your experiment. You’re not hoping the Sun won’t rise, you’re observing the sunrise. You expect it to rise, almost guaranteed, so your experiment has an almost guaranteed payoff.

But what a small payoff! You saw exactly what you expected, there’s no science in that!

By either criterion, the “does the Sun rise” observatory is a stupid experiment. Real experiments operate in between the two extremes. They also mix motivations. Together, that leads to some interesting tensions.

What was the purpose of the Large Hadron Collider?

There were a few things physicists were pretty sure of, when they planned the LHC. Previous colliders had measured W bosons and Z bosons, and their properties made it clear that something was missing. If you could collide protons with enough energy, physicists were pretty sure you’d see the missing piece. Physicists had a reasonably plausible story for that missing piece, in the form of the Higgs boson. So physicists could be pretty sure they’d see something, and reasonably sure it would be the Higgs boson.

If physicists expected the Higgs boson, what was the point of the experiment?

First, physicists expected to see the Higgs boson, but they didn’t expect it to have the mass that it did. In fact, they didn’t know anything about the particle’s mass, besides that it should be low enough that the collider could produce it, and high enough that it hadn’t been detected before. The specific number? That was a surprise, and an almost-inevitable one. A rare creature, an almost-guaranteed scientific payoff.

I say almost, because there was a second point. The Higgs boson didn’t have to be there. In fact, it didn’t have to exist at all. There was a much bigger potential payoff, of noticing something very strange, something much more complicated than the straightforward theory most physicists had expected.

(Many people also argued for another almost-guaranteed payoff, and that got a lot more press. People talked about finding the origin of dark matter by discovering supersymmetric particles, which they argued was almost guaranteed due to a principle called naturalness. This is very important for understanding the history…but it’s an argument that many people feel has failed, and that isn’t showing up much anymore. So for this post, I’ll leave it to the side.)

This mix, of a guaranteed small surprise and the potential for a very large surprise, was a big part of what made the LHC make sense. The mix has changed a bit for people considering a new collider, and it’s making for a rougher conversation.

Like the LHC, most of the new collider proposals have a guaranteed payoff. The LHC could measure the mass of the Higgs, these new colliders will measure its “couplings”: how strongly it influences other particles and forces.

Unlike the LHC, though, this guarantee is not a guaranteed surprise. Before building the LHC, we did not know the mass of the Higgs, and we could not predict it. Now, on the other hand, we absolutely can predict the couplings of the Higgs. We have quite precise expectations for what they should be, based on a theory that has so far proven quite successful.

We aren’t certain, of course, just like physicists weren’t certain before. The Higgs boson might have many surprising properties, things that contradict our current best theory and usher in something new. These surprises could genuinely tell us something about some of the big questions, from the nature of dark matter to the universe’s balance of matter and antimatter to the stability of the laws of physics.

But of course, they also might not. We no longer have that rare creature, a guaranteed mild surprise, to hedge in case the big surprises fail. We have guaranteed observations, and experimenters will happily tell you about them…but no guaranteed surprises.

That’s a strange position to be in. And I’m not sure physicists have figured out what to do about it.

I Have a Theory

“I have a theory,” says the scientist in the book. But what does that mean? What does it mean to “have” a theory?

First, there’s the everyday sense. When you say “I have a theory”, you’re talking about an educated guess. You think you know why something happened, and you want to check your idea and get feedback. A pedant would tell you you don’t really have a theory, you have a hypothesis. It’s “your” hypothesis, “your theory”, because it’s what you think happened.

The pedant would insist that “theory” means something else. A theory isn’t a guess, even an educated guess. It’s an explanation with evidence, tested and refined in many different contexts in many different ways, a whole framework for understanding the world, the most solid knowledge science can provide. Despite the pedant’s insistence, that isn’t the only way scientists use the word “theory”. But it is a common one, and a central one. You don’t really “have” a theory like this, though, except in the sense that we all do. These are explanations with broad consensus, things you either know of or don’t; they don’t belong to one person or another.

Except, that is, if one person takes credit for them. We sometimes say “Darwin’s theory of evolution”, or “Einstein’s theory of relativity”. In that sense, we could say that Einstein had a theory, or that Darwin had a theory.

Sometimes, though, “theory” doesn’t mean this standard official definition, even when scientists say it. And that changes what it means to “have” a theory.

For some researchers, a theory is a lens with which to view the world. This happens sometimes in physics, where you’ll find experts who want to think about a situation in terms of thermodynamics, or in terms of a technique called Effective Field Theory. It happens in mathematics, where some choose to analyze an idea with category theory not to prove new things about it, but just to translate it into category theory lingo. It’s most common, though, in the humanities, where researchers often specialize in a particular “interpretive framework”.

For some, a theory is a hypothesis, but also a pet project. There are physicists who come up with an idea (maybe there’s a variant of gravity with mass! maybe dark energy is changing!) and then focus their work around that idea. That includes coming up with ways to test whether the idea is true, showing the idea is consistent, and understanding what variants of the idea could be proposed. These ideas are hypotheses, in that they’re something the scientist thinks could be true. But they’re also ideas with many moving parts that motivate work by themselves.

Taken to the extreme, this kind of “having” a theory can go from healthy science to political bickering. Instead of viewing an idea as a hypothesis you might or might not confirm, it can become a platform to fight for. Instead of investigating consistency and proposing tests, you focus on arguing against objections and disproving your rivals. This sometimes happens in science, especially in more embattled areas, but it happens much more often with crackpots, where people who have never really seen science done can decide it’s time for their idea, right or wrong.

Finally, sometimes someone “has” a theory that isn’t a hypothesis at all. In theoretical physics, a “theory” can refer to a complete framework, even if that framework isn’t actually supposed to describe the real world. Some people spend time focusing on a particular framework of this kind, understanding its properties in the hope of getting broader insights. By becoming an expert on one particular theory, they can be said to “have” that theory.

Bonus question: in what sense do string theorists “have” string theory?

You might imagine that string theory is an interpretive framework, like category theory, with string theorists coming up with the “string version” of things others understand in other ways. This, for the most part, doesn’t happen. Without knowing whether string theory is true, there isn’t much benefit in just translating other things to string theory terms, and people for the most part know this.

For some, string theory is a pet project hypothesis. There is a community of people who try to get predictions out of string theory, or who investigate whether string theory is consistent. It’s not a huge number of people, but it exists. A few of these people can get more combative, or make unwarranted assumptions based on dedication to string theory in particular: for example, you’ll see the occasional argument that because something is difficult in string theory it must be impossible in any theory of quantum gravity. You see a spectrum in the community, from people for whom string theory is a promising project to people for whom it is a position that needs to be defended and argued for.

For the rest, the question of whether string theory describes the real world takes a back seat. They’re people who “have” string theory in the sense that they’re experts, and they use the theory primarily as a mathematical laboratory to learn broader things about how physics works. If you ask them, they might still say that they hypothesize string theory is true. But for most of these people, that question isn’t central to their work.

AI Can’t Do Science…And Neither Can Other Humans

Seen on Twitter:

I don’t know the context here, so I can’t speak to what Prof. Cronin meant. But it got me thinking.

Suppose you, like Prof. Cronin, were to insist that AI “cannot in principle” do science, because AI “is not autonomous” and “does not come up with its own problems to solve”. What might you mean?

You might just be saying that AI is bad at coming up with new problems to solve. That’s probably fair, at least at the moment. People have experimented with creating simple “AI researchers” that “study” computer programs, coming up with hypotheses about the programs’ performance and testing them. But it’s a long road from that to reproducing the much higher standards human scientists have to satisfy.

You probably don’t mean that, though. If you did, you wouldn’t have said “in principle”. You mean something stronger.

More likely, you might mean that AI cannot come up with its own problems, because AI is a tool. People come up with problems, and use AI to help solve them. In this perspective, not only is AI “not autonomous”, it cannot be autonomous.

On a practical level, this is clearly false. Yes, machine learning models, the core technology in current AI, are set up to answer questions. A user asks something, and receives the model’s prediction of the answer. That’s a tool, but for the more flexible models like GPT it’s trivial to turn it into something autonomous. Just add another program: a loop that asks the model what to do, does it, tells the model the result, and asks what to do next. Like taping a knife to a Roomba, you’ve made a very simple modification to make your technology much more dangerous.
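The loop described above can be sketched in a few lines. Everything here is a stand-in: `ask_model` and `run_tool` are hypothetical stubs playing the role of a real language model and a real tool, since no actual API is assumed.

```python
# A minimal sketch of the "loop" described above. ask_model and run_tool are
# hypothetical stubs: a real system would call a language model and execute
# real actions, respectively.

def ask_model(history):
    # Stub: pretend the model asks for two steps of work, then stops.
    return "done" if len(history) >= 3 else f"step {len(history)}"

def run_tool(action):
    # Stub: a real system would actually carry out the action in the world.
    return f"result of {action}"

def autonomous_loop(goal, max_steps=10):
    """Turn a question-answering tool into an agent: ask, act, report, repeat."""
    history = [goal]
    for _ in range(max_steps):
        action = ask_model(history)
        if action == "done":
            break
        history.append(run_tool(action))
    return history

print(autonomous_loop("do good physics"))
```

The wrapper program is trivial; all the intelligence stays in the model, which is the point of the knife-on-a-Roomba comparison.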

You might object, though, that this simple modification of GPT is not really autonomous. After all, a human created it. That human had some goal, some problem they wanted to solve, and the AI is just solving the problem for them.

That may be a fair description of current AI, but insisting it’s true in principle has some awkward implications. If you make a “physics AI”, just tell it to do “good physics”, and it starts coming up with hypotheses you’d never thought of, is it really fair to say it’s just solving your problem?

What if the AI, instead, was a child? Picture a physicist encouraging a child to follow in their footsteps, filling their life with physics ideas and rhapsodizing about the hard problems of the field at the dinner table. Suppose the child becomes a physicist in turn, and finds success later in life. Were they really autonomous? Were they really a scientist?

What if the child, instead, was a scientific field, and the parent was the general public? The public votes for representatives, the representatives vote to hire agencies, and the agencies promise scientists they’ll give them money if they like the problems they come up with. Who is autonomous here?

(And what happens if someone takes a hammer to that process? I’m…still not talking about this! No-politics-rule still in effect, sorry! I do have a post planned, but it will have to wait until I can deal with the fallout.)

At this point, you’d probably stop insisting. You’d drop that “in principle”, and stick with the claim I started with, that current AI can’t be a scientist.

But you have another option.

You can accept the whole chain of awkward implications, bite all the proverbial bullets. Yes, you insist, AI is not autonomous. Neither is the physicist’s child in your story, and neither are the world’s scientists paid by government grants. Each is a tool, used by the one, true autonomous scientist: you.

You are stuck in your skull, a blob of curious matter trained on decades of experience in the world and pre-trained with a couple billion years of evolution. For whatever reason, you want to know more, so you come up with problems to solve. You’re probably pretty vague about those problems. You might want to see more pretty pictures of space, or wrap your head around the nature of time. So you turn the world into your tool. You vote and pay taxes, so your government funds science. You subscribe to magazines and newspapers, so you hear about it. You press out against the world, and along with the pressure that already exists it adds up, and causes change. Biological intelligences and artificial intelligences scurry at your command. From their perspective, they are proposing their own problems, much more detailed and complex than the problems you want to solve. But from yours, they’re your limbs beyond limbs, sight beyond sight, asking the fundamental questions you want answered.

Valentine’s Day Physics Poem 2025

Today is Valentine’s Day, so it’s time for the blog’s yearly tradition of posting a poem. This one is inspired by that one Robert Wilson quote.

The physicist was called 
before the big wide world and asked,
Why?

This commitment
This drive
This dream

(and as Nature is a woman, so let her be)

How does she defend?
How does she serve your interests,
home and abroad
(which may be one and the same)?

The physicist stood
before the big wide world
alone but not alone

and answered

She makes me worth defending.

A realist defends to defend
Lives to live
Survives to survive
And devours to devour
It’s dour
Mere existence
The law of “better mine than yours”

Instead, the physicist spoke of the painters,
the sculptors,
…and the poets
He spoke of dignity and honor and love and worth
Of seeing a twinkling many-faceted thing
past the curve of the road
and a future to be shared.

Replacing Space-Time With the Space in Your Eyes

Nima Arkani-Hamed thinks space-time is doomed.

That doesn’t mean he thinks it’s about to be destroyed by a supervillain. Rather, Nima, like many physicists, thinks that space and time are just approximations to a deeper reality. In order to make sense of gravity in a quantum world, seemingly fundamental ideas, like that particles move through particular places at particular times, will probably need to become more flexible.

But while most people who think space-time is doomed research quantum gravity, Nima’s path is different. Nima has been studying scattering amplitudes, formulas used by particle physicists to predict how likely particles are to collide in particular ways. He has been trying to find ways to calculate these scattering amplitudes without referring directly to particles traveling through space and time. In the long run, the hope is that knowing how to do these calculations will help suggest new theories beyond particle physics, theories that can’t be described with space and time at all.

Ten years ago, Nima figured out how to do this in a particular theory, one that doesn’t describe the real world. For that theory he was able to find a new picture of how to calculate scattering amplitudes based on a combinatorial, geometric space with no reference to particles traveling through space-time. He gave this space the catchy name “the amplituhedron”. In the years since, he found a few other “hedra” describing different theories.

Now, he’s got a new approach. The new approach doesn’t have the same kind of catchy name: people sometimes call it surfaceology, or curve integral formalism. Like the amplituhedron, it involves concepts from combinatorics and geometry. It isn’t quite as “pure” as the amplituhedron: it uses a bit more from ordinary particle physics, and while it avoids specific paths in space-time it does care about the shape of those paths. Still, it has one big advantage: unlike the amplituhedron, Nima’s new approach looks like it can work for at least a few of the theories that actually describe the real world.

The amplituhedron was mysterious. Instead of space and time, it described the world in terms of a geometric space whose meaning was unclear. Nima’s new approach also describes the world in terms of a geometric space, but this space’s meaning is a lot more clear.

The space is called “kinematic space”. That probably still sounds mysterious. “Kinematic” in physics refers to motion. At the beginning of a physics class, when you study velocity and acceleration before introducing a single force, you’re studying kinematics. In particle physics, “kinematic” refers to the motion of the particles you detect. If you see an electron going up and to the right at a tenth the speed of light, those are its kinematics.
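To make that electron example concrete, here is a sketch of the kinematic data it corresponds to. The 45-degree angle is an assumption, chosen just to make “up and to the right” definite; the mass and speed-of-light values are standard.

```python
import math

# Illustration: the kinematic data of an electron "going up and to the
# right at a tenth the speed of light", with 45 degrees assumed for the angle.
c = 299_792_458.0        # speed of light, m/s
m_e = 9.109_383_7e-31    # electron mass, kg

speed = 0.1 * c
angle = math.radians(45.0)  # "up and to the right", for concreteness

gamma = 1.0 / math.sqrt(1.0 - (speed / c) ** 2)  # Lorentz factor
p = gamma * m_e * speed                          # relativistic momentum, kg*m/s
px, py = p * math.cos(angle), p * math.sin(angle)

print(gamma, p, px, py)
```

Numbers like these, direction and momentum as recorded by a detector, are the coordinates of kinematic space; nothing about the particle’s path before detection appears.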

Kinematic space, then, is the space of observations. By saying that his approach is based on ideas in kinematic space, what Nima is saying is that it describes colliding particles not based on what they might be doing before they’re detected, but on mathematics that asks only about facts that can be observed about the particles.

(For the experts: this isn’t quite true, because he still needs a concept of loop momenta. He’s getting the actual integrands from his approach, rather than the dual definition he got from the amplituhedron. But he does still have to integrate one way or another.)

Quantum mechanics famously has many interpretations. In my experience, Nima’s favorite interpretation is the one known as “shut up and calculate”. Instead of arguing about the nature of an indeterminately philosophical “real world”, Nima thinks quantum physics is a tool to calculate things people can observe in experiments, and that’s the part we should care about.

From a practical perspective, I agree with him. And I think if you have this perspective, then ultimately, kinematic space is where your theories have to live. Kinematic space is nothing more or less than the space of observations, the space defined by where things land in your detectors, or if you’re a human and not a collider, in your eyes. If you want to strip away all the speculation about the nature of reality, this is all that is left over. Any theory, of any reality, will have to be described in this way. So if you think reality might need a totally new weird theory, it makes sense to approach things like Nima does, and start with the one thing that will always remain: observations.