Monthly Archives: October 2025

Fear of the Dark, Physics Version

Happy Halloween! I’ve got a yearly tradition on this blog of talking about the spooky side of physics. This year, we’ll think about what happens…when you turn off the lights.

Throughout history, astronomy has given us larger and larger views of the universe. We started out thinking the planets, Sun, and Moon were human-like, just a short distance away. Measuring distances, we started to understand the size of the Earth, then the Sun, then realized how much farther still the stars were from us. Gradually, we came to understand that some of the stars were much farther away than others. Thinkers like Immanuel Kant speculated that “nebulae” were clouds of stars like our own Milky Way, and in the early 20th century better distance measurements confirmed it, showing that Andromeda was not a nearby cloud but an entirely different galaxy. By the 1960’s, scientists had observed the universe’s cosmic microwave background, seeing as far out as it was possible to see.

But what if we stopped halfway?

Since the 1920’s, we’ve known the universe is expanding. Since the 1990’s, we’ve thought that that expansion is speeding up: faraway galaxies are getting farther and farther away from us. Space itself is expanding, carrying the galaxies apart…faster than light.
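To put a rough number on that (my own back-of-envelope, using today’s measured Hubble constant of about 70 km/s per megaparsec, not anything from the original argument): Hubble’s law says a galaxy’s recession speed grows in proportion to its distance, so past a certain distance that speed exceeds the speed of light:

$$v = H_0\, d, \qquad v > c \;\text{ once }\; d > \frac{c}{H_0} \approx \frac{300{,}000\ \text{km/s}}{70\ \text{km/s/Mpc}} \approx 4{,}300\ \text{Mpc} \approx 14\ \text{billion light-years}.$$

And with the expansion speeding up, light sent from beyond a (somewhat larger) horizon never catches up to us at all.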

That ever-increasing speed has a consequence. It means that, eventually, each galaxy will fly beyond our view. One by one, the other galaxies will disappear, so far away that light will not have had enough time to reach us.

From our perspective, it will be as if the lights, one by one, started to go out. Each faraway light, each cloudy blur that hides a whirl of worlds, will wink out. The sky will get darker and darker, until, to an astronomer of the distant future, the universe will appear a strangely limited place:

A single whirl of stars, in a deep, dark, void.

C. N. Yang, Dead at 103

I don’t usually do obituaries here, but sometimes I have something worth saying.

Chen Ning Yang, a towering figure in particle physics, died last week.

Picture from 1957, when he received his Nobel

I never met him. By the time I started my PhD at Stony Brook, Yang was long-retired, and hadn’t visited the Yang Institute for Theoretical Physics in quite some time.

(Though there was still an office door, tucked behind the institute’s admin staff, that bore his name.)

The Nobel Prize doesn’t always honor the most important theoretical physicists. In order to get a Nobel Prize, you need to discover something that gets confirmed by experiment. Generally, it has to be a very crisp, clear statement about reality. New calculation methods and broader new understandings are on shakier ground, and theorists who propose them tend to be left out, or at best bundled together into shared prizes long after the fact.

Yang was lucky. With T. D. Lee, he had made that crisp, clear statement: the laws of physics, counter to everyone’s expectations, are not the same when reflected in a mirror. They proposed it in 1956, Wu confirmed it early the next year, and Lee and Yang shared the prize just months later, in 1957.

That’s a huge, fundamental discovery about the natural world. But as a theorist, I don’t think that was Yang’s greatest accomplishment.

Yang contributed to other fields as well. Practicing theorists have seen his name strewn across concepts, formalisms, and theorems. I didn’t have space to talk about him in my article on integrability for Quanta Magazine, but it was a close call: another paragraph or two, and he would have been there.

But his most influential contribution is something even more fundamental. And long-time readers of this blog should already know what it is.

Yang, along with Robert Mills, proposed Yang-Mills Theory.

There isn’t a Nobel prize for Yang-Mills theory. In 1953, when Yang and Mills proposed the theory, it looked obviously wrong: a theory that couldn’t explain anything in the natural world, mercilessly mocked by Wolfgang Pauli, physics’ most famous enemy of bullshit. It wasn’t even an ambitious idea that merely seemed outlandish (like plate tectonics). It was a theory with such an obvious missing piece that, to anyone who prioritized experiment the way the Nobel committee does, it seemed pointless to consider.

All it had going for it was that it was a clear generalization, an obvious next step. If there are forces like electromagnetism, with one type of charge going from plus to minus, why not a theory with multiple, interacting types of charge?

Nothing about Yang-Mills theory was impossible, or contradictory. Mathematically, it was fine. It obeyed all the rules of quantum mechanics. It simply didn’t appear to match anything in the real world.
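To see the generalization in symbols (the standard textbook form, not the notation of Yang and Mills’ original paper): electromagnetism builds its field strength from a single field, while Yang-Mills theory adds an extra term because its multiple types of charge interact with each other:

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \quad \text{(electromagnetism)},$$

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc} A^b_\mu A^c_\nu \quad \text{(Yang-Mills)}.$$

That last term, with its “structure constants” $f^{abc}$ recording how the charges mix, is the whole difference: in Yang-Mills theory, the force-carrying particles themselves carry charge.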

But, as theorists learn, nature doesn’t let a good idea go to waste.

Of the four fundamental forces of nature, as it would happen, half are Yang-Mills theories. Gravity is different, and electromagnetism is simpler, understandable without Yang and Mills’ insights. But the weak nuclear force, that’s a Yang-Mills theory. It wasn’t obvious in 1953 because no one could see how the massless, photon-like particles of Yang-Mills theory could acquire mass, and it wouldn’t become clear until the work of Peter Higgs over a decade later. And the strong nuclear force, that’s also a Yang-Mills theory, missed because so strong a force can “confine” its charges, hiding them away.

So Yang got a Nobel, not for understanding half of nature’s forces before anyone else had, but for a quirky question of symmetry.

In practice, Yang was known for all of this, and more. He was enormously influential. I’ve heard it claimed that he personally kept China from investing in a new particle collider, the strength of his reputation the most powerful force on that side of the debate, as he argued that a developing country like China should be investing in science with more short-term industrial impact, like condensed matter and atomic physics. I wonder if the debate will shift with his death, and what commitments the next Chinese five-year plan will make.

Ultimately, Yang is an example of what a theorist can be, a mix of solid work, counterintuitive realizations, and the thought-through generalizations that nature always seems to make use of in the end. If you’re not clear on what a theoretical physicist is, or what one can do, let Yang’s story be your guide.

AGI Is an Economic Term, Not a Computer Science Term

Since it resonated with the audience, I’ll recap my main argument against AGI here. ‘General intelligence’ is like phlogiston, or the aether. It’s an outmoded scientific concept that does not refer to anything real. Any explanatory work it did can be done better by a richer scientific frame. 1/3

Shannon Vallor (@shannonvallor.bsky.social) 2025-10-02T22:09:06.610Z

I ran into this Bluesky post, and while a lot of the argument resonated with me, I think the author is missing something important.

Shannon Vallor is a philosopher of technology at the University of Edinburgh. She spoke recently at a meeting honoring the 75th anniversary of the Turing Test. The core of her argument, recapped in the Bluesky post, is that artificial general intelligence, or AGI, represents an outdated scientific concept, like phlogiston. While some researchers in the past thought of humans as having a kind of “general” intelligence that a machine would need to replicate, scientists today break down intelligence into a range of capabilities that can be present in different ways. From that perspective, searching for artificial general intelligence doesn’t make much sense: instead, researchers should focus on the particular capabilities they’re interested in.

I have a lot of sympathy for Vallor’s argument, though perhaps from a different direction than what she had in mind. I don’t know enough about intelligence in a biological context to comment there. But from a computer science perspective, intelligence obviously is composed of distinct capabilities. Something that computes, like a human or a machine, can have different amounts of memory, different processing speeds, different input and output rates. In terms of ability to execute algorithms, it can be a Turing machine, or something less than a Turing machine. In terms of the actual algorithms it runs, they can have different scaling for large inputs, and different overhead for small inputs. In terms of learning, one can have better data, or priors that are closer to the ground truth.

These days, all of these Turing-machine-and-algorithm capabilities are in some sense obviously not what the people interested in AGI are after. We already have them in currently-existing computers, after all. Instead, people who pursue AGI, and AI researchers more generally, are interested in heuristics. Humans do certain things without reliable algorithms: we do them faster, but unreliably. And while some human heuristics seem pretty general, it’s widely understood that in the heuristics world there is no free lunch. No heuristic is good for everything, and no heuristic is bad for everything.

So is “general intelligence” a mirage, like phlogiston?

If you think about it as a scientific goal, sure. But as a product, not so much.

Consider a word processor.

Obviously, from a scientific perspective, there are lots of capabilities that involve processing words. Some were things machines could do well before the advent of modern computers: consider typewriters, for instance. Others are still out of reach; after all, we still pay people to write. (I myself am such a person!)

But at the same time, if I say that a computer program is a word processor, you have a pretty good idea of what that means. There was a time when processing words involved an enormous amount of labor, work done by a large number of specialized people (mostly women). Look at a workplace documentary from the 1960’s, and compare it to a workplace today, and you’ll see that word processor technology has radically changed what tasks people do.

AGI may not make sense as a scientific goal, but it’s perfectly coherent in these terms.

Right now, a lot of tasks are done by what one could broadly call human intelligence. Some of these tasks have already fallen to technology; others will fall one by one. But it’s not unreasonable to imagine a package deal, a technology that covers enough of such tasks that human intelligence stops being economically viable. That’s not because the technology would possess some scientifically meaningful general intelligence, but because a decent number of intellectual tasks do seem to come bundled together. And you don’t need to cover 100% of human capabilities to radically change workplaces, any more than a word processor needed to cover 100% of the work of a 1960’s secretary for modern secretarial work to have a dramatically different scope and role.

It’s worth keeping in mind what is and isn’t scientifically coherent, to be aware that you can’t just extrapolate the idea of general intelligence to any future machine. (For one, it constrains what “superintelligence” could look like.) But that doesn’t mean we should be complacent, and assume that AGI is impossible in principle. AGI, like a word processor, would be a machine that covers a set of tasks well enough that people use it instead of hiring people to do the work by hand. It’s just a broader set of tasks.

Congratulations to John Clarke, Michel Devoret, and John Martinis!

The 2025 Physics Nobel Prize was announced this week, awarded to John Clarke, Michel Devoret, and John Martinis for building an electrical circuit that exhibited quantum effects like tunneling and energy quantization on a macroscopic scale.

Press coverage of this prize tends to focus on two aspects: the idea that these three “scaled up” quantum effects to medium-sized objects (the technical account quotes a description that calls it “big enough to get one’s grubby fingers on”), and that the work paved the way for some of the fundamental technologies people are exploring for quantum computing.

That’s a fine enough story, but it leaves out what made these folks’ work unique, why it differs from the work of other Nobel laureates on other quantum systems. It’s a bit more technical of a story, but I don’t think it’s that technical. I’ll try to tell it here.

To start, have you heard of Bose-Einstein Condensates?

Bose-Einstein Condensates are macroscopic quantum states that have already won Nobel prizes. First theorized based on ideas developed by Einstein and Bose (the namesake of bosons), they involve a large number of particles moving together, each in the same state. While the first gas that obeyed Einstein’s equations for a Bose-Einstein Condensate was created in the 1990’s, after Clarke, Devoret, and Martinis’s work, other things based on essentially the same principles were created much earlier. A laser works on the same principles as a Bose-Einstein condensate, as do phenomena like superconductivity and superfluidity.

This means that lasers, superfluids, and superconductors had been showing off quantum mechanics on grubby finger scales well before Clarke, Devoret, and Martinis’s work. But the science rewarded by this year’s Nobel turns out to be something quite different.

Because the different photons in laser light are independently in identical quantum states, lasers are surprisingly robust. You can disrupt the state of one photon, and it won’t interfere with the others. You’ll have weakened the laser’s coherence a little bit, but the disruption won’t spread much, if at all.

That’s very different from the way quantum systems usually work. Schrodinger’s cat is the classic example. You have a box with a radioactive atom, and if that atom decays, it releases poison, killing the cat. You don’t know if the atom has decayed or not, and you don’t know if the cat is alive or not. We say the atom’s state is a superposition of decayed and not decayed, and the cat’s state is a superposition of alive and dead.

But unlike photons in a laser, the atom and the cat in Schrodinger’s cat are not independent: if the atom has decayed, the cat is dead, if the atom has not, the cat is alive. We say the states of atom and cat are entangled.
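In symbols (a standard illustration, not anything specific to this year’s laureates): the laser’s photons sit in a product of independent states, while the cat state can’t be split apart that way:

$$|\Psi_{\text{laser}}\rangle = |\psi\rangle \otimes |\psi\rangle \otimes \cdots \otimes |\psi\rangle,$$

$$|\Psi_{\text{cat}}\rangle = \tfrac{1}{\sqrt{2}}\big(\,|\text{decayed}\rangle \otimes |\text{dead}\rangle + |\text{not decayed}\rangle \otimes |\text{alive}\rangle\,\big).$$

Poke one factor of the product state, and the rest don’t care. Measure either half of the cat state, and the whole thing collapses.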

That makes these so-called “Schrodinger’s cat” states much more delicate. The state of the cat depends on the state of the atom, and those dependencies quickly “leak” to the outside world. If you haven’t sealed the box well, the smell of the room is now also entangled with the cat…which, if you have a sense of smell, means that you are entangled with the cat. That’s the same as saying that you have measured the cat, so you can’t treat it as quantum any more.

What Clarke, Devoret, and Martinis did was to build a circuit that could exhibit, not a state like a laser, but a “cat state”: delicately entangled, at risk of total collapse if measured.

That’s why they deserved a Nobel, even in a world where there are many other Nobels for different types of quantum states. Lasers, superconductors, even Bose-Einstein condensates were in a sense “easy mode”, robust quantum states that didn’t need all that much protection. This year’s physics laureates, in contrast, showed it was possible to make circuits that could make use of quantum mechanics’ most delicate properties.

That’s also why their circuits, in particular, are being heralded as a predecessor of modern attempts at quantum computers. Quantum computers do tricks with entanglement: they need “cat states”, not Bose-Einstein Condensates. And Clarke, Devoret, and Martinis’s work in the 1980’s was the first clear proof that this was a feasible thing to do.

When Your Theory Is Already Dead

Occasionally, people try to give “even-handed” accounts of crackpot physics, like people who claim to have invented anti-gravity devices. These accounts don’t go so far as to say that the crackpots are right, and will freely point out plausible doubts about the experiments. But at the end of the day, they’ll conclude that we still don’t really know the answer, and perhaps the next experiment will go differently. More tests are needed.

For someone used to engineering, or to sciences without much theory behind them, this might sound pretty reasonable. Sure, any one test can be critiqued. But you can’t prove a negative: you can’t rule out a future test that might finally see the effect.

That’s all well and good…if you have no idea what you’re doing. But these people, just like anyone else who grapples with physics, aren’t just proposing experiments. They’re proposing theories: models of the world.

And once you’ve got a theory, you don’t just have to care about future experiments. You have to care about past experiments too. Some theories…are already dead.

The "You’re already dead" scene from the anime Fist of the North Star
Warning: this is a link to TVTropes; enter only if you have lots of time on your hands

To get a little more specific, let’s talk about antigravity proposals that use scalar fields.

Scalar fields seem to have some sort of mysticism attached to them in the antigravity crackpot community, but for physicists they’re just the simplest possible type of field, the most obvious thing anyone would have proposed once they were comfortable enough with the idea of fields in the first place. We know of one, the Higgs field, which gives rise to the Higgs boson.
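If you’re wondering what “simplest” means here, it’s something concrete (this is the standard free-field Lagrangian from any quantum field theory textbook, not something specific to this post): a scalar field is a single number $\phi$ at each point of space-time, obeying the plainest equations such a thing can obey:

$$\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi - \tfrac{1}{2} m^2 \phi^2.$$

No spin, no polarization, no internal directions: just one value per point, and a mass $m$.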

We also know that if there are any more, they’re pretty subtle…and as a result, pretty useless.

We know this because of a wide variety of what are called “fifth-force experiments”, tests and astronomical observations looking for an undiscovered force that, like gravity, reaches out to long distances. Many of these experiments are quite general, the sort of thing that would pick up a wide variety of scalar fields. And so far, none of them have seen anything.

That “so far” doesn’t mean “wait and see”, though. Each time physicists run a fifth-force experiment, they establish a limit. They say, “a fifth force cannot be like this”. It can’t be this strong, it can’t operate on these scales, it can’t obey this model. Each experiment doesn’t just say “no fifth force yet”, it says “no fifth force of this kind, at all”.
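To give a flavor of how those limits get stated (the standard parameterization in the fifth-force literature, my summary rather than the post’s): experiments model a new force as a Yukawa-type correction to Newtonian gravity,

$$V(r) = -\frac{G\, m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),$$

where $\alpha$ is the new force’s strength relative to gravity and $\lambda$ its range. Each null result permanently excludes a region of the $(\alpha, \lambda)$ plane, and a scalar-field model that lands in an excluded region is dead on arrival.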

When you write down a theory, if you’re not careful, you might find it has already been ruled out by one of these experiments. This happens to physicists all the time. Physicists want to use scalar fields to understand the expansion of the universe, and they use them to think about dark matter. And frequently, a model one physicist proposes will be ruled out, not by new experiments, but by someone doing the math and realizing that the model is already contradicted by a pre-existing fifth-force experiment.

So can you prove a negative? Sort of.

If you never commit to a model, if you never propose an explanation, then you can never be disproven, you can always wait for the experiment of your dreams to come true. But if you have any model, any idea, any explanation at all, then your explanation will have implications. Those implications may kill your theory in a future experiment. Or, they may have already killed it.