
Energy Is That Which Is Conserved

In school, kids learn about different types of energy. They learn about solar energy and wind energy, nuclear energy and chemical energy, electrical energy and mechanical energy, and potential energy and kinetic energy. They learn that energy is conserved, that it can never be created or destroyed, but only change form. They learn that energy makes things happen, that you can use energy to do work, that energy is different from matter.

Some kids, between good teaching and their own diligence, manage to impose order on the jumble of concepts and terms. Others end up envisioning the whole story a bit like Pokemon, with different types of some shared “stuff”.

Energy isn’t “stuff”, though. So what is it? What relates all these different types of things?

Energy is something which is conserved.

The mathematician Emmy Noether showed that, when the laws of physics have a symmetry, they come with a corresponding conserved quantity. For example, because the laws of physics are the same from place to place, momentum is conserved. Similarly, because the laws of physics are the same from one time to another, Noether’s theorem states that there must be some quantity related to time, some number we can calculate, that is conserved, even as other things change. We call that number energy.
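
For readers who have seen a little Lagrangian mechanics, here is a minimal sketch of how that statement gets cashed out (this assumes a classical system with Lagrangian L(q, \dot{q}, t); it is not the full generality of Noether’s theorem). Define

E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L

Along solutions of the equations of motion, \frac{dE}{dt} = -\frac{\partial L}{\partial t}. If the laws are the same from one time to another, L has no explicit dependence on time, the right-hand side vanishes, and E stays constant. That constant is what we call the energy.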

If energy is that simple, why are there all those types?

Energy is a number we can calculate. It’s a number we can calculate for different things. If you have a detailed description of how something in physics works, you can use that description to calculate that thing’s energy. In school, you memorize formulas like \frac{1}{2}m v^2 and m g h. These are all formulas that, with a bit more knowledge, you could derive. They are the quantities that, for a system meeting the right conditions, are conserved: the things that, according to Noether’s theorem, stay the same.
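
As a concrete check (a textbook example, nothing specific to this post): take a ball of mass m falling under gravity, with height h and vertical velocity v = \frac{dh}{dt}, so that Newton’s law reads m\frac{dv}{dt} = -mg. Then

E = \frac{1}{2}m v^2 + m g h, \qquad \frac{dE}{dt} = m v \frac{dv}{dt} + m g \frac{dh}{dt} = -mgv + mgv = 0

The kinetic and potential pieces trade off against each other, but the number you calculate stays the same.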

Because of this, you shouldn’t think of energy as a substance, or a fuel. Energy is something we can do: we physicists, and we students of physics. We can take a physical system, and see what about it ought to be conserved. Energy is an action, a calculation, a conceptual tool that can be used to make predictions.

Most things are, in the end.

Ideally, Exams Are for the Students

I should preface this by saying I don’t actually know that much about education. I taught a bit in my previous life as a professor, yes, but I probably spent more time being taught how to teach than actually teaching.

Recently, the Atlantic had a piece about testing accommodations for university students, like extra time on exams, or getting to do an exam in a special distraction-free environment. The piece quotes university employees who are having more and more trouble satisfying these accommodations, and includes the statistic that 20 percent of undergraduate students at Brown and Harvard are registered as disabled.

The piece has kicked off a firestorm on social media, mostly focused on that statistic (which conveniently appears just before the piece’s paywall). People are shocked, and cynical. They feel like more and more students are cheating the system, getting accommodations that they don’t actually deserve.

I feel like there is a missing mood in these discussions, that the social media furor is approaching this from the wrong perspective. People are forgetting what exams actually ought to be for.

Exams are for the students.

Exams are measurement tools. An exam for a class says whether a student has learned the material, or whether they haven’t, and need to retake the class or do more work to get there. An entrance exam, or a standardized exam like the SAT, predicts a student’s future success: whether they will be able to benefit from the material at a university, or whether they don’t yet have the background for that particular program of study.

These are all pieces of information that are most important to the students themselves, that help them structure their decisions. If you want to learn the material, should you take the course again? Which universities are you prepared for, and which not?

We have accommodations, and concepts like disability, because we believe that there are kinds of students for whom the exams don’t give this information accurately. We think that a student with more time, or who can take the exam in a distraction-free environment, would have a more accurate idea of whether they need to retake the material, or whether they’re ready for a course of study, than a student who has to take the exam under ordinary conditions. And we think we can identify the students who this matters for, and the students for whom this doesn’t matter nearly as much.

These aren’t claims about our values, or about what students deserve. They’re empirical claims, about how test results correlate with outcomes the students want. The conversation, then, needs to be built on top of those empirical claims. Are we better at predicting the success of students that receive accommodations, or worse? Can we measure that at all, or are we just guessing? And are we communicating the consequences accurately to students, that exam results tell them something useful and statistically robust that should help them plan their lives?

Values come in later, of course. We don’t have infinite resources, as the Atlantic piece emphasizes. We can’t measure everyone with as much precision as we would like. At some level, generalization takes over and accuracy is lost. There is absolutely a debate to be had about which measurements we can afford to make, and which we can’t.

But in order to have that argument at all, we first need to agree on what we’re measuring. And I feel like most of the people talking about this piece haven’t gotten there yet.

Bonus Info For “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe”

I had a piece in Quanta Magazine recently, about a tricky paradox that’s puzzling quantum gravity researchers and some early hints at its resolution.

The paradox comes from trying to describe “closed universes”, which are universes where it is impossible to reach the edge, even if you had infinite time to do it. This could be because the universe wraps around like a globe, or because the universe is expanding so fast no traveler could ever reach an edge. Recently, theoretical physicists have been trying to describe these closed universes, and have noticed a weird issue: each such universe appears to have only one possible quantum state. In general, quantum systems have more possible states the more complex they are, so for a whole universe to have only one possible state is a very strange thing, implying a bizarrely simple universe. Most worryingly, our universe may well be closed. Does that mean that secretly, the real world has only one possible state?

There is a possible solution that a few groups are playing around with. The argument that a closed universe has only one state depends on the fact that nothing inside a closed universe can reach the edge. But if nothing can reach the edge, then trying to observe the universe as a whole from outside would tell you nothing of use. Instead, any reasonable measurement would have to come from inside the universe. Such a measurement introduces a new kind of “edge of the universe”, this time not in the far distance, but close by: the edge between an observer and the rest of the world. And when you add that edge to the calculations, the universe stops being closed, and has all the many states it ought to.

This was an unusually tricky story for me to understand. I narrowly avoided several misconceptions, and I’m still not sure I managed to dodge all of them. Likewise, it was unusually tricky for the editors to understand, and I suspect it was especially tricky for Quanta’s social media team to understand.

It was also, quite clearly, tricky for the readers to understand. So I thought I would use this post to clear up a few misconceptions. I’ll say a bit more about what I learned investigating this piece, and try to clarify what the result does and does not mean.

Q: I’m confused about the math terms you’re using. Doesn’t a closed set contain its boundary?

A: Annoyingly, what physicists mean by a closed universe is a bit different from what mathematicians mean by a closed manifold, which is in turn more restrictive than what mathematicians mean by a closed set. One way to think about this that helped me is that in an open set you can take a limit that takes you out of the set, which is like being able to describe a (possibly infinite) path that takes you “out of the universe”. A closed set doesn’t have that: every path, no matter how long, still ends up in the same universe.
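
A concrete example, in case the limit language is unfamiliar: in the open interval (0,1), the sequence \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots converges to 0, a point outside the set, so you can “take a limit out of the set”. The closed interval [0,1] contains the limit of every convergent sequence of its points, so there is no such escape.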

Q: So a bunch of string theorists did a calculation and got a result that doesn’t make sense, a one-state universe. What if they’re just wrong?

A: Two things:

First, the people I talked to emphasized that it’s pretty hard to wiggle out of the conclusion. It’s not just a matter of saying you don’t believe in string theory and that’s that. The argument is based in pretty fundamental principles, and it’s not easy to propose a way out that doesn’t mess up something even more important.

That’s not to say it’s impossible. One of the people I interviewed, Henry Maxfield, thinks that some of the recent arguments are misunderstanding how to use one of their core techniques, in a way that accidentally presupposes the one-state universe.

But even he thinks that the bigger point, that closed universes have only one state, is probably true.

And that’s largely due to a second reason: there are older arguments that back the conclusion up.

One of the oldest dates back to John Wheeler, a physicist famous for both deep musings about the nature of space and time and coining evocative terms like “wormhole”. In the 1960’s, Wheeler argued that, in a theory where space and time can be curved, one should think of a system’s state as including every configuration it can evolve into over time, since it can be tricky to specify a moment “right now”. In a closed universe, you could expect a quantum system to explore every possible configuration…meaning that such a universe should be described by only one state.

Later, physicists studying holography ran into a similar conclusion. They kept noticing systems in quantum gravity where you can describe everything that happens inside by what happens on the edges. If there are no edges, that seems to suggest that in some sense there is nothing inside. Apparently, Lenny Susskind had a slide at the end of talks in the 90’s where he kept bringing up this point.

So even if the modern arguments are wrong, and even if string theory is wrong…it still looks like the overall conclusion is right.

Q: If a closed universe has only one state, does that make it deterministic, and thus classical?

A: Oh boy…

So, on the one hand, there is an idea, which I think also goes back to Wheeler, that asks: “if the universe as a whole has a wavefunction, how does it collapse?” One possibility is that the universe has only one state, so that nobody is needed to collapse the wavefunction: it already is in a definite state.

On the other hand, a universe with only one state does not actually look much like a classical universe. Our universe looks classical largely due to a process called decoherence, where small quantum systems interact with big quantum systems with many states, diluting quantum effects until the world looks classical. If there is only one state, there are no big systems to interact with, and the world has large quantum fluctuations that make it look very different from a classical universe.
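
For the quantum-mechanically inclined, here is the bare-bones version of that statement (a standard textbook sketch, not anything specific to the papers discussed here). A small system starting in a superposition \alpha|0\rangle + \beta|1\rangle that interacts with a big environment ends up in a joint state like

|\Psi\rangle = \alpha\, |0\rangle |E_0\rangle + \beta\, |1\rangle |E_1\rangle

If the environment has many states, the records |E_0\rangle and |E_1\rangle rapidly become nearly orthogonal, \langle E_0 | E_1 \rangle \approx 0, and any measurement on the small system alone looks like a classical mixture of |0\rangle and |1\rangle: the interference has been carried off into the environment. A universe with only one state has no such environment to carry it off.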

Q: How, exactly, are you defining “observer”?

A: A few commenters helpfully chimed in to talk about how physics models observers as “witness” systems, objects that preserve some record of what happens to them. A simple example is a ball sitting next to a bowl: if you find the ball in the bowl later, it means something moved it. This process, preserving what happens and making it more obvious, is in essence how physicists think about observers.

However, this isn’t the whole story in this case. Here, different research groups introducing observers are doing it in different ways. That’s, in part, why none of them are confident they have the right answer.

One of the approaches describes an observer in terms of its path through space and time, its worldline. Instead of a detailed witness system with specific properties, all they do is pick out a line and say “the observer is there”. Identifying that line, and declaring it different from its surroundings, seems to be enough to recover the complexity the universe ought to have.

The other approach treats the witness system in a bit more detail. We usually treat an observer in quantum mechanics as infinitely large compared to the quantum systems they measure. This approach instead gives the observer a finite size, and uses that to estimate how far their experience will be from classical physics.

Crucially, neither approach is a matter of defining a physical object and looking for it in the theory. Given a collection of atoms, neither team can tell you what is an observer, and what isn’t. Instead, in each approach, the observer is arbitrary: a choice, made by us when we use quantum mechanics, of what to count as an observer and what to count as the rest of the world. That choice can be made in many different ways, and each approach tries to describe what happens when you change that choice.

This is part of what makes this approach uncomfortable to some more philosophically-minded physicists: it treats observers not as a predictable part of the physical world, but as a mathematical description used to make statements about the world.

Q: If these ideas come from AdS/CFT, which is an open universe, how do you use them to describe a closed universe?

A: While more examples emerged later, initially theorists were thinking about two types of closed universes:

First, think about a black hole. You may have heard that when you fall into a black hole, you watch the whole universe age away before your eyes, due to the dramatic differences in the passage of time caused by the extreme gravity. Once you’ve seen the outside universe fade away, you are essentially in a closed universe of your own. The outside world will never affect you again, and you are isolated, with no path to the outside. These black hole interiors are one of the examples theorists looked at.

The other example is so-called “baby universes”. When physicists use quantum mechanics to calculate the chance of something happening, they have to add up every possible series of events that could have happened in between. For quantum gravity, this includes every possible arrangement of space and time, including arrangements with different shapes, some with tiny extra “baby universes” which branch off from the main universe and return. Universes with these “baby universes” are another example that theorists considered to understand closed universes.

Q: So wait, are you actually saying the universe needs to be observed to exist? That’s ridiculous, didn’t the universe exist long before humans existed to observe it? Is this some sort of Copenhagen Interpretation thing, or that thing called QBism?

A: You’re starting to ask philosophical questions, and here’s the thing:

There are physicists who spend their time thinking about how to interpret quantum mechanics. They talk to philosophers, and try to figure out how to answer these kinds of questions in a consistent and systematic way, keeping track of all the potential pitfalls and implications. They’re part of a subfield called “quantum foundations”.

The physicists whose work I was talking about in that piece are not those people.

Of the people I interviewed, one of them, Rob Myers, probably has lunch with quantum foundations researchers on occasion. The others, based at places like MIT and the IAS, probably don’t even do that.

Instead, these are people trying to solve a technical problem, people whose first inclination is to put philosophy to the side, and “shut up and calculate”. These people did a calculation that ought to have worked, checking how many quantum states they could find in a closed universe, and found a weird and annoying answer: just one. Trying to solve the problem, they’ve done technical calculation work, introducing a path through the universe, or a boundary around an observer, and seeing what happens. While some of them may have their own philosophical leanings, they’re not writing works of philosophy. Their papers don’t talk through the philosophical implications of their ideas in all that much detail, and they may well have different thoughts as to what those implications are.

So while I suspect I know the answers they would give to some of these questions, I’m not sure.

Instead, how about I tell you what I think?

I’m not a philosopher, I can’t promise my views will be consistent, that they won’t suffer from some pitfall. But unlike other people’s views, I can tell you what my own views are.

To start off: yes, the universe existed before humans. No, there is nothing special about our minds, we don’t have psychic powers to create the universe with our thoughts or anything dumb like that.

What I think is that, if we want to describe the world, we ought to take lessons from science.

Science works. It works for many reasons, but two important ones stand out.

Science works because it leads to technology, and it leads to technology because it guides actions. It lets us ask, if I do this, what will happen? What will I experience?

And science works because it lets people reach agreement. It lets people reach agreement because it lets us ask, if I observe this, what do I expect you to observe? And if we agree, we can agree on the science.

Ultimately, if we want to describe the world with the virtues of science, our descriptions need to obey this rule: they need to let us ask “what if?” questions about observations.

That means that science cannot avoid an observer. It can often hide the observer, place them far away and give them an infinite mind to behold what they see, so that one observer is essentially the same as another. But we shouldn’t expect to always be able to do this. Sometimes, we can’t avoid saying something about the observer: about where they are, or how big they are, for example.

These observers, though, don’t have to actually exist. We should be able to ask “what if” questions about others, and that means we should be able to dream up fictional observers, and ask, if they existed, what would they see? We can imagine observers swimming in the quark-gluon plasma after the Big Bang, or sitting inside a black hole’s event horizon, or outside our visible universe. The existence of the observer isn’t a physical requirement, but a methodological one: a restriction on how we can make useful, scientific statements about the world. Our theory doesn’t have to explain where observers “come from”, and can’t and shouldn’t do that. The observers aren’t part of the physical world being described, they’re a precondition for us to describe that world.

Is this the Copenhagen Interpretation? I’m not a historian, but I don’t think so. The impression I get is that there was no real Copenhagen Interpretation, that Bohr and Heisenberg, while more deeply interested in philosophy than many physicists today, didn’t actually think things through in enough depth to have a perspective you can name and argue with.

Is this QBism? I don’t think so. It aligns with some things QBists say, but they say a lot of silly things as well. It’s probably some kind of instrumentalism, for what that’s worth.

Is it logical positivism? I’ve been told logical positivists would argue that the world outside the visible universe does not exist. If that’s true, I’m not a logical positivist.

Is it pragmatism? Maybe? What I’ve seen of pragmatism definitely appeals to me, but I’ve seen my share of negative characterizations as well.

In the end, it’s an idea about what’s useful and what’s not, about what moves science forward and what doesn’t. It tries to avoid being preoccupied with unanswerable questions, and as much as possible to cash things out in testable statements. If I do this, what happens? What if I did that instead?

The results I covered for Quanta, to me, show that the observer matters on a deep level. That isn’t a physical statement, it isn’t a mystical statement. It’s a methodological statement: if we want to be scientists, we can’t give up on the observer.

Mandatory Dumb Acronyms

Sometimes, the world is silly for honest, happy reasons. And sometimes, it’s silly for reasons you never even considered.

Scientific projects often have acronyms, some of which are…clever, let’s say. Astronomers are famous for acronyms. Read this list, and you can find examples from 2D-FRUTTI and ABRACADABRA to WOMBAT and YORIC. Some of these aren’t even “really” acronyms, using letters other than the beginning of each word, multiple letters from a word, or both. (An egregious example from that list: VESTALE from “unVEil the darknesS of The gAlactic buLgE”.)

But here’s a pattern you’ve probably not noticed. My claim is that you’ll see more of these…clever…acronyms in projects in Europe, and that they’ll show up in a wider range of fields, not just astronomy. And the reason why is the European Research Council.

In the US, scientific grants are spread out among different government agencies. Typical grants are small, the kind of thing that lets a group share a postdoc every few years, with different types of grants covering projects of different scales.

The EU, instead, has the European Research Council, or ERC, with a flagship series of grants covering different career stages: Starting, Consolidator, and Advanced. Unlike most US grants, these are large (supporting multiple employees over several years), individual (awarded to a single principal investigator, not a collaboration) and general (the ERC uses the same framework across multiple fields, from physics to medicine to history).

That means there are a lot of medium-sized research projects in Europe that are funded by an ERC grant. And each of them is required to have an acronym.

Why? Who knows? “Acronym” is simply one of the un-skippable entries in the application forms, with a pre-set place of honor in their required grant proposal format. Nobody checks whether it’s a “real acronym”, so in practice it often isn’t, turning into some sort of catchy short name with “acronym vibes”. It, like everything else on these forms, is optimized to catch the attention of a committee of scientists who really would rather be doing something else, often discussed and refined by applicants’ mentors and sometimes even dedicated university staff.

So if you run into a scientist in Europe who proudly leads a group with a cutesy, vaguely acronym-adjacent name? And you keep running into these people?

It’s not a coincidence, and it’s not just scientists’ sense of humor. It’s the ERC.

Reminder to Physics Popularizers: “Discover” Is a Technical Term

When a word has both an everyday meaning and a technical meaning, it can cause no end of confusion.

I’ve written about this before using one of the most common examples, the word “model”, which means something quite different in the phrases “large language model”, “animal model for Alzheimer’s” and “model train”. And I’ve written about running into this kind of confusion at the beginning of my PhD, with the word “effective”.

But there is one example I see crop up again and again, even with otherwise skilled science communicators. It’s the word “discover”.

“Discover”, in physics, has a technical meaning. It’s a first-ever observation of something, with an associated standard of evidence. In this sense, the LHC discovered the Higgs boson in 2012, and LIGO discovered gravitational waves in 2015. And there are discoveries we can anticipate, like the cosmic neutrino background.

But of course, “discover” has a meaning in everyday English, too.

You probably think I’m going to say that “discover”, in everyday English, doesn’t have the same statistical standards it does in physics. That’s true of course, but it’s also pretty obvious, I don’t think it’s confusing anybody.

Rather, there is a much more important difference that physicists often forget: in everyday English, a discovery is a surprise.

“Discover”, a word arguably popularized by Columbus’s discovery of the Americas, is used pretty much exclusively to refer to learning about something you did not know about yet. It can be minor, like discovering a stick of gum you forgot, or dramatic, like discovering you’ve been transformed into a giant insect.

Now, as a scientist, you might say that everything that hasn’t yet been observed is unknown, ready for discovery. We didn’t know that the Higgs boson existed before the LHC, and we don’t know yet that there is a cosmic neutrino background.

But just because we don’t know something in a technical sense, doesn’t mean it’s surprising. And if something isn’t surprising at all, then in everyday, colloquial English, people don’t call it a discovery. You don’t “discover” that the store has milk today, even if they sometimes run out. You don’t “discover” that a movie is fun, if you went because you heard reviews claim it would be, even if the reviews might have been wrong. You don’t “discover” something you already expect.

At best, maybe you could “discover” something controversial. If you expect to find a lost city of gold, and everyone says you’re crazy, then fine, you can discover the lost city of gold. But if everyone agrees that there is probably a lost city of gold there? Then in everyday English, it would be very strange to say that you were the one who discovered it.

With this in mind, the way physicists use the word “discover” can cause a lot of confusion. It can make people think, as with gravitational waves, that a “discovery” is something totally new, that we weren’t pretty confident before LIGO that gravitational waves exist. And it can make people get jaded, and think physicists are overhyping, talking about “discovering” this or that particle physics fact because an experiment once again did exactly what it was expected to.

My recommendation? If you’re writing for the general public, use other words. The LHC “decisively detected” the Higgs boson. We expect to see “direct evidence” of the cosmic neutrino background. “Discover” has baggage, and should be used with care.

Explain/Teach/Advocate

Scientists have different goals when they communicate, leading to different styles, or registers, of communication. If you don’t notice what register a scientist is using, you might think they’re saying something they’re not. And if you notice someone using the wrong register for a situation, they may not actually be a scientist.

Sometimes, a scientist is trying to explain an idea to the general public. The point of these explanations is to give you appreciation and intuition for the science, not to understand it in detail. This register makes heavy use of metaphors, and sometimes also slogans. It should almost never be taken literally, and a contradiction between two different scientists’ explanations usually just means they are using incompatible metaphors for the same concept. Sometimes, scientists who do this a lot will comment on other metaphors you might have heard, referencing other slogans to help explain what those explanations miss. They do this knowing that they do, in the end, agree on the actual science: they’re just trying to give you another metaphor, with a deeper intuition for a neglected part of the story.

Other times, scientists are trying to teach a student to be able to do something. Teaching can use metaphors or slogans as introductions, but quickly moves past them, because it wants to show the students something they can use: an equation, a diagram, a classification. If a scientist shows you any of these equations/diagrams/classifications without explaining what they mean, then you’re not the student they had in mind: they had designed their lesson for someone who already knew those things. Teaching may convey the kinds of appreciation and intuition that explanations for the general public do, but that goal gets much less emphasis. The main goal is for students with the appropriate background to learn to do something new.

Finally, sometimes scientists are trying to advocate for a scientific point. In this register, and only in this register, are they trying to convince people who don’t already trust them. This kind of communication can include metaphors and slogans as decoration, but the bulk will be filled with details, and those details should constitute evidence: they should be a structured argument, one that lays out, scientifically, why others should come to the same conclusion.

A piece that tries to address multiple audiences can move between registers in a clean way. But if the register jumps back and forth, or if the wrong register is being used for a task, that usually means trouble. That trouble can be simple boredom, like a scientist’s typical conference talk that can’t decide whether it just wants other scientists to appreciate the work, whether it wants to teach them enough to actually use it, or whether it needs to convince any skeptics. It can also be more sinister: a lot of crackpots write pieces that are ostensibly aimed at convincing other scientists, but are almost entirely metaphors and slogans, pieces good at tugging on the general public’s intuition without actually giving scientists anything meaningful to engage with.

If you’re writing, or speaking, know what register you need to use to do what you’re trying to do! And if you run into a piece that doesn’t make sense, consider that it might be in a different register than you thought.

Fear of the Dark, Physics Version

Happy Halloween! I’ve got a yearly tradition on this blog of talking about the spooky side of physics. This year, we’ll think about what happens…when you turn off the lights.

Over history, astronomy has given us larger and larger views of the universe. We started out thinking the planets, Sun, and Moon were human-like, just a short distance away. Measuring distances, we started to understand the size of the Earth, then the Sun, then realized how much farther still the stars were from us. Gradually, we came to understand that some of the stars were much farther away than others. Thinkers like Immanuel Kant speculated that “nebulae” were clouds of stars like our own Milky Way, and in the early 20th century better distance measurements confirmed it, showing that Andromeda was not a nearby cloud, but an entirely different galaxy. By the 1960’s, scientists had observed the universe’s cosmic microwave background, seeing as far out as it was possible to see.

But what if we stopped halfway?

Since the 1920’s, we’ve known the universe is expanding. Since the 1990’s, we’ve thought that that expansion is speeding up: faraway galaxies are receding from us faster and faster. Space itself is expanding, carrying the most distant galaxies away from us…faster than light.
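
To put a rough number on “faster than light” (a back-of-the-envelope estimate using the standard expansion rate H_0 \approx 70 km/s per megaparsec): a galaxy at distance d recedes at roughly v = H_0 d, which passes the speed of light at

d = \frac{c}{H_0} \approx \frac{3\times 10^5\ \text{km/s}}{70\ \text{km/s/Mpc}} \approx 4300\ \text{Mpc} \approx 14\ \text{billion light-years}

Galaxies beyond that distance are already receding faster than light, and the accelerating expansion keeps pushing more of them past that mark.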

That ever-increasing speed has a consequence. It means that, eventually, each galaxy will fly beyond our view. One by one, the other galaxies will disappear, so far away that light will not have had enough time to reach us.

From our perspective, it will be as if the lights, one by one, started to turn out. Each faraway light, each cloudy blur that hides a whirl of worlds, will wink out. The sky will get darker and darker, until to an astronomer from a distant future, the universe will appear a strangely limited place:

A single whirl of stars, in a deep, dark void.

C. N. Yang, Dead at 103

I don’t usually do obituaries here, but sometimes I have something worth saying.

Chen Ning Yang, a towering figure in particle physics, died last week.

Picture from 1957, when he received his Nobel

I never met him. By the time I started my PhD at Stony Brook, Yang was long-retired, and hadn’t visited the Yang Institute for Theoretical Physics in quite some time.

(Though there was still an office door, tucked behind the institute’s admin staff, that bore his name.)

The Nobel Prize doesn’t always honor the most important theoretical physicists. In order to get a Nobel Prize, you need to discover something that gets confirmed by experiment. Generally, it has to be a very crisp, clear statement about reality. New calculation methods and broader new understandings are on shakier ground, and the theorists who propose them tend to be left out, or at best lumped together into shared prizes long after the fact.

Yang was lucky. In 1956, with T. D. Lee, he made that kind of crisp, clear claim: that the laws of physics, counter to everyone’s expectations, are not the same when reflected in a mirror. Wu’s experiment confirmed the prediction within months, and Lee and Yang got the prize in 1957.

That’s a huge, fundamental discovery about the natural world. But as a theorist, I don’t think that was Yang’s greatest accomplishment.

Yang contributed to other fields. Practicing theorists have seen his name strewn across concepts, formalisms, and theorems. I didn’t have space to talk about him in my article on integrability for Quanta Magazine, but only just barely: another paragraph or two, and he would have been there.

But his most influential contribution is something even more fundamental. And long-time readers of this blog should already know what it is.

Yang, along with Robert Mills, proposed Yang-Mills Theory.

There isn’t a Nobel prize for Yang-Mills theory. In 1954, when Yang and Mills published the theory, it looked obviously wrong, a theory that couldn’t explain anything in the natural world, mercilessly mocked by famous bullshit opponent Wolfgang Pauli. Not even an ambitious idea that seemed outlandish (like plate tectonics), it was a theory with such an obvious missing piece that, for someone who prioritized experiment like the Nobel committee does, it seemed pointless to consider.

All it had going for it was that it was a clear generalization, an obvious next step. If there are forces like electromagnetism, with one type of charge going from plus to minus, why not a theory with multiple, interacting types of charge?
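
If you want to see what that generalization looks like on paper, here is the schematic form in modern textbook notation (not the notation of Yang and Mills’ original paper). In electromagnetism, the field strength is

F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu

In Yang-Mills theory, the field A^a_\mu carries an extra index a labeling the types of charge, and the field strength picks up a new term,

F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu

That last term is the “interacting” part: it means the force carriers themselves carry charge and push on one another, something electromagnetism’s photons never do.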

Nothing about Yang-Mills theory was impossible, or contradictory. Mathematically, it was fine. It obeyed all the rules of quantum mechanics. It simply didn’t appear to match anything in the real world.

But, as theorists learn, nature doesn’t let a good idea go to waste.

Of the four fundamental forces of nature, as it would happen, half are Yang-Mills theories. Gravity is different; electromagnetism is simpler, and could be understood without Yang and Mills’ insights. But the weak nuclear force, that’s a Yang-Mills theory. It wasn’t obvious in 1954 because it wasn’t clear how the photon-like particles of Yang-Mills theory, which ought to be massless, could acquire a mass, and that wouldn’t become clear until the work of Peter Higgs a decade later. And the strong nuclear force, that’s also a Yang-Mills theory, missed because such a strong force can “confine” its charges, hiding them away.

So Yang got a Nobel, not for understanding half of nature’s forces before anyone else had, but for a quirky question of symmetry.

In practice, Yang was known for all of this, and more. He was enormously influential. I’ve heard it claimed that he personally kept China from investing in a new particle collider, the strength of his reputation the most powerful force on that side of the debate, as he argued that a developing country like China should be investing in science with more short-term industrial impact, like condensed matter and atomic physics. I wonder if the debate will shift with his death, and what commitments the next Chinese five-year plan will make.

Ultimately, Yang is an example of what a theorist can be, a mix of solid work, counterintuitive realizations, and the thought-through generalizations that nature always seems to make use of in the end. If you’re not clear on what a theoretical physicist is, or what one can do, let Yang’s story be your guide.

AGI Is an Economic Term, Not a Computer Science Term

Since it resonated with the audience, I’ll recap my main argument against AGI here. ‘General intelligence’ is like phlogiston, or the aether. It’s an outmoded scientific concept that does not refer to anything real. Any explanatory work it did can be done better by a richer scientific frame. 1/3

Shannon Vallor (@shannonvallor.bsky.social) 2025-10-02T22:09:06.610Z

I ran into this Bluesky post, and while a lot of the argument resonated with me, I think the author is missing something important.

Shannon Vallor is a philosopher of technology at the University of Edinburgh. She spoke recently at a meeting honoring the 75th anniversary of the Turing Test. The core of her argument, recapped in the Bluesky post, is that artificial general intelligence, or AGI, represents an outdated scientific concept, like phlogiston. While some researchers in the past thought of humans as having a kind of “general” intelligence that a machine would need to replicate, scientists today break down intelligence into a range of capabilities that can be present in different ways. From that perspective, searching for artificial general intelligence doesn’t make much sense: instead, researchers should focus on the particular capabilities they’re interested in.

I have a lot of sympathy for Vallor’s argument, though perhaps from a different direction than what she had in mind. I don’t know enough about intelligence in a biological context to comment there. But from a computer science perspective, intelligence obviously is composed of distinct capabilities. Something that computes, like a human or a machine, can have different amounts of memory, different processing speeds, different input and output rates. In terms of ability to execute algorithms, it can be a Turing machine, or something less than a Turing machine. In terms of the actual algorithms it runs, they can have different scaling for large inputs, and different overhead for small inputs. In terms of learning, one can have better data, or priors that are closer to the ground truth.

These days, these sorts of capabilities, of Turing machines and algorithms, are in some sense obviously not what the people interested in AGI are after. We already have them in currently-existing computers, after all. Instead, people who pursue AGI, and AI researchers more generally, are interested in heuristics. Humans do certain things without reliable algorithms: instead, we do them faster, but unreliably. And while some human heuristics seem pretty general, it’s widely understood that in the heuristics world there is no free lunch. No heuristic is good for everything, and no heuristic is bad for everything.

So is “general intelligence” a mirage, like phlogiston?

If you think about it as a scientific goal, sure. But as a product, not so much.

Consider a word processor.

Obviously, from a scientific perspective, there are lots of capabilities that involve processing words. Some were things machines could do well before the advent of modern computers: consider typewriters, for instance. Others are still out of reach: after all, we still pay people to write. (I myself am such a person!)

But at the same time, if I say that a computer program is a word processor, you have a pretty good idea of what that means. There was a time when processing words involved an enormous amount of labor, work done by a large number of specialized people (mostly women). Look at a workplace documentary from the 1960’s, and compare it to a workplace today, and you’ll see that word processor technology has radically changed what tasks people do.

AGI may not make sense as a scientific goal, but it’s perfectly coherent in these terms.

Right now, a lot of tasks are done by what one could broadly call human intelligence. Some of these tasks have already fallen to technology, others will fall one by one. But it’s not unreasonable to think of a package deal, a technology that covers enough of such tasks that human intelligence stops being economically viable. That’s not because there will be some scientific general intelligence that the technology would then have, but because a decent number of intellectual tasks do seem to come bundled together. And you don’t need to cover 100% of human capabilities to radically change workplaces, any more than you needed to cover 100% of the work of a 1960’s secretary with a word processor for modern secretarial work to have a dramatically different scope and role.

It’s worth keeping in mind what is and isn’t scientifically coherent, to be aware that you can’t just extrapolate the idea of general intelligence to any future machine. (For one, it constrains what “superintelligence” could look like.) But that doesn’t mean we should be complacent, and assume that AGI is impossible in principle. AGI, like a word processor, would be a machine that covers a set of tasks well enough that people use it instead of hiring people to do the work by hand. It’s just a broader set of tasks.

Congratulations to John Clarke, Michel Devoret, and John Martinis!

The 2025 Physics Nobel Prize was announced this week, awarded to John Clarke, Michel Devoret, and John Martinis for building an electrical circuit that exhibited quantum effects like tunneling and energy quantization on a macroscopic scale.

Press coverage of this prize tends to focus on two aspects: the idea that these three “scaled up” quantum effects to medium-sized objects (the technical account quotes a description that calls it “big enough to get one’s grubby fingers on”), and that the work paved the way for some of the fundamental technologies people are exploring for quantum computing.

That’s a fine enough story, but it leaves out what made these folks’ work unique, why it differs from the work of other Nobel laureates on other quantum systems. It’s a bit more technical of a story, but I don’t think it’s that technical. I’ll try to tell it here.

To start, have you heard of Bose-Einstein Condensates?

Bose-Einstein Condensates are macroscopic quantum states that have already won Nobel prizes. First theorized based on ideas developed by Einstein and Bose (the namesake of bosons), they involve a large number of particles moving together, each in the same state. While the first gas that obeyed Einstein’s equations for a Bose-Einstein Condensate was created in the 1990’s, after Clarke, Devoret, and Martinis’s work, other things based on essentially the same principles were created much earlier. A laser works on the same principles as a Bose-Einstein condensate, as do phenomena like superconductivity and superfluidity.

This means that lasers, superfluids, and superconductors had been showing off quantum mechanics on grubby finger scales well before Clarke, Devoret, and Martinis’s work. But the science rewarded by this year’s Nobel turns out to be something quite different.

Because the different photons in laser light are independently in identical quantum states, lasers are surprisingly robust. You can disrupt the state of one photon, and it won’t interfere with the other states. You’ll have weakened the laser’s consistency a little bit, but the disruption won’t spread much, if at all.

That’s very different from the way quantum systems usually work. Schrodinger’s cat is the classic example. You have a box with a radioactive atom, and if that atom decays, it releases poison, killing the cat. You don’t know if the atom has decayed or not, and you don’t know if the cat is alive or not. We say the atom’s state is a superposition of decayed and not decayed, and the cat’s state is a superposition of alive and dead.

But unlike photons in a laser, the atom and the cat in Schrodinger’s cat are not independent: if the atom has decayed, the cat is dead; if the atom has not, the cat is alive. We say the states of atom and cat are entangled.
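
In symbols (a standard illustration, not anything specific to the prize-winning circuits): the laser-like situation is a product state, roughly

|\Psi_{\text{laser}}\rangle = |\psi\rangle \otimes |\psi\rangle \otimes \cdots \otimes |\psi\rangle

where disturbing one factor leaves the others alone. The cat situation is an entangled state,

|\Psi_{\text{cat}}\rangle = \frac{1}{\sqrt{2}}\Big( |\text{not decayed}\rangle\, |\text{alive}\rangle + |\text{decayed}\rangle\, |\text{dead}\rangle \Big)

which cannot be split into a separate “atom” piece and “cat” piece: learn about one, and you have learned about the other.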

That makes these so-called “Schrodinger’s cat” states much more delicate. The state of the cat depends on the state of the atom, and those dependencies quickly “leak” to the outside world. If you haven’t sealed the box well, the smell of the room is now also entangled with the cat…which, if you have a sense of smell, means that you are entangled with the cat. That’s the same as saying that you have measured the cat, so you can’t treat it as quantum any more.

What Clarke, Devoret, and Martinis did was to build a circuit that could exhibit, not a state like a laser, but a “cat state”: delicately entangled, at risk of total collapse if measured.

That’s why they deserved a Nobel, even in a world where there are many other Nobels for different types of quantum states. Lasers, superconductors, even Bose-Einstein condensates were in a sense “easy mode”, robust quantum states that didn’t need all that much protection. This year’s physics laureates, in contrast, showed it was possible to make circuits that could make use of quantum mechanics’ most delicate properties.

That’s also why their circuits, in particular, are being heralded as a predecessor for modern attempts at quantum computers. Quantum computers do tricks with entanglement; they need “cat states”, not Bose-Einstein Condensates. And Clarke, Devoret, and Martinis’s work in the 1980’s was the first clear proof that this was a feasible thing to do.