Newsworthiness Bias

I had a chat about journalism recently, and I had a realization about just how weird science journalism, in particular, is.

Journalists aren’t supposed to be cheerleaders. Journalism and PR have very different goals (which is why I keep those sides of my work separate). A journalist is supposed to be uncompromising, to write the truth even if it paints the source in a bad light.

Norms are built around this. Serious journalistic outlets usually don’t let sources see pieces before they’re published. The source doesn’t have the final say in how they’re portrayed: the journalist reserves the right to surprise them if justified. Investigative journalists can be superstars, digging up damning secrets about the powerful.

When a journalist starts a project, the piece might turn out positive, or negative. A politician might be the best path forward, or a disingenuous grifter. A business might be a great investment opportunity, or a total scam. A popular piece of art might be a triumph, or a disappointment.

And a scientific result?

It might be a fraud, of course. Scientific fraud does exist, and is a real problem. But it’s not common, really. Pick a random scientific paper, filter by papers you might consider reporting on in the first place, and you’re very unlikely to find a fraudulent result. Science journalists occasionally report on spectacularly audacious scientific frauds, or frauds in papers that have already made the headlines. But you don’t expect fraud in the average paper you cover.

It might be scientifically misguided: flawed statistics, a gap in a proof, a misuse of concepts. Journalists aren’t usually equipped to ferret out these issues, though. Instead, this is handled in principle by peer review, and in practice by the scientific community outside of the peer review process.

Instead, for a scientific result, the most common negative judgement isn’t that it’s a lie, or a mistake. It’s that it’s boring.

And certainly, a good science journalist can judge a paper as boring. But there is a key difference between doing that, and judging a politician as crooked or a popular work of art as mediocre. You can write an article about the lying candidate for governor, or the letdown Tarantino movie. But if a scientific result is boring, and nobody else has covered it…then it isn’t newsworthy.

In science, people don’t usually publish their failures, their negative results, their ho-hum obvious conclusions. That fills the literature with only the successes, a phenomenon called publication bias. It also means, though, that scientists try to make their results sound more successful, more important and interesting, than they actually are. Some of the folks fighting the replication crisis have coined a term for this: they call it importance hacking.

The same incentives apply to journalists, especially freelancers. Starting out, it was far from clear that I could make enough to live on. I felt like I had to make every lead count, to find a newsworthy angle on every story idea I could find, because who knew when I would find another one? Over time, I learned to balance that pull better. Now that I’m making most of my income from consulting instead, the pressure has eased almost entirely: there are things I’m tempted to importance-hack for the sake of friends, but nothing that I need to importance-hack to stay in the black.

Doing journalism on the side may be good for me personally at the moment, but it’s not really a model. Much like we need career scientists, even if their work is sometimes boring, we need career journalists, even if they’re sometimes pressured to overhype.

So if we don’t want to incentivize science journalists to be science cheerleaders, what can we do instead?

In science, one way to address publication bias is with pre-registered studies. A scientist sets out what they plan to test, and a journal agrees to publish the result, no matter what it is. You could imagine something like this for science journalism. I once proposed a recurring column where every month I would cover a random paper from arXiv.org, explaining what it meant to accomplish. I get why the idea was turned down, but I still think about it.

In journalism, the arts offer the closest parallel, though with a different approach. There are many negative reviews of books, movies, and music, and most of them merely accuse the art of being boring, not evil. These exist because they focus on popular works that people pay attention to anyway, so that any negative coverage has someone to convince. You could imagine applying this model to science, though it could be a bit silly. I’m envisioning a journalist who writes an article every time Witten publishes, rating some papers impressive and others disappointing, the same way a music journalist might cover every Taylor Swift album.

Neither of these models is really satisfactory. You could imagine an even more adversarial model, where journalists run around accusing random scientists of wasting the government’s money, but that seems dramatically worse.

So I’m not sure. Science is weird, and hard to accurately value: if we knew how much something mattered already, it would be engineering, not science. Journalism is weird: it’s public-facing research, where the public-facing part is the whole point. Their combination? Even weirder.

Microdosing Vibe Physics

Have you heard of “vibe physics”?

The phrase “vibe coding” came first. People have been using large language models like ChatGPT to write computer code (and not the way I did last year). They chat with the model, describing what they want to do and asking the model to code it up. You can guess the arguments around this, from people who are convinced AI is already better than a human programmer to people sure the code will be riddled with errors and vulnerabilities.

Now, there are people claiming not only to do vibe coding, but vibe physics: doing theoretical physics by chatting with an AI.

I think we can all agree that’s a lot less plausible. Some of the people who do vibe coding actually know how to code, but I haven’t seen anyone claiming to do vibe physics who actually understands physics. They’re tech entrepreneurs in the most prominent cases, random people on the internet otherwise. And while a lot of computer code is a minor tweak on something someone has already done, theoretical physics doesn’t work that way: if someone has already come up with your idea, you’re an educator, not a physicist.

Still, I think there is something to keep in mind about the idea of “vibe physics”, related to where physics comes from.

Here’s a question to start with: go back a bit before the current chat-bot boom. There were a ton of other computational and mathematical tools. Theorem-proving software could encode almost arbitrary mathematical statements in computer code and guarantee their accuracy. Statistical concepts like Bayes’ rule described how to reason from evidence to conclusions, not flawlessly but as well as anyone reliably can. We had computer simulations for a wealth of physical phenomena, and approximation schemes for many others.

With all those tools, why did we still have human physicists?

That is, go back before ChatGPT, before large language models. Why not just code up a program that starts with the evidence and checks which mathematical model fits it best?

In principle, I think you really could have done that. But you could never run that program. It would take too long.

Doing science 100% correctly and reliably is agonizingly slow, and prohibitively expensive. You cannot check every possible model, nor can you check those models against all the available data. You must simplify your problem, somehow, even if it makes your work less reliable, and sometimes incorrect.

And for most of history, humans have provided that simplification.

A physicist isn’t going to consider every possible model. They’re going to consider models that are similar to models they studied, or similar to models others propose. They aren’t going to consider all the evidence. They’ll look at some of the evidence, the evidence other physicists are talking about and puzzled by. They won’t simulate the consequences of their hypotheses in exhaustive detail. Instead, they’ll guess, based on their own experience, a calculation that captures what they expect to be relevant.

Human physicists provided the unreliable part of physics, the heuristics. The “vibe physics”, if you will.

AI is also unreliable, also heuristic. But humans still do this better than AI.

Part of the difference is specificity. These AIs are trained on all of human language, and then perhaps fine-tuned on a general class of problems. A human expert has spent their life fine-tuning on one specific type of problem, and their intuitions, their heuristics, their lazy associations and vibes, all will be especially well-suited to problems of that type.

Another part of the difference, though, is scale.

When you talk to ChatGPT, it follows its vibes into paragraphs of text. If you turn on reasoning features, you make it check its work in the background, but it still is generating words upon words inside, evaluating those words, then generating more.

I suspect, for a physicist, the “control loop” is much tighter. Many potential ideas get ruled out a few words in. Many aren’t even expressed in words at all, just concepts. A human physicist is ultimately driven by vibes, but they check and verify those vibes, based on their experience, at a much higher frequency than any current AI system can achieve.

(I know almost nothing about neuroscience. I’m just basing this on what it can feel like, to grope through a sentence and have it assemble itself as it goes into something correct, rather than having to go back and edit it.)

As companies get access to bigger datacenters, I suspect they’ll try to make this loop tighter, to get AI to do something closer to what I suspect humans do. And then maybe AI will be able to do vibe physics.

Even then, though, you should not do vibe physics with the AI.

If you look at the way people describe doing vibe physics, they’re not using the AI for the vibes. They’re providing the vibes, and the AI is supposed to check things.

And that, I can confidently say, is completely ass-backwards. The AI is a vibe machine; it is great at vibes. Substituting your own vibes will just make it worse. On the other hand, the AI is awful at checking things. It can find published papers sometimes, which can help you check something. But it is not set up to do the math, at least not unless the math can be phrased as a simple Python script or an IMO problem. In order to do anything like that, it has to call another type of software to verify. And you could have just used that software.

Theoretical physics is still not something everyone can do. Proposing a crackpot theory based on praise from ChatGPT and a list of papers it claims are related to your idea may feel more credible, and more tempting, than proposing one based on a few papers you found on Google and a couple of YouTube videos. But it’s still proposing a crackpot theory. If you want to get involved, there’s still no substitute for actually learning how physics works.

Value in Formal Theory Land

What makes a physics theory valuable?

You may think that a theory’s job is to describe reality, to be true. If that’s the goal, we have a whole toolbox of ways to assess its value. We can check if it makes predictions and if those predictions are confirmed. We can assess whether the theory can cheat to avoid the consequences of its predictions (falsifiability) and whether its complexity is justified by the evidence (Occam’s razor, and statistical methods that follow from it).

But not every theory in physics can be assessed this way.

Some theories aren’t even trying to be true. Others may hope to have evidence some day, but are clearly not there yet, either because the tests are too hard or the theory hasn’t been fleshed out enough.

Some people specialize in theories like these. We sometimes say they’re doing “formal theory”, working with the form of theories rather than whether they describe the world.

Physics isn’t mathematics. Work in formal theory is still supposed to help describe the real world. But that help might take a long time to arrive. Until then, how can formal theorists know which theories are valuable?

One option is surprise. After years tinkering with theories, a formal theorist will have some idea of which sorts of theories are possible and which aren’t. Some of this is intuition and experience, but sometimes it comes in the form of an actual “no-go theorem”, a proof that a specific kind of theory cannot be consistent.

Intuition and experience can be wrong, though. Even no-go theorems are fallible, both because they have assumptions which can be evaded and because people often assume they go further than they do. So some of the most valuable theories are valuable because they are surprising: because they do something that many experienced theorists think is impossible.

Another option is usefulness. Here I’m not talking about technology: these are theories that may or may not describe the real world and can’t be tested in feasible experiments, they’re not being used for technology! But they can certainly be used by other theorists. They can show better ways to make predictions from other theories, or better ways to check other theories for contradictions. They can be a basis that other theories are built on.

I remember, back before my PhD, hearing about the consistent histories interpretation of quantum mechanics. I hadn’t heard much about it, but I did hear that it allowed calculations that other interpretations didn’t. At the time, I thought this was an obvious improvement: surely, if you can’t choose based on observations, you should at least choose an interpretation that is useful. In practice, it doesn’t quite live up to the hype. The things it allows you to calculate are things other interpretations would say don’t make sense to ask, questions like “what was the history of the universe” instead of observations you can test like “what will I see next?” But still, being able to ask new questions has proven useful to some, and kept a community interested.

Often, formal theories are judged on vaguer criteria. There’s a notion of explanatory power, of making disparate effects more intuitively part of the same whole. There’s elegance, or beauty, which is the theorist’s Occam’s razor, favoring ideas that do more with less. And there’s pure coolness, where a bunch of nerds are going to lean towards ideas that let them play with wormholes and multiverses.

But surprise, and usefulness, feel more solid to me. If you can find someone who says “I didn’t think this was possible”, then you’ve almost certainly done something valuable. And if you can’t do that, “I’d like to use this” is an excellent recommendation too.

Hype, Incentives, and Culture

To be clear, hype isn’t just lying.

We have a word for when someone lies to convince someone else to pay them, and that word is fraud. Most of what we call hype doesn’t reach that bar.

Instead, hype lives in a gray zone of affect and metaphor.

Some hype is pure affect. It’s about the subjective details, it’s about mood. “This is amazing” isn’t a lie, or at least, isn’t a lie you can check. They might really be amazed!

Some hype relies on metaphor. A metaphor can’t really be a lie, because a metaphor is always incomplete. But a metaphor can certainly be misleading. It can associate something minor with something important, or add emotional valence that isn’t really warranted.

Hype lies in a gray zone…and precisely because it lives in a gray zone, not everything that looks like hype is intended to be hype.

We think of hype as a consequence of incentives. Scientists hype their work to grant committees to get grants, and hype it more to the public for prestige. Companies hype their products to sell them, and their business plans to draw in investors.

But what looks like hype can also be language, and culture.

To many people in the rest of the world, the way Americans talk about almost everything is hype. Everything is bigger and nicer and cooler. This isn’t because Americans are under some sort of weird extra career incentives, though. It’s just how they expect to talk, how they learned to talk, how everyone around them normally talks.

Similarly, people in different industries are used to talking differently. Depending on what work you do, you interpret different metaphors in different ways. What might seem like an enthusiastic endorsement in one industry might be dismissive in another.

In the end, it takes two to communicate: a speaker, and an audience. Speakers want to get their audience excited and, if they don’t want to hype, hopefully to help them understand something of the truth as well. That means understanding how the audience communicates enthusiasm, and how that differs from the way the speaker does. It means understanding language, and culture.

Did the South Pole Telescope Just Rule Out Neutrino Masses? Not Exactly, Followed by My Speculations

Recently, the South Pole Telescope’s SPT-3G collaboration released new measurements of the cosmic microwave background, the leftover light from the formation of the first atoms. By measuring this light, cosmologists can infer the early universe’s “shape”: how it rippled on different scales as it expanded into the universe we know today. They compare this shape to mathematical models, equations and simulations which tie together everything we know about gravity and matter, and try to see what it implies for those models’ biggest unknowns.

Some of the most interesting such unknowns are neutrino masses. We know that neutrinos have mass because they transform as they move, from one type of neutrino to another. Those transformations let physicists measure the differences between neutrino masses, but by themselves, they don’t say what the actual masses are. All we know from particle physics, at this point, is a minimum: in order for the neutrinos to differ in mass enough to transform in the way they do, the total mass of the three flavors of neutrino must be at least 0.06 electron-Volts.

(Divided by the speed of light squared to get the right units, if you’re picky about that sort of thing. Physicists aren’t.)
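If you’re curious where that 0.06 comes from, here’s a rough back-of-the-envelope version in Python. The mass-squared splittings are approximate numbers I’m pulling from memory of oscillation experiments, not anything taken from the SPT-3G paper:

```python
import math

# Oscillation experiments measure mass-squared differences, roughly:
dm21_sq = 7.4e-5   # eV^2, the "solar" splitting (approximate)
dm31_sq = 2.5e-3   # eV^2, the "atmospheric" splitting (approximate)

# Normal hierarchy, taking the lightest neutrino to be massless:
m1 = 0.0
m2 = math.sqrt(dm21_sq)
m3 = math.sqrt(dm31_sq)

print(m1 + m2 + m3)   # ~0.059 eV: the ~0.06 eV minimum quoted above
```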

Neutrinos also influenced the early universe, shaping it in a noticeably different way than heavier particles that bind together into atoms, like electrons and protons, did. That effect, observed in the cosmic microwave background and in the distribution of galaxies in the universe today, lets cosmologists calculate a maximum: if neutrinos are more massive than a certain threshold, they could not have the effects cosmologists observe.

Over time, as measurements have improved, this maximum has decreased. Now, the South Pole Telescope has added more data to the pool, and combining it with prior measurements…well, I’ll quote their paper:

Ok, it’s probably pretty hard to understand what that means if you’re not a physicist. To explain:

  1. There are two different hypotheses for how neutrino masses work, called “hierarchies”. In the “normal” hierarchy, the neutrinos come in the same order as the charged particles they interact with through the weak nuclear force: electron neutrinos are lighter than muon neutrinos, which are lighter than tau neutrinos. In the “inverted” hierarchy, they come in the opposite order, and the electron neutrino is the heaviest. Both of these are consistent with the particle-physics data.
  2. Confidence is a statistics concept, one that could take a lot of unpacking to define correctly. To give a short but likely tortured-sounding explanation: when you rule out a hypothesis with a certain confidence level, you’re saying that, if that hypothesis were true, there would only be a 100%-minus-that-confidence-level chance that you would see what you actually observed.

So, what are the folks at the South Pole Telescope saying? They’re saying that if you put all the evidence together (that’s roughly what that pile of acronyms at the beginning means), then the result would be incredibly uncharacteristic for either hypothesis for neutrino masses. If you had “normal” neutrino masses, you’d only see these cosmological observations 2.1% of the time. And if you had inverted neutrino masses instead, you’d only see these observations 0.01% of the time!

That sure makes it sound like neither hypothesis is correct, right? Does it actually mean that?

I mean, it could! But I don’t think so. Here I’ll start speculating on the possibilities, from least likely in my opinion to most likely. This is mostly my bias talking, and shouldn’t be taken too seriously.

5. Neutrinos are actually massless

This one is really unlikely. The evidence from particle physics isn’t just quantitative, but qualitative. I don’t know if it’s possible to write down a model that reproduces the results of neutrino oscillation experiments without massive neutrinos, and if it is, it would be a very bizarre model that would almost certainly break something else. This is essentially a non-starter.

4. This is a sign of interesting new physics

I mean, it would be nice, right? I’m sure there are many proposals at this point, tweaks that add a few extra fields with some hard-to-notice effects to explain the inconsistency. I can’t rule this out, and unlike the last point there isn’t anything about it that seems impossible. But we’ve had a lot of odd observations, and so far this hasn’t happened.

3. Someone did statistics wrong

This happens more often. Any argument like this is a statistical argument, and while physicists keep getting better at statistics, they’re not professional statisticians. Sometimes there’s a genuine misunderstanding that goes into testing a model, and once it gets resolved the problem goes away.

2. The issue will go away with more data

The problem could also just…go away. 97.9% confidence sounds huge…but in physics, the standards are higher: you need 99.99994% to announce a new discovery. Physicists do a lot of experiments and observations, and sometimes, they see weird things! As the measurements get more precise, we may well see the disagreement melt away, with cosmology and particle physics both pointing to the same range for neutrino masses. It’s happened to many other measurements before.
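For those who like to see the conversion, here’s a quick sanity check of how these percentages line up with the “sigmas” physicists usually quote, assuming the usual two-sided Gaussian convention (that convention is my assumption, not something I checked against the collaborations’ papers):

```python
from scipy.stats import norm

# The "5 sigma" discovery standard, two-sided:
print(1 - 2 * norm.sf(5))      # ~0.9999994, i.e. 99.99994%

# Ruling something out at 97.9% confidence means seeing data this unusual
# only 2.1% of the time, which is roughly a 2.3 sigma effect:
print(norm.isf(0.021 / 2))     # ~2.3
```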

1. We’re reaching the limits of our current approach to cosmology

This is probably not actually the most likely possibility, but it’s my list, what are you going to do?

There are basic assumptions behind how most theoretical physicists do cosmology. These assumptions are reasonably plausible, and seem to be needed to do anything at all. But they can be relaxed. Our universe looks like it’s homogeneous on the largest scales: the same density on average, in every direction you look. But the way that gets enforced in the mathematical models is very direct, and it may be that a different, more indirect, approach has more flexibility. I’ll probably be writing about this more in future, hopefully somewhere journalistic. But there are some very cool ideas floating around, gradually getting fleshed out more and more. It may be that the answer to many of the mysteries of cosmology right now is not new physics, but new mathematics: a new approach to modeling the universe.

Bonus Info on the LHC and Beyond

Three of my science journalism pieces went up last week!

(This is a total coincidence. One piece was a general explainer “held in reserve” for a nice slot in the schedule, one was a piece I drafted in February, while the third I worked on in May. In journalism, things take as long as they take.)

The shortest piece, at Quanta Magazine, was an explainer about the two types of particles in physics: bosons, and fermions.

I don’t have a ton of bonus info here, because of how tidy the topic is, so just two quick observations.

First, I have the vague impression that Bose, bosons’ namesake, is “claimed” by both modern-day Bangladesh and India. I had friends in grad school who were proud of their fellow physicist from Bangladesh, but while he did his most famous work in Dhaka, he was born and died in Calcutta. Since both regions were part of British India for most of his life, these things likely get complicated.

Second, at the end of the piece I mention a “world on a wire” where fermions and bosons are the same. One example of such a “wire” is a string, like in string theory. One thing all young string theorists learn is “bosonization”: the idea that, in a 1+1-dimensional world like a string, you can re-write any theory with fermions as a theory with bosons, as well as vice versa. This has important implications for how string theory is set up.

Next, in Ars Technica, I had a piece about how LHC physicists are using machine learning to untangle the implications of quantum interference.

As a journalist, it’s really easy to fall into a trap where you give the main person you interview too much credit: after all, you’re approaching the story from their perspective. I tried to be cautious about this, only to be stymied when literally everyone else I interviewed praised Aishik Ghosh to the skies and credited him with being the core motivating force behind the project. So I shrugged my shoulders and followed suit. My understanding is that he has been appropriately rewarded and will soon be a professor at Georgia Tech.

I didn’t list the inventors of the NSBI method that Ghosh and co. used, but names like Kyle Cranmer and Johann Brehmer tend to get bandied about. It’s a method that was originally explored for a more general goal, trying to characterize what the Standard Model might be missing, while the work I talk about in the piece takes it in a new direction, closer to the typical things the ATLAS collaboration looks for.

I also did not say nearly as much as I was tempted to about how the ATLAS collaboration publishes papers, which was honestly one of the most intriguing parts of the story for me. There is a huge amount of review that goes on inside ATLAS before one of their papers reaches the outside world, way more than there ever is in a journal’s peer review process. This is especially true for “physics papers”, where ATLAS is announcing a new conclusion about the physical world, as ATLAS’s reputation stands on those conclusions being reliable. That means starting with an “internal note” that’s hundreds of pages long (and sometimes over a thousand), an editorial board that manages the editing process, disseminating the paper to the entire collaboration for comment, and getting specific experts and institute groups within the collaboration to read through the paper in detail.

The process is a bit less onerous for “technical papers”, which describe a new method, not a new conclusion about the world. Still, it’s cumbersome enough that for those papers, often scientists don’t publish them “within ATLAS” at all, instead releasing them independently.

The results I reported on are special because they involved a physics paper and a technical paper, both within the ATLAS collaboration process. Instead of just working with partial or simplified data, they wanted to demonstrate the method on a “full analysis”, with all the computation and human coordination that requires. Normally, ATLAS wouldn’t go through the whole process of publishing a physics paper without basing it on new data, but this was different: the method had the potential to be so powerful that the more precise results would be worth stating as physics results alone.

(Also, for the people in the comments worried about training a model on old data: that’s not what they did. In physics, they don’t try to train a neural network to predict the results of colliders; such a model wouldn’t tell us anything useful. They run colliders to tell us whether what they see matches the Standard Model. The neural network is trained to predict not what the experiment will say, but what the Standard Model will say, as we can usually only figure that out through time-consuming simulations. So it’s trained on (new) simulations, not on experimental data.)
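If you want a feel for the statistical trick behind this kind of neural simulation-based inference, here’s a minimal sketch as I understand it: train a classifier to tell apart simulated events generated under two hypotheses, then use its output to estimate the likelihood ratio between them. The toy data, the logistic regression standing in for a neural network, and every number here are my own illustration, not anything from the ATLAS analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend "simulations": one observable, generated under two hypotheses.
sim_A = rng.normal(0.0, 1.0, size=(10_000, 1))   # e.g. one modeling assumption
sim_B = rng.normal(0.5, 1.0, size=(10_000, 1))   # e.g. an alternative one

X = np.vstack([sim_A, sim_B])
y = np.concatenate([np.zeros(10_000), np.ones(10_000)])

clf = LogisticRegression().fit(X, y)

# For an observed event x, s/(1-s) estimates the likelihood ratio p_B(x)/p_A(x),
# because the two training samples are the same size.
s = clf.predict_proba([[0.3]])[0, 1]
print(s / (1 - s))
```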

Finally, on Friday I had a piece in Physics Today about the European Strategy for Particle Physics (or ESPP), and in particular, plans for the next big collider.

Before I even started working on this piece, I saw a thread by Patrick Koppenburg on some of the 263 documents submitted for the ESPP update. While my piece ended up mostly focused on the big circular collider plan that most of the field is converging on (the future circular collider, or FCC), Koppenburg’s thread was more wide-ranging, meant to illustrate the breadth of ideas under discussion. Some of that discussion is about the LHC’s current plans, like its “high-luminosity” upgrade that will see it gather data at much higher rates up until 2040. Some of it is assessing broader concerns, which it may surprise some of you to learn includes sustainability: yes, there are more or less sustainable ways to build giant colliders.

The most fun part of the discussion, though, concerns all of the other collider proposals.

Some report progress on new technologies. Muon colliders are the most famous of these, but there are other proposals that would specifically help with a linear collider. I never did end up understanding what Cooled Copper Colliders are all about, beyond that they let you get more energy in a smaller machine without super-cooling. If you know about them, chime in in the comments! Meanwhile, plasma wakefield acceleration could accelerate electrons on a wave of plasma. This has the disadvantage that you want to collide electrons and positrons, and if you try to stick a positron in plasma it will happily annihilate with the first electron it meets. So what do you do? You go half-and-half, with the HALHF project: speed up the electron with a plasma wakefield, accelerate the positron normally, and have them meet in the middle.

Others are backup plans, or “budget options”, where CERN could get somewhat better measurements of some parameters if they can’t stir up the funding to measure the things they really want. They could put electrons and positrons into the LHC tunnel instead of building a new one, for a weaker machine that could still study the Higgs boson to some extent. They could use a similar experiment to produce Z bosons instead, which could serve as a bridge to a different collider project. Or, they could collide the LHC’s proton beam with an electron beam, for an experiment that mixes advantages and disadvantages of some of the other approaches.

While working on the piece, one resource I found invaluable was this colloquium talk by Tristan du Pree, where he goes through the 263 submissions and digs up a lot of interesting numbers and commentary. Read the slides for quotes from the different national inputs and “solo inputs” with comments from particular senior scientists. I used that talk to get a broad impression of what the community was feeling, and it was interesting how well it was reflected in the people I interviewed. The physicist based in Switzerland felt the most urgency for the FCC plan, while the Dutch sources were more cautious, with other Europeans firmly in the middle.

Going over the FCC report itself, one thing I decided to leave out of the discussion was the cost-benefit analysis. There’s the potential for a cute sound-bite there, “see, the collider is net positive!”, but I’m pretty skeptical of the kind of analysis they’re doing there, even if it is standard practice for government projects. Between the biggest benefits listed being industrial benefits to suppliers and early-career researcher training (is a collider unusually good for either of those things, compared to other ways we spend money?) and the fact that about 10% of the benefit is the science itself (where could one possibly get a number like that?), it feels like whatever reasoning is behind this is probably the kind of thing that makes rigor-minded economists wince. I wasn’t able to track down the full calculation though, so I really don’t know, maybe this makes more sense than it looks.

I think a stronger argument than anything along those lines is a much more basic point, about expertise. Right now, we have a community of people trying to do something that is not merely difficult, but fundamental. This isn’t like sending people to space, where many of the engineering concerns will go away when we can send robots instead. This is fundamental engineering progress in how to manipulate the forces of nature (extremely powerful magnets, high voltages) and process huge streams of data. Pushing those technologies to the limit seems like it’s going to be relevant, almost no matter what we end up doing. That’s still not putting the science first and foremost, but it feels a bit closer to an honest appraisal of what good projects like this do for the world.

Why Solving the Muon Puzzle Doesn’t Solve the Puzzle

You may have heard that the muon g-2 problem has been solved.

Muons are electrons’ heavier cousins. As spinning charged particles, they are magnetic, the strength of that magnetism characterized by a number denoted “g”. If you were to guess this number from classical physics alone, you’d conclude it should be 2, but quantum mechanics tweaks it. The leftover part, “g-2”, can be measured, and predicted, with extraordinary precision, which ought to make it an ideal test: if our current understanding of particle physics, called the Standard Model, is subtly wrong, the difference might be noticeable there.
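As a sense of scale (and nothing more), the largest quantum tweak to g is Schwinger’s famous one-loop correction, which you can reproduce in a couple of lines of Python. This is just an illustration of why g ends up slightly above 2; it is not the part of the calculation that is actually in dispute here:

```python
import math

alpha = 1 / 137.035999              # fine-structure constant
a_leading = alpha / (2 * math.pi)   # Schwinger's one-loop contribution to (g-2)/2

print(2 * (1 + a_leading))          # ~2.00232, already close to the measured value
```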

And for a while, it looked like such a difference was indeed noticeable. Extremely precise experiments over the last thirty years have consistently found a number slightly different from the extremely precise calculations, different enough that it seemed quite unlikely to be due to chance.

Now, the headlines are singing a different tune.

What changed?

That headline might make you think the change was an experimental result, a new measurement that changed the story. It wasn’t, though. There is a new, more precise measurement, but it agrees with the old measurements.

So the change has to be in the calculations, right? They did a new calculation, corrected a mistake or just pushed up their precision, and found that the Standard Model matches the experiment after all?

…sort of, but again, not really. The group of theoretical physicists associated with the experiment did release new, more accurate calculations. But it wasn’t the new calculations, by themselves, that made a difference. Instead, it was a shift in what kind of calculations they used…or even more specifically, what kind of calculations they trusted.

Parts of the calculation of g-2 can be done with Feynman diagrams, those photogenic squiggles you see on physicists’ blackboards. That part is very precise, and not especially controversial. However, Feynman diagrams only work well when forces between particles are comparatively weak. They’re great for electromagnetism, even better for the weak nuclear force. But for the strong nuclear force, the one that holds protons and neutrons together, you often need a different method.

For g-2, that used to be done via a “data-driven” method. Physicists measured different things, particles affected by the strong nuclear force in different ways, and used that to infer how the strong force would affect g-2. By getting a consistent picture from different experiments, they were reasonably confident that they had the right numbers.

Back in 2020, though, a challenger came to the scene, with another method. Called lattice QCD, this method involves building gigantic computer simulations of the effect of the strong force. People have been doing lattice QCD since the 1970s, and the simulations have been getting better and better, until, in 2020, a group managed to calculate the piece of the g-2 calculation that had until then been done by the data-driven method.

The lattice group found a very different result than what had been found previously. Instead of a wild disagreement with experiment, their calculation agreed. According to them, everything was fine, the muon g-2 was behaving exactly as the Standard Model predicted.

For some of us, that’s where the mystery ended. Clearly, something must be wrong with the data-driven method, not with the Standard Model. No more muon puzzle.

But the data-driven method wasn’t just a guess, it was being used for a reason. A significant group of physicists found the arguments behind it convincing. Now, there was a new puzzle: figuring out why the data-driven method and lattice QCD disagree.

Five years later, has that mystery been solved? Is that, finally, what the headlines are about?

Again, not really, no.

The theorists associated with the experiment have decided to trust lattice QCD, not the data-driven method. But they don’t know what went wrong, exactly.

Instead, they’ve highlighted cracks in the data-driven method. The way the data-driven method works, it brings together different experiments to try to get a shared picture. But that shared picture has started to fall apart. A new measurement by a different experiment doesn’t fit into the system: the data-driven method now “has tensions”, as physicists say. It’s no longer possible to combine all experiments into a shared picture the way they used to. Meanwhile, lattice QCD has gotten even better, reaching even higher precision. From the perspective of the theorists associated with the muon g-2 experiment, switching methods is now clearly the right call.

But does that mean they solved the puzzle?

If you were confident that lattice QCD is the right approach, then the puzzle was already solved in 2020. All that changed was the official collaboration finally acknowledging that.

And if you were confident that the data-driven method was the right approach, then the puzzle is even worse. Now, there are tensions within the method itself…but still no explanation of what went wrong! If you had good reasons to think the method should work, you still have those good reasons. Now you’re just…more puzzled.

I am reminded of another mystery, a few years back, when an old experiment announced a dramatically different measurement for the mass of the W boson. Then, I argued the big mystery was not how the W boson’s mass had changed (it hadn’t), but how they came to be so confident in a result so different from what others, also confidently, had found. In physics, our confidence is encoded in numbers, estimated and measured and tested and computed. If we’re not estimating that confidence correctly…then that’s the real mystery, the real puzzle. One much more important to solve.


Also, I had two more pieces out this week! In Quanta I have a short explainer about bosons and fermions, while at Ars Technica I have a piece about machine learning at the LHC. I may have a “bonus info” post on the latter at some point, I have to think about whether I have enough material for it.

Amplitudes 2025 This Week

Summer is conference season for academics, and this week held my old sub-field’s big yearly conference, called Amplitudes. This year, it was in Seoul at Seoul National University, the first time the conference has been in Asia.

(I wasn’t there, I don’t go to these anymore. But I’ve been skimming slides in my free time, to give you folks the updates you crave. Be forewarned that conference posts like these get technical fast, I’ll be back to my usual accessible self next week.)

There isn’t a huge amplitudes community in Korea, but it’s bigger than it was back when I got started in the field. Of the organizers, Kanghoon Lee of the Asia Pacific Center for Theoretical Physics and Sangmin Lee of Seoul National University have what I think of as “core amplitudes interests”, like recursion relations and the double-copy. The other Korean organizers are from adjacent areas, work that overlaps with amplitudes but doesn’t show up at the conference each year. There was also a sizeable group of organizers from Taiwan, where there has been a significant amplitudes presence for some time now. I do wonder if Korea was chosen as a compromise between a conference hosted in Taiwan or in mainland China, where there is also quite a substantial amplitudes community.

One thing that impresses me every year is how big, and how sophisticated, the gravitational-wave community in amplitudes has grown. Federico Buccioni’s talk began with a plot that illustrates this well (though that wasn’t his goal):

At the conference Amplitudes, dedicated to the topic of scattering amplitudes, there were almost as many talks with the phrase “black hole” in the title as there were with “scattering” or “amplitudes”! This is for a topic that did not even exist in the subfield when I got my PhD eleven years ago.

With that said, gravitational wave astronomy wasn’t quite as dominant at the conference as Buccioni’s bar chart suggests. There were a few talks each day on the topic: I counted seven in total, excluding any short talks on the subject in the gong show. Spinning black holes were a significant focus, central to Jung-Wook Kim’s, Andres Luna’s and Mao Zeng’s talks (the latter two showing some interesting links between the amplitudes story and classic ideas in classical mechanics) and relevant in several others, with Riccardo Gonzo, Miguel Correia, Ira Rothstein, and Enrico Herrmann’s talks showing not just a wide range of approaches, but an increasing depth of research in this area.

Herrmann’s talk in particular dealt with detector event shapes, a framework that lets physicists think more directly about what a specific particle detector or observer can see. He applied the idea not just to gravitational waves but to quantum gravity and collider physics as well. The latter is historically where this idea has been applied the most thoroughly, as highlighted in Hua Xing Zhu’s talk, where he used them to pick out particular phenomena of interest in QCD.

QCD is, of course, always of interest in the amplitudes field. Buccioni’s talk dealt with the theory’s behavior at high-energies, with a nice example of the “maximal transcendentality principle” where some quantities in QCD are identical to quantities in N=4 super Yang-Mills in the “most transcendental” pieces (loosely, those with the highest powers of pi). Andrea Guerreri’s talk also dealt with high-energy behavior in QCD, trying to address an experimental puzzle where QCD results appeared to violate a fundamental bound all sensible theories were expected to obey. By using S-matrix bootstrap techniques, they clarify the nature of the bound, finding that QCD still obeys it once correctly understood, and conjecture a weird theory that should be possible to frame right on the edge of the bound. The S-matrix bootstrap was also used by Alexandre Homrich, who talked about getting the framework to work for multi-particle scattering.

Heribertus Bayu Hartanto is another recent addition to Korea’s amplitudes community. He talked about a concrete calculation, two-loop five-particle scattering including top quarks, a tricky case that includes elliptic curves.

When amplitudes lead to integrals involving elliptic curves, many standard methods fail. Jake Bourjaily’s talk raised a question he has brought up again and again: what does it mean to do an integral for a new type of function? One possible answer is that it depends on what kind of numerics you can do, and since more general numerical methods can be cumbersome one often needs to understand the new type of function in more detail. In light of that, Stephen Jones’ talk was interesting in taking a common problem often cited with generic approaches (that they have trouble with the complex numbers introduced by Minkowski space) and finding a more natural way in a particular generic approach (sector decomposition) to take them into account. Giulio Salvatori talked about a much less conventional numerical method, linked to the latest trend in Nima-ology, surfaceology. One of the big selling points of the surface integral framework promoted by people like Salvatori and Nima Arkani-Hamed is that it’s supposed to give a clear integral to do for each scattering amplitude, one which should be amenable to a numerical treatment recently developed by Michael Borinsky. Salvatori can currently apply the method only to a toy model (up to ten loops!), but he has some ideas for how to generalize it, which will require handling divergences and numerators.

Other approaches to the “problem of integration” included Anna-Laura Sattelberger’s talk that presented a method to find differential equations for the kind of integrals that show up in amplitudes using the mathematical software Macaulay2, including presenting a package. Matthias Wilhelm talked about the work I did with him, using machine learning to find better methods for solving integrals with integration-by-parts, an area where two other groups have now also published. Pierpaolo Mastrolia talked about integration-by-parts’ up-and-coming contender, intersection theory, a method which appears to be delving into more mathematical tools in an effort to catch up with its competitor.

Sometimes, one is more specifically interested in the singularities of integrals than their numerics more generally. Felix Tellander talked about a geometric method to pin these down which largely went over my head, but he did have a very nice short description of the approach: “Describe the singularities of the integrand. Find a map representing integration. Map the singularities of the integrand onto the singularities of the integral.”

While QCD and gravity are the applications of choice, amplitudes methods germinate in N=4 super Yang-Mills. Ruth Britto’s talk opened the conference with an overview of progress along those lines before going into her own recent work with one-loop integrals and interesting implications of ideas from cluster algebras. Cluster algebras made appearances in several other talks, including Anastasia Volovich’s talk which discussed how ideas from that corner called flag cluster algebras may give insights into QCD amplitudes, though some symbol letters still seem to be hard to track down. Matteo Parisi covered another idea, cluster promotion maps, which he thinks may help pin down algebraic symbol letters.

The link between cluster algebras and symbol letters is an ongoing mystery where the field is seeing progress. Another symbol letter mystery is antipodal duality, where flipping an amplitude like a palindrome somehow gives another valid amplitude. Lance Dixon has made progress in understanding where this duality comes from, finding a toy model where it can be understood and proved.

Others pushed the boundaries of methods specific to N=4 super Yang-Mills, looking for novel structures. Song He’s talk pushes an older approach by Bourjaily and collaborators up to twelve loops, finding new patterns and connections to other theories and observables. Qinglin Yang bootstraps Wilson loops with a Lagrangian insertion, adding a side to the polygon used in previous efforts and finding that, much like when you add particles to amplitudes in a bootstrap, the method gets stricter and more powerful. Jaroslav Trnka talked about work he has been doing with “negative geometries”, an odd method descended from the amplituhedron that looks at amplitudes from a totally different perspective, probing a bit of their non-perturbative data. He’s finding more parts of that setup that can be accessed and re-summed, finding interestingly that multiple-zeta-values show up in quantities where we know they ultimately cancel out. Livia Ferro also talked about a descendant of the amplituhedron, this time for cosmology, getting differential equations for cosmological observables in a particular theory from a combinatorial approach.

Outside of everybody’s favorite theories, some speakers talked about more general approaches to understanding the differences between theories. Andreas Helset covered work on the geometry of the space of quantum fields in a theory, applying the method to a general framework for characterizing deviations from the standard model called the SMEFT. Jasper Roosmale Nepveu also talked about a general space of theories, thinking about how positivity (a trait linked to fundamental constraints like causality and unitarity) gets tangled up with loop effects, and the implications this has for renormalization.

Soft theorems, universal behavior of amplitudes when a particle has low energy, continue to be a trendy topic, with Silvia Nagy showing how the story continues to higher orders and Sangmin Choi investigating loop effects. Callum Jones talked about one of the more powerful results from the soft limit, Weinberg’s theorem showing the uniqueness of gravity. Weinberg’s proof was set up in Minkowski space, but we may ultimately live in curved, de Sitter space. Jones showed how the ideas Weinberg explored generalize in de Sitter, using some tools from the soft-theorem-inspired field of dS/CFT. Julio Parra-Martinez, meanwhile, tied soft theorems to another trendy topic, higher symmetries, a more general notion of the usual types of symmetries that physicists have explored in the past. Lucia Cordova reported work that was not particularly connected to soft theorems but was connected to these higher symmetries, showing how they interact with crossing symmetry and the S-matrix bootstrap.

Finally, a surprisingly large number of talks linked to Kevin Costello and Natalie Paquette’s work with self-dual gauge theories, where they found exact solutions from a fairly mathy angle. Paquette gave an update on her work on the topic, while Alfredo Guevara talked about applications to black holes, comparing the power of expanding around a self-dual gauge theory to that of working with supersymmetry. Atul Sharma looked at scattering in self-dual backgrounds in work that merges older twistor space ideas with the new approach, while Roland Bittelson talked about calculating around an instanton background.


Also, I had another piece up this week at FirstPrinciples, based on an interview with the (outgoing) president of the Sloan Foundation. I won’t have a “bonus info” post for this one, as most of what I learned went into the piece. But if you don’t know what the Sloan Foundation does, take a look! I hadn’t known they funded Jupyter notebooks and Hidden Figures, or that they introduced Kahneman and Tversky.

Bonus info for Reversible Computing and Megastructures

After some delay, a bonus info post!

At FirstPrinciples.org, I had a piece covering work by engineering professor Colin McInnes on stability of Dyson spheres and ringworlds. This was a fun one to cover, mostly because of how it straddles the borderline between science fiction and practical physics and engineering. McInnes’s claim to fame is work on solar sails, which seem like a paradigmatic example of that kind of thing: a common sci-fi theme that’s surprisingly viable. His work on stability was interesting to me because it’s the kind of work that a century and a half ago would have been paradigmatic physics. Now, though, very few physicists work on orbital mechanics, and a lot of the core questions have passed on to engineering. It’s fascinating to see how these classic old problems can still have undiscovered solutions, and how the people best equipped to find them now are tinkerers practicing their tools instead of cutting-edge mathematicians.

At Quanta Magazine, I had a piece about reversible computing. Readers may remember I had another piece on that topic at the end of March, a profile of the startup Vaire Computing at FirstPrinciples.org. The post I wrote about that piece at the time talked mostly about FirstPrinciples itself, and didn’t say much about reversible computing. I figured I’d combine the “bonus info” for both pieces here.

Neither piece went into much detail about the engineering involved, as it didn’t really make sense in either venue. One thing that amused me a bit is that the core technology that drove Vaire into action is something that actually should be very familiar to a physics or engineering student: a resonator. Theirs is obviously quite a bit more sophisticated than the base model, but at its heart it’s doing the same thing: storing charge and controlling frequency. It turns out that those are both essential to making reversible computers work: you need to store charge so it isn’t lost to ground when you empty a transistor, and you need to control the frequency so you can have waves with gentle transitions instead of the more sharp corners of the waves used in normal computers, thus wasting less heat in rapid changes of voltage. Vaire recently announced they’re getting 50% charge recovery from their test chips, and they’re working on raising that number.
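To see why those gentle transitions matter, here’s a back-of-the-envelope comparison of abrupt versus slow (“adiabatic”) charging of a capacitive node. The resistance, capacitance, voltage, and ramp time are numbers I made up for illustration, not anything from Vaire:

```python
R = 1e3      # ohms, effective series resistance (assumed)
C = 1e-15    # farads, node capacitance (assumed)
V = 1.0      # volts, logic swing (assumed)
T = 1e-9     # seconds, duration of the slow voltage ramp (assumed)

E_abrupt = 0.5 * C * V**2              # energy lost charging through R in one sharp step
E_adiabatic = (R * C / T) * C * V**2   # standard estimate for a slow linear ramp (T >> RC)

print(E_abrupt, E_adiabatic)           # the slow ramp wastes ~500x less energy here
```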

Originally, the Quanta piece was focused more on reversible programming than energy use, as the energy angle seemed a bit more physics-focused than their computer science desk usually goes. The emphasis ended up changing as I worked on the draft, but it meant that an interesting parallel story got lost on the cutting-room floor. There’s a community of people who study reversible computing not from the engineering side, but from the computer science side, studying reversible logic and reversible programming languages. It’s a pursuit that goes back to the 1980s, when, at Caltech, around the time Feynman was teaching his course on the physics of computing, a group of students was figuring out how to set up a reversible programming language. They called their creation Janus, and sent it to Landauer; the letter ended up with Michael Frank after Landauer died. There’s a lovely quote from it regarding their motivation: “We did it out of curiosity over whether such an odd animal as this was possible, and because we were interested in knowing where we put information when we programmed. Janus forced us to pay attention to where our bits went since none could be thrown away.”

Being forced to pay attention to information, in turn, is what has animated the computer science side of the reversible computing community. There are applications to debugging, where you can run code backwards when it gets stuck, to encryption and compression, where you want to be able to recover the information you hid away, and to security, where you want to keep track of information to make sure a hacker can’t figure out things they shouldn’t. Also, for a lot of these people, it’s just a fun puzzle. Early on my attention was caught by a paper by Hannah Earley describing a programming language called Alethe, a word you might recognize from the Greek word for truth, which literally means something like “not-forgetting”.

(Compression is particularly relevant for the “garbage data” you need to output in a reversible computation. If you want to add two numbers reversibly, naively you need to keep both input numbers and their output, but you can be more clever than that and just keep one of the inputs since you can subtract to find the other. There are a lot of substantially more clever tricks in this vein people have figured out over the years.)
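Since the addition example is easy to make concrete, here’s what it looks like in ordinary Python. This is just the idea, not how a reversible language like Janus would actually express it:

```python
def add_forward(a, b):
    return a, a + b      # (a, b) -> (a, a + b): keep one input plus the sum

def add_backward(a, s):
    return a, s - a      # subtract to recover b, so nothing was thrown away

a, b = 7, 35
a2, s = add_forward(a, b)
assert add_backward(a2, s) == (7, 35)   # runs backwards exactly
```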

I didn’t say anything about the other engineering approaches to reversible computing, the ones that try to do something outside of traditional computer chips. There’s DNA computing, which tries to compute with a bunch of DNA in solution. There’s the old concept of ballistic reversible computing, where you imagine a computer that runs like a bunch of colliding billiard balls, conserving energy. Coordinating such a computer can be a nightmare, and early theoretical ideas were shown to be disrupted by something as tiny as a few stray photons from a distant star. But people like Frank figured out ways around the coordination problem, and groups have experimented with superconductors as places to toss those billiard balls around. The early billiard-inspired designs also had a big impact on quantum computing, where you need reversible gates and the only irreversible operation is the measurement. The name “Toffoli” comes up a lot in quantum computing discussions; I hadn’t known before this that Toffoli gates were originally for reversible computing in general, not specifically quantum computing.

Finally, I only gestured at the sci-fi angle. For reversible computing’s die-hards, it isn’t just a way to make efficient computers now. It’s the ultimate future of the technology, the kind of energy-efficiency civilization will need when we’re covering stars with shells of “computronium” full of busy joyous artificial minds.

And now that I think about it, they should chat with McInnes. He can tell them the kinds of stars they should build around.

Branching Out, and Some Ground Rules

In January, my time at the Niels Bohr Institute ended. Instead of supporting myself by doing science, as I’d done the last thirteen or so years, I started making a living by writing, doing science journalism.

That work picked up. My readers here have seen a few of the pieces already, but there are lots more in the pipeline, getting refined by editors or waiting to be published. It’s given me a bit of income, and a lot of visibility.

That visibility, in turn, has given me new options. It turns out that magazines aren’t the only companies interested in science writing, and journalism isn’t the only way to write for a living. Companies that invest in science want a different kind of writing, one that builds their reputation both with the public and with the scientific community. And as I’ve discovered, if you have enough of a track record, some of those companies will reach out to you.

So I’m branching out, from science journalism to science communications consulting, advising companies how to communicate science. I’ve started working with an exciting client, with big plans for the future. If you follow me on LinkedIn, you’ll have seen a bit about who they are and what I’ll be doing for them.

Here on the blog, I’d like to maintain a bit more separation. Blogging is closer to journalism, and in journalism, one ought to be careful about conflicts of interest. The advice I’ve gotten is that it’s good to establish some ground rules, separating my communications work from my journalistic work, since I intend to keep doing both.

So without further ado, my conflict of interest rules:

  • I will not write in a journalistic capacity about my consulting clients, or their direct competitors.
  • I will not write in a journalistic capacity about the technology my clients are investing in, except in extremely general terms. (For example, most businesses right now are investing in AI. I’ll still write about AI in general, but not about any particular AI technologies my clients are pursuing.)
  • I will more generally maintain a distinction between areas I cover journalistically and areas where I consult. Right now, this means I avoid writing in a journalistic capacity about:
    • Health/biomedical topics
    • Neuroscience
    • Advanced sensors for medical applications

I plan to update these rules over time as I get a better feeling for what kinds of conflict of interest risks I face and what my clients are comfortable with. I now have a Page for this linked in the top menu; clients and editors can check there to see my current conflict of interest rules.