
In Scientific American, With a Piece on Vacuum Decay

I had a piece in Scientific American last week. It’s paywalled, but if you’re a subscriber you can read it there, or you can buy the print magazine.

(I also had two pieces out in other outlets this week. I’ll be saying more about them…in a couple weeks.)

The Scientific American piece is about an apocalyptic particle physics scenario called vacuum decay, a topic I covered last year in Quanta Magazine. It’s an unlikely event in which the Higgs field, which gives fundamental particles their mass, changes value, suddenly making all other particles much more massive and changing physics as we know it. It’s a change that physicists think would start as a small bubble and spread at (almost) the speed of light until it covered the universe.

What I wrote for Quanta was a short news piece covering a small adjustment to the calculation, one that made the chance of vacuum decay slightly more likely. (But still mind-bogglingly small, to be clear.)

Scientific American asked for a longer piece, and that gave me space to dig deeper. I was able to say more about how vacuum decay works, with a few metaphors that I think should make it a lot easier to understand. I also got to learn about some new developments, in particular, an interesting story about how tiny primordial black holes could make vacuum decay dramatically more likely.

One thing that was a bit too complicated to talk about was the set of puzzles involved in trying to calculate these chances. In the article, I mention a calculation of the chance of vacuum decay by a team including Matthew Schwartz. That calculation wasn’t the first to estimate the chance of vacuum decay, and it isn’t the most recent update either. Instead, I picked it because Schwartz’s team approached the question in what struck me as a more reliable way, trying to cut through confusion by asking the most basic question you can ask in a quantum theory: given that you observe X now, what’s the chance that you observe Y later? Figuring out how to correctly turn vacuum decay into that kind of question is tricky (for example, you need to include the possibility that vacuum decay happens, then reverses, then happens again).
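For readers who want a bit more of the machinery, here is the standard starting point for such estimates. This is the textbook semiclassical (Coleman) picture, not the specifics of the Schwartz team’s approach: the decay rate per unit volume is controlled by the Euclidean action B of a “bounce” solution,

\[
\frac{\Gamma}{V} \;\approx\; A\, e^{-B},
\qquad
P_{\text{no decay}} \;\approx\; \exp\!\left(-\frac{\Gamma}{V}\,\mathcal{V}_4\right),
\]

where A is a dimensionful prefactor and \(\mathcal{V}_4\) is the spacetime four-volume in question. For the Standard Model Higgs the exponent B is enormous, which is why the predicted chance is so mind-bogglingly small, and also why the answer shifts exponentially under the kinds of small corrections my Quanta piece covered.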

The calculations of how black holes speed things up haven’t been worked out in quite as much detail. I like to think I’ve made a small contribution by motivating the researchers behind them to look at Schwartz’s work, which might spawn a more rigorous calculation in the future. When I talked to Schwartz, he wasn’t even sure whether the picture of a bubble forming in one place and spreading at light speed is correct: he’d calculated the chance of the initial decay, but hadn’t found a similarly rigorous way to think about the aftermath. So on top of the uncertainty I talk about in the piece, the questions about new physics and probability, there is even some doubt about whether the whole picture really works the way we’ve been imagining it.

That makes for a murky topic! But it’s also a flashy one, a compelling story for science fiction and the public imagination, and yeah, another motivation to get high-precision measurements of the Higgs and top quark from future colliders! (If maybe not quite the way this guy said it.)

Publishing Isn’t Free, but SciPost Makes It Cheaper

I’ve mentioned SciPost a few times on this blog. They’re an open journal in every sense you could think of: diamond open-access scientific publishing on an open-source platform, run with open finances. They even publish their referee reports. They’re aiming to cover not just a few subjects, but a broad swath of academia, publishing scientists’ work in the most inexpensive and principled way possible and challenging the dominance of for-profit journals.

And they’re struggling.

SciPost doesn’t charge university libraries for access: anyone can read their articles for free. And they don’t charge authors Article Processing Charges (APCs): anyone can publish for free. All they do is keep track of which institutions their authors are affiliated with, calculate what fraction of SciPost’s total costs each institution is responsible for, and post the result in a nice searchable list on their website.
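As a toy illustration of that bookkeeping (my own sketch, with made-up names and numbers, not SciPost’s actual algorithm): split each article’s cost evenly among its authors, and credit each author’s slice to their institution.

    from collections import defaultdict

    # Toy sketch of proportional cost attribution (hypothetical, not
    # SciPost's actual code). Each article's cost is split evenly among
    # its authors, and each author's slice is credited to their institution.
    def attribute_costs(articles, cost_per_article=500.0):
        totals = defaultdict(float)
        for affiliations in articles:  # one list of author affiliations per article
            share = cost_per_article / len(affiliations)
            for institution in affiliations:
                totals[institution] += share
        return dict(totals)

    articles = [
        ["Uni A", "Uni B"],           # 250 each
        ["Uni A"],                    # Uni A alone owes 500
        ["Uni B", "Uni C", "Uni C"],  # Uni C has two of the three authors
    ]
    print(attribute_costs(articles))
    # {'Uni A': 750.0, 'Uni B': 416.66..., 'Uni C': 333.33...}

The real version has wrinkles this sketch ignores, like the fact that the total isn’t fully known until the year ends, but the proportional idea is the same.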

And amazingly, for the last nine years, they’ve been making that work.

SciPost encourages institutions to pay their share, mostly by encouraging authors to bug their bosses until they do. SciPost will also quite happily accept more than an institution’s share, and a few generous institutions do just that, which is what has kept them afloat so far. But since nothing compels anyone to pay, most organizations simply don’t.

From an economist’s perspective, this is that most basic of problems, the free-rider problem. People want scientific publication to be free, but it isn’t. Someone has to pay, and if you don’t force someone to do it, then the few who pay will be exploited by the many who don’t.

There’s more worth saying, though.

First, it’s worth pointing out that SciPost isn’t paying the same cost everyone else pays to publish. SciPost has a stripped-down system, without any physical journals or much in-house copyediting, based entirely on their own open-source software. As a result, they pay about 500 euros per article. Compare this to the fees negotiated by particle physics’ SCOAP3 agreement, which average closer to 1000 euros, and bear in mind that those fees are on the low end: for-profit journals tend to set their APCs higher in order to, well, make a profit.

(By the way, while it’s tempting to think of for-profit journals as greedy, I think it’s better to think of them as not cost-effective. Profit is an expense, like the interest on a loan: a payment to investors in exchange for capital used to set up the business. The thing is, online journals don’t seem to need that kind of capital, especially when they’re based on code written by academics in their spare time. So they can operate more cheaply as nonprofits.)

So when an author publishes in SciPost instead of a journal with APCs, they’re saving someone money, typically their institution or their grant. This would happen even if their institution paid its share of SciPost’s costs. (But then it would pay something rather than nothing, hence the free-rider problem.)

What if the author would instead have published in a closed-access journal, the kind where you have to pay to read the articles and university libraries pay through the nose for access? Then you don’t save any money at all: your library still has to pay for the journal. You only save money if everybody at the institution stops using the journal. This one is instead a collective action problem.

Collective action problems are hard, and don’t often have obvious solutions. Free-rider problems do suggest an obvious solution: why not just charge?

In SciPost’s case, there are philosophical commitments involved. Their desire to attribute costs transparently and equally means dividing a journal’s cost among all its authors’ institutions, a cost only fully determined at the end of the year, which doesn’t make for an easy invoice.

More to the point, though, charging to publish is directly against what the Open Access movement is about.

That takes some unpacking, because of course, someone does have to pay. It probably seems weird to argue that institutions shouldn’t have to pay charges to publish papers…instead, they should pay to publish papers.

SciPost itself doesn’t go into detail about this, but despite how weird it sounds when put like I just did, there is a difference. Charging a fee to publish means that anyone who publishes needs to pay a fee. If you’re working in a developing country on a shoestring budget, too bad, you have to pay the fee. If you’re an amateur mathematician who works in a truck stop and just puzzled through something amazing, too bad, you have to pay the fee.

Instead of charging a fee, SciPost asks for support. I have to think that part of the reason is that they want some free riders. There are some people who would absolutely not be able to participate in science without free riding, and we want their input nonetheless. That means to support them, others need to give more. It means organizations need to think about SciPost not as just another fee, but as a way they can support the scientific process as a whole.

That’s how other things work, like the arXiv. They get support from big universities and organizations and philanthropists, not from literally everyone. It seems a bit weird to do that for a single scientific journal among many, though, which I suspect is part of why institutions are reluctant to do it. But for a journal that can save money like SciPost, maybe it’s worth it.

Post on the Weak Gravity Conjecture for FirstPrinciples.org

I have another piece this week on the FirstPrinciples.org Hub. If you’d like to know who they are, I say a bit about my impressions of them in my post on the last piece I had there. They’re still finding their niche, so there may be shifts in the kind of content they cover over time, but for now they’ve given me an opportunity to cover a few topics that are off the beaten path.

This time, the piece is what we in the journalism biz call an “explainer”. Instead of interviewing people about cutting-edge science, I wrote a piece to explain an older idea. It’s an idea that’s pretty cool, in a way I think a lot of people can actually understand: a black hole puzzle that might explain why gravity is the weakest force. It’s an idea that’s had an enormous influence, both in the string theory world where it originated and on people speculating more broadly about the rules of quantum gravity. If you want to learn more, read the piece!

Since I didn’t interview anyone for this piece, I don’t have the same sort of “bonus content” I sometimes give. Instead of interviewing, I brushed up on the topic, and the best resource I found was this review article written by Dan Harlow, Ben Heidenreich, Matthew Reece, and Tom Rudelius. It gave me a much better idea of the subtleties: how many different ways there are to interpret the original conjecture, and how different attempts to build on it reflect different facets and highlight different implications. If you are a physicist curious what the whole thing is about, I recommend reading that review: while I try to give a flavor of some of the subtleties, a piece for a broad audience can only do so much.

There Is No Shortcut to Saying What You Mean

Blogger Andrew Oh-Willeke of Dispatches from Turtle Island pointed me to an editorial in Science about the phrase “scientific consensus”.

The editorial argues that by referring to conclusions like the existence of climate change or vaccine safety as “the scientific consensus”, communicators have inadvertently fanned the flames of distrust. By emphasizing agreement between scientists, the phrase “scientific consensus” leaves open the question of how that consensus was reached. More conspiracy-minded people imagine shady backroom deals and corrupt payouts, while the more realistic blame incentives and groupthink. If you disagree with “the scientific consensus”, you may thus decide the best way forward is to silence those pesky scientists.

(The link to current events is left as an exercise to the reader, to comment on elsewhere. As usual, please no explicit discussion of politics on this blog!)

Instead of “scientific consensus”, the editorial suggests another term: “convergence of evidence”. The idea is that by centering the evidence instead of the scientists, the phrase would make it clear that these conclusions are justified by something more than social pressures, and will remain even if the scientists promoting them are silenced.

Oh-Willeke pointed me to another blog post responding to the editorial, which has a nice discussion of how the terms were used historically, showing their popularity over time. “Convergence of evidence” was more popular in the 1950’s, with a small surge in the late 90’s and early 2000’s. “Scientific consensus” rose in the 1980’s and 90’s, lining up with a time when social scientists were skeptical about science’s objectivity and wanted to explore the social reasons why scientists come to agreement. It then fell around the year 2000, before rising again, this time used instead by professional groups of scientists to emphasize their agreement on issues like climate change.

(The blog post then goes on to try to motivate the word “consilience” instead, on the rather thin basis that “convergence of evidence” isn’t interdisciplinary enough, which seems like a pretty silly objection. “Convergence” implies coming in from multiple directions, it’s already interdisciplinary!)

I appreciate “convergence of evidence”; it seems like a useful phrase. But I think the editorial is working from the wrong perspective in trying to argue for which terms “we should use” in the first place.

Sometimes, as a scientist or an organization or a journalist, you want to emphasize evidence. Is it “a preponderance of evidence”, most but not all? Is it “overwhelming evidence”, evidence so powerful it is unlikely to ever be defeated? Or is it a “convergence of evidence”, evidence that came in slowly from multiple paths, each independent route making a coincidence that much less likely?

But sometimes, you want to emphasize the judgement of the scientists themselves.

Sometimes when scientists agree, they’re working not from evidence but from personal experience: feelings about which kinds of research pan out and which don’t, or shared philosophies that sit deep in how they conceive of their discipline. Describing physicists’ reasons for expecting supersymmetry before the LHC turned on as a convergence of evidence would be inaccurate. Describing it as having been a (not unanimous) consensus gets much closer to the truth.

Sometimes, scientists do have evidence, but as a journalist, you can’t evaluate its strength. You note some controversy, you can follow some of the arguments, but ultimately you have to be honest about how you got the information. And sometimes, that will be because it’s what most of the responsible scientists you talked to agreed on: scientific consensus.

As science communicators, we care about telling the truth (as much as we ever can, at any rate). As a result, we cannot adopt blanket rules of thumb. We cannot say, “we as a community are using this term now”. The only responsible thing we can do is to think about each individual word. We need to decide what we actually mean, to read widely and learn from experience, to find which words express our case in a way that is both convincing and accurate. There’s no shortcut to that, no formula where you just “use the right words” and everything turns out fine. You have to do the work, and hope it’s enough.

Experiments Should Be Surprising, but Not Too Surprising

People are talking about colliders again.

This year, the European particle physics community is updating its shared plan for the future, the European Strategy for Particle Physics. A raft of proposals at the end of March stirred up a wave of public debate, focused on what sort of new particle collider should be built, and on the potential reasons why.

That discussion, in turn, has got me thinking about experiments, and how they’re justified.

The purpose of experiments, and of science in general, is to learn something new. The more sure we are of something, the less reason there is to test it. Scientists don’t check whether the Sun rises every day. Like everyone else, they assume it will rise, and use that knowledge to learn other things.

You want your experiment to surprise you. But try to design an experiment to surprise you, and you run into a contradiction.

Suppose that every morning, you check whether the Sun rises. If it doesn’t, you will really be surprised! You’ll have made the discovery of the century! That’s a really exciting payoff, grant agencies should be lining up to pay for…

Well, is that actually likely to happen, though?

The same reasons it would be surprising if the Sun stopped rising are reasons why we shouldn’t expect the Sun to stop rising. A sunrise-checking observatory has incredibly high potential scientific reward…but an absurdly low chance of giving that reward.

Ok, so you can re-frame your experiment. You’re not hoping the Sun won’t rise, you’re observing the sunrise. You expect it to rise, almost guaranteed, so your experiment has an almost guaranteed payoff.

But what a small payoff! You saw exactly what you expected, there’s no science in that!

By either criterion, the “does the Sun rise” observatory is a stupid experiment. Real experiments operate in between the two extremes. They also mix motivations. Together, that leads to some interesting tensions.
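One way to make that tension concrete (my own toy framing, not an argument from the strategy debate itself) is to treat an experiment as a yes/no question and compute its expected information: a near-certain outcome carries almost no information, while a genuine coin-flip carries the most.

    import math

    # Shannon entropy of a yes/no experiment: the average number of bits
    # you expect to learn from seeing the outcome, given probability p
    # that the answer is "yes".
    def expected_information(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    print(expected_information(0.5))   # genuinely uncertain: 1.0 bit
    print(expected_information(1e-9))  # "will the Sun rise?": ~3e-8 bits

By this measure, the sunrise observatory scores essentially zero no matter how you frame it. It’s only a toy metric, of course: real experiments also weigh how much a surprising answer would matter, not just how unpredictable it is.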

What was the purpose of the Large Hadron Collider?

There were a few things physicists were pretty sure of, when they planned the LHC. Previous colliders had measured W bosons and Z bosons, and their properties made it clear that something was missing. If you could collide protons with enough energy, physicists were pretty sure you’d see the missing piece. Physicists had a reasonably plausible story for that missing piece, in the form of the Higgs boson. So physicists could be pretty sure they’d see something, and reasonably sure it would be the Higgs boson.

If physicists expected the Higgs boson, what was the point of the experiment?

First, physicists expected to see the Higgs boson, but they didn’t expect it to have the mass that it did. In fact, they didn’t know anything about the particle’s mass, besides that it should be low enough that the collider could produce it, and high enough that it hadn’t been detected before. The specific number? That was a surprise, and an almost-inevitable one. A rare creature, an almost-guaranteed scientific payoff.

I say almost, because there was a second point. The Higgs boson didn’t have to be there. In fact, it didn’t have to exist at all. There was a much bigger potential payoff, of noticing something very strange, something much more complicated than the straightforward theory most physicists had expected.

(Many people also argued for another almost-guaranteed payoff, and that got a lot more press. People talked about finding the origin of dark matter by discovering supersymmetric particles, which they argued was almost guaranteed due to a principle called naturalness. This is very important for understanding the history…but it’s an argument that many people feel has failed, and that isn’t showing up much anymore. So for this post, I’ll leave it to the side.)

This mix, of a guaranteed small surprise and the potential for a very large surprise, was a big part of what made the LHC make sense. The mix has changed a bit for people considering a new collider, and it’s making for a rougher conversation.

Like the LHC, most of the new collider proposals have a guaranteed payoff. The LHC could measure the mass of the Higgs; these new colliders would measure its “couplings”: how strongly it influences other particles and forces.

Unlike the LHC, though, this guarantee is not a guaranteed surprise. Before building the LHC, we did not know the mass of the Higgs, and we could not predict it. Now, on the other hand, we absolutely can predict the couplings of the Higgs. We have quite precise numbers: expectations for what the couplings should be, based on a theory that has so far proven quite successful.
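As a quick worked example of what that prediction looks like (this is the textbook Standard Model, nothing specific to any one proposal): the Higgs coupling to a fermion is fixed by the fermion’s measured mass and the Higgs field’s vacuum expectation value, \(v \approx 246\) GeV,

\[
y_f = \frac{\sqrt{2}\, m_f}{v},
\qquad
y_t \approx \frac{\sqrt{2} \times 173\ \mathrm{GeV}}{246\ \mathrm{GeV}} \approx 0.99,
\]

so once the masses are known, the couplings come for free, and any measured deviation would be a genuine surprise.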

We aren’t certain, of course, just like physicists weren’t certain before. The Higgs boson might have many surprising properties, things that contradict our current best theory and usher in something new. These surprises could genuinely tell us something about some of the big questions, from the nature of dark matter to the universe’s balance of matter and antimatter to the stability of the laws of physics.

But of course, they also might not. We no longer have that rare creature, a guaranteed mild surprise, to hedge in case the big surprises fail. We have guaranteed observations, and experimenters will happily tell you about them…but no guaranteed surprises.

That’s a strange position to be in. And I’m not sure physicists have figured out what to do about it.