About the OpenAI Amplitudes Paper, but Not as Much as You’d Like

I’ve had a bit more time to dig into the paper I mentioned last week, where OpenAI collaborated with amplitudes researchers, using one of their internal models to find and prove a simplified version of a particle physics formula. I figured I’d say a bit about my own impressions from reading the paper and OpenAI’s press release.

This won’t be a real “deep dive”, though it will be long nonetheless. As it turns out, most of the questions I’d like answers to aren’t answered in the paper or the press release. Getting them will involve actual journalistic work, i.e. blocking off time to interview people, and I haven’t done that yet. What I can do is talk about what I know so far, and what I’m still wondering.

Context:

Scattering amplitudes are formulas used by particle physicists to make predictions. For a while, people would just calculate these when they needed them, writing down pages of mess into which you could plug numbers to get answers. However, forty years ago two physicists decided they wanted more, writing “we hope to obtain a simplified form for the answer, making our result not only an experimentalist’s, but a theorist’s delight.”

In their next paper, they managed to find that “theorist’s delight”: a simplified, intuitive-looking answer that worked for calculations involving any number of particles, summarizing many different calculations. Ten years later, a few people had started building on it, and ten years after that, the big shots started paying attention. A whole subfield, “amplitudeology”, grew from that seed, finding new forms of “theorist’s delight” in scattering amplitudes.

Each subfield has its own kind of “theory of victory”, its own concept for what kind of research is most likely to yield progress. In amplitudes, it’s these kinds of simplifications. When they work out well, they yield new, more efficient calculation techniques, which produce new messy results that can be simplified once more. To one extent or another, most of the field is chasing after those situations when simplification works out well.

That motivation shapes both the most ambitious projects of senior researchers, and the smallest student projects. Students often spend enormous amounts of time looking for a nice formula for something and figuring out how to generalize it, often on a question suggested by a senior researcher. These projects mostly serve as training, but occasionally manage to uncover something more impressive and useful, an idea others can build around.

I’m mentioning all of this, because as far as I can tell, what ChatGPT and the OpenAI internal model contributed here roughly lines up with the roles students have on amplitudes papers. In fact, it’s not that different from the role one of the authors, Alfredo Guevara, had when I helped mentor him during his Master’s.

Senior researchers noticed something unusual, suggested by prior literature. They decided to work out the implications, did some calculations, and got some messy results. It wasn’t immediately clear how to clean up the results, or generalize them. So they waited, and eventually were contacted by someone eager for a research project, who did the work to get the results into a nice, general form. Then everyone publishes together on a shared paper.

How impressed should you be?

I said, “as far as I can tell” above. What’s annoying is that this paper makes it hard to tell.

If you read through the paper, they mention AI briefly in the introduction, saying they used GPT-5.2 Pro to conjecture formula (39) in the paper, and an OpenAI internal model to prove it. The press release actually goes into more detail, saying that the humans found formulas (29)-(32), and GPT-5.2 Pro found a special case where it could simplify them to formulas (35)-(38), before conjecturing (39). You can get even more detail from an X thread by one of the authors, OpenAI Research Scientist Alex Lupsasca. Alex had done his PhD with another one of the authors, Andrew Strominger, and was excited to apply the tools he was developing at OpenAI to his old research field. So they looked for a problem, and tried out the one that ended up in the paper.

What is missing, from the paper, press release, and X thread, is any real detail about how the AI tools were used. We don’t have the prompts, or the output, or any real way to assess how much input came from humans and how much from the AI.

(We have more for their follow-up paper, where Lupsasca posted a transcript of the chat.)

Contra some commentators, I don’t think the authors are being intentionally vague here. They’re following business as usual. In a theoretical physics paper, you don’t list who did what, or take detailed account of how you came to the results. You clean things up, and create a nice narrative. This goes double if you’re aiming for one of the most prestigious journals, which tend to have length limits.

This business-as-usual approach is ok, if frustrating, for the average physics paper. It is, however, entirely inappropriate for a paper showcasing emerging technologies. For a paper OpenAI was going to promote this prominently, the question of how they reached their conclusion is much more interesting than the results themselves. And while I wouldn’t ask them to meet the standards of an actual AI paper, with ablation analysis and all that jazz, they could at least have aimed for the level of detail of my final research paper, which gave samples of the AI input and output used in its genetic algorithm.

For the moment, then, I have to guess what input the AI had, and what it actually accomplished.

Let’s focus on the work done by the internal OpenAI model. The descriptions I’ve seen suggest that it started where GPT-5.2 Pro did, with formulas (29)-(32), but with a more specific prompt that guided what it was looking for. It then ran for 12 hours with no additional input, and both conjectured (39) and proved it was correct, providing essentially the proof that follows formula (39) in the paper.

Given that, how impressed should we be?

First, the model needs to decide to go to a specialized region, instead of trying to simplify the formula in full generality. I don’t know whether they prompted their internal model explicitly to do this. It’s not something I’d expect a student to do, because students don’t know what types of results are interesting enough to get published, so they wouldn’t be confident in computing only a limited version of a result without an advisor telling them it was ok. On the other hand, it is actually something I’d expect an LLM to be unusually likely to do, as a result of not managing to consistently stick to the original request! What I don’t know is whether the LLM proposed this for the right reason: that if you have the formula for one region, you can usually find it for other regions.

Second, the model needs to take formulas (29)-(32), write them in the specialized region, and simplify them to formulas (35)-(38). I’ve seen a few people saying you can do this pretty easily with Mathematica. That’s true, though not every senior researcher is comfortable doing that kind of thing, as you need to be a bit smarter than just using the Simplify[] command. Most of the people on this paper strike me as pen-and-paper types who wouldn’t necessarily know how to do that. It’s definitely the kind of thing I’d expect most students to figure out, perhaps after a couple of weeks of flailing around if it’s their first crack at it. The LLM likely would not have used Mathematica, but would have used SymPy, since these “AI scientist” setups usually can write and execute Python code. You shouldn’t think of this as the AI reasoning through the calculation itself, but it at least sounds like it was reasonably quick at coding it up.
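To give a sense of what that kind of computer-assisted simplification looks like, here is a minimal sketch in SymPy. The paper’s actual formulas, the specialized region, and the prompts are not public, so the expression, the “region”, and the workflow below are all invented for illustration only.

```python
# Hypothetical sketch: we don't know the paper's actual formulas, so this
# uses a made-up "messy" expression to illustrate the kind of SymPy
# workflow an LLM-driven "AI scientist" setup might code up.
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# A deliberately messy stand-in for a formula like (29)-(32).
messy = (s**2 + 2*s*t + t**2) / (s + t) - t

# "Specialize to a region": here, a made-up limit t -> 0.
specialized = sp.limit(messy, t, 0)

# A bare Simplify[]-style call often isn't enough in Mathematica either;
# in SymPy you similarly combine simplify, cancel, factor, and friends.
cleaned = sp.simplify(sp.cancel(messy))

print(specialized)  # s
print(cleaned)      # s
```

The real work, of course, is in choosing the right specialization and recognizing when the cleaned-up form is meaningful, which is exactly the part the toy example leaves out.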

Third, the model needs to conjecture formula (39). This gets highlighted in the intro, but as many have pointed out, it’s pretty easy to do. If any non-physicists are still reading at this point, take a look:

Could you guess (39) from (35)-(38)?

After that, the paper goes over the proof that formula (39) is correct. Most of this proof isn’t terribly difficult, but the way it begins is actually unusual in an interesting way. The proof uses ideas from time-ordered perturbation theory, an old-fashioned way to do particle physics calculations. Time-ordered perturbation theory isn’t something any of the authors are known for using with regularity, but it has recently seen a resurgence in another area of amplitudes research, showing up for example in papers by Matthew Schwartz, a colleague of Strominger at Harvard.

If a student of Strominger came up with an idea drawn from time-ordered perturbation theory, that would actually be pretty impressive. It would mean that, rather than just learning from their official mentor, this student was talking to other people in the department and broadening their horizons, showing a kind of initiative that theoretical physicists value a lot.

From an LLM, though, this is not impressive in the same way. The LLM was not trained by Strominger, it did not learn specifically from Strominger’s papers. Its context suggested it was working on an amplitudes paper, and it produced an idea which would be at home in an amplitudes paper, just a different one than the one it was working on.

While not impressive, that capability may be quite useful. Academic subfields can often get very specialized and siloed. A tool that suggests ideas from elsewhere in the field could help some people broaden their horizons.

Overall, it appears that the twelve-hour OpenAI internal model run reproduced roughly what an unusually bright student would be able to contribute over the course of a several-month project. Like most student projects, you could find a senior researcher who could do the project much faster, maybe even faster than the LLM. But it’s unclear whether any of the authors could have: different senior researchers have different skillsets.

A stab at implications:

If we take all this at face-value, it looks like OpenAI’s internal model was able to do a reasonably competent student project with no serious mistakes in twelve hours. If they started selling that capability, what would happen?

If it’s cheap enough, you might wonder if professors would choose to use the OpenAI model instead of hiring students. I don’t think this would happen, though: I think it misunderstands why these kinds of student projects exist in a theoretical field. Professors sometimes use students to get results they care about, but more often, the student’s interest is itself the motivation, with the professor wanting to educate someone, to empire-build, or just to take on their share of the department’s responsibilities. AI is only useful for this insofar as AI companies continue reaching out to these people to generate press releases: once this is routinely possible, the motivation goes away.

More dangerously, if it’s even cheaper, you could imagine students being tempted to use it. The whole point of a student project is to train and acculturate the student, to get them to the point where they have affection for the field and the capability to do more impressive things. You can’t skip that, but people are going to be tempted to.

And of course, there is the broader question of how much farther this technology can go. That’s the hardest to estimate here, since we don’t know the prompts used. So I don’t know if seeing this result tells us anything more about the bigger picture than we knew going in.

Remaining questions:

At the end of the day, there are a lot of things I still want to know. And if I do end up covering this professionally, they’re things I’ll ask.

  1. What was the prompt given to the internal model, and how much did it do based on that prompt?
  2. Was it really done in one shot, no retries or feedback?
  3. How much did running the internal model cost?
  4. Is this result likely to be useful? Are there things people want to calculate that this could make easier? Recursion relations it could seed? Is it useful for SCET somehow?
  5. How easy would it have been for the authors to do what the LLM did? What about other experts in the community?

Practice, Don’t Memorize, Understand Justifications, Not Stories

Teaching is one of those things that’s always controversial.

There seems to be a constant tug of war between two approaches. In one, thought of as old-fashioned and practical, students are expected to work hard, study to memorize facts and formulas, and end up with an impressive ability to reproduce the knowledge of the past. In the other, presented as more modern or more permissive, students aren’t supposed to memorize, but to understand, to get intuition for how things work, and are expected to end up more creative and analytical, able to come up with new ideas and understand things in ways their predecessors could not. This whole thing then gets muddled further with discussions of which skills actually matter in the modern day, with the technology of the hour standing in. If adults can use calculators, why should students be able to do arithmetic? If adults can use AI, why should students be able to draw, or write, or reason?

I’ve taught a little in my day, though likely less than I should. More frequently, I’ve learned. And, with apologies to the teachers and education experts who read this blog, I’ve got my own opinion.

I don’t think anyone in the old-fashioned/new-fashioned tug of war is thinking about education right.

People talk about memorization, when they should be talking about practice.

We want kids to be able to multiply and divide numbers. That’s not because they won’t have calculators. It’s because we want to teach them things that build on top of multiplying and dividing numbers. We want some of them to learn how to multiply and divide polynomials, and if you don’t know how to multiply and divide numbers, then learning to multiply and divide polynomials is almost impossible. We want some of them to learn abstract generalizations, groups and rings and fields, and if you’re not comfortable with the basics, then learning these is almost impossible. And for everyone, we want them to get used to making a logical argument why something is true, in a context where we can easily judge whether the argument works.

This doesn’t mean that we need students to memorize their times tables, though. It helps, sure. But we don’t actually care whether students can recite 5 times 7 equals 35, that’s not our end goal. Instead, we want to make sure that students can do these operations, and that they find them easy to do. And ultimately, that doesn’t come from memorization, but from practice. It comes from using the ideas, again and again, until it’s obvious how to step ahead to the results. You can’t replicate that with pure understanding, like some more modern approaches try to. You need the “muscle memory”, and that takes real practice. But you also can’t get there by memorizing isolated facts for an exam. You need to use them.
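The point above, that polynomial arithmetic builds directly on number arithmetic, can be made concrete: digits are just coefficients in base ten, so the same distributive law drives both. A small illustrative sketch (the specific numbers are my own, not from the text):

```python
# Illustrative sketch: the grade-school algorithm for multiplying numbers
# is the same distributive law used for multiplying polynomials.
import sympy as sp

x = sp.symbols('x')

# 35 * 12: think of 35 as 3*10 + 5 and 12 as 1*10 + 2.
number_product = 35 * 12  # 420

# The "same" product with x standing in for 10.
poly_product = sp.expand((3*x + 5) * (x + 2))  # 3*x**2 + 11*x + 10

# Substituting x = 10 recovers the number result: 3*100 + 11*10 + 10 = 420.
print(poly_product.subs(x, 10) == number_product)  # True
```

A student who can fluently do the first line has, without knowing it, already practiced most of the second.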

Understanding is important too, though. We need students to know the limits of their knowledge, not just what they’ve been taught but why it’s true. It’s the only way to get adults who can generalize, who can accept that maybe there is a type of math with numbers that square to zero without dismissing it as a plot to corrupt the youth. It’s the only way to get students who can go to the next level, and the next, and then generate new knowledge on their own.

But that understanding often gets left by the wayside, when teachers forget what it’s for. If you try to teach the Pythagorean theorem by showing a few examples, or tell students stories where different types of energy are different “stuff”, you’re trying to convey an intuitive understanding, but not the useful kind. What you’re trying to give the students is stories about how things work. But the kind of understanding we need students to have isn’t of stories. It’s of justifications, and arguments. Students should understand why what they are taught is true, and understanding why doesn’t mean having a feeling in their hearts about it: it means they can convince a skeptic.

It’s easier, for a world full of overworked teachers from a variety of backgrounds, to teach the simpler versions of these. It’s easy for a traditionalist teacher to drill their students on memorization, and test them on memorization. It’s easier for a sympathetic teacher to tell students stories, based on stories the teacher thinks they understand.

But if you want the traditionalist approach to work, you have to actually do things, to practice using ideas rather than merely know them, to have that experience down as reflexively as those times tables. And if you want the modern approach to work, you have to actually understand why what you’re teaching is true, the way you would convince a skeptic that it is true, and then convey those justifications to the students.

And if you, instead, are a student:

Don’t worry about memorizing facts; you’ll drill too hard and stress yourself out. Don’t worry about finding a comfortable story, because no story is true. Use the ideas you’re learning. Use them to convince yourself, and to convince others. Use them again and again, until you reach for them as easily as breathing. When you can use what you’re learning, and know why it holds, then you’re ready to move forward.

Hypothesis: If AI Is Bad at Originality, It’s a Documentation Problem

Recently, a few people have asked me about this paper.

A couple weeks back, OpenAI announced a collaboration with a group of amplitudes researchers, physicists who study the types of calculations people do to make predictions at particle colliders. The amplitudes folks had identified an interesting loophole, finding a calculation that many would have expected to be zero actually gave a nonzero answer. They did the calculation for different examples involving more and more particles, and got some fairly messy answers. They suspected, as amplitudes researchers always expect, that there was a simpler formula, one that worked for any number of particles. But they couldn’t find it.

Then a former amplitudes researcher at OpenAI suggested that they use AI to find it.

“Use AI” can mean a lot of different things, and most of them don’t look much like the way the average person talks to ChatGPT. This was closer than most. They were using “reasoning models”, loops that try to predict the next few phrases in a “chain of thought” again and again and again. Using that kind of tool, they were able to find that simpler formula, and mathematically prove that it was correct.
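The “loop” description above can be sketched schematically. Everything here is a placeholder: the `generate()` stub stands in for a real language model, and actual reasoning-model scaffolding is far more elaborate, but the basic extend-and-check cycle looks like this:

```python
# Purely schematic sketch of a "reasoning model" loop as described above:
# repeatedly extend a chain of thought, then check whether it has reached
# an answer. The generate() stub stands in for a real language model.
def generate(prompt):
    # Placeholder: a real model would predict the next chunk of text here.
    return "step" if prompt.count("step") < 3 else "ANSWER: 42"

def reasoning_loop(problem, max_steps=10):
    chain = problem
    for _ in range(max_steps):
        next_chunk = generate(chain)
        chain += " " + next_chunk
        if next_chunk.startswith("ANSWER:"):  # stop once an answer appears
            return chain
    return chain

print(reasoning_loop("Simplify this formula."))
```

The expensive part in practice is running that loop for hours, with the model generating, checking, and revising its own chain of thought along the way.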

A few of you are hoping for an in-depth post about what they did, and its implications. This isn’t that. I’m still figuring out if I’ll be writing that for an actual news site, for money, rather than for free, for you folks.

Instead, I want to talk about a specific idea I’ve seen crop up around the paper.

See, for some, the existence of a result like this isn’t all that surprising.

Mathematicians have been experimenting with reasoning models for a bit, now. Recently, a group published a systematic study, setting the AI loose on a database of minor open problems proposed by the famously amphetamine-fueled mathematician Paul Erdős. The AI managed to tackle a few of the problems, sometimes by identifying existing solutions that had not yet been linked to the problem database, but sometimes by proofs that appeared to be new.

The Erdős problems solved by the AI were not especially important. Neither was the problem solved by the amplitudes researchers, as far as I can tell at this point.

But I get the impression the amplitudes problem was a bit more interesting than the Erdős problems. The difference, so far, has mostly been attributed to human involvement. This amplitudes paper started because human amplitudes researchers found an interesting loophole, and only after that used the AI. Unlike the mathematicians, they weren’t just searching a database.

This lines up with a general point, one people tend to make much less carefully. It’s often said that, unlike humans, AI will never be truly creative. It can solve mechanical problems, do things people have done before, but it will never be good at having truly novel ideas.

To me, that line of thinking goes a bit too far. I suspect it’s right on one level, that it will be hard for any of these reasoning models to propose anything truly novel. But if so, I think it will be for a different reason.

The thing is, creativity is not as magical as we make it out to be. Our ideas, scientific or artistic, don’t just come from the gods. They recombine existing ideas, shuffling them in ways more akin to randomness than miracle. They’re then filtered through experience, deep heuristics honed over careers. Some people are good at ideas, and some are bad at them. Having ideas takes work, and there are things people do to improve their ideas. Nothing about creativity suggests it should be impossible to mechanize.

However, a machine trained on text won’t necessarily know how to do any of that.

That’s because in science, we don’t write down our inspirations. By the time a result gets into a scientific paper or textbook, it’s polished and refined into a pure argument, cutting out most of the twists and turns that were an essential part of the creative process. Mathematics is even worse, most math papers don’t even mention the motivation behind the work, let alone the path taken to the paper.

This lack of documentation makes it hard for students, making success much more a function of having the right mentors to model good practices, rather than being able to pick them up from literature everyone can access. I suspect it makes it even harder for language models. And if today’s language model-based reasoning tools are bad at that crucial, human-seeming step, of coming up with the right idea at the right time? I think that has more to do with this lack of documentation, than with the fact that they’re “statistical parrots”.

Most Academics Don’t Choose Their Specialty

It’s there in every biography, and many interviews: the moment the scientist falls in love with an idea. It can be a kid watching ants in the backyard, a teen peering through a telescope, or an undergrad seeing a heart cell beat on a slide. It’s a story so common that it forms the heart of the public idea of a scientist: not just someone smart enough to understand the world, but someone passionate enough to dive into their one particular area above all else. It’s easy to think of it as a kind of passion most people never get to experience.

And it does happen, sometimes. But it’s a lot less common than you’d think.

I first started to suspect this as a PhD student. In the US, getting accepted into a PhD program doesn’t guarantee you an advisor to work with. You have to impress a professor to get them to spend limited time and research funding on you. In practice, the result was the academic analog of the dating scene. Students looked for who they might have a chance with, based partly on interest but mostly on availability and luck and rapport, and some bounced off many potential mentors before finding one that would stick.

Then, for those who continued to postdoctoral positions, the same story happened all over again. Now, they were applying for jobs, looking for positions where they were qualified enough and might have some useful contacts, with interest in the specific research topic at best a distant third.

Working in the EU, I’ve seen the same patterns, but offset a bit. Students do a Master’s thesis, and the search for a mentor there is messy and arbitrary in similar ways. Then for a PhD, they apply for specific projects elsewhere, and as each project is its own funded position the same job search dynamics apply.

The picture only really clicked for me, though, when I started doing journalism.

Nowadays, I don’t do science, I interview people about it. The people I interview are by and large survivors: people who got through the process of applying again and again and now are sitting tight in an in-principle permanent position. They’re people with a lot of freedom to choose what to do.

And so I often ask for that reason, that passion, that scientific love-at-first-sight moment: why do you study what you do? It’s a story that audiences love, and thus that editors love; it’s always a great way to begin a piece.

But surprisingly often, I get an unromantic answer. Why study this? Because it was available. Because in the Master’s, that professor taught the intro course. Because in college, their advisor had contacts with that lab to arrange a study project. Because that program accepted people from that country.

And I’ve noticed how even the romantic answers tend to be built on the unromantic ones. The professors who know how to weave a story, to self-promote and talk like a politician, they’ll be able to tell you about falling in love with something, sure. But if you read between the lines, you’ll notice where their anecdotes fall, how they trace a line through the same career steps that less adroit communicators admit were the real motivation.

There have been times I’ve thought that my problem was a lack of passion, that I wasn’t in love the same way other scientists were. I’ve even felt guilty that I took resources and positions from people who were. There is still some truth in that guilt: I don’t think I had the same passion for my science as most of my colleagues.

But I appreciate more now, that that passion is in part a story. We don’t choose our specialty, making some grand agentic move. Life chooses for us. And the romance comes in how you tell that story, after the fact.

Valentine’s Day Physics Poem 2026

Tomorrow is Valentine’s Day, so it’s time for this blog’s yearly tradition of posting a poem. Next week there may be a prose take on the same topic.

You’ve heard love stories like Oliver’s, I’m sure.
Meeting that childhood sweetheart
In the back room, with the garden view
And trust that, with a wink, the parents may regret.
Stories tungsten-milled
To fit our expectations.

And you’ve heard wilder stories
From genuinely riskier lives.
The rescue and the love linked under the Milky Way
Like an action movie.
The love’s reality, even so,
Defying summary.

You’ve heard stories of wide-eyed students
Realizing they can be adults.
Of those moments in study or celebration
Turning points in self-conception.
And maybe you don’t ask
About the other times.

Love happens,
And we love love to happen.
But we build love too.

May that which we build
Outgrow the story.

The Timeline for Replacing Theorists Is Not Technological

Quanta Magazine recently published a reflection by Natalie Wolchover on the state of fundamental particle physics. The discussion covers a lot of ground, but one particular paragraph has gotten the lion’s share of the attention. Wolchover talked to Jared Kaplan, the ex-theoretical physicist turned co-founder of Anthropic, one of the foremost AI companies today.

Kaplan was one of Nima Arkani-Hamed’s PhD students, which adds an extra little punch.

There’s a lot to contest here. Is AI technology anywhere close to generating papers as good as the top physicists, or is that relegated to the sci-fi future? Does Kaplan really believe this, or is he just hyping up his company?

I don’t have any special insight into those questions, about the technology and Kaplan’s motivations. But I think that, even if we trusted him on the claim that AI could be generating Witten- or Nima-level papers in three years, that doesn’t mean it will replace theoretical physicists. That part of the argument isn’t a claim about the technology, but about society.

So let’s take the technological claims as given, and make them a bit more specific. Since we don’t have any objective way of judging the quality of scientific papers, let’s stick to the subjective. Today, there are a lot of people who get excited when Witten posts a new paper. They enjoy reading them, they find the insights inspiring, they love the clarity of the writing and the papers’ tendency to clear up murky ideas. They also find them reliable: the papers very rarely have mistakes, and don’t leave important questions unanswered.

Let’s use that as our baseline, then. Suppose that Anthropic had an AI workflow that could reliably write papers that were just as appealing to physicists as Witten’s papers are, for the same reasons. What happens to physicists?

Witten himself is retired, which for an academic means you do pretty much the same thing you were doing before, but now paid out of things like retirement savings and pension funds, not an institute budget. Nobody is going to fire Witten, there’s no salary to fire him from. And unless he finds these developments intensely depressing and demoralizing (possible, but very much depends on how this is presented), he’s not going to stop writing papers. Witten isn’t getting replaced.

More generally, though, I don’t think this directly results in anyone getting fired, or in universities trimming positions. The people making funding decisions aren’t just sitting on a pot of money, trying to maximize research output. They’ve got money to be spent on hires, and different pools of money to be spent on equipment, and the hires get distributed based on what current researchers at the institutes think is promising. Universities want to hire people who can get grants, to help fund the university, and absent rules about AI personhood, the AIs won’t be applying for grants.

Funding cuts might be argued for based on AI, but that will happen long before AI is performing at the Witten level. We already see this happening in other industries or government agencies, where groups that already want to cut funding are getting think tanks and consultants to write estimates that justify cutting positions, without actually caring whether those estimates are performed carefully enough to justify their conclusions. That can happen now, and doesn’t depend on technological progress.

AI could also replace theoretical physicists in another sense: the physicists themselves might use AI to do most of their work. That’s more plausible, but here adoption still heavily depends on social factors. Will people feel like they are being assessed on whether they can produce these Witten-level papers, and that only those who make them get hired, or funded? Maybe. But it will propagate unevenly, from subfield to subfield. Some areas will make their own rules forbidding AI content, there will be battles and scandals and embarrassments aplenty. It won’t be a single switch, the technology alone setting the timeline.

Finally, AI could replace theoretical physicists in another way, with people outside of academia filling the field so thoroughly that theoretical physicists have nothing left they want to do. Some non-physicists are very passionate about physics, and some of those people have a lot of money. I’ve done writing work for one such person, whose foundation is now attempting to build an AI Physicist. If these AI Physicists get to Witten-level quality, they might start writing compelling paper after compelling paper. Those papers, though, will, due to their origins, be specialized. Much as philanthropists mostly fund the subfields they’ve heard of, philanthropist-funded AI will mostly target topics the people running the AI have heard are important. Much like physicists themselves adopting the technology, there will be uneven progress from subfield to subfield, inch by socially-determined inch.

In a hard-to-quantify area like progress in science, that’s all you can hope for. I suspect Kaplan got a bit of a distorted picture of how progress and merit work in theoretical physics. He studied with Nima Arkani-Hamed, who is undeniably exceptionally brilliant but also undeniably exceptionally charismatic. It must feel to a student of Nima’s that academia simply hires the best people, that it does whatever it takes to accomplish the obviously best research. But the best research is not obvious.

I think some of these people imagine a more direct replacement process, not arranged by topic and tastes, but by goals. They picture AI sweeping in and doing what theoretical physics was always “meant to do”: solve quantum gravity, and proceed to shower us with teleporters and antigravity machines. I don’t think there’s any reason to expect that to happen. If you just asked a machine to come up with the most useful model of the universe for a near-term goal, then in all likelihood it wouldn’t consider theoretical high-energy physics at all. If you see your AI as a tool to navigate between utopia and dystopia, theoretical physics might matter at some point: when your AI has devoured the inner solar system, is about to spread beyond the few light-minutes within which it can signal itself in real time, and has to commit to a strategy. But as long as the inner solar system remains un-devoured, I don’t think you’ll see an obviously successful theory of fundamental physics.

How Much Academic Attrition Is Too Much?

Have you seen “population pyramids”? They’re diagrams that show snapshots of a population, how many people there are of each age. They can give you an intuition for how a population is changing, and where the biggest hurdles are to survival.

I wonder what population pyramids would look like for academia. In each field and subfield, how many people are PhD students, postdocs, and faculty?

If every PhD student was guaranteed to become faculty, and the number of faculty stayed fixed, you could roughly estimate what this pyramid would have to look like. An estimate for the US might take an average 7-year PhD, two postdoc positions at 3 years each, followed by a 30-year career as faculty, and estimate the proportions of each stage based on proportions of each scholar’s life. So you’d have roughly one PhD student per four faculty, and one postdoc per five. In Europe, with three-year PhDs, the proportion of PhD students decreases further, and in a world where people are still doing at least two postdocs you expect significantly more postdocs than PhDs.
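That back-of-the-envelope arithmetic is easy to check. Here’s a minimal sketch in Python (the function name is my own; the stage lengths are the ones from the text), assuming zero attrition and a fixed number of faculty, so that the standing population at each stage is proportional to the years spent there:

```python
# Steady-state career pipeline with zero attrition: if everyone passes
# through every stage, the standing population of each stage is
# proportional to the time spent in it.

def pipeline_proportions(stages):
    """stages: dict mapping stage name -> years spent in that stage."""
    total = sum(stages.values())
    return {name: years / total for name, years in stages.items()}

# US-style numbers from the text: 7-year PhD, two 3-year postdocs,
# 30-year faculty career.
us = pipeline_proportions({"phd": 7, "postdoc": 6, "faculty": 30})
print(f"PhD students per faculty member: {us['phd'] / us['faculty']:.2f}")  # 0.23, about 1 per 4
print(f"Postdocs per faculty member:     {us['postdoc'] / us['faculty']:.2f}")  # 0.20, 1 per 5

# European-style: 3-year PhD, same two-postdoc track.
eu = pipeline_proportions({"phd": 3, "postdoc": 6, "faculty": 30})
print(f"EU postdocs per PhD student: {eu['postdoc'] / eu['phd']:.1f}")  # 2.0
```

The ratios come out as in the text: roughly one PhD student per four faculty and one postdoc per five in the US numbers, and twice as many postdocs as PhD students with a three-year PhD.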

Of course, the world doesn’t look like that at all, because the assumptions are wrong.

The number of faculty doesn’t stay fixed, for one. When population is growing in the wider world, new universities open in new population centers, and existing universities find ways to expand. When population falls, enrollments shrink, and universities cut back.

But this is a minor perturbation compared to the much more obvious difference: most PhD students do not stay in academia. A single professor may mentor many PhDs at the same time, and potentially several postdocs. Most of those people aren’t staying.

You can imagine someone trying to fix this by fiat, setting down a fixed ratio between PhD students, postdocs, and faculty. I’ve seen partial attempts at this. When I applied for grants at the University of Copenhagen, I was told I had to budget at least half of my hires as PhD students, not postdocs, which makes me wonder if they were trying to force careers to default to one postdoc position, rather than two. More likely, they hadn’t thought about it.

Zero attrition doesn’t really make sense, anyway. Some people are genuinely better off leaving: they made a mistake when they started, or they changed over time. Sometimes new professions arise, and the best way in is from an unexpected direction. I’ve talked to people who started data science work in the early days, before there really were degrees in it, who felt a physics PhD had been the best route possible to that world. Similarly, some move into policy, or academic administration, or found a startup. And if we think there are actually criteria to choose better or worse academics (which I’m a bit skeptical of), then presumably some people are simply not good enough, and trying to filter them out earlier is irresponsible when they still don’t have enough of a track record to really judge.

How much attrition there should be is the big question, and one I don’t have an answer to. In academia, where so many of these decisions are made by just a few organizations, it seems like a question that someone should have a well-considered answer to. But so far, it’s unclear to me that anyone does.

It also makes me think, a bit, about how these population pyramids work in industry. There, there is no overall control. Instead, there’s a web of incentives, many of them decades-delayed from the behavior they’re meant to influence, leaving each individual to predict as well as they can. If companies only hire senior engineers, no-one gets a chance to start a career, and the population of senior engineers dries up. Eventually, those companies have to settle for junior engineers. (Or, I guess, ex-academics.) It sounds like it should lead to the kind of behavior biologists model in predators and prey: wild swings in population, modeled by a differential equation. But maybe there’s something that tamps down those wild swings.
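To make the analogy concrete, here’s a toy simulation in the spirit of those predator-prey models. Every name and parameter below is invented for illustration, not calibrated to anything; the point is only that delayed feedback (hiring juniors mainly when seniors are scarce) can overshoot and oscillate before settling, rather than sitting at a steady state:

```python
# Toy predator-prey-style model of the junior/senior engineer pipeline.
# Companies hire juniors more aggressively when seniors are scarce,
# juniors slowly mature into seniors, and seniors retire or leave.

def simulate(juniors, seniors, steps=200, dt=0.1):
    history = []
    for _ in range(steps):
        hiring = 2.0 / (1.0 + seniors)   # more junior hiring when seniors are scarce
        promotion = 0.3 * juniors        # juniors becoming seniors
        attrition = 0.2 * seniors        # seniors retiring or leaving
        juniors += dt * (hiring - promotion)
        seniors += dt * (promotion - attrition)
        history.append((juniors, seniors))
    return history

history = simulate(juniors=1.0, seniors=1.0)
```

With these made-up parameters the populations spiral in toward an equilibrium, overshooting along the way, which is the damped version of the wild swings in the text. Other parameter choices can make the swings larger or make them die out almost immediately, which is one candidate for the “something that tamps down those wild swings.”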

School Facts and Research Facts

As you grow up, teachers try to teach you how the world works. This is more difficult than it sounds, because teaching you something is a much harder goal than just telling you something. A teacher wants you to remember what you’re told. They want you to act on it, and to generalize it. And they want you to do this not just for today’s material, but to set a foundation for next year, and the next. They’re setting you up for progress through a whole school system, with its own expectations.

Because of that, not everything a teacher tells you is, itself, a fact about the world. Some things you hear from teachers are like the scaffolds on a building. They’re facts that only make sense in the context of school, support that lets you build to a point where you can learn other facts, then throw away the school facts that got you there.

Not every student uses all of that scaffolding, though. The scaffold has to be complete enough that some students can use it to go on, getting degrees in science or mathematics, and eventually becoming researchers where they use facts more deeply linked to the real world. But most students don’t become researchers. So the scaffold sits there, unused. And many people, as their lives move on, mistake the scaffold for the real world.

Here’s an example. How do you calculate something like this?

3+4\div (3-1)\times 5

From school, you might remember order of operations, or PEMDAS. First parentheses, then exponents, multiplication, division, addition, and finally subtraction. If you ran into that calculation in school, you could easily work it out.

But out of school, in the real world? Trick question, you never calculate something like that to begin with.

When I wrote this post, I had to look up how to write \div and \times. In the research world, people are far more likely to run into calculations like this:

3+5\frac{4}{3-1}

Here, it’s easier to keep track of what order you need to do things. In other situations, you might be writing a computer program (or an Excel spreadsheet formula, which is also a computer program). Then you follow that programming language’s rules for order of operations, which may or may not match PEMDAS.
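As a concrete check, here’s how Python’s own precedence rules handle the expression from the start of this section (a quick sketch; the Excel comparison reflects Excel’s documented treatment of unary minus):

```python
# Python's order of operations on the expression from the text.
# Division and multiplication share one precedence level and associate
# left-to-right, so 4 / (3 - 1) * 5 means (4 / 2) * 5, not 4 / ((3 - 1) * 5).
result = 3 + 4 / (3 - 1) * 5
print(result)  # 13.0

# Languages don't all agree with each other, or with PEMDAS as taught.
# In Python, ** binds tighter than unary minus, so -2 ** 2 is -(2 ** 2).
# The Excel formula =-2^2, by contrast, evaluates to 4, because Excel
# applies the unary minus before the exponent.
print(-2 ** 2)  # -4
```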

PEMDAS was taught to you in school for good reason. It got you used to following rules to understand notation, and gave you tools the teachers needed to teach you other things. But it isn’t a fact about the universe. It’s a fact about school.

Once you start looking around for these “school facts”, they show up everywhere.

Are there really “three states of matter”, solid, liquid, and gas? Or four, if you add plasma? Well, sort of. There are real scientific definitions for solids, liquids, gases, and plasmas, and they play a real role in how people model big groups of atoms, “matter” in a quite specific sense. But they can’t be used to describe literally everything. If you start asking what state of matter light or spacetime is, you’ve substituted a simplification that was useful for school (“everything is one of three states of matter”) for the actual facts in the real world.

If you remember a bit further, maybe you remember there are two types of things, matter and energy? You might have even heard that matter and antimatter annihilate into energy. These are also just school facts, though. “Energy” isn’t something things are made of, it’s a property things have. Instead, your teachers were building scaffolding for understanding the difference between massive and massless particles, or between dark matter and dark energy. Each of those uses different concepts of matter and energy, and each in turn is different than the concept of matter in its states of solid, liquid, and gas. But in school, you need a consistent scaffold to learn, not a mess of different definitions for different applications. So unless you keep going past school, you don’t learn that.

Physics in school likes to work with forces, and forces do sometimes make an appearance in the real world, for engineers, for example. But if you’re asking a question about fundamental physics, like “is gravity really a force?”, then you’re treating a school fact as if it were a research fact. Fundamental physics doesn’t care about forces in the same way. It uses different mathematical tools, like Lagrangians and Hamiltonians, to calculate the motion of objects in systems, and uses “force” in a pop-science way to describe fundamental interactions.

If you get good enough at this, you can spot which things you learned in school were likely just scaffolding “school facts”, and which are firm enough that they may hold further. Any simple division of the world into categories is likely a school fact, one that let you do exercises on your homework but gets much more complicated when the real world gets involved. Contradictory or messy concepts are usually another sign, showing something fuzzy used to get students comfortable rather than something precise enough for professionals to use. Keep an eye out, and even if you don’t yet know the real facts, you’ll know enough to know what you’re missing.

A Paper With a Bluesky Account

People make social media accounts for their pets. Why not a scientific paper?

Anthropologist Ed Hagen made a Bluesky account for his recent preprint, “Menopause averted a midlife energetic crisis with help from older children and parents: A simulation study.” The paper’s topic itself is interesting (menopause is surprisingly rare among mammals, he has a plausible account as to why), but not really the kind of thing I cover here.

Rather, it’s his motivation that’s interesting. Hagen didn’t make the account out of pure self-promotion or vanity. Instead, he’s promoting it as a novel approach to scientific publishing. Unlike Twitter, Bluesky is based on an open, decentralized protocol. Anyone can host an account compatible with Bluesky on their own computer, and anyone with the programming know-how can build a computer program that reads Bluesky posts. That means that nothing actually depends on Bluesky, in principle: the users have ultimate control.

Hagen’s idea, then, is that this could be a way to fill the role of scientific journals without channeling money and power to for-profit publishers. If each paper is hosted on a scientist’s own site, papers can link to each other by following one another. Scientists on Bluesky can follow or like the paper, or comment on and discuss it, creating a way to measure interest from the scientific community and aggregate reviews, two things journals are supposed to provide.

I must admit, I’m skeptical. The interface really seems poorly-suited for this. Hagen’s paper’s account is called @menopause-preprint.edhagen.net. What happens when he publishes another paper on menopause? What will he call it? And how is he planning to keep track of interactions from other scientists if every single paper has its own account? Won’t swapping between fifteen Bluesky accounts every morning get tedious? Or will he just do this with papers he wants to promote?

I applaud the general idea. Decentralized hosting seems like a great way to get around some of the problems of academic publishing. But this will definitely take a lot more work, if it’s ever going to be viable on a useful scale.

Still, I’ll keep an eye on it, and see if others give it a try. Stranger things have happened.

On Theories of Everything and Cures for Cancer

Some people are disappointed in physics. Shocking, I know!

Those people, when careful enough, clarify that they’re disappointed in fundamental physics: not the physics of materials or lasers or chemicals or earthquakes, or even the physics of planets and stars, but the physics that asks big fundamental questions, about the underlying laws of the universe and where they come from.

Some of these people are physicists themselves, or were once upon a time. These often have in mind other directions physicists should have gone. They think that, with attention and funding, their own ideas would have gotten us closer to our goals than the ideas that, in practice, got the attention and the funding.

Most of these people, though, aren’t physicists. They’re members of the general public.

It’s disappointment from the general public, I think, that feels the most unfair to physicists. The general public reads history books, and hears about a series of revolutions: Newton and Maxwell, relativity and quantum mechanics, and finally the Standard Model. They read science fiction books, and see physicists finding “theories of everything”, and making teleporters and antigravity engines. And they wonder what made the revolutions stop, and postponed the science fiction future.

Physicists point out, rightly, that this is an oversimplified picture of how the world works. Something happens between those revolutions, the kind of progress not simple enough to summarize for history class. People tinker away at puzzles, and make progress. And they’re still doing that, even for the big fundamental questions. Physicists know more about even faraway flashy topics like quantum gravity than they did ten years ago. And while physicists and ex-physicists can argue about whether that work is on the right path, it’s certainly farther along its own path than it was. We know things we didn’t know before, progress continues to be made. We aren’t at the “revolution” stage yet, or even all that close. But most progress isn’t revolutionary, and no-one can predict how often revolutions should take place. A revolution is never “due”, and thus can never be “overdue”.

Physicists, in turn, often don’t notice how normal this kind of reaction from the public is. They think people are being stirred up by grifters, or negatively polarized by excess hype, that fundamental physics is facing an unfair reaction only shared by political hot-button topics. But while there are grifters, and people turned off by the hype…this is also just how the public thinks about science.

Have you ever heard the phrase “a cure for cancer”?

Fiction is full of scientists working on a cure for cancer, or who discovered a cure for cancer, or were prevented from finding a cure for cancer. It’s practically a trope. It’s literally a trope.

It’s also a real thing people work on, in a sense. Many scientists work on better treatments for a variety of different cancers. They’re making real progress, even dramatic progress. As many whose loved ones have cancer know, it’s much more likely for someone with cancer to survive than it was, say, twenty years ago.

But those cures don’t meet the threshold for science fiction, or for the history books. They don’t move us, like the polio vaccine did, from a world where you know many people with a disease to a world where you know none. They don’t let doctors give you a magical pill, like in a story or a game, that instantly cures your cancer.

For the vast majority of medical researchers, that kind of goal isn’t realistic, and isn’t worth thinking about. The few that do pursue it work towards extreme long-term solutions, like periodically replacing everyone’s skin with a cloned copy.

So while you will run into plenty of media descriptions of scientists working on cures for cancer, you won’t see the kind of thing the public expects is an actual “cure for cancer”. And people are genuinely disappointed about this! “Where’s my cure for cancer?” is a complaint on the same level as “where’s my hovercar?” There are people who think that medical science has made no progress in fifty years, because after all those news articles, we still don’t have a cure for cancer.

I appreciate that there are real problems in what messages are being delivered to the public about physics, both from hypesters in the physics mainstream and grifters outside it. But put those problems aside, and a deeper issue remains. People understand the world as best they can, as a story. And the world is complicated and detailed, full of many people making incremental progress on many things. Compared to a story, the truth is always at a disadvantage.