Tag Archives: academia

How Much Academic Attrition Is Too Much?

Have you seen “population pyramids”? They’re diagrams that show snapshots of a population: how many people there are at each age. They can give you an intuition for how a population is changing, and where the biggest hurdles to survival lie.

I wonder what population pyramids would look like for academia. In each field and subfield, how many people are PhD students, postdocs, and faculty?

If every PhD student were guaranteed to become faculty, and the number of faculty stayed fixed, you could roughly estimate what this pyramid would have to look like. An estimate for the US might assume an average 7-year PhD and two postdoc positions at 3 years each, followed by a 30-year career as faculty, and estimate the proportion of people at each stage from the proportion of a scholar’s life spent there. So you’d have roughly one PhD student per four faculty, and one postdoc per five. In Europe, with three-year PhDs, the proportion of PhD students decreases further, and in a world where people are still doing at least two postdocs you’d expect significantly more postdocs than PhD students.
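
To make that estimate explicit (a back-of-envelope sketch, using the assumed durations above): in a steady state, the number of people at each career stage is proportional to the years spent in it, so

\text{PhD} : \text{postdoc} : \text{faculty} = 7 : 6 : 30

\frac{\text{PhD students}}{\text{faculty}} = \frac{7}{30} \approx \frac{1}{4}, \qquad \frac{\text{postdocs}}{\text{faculty}} = \frac{6}{30} = \frac{1}{5}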

Of course, the world doesn’t look like that at all, because the assumptions are wrong.

The number of faculty doesn’t stay fixed, for one. When population is growing in the wider world, new universities open in new population centers, and existing universities find ways to expand. When population falls, enrollments shrink, and universities cut back.

But this is a minor perturbation compared to the much more obvious difference: most PhD students do not stay in academia. A single professor may mentor many PhDs at the same time, and potentially several postdocs. Most of those people aren’t staying.

You can imagine someone trying to fix this by fiat, setting down a fixed ratio between PhD students, postdocs, and faculty. I’ve seen partial attempts at this. When I applied for grants at the University of Copenhagen, I was told I had to budget at least half of my hires as PhD students, not postdocs, which makes me wonder if they were trying to force careers to default to one postdoc position, rather than two. More likely, they hadn’t thought about it.

Zero attrition doesn’t really make sense, anyway. Some people are genuinely better off leaving: they made a mistake when they started, or they changed over time. Sometimes new professions arise, and the best way in is from an unexpected direction. I’ve talked to people who started data science work in the early days, before there really were degrees in it, who felt a physics PhD had been the best possible route into that world. Similarly, some move into policy, or academic administration, or found a startup. And if we think there are actually criteria for choosing better or worse academics (which I’m a bit skeptical of), then presumably some people are simply not good enough and need to be filtered out at some point, and trying to filter them out earlier, when they still don’t have enough of a track record to really judge, would be irresponsible.

How much attrition there should be is the big question, and one I don’t have an answer for. In academia, where so many of these decisions are made by just a few organizations, it seems like a question someone should have a well-considered answer to. But so far, it’s unclear to me that anyone does.

It also makes me think, a bit, about how these population pyramids work in industry. There, there is no overall control. Instead, there’s a web of incentives, many of them decades-delayed from the behavior they’re meant to influence, leaving each individual to try to predict as well as they can. If companies only hire senior engineers, no-one gets a chance to start a career, and the population of senior engineers dries up. Eventually, those companies have to settle for junior engineers. (Or, I guess, ex-academics.) It sounds like it should lead to the kind of behavior biologists model in predators and prey, wild swings in population modeled by a differential equation. But maybe there’s something that tamps down those wild swings.
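
For reference, the textbook predator-prey model I have in mind is the Lotka-Volterra system, with x the prey population and y the predators; mapping it onto junior and senior engineers is only the loose analogy above, not a serious labor-market model:

\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y

Its solutions cycle: the prey boom, the predators follow, the prey crash, the predators starve, and around it goes again.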

School Facts and Research Facts

As you grow up, teachers try to teach you how the world works. This is more difficult than it sounds, because teaching you something is a much harder goal than just telling you something. A teacher wants you to remember what you’re told. They want you to act on it, and to generalize it. And they want you to do this not just for today’s material, but to set a foundation for next year, and the next. They’re setting you up for progress through a whole school system, with its own expectations.

Because of that, not everything a teacher tells you is, itself, a fact about the world. Some things you hear from teachers are like the scaffolding on a building. They’re facts that only make sense in the context of school: support that lets you build up to a point where you can learn other facts, and then throw away the school facts that got you there.

Not every student uses all of that scaffolding, though. The scaffold has to be complete enough that some students can use it to go on, getting degrees in science or mathematics, and eventually becoming researchers where they use facts more deeply linked to the real world. But most students don’t become researchers. So the scaffold sits there, unused. And many people, as their lives move on, mistake the scaffold for the real world.

Here’s an example. How do you calculate something like this?

3+4\div (3-1)\times 5

From school, you might remember order of operations, or PEMDAS: first parentheses, then exponents, then multiplication and division (worked left to right), and finally addition and subtraction. If you ran into that calculation in school, you could easily work it out.
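
Worked through with those rules, the calculation goes:

3+4\div (3-1)\times 5 = 3+4\div 2\times 5 = 3+2\times 5 = 3+10 = 13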

But out of school, in the real world? Trick question, you never calculate something like that to begin with.

When I wrote this post, I had to look up how to write \div and \times. In the research world, people are far more likely to run into calculations like this:

3+5\frac{4}{3-1}

Here, it’s easier to keep track of what order you need to do things. In other situations, you might be writing a computer program (or an Excel spreadsheet formula, which is also a computer program). Then you follow that programming language’s rules for order of operations, which may or may not match PEMDAS.
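
As a concrete illustration (a quick sketch in Python; check your own tool’s documentation before relying on this), exponentiation and a leading minus sign are a classic place where tools disagree:

# Python: ** binds more tightly than a leading minus, so this is -(3**2)
print(-3**2)                 # -9
# Excel evaluates =-3^2 as (-3)^2 = 9, because its unary minus binds tighter than ^

# The school example from earlier, under Python's rules (which match PEMDAS here):
print(3 + 4 / (3 - 1) * 5)   # 13.0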

PEMDAS was taught to you in school for good reason. It got you used to following rules to understand notation, and gave you tools the teachers needed to teach you other things. But it isn’t a fact about the universe. It’s a fact about school.

Once you start looking around for these “school facts”, they show up everywhere.

Are there really “three states of matter”, solid, liquid, and gas? Or four, if you add plasma? Well, sort of. There are real scientific definitions for solids, liquids, gases, and plasmas, and they play a real role in how people model big groups of atoms, “matter” in a quite specific sense. But they can’t be used to describe literally everything. If you start asking what state of matter light or spacetime is, you’ve substituted a simplification that was useful for school (“everything is one of three states of matter”) for the actual facts in the real world.

If you remember a bit further, maybe you remember there are two types of things, matter and energy? You might have even heard that matter and antimatter annihilate into energy. These are also just school facts, though. “Energy” isn’t something things are made of, it’s a property things have. Instead, your teachers were building scaffolding for understanding the difference between massive and massless particles, or between dark matter and dark energy. Each of those uses different concepts of matter and energy, and each in turn is different than the concept of matter in its states of solid, liquid, and gas. But in school, you need a consistent scaffold to learn, not a mess of different definitions for different applications. So unless you keep going past school, you don’t learn that.

Physics in school likes to work with forces, and forces do sometimes make an appearance in the real world, for example for engineers. But if you’re asking a question about fundamental physics, like “is gravity really a force?”, then you’re treating a school fact as if it was a research fact. Fundamental physics doesn’t care about forces in the same way. It uses different mathematical tools, like Lagrangians and Hamiltonians, to calculate the motion of objects in systems, and uses “force” in a pop science way to describe fundamental interactions.

If you get good enough at this, you can spot which things you learned in school were likely just scaffolding “school facts”, and which are firm enough that they may hold further. Any simple division of the world into categories is likely a school fact, one that let you do exercises on your homework but gets much more complicated when the real world gets involved. Contradictory or messy concepts are usually another sign, showing something fuzzy used to get students comfortable rather than something precise enough for professionals to use. Keep an eye out, and even if you don’t yet know the real facts, you’ll know enough to know what you’re missing.

A Paper With a Bluesky Account

People make social media accounts for their pets. Why not a scientific paper?

Anthropologist Ed Hagen made a Bluesky account for his recent preprint, “Menopause averted a midlife energetic crisis with help from older children and parents: A simulation study.” The paper’s topic itself is interesting (menopause is surprisingly rare among mammals, and he has a plausible account of why), but not really the kind of thing I cover here.

Rather, it’s his motivation that’s interesting. Hagen didn’t make the account out of pure self-promotion or vanity. Instead, he’s promoting it as a novel approach to scientific publishing. Unlike Twitter, Bluesky is based on an open, decentralized protocol. Anyone can host an account compatible with Bluesky on their own computer, and anyone with the programming know-how can build a computer program that reads Bluesky posts. That means that nothing actually depends on Bluesky, in principle: the users have ultimate control.

Hagen’s idea, then, is that this could be a way to fulfill the role of scientific journals without channeling money and power to for-profit publishers. If each paper is hosted on a scientist’s own site, papers can link to one another by following each other’s accounts. Scientists on Bluesky can follow or like the paper, or comment on and discuss it, creating a way to measure interest from the scientific community and to aggregate reviews, two things journals are supposed to cover.

I must admit, I’m skeptical. The interface really seems poorly suited to this. Hagen’s paper’s account is called @menopause-preprint.edhagen.net. What happens when he publishes another paper on menopause? What will he call it? How is he planning to keep track of interactions from other scientists with an account for every single paper? Won’t swapping between fifteen Bluesky accounts every morning get tedious? Or will he just do this with papers he wants to promote?

I applaud the general idea. Decentralized hosting seems like a great way to get around some of the problems of academic publishing. But this will definitely take a lot more work, if it’s ever going to be viable on a useful scale.

Still, I’ll keep an eye on it, and see if others give it a try. Stranger things have happened.

Academia Tracks Priority, Not Provenance

A recent Correspondence piece in Nature Machine Intelligence points at an issue with using LLMs to write journal articles. LLMs are trained on enormous amounts of scholarly output, but the result is quite opaque: it is usually impossible to tell which sources influence a specific LLM-written text. That means that when a scholar uses an LLM, they may get a result that depends on another scholar’s work, without realizing it or documenting it. The ideas’ provenance gets lost, and the piece argues this is damaging, depriving scholars of credit and setting back progress.

It’s a good point. Provenance matters. If we want to prioritize funding for scholars whose ideas have the most impact, we need a way to track where ideas arise.

However, current publishing norms make essentially no effort to do this. Academic citations are not used to track provenance, and they are not typically thought of as tracking provenance. Academic citations track priority.

Priority is a central value in scholarship, with a long history. We give special respect to the first person to come up with an idea, make an observation, or do a calculation, and more specifically, the first person to formally publish it. We do this even if the person’s influence was limited, and even if the idea was rediscovered independently later on. In an academic context, being first matters.

In a paper, one is thus expected to cite the sources that have priority, that came up with an idea first. Someone who fails to do so will get citation request emails, and reviewers may request revisions to the paper to add in those missing citations.

One may also cite papers that were helpful, even if they didn’t come first. Tracking provenance in this way can be nice, a way to give direct credit to those who helped and point people to useful resources. But it isn’t mandatory in the same way. If you leave out a secondary source and your paper doesn’t use anything original to that source (like new notation), you’re much less likely to get citation request emails, or revision requests from reviewers. Provenance is just much lower priority.

In practice, academics track provenance in much less formal ways. Before citations, a paper will typically have an Acknowledgements section, where the authors thank those who made the paper possible. This includes formal thanks to funding agencies, but also informal thanks for “helpful discussions” that don’t meet the threshold of authorship.

If we cared about tracking provenance, those acknowledgements would be crucial information, an account of whose ideas directly influenced the ideas in the paper. But they’re not treated that way. No-one lists the number of times they’ve been thanked for helpful discussions on their CV or in a grant application; no-one considers these discussions for hiring or promotion. You can’t look them up on an academic profile or easily graph them in a metascience paper. Unlike citations, unlike priority, there is essentially no attempt to measure these tracks of provenance in any organized way.

Instead, provenance is often the realm of historians or history-minded scholars, writing long after the fact. For academics, the fact that Yang and Mills published their theory first is enough, we call it Yang-Mills theory. For those studying the history, the story is murkier: it looks like Pauli came up with the idea first, and did most of the key calculations, but didn’t publish when it looked to him like the theory couldn’t describe the real world. What’s more, there is evidence suggesting that Yang knew about Pauli’s result, that he had read a letter from him on the topic, that the idea’s provenance goes back to Pauli. But Yang published, Pauli didn’t. And in the way academia has worked over the last 75 years, that claim of priority is what actually mattered.

Should we try to track provenance? Maybe. Maybe the emerging ubiquity of LLMs should be a wakeup call, a demand to improve our tracking of ideas, in both artificial and human neural networks. Maybe we need to demand interpretability from our research tools, to insist that we can track every conclusion back to its evidence for every method we employ, to set a civilizational technological priority on the accurate valuation of information.

What we shouldn’t do, though, is pretend that we just need to go back to what we were doing before.

Ideally, Exams Are for the Students

I should preface this by saying I don’t actually know that much about education. I taught a bit in my previous life as a professor, yes, but I probably spent more time being taught how to teach than actually teaching.

Recently, the Atlantic had a piece about testing accommodations for university students, like extra time on exams, or getting to do an exam in a special distraction-free environment. The piece quotes university employees who are having more and more trouble satisfying these accommodations, and includes the statistic that 20 percent of undergraduate students at Brown and Harvard are registered as disabled.

The piece has kicked off a firestorm on social media, mostly focused on that statistic (which conveniently appears just before the piece’s paywall). People are shocked, and cynical. They feel like more and more students are cheating the system, getting accommodations that they don’t actually deserve.

I feel like there is a missing mood in these discussions, that the social media furor is approaching this from the wrong perspective. People are forgetting what exams actually ought to be for.

Exams are for the students.

Exams are measurement tools. An exam for a class says whether a student has learned the material, or whether they haven’t, and need to retake the class or do more work to get there. An entrance exam, or a standardized exam like the SAT, predicts a student’s future success: whether they will be able to benefit from the material at a university, or whether they don’t yet have the background for that particular program of study.

These are all pieces of information that are most important to the students themselves, that help them structure their decisions. If you want to learn the material, should you take the course again? Which universities are you prepared for, and which not?

We have accommodations, and concepts like disability, because we believe that there are kinds of students for whom the exams don’t give this information accurately. We think that a student with more time, or who can take the exam in a distraction-free environment, would get a more accurate idea of whether they need to retake the material, or whether they’re ready for a course of study, than they would if they had to take the exam under ordinary conditions. And we think we can identify the students for whom this matters, and the students for whom it doesn’t matter nearly as much.

These aren’t claims about our values, or about what students deserve. They’re empirical claims, about how test results correlate with outcomes the students want. The conversation, then, needs to be built on top of those empirical claims. Are we better at predicting the success of students that receive accommodations, or worse? Can we measure that at all, or are we just guessing? And are we communicating the consequences accurately to students, that exam results tell them something useful and statistically robust that should help them plan their lives?

Values come in later, of course. We don’t have infinite resources, as the Atlantic piece emphasizes. We can’t measure everyone with as much precision as we would like. At some level, generalization takes over and accuracy is lost. There is absolutely a debate to be had about which measurements we can afford to make, and which we can’t.

But in order to have that argument at all, we first need to agree on what we’re measuring. And I feel like most of the people talking about this piece haven’t gotten there yet.

Mandatory Dumb Acronyms

Sometimes, the world is silly for honest, happy reasons. And sometimes, it’s silly for reasons you never even considered.

Scientific projects often have acronyms, some of which are…clever, let’s say. Astronomers are famous for acronyms. Read this list, and you can find examples from 2D-FRUTTI and ABRACADABRA to WOMBAT and YORIC. Some of these aren’t even “really” acronyms, using letters other than the beginning of each word, multiple letters from a word, or both. (An egregious example from that list: VESTALE from “unVEil the darknesS of The gAlactic buLgE”.)

But here’s a pattern you’ve probably not noticed. I suspect you’ll see more of these…clever…acronyms in projects in Europe, and that they’ll show up in a wider range of fields, not just astronomy. And the reason why is the European Research Council.

In the US, scientific grants are spread out among different government agencies. Typical grants are small, the kind of thing that lets a group share a postdoc every few years, with different types of grants covering projects of different scales.

The EU, instead, has the European Research Council, or ERC, with a flagship series of grants covering different career stages: Starting, Consolidator, and Advanced. Unlike most US grants, these are large (supporting multiple employees over several years), individual (awarded to a single principal investigator, not a collaboration) and general (the ERC uses the same framework across multiple fields, from physics to medicine to history).

That means there are a lot of medium-sized research projects in Europe that are funded by an ERC grant. And each of them is required to have an acronym.

Why? Who knows? “Acronym” is simply one of the un-skippable entries in the application forms, with a pre-set place of honor in their required grant proposal format. Nobody checks whether it’s a “real acronym”, so in practice it often isn’t, turning into some sort of catchy short name with “acronym vibes”. It, like everything else on these forms, is optimized to catch the attention of a committee of scientists who really would rather be doing something else, often discussed and refined by applicants’ mentors and sometimes even dedicated university staff.

So if you run into a scientist in Europe who proudly leads a group with a cutesy, vaguely acronym-adjacent name? And you keep running into these people?

It’s not a coincidence, and it’s not just scientists’ sense of humor. It’s the ERC.

Explain/Teach/Advocate

Scientists have different goals when they communicate, leading to different styles, or registers, of communication. If you don’t notice what register a scientist is using, you might think they’re saying something they’re not. And if you notice someone using the wrong register for a situation, they may not actually be a scientist.

Sometimes, a scientist is trying to explain an idea to the general public. The point of these explanations is to give you appreciation and intuition for the science, not to understand it in detail. This register makes heavy use of metaphors, and sometimes also slogans. It should almost never be taken literally, and a contradiction between two different scientists’ explanations usually just means they are using incompatible metaphors for the same concept. Sometimes, scientists who do this a lot will comment on other metaphors you might have heard, referencing other slogans to help explain what those explanations miss. They do this knowing that, in the end, they agree on the actual science: they’re just trying to give you another metaphor, with a deeper intuition for a neglected part of the story.

Other times, scientists are trying to teach a student to be able to do something. Teaching can use metaphors or slogans as introductions, but quickly moves past them, because it wants to show the students something they can use: an equation, a diagram, a classification. If a scientist shows you any of these equations/diagrams/classifications without explaining what they mean, then you’re not the student they had in mind: they had designed their lesson for someone who already knew those things. Teaching may convey the kinds of appreciation and intuition that explanations for the general public do, but that goal gets much less emphasis. The main goal is for students with the appropriate background to learn to do something new.

Finally, sometimes scientists are trying to advocate for a scientific point. In this register, and only in this register, are they trying to convince people who don’t already trust them. This kind of communication can include metaphors and slogans as decoration, but the bulk will be filled with details, and those details should constitute evidence: they should be a structured argument, one that lays out, scientifically, why others should come to the same conclusion.

A piece that tries to address multiple audiences can move between registers in a clean way. But if the register jumps back and forth, or if the wrong register is being used for a task, that usually means trouble. That trouble can be simple boredom, like a scientist’s typical conference talk that can’t decide whether it just wants other scientists to appreciate the work, whether it wants to teach them enough to actually use it, or whether it needs to convince any skeptics. It can also be more sinister: a lot of crackpots write pieces that are ostensibly aimed at convincing other scientists, but are almost entirely metaphors and slogans, pieces good at tugging on the general public’s intuition without actually giving scientists anything meaningful to engage with.

If you’re writing, or speaking, know what register you need to use to do what you’re trying to do! And if you run into a piece that doesn’t make sense, consider that it might be in a different register than you thought.

Requests for an Ethnography of Cheating

What is AI doing to higher education? And what, if anything, should be done about it?

Chad Orzel at Counting Atoms had a post on this recently, tying the question to a broader point. There is a fundamental tension in universities between actual teaching and learning on one side, and credentials on the other. A student who just wants the piece of paper at the end has no reason not to cheat if they can get away with it, so the easier it becomes to get away with cheating (say, by using AI), the less meaningful the credential gets. Meanwhile, professors who want students to actually learn something are reduced to trying to “trick” these goal-oriented students into accidentally doing something that makes them fall in love with a subject, while being required to police the credential side of things.

Social science, as Orzel admits and emphasizes, is hard. Any broad-strokes picture like this breaks down into details, and while Orzel talks through some of those details, he and I are of course not social scientists.

Because of that, I’m not going to propose my own “theory” here. Instead, think of this post as a request.

I want to read an ethnography of cheating. Like other ethnographies, it should involve someone spending time in the culture in question (here, cheating students), talking to the people involved, and getting a feeling for what they believe and value. Ideally, it would be augmented with an attempt at quantitative data, like surveys, that estimate how representative the picture is.

I suspect that cheating students aren’t just trying to get a credential. Part of the reason is that I remember teaching pre-meds. In the US, students don’t directly study medicine as a Bachelor’s degree. Instead, they study other subjects as pre-medical students (“pre-meds”), and then apply to medical school, which grants a degree on the same level as a PhD. As part of their application, they include a standardized test called the MCAT, which checks that they have the basic level of math and science that the medical schools expect.

A pre-med in a physics class, then, has good reason to want to learn: the better they know their physics, the better they will do on the MCAT. If cheating was mostly about just trying to get a credential, pre-meds wouldn’t cheat.

I’m pretty sure they do cheat, though. I didn’t catch any cheaters back when I taught, but there were a lot of students who tried to push the rules, pre-meds and not.

Instead, I think there are a few other motivations involved. And in an ethnography of cheating, I’d love to see some attempt to estimate how prevalent they are:

  1. Temptation: Maybe students know that they shouldn’t cheat, in the same way they know they should go to the gym. They want to understand the material and learn in the same way people who exercise have physical goals. But the mind, and flesh, are weak. You have a rough week, you feel like you can’t handle the work right now. So you compensate. Some of the motivation here is still due to credentials: a student who shrugs and accepts that their breakup will result in failing a course is a student who might have to pay for an extra year of ultra-expensive US university education to get that credential. But I suspect there is a more fundamental motivation here, related to ego and easy self-deception. If you do the assignment, even if you cheat for part of it, you get to feel like you did it, while if you just turn in a blank page you have to accept the failure.
  2. Skepticism: Education isn’t worth much if it doesn’t actually work. Students may be skeptical that the things that professors are asking them to do actually help them learn what they want to learn, or that the things the professors want them to learn are actually the course’s most valuable content. A student who uses ChatGPT to write an essay might believe that they will never have to write something without ChatGPT in life, so why not use it now? Sometimes professors simply aren’t explicit about what an exercise is actually meant to teach (there have been a huge number of blog posts explaining that writing is meant to teach you to think, not to write), and sometimes professors are genuinely pretty bad at teaching, since there is little done to retain the good ones in most places. A student in this situation still has to be optimistic about some aspect of the education, at some time. But they may be disillusioned, or just interested in something very different.
  3. Internalized Expectations: Do employers actually care if you get a bad grade? Does it matter? By the time a student is in college, they’ve been spending half their waking hours in a school environment for over a decade. Maybe the need to get good grades is so thoroughly drilled in that the actual incentives don’t matter. If you think of yourself as the kind of person who doesn’t fail courses, and you start failing, what do you do?
  4. External Non-Credential Expectations: Don’t worry about the employers, worry about the parents. Some college students have the kind of parents who keep checking in on how they’re doing, who want to see evidence and progress the same way they did when they were kids. Any feedback, no matter how much it’s intended to teach, not to judge, might get twisted into a judgement. Better to avoid that judgement, right?
  5. Credentials, but for the Government, not Employers: Of course, for some students, failing really does wreck their life. If you’re on the kind of student visa that requires you to maintain grades at a certain level, you’ve got a much stronger incentive to cheat, imposed for much less reason.

If you’re aware of a good ethnography of cheating, let me know! And if you’re a social scientist, consider studying this!

What You’re Actually Scared of in Impostor Syndrome

Academics tend to face a lot of impostor syndrome. Something about a job with no clear criteria for success, where you could always in principle do better and you mostly only see the cleaned-up, idealized version of others’ work, is a recipe for driving people utterly insane with fear.

The way most of us talk about that fear, it can seem like a cognitive bias, like a failure of epistemology. “Competent people think they’re less competent than they are,” the less-discussed half of the Dunning-Kruger effect.

(I’ve talked about it that way before. And, in an impostor-syndrome-inducing turn of events, I got quoted in a news piece in Nature about it.)

There’s something missing in that perspective, though. It doesn’t really get across how impostor syndrome feels. There’s something very raw about it, something that feels much more personal and urgent than an ordinary biased self-assessment.

To get at the core of it, let me ask a question: what happens to impostors?

The simple answer, the part everyone will admit to, is to say they stop getting grants, or stop getting jobs. Someone figures out they can’t do what they claim, and stops choosing them to receive limited resources. Pretty much anyone with impostor syndrome will say that they fear this: the moment that they reach too far, and the world decides they aren’t worth the money after all.

In practice, it’s not even clear that that happens. You might have people in your field who are actually thought of as impostors, on some level. People who get snarked about behind their back, people where everyone rolls their eyes when they ask a question at a conference and the question just never ends. People who are thought of as shiny storytellers without substance, who spin a tale for journalists but aren’t accomplishing anything of note. Those people…aren’t facing consequences at all, really! They keep getting the grants, they keep finding the jobs, and the ranks of people leaving for industry are instead mostly filled with those you respect.

Instead, I think what we fear when we feel impostor syndrome isn’t the obvious consequence, or even the real consequence, but something more primal. Primatologists and psychologists talk about our social brain, and the role of ostracism. They talk about baboons who piss off the alpha and get beat up and cast out of the group, how a social animal on their own risks starvation and becomes easy prey for bigger predators.

I think when we wake up in a cold sweat remembering how we had no idea what that talk was about, and were too afraid to ask, it’s a fear on that level that’s echoing around in our heads. That the grinding jags of adrenaline, the run-away-and-hide feeling of never being good enough, the desperate unsteadiness of trying to sound competent when you’re sure that you’re not and will get discovered at any moment…that’s not based on any realistic fears about what would happen if you got caught. That’s your monkey-brain, telling you a story drilled down deep by evolution.

Does that help? I’m not sure. If you manage to tell your inner monkey that it won’t get eaten by a lion if its friends stop liking it, let me know!

The Rocks in the Ground Era of Fundamental Physics

It’s no secret that the early twentieth century was a great time to make progress in fundamental physics. On one level, it was an era when huge swaths of our understanding of the world were being rewritten, with relativity and quantum mechanics just being explored. It was a time when a bright student could guide the emergence of whole new branches of scholarship, and recently discovered physical laws could influence world events on a massive scale.

Put that way, it sounds like it was a time of low-hanging fruit, the early days of a field when great strides can be made before the easy problems are all solved and only the hard ones are left. And that’s part of it, certainly: the fields sprung from that era have gotten more complex and challenging over time, requiring more specialized knowledge to make any kind of progress. But there is also a physical reason why physicists had such an enormous impact back then.

The early twentieth century was the last time that you could dig up a rock out of the ground, do some chemistry, and end up with a discovery about the fundamental laws of physics.

When scientists like Curie and Becquerel were working with uranium, they didn’t yet understand the nature of atoms. The distinctions between elements were described in qualitative terms, and were only just beginning to be physically understood. That meant that a weird object in nature, “a weird rock”, could do quite a lot of interesting things.

And once you find a rock that does something physically unexpected, you can scale up. From the chemistry experiments of a single scientist’s lab, countries can build industrial processes to multiply the effect. Nuclear power and the bomb were such radical changes because they represented the end effect of understanding the nature of atoms, and atoms are something people could build factories to manipulate.

Scientists went on to push that understanding further. They wanted to know what the smallest pieces of matter were composed of, to learn the laws behind the most fundamental laws they knew. And with relativity and quantum mechanics, they could begin to do so systematically.

US particle physics has a nice bit of branding. They talk about three frontiers: the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier.

Some things we can’t yet test in physics are gated by energy. If we haven’t discovered a particle, it may be because it’s unstable, decaying quickly into lighter particles so we can’t observe it in everyday life. If these particles interact appreciably with particles of everyday matter like protons and electrons, then we can try to make them in particle colliders. These end up creating pretty much everything up to a certain mass, due to a combination of the tendency in quantum mechanics for everything that can happen to happen, and relativity’s E=mc^2. In the mid-20th century these particle colliders were serious pieces of machinery, but still small enough to industrialize: now, there are so-called medical accelerators in many hospitals based on their designs. But current particle accelerators are a different beast, massive facilities built by international collaborations. This is the Energy Frontier.

Some things in physics are gated by how rare they are. Some particles interact only very faintly with other particles, so to detect them, physicists have to scan a huge chunk of matter, a giant tank of argon or a kilometer of Antarctic ice, looking for deviations from the norm. Over time, these experiments have gotten bigger, looking for more and more subtle effects. A few weird ones still fit on tabletops, but only because they have the tools to measure incredibly small variations. Most are gigantic. This is the Intensity Frontier.

Finally, the Cosmic Frontier looks for the unknown behind both kinds of gates, using the wider universe to look at events of extremely high energy or on extremely large scales.

Pushing these frontiers has meant cleaning up our understanding of the fundamental laws of physics up to these frontiers. It means that whatever is still hiding, it either requires huge amounts of energy to produce, or is an extremely rare, subtle effect.

That means that you shouldn’t expect another nuclear bomb out of fundamental physics. Physics experiments are already working on vast scales, to the extent that a secret government project would have to be smaller than publicly known experiments, in physical size, energy use, and budget. And you shouldn’t expect another nuclear power plant, either: we’ve long passed the kinds of things you could devise a clever industrial process to take advantage of at scale.

Instead, new fundamental physics will only be directly useful once we’re the kind of civilization that operates on a much greater scale than we do today. That means larger than the solar system: there wouldn’t be much advantage, at this point, to putting a particle physics experiment on the edge of the Sun. It means the kind of civilization that tosses galaxies around.

It means that right now, you won’t see militaries or companies pushing the frontiers of fundamental physics, unlike the way they might have wanted to at the dawn of the twentieth century. By the time fundamental physics is useful in that way, all of these actors will likely be radically different: companies, governments, and in all likelihood human beings themselves. Instead, supporting fundamental physics right now is an act of philanthropy, maintaining a practice because it maintains good habits of thought and produces powerful ideas, the same reasons organizations support mathematics or poetry. That’s not nothing, and fundamental physics is still often affordable as philanthropy goes. But it’s not changing the world, not the way physicists did in the early twentieth century.