How Small Scales Can Matter for Large Scales

For a certain type of physicist, nothing matters more than finding the ultimate laws of nature for its tiniest building-blocks, the rules that govern quantum gravity and tell us where the other laws of physics come from. But because they know very little about those laws at this point, they can predict almost nothing about observations on the larger distance scales we can actually measure.

“Almost nothing” isn’t nothing, though. Theoretical physicists don’t know nature’s ultimate laws. But some things about them can be reasonably guessed. The ultimate laws should include a theory of quantum gravity. They should explain at least some of what we see in particle physics now, accounting for why different particles have different masses in terms of a simpler theory. And they should “make sense”, respecting cause and effect, the laws of probability, and Einstein’s overall picture of space and time.

All of these are assumptions, of course. Further assumptions are needed to derive any testable consequences from them. But a few communities in theoretical physics are willing to take the plunge, and see what consequences their assumptions have.

First, there’s the Swampland. String theorists posit that the world has extra dimensions, which can be curled up in a variety of ways to hide from view, with different observable consequences depending on how the dimensions are curled up. This list of different observable consequences is referred to as the Landscape of possibilities. Based on that, some string theorists coined the term “Swampland” to represent an area outside the Landscape, containing observations that are incompatible with quantum gravity altogether, and tried to figure out what those observations would be.

In principle, the Swampland includes the work of all the other communities on this list, since a theory of quantum gravity ought to be consistent with other principles as well. In practice, people who use the term focus on consequences of gravity in particular. The earliest such ideas argued from thought experiments with black holes, finding results that seemed to demand that gravity be the weakest force for at least one type of particle. Later researchers would more frequently use string theory as an example, looking at what kinds of constructions people had been able to make in the Landscape to guess what might lie outside of it. They’ve used this to argue that dark energy might be temporary, and to try to figure out what traits new particles might have.
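
To give a flavor of the kind of statement involved: the black hole argument above leads to what’s called the weak gravity conjecture. Roughly, and glossing over order-one factors, it says there should be at least one charged particle light enough that the gauge repulsion between two copies of it beats their gravitational attraction,

$$ m \;\lesssim\; q\, g\, M_{\mathrm{Pl}} $$

where m and q are the particle’s mass and charge, g is the gauge coupling, and M_Pl is the Planck mass. Gravity, for that particle, is the weakest force.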

Second, I should mention naturalness. When talking about naturalness, people often use the analogy of a pen balanced on its tip. While possible in principle, it must have been set up almost perfectly, since any small imbalance would cause it to topple, and that perfection demands an explanation. Similarly, in particle physics, things like the mass of the Higgs boson and the strength of dark energy seem to be carefully balanced, so that a small change in how they were set up would lead to a much heavier Higgs boson or much stronger dark energy. The need for an explanation for the Higgs’ careful balance is why many physicists expected the Large Hadron Collider to discover additional new particles.
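
To sketch the balance a bit more explicitly (this is schematic, with the details depending on the model): the observed Higgs mass squared is the sum of a “bare” value and quantum corrections that grow with the energy scale Λ where new physics enters,

$$ m_{H,\mathrm{obs}}^2 \;=\; m_{H,\mathrm{bare}}^2 \;+\; \#\,\frac{g^2}{16\pi^2}\,\Lambda^2 $$

with # some number of order one. If Λ is far above the weak scale, the two terms on the right must cancel to many decimal places to leave the small number we observe on the left. That near-perfect cancellation is the pen balanced on its tip.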

As I’ve argued before, this kind of argument rests on assumptions about the fundamental laws of physics. It assumes that the fundamental laws explain the mass of the Higgs, not merely by giving it an arbitrary number but by showing how that number comes from a non-arbitrary physical process. It also assumes that we understand well how physical processes like that work, and what kinds of numbers they can give. That’s why I think of naturalness as a type of argument, much like the Swampland, that uses the smallest scales to constrain larger ones.

Third is a host of constraints that usually go together: causality, unitarity, and positivity. Causality comes from cause and effect in a relativistic universe. Because two distant events can appear to happen in different orders depending on how fast you’re going, any way to send signals faster than light is also a way to send signals back in time, causing all of the paradoxes familiar from science fiction. Unitarity comes from quantum mechanics. If quantum calculations are supposed to give the probability of things happening, those probabilities should make sense as probabilities: for example, they should never go above one.
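
In equations, unitarity is the statement that the scattering matrix S satisfies S†S = 1. Writing S = 1 + iT, this becomes

$$ -i\,(T - T^{\dagger}) \;=\; T^{\dagger} T $$

the optical theorem: the “imaginary part” of an amplitude is tied to the total probability that something actually happens, a total that can never overshoot one.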

You might guess that almost any theory would satisfy these constraints. But if you extend a theory to the smallest scales, some theories that otherwise seem sensible end up failing this test. Actually linking things up takes other conjectures about the mathematical form theories can have, conjectures that seem more solid than the ones underlying Swampland and naturalness constraints but that still can’t be conclusively proven. If you trust the conjectures, you can derive restrictions, often called positivity constraints when they demand that some set of observations is positive. There has been a renaissance in this kind of research over the last few years, including arguments that certain speculative theories of gravity can’t actually work.
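
A textbook-style example of such a restriction (my illustration, not tied to any particular paper): take a single scalar field whose leading correction to free propagation is

$$ \mathcal{L} \;=\; \tfrac{1}{2}(\partial\phi)^2 \;+\; \frac{c}{\Lambda^4}\,\big((\partial\phi)^2\big)^2 \;+\; \cdots $$

Demanding that whatever completes this theory at smaller scales respects causality and unitarity forces the coefficient to be positive, c > 0. A theory with negative c can look perfectly consistent on its own; it just can’t descend from anything sensible at shorter distances.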

Which String Theorists Are You Complaining About?

Do string theorists have an unfair advantage? Do they have an easier time getting hired, for example?

In one of the perennial arguments about this on Twitter, Martin Bauer posted a bar chart of faculty hires in the US by sub-field. The chart was compiled by Erich Poppitz from data in the US particle physics rumor mill, a website where people post information about who gets hired where for the US’s quite small number of permanent theoretical particle physics positions at research universities and national labs. The data covers 1994 to 2017, and shows one year, 1999, when more string theorists were hired than theorists in all other areas put together. The years around then also saw many string theorists hired, but the proportion starts falling around the mid-2000s…around when Lee Smolin wrote a book, The Trouble With Physics, arguing that string theorists had strong-armed their way into academic dominance. After that, the percentage of string theorists falls, oscillating between a tenth and a quarter of total hires.

Judging from that, you get the feeling that string theory’s critics are treating a temporary hiring fad as if it was a permanent fact. The late 1990’s were a time of high-profile developments in string theory that excited a lot of people. Later, other hiring fads dominated, often driven by experiments: I remember when the US decided to prioritize neutrino experiments and neutrino theorists had a much easier time getting hired, and there seem to be similar pushes now with gravitational waves, quantum computing, and AI.

Thinking about the situation in this way, though, ignores what many of the critics have in mind. That’s because the “string” column on that bar chart is not necessarily what people think of when they think of string theory.

If you look at the categories on Poppitz’s bar chart, you’ll notice something odd. “String” is itself a category. Another category, “lattice”, refers to lattice QCD, a method for computing the dynamics of quarks numerically. The third category, though, is a combination of three things: “ph/th/cosm”.

“Cosm” here refers to cosmology, another sub-field. “Ph” and “th” though aren’t really sub-fields. Instead, they’re arXiv categories, sections of the website arXiv.org where physicists post papers before they submit them to journals. The “ph” category is used for phenomenology, the type of theoretical physics where people try to propose models of the real world and make testable predictions. The “th” category is for “formal theory”, papers where theoretical physicists study the kinds of theories they use in more generality and develop new calculation methods, with insights that over time filter into “ph” work.

“String”, on the other hand, is not an arXiv category. When string theorists write papers, they’ll put them into “th” or “ph” or another relevant category (for example “gr-qc”, for general relativity and quantum cosmology). This means that when Poppitz distinguishes “ph/th/cosm” from “string”, he’s being subjective, using his own judgement to decide who counts as a string theorist.

So who counts as a string theorist? The simplest thing to do would be to check if their work uses strings. Failing that, they could use other tools of string theory and its close relatives, like Calabi-Yau manifolds, M-branes, and holography.

That might be what Poppitz was doing, but if he was, he was probably missing a lot of the people critics of string theory complain about. He even misses many people who describe themselves as string theorists. In an old post of mine I go through the talks at Strings, string theory’s big yearly conference, giving them finer-grained categories. The majority don’t use anything uniquely stringy.

Instead, I think critics of string theory have two kinds of things in mind.

First, most of the people who made their reputations on string theory are still in academia, and still widely respected. Some of them still work on string theory topics, but many now work on other things. Because they’re still widely respected, their interests have a substantial influence on the field. When one of them starts looking at connections to theories of two-dimensional materials, you get a whole afternoon of talks at Strings about theories of two-dimensional materials. Working on those topics probably makes it a bit easier to get a job, but also, many of the people working on them are students of these highly respected people, who for that reason alone have an easier time getting a job. If you’re a critic of string theory who thinks the founders of the field led physics astray, then you probably think they’re still leading physics astray even if they aren’t currently working on string theory.

Second, for many other people in physics, string theorists are their colleagues and friends. They’ll make fun of trends that seem overhyped and under-thought, like research on the black hole information paradox or the swampland, or hopes that a slightly tweaked version of supersymmetry will show up soon at the LHC. But they’ll happily use ideas developed in string theory when they prove handy, using supersymmetric theories to test new calculation techniques, string theory’s extra dimensions to inspire and ground new ideas for dark matter, or the math of strings themselves as interesting shortcuts to particle physics calculations. String theory is available as a reference to these people in a way that other quantum gravity proposals aren’t. That’s partly due to familiarity and shared language (I remember a talk at Perimeter where string theorists wanted to learn from practitioners from another area and the discussion got bogged down by how they were using the word “dimension”), but partly due to skepticism of the various alternate approaches. Most people have some idea in their heads of deep problems with various proposals: screwing up relativity, making nonsense out of quantum mechanics, or over-interpreting limited evidence. The most commonly believed criticisms are usually wrong, with objections long-known to practitioners of the alternate approaches, and so those people tend to think they’re being treated unfairly. But the wrong criticisms are often simplified versions of correct criticisms, passed down by the few people who dig deeply into these topics, criticisms that the alternative approaches don’t have good answers to.

The end result is that while string theory itself isn’t dominant, a sort of “string friendliness” is. Most of the jobs aren’t going to string theorists in the literal sense. But the academic world string theorists created keeps turning. People still respect string theorists and the research directions they find interesting, and people are still happy to collaborate and discuss with string theorists. For research communities people are more skeptical of, it must feel very isolating, like the world is still being run by their opponents. But this isn’t the kind of hegemony that can be solved by a revolution. Thinking that string theory is a failed research program, and people focused on it should have a harder time getting hired, is one thing. Thinking that everyone who respects at least one former string theorist should have a harder time getting hired is a very different goal. And if what you’re complaining about is “string friendliness”, not actual string theorists, then that’s what you’re asking for.

At Ars Technica Last Week, With a Piece on How Wacky Ideas Become Big Experiments

I had a piece last week at Ars Technica about the path ideas in physics take to become full-fledged experiments.

My original idea for the story was a light-hearted short news piece. A physicist at the University of Kansas, Steven Prohira, had just posted a proposal for wiring up a forest to detect high-energy neutrinos, using the trees like giant antennas.

Chatting to experts, what at first seemed silly started feeling like a hook for something more. Prohira has a strong track record, and the experts I talked to took his idea seriously. They had significant doubts, but I was struck by how answerable those doubts were, how rather than dismissing the whole enterprise they had in mind a list of questions one could actually test. I wrote a blog post laying out that impression here.

The editor at Ars was interested, so I dug deeper. Prohira’s story became a window on a wider-ranging question: how do experiments happen? How does a scientist convince the community to work on a project, and the government to fund it? How do ideas get tested before these giant experiments get built?

I tracked down researchers from existing experiments and got their stories. They told me how detecting particles from space takes ingenuity, with wacky ideas involving the natural world being surprisingly common. They walked me through tales of prototypes and jury-rigging and feasibility studies and approval processes.

The highlights of those tales ended up in the piece, but there was a lot I couldn’t include. In particular, I had a long chat with Sunil Gupta about the twists and turns taken by the GRAPES experiment in India. Luckily for you, some of the most interesting stories have already been covered, for example their measurement of the voltage of a thunderstorm or repurposing used building materials to keep costs down. I haven’t yet found his story about stirring wavelength-shifting chemicals all night using a propeller mounted on a power drill, but I suspect it’s out there somewhere. If not, maybe it can be the start of a new piece!

A Tale of Two Experiments

Before I begin, two small announcements:

First: I am now on bluesky! Instead of having a separate link in the top menu for each social media account, I’ve changed the format so now there are social media buttons in the right-hand sidebar, right under the “Follow” button. Currently, they cover tumblr, twitter, and bluesky, but there may be more in future.

Second, I’ve put a bit more technical advice on my “Open Source Grant Proposal” post, so people interested in proposing similar research can have some ideas about how best to pitch it.

Now, on to the post:


Gravitational wave telescopes are possibly the most exciting research program in physics right now. Big, expensive machines with more on the way in the coming decades, gravitational wave telescopes need both precise theoretical predictions and high-quality data analysis. For some, gravitational wave telescopes have the potential to reveal genuinely new physics, to probe deviations from general relativity that might be related to phenomena like dark matter, though so far no such deviations have been conclusively observed. In the meantime, they’re teaching us new consequences of known physics. For example, the unusual population of black holes observed by LIGO has motivated those who model star clusters to consider processes in which three stars or black holes interact with each other at once, discovering that these processes are more important than expected.

Particle colliders are probably still exciting to the general public, but for many there is a growing sense of fatigue and disillusionment. Current machines like the LHC are big and expensive, and proposed future colliders would be even costlier and take decades to come online, in addition to requiring a huge amount of effort from the community in terms of precise theoretical predictions and data analysis. Some argue that colliders still might uncover genuinely new physics, deviations from the standard model that might explain phenomena like dark matter, but as no such deviations have yet been conclusively observed people are increasingly skeptical. In the meantime, most people working on collider physics are focused on learning new consequences of known physics. For example, by comparing observed results with theoretical approximations, people have found that certain high-energy processes usually left out of calculations are actually needed to get a good agreement with the data, showing that these processes are more important than expected.

…ok, you see what I did there, right? Was that fair?

There are a few key differences, with implications to keep in mind:

First, collider physics is significantly more expensive than gravitational wave physics. LIGO took about $300 million to build and spends about $50 million a year. The LHC took about $5 billion to build and costs $1 billion a year to run. That cost still puts both well below several other government expenses that you probably consider frivolous (please don’t start arguing about which ones in the comments!), but it does mean collider physics demands a bit of a stronger argument.

Second, the theoretical motivation to expect new fundamental physics out of LIGO is generally considered much weaker than for colliders. A large part of the theoretical physics community thought that they had a good argument why they should see something new at the LHC. In contrast, most theorists have been skeptical of the kinds of modified gravity theories that have dramatic enough effects that one could measure them with gravitational wave telescopes, with many of these theories having other pathologies or inconsistencies that made people wary.

Third, the general public finds astrophysics cooler than particle physics. Somehow, telling people “pairs of black holes collide more often than we thought because sometimes a third star in the neighborhood nudges them together” gets people much more excited than “pairs of quarks collide more often than we thought because we need to re-sum large logarithms differently”, even though I don’t think there’s a real “principled” difference between them. Neither reveals new laws of nature; both are upgrades to our ability to model how real physical objects behave; neither is useful to know for anybody living on Earth in the present day.

With all this in mind, my advice to gravitational wave physicists is to try, as much as possible, not to lean on stories about dark matter and modified gravity. You might learn something, and it’s worth occasionally mentioning that. But if you don’t, you run a serious risk of disappointing people. And you have such a big PR advantage if you just lean on new consequences of bog-standard GR that those results really should get the bulk of the news coverage if you want to keep the public on your side.

The Nowhere String

Space and time seem as fundamental as anything can get. Philosophers like Immanuel Kant thought that they were inescapable, that we could not conceive of the world without space and time. But increasingly, physicists suspect that space and time are not as fundamental as they appear. When they try to construct a theory of quantum gravity, physicists find puzzles, paradoxes that suggest that space and time may just be approximations to a more fundamental underlying reality.

One piece of evidence that quantum gravity researchers point to is the existence of dualities. These are pairs of theories that seem to describe different situations, including with different numbers of dimensions, but that are secretly indistinguishable, connected by a “dictionary” that lets you interpret any observation in one world in terms of an equivalent observation in the other world. By itself, duality doesn’t mean that space and time aren’t fundamental: as I explained in a blog post a few years ago, it could still be that one “side” of the duality is a true description of space and time, and the other is just a mathematical illusion. To show definitively that space and time are not fundamental, you would want to find a situation where they “break down”, where you can go from a theory that has space and time to a theory that doesn’t. Ideally, you’d want a physical means of going between them: some kind of quantum field that, as it shifts, changes the world between space-time and not space-time.

What I didn’t know when I wrote that post was that physicists already knew about such a situation in 1993.

Back when I was in pre-school, famous string theorist Edward Witten was trying to understand something that others had described as a duality, and realized there was something more going on.

In string theory, particles are described by lengths of vibrating string. In practice, string theorists like to think about what it’s like to live on the string itself, seeing it vibrate. In that world, there are two dimensions, one space dimension back and forth along the string and one time dimension going into the future. To describe the vibrations of the string in that world, string theorists use the same kind of theory that people use to describe physics in our world: a quantum field theory. In string theory, you have a two-dimensional quantum field theory stuck “inside” a theory with more dimensions describing our world. You can tell that the larger world exists by seeing the kinds of vibrations your two-dimensional world can have, through a type of quantum field called a scalar field. With ten scalar fields, ten different ways you can push energy into your stringy world, you can infer that the world around you is a space-time with ten dimensions.
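
Schematically (and sweeping the fermions and other technicalities of the superstring under the rug), the two-dimensional theory on the string is just a theory of free scalar fields,

$$ S \;=\; \frac{1}{4\pi\alpha'} \int d^2\sigma \;\partial_a X^{\mu}\,\partial^a X_{\mu}, \qquad \mu = 0,\dots,9 $$

and the ten scalars X^μ are read off as the string’s coordinates in a ten-dimensional space-time.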

String theory has “extra” dimensions beyond the three of space and one of time we’re used to, and these extra dimensions can be curled up in various ways to hide them from view, often using a type of shape called a Calabi-Yau manifold. In the late 80’s and early 90’s, string theorists had found a similarity between the two-dimensional quantum field theories you get by folding string theory around some of these Calabi-Yau manifolds and another type of two-dimensional quantum field theory related to theories used to describe superconductors. People called the two types of theories dual, but Witten figured out there was something more going on.

Witten described the two types of theories in the same framework, and showed that they weren’t two equivalent descriptions of the same world. Rather, they were two different ways one theory could behave.

The two behaviors were connected by something physical: the value of a quantum field called a modulus field. This field can be described by a number, and that number can be positive or negative.

When the modulus field is a large positive number, then the theory behaves like string theory twisted around a Calabi-Yau manifold. In particular, the scalar fields have many different values they can take, values that are smoothly related to each other. These values are nothing more or less than the position of the string in space and time. Because the scalars can take many values, the string can sit in many different places, and because the values are smoothly related to each other, the string can smoothly move from one place to another.

When the modulus field is a large negative number, then the theory is very different. What people thought of as the other side of the duality, a theory like the theories used to describe superconductors, is the theory that describes what happens when the modulus field is large and negative. In this theory, the scalars can no longer take many values. Instead, they have one option, one stable solution. That means that instead of there being many different places the string could sit, describing space, there are no different places, and thus no space. The string lives nowhere.
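
For a slightly more concrete picture (my schematic paraphrase of Witten’s “gauged linear sigma model” setup, with lots of detail suppressed): the scalar fields φ_i and one extra field p are tied together by a constraint controlled by the modulus, call it r,

$$ \sum_i |\phi_i|^2 \;-\; N\,|p|^2 \;=\; r $$

When r is large and positive, the φ_i can’t all vanish, and their allowed values trace out a Calabi-Yau shape: the phase with space. When r is large and negative, it’s p that can’t vanish, the φ_i are stuck near zero, and the geometric picture disappears: the phase where the string lives nowhere.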

These are two very different situations, one with space and one without. And they’re connected by something physical. You could imagine manipulating the modulus field, using other fields to funnel energy into it, pushing it back and forth from a world with space to a world of nowhere. Much more than the examples I was aware of, this is a super-clear example of a model where space is not fundamental, but where it can be manipulated, existing or not existing based on physical changes.

We don’t know whether a model like this describes the real world. But it’s gratifying to know that it can be written down, that there is a picture, in full mathematical detail, of how this kind of thing works. Hopefully, it makes the idea that space and time are not fundamental sound a bit more reasonable.

The “That’s Neat” Level

Everything we do, we do for someone.

The simplest things we do for ourselves. We grab that chocolate bar on the table and eat it, and it makes us happier.

Unless the chocolate bar is homemade, we probably paid money for it. We do other things, working for a living, to get the money to get those chocolate bars for ourselves.

(We also get chocolate bars for our loved ones, or for people we care about. Whether this is not in a sense also getting a chocolate bar for yourself is left as an exercise to the reader.)

What we do for the money, in turn, is driven by what would make someone else happier. Sometimes this is direct: you cut someone’s hair, they enjoy the breeze, they pay you, you enjoy the chocolate.

Other times, this gets mediated. You work in HR at a haircut chain. The shareholders want more money, to buy things like chocolate bars, so they vote for a board who wants to do what the shareholders want so as not to be in breach of contract and get fewer chocolate bars, so the board tells you to do things they believe will achieve that, and you do them because that’s how you get your chocolate bars. Every so often, the shareholders take a look at how many chocolate bars they can afford and adjust.

Compared to all this, academia is weirdly un-mediated.

It gets the closest to this model with students. Students want to learn certain things because they will allow them to provide other people with better services in future, which they can use to buy chocolate bars, and other things for the sheer pleasure, a neat experience almost comparable to a chocolate bar. People running universities want more money from students so they can spend it on things like giant statues of chocolate bars, so they instruct people working in the university to teach more of the things students want. (Typically in a very indirect way, for example funding a department in the US based on number of majors rather than number of students.)

But there’s a big chunk of academics whose performance is mostly judged not by their teaching, but by their research. They are paid salaries by departments based on the past quality of their research, or paid out of grants awarded based on the expected future quality of their research. (Or to combine them, paid salaries by departments based on the expected size of their grants.)

And in principle, that introduces many layers of mediation. The research universities and grant agencies are funded by governments, which pool money together in the expectation that someday by doing so they will bring about a world where more people can eat chocolate bars.

But the potential to bring about a world of increased chocolate bars isn’t like maximizing shareholder value. Nobody can check, one year later, how much closer you are to the science-fueled chocolate bar utopia.

And so in practice, in science, people fund you because they think what you’re doing is neat. Because it scratches the chocolate-bar-shaped hole in their brains. They might have some narrative about how your work could lead to the chocolate bar utopia the government is asking for, but it’s not like they’re calculating the expected distribution of chocolate bars if they fund your project versus another. You have to convince a human being, not that you are doing something instrumentally and measurably useful…but that you are doing something cool.

And that makes us very weird people! Halfway between haircuts and HR, selling a chocolate bar that promises to be something more.

Replacing Space-Time With the Space in Your Eyes

Nima Arkani-Hamed thinks space-time is doomed.

That doesn’t mean he thinks it’s about to be destroyed by a supervillain. Rather, Nima, like many physicists, thinks that space and time are just approximations to a deeper reality. In order to make sense of gravity in a quantum world, seemingly fundamental ideas, like that particles move through particular places at particular times, will probably need to become more flexible.

But while most people who think space-time is doomed research quantum gravity, Nima’s path is different. Nima has been studying scattering amplitudes, formulas used by particle physicists to predict how likely particles are to collide in particular ways. He has been trying to find ways to calculate these scattering amplitudes without referring directly to particles traveling through space and time. In the long run, the hope is that knowing how to do these calculations will help suggest new theories beyond particle physics, theories that can’t be described with space and time at all.

Ten years ago, Nima figured out how to do this in a particular theory, one that doesn’t describe the real world. For that theory he was able to find a new picture of how to calculate scattering amplitudes based on a combinatorial, geometric space with no reference to particles traveling through space-time. He gave this space the catchy name “the amplituhedron”. In the years since, he found a few other “hedra” describing different theories.

Now, he’s got a new approach. The new approach doesn’t have the same kind of catchy name: people sometimes call it surfaceology, or curve integral formalism. Like the amplituhedron, it involves concepts from combinatorics and geometry. It isn’t quite as “pure” as the amplituhedron: it uses a bit more from ordinary particle physics, and while it avoids specific paths in space-time it does care about the shape of those paths. Still, it has one big advantage: unlike the amplituhedron, Nima’s new approach looks like it can work for at least a few of the theories that actually describe the real world.

The amplituhedron was mysterious. Instead of space and time, it described the world in terms of a geometric space whose meaning was unclear. Nima’s new approach also describes the world in terms of a geometric space, but this space’s meaning is a lot more clear.

The space is called “kinematic space”. That probably still sounds mysterious. “Kinematic” in physics refers to motion. In the beginning of a physics class, when you study velocity and acceleration before you’ve introduced a single force, you’re studying kinematics. In particle physics, kinematics refers to the motion of the particles you detect. If you see an electron going up and to the right at a tenth the speed of light, those are its kinematics.

Kinematic space, then, is the space of observations. By saying that his approach is based on ideas in kinematic space, what Nima is saying is that it describes colliding particles not based on what they might be doing before they’re detected, but on mathematics that asks questions only about facts about the particles that can be observed.

(For the experts: this isn’t quite true, because he still needs a concept of loop momenta. He’s getting the actual integrands from his approach, rather than the dual definition he got from the amplituhedron. But he does still have to integrate one way or another.)
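
If you want something concrete to picture: for two particles colliding and producing two particles, kinematic space is essentially the space of invariants you can build from the measured momenta, the Mandelstam variables

$$ s = (p_1 + p_2)^2, \qquad t = (p_1 - p_3)^2 $$

numbers computed entirely from what lands in the detector, with no reference to what happened in between.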

Quantum mechanics famously has many interpretations. In my experience, Nima’s favorite interpretation is the one known as “shut up and calculate”. Instead of arguing about the nature of an indeterminately philosophical “real world”, Nima thinks quantum physics is a tool to calculate things people can observe in experiments, and that’s the part we should care about.

From a practical perspective, I agree with him. And I think if you have this perspective, then ultimately, kinematic space is where your theories have to live. Kinematic space is nothing more or less than the space of observations, the space defined by where things land in your detectors, or if you’re a human and not a collider, in your eyes. If you want to strip away all the speculation about the nature of reality, this is all that is left over. Any theory, of any reality, will have to be described in this way. So if you think reality might need a totally new weird theory, it makes sense to approach things like Nima does, and start with the one thing that will always remain: observations.

I Ain’t Afraid of No-Ghost Theorems

In honor of Halloween this week, let me say a bit about the spookiest term in physics: ghosts.

In particle physics, we talk about the universe in terms of quantum fields. There is an electron field for electrons, a gluon field for gluons, a Higgs field for Higgs bosons. The simplest fields, for the simplest particles, can be described in terms of just a single number at each point in space and time, a value describing how strong the field is. More complicated fields require more numbers.

Most of the fundamental forces have what we call vector fields. They’re called this because they are often described with vectors, lists of numbers that identify a direction in space and time. But these vectors actually contain too many numbers.

These extra numbers have to be tidied up in some way in order to describe vector fields in the real world, like the electromagnetic field or the gluon field of the strong nuclear force. There are a number of tricks, but the nicest is usually to add some extra particles called ghosts. Ghosts are designed to cancel out the extra numbers in a vector, leaving the right description for a vector field. They’re set up mathematically such that they can never be observed, they’re just a mathematical trick.

Mathematical tricks aren’t all that spooky (unless you’re scared of mathematics itself, anyway). But in physics, ghosts can take on a spookier role as well.

In order to do their job cancelling those numbers, ghosts need to function as a kind of opposite to a normal particle, a sort of undead particle. Normal particles have kinetic energy: as they go faster and faster, they have more and more energy. Said another way, it takes more and more energy to make them go faster. Ghosts have negative kinetic energy: the faster they go, the less energy they have.
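
In formulas, the difference is just a sign. Schematically, for a single field φ, a normal field’s kinetic energy grows the faster the field changes, while a ghost’s shrinks:

$$ E_{\mathrm{normal}} \sim +\tfrac{1}{2}\dot{\phi}^2 + \cdots, \qquad E_{\mathrm{ghost}} \sim -\tfrac{1}{2}\dot{\phi}^2 + \cdots $$

There is no bottom: you can always lower a ghost’s energy by making it wiggle harder.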

If ghosts are just a mathematical trick, that’s fine, they’ll do their job and cancel out what they’re supposed to. But sometimes, physicists accidentally write down a theory where the ghosts aren’t just a trick cancelling something out, but real particles you could detect, without anything to hide them away.

In a theory where ghosts really exist, the universe stops making sense. The universe defaults to the lowest energy it can reach. If making a ghost particle go faster reduces its energy, then the universe will make ghost particles go faster and faster, and make more and more ghost particles, until everything is jam-packed with super-speedy ghosts unto infinity, never-ending because it’s always possible to reduce the energy by adding more ghosts.

The absence of ghosts, then, is a requirement for a sensible theory. People prove theorems showing that their new ideas don’t create ghosts. And if your theory does start seeing ghosts…well, that’s the spookiest omen of all: an omen that your theory is wrong.

Transforming Particles Are Probably Here to Stay

It can be tempting to imagine the world in terms of lego-like building-blocks. Atoms are built by sticking together protons, neutrons, and electrons, and protons and neutrons are in turn made of stuck-together quarks. And while atoms, despite the name, aren’t indivisible, you might think that if you look small enough you’ll find indivisible, unchanging pieces, the smallest building-blocks of reality.

Part of that is true. We might, at some point, find the smallest pieces, the things everything else is made of. (In a sense, it’s quite likely we’ve already found them!) But those pieces don’t behave like lego blocks. They aren’t indivisible and unchanging.

Instead, particles, even the most fundamental particles, transform! The most familiar example is beta decay, a radioactive process where a neutron turns into a proton, emitting an electron and a neutrino. This process can be explained in terms of more fundamental particles: the neutron is made of three quarks, and one of those quarks transforms from a “down quark” to an “up quark”. But the explanation, as far as we can tell, doesn’t go any deeper. Quarks aren’t unchanging, they transform.

(Figure: a diagram of beta decay. Ignore the W, which is important but not for this post.)

There’s a suggestion I keep hearing, both from curious amateurs and from dedicated crackpots: why doesn’t this mean that quarks have parts? If a down quark can turn into an up quark, an electron, and a neutrino, then why doesn’t that mean that a down quark contains an up quark, an electron, and a neutrino?

The simplest reason is that this isn’t the only way a quark transforms. You can also have beta-plus decay, where an up quark transforms into a down quark, emitting a neutrino and the positively charged anti-particle of the electron, called a positron.

(Figure: a diagram of beta-plus decay. Also, ignore the directions of the arrows; that’s weird particle physics notation that doesn’t matter here.)

So to make your idea work, you’d somehow need each down quark to contain an up quark plus some other particles, and each up quark to contain a down quark plus some other particles.

Can you figure out some complicated scheme that works like that? Maybe. But there’s a deeper reason why this is the wrong path.

Transforming particles are part of a broader phenomenon, called particle production. Reactions in particle physics can produce new particles that weren’t there before. This wasn’t part of the earliest theories of quantum mechanics that described one electron at a time. But if you want to consider the quantum properties of not just electrons, but the electric field as well, then you need a more complete theory, called a quantum field theory. And in those theories, you can produce new particles. It’s as simple as turning on the lights: from a wiggling electron, you make light, which in a fully quantum theory is made up of photons. Those photons weren’t “part of” the electron to start with, they are produced by its motion.

If you want to avoid transforming particles, to describe everything in terms of lego-like building-blocks, then you want to avoid particle production altogether. Can you do this in a quantum field theory?

Actually, yes! But your theory won’t describe the whole of the real world.

In physics, we have examples of theories that don’t have particle production. These example theories have a property called integrability. They are theories we can “solve”, doing calculations that aren’t possible in ordinary theories, named after the fact that the oldest such theories in classical mechanics were solved using integrals.

Normal particle physics theories have conserved charges. Beta decay conserves electric charge: you start out with a neutral particle, and end up with one particle with positive charge and another with negative charge. It also conserves other things, like “electron-number” (the electron has electron-number one, the neutrino that comes out with it has electron-number minus one), energy, and momentum.
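
You can check the bookkeeping yourself. In terms of quarks, beta decay turns a down quark into an up quark, an electron, and a neutrino (strictly, an anti-neutrino, which is what carries electron-number minus one):

$$ d \;\to\; u + e^- + \bar{\nu}_e $$

Electric charge: −1/3 = +2/3 + (−1) + 0. Electron-number: 0 = 0 + 1 + (−1). Both ledgers balance, as they must.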

Integrable theories have those charges too, but they have more. In fact, they have an infinite number of conserved charges. As a result, you can show that in these theories it is impossible to produce new particles. It’s as if each particle’s existence is its own kind of conserved charge, one that can never be created or destroyed, so that each collision just rearranges the particles, never makes new ones.

But while we can write down these theories, we know they can’t describe the whole of the real world. In an integrable theory, when you build things up from the fundamental building-blocks, the energy levels follow a pattern. Compare the spacings between the energies of a bunch of different combinations, and you find a characteristic kind of statistical behavior called a Poisson distribution.

Look at the spacings between energy levels in the nuclei of atoms, and you’ll find a very different kind of behavior. It’s called a Wigner-Dyson distribution, and it indicates the opposite of integrability: chaos. Chaos is behavior that can’t be “solved” the way integrable theories can, behavior that has to be approached with simulations and approximations.
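
If you want to see the two behaviors side by side, here’s a minimal numerical sketch (my own toy example, not taken from any of the work above): spacings between independently scattered “integrable-style” levels pile up near zero, while spacings between eigenvalues of a random “chaotic-style” matrix avoid zero, the level repulsion characteristic of the Wigner-Dyson distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# "Integrable-style" spectrum: independent random levels give Poisson spacings
poisson_levels = np.sort(rng.uniform(0, N, size=N))
poisson_spacings = np.diff(poisson_levels)
poisson_spacings /= poisson_spacings.mean()   # normalize mean spacing to 1

# "Chaotic-style" spectrum: eigenvalues of a random symmetric (GOE) matrix
A = rng.normal(size=(N, N))
goe_matrix = (A + A.T) / np.sqrt(2)
levels = np.linalg.eigvalsh(goe_matrix)       # returned in ascending order
middle = levels[N // 4 : 3 * N // 4]          # middle of spectrum, roughly constant density
goe_spacings = np.diff(middle)
goe_spacings /= goe_spacings.mean()

# Poisson spacings cluster near zero; Wigner-Dyson spacings avoid it ("level repulsion")
print("fraction of spacings smaller than 0.1:")
print(f"  Poisson-style:      {np.mean(poisson_spacings < 0.1):.3f}")  # roughly 0.10
print(f"  Wigner-Dyson-style: {np.mean(goe_spacings < 0.1):.3f}")      # much smaller
```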

So if you really want there to be un-changing building-blocks, if you think that’s really essential? Then you should probably start looking at integrable theories. But I wouldn’t hold my breath if I were you: the real world seems pretty clearly chaotic, not integrable. And probably, that means particle production is here to stay.

Lack of Recognition Is a Symptom, Not a Cause

Science is all about being first. Once a discovery has been made, discovering the same thing again is redundant. At best, you can improve the statistical evidence…but for a theorem or a concept, you don’t even have that. This is why we make such a big deal about priority: the first person to discover something did something very valuable. The second, no matter how much effort and insight went into their work, did not.

Because priority matters, for every big scientific discovery there is a priority dispute. Read about science’s greatest hits, and you’ll find people who were left in the wings despite their accomplishments, people who arguably found key ideas and key discoveries earlier than the people who ended up famous. That’s why the idea Peter Higgs is best known for, the Higgs mechanism,

“is therefore also called the Brout–Englert–Higgs mechanism, or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, Anderson–Higgs mechanism, Anderson–Higgs–Kibble mechanism, Higgs–Kibble mechanism by Abdus Salam and ABEGHHK’tH mechanism (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and ‘t Hooft) by Peter Higgs.”

Those who don’t get the fame don’t get the rewards. The scientists who get less recognition than they deserve get fewer grants and worse positions, losing out on the career outcomes that the person famous for the discovery gets, even if the less-recognized scientist made the discovery first.

…at least, that’s the usual story.

You can start to see the problem when you notice a contradiction: if a discovery has already been made, what would bring someone to re-make it?

Sometimes, people actually “steal” discoveries, finding something that isn’t widely known and re-publishing it without acknowledging the author. More often, though, the re-discoverer genuinely didn’t know. That’s because, in the real world, we don’t all know about a discovery as soon as it’s made. It has to be communicated.

At minimum, this means you need enough time to finish ironing out the kinks of your idea, write up a paper, and disseminate it. In the days before the internet, dissemination might involve mailing pre-prints to universities across the ocean. It’s relatively easy, in such a world, for two people to get started discovering the same thing, write it up, and even publish it before they learn about the other person’s work.

Sometimes, though, something gets rediscovered long after the original paper should have been available. In those cases, the problem isn’t time, it’s reach. Maybe the original paper was written in a way that hid its implications. Maybe it was published in a way only accessible to a smaller community: either a smaller part of the world, like papers that were only available to researchers in the USSR, or a smaller research community. Maybe the time hadn’t come yet, and the whole reason why the result mattered had yet to really materialize.

For a result like that, a lack of citations isn’t really the problem. Rather than someone who struggles because their work is overlooked, these are people whose work is overlooked, in a sense, because they are struggling: because their work is having a smaller impact on the work of others. Acknowledging them later can do something, but it can’t change the fact that this was work published for a smaller community, yielding smaller rewards.

And ultimately, it isn’t just priority we care about, but impact. While the first European to make contact with the New World might have been Erik the Red, we don’t call the massive exchange of plants and animals between the Old and New World the “Red Exchange”. Erik the Red being “first” matters much less, historically speaking, than Columbus changing the world. Similarly, in science, being the first to discover something is meaningless if that discovery doesn’t change how other people do science, and the person who manages to cause that change is much more valuable than someone who does the same work but doesn’t manage the same reach.

Am I claiming that it’s fair when scientists get famous for other people’s discoveries? No, it’s definitely not fair. It’s not fair because most of the reasons one might have lesser reach aren’t under one’s control. Soviet scientists (for the most part) didn’t choose to be based in the USSR. People who make discoveries before they become relevant don’t choose the time in which they were born. And while you can get better at self-promotion with practice, there’s a limited extent to which often-reclusive scientists should be blamed for their lack of social skills.

What I am claiming is that addressing this isn’t a matter of scrupulously citing the “original” discoverer after the fact. That’s a patch, and a weak one. If we want to get science closer to the ideal, where each discovery only has to be made once, then we need to work to increase reach for everyone. That means finding ways to speed up publication, to let people quickly communicate preliminary ideas with a wide audience and change the incentives so people aren’t penalized when others take up those ideas. It means enabling conversations between different fields and sub-fields, building shared vocabulary and opportunities for dialogue. It means making a community that rewards in-person hand-shaking less and careful online documentation more, so that recognition isn’t limited to the people with the money to go to conferences and the social skills to schmooze their way through them. It means anonymity when possible, and openness when we can get away with it.

Lack of recognition and redundant effort are both bad, and they both stem from the same failures to communicate. Instead of fighting about who deserves fame, we should work to make sure that science is truly global and truly universal. We can aim for a future where no-one’s contribution goes unrecognized, and where anything that is known to one is known to all.