
Energy Is That Which Is Conserved

In school, kids learn about different types of energy. They learn about solar energy and wind energy, nuclear energy and chemical energy, electrical energy and mechanical energy, and potential energy and kinetic energy. They learn that energy is conserved, that it can never be created or destroyed, but only change form. They learn that energy makes things happen, that you can use energy to do work, that energy is different from matter.

Some, between good teaching and good students, manage to impose order on the jumble of concepts and terms. Others end up envisioning the whole story a bit like Pokemon, with different types of some shared “stuff”.

Energy isn’t “stuff”, though. So what is it? What relates all these different types of things?

Energy is something which is conserved.

The mathematician Emmy Noether showed that, when the laws of physics have a symmetry, they come with a conserved quantity. For example, because the laws of physics are the same from place to place, momentum is conserved. Similarly, because the laws of physics are the same from one time to another, Noether's theorem states that there must be some quantity related to time, some number we can calculate, that is conserved even as other things change. We call that number energy.
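If you like symbols, here is the skeleton of that statement in a drastically simplified form: for a single coordinate q with a Lagrangian L(q, \dot{q}) that has no explicit dependence on time, the combination

E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L

satisfies \frac{dE}{dt} = 0 whenever the equations of motion hold. Plug in the textbook Lagrangian L = \frac{1}{2}m\dot{q}^2 - V(q), and E becomes the familiar kinetic-plus-potential energy.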

If energy is that simple, why are there all those types?

Energy is a number we can calculate. It’s a number we can calculate for different things. If you have a detailed description of how something in physics works, you can use that description to calculate that thing’s energy. In school, you memorize formulas like \frac{1}{2}m v^2 and m g h. These are all formulas that, with a bit more knowledge, you could derive. They are the quantities that, for a system that meets the right conditions, are conserved. They are the things that, according to Noether’s theorem, stay the same.
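For a concrete example, drop a ball of mass m from a height h. As it falls, its downward speed v grows while its height shrinks, but the combination

E = \frac{1}{2}m v^2 + m g h

stays fixed: \frac{dE}{dt} = m v \frac{dv}{dt} + m g \frac{dh}{dt} = m v (g) + m g (-v) = 0, since the ball accelerates at g while its height drops at the rate v. The kinetic and potential pieces trade off, but the total, the energy, does not change.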

Because of this, you shouldn’t think of energy as a substance, or a fuel. Energy is something we can do: we physicists, and we students of physics. We can take a physical system, and see what about it ought to be conserved. Energy is an action, a calculation, a conceptual tool that can be used to make predictions.

Most things are, in the end.

Bonus Info For “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe”

I had a piece in Quanta Magazine recently, about a tricky paradox that’s puzzling quantum gravity researchers and some early hints at its resolution.

The paradox comes from trying to describe “closed universes”, which are universes where it is impossible to reach the edge, even if you had infinite time to do it. This could be because the universe wraps around like a globe, or because the universe is expanding so fast no traveler could ever reach an edge. Recently, theoretical physicists have been trying to describe these closed universes, and have noticed a weird issue: each such universe appears to have only one possible quantum state. In general, quantum systems have more possible states the more complex they are, so for a whole universe to have only one possible state is a very strange thing, implying a bizarrely simple universe. Most worryingly, our universe may well be closed. Does that mean that secretly, the real world has only one possible state?
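To give a sense of what “more possible states” means, take the simplest quantum building blocks, qubits. The usual counting says a system of N of them has a space of states with

\dim \mathcal{H}_{N\ \mathrm{qubits}} = 2^N

independent directions. By that kind of counting, a universe full of particles should have an absurdly large number of states, which is what makes “exactly one” so jarring.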

There is a possible solution that a few groups are playing around with. The argument that a closed universe has only one state depends on the fact that nothing inside a closed universe can reach the edge. But if nothing can reach the edge, then trying to observe the universe as a whole from outside would tell you nothing of use. Instead, any reasonable measurement would have to come from inside the universe. Such a measurement introduces a new kind of “edge of the universe”, this time not in the far distance, but close by: the edge between an observer and the rest of the world. And when you add that edge to the calculations, the universe stops being closed, and has all the many states it ought to.

This was an unusually tricky story for me to understand. I narrowly avoided several misconceptions, and I’m still not sure I managed to dodge all of them. Likewise, it was unusually tricky for the editors to understand, and I suspect it was especially tricky for Quanta’s social media team to understand.

It was also, quite clearly, tricky for the readers to understand. So I thought I would use this post to clear up a few misconceptions. I’ll say a bit more about what I learned investigating this piece, and try to clarify what the result does and does not mean.

Q: I’m confused about the math terms you’re using. Doesn’t a closed set contain its boundary?

A: Annoyingly, what physicists mean by a closed universe is a bit different from what mathematicians mean by a closed manifold, which is in turn more restrictive than what mathematicians mean by a closed set. One way to think about this that helped me is that in an open set you can take a limit that takes you out of the set, which is like being able to describe a (possibly infinite) path that takes you “out of the universe”. A closed set doesn’t have that: every path, no matter how long, still ends up inside the same universe.
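A one-dimensional picture of the difference: in the open interval (0,1), the sequence

x_n = \frac{1}{n}, \quad n = 2, 3, 4, \ldots

stays inside the set but converges to 0, which is outside it, while the closed interval [0,1] contains the limit of any convergent sequence of its points. That escaping sequence is the analogue of a path that leads “out of the universe”.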

Q: So a bunch of string theorists did a calculation and got a result that doesn’t make sense, a one-state universe. What if they’re just wrong?

A: Two things:

First, the people I talked to emphasized that it’s pretty hard to wiggle out of the conclusion. It’s not just a matter of saying you don’t believe in string theory and that’s that. The argument is based on pretty fundamental principles, and it’s not easy to propose a way out that doesn’t mess up something even more important.

That’s not to say it’s impossible. One of the people I interviewed, Henry Maxfield, thinks that some of the recent arguments are misunderstanding how to use one of their core techniques, in a way that accidentally presupposes the one-state universe.

But even he thinks that the bigger point, that closed universes have only one state, is probably true.

And that’s largely due to a second reason: there are older arguments that back the conclusion up.

One of the oldest dates back to John Wheeler, a physicist famous for both deep musings about the nature of space and time and coining evocative terms like “wormhole”. In the 1960’s, Wheeler argued that, in a theory where space and time can be curved, one should think of a system’s state as including every configuration it can evolve into over time, since it can be tricky to specify a moment “right now”. In a closed universe, you could expect a quantum system to explore every possible configuration…meaning that such a universe should be described by only one state.

Later, physicists studying holography ran into a similar conclusion. They kept noticing systems in quantum gravity where you can describe everything that happens inside by what happens on the edges. If there are no edges, that seems to suggest that in some sense there is nothing inside. Apparently, Lenny Susskind had a slide at the end of talks in the 90’s where he kept bringing up this point.

So even if the modern arguments are wrong, and even if string theory is wrong…it still looks like the overall conclusion is right.

Q: If a closed universe has only one state, does that make it deterministic, and thus classical?

A: Oh boy…

So, on the one hand, there is an idea, which I think also goes back to Wheeler, that asks: “if the universe as a whole has a wavefunction, how does it collapse?” One possibility is that the universe has only one state, so that nobody is needed to collapse the wavefunction: it is already in a definite state.

On the other hand, a universe with only one state does not actually look much like a classical universe. Our universe looks classical largely due to a process called decoherence, where small quantum systems interact with big quantum systems with many states, diluting quantum effects until the world looks classical. If there is only one state, there are no big systems to interact with, and the world has large quantum fluctuations that make it look very different from a classical universe.

Q: How, exactly, are you defining “observer”?

A: A few commenters helpfully chimed in to talk about how physics models observers as “witness” systems, objects that preserve some record of what happens to them. A simple example is a ball sitting next to a bowl: if you find the ball in the bowl later, it means something moved it. This process, preserving what happens and making it more obvious, is in essence how physicists think about observers.

However, this isn’t the whole story in this case. Here, different research groups introducing observers are doing it in different ways. That’s, in part, why none of them are confident they have the right answer.

One of the approaches describes an observer in terms of its path through space and time, its worldline. Instead of a detailed witness system with specific properties, all they do is pick out a line and say “the observer is there”. Identifying that line, and declaring it different from its surroundings, seems to be enough to recover the complexity the universe ought to have.

The other approach treats the witness system in a bit more detail. We usually treat an observer in quantum mechanics as infinitely large compared to the quantum systems they measure. This approach instead gives the observer a finite size, and uses that to estimate how far their experience will be from classical physics.

Crucially, neither approach is a matter of defining a physical object and looking for it in the theory. Given a collection of atoms, neither team can tell you what is an observer, and what isn’t. Instead, in each approach, the observer is arbitrary: a choice, made by us when we use quantum mechanics, of what to count as an observer and what to count as the rest of the world. That choice can be made in many different ways, and each approach tries to describe what happens when you change that choice.

This is part of what makes this approach uncomfortable to some more philosophically-minded physicists: it treats observers not as a predictable part of the physical world, but as a mathematical description used to make statements about the world.

Q: If these ideas come from AdS/CFT, which describes an open universe, how do you use them to describe a closed universe?

A: While more examples emerged later, initially theorists were thinking about two types of closed universes:

First, think about a black hole. You may have heard that when you fall into a black hole, you watch the whole universe age away before your eyes, due to the dramatic differences in the passage of time caused by the extreme gravity. Once you’ve seen the outside universe fade away, you are essentially in a closed universe of your own. The outside world will never affect you again, and you are isolated, with no path to the outside. These black hole interiors are one of the examples theorists looked at.

The other example is so-called “baby universes”. When physicists use quantum mechanics to calculate the chance of something happening, they have to add up every possible series of events that could have happened in between. For quantum gravity, this includes every possible arrangement of space and time: arrangements with different shapes, including ones with tiny extra “baby universes” which branch off from the main universe and return. Universes with these “baby universes” are the other example theorists considered in order to understand closed universes.

Q: So wait, are you actually saying the universe needs to be observed to exist? That’s ridiculous, didn’t the universe exist long before humans existed to observe it? Is this some sort of Copenhagen Interpretation thing, or that thing called QBism?

A: You’re starting to ask philosophical questions, and here’s the thing:

There are physicists who spend their time thinking about how to interpret quantum mechanics. They talk to philosophers, and try to figure out how to answer these kinds of questions in a consistent and systematic way, keeping track of all the potential pitfalls and implications. They’re part of a subfield called “quantum foundations”.

The physicists whose work I was talking about in that piece are not those people.

Of the people I interviewed, one of them, Rob Myers, probably has lunch with quantum foundations researchers on occasion. The others, based at places like MIT and the IAS, probably don’t even do that.

Instead, these are people trying to solve a technical problem, people whose first inclination is to put philosophy to the side, and “shut up and calculate”. These people did a calculation that ought to have worked, checking how many quantum states they could find in a closed universe, and found a weird and annoying answer: just one. Trying to solve the problem, they’ve done technical calculation work, introducing a path through the universe, or a boundary around an observer, and seeing what happens. While some of them may have their own philosophical leanings, they’re not writing works of philosophy. Their papers don’t talk through the philosophical implications of their ideas in all that much detail, and they may well have different thoughts as to what those implications are.

So while I suspect I know the answers they would give to some of these questions, I’m not sure.

Instead, how about I tell you what I think?

I’m not a philosopher, so I can’t promise my views will be consistent, or that they won’t suffer from some pitfall. But unlike other people’s views, I can tell you what my own views are.

To start off: yes, the universe existed before humans. No, there is nothing special about our minds; we don’t have psychic powers to create the universe with our thoughts, or anything dumb like that.

What I think is that, if we want to describe the world, we ought to take lessons from science.

Science works. It works for many reasons, but two important ones stand out.

Science works because it leads to technology, and it leads to technology because it guides actions. It lets us ask, if I do this, what will happen? What will I experience?

And science works because it lets people reach agreement. It lets people reach agreement because it lets us ask, if I observe this, what do I expect you to observe? And if we agree, we can agree on the science.

Ultimately, if we want to describe the world with the virtues of science, our descriptions need to obey this rule: they need to let us ask “what if?” questions about observations.

That means that science cannot avoid an observer. It can often hide the observer, place them far away and give them an infinite mind to behold what they see, so that one observer is essentially the same as another. But we shouldn’t expect to always be able to do this. Sometimes, we can’t avoid saying something about the observer: about where they are, or how big they are, for example.

These observers, though, don’t have to actually exist. We should be able to ask “what if” questions about others, and that means we should be able to dream up fictional observers, and ask, if they existed, what would they see? We can imagine observers swimming in the quark-gluon plasma after the Big Bang, or sitting inside a black hole’s event horizon, or outside our visible universe. The existence of the observer isn’t a physical requirement, but a methodological one: a restriction on how we can make useful, scientific statements about the world. Our theory doesn’t have to explain where observers “come from”, and can’t and shouldn’t do that. The observers aren’t part of the physical world being described, they’re a precondition for us to describe that world.

Is this the Copenhagen Interpretation? I’m not a historian, but I don’t think so. The impression I get is that there was no real Copenhagen Interpretation, that Bohr and Heisenberg, while more deeply interested in philosophy than many physicists today, didn’t actually think things through in enough depth to have a perspective you can name and argue with.

Is this QBism? I don’t think so. It aligns with some things QBists say, but they say a lot of silly things as well. It’s probably some kind of instrumentalism, for what that’s worth.

Is it logical positivism? I’ve been told logical positivists would argue that the world outside the visible universe does not exist. If that’s true, I’m not a logical positivist.

Is it pragmatism? Maybe? What I’ve seen of pragmatism definitely appeals to me, but I’ve seen my share of negative characterizations as well.

In the end, it’s an idea about what’s useful and what’s not, about what moves science forward and what doesn’t. It tries to avoid being preoccupied with unanswerable questions, and as much as possible to cash things out in testable statements. If I do this, what happens? What if I did that instead?

The results I covered for Quanta, to me, show that the observer matters on a deep level. That isn’t a physical statement, it isn’t a mystical statement. It’s a methodological statement: if we want to be scientists, we can’t give up on the observer.

Mandatory Dumb Acronyms

Sometimes, the world is silly for honest, happy reasons. And sometimes, it’s silly for reasons you never even considered.

Scientific projects often have acronyms, some of which are…clever, let’s say. Astronomers are famous for acronyms. Read this list, and you can find examples from 2D-FRUTTI and ABRACADABRA to WOMBAT and YORIC. Some of these aren’t even “really” acronyms, using letters other than the beginning of each word, multiple letters from a word, or both. (An egregious example from that list: VESTALE from “unVEil the darknesS of The gAlactic buLgE”.)

But here’s a pattern you’ve probably not noticed. I’d wager that you’ll see more of these…clever…acronyms in projects in Europe, and that they’ll show up in a wider range of fields, not just astronomy. And the reason why is the European Research Council.

In the US, scientific grants are spread out among different government agencies. Typical grants are small, the kind of thing that lets a group share a postdoc every few years, with different types of grants covering projects of different scales.

The EU, instead, has the European Research Council, or ERC, with a flagship series of grants covering different career stages: Starting, Consolidator, and Advanced. Unlike most US grants, these are large (supporting multiple employees over several years), individual (awarded to a single principal investigator, not a collaboration) and general (the ERC uses the same framework across multiple fields, from physics to medicine to history).

That means there are a lot of medium-sized research projects in Europe that are funded by an ERC grant. And each of them is required to have an acronym.

Why? Who knows? “Acronym” is simply one of the un-skippable entries in the application forms, with a pre-set place of honor in their required grant proposal format. Nobody checks whether it’s a “real acronym”, so in practice it often isn’t, turning into some sort of catchy short name with “acronym vibes”. It, like everything else on these forms, is optimized to catch the attention of a committee of scientists who really would rather be doing something else, often discussed and refined by applicants’ mentors and sometimes even dedicated university staff.

So if you run into a scientist in Europe who proudly leads a group with a cutesy, vaguely acronym-adjacent name? And you keep running into these people?

It’s not a coincidence, and it’s not just scientists’ sense of humor. It’s the ERC.

Reminder to Physics Popularizers: “Discover” Is a Technical Term

When a word has both an everyday meaning and a technical meaning, it can cause no end of confusion.

I’ve written about this before using one of the most common examples, the word “model”, which means something quite different in the phrases “large language model”, “animal model for Alzheimer’s” and “model train”. And I’ve written about running into this kind of confusion at the beginning of my PhD, with the word “effective”.

But there is one example I see crop up again and again, even with otherwise skilled science communicators. It’s the word “discover”.

“Discover”, in physics, has a technical meaning. It’s a first-ever observation of something, with an associated standard of evidence. In this sense, the LHC discovered the Higgs boson in 2012, and LIGO discovered gravitational waves in 2015. And there are discoveries we can anticipate, like the cosmic neutrino background.

But of course, “discover” has a meaning in everyday English, too.

You probably think I’m going to say that “discover”, in everyday English, doesn’t have the same statistical standards it does in physics. That’s true, of course, but it’s also pretty obvious; I don’t think it’s confusing anybody.

Rather, there is a much more important difference that physicists often forget: in everyday English, a discovery is a surprise.

“Discover”, a word arguably popularized by Columbus’s discovery of the Americas, is used pretty much exclusively to refer to learning about something you did not know about yet. It can be minor, like discovering a stick of gum you forgot, or dramatic, like discovering you’ve been transformed into a giant insect.

Now, as a scientist, you might say that everything that hasn’t yet been observed is unknown, ready for discovery. We didn’t know that the Higgs boson existed before the LHC, and we don’t know yet that there is a cosmic neutrino background.

But just because we don’t know something in a technical sense, doesn’t mean it’s surprising. And if something isn’t surprising at all, then in everyday, colloquial English, people don’t call it a discovery. You don’t “discover” that the store has milk today, even if they sometimes run out. You don’t “discover” that a movie is fun, if you went because you heard reviews claim it would be, even if the reviews might have been wrong. You don’t “discover” something you already expect.

At best, maybe you could “discover” something controversial. If you expect to find a lost city of gold, and everyone says you’re crazy, then fine, you can discover the lost city of gold. But if everyone agrees that there is probably a lost city of gold there? Then in everyday English, it would be very strange to say that you were the one who discovered it.

With this in mind, the way physicists use the word “discover” can cause a lot of confusion. It can make people think, as with gravitational waves, that a “discovery” is something totally new, that we weren’t pretty confident before LIGO that gravitational waves exist. And it can make people get jaded, and think physicists are overhyping, talking about “discovering” this or that particle physics fact because an experiment once again did exactly what it was expected to.

My recommendation? If you’re writing for the general public, use other words. The LHC “decisively detected” the Higgs boson. We expect to see “direct evidence” of the cosmic neutrino background. “Discover” has baggage, and should be used with care.

Explain/Teach/Advocate

Scientists have different goals when they communicate, leading to different styles, or registers, of communication. If you don’t notice what register a scientist is using, you might think they’re saying something they’re not. And if you notice someone using the wrong register for a situation, they may not actually be a scientist.

Sometimes, a scientist is trying to explain an idea to the general public. The point of these explanations is to give you appreciation and intuition for the science, not a detailed understanding of it. This register makes heavy use of metaphors, and sometimes also slogans. It should almost never be taken literally, and a contradiction between two different scientists’ explanations usually just means they are using incompatible metaphors for the same concept. Sometimes, scientists who do this a lot will comment on other metaphors you might have heard, referencing other slogans to help explain what those explanations miss. They do this knowing that they do, in the end, agree on the actual science: they’re just trying to give you another metaphor, with a deeper intuition for a neglected part of the story.

Other times, scientists are trying to teach a student to be able to do something. Teaching can use metaphors or slogans as introductions, but quickly moves past them, because it wants to show the students something they can use: an equation, a diagram, a classification. If a scientist shows you any of these equations/diagrams/classifications without explaining what they mean, then you’re not the student they had in mind: they had designed their lesson for someone who already knew those things. Teaching may convey the kinds of appreciation and intuition that explanations for the general public do, but that goal gets much less emphasis. The main goal is for students with the appropriate background to learn to do something new.

Finally, sometimes scientists are trying to advocate for a scientific point. In this register, and only in this register, are they trying to convince people who don’t already trust them. This kind of communication can include metaphors and slogans as decoration, but the bulk will be filled with details, and those details should constitute evidence: they should be a structured argument, one that lays out, scientifically, why others should come to the same conclusion.

A piece that tries to address multiple audiences can move between registers in a clean way. But if the register jumps back and forth, or if the wrong register is being used for a task, that usually means trouble. That trouble can be simple boredom, like a scientist’s typical conference talk that can’t decide whether it just wants other scientists to appreciate the work, whether it wants to teach them enough to actually use it, or whether it needs to convince any skeptics. It can also be more sinister: a lot of crackpots write pieces that are ostensibly aimed at convincing other scientists, but are almost entirely metaphors and slogans, pieces good at tugging on the general public’s intuition without actually giving scientists anything meaningful to engage with.

If you’re writing, or speaking, know what register you need to use to do what you’re trying to do! And if you run into a piece that doesn’t make sense, consider that it might be in a different register than you thought.

When Your Theory Is Already Dead

Occasionally, people try to give “even-handed” accounts of crackpot physics, like the claims of people who say they’ve invented anti-gravity devices. These accounts don’t go so far as to say that the crackpots are right, and will freely point out plausible doubts about the experiments. But at the end of the day, they’ll conclude that we still don’t really know the answer, and perhaps the next experiment will go differently. More tests are needed.

For someone used to engineering, or to sciences without much theory behind them, this might sound pretty reasonable. Sure, any one test can be critiqued. But you can’t prove a negative: you can’t rule out a future test that might finally see the effect.

That’s all well and good…if you have no idea what you’re doing. But these people, just like anyone else who grapples with physics, aren’t just proposing experiments. They’re proposing theories: models of the world.

And once you’ve got a theory, you don’t just have to care about future experiments. You have to care about past experiments too. Some theories…are already dead.

The "You're already dead" scene from the anime North Star
Warning: this is a link to TVTropes, enter only if you have lots of time on your hands

To get a little more specific, let’s talk about antigravity proposals that use scalar fields.

Scalar fields seem to have some sort of mysticism attached to them in the antigravity crackpot community, but for physicists they’re just the simplest possible type of field, the most obvious thing anyone would have proposed once they were comfortable enough with the idea of fields in the first place. We know of one, the Higgs field, which gives rise to the Higgs boson.

We also know that if there are any more, they’re pretty subtle…and as a result, pretty useless.

We know this because of a wide variety of what are called “fifth-force experiments”, tests and astronomical observations looking for an undiscovered force that, like gravity, reaches out to long distances. Many of these experiments are quite general, the sort of thing that would pick up a wide variety of scalar fields. And so far, none of them have seen anything.

That “so far” doesn’t mean “wait and see”, though. Each time physicists run a fifth-force experiment, they establish a limit. They say, “a fifth force cannot be like this“. It can’t be this strong, it can’t operate on these scales, it can’t obey this model. Each experiment doesn’t just say “no fifth force yet”, it says “no fifth force of this kind, at all”.
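To make “like this” concrete: fifth-force searches are commonly reported as limits on a Yukawa-style modification of Newton’s potential, with a strength \alpha and a range \lambda,

V(r) = -\frac{G m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),

and each null result excludes a chunk of the (\alpha, \lambda) plane. Any scalar field whose force lands in an excluded chunk is ruled out already, no matter when or why it gets proposed.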

When you write down a theory, if you’re not careful, you might find it has already been ruled out by one of these experiments. This happens to physicists all the time. Physicists want to use scalar fields to understand the expansion of the universe, and to think about dark matter. And frequently, a model one physicist proposes will be ruled out, not by new experiments, but by someone doing the math and realizing that the model is already contradicted by a pre-existing fifth-force experiment.

So can you prove a negative? Sort of.

If you never commit to a model, if you never propose an explanation, then you can never be disproven, you can always wait for the experiment of your dreams to come true. But if you have any model, any idea, any explanation at all, then your explanation will have implications. Those implications may kill your theory in a future experiment. Or, they may have already killed it.

To Measure Something or to Test It

Black holes have been in the news a couple times recently.

On one end, there was the observation of an extremely large black hole in the early universe, when no black holes of that kind were expected to exist. My understanding is that this is very much a “big if true” kind of claim, something that could have dramatic implications but may just be a misunderstanding. At the moment, I’m not going to try to work out which one it is.

In between, you have a piece by me in Quanta Magazine a couple weeks ago, about tests of whether black holes deviate from general relativity. They don’t, by the way, according to the tests so far.

And on the other end, you have the coverage last week of a “confirmation” (or even “proof”) of the black hole area law.

The black hole area law states that the total area of the event horizons of all black holes will always increase. It’s also known as the second law of black hole thermodynamics, paralleling the second law of thermodynamics that entropy always increases. Hawking proved this as a theorem in 1971, assuming that general relativity holds true.

(That leaves out quantum effects, which indeed can make black holes shrink, as Hawking himself famously later argued.)

The black hole area law is supposed to hold even when two black holes collide and merge. While the combination may lose energy (leading to gravitational waves that carry energy to us), it will still have greater area, in the end, than the sum of the black holes that combined to make it.
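For non-spinning (Schwarzschild) black holes, you can see how both statements fit together, since the horizon area grows with the square of the mass:

A = \frac{16 \pi G^2 M^2}{c^4}.

The area law then demands M_{\mathrm{final}}^2 \geq M_1^2 + M_2^2, not M_{\mathrm{final}} \geq M_1 + M_2. Two equal black holes of mass M can merge into one of mass as low as \sqrt{2}\,M \approx 1.41\,M, radiating up to about 29\% of the total mass-energy as gravitational waves while the total horizon area still grows.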

Ok, so that’s the area law. What’s this paper that’s supposed to “finally prove” it?

The LIGO, Virgo, and KAGRA collaborations recently published a paper based on gravitational waves from one particularly clear collision of black holes, which they measured back in January. They compared their measurements to predictions from general relativity, checking two things: whether the measurements agreed with predictions based on the Kerr metric (how space-time around a rotating black hole is supposed to behave), and whether they obeyed the area law.

The first check isn’t so different in purpose from the work I wrote about in Quanta Magazine, just using different methods. In both studies, physicists are looking for deviations from the laws of general relativity, triggered by the highly curved environments around black holes. These deviations could show up in one way or another in any black hole collision, so while you would ideally look for them by scanning over many collisions (as the paper I reported on did), you could do a meaningful test even with just one collision. That kind of check may not be very stringent (if general relativity is wrong, it’s likely by a very small amount), but it’s still an opportunity, diligently sought, to be proven wrong.

The second check is the one that got the headlines. It also got first billing in the paper title, and a decent amount of verbiage in the paper itself. And if you think about it for more than five minutes, it doesn’t make a ton of sense as presented.

Suppose the black hole area law is wrong, and sometimes black holes lose area when they collide. Even if this happened sometimes, you wouldn’t expect it to happen every time. It’s not like anyone is pondering a reverse black hole area law, where black holes only shrink!

Because of that, I think it’s better to say that LIGO measured the black hole area law for this collision, while they tested whether black holes obey the Kerr metric. In one case, they’re just observing what happened in this one situation. In the other, they can try to draw implications for other collisions.

That doesn’t mean their work wasn’t impressive, but it was impressive for reasons that don’t seem to be getting emphasized. It’s impressive because, prior to this paper, they had not managed to measure the areas of colliding black holes well enough to confirm that they obeyed the area law! The previous collisions looked like they obeyed the law, but when you factor in the experimental error they couldn’t say it with confidence. The current measurement is better, and can. So the new measurement is interesting not because it confirms a fundamental law of the universe or anything like that…it’s interesting because previous measurements were so bad that they couldn’t even confirm this kind of fundamental law!

That, incidentally, feels like a “missing mood” in pop science. Some things are impressive not because of their amazing scale or awesome implications, but because they are unexpectedly, unintuitively, really really hard to do. These measurements shouldn’t be thought of, or billed, as tests of nature’s fundamental laws. Instead they’re interesting because they highlight what we’re capable of, and what we still need to accomplish.

The Rocks in the Ground Era of Fundamental Physics

It’s no secret that the early twentieth century was a great time to make progress in fundamental physics. On one level, it was an era when huge swaths of our understanding of the world were being rewritten, with relativity and quantum mechanics just being explored. It was a time when a bright student could guide the emergence of whole new branches of scholarship, and recently discovered physical laws could influence world events on a massive scale.

Put that way, it sounds like it was a time of low-hanging fruit, the early days of a field when great strides can be made before the easy problems are all solved and only the hard ones are left. And that’s part of it, certainly: the fields sprung from that era have gotten more complex and challenging over time, requiring more specialized knowledge to make any kind of progress. But there is also a physical reason why physicists had such an enormous impact back then.

The early twentieth century was the last time that you could dig up a rock out of the ground, do some chemistry, and end up with a discovery about the fundamental laws of physics.

When scientists like Curie and Becquerel were working with uranium, they didn’t yet understand the nature of atoms. The distinctions between elements were described in qualitative terms, and were only just beginning to be physically understood. That meant that a weird object in nature, “a weird rock”, could do quite a lot of interesting things.

And once you find a rock that does something physically unexpected, you can scale up. From the chemistry experiments of a single scientist’s lab, countries can build industrial processes to multiply the effect. Nuclear power and the bomb were such radical changes because they represented the end effect of understanding the nature of atoms, and atoms are something people could build factories to manipulate.

Scientists went on to push that understanding further. They wanted to know what the smallest pieces of matter were composed of, to learn the laws behind the most fundamental laws they knew. And with relativity and quantum mechanics, they could begin to do so systematically.

US particle physics has a nice bit of branding. They talk about three frontiers: the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier.

Some things we can’t yet test in physics are gated by energy. If we haven’t discovered a particle, it may be because it’s unstable, decaying quickly into lighter particles so we can’t observe it in everyday life. If these particles interact appreciably with particles of everyday matter like protons and electrons, then we can try to make them in particle colliders. These end up creating pretty much everything up to a certain mass, due to a combination of the tendency in quantum mechanics for everything that can happen to happen, and relativity’s E=mc^2. In the mid-20th century these particle colliders were serious pieces of machinery, but still small enough to industrialize: now, there are so-called medical accelerators in many hospitals based on their designs. But current particle accelerators are a different beast, massive facilities built by international collaborations. This is the Energy Frontier.

Some things in physics are gated by how rare they are. Some particles interact only very faintly with other particles, so to detect them, physicists have to scan a huge chunk of matter, a giant tank of argon or a cubic kilometer of Antarctic ice, looking for deviations from the norm. Over time, these experiments have gotten bigger, looking for more and more subtle effects. A few weird ones still fit on tabletops, but only because they have the tools to measure incredibly small variations. Most are gigantic. This is the Intensity Frontier.

Finally, the Cosmic Frontier looks for the unknown behind both kinds of gates, using the wider universe to look at events with extremely high energy or size.

Pushing these frontiers has meant cleaning up our understanding of the fundamental laws of physics everywhere short of them. It means that whatever is still hiding either requires huge amounts of energy to produce, or is an extremely rare, subtle effect.

That means that you shouldn’t expect another nuclear bomb out of fundamental physics. Physics experiments are already working on vast scales, to the extent that a secret government project would have to be smaller than publicly known experiments, in physical size, energy use, and budget. And you shouldn’t expect another nuclear power plant, either: we’ve long passed the kinds of things you could devise a clever industrial process to take advantage of at scale.

Instead, new fundamental physics will only be directly useful once we’re the kind of civilization that operates on a much greater scale than we do today. That means larger than the solar system: there wouldn’t be much advantage, at this point, in putting a particle physics experiment on the edge of the Sun. It means the kind of civilization that tosses galaxies around.

It means that right now, you won’t see militaries or companies pushing the frontiers of fundamental physics, unlike the way they might have wanted to at the dawn of the twentieth century. By the time fundamental physics is useful in that way, all of these actors will likely be radically different: companies, governments, and in all likelihood human beings themselves. Instead, supporting fundamental physics right now is an act of philanthropy, maintaining a practice because it maintains good habits of thought and produces powerful ideas, the same reasons organizations support mathematics or poetry. That’s not nothing, and fundamental physics is still often affordable as philanthropy goes. But it’s not changing the world, not the way physicists did in the early twentieth century.

Two Types of Scientific Fraud: for a Fee and for Power

A paper about scientific fraud has been making the rounds in social media lately. The authors gather evidence of large-scale networks of fraudsters across multiple fields, from teams of editors that fast-track fraudulent research to businesses that take over journals, sell spots for articles, and then move on to a new target when the journal is de-indexed. I’m not an expert in this kind of statistical sleuthing, but the work looks impressively thorough.

Still, I think the authors overplay their results a bit. They describe themselves as revealing something many scientists underestimate. They point to what they label as misconceptions: that scientific fraud is usually perpetrated alone by individual unethical scientists, or that it is almost entirely a problem of the developing world, and present their work as disproving those misconceptions. Listen to them, and you might get the feeling that science is rife with corruption, that no result, or scientist, can be trusted.

As far as I can tell, though, those “misconceptions” they identify are true. Someone who believes that scientific fraud is perpetrated by loners is probably right, as is someone who believes it largely takes place outside of the first world.

As is often the case, the problem is words.

“Scientific Fraud” is a single term for two different things. The two both involve bad actors twisting scientific activity. But in everything else — their incentives, their geography, their scale, and their consequences — they are dramatically different.

One of the types of scientific fraud is largely about power.

In references 84-89 of the paper, the authors give examples of large-scale scientific fraud in Europe and the US. All (except one, which I’ll mention later) are about the career of a single researcher. Each of these people systematically bent the truth, whether with dodgy statistics, doctored images, or inflating citation counts. Some seemed motivated to promote a particular scientific argument, cutting corners to push a particular conclusion through. Others were purer cases of self-promotion. These people often put pressure on students, postdocs, and other junior researchers in their orbits, which increases the scale of their impact. In some cases, their work rippled out to convince other researchers, prolonging bad ideas and strangling good ones. These were people with power, who leveraged that power to increase their power.

There also don’t appear to be that many of them. These people are loners in a meaningful sense, cores of fraud working on their own behalf. They don’t form networks with each other, for the most part: because they work towards their own aggrandizement, they have no reason to trust anyone else doing the same. I have yet to see evidence that the number of these people is increasing. They exist, they’re a problem, they’re important to watch out for. But they’re not a crisis, and they shouldn’t shift your default expectations of science.

The other, quite different, type of scientific fraud is fraud for a fee.

The cases this paper investigates seem to fall into this category. They are businesses, offering the raw material of academic credit (papers, co-authorship, citations, publication) for cash. They’re paper mills, of various sorts. These are, at least from an academic perspective, large organizations, with hundreds or thousands of customers and tens of suborned editors or scientists farming out their credibility. As the authors of this paper argue, fraudsters of this type are churning out more and more papers, potentially now fueled by AI, adding up to a still small, but non-negligible, proportion of scientific papers in total.

Compared to the first type of fraud, though, buying credit in this way doesn’t give very much power. As the paper describes, many of the papers churned out by paper mills don’t even go into relevant journals: for example, they mention “an article about roasting hazelnuts in a journal about HIV/AIDS care”. An article like that isn’t going to mislead the hazelnut roasting community, or the HIV/AIDS community. Indeed, that would be counter to its purpose. The paper isn’t intended to be read at all, and ideally gets ignored: it’s just supposed to inflate a number.

These numbers are most relevant in the developing world, and when push comes to shove, almost all of the buyers of these services identified by the authors of this paper come from there. In many developing countries, a combination of low trust and advice from economists leads to explicit point systems, where academics are paid or hired explicitly based on criteria like where and how often they publish or how they are cited. The more a country can trust people to vouch for each other without corruption, the less these kinds of incentives have purchase. Outside of the developing world, involvement in paper mills and the like generally seems to involve a much smaller number of people, and typically as sellers, not buyers: selling first-world credibility in exchange for fees from many developing-world applicants.

(The one reference I mentioned above is an interesting example of this: a system built out of points and low trust to recruit doctors from the developing world to the US, gamed by a small number of co-authorship brokers.)

This kind of fraud doesn’t influence science directly. Its perpetrators aren’t trying to get noticed, but to keep up a cushy scam. You don’t hear their conclusions in the press, other scientists don’t see their work. Instead, they siphon off resources: cannibalizing journals, flooding editors with mass-produced crap, and filling positions and slurping up science budgets in the countries that can least afford them. As they publish more and more, they shouldn’t affect your expectations of the credibility of science: any science you hear about will be either genuine, or fraud from the other category. But they do make the science you hear about harder and harder to do.

(The authors point out one exception: what about AI? If a company trains a large language model on the current internet, will its context windows be long enough to tell that that supposedly legitimate paper about hazelnuts is in an HIV/AIDS journal? If something gets said often enough, copied again and again in papers sold by a mill, will an AI trained on all these papers be convinced? Presumably, someone is being paid good money to figure out how to filter AI-generated slop from training data: can they filter paper mill fraud as well?)

It’s a shame that we have one term, scientific fraud, to deal with these two very different things. But it’s important to keep in mind that they are different. Fraud for power and fraud for money can have very different profiles, and offer very different risks. If you don’t trust a scientific result, it’s worth understanding what might be at play.

Technology as Evidence

How much can you trust general relativity?

On the one hand, you can read through a lovely Wikipedia article full of tests, explaining just how far and how precisely scientists have pushed their knowledge of space and time. On the other hand, you can trust GPS satellites.

As many of you may know, GPS wouldn’t work if we didn’t know about general relativity. In order for the GPS in your phone to know where you are, it has to compare signals from different satellites, each giving the location and time the signal was sent. To get an accurate result, the times measured on those satellites have to be adjusted: because of the lighter gravity they experience, time moves more quickly for them than for us down on Earth.
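Roughly how big is the adjustment? For a GPS satellite at about 20,200 km altitude, the weaker gravity makes its clock run fast by about 45 microseconds per day, while its orbital speed (a special relativity effect) makes it run slow by about 7 microseconds per day:

\frac{\Delta \tau}{\tau} \approx \frac{G M_\oplus}{c^2}\left(\frac{1}{R_\oplus} - \frac{1}{r}\right) - \frac{v^2}{2 c^2} \approx 5.3\times 10^{-10} - 0.8\times 10^{-10} \approx 4.5\times 10^{-10},

or roughly 38 microseconds gained per day. Left uncorrected, that drift would translate, at the speed of light, into position errors piling up at around ten kilometers per day.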

In a sense, general relativity gets tested every minute of every day, on every phone in the world. That’s pretty trustworthy! Any time that science is used in technology, it gets tested in this way. The ideas we can use are ideas that have shown they can perform, ideas which do what we expect again and again and again.

In another sense, though, GPS is a pretty bad test of general relativity. It tests one of general relativity’s simplest consequences, based on the Schwarzschild metric for how gravity behaves near a large massive object, and not to an incredibly high degree of precision. Gravity could still violate general relativity in a huge number of other ways, and GPS would still function. That’s why the other tests are valuable: if you want to be sure general relativity doesn’t break down, you need to test it under conditions that GPS doesn’t cover, and to higher precision.

Once you know to look for it, these layers of tests come up everywhere. You might see the occasional article talking about tests of quantum gravity. The tests they describe are very specific, testing a very general and basic question: does quantum mechanics make sense at all in a gravitational world? In contrast, most scientists who research quantum gravity don’t find that question very interesting: if gravity breaks quantum mechanics in a way those experiments could test, it’s hard to imagine it not leading to a huge suite of paradoxes. Instead, quantum gravity researchers tend to be interested in deeper problems with quantum gravity, distinctions between theories that don’t dramatically break with our existing ideas, but that because of that are much harder to test.

The easiest tests are important, especially when they come from technology: they tell us, on a basic level, what we can trust. But we need the hard tests too, because those are the tests that are most likely to reveal something new, and bring us to a new level of understanding.