The Parameter Was Inside You All Along

Sabine Hossenfelder had an explainer video recently on how to tell science from pseudoscience. This is a famously difficult problem, so naturally we have different opinions. I actually think the picture she draws is reasonably sound. But while it is a good criterion to tell whether you yourself are doing pseudoscience, it’s surprisingly tricky to apply it to other people.

Hossenfelder argues that science, at its core, is about explaining observations. To tell whether something is science or pseudoscience you need to ask, first, if it agrees with observations, and second, if it is simpler than those observations. In particular, a scientist should prefer models with fewer parameters. If your model has so many parameters that you can fit any observation, you’re not being scientific.

This is a great rule of thumb, one that as Hossenfelder points out forms the basis of a whole raft of statistical techniques. It does rely on one tricky judgement, though: how many parameters does your model actually have?

Suppose I’m one of those wacky theorists who propose a whole new particle to explain some astronomical mystery. Hossenfelder, being more conservative in these things, proposes a model with no new particles. Neither of our models fit the data perfectly. Perhaps my model fits a little better, but after all it has one extra parameter, from the new particle. If we want to compare our models, we should take that into account, and penalize mine.
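This parameter penalty is exactly what statistical criteria like the Akaike Information Criterion (AIC) formalize. Here's a toy sketch in Python (invented data and models, nothing to do with any real particle proposal): a one-parameter fit and a two-parameter fit compete, and the extra parameter has to earn its keep.

```python
import math
import random

random.seed(0)

# Toy data: the "truth" is a one-parameter law, y = 0.5*x, plus noise.
xs = [i / 10 for i in range(50)]
ys = [0.5 * x + random.gauss(0, 0.3) for x in xs]
n = len(xs)

def rss(predict):
    """Residual sum of squares for a candidate model."""
    return sum((y - predict(x)) ** 2 for x, y in zip(xs, ys))

def aic(k, rss_value):
    """Least-squares form of the AIC: lower is better,
    and each extra parameter costs 2."""
    return n * math.log(rss_value / n) + 2 * k

# "Conservative" model: one parameter, y = a*x.
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
rss_simple = rss(lambda x: a * x)

# "New particle" model: two parameters, y = a*x + b.
xbar, ybar = sum(xs) / n, sum(ys) / n
a2 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b2 = ybar - a2 * xbar
rss_extra = rss(lambda x: a2 * x + b2)

# The extra parameter always fits at least as well...
print(rss_simple, rss_extra)
# ...but the AIC's penalty can still rank the simpler model higher.
print(aic(1, rss_simple), aic(2, rss_extra))
```

The key structural fact: a nested model with more parameters can never fit worse, which is precisely why a raw goodness-of-fit comparison is unfair and a penalty is needed.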

Here’s the question, though: how do I know that Hossenfelder didn’t start out with more particles, and got rid of them to get a better fit? If she did, she had more parameters than I did. She just fit them away.

The problem here is closely related to one called the look-elsewhere effect. Scientists don’t publish everything they try. An unscrupulous scientist can do a bunch of different tests until one of them randomly works, and just publish that one, making the result look meaningful when really it was just random chance. Even if no individual scientist is unscrupulous, a community can do the same thing: many scientists testing many different models, until one accidentally appears to work.
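A quick toy simulation (made-up numbers, nothing field-specific) shows how fast this bites: run enough null tests and one of them will clear any fixed significance threshold.

```python
import random

random.seed(1)

def null_test(n=100):
    """A 'test' of pure noise: fair coin flips with no real effect.
    Returns a rough z-score for the deviation from fairness."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return abs(heads - n / 2) / (n / 4) ** 0.5

# A community runs twenty unrelated null tests, and only the
# flashiest result gets published.
best = max(null_test() for _ in range(20))

# The chance that at least one of k independent null tests clears
# a p = 0.05 threshold is 1 - (1 - 0.05)^k, not 0.05.
k = 20
p_any = 1 - (1 - 0.05) ** k
print(best, p_any)
```

With twenty tests, the chance of at least one "significant" fluke is already around 64%, which is why look-elsewhere corrections matter.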

As a scientist, you mostly know if your motivations are genuine. You know if you actually tried a bunch of different models or had good reasons from the start to pick the one you did. As someone judging other scientists, you often don’t have that luxury. Sometimes you can look at prior publications and see all the other attempts someone made. Sometimes they’ll even tell you explicitly what parameters they used and how they fit them. But sometimes, someone will swear up and down that their model is just the most natural, principled choice they could have made, and they never considered anything else. When that happens, how do we guard against the look-elsewhere effect?

The normal way to deal with the look-elsewhere effect is to consider, not just whatever tests the scientist claims to have done, but all tests they could reasonably have done. You need to count all the parameters, not just the ones they say they varied.

This works in some fields. If you have an idea of what’s reasonable and what’s not, you have a relatively manageable list of things to look at. You can come up with clear rules for which theories are simpler than others, and people will agree on them.

Physics doesn’t have it so easy. We don’t have any pre-set rules for what kind of model is “reasonable”. If we want to parametrize every “reasonable” model, the best we can do are what are called Effective Field Theories, theories which try to describe every possible type of new physics in terms of its effect on the particles we already know. Even there, though, we need assumptions. The most popular effective field theory, called SMEFT, assumes the forces of the Standard Model keep their known symmetries. You get a different model if you relax that assumption, and even that model isn’t the most general: for example, it still keeps relativity intact. Try to make the most general model possible, and you end up waist-deep in parameter soup.

Subjectivity is a dirty word in science…but as far as I can tell it’s the only way out of this. We can try to count parameters when we can, and use statistical tools…but at the end of the day, we still need to make choices. We need to judge what counts as an extra parameter and what doesn’t, which possible models to compare to and which to ignore. That’s going to depend on our scientific culture, on fashion and aesthetics; there just isn’t a way around that. The best we can do is own up to our assumptions, and be ready to change them when we need to.

Pseudonymity Matters. I Stand With Slate Star Codex.

Slate Star Codex is one of the best blogs on the net. Written under the pseudonym Scott Alexander, the blog covers a wide variety of topics with a level of curiosity and humility that the rest of us bloggers can only aspire to.

Recently, this has all been jeopardized. A reporter at the New York Times, writing an otherwise positive article, told Scott he was going to reveal his real name publicly. In a last-ditch effort to stop this, Scott deleted his blog.

I trust Scott. When he says that revealing his identity would endanger his psychiatric practice, not to mention the safety of friends and loved ones, I believe him. What’s more, I think working under a pseudonym makes him a better blogger: some of his best insights have come from talking to people who don’t think of him as “the Slate Star Codex guy”.

I don’t know why the Times thinks revealing Scott’s name is a good idea. I do know that there are people out there who view anyone under a pseudonym with suspicion. Compared to Scott, my pseudonym is paper-thin: it’s very easy to find who I am. Still, I have met people who are irked just by that, by the bare fact that I don’t print my real name on this blog.

I think this might be a generational thing. My generation grew up alongside the internet. We’re used to the idea that very little is truly private, that anything made public somewhere risks becoming public everywhere. In that world, writing under a pseudonym is like putting curtains on a house. It doesn’t make us unaccountable: if you break the law behind your curtains the police can get a warrant, and similarly Scott’s pseudonym wouldn’t stop a lawyer from tracking him down. All it is, is a filter: a way to have a life of our own, shielded just a little from the whirlwind of the web.

I know there are journalists who follow this blog. If you have contacts in the Times tech section, or know someone who does, please reach out. I want to hope that someone there is misunderstanding the situation, that when things are fully explained they will back down. We have to try.

The Citation Motivation Situation

Citations are the bread and butter of academia, or maybe its prison cigarettes. They link us together, somewhere between a map to show us the way and an informal currency. They’re part of how the world grades us, a measure more objective than letters from our peers, though that’s not saying much. It’s clear why we want to be cited, but why do we cite others?

For more reasons than you’d expect.

First, we cite to respect priority. Since the dawn of science, we’ve kept track not only of what we know, but of who figured it out first. If we use an idea in our paper, we cite its origin: the paper that discovered or invented it. We don’t do this for the oldest and most foundational ideas: nobody cites Einstein for relativity. But if the idea is at all unusual, we make sure to give credit where credit is due.

Second, we cite to substantiate our claims. Academic papers don’t stand on their own: they depend on older proofs and prior discoveries. If we make a claim that was demonstrated in older work, we don’t need to prove it again. By citing the older work, we let the reader know where to look. If they doubt our claim, they can look at the older paper and see what went wrong.

Those two are the most obvious uses of citations, but there are more. Another important use is to provide context. Academic work doesn’t stand alone: we choose what we work on in part based on how it relates to other work. As such, it’s important to cite that other work, to help readers understand our motivation. When we’re advancing the state of the art, we need to tell the reader what that state of the art is. When we’re answering a question or solving a problem, we can cite the paper that asked the question or posed the problem. When we’re introducing a new method or idea, we need to clearly say what’s new about it: how it improves on older, similar ideas.

Scientists are social creatures. While we often have a scientific purpose in mind, citations also follow social conventions. These vary from place to place, field to field, and sub-field to sub-field. Mention someone’s research program, and you might be expected to cite every paper in that program. Cite one of a pair of rivals, and you should probably cite the other one too. Some of these conventions are formalized in the form of “citeware”, software licenses that require citations, rather than payments, to use. Others come from unspoken cultural rules. Citations are a way to support each other, something that can slightly improve another’s job prospects at no real cost to your own. It’s not surprising that they ended up part of our culture, well beyond their pure academic use.

In Defense of Shitty Code

Scientific programming was in the news lately, when doubts were raised about a coronavirus simulation by researchers at Imperial College London. While the doubts appear to have been put to rest, doing so involved digging through some seriously messy code. The whole situation seems to have gotten a lot of people worried. If these people are that bad at coding, why should we trust their science?

I don’t know much about coronavirus simulations, my knowledge there begins and ends with a talk I saw last month. But I know a thing or two about bad scientific code, because I write it. My code is atrocious. And I’ve seen published code that’s worse.

Why do scientists write bad code?

In part, it’s a matter of training. Some scientists have formal coding training, but most don’t. I took two CS courses in college and that was it. Despite that lack of training, we’re expected and encouraged to code. Before I took those courses, I spent a summer working in a particle physics lab, where I was expected to pick up the C++-based interface pretty much on the fly. I don’t think there’s another community out there that has as much reason to code as scientists do, and as little training for it.

Would it be useful for scientists to have more of the tools of a trained coder? Sometimes, yeah. Version control is a big one: I’ve collaborated on papers that used Git and papers that didn’t, and there’s a big difference. There are coding habits that would speed up our work and lead to fewer dead ends, and they’re worth picking up when we have the time.

But there’s a reason we don’t prioritize “proper coding”. It’s because the things we’re trying to do, from a coding perspective, are really easy.

What, code-wise, is a coronavirus simulation? A vector of “people”, really just simple labels, all randomly infecting each other and recovering, with a few parameters describing how likely they are to do so and how long it takes. What do I do, code-wise? Mostly, giant piles of linear algebra.
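To make "really easy" concrete, here's roughly the kind of thing I mean: a minimal toy sketch in Python (illustrative only, not the Imperial College code, and every number in it is made up).

```python
import random

random.seed(42)

# Each "person" is just a label: 'S'usceptible, 'I'nfected, 'R'ecovered.
N = 1000
people = ['I'] * 5 + ['S'] * (N - 5)

infect_prob = 0.00025   # chance per infected contact, per step (made up)
recover_prob = 0.1      # chance an infected person recovers each step

for step in range(100):
    # Infection pressure from the start of the step; newly infected
    # people don't spread until the next step (fine for a sketch).
    infected = people.count('I')
    for i, state in enumerate(people):
        if state == 'S' and random.random() < 1 - (1 - infect_prob) ** infected:
            people[i] = 'I'
        elif state == 'I' and random.random() < recover_prob:
            people[i] = 'R'

print(people.count('S'), people.count('I'), people.count('R'))
```

Twenty-odd lines, no clever data structures, nothing a second-year CS student would blink at. The hard part is choosing the parameters and assumptions, not writing the loop.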

These are not some sort of cutting-edge programming tasks. These are things people have been able to do since the dawn of computers. These are things that, when you screw them up, become quite obvious quite quickly.

Compared to that, the everyday tasks of software developers, like making a reliable interface for users, or efficient graphics, are much more difficult. They’re tasks that really require good coding practices, that just can’t function without them.

For us, the important part is not the coding itself, but what we’re doing with it. Whatever bugs are in a coronavirus simulation, they will have much less impact than, for example, the way in which the simulation includes superspreaders. Bugs in my code give me obviously wrong answers; bad scientific assumptions are much harder for me to root out.

There’s an exception that proves the rule here, and it’s that, when the coding task is actually difficult, scientists step up and write better code. Scientists who want to run efficiently on supercomputers, who are afraid of numerical error or need to simulate on many scales at once, these people learn how to code properly. The code behind the LHC still might be jury-rigged by industry standards, but it’s light-years better than typical scientific code.

I get the furor around the Imperial group’s code. I get that, when a government makes a critical decision, you hope that their every input is as professional as possible. But without getting too political for this blog, let me just say that whatever your politics are, if any of it is based on science, it comes from code like this. Psychology studies, economic modeling, polling…they’re using code, and it’s jury-rigged to hell. Scientists just have more important things to worry about.

How the Higgs Is, and Is Not, Like an Eel

In the past, what did we know about eel reproduction? What do we know now?

The answer to both questions is, surprisingly little! For those who don’t know the story, I recommend this New Yorker article. Eels turn out to have a quite complicated life cycle, and can only reproduce in the very last stage. Different kinds of eels from all over Europe and the Americas spawn in just one place: the Sargasso Sea. But while researchers have been able to find newborn eels in those waters, and more recently track a few mature adults on their migration back, no-one has yet observed an eel in the act. Biologists may be able to infer quite a bit, but with no direct evidence yet the truth may be even more surprising than they expect. The details of eel reproduction are an ongoing mystery, the “eel question” one of the field’s most enduring.

But of course this isn’t an eel blog. I’m here to answer a different question.

In the past, what did we know about the Higgs boson? What do we know now?

Ask some physicists, and they’ll say that even before the LHC everyone knew the Higgs existed. While this isn’t quite true, it is certainly true that something like the Higgs boson had to exist. Observations of other particles, the W and Z bosons in particular, gave good evidence for some kind of “Higgs mechanism”, that gives other particles mass in a “Higgs-like-way”. A Higgs boson was in some sense the simplest option, but there could have been more than one, or a different sort of process instead. Some of these alternatives may have been sensible, others as silly as believing that eels come from horses’ tails. Until 2012, when the Higgs boson was observed, we really didn’t know.

We also didn’t know one other piece of information: the Higgs boson’s mass. That tells us, among other things, how much energy we need to make one. Physicists were pretty sure the LHC was capable of producing a Higgs boson, but they weren’t sure where or how they’d find it, or how much energy would ultimately be involved.

Now thanks to the LHC, we know the mass of the Higgs boson, and we can rule out some of the “alternative” theories. But there’s still quite a bit we haven’t observed. In particular, we haven’t observed many of the Higgs boson’s couplings.

The couplings of a quantum field are how it interacts, both with other quantum fields and with itself. In the case of the Higgs, interacting with other particles gives those particles mass, while interacting with itself is how it itself gains mass. Since we know the masses of these particles, we can infer what these couplings should be, at least for the simplest model. But, like the eels, the truth may yet surprise us. Nothing guarantees that the simplest model is the right one: what we call simplicity is a judgement based on aesthetics, on how we happen to write models down. Nature may well choose differently. All we can honestly do is parametrize our ignorance.

In the case of the eels, each failure to observe their reproduction deepens the mystery. What are they doing that is so elusive, so impossible to discover? In this, eels are different from the Higgs boson. We know why we haven’t observed the Higgs boson coupling to itself, at least according to our simplest models: we’d need a higher-energy collider, more powerful than the LHC, to see it. That’s an expensive proposition, much more expensive than using satellites to follow eels around the ocean. Because our failure to observe the Higgs self-coupling is itself no mystery, our simplest models could still be correct: as theorists, we probably have it easier than the biologists. But if we want to verify our models in the real world, we have it much harder.

Zoomplitudes Retrospective

During Zoomplitudes (my field’s big yearly conference, this year on Zoom) I didn’t have time to write a long blog post. I said a bit about the format, but didn’t get a chance to talk about the science. I figured this week I’d go back and give a few more of my impressions. As always, conference posts are a bit more technical than my usual posts, so regulars be warned!

The conference opened with a talk by Gavin Salam, there as an ambassador for LHC physics. Salam pointed out that, while a decent proportion of speakers at Amplitudes mention the LHC in their papers, that fraction has fallen over the years. (Another speaker jokingly wondered which of those mentions were just in the paper’s introduction.) He argued that there is still useful work for us, LHC measurements that will require serious amplitudes calculations to understand. He also brought up what seems like the most credible argument for a new, higher-energy collider: that there are important properties of the Higgs, in particular its interactions, that we still have not observed.

The next few talks hopefully warmed Salam’s heart, as they featured calculations for real-world particle physics. Nathaniel Craig and Yael Shadmi in particular covered the link between amplitudes and Standard Model Effective Field Theory (SMEFT), a method to systematically characterize corrections beyond the Standard Model. Shadmi’s talk struck me because the kind of work she described (building the SMEFT “amplitudes-style”, directly from observable information rather than more complicated proxies) is something I’d seen people speculate about for a while, but which hadn’t been done until quite recently. Now, several groups have managed it, and look like they’ve gotten essentially “all the way there”, rather than just partial results that only manage to replicate part of the SMEFT. Overall it’s much faster progress than I would have expected.

After Shadmi’s talk was a brace of talks on N=4 super Yang-Mills, featuring cosmic Galois theory and an impressively groan-worthy “origin story” joke. The final talk of the day, by Hofie Hannesdottir, covered work with some of my colleagues at the NBI. Due to coronavirus I hadn’t gotten to hear about this in person, so it was good to hear a talk on it, a blend of old methods and new priorities to better understand some old discoveries.

The next day focused on a topic that has grown in importance in our community, calculations for gravitational wave telescopes like LIGO. Several speakers focused on new methods for collisions of spinning objects, where a few different approaches are making good progress (Radu Roiban’s proposal to use higher-spin field theory was particularly interesting) but things still aren’t quite “production-ready”. The older, post-Newtonian method is still very much production-ready, as evidenced by Michele Levi’s talk that covered, among other topics, our recent collaboration. Julio Parra-Martinez discussed some interesting behavior shared by both supersymmetric and non-supersymmetric gravity theories. Thibault Damour had previously expressed doubts about use of amplitudes methods to answer this kind of question, and part of Parra-Martinez’s aim was to confirm the calculation with methods Damour would consider more reliable. Damour (who was actually in the audience, which I suspect would not have happened at an in-person conference) had already recanted some related doubts, but it’s not clear to me whether that extended to the results Parra-Martinez discussed (or whether Damour has stated the problem with his old analysis).

There were a few talks that day that didn’t relate to gravitational waves, though this might have been an accident, since both speakers also work on that topic. Zvi Bern’s talk linked to the previous day’s SMEFT discussion, with a calculation using amplitudes methods of direct relevance to SMEFT researchers. Clifford Cheung’s talk proposed a rather strange/fun idea, conformal symmetry in negative dimensions!

Wednesday was “amplituhedron day”, with a variety of talks on positive geometries and cluster algebras. Featured in several talks was “tropicalization”, a mathematical procedure that can simplify complicated geometries while still preserving essential features. Here, it was used to trim down infinite “alphabets” conjectured for some calculations into a finite set, and in doing so understand the origin of “square root letters”. The day ended with a talk by Nima Arkani-Hamed, who despite offering to bet that he could finish his talk within the half-hour slot took almost twice that. The organizers seemed to have planned for this, since there was one fewer talk that day, and as such the day ended at roughly the usual time regardless.

We also took probably the most unique conference photo I will ever appear in.

For lack of a better name, I’ll call Thursday’s theme “celestial”. The day included talks by cosmologists (including approaches using amplitudes-ish methods from Daniel Baumann and Charlotte Sleight, and a curiously un-amplitudes-related talk from Daniel Green), talks on “celestial amplitudes” (amplitudes viewed from the surface of an infinitely distant sphere), and various talks with some link to string theory. I’m including in that last category intersection theory, which has really become its own thing. This included a talk by Simon Caron-Huot about using intersection theory more directly in understanding Feynman integrals, and a talk by Sebastian Mizera using intersection theory to investigate how gravity is Yang-Mills squared. Both gave me a much better idea of the speakers’ goals. In Mizera’s case he’s aiming for something very ambitious. He wants to use intersection theory to figure out when and how one can “double-copy” theories, and might figure out why the procedure “got stuck” at five loops. The day ended with a talk by Pedro Vieira, who gave an extremely lucid and well-presented “blackboard-style” talk on bootstrapping amplitudes.

Friday was a grab-bag of topics. Samuel Abreu discussed an interesting calculation using the numerical unitarity method. It was notable in part because renormalization played a bigger role than it does in most amplitudes work, and in part because they now have a cool logo for their group’s software, Caravel. Claude Duhr and Ruth Britto gave a two-part talk on their work on a Feynman integral coaction. I’d had doubts about the diagrammatic coaction they had worked on in the past because it felt a bit ad-hoc. Now, they’re using intersection theory, and have a clean story that seems to tie everything together. Andrew McLeod talked about our work on a Feynman diagram Calabi-Yau “bestiary”, while Cristian Vergu had a more rigorous understanding of our “traintrack” integrals.

There are two key elements of a conference that are tricky to do on Zoom. You can’t do a conference dinner, so you can’t do the traditional joke-filled conference dinner speech. The end of the conference is also tricky: traditionally, this is when everyone applauds the organizers and the secretaries are given flowers. As chair for the last session, Lance Dixon stepped up to fill both gaps, with a closing speech that was both a touching tribute to the hard work of organizing the conference and a hilarious pile of in-jokes, including a participation award to Arkani-Hamed for his (unprecedented, as far as I’m aware) perfect attendance.

The Sum of Our Efforts

I got a new paper out last week, with Andrew McLeod, Henrik Munch, and Georgios Papathanasiou.

A while back, some collaborators and I found an interesting set of Feynman diagrams that we called “Omega”. These Omega diagrams were fun because they let us avoid one of the biggest limitations of particle physics: that we usually have to compute approximations, diagram by diagram, rather than finding an exact answer. For these Omegas, we figured out how to add up the whole infinite set of diagrams, with no approximation.

One implication of this was that, in principle, we now knew the answer for each individual Omega diagram, far past what had been computed before. However, writing down these answers was easier said than done. After some wrangling, we got the answer for each diagram in terms of an infinite sum. But despite tinkering with it for a while, even our resident infinite sum expert Georgios Papathanasiou couldn’t quite sum them up.

Naturally, this made me think the sums would make a great Master’s project.

When Henrik Munch showed up looking for a project, Andrew McLeod and I gave him several options, but he settled on the infinite sums. Impressively, he ended up solving the problem in two different ways!

First, he found an old paper none of us had seen before, that gave a general method for solving that kind of infinite sum. When he realized that method was really annoying to program, he took the principle behind it, called telescoping, and came up with his own, simpler method, for our particular case.

Picture an old-timey folding telescope. It might be long when fully extended, but when you fold it up each piece fits inside the previous one, resulting in a much smaller object. Telescoping a sum has the same spirit. If each pair of terms in a sum “fit together” (if their difference is simple), you can rearrange them so that most of the difficulty “cancels out” and you’re left with a much simpler sum.
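The Omega sums themselves are too involved to show here, but a classic textbook example captures the spirit. Each term of the sum of 1/(n(n+1)) splits into 1/n - 1/(n+1), and neighboring pieces cancel:

```python
from fractions import Fraction

# A classic telescoping sum (a toy example, not the paper's Omega sums):
# 1/(n*(n+1)) = 1/n - 1/(n+1), so each term's second piece cancels
# the next term's first piece.
N = 100

brute_force = sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

# After all the cancellations, only the very first and very last
# pieces survive:
telescoped = Fraction(1, 1) - Fraction(1, N + 1)

print(brute_force, telescoped)  # both equal 100/101
```

A hundred terms collapse into two, and the same answer works for any N, which is the whole appeal: a telescoped sum is exact, not approximate.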

Henrik’s telescoping idea worked even better than expected. We found that we could do, not just the Omega sums, but other sums in particle physics as well. Infinite sums are a very well-studied field, so it was interesting to find something genuinely new.

The rest of us worked to generalize the result, to check the examples and to put it in context. But the core of the work was Henrik’s. I’m really proud of what he accomplished. If you’re looking for a PhD student, he’s on the market!

Zoomplitudes 2020

This week, I’m at Zoomplitudes!

My field’s big yearly conference, Amplitudes, was supposed to happen in Michigan this year, but with the coronavirus pandemic it was quickly clear that would be impossible. Luckily, Anastasia Volovich stepped in to Zoomganize the conference from Brown.

Obligatory photo of the conference venue

The conference is still going, so I’ll say more about the scientific content later. (Except to say there have been a lot of interesting talks!) Here, I’ll just write a bit about the novel experience of going to a conference on Zoom.

Time zones are always tricky in an online conference like this. Our field is spread widely around the world, but not evenly: there are a few areas with quite a lot of amplitudes research. As a result, Zoomganizing from the US east coast seems like it was genuinely the best compromise. It means the talks start a bit early for the west coast US (6am their time), but still end not too late for the Europeans (10:30pm CET). The timing is awkward for our colleagues in China and Taiwan, but they can still join in the morning session (their evening). Overall, I don’t think it was possible to do better there.

Usually, Amplitudes is accompanied by a one-week school for Master’s and PhD students. That wasn’t feasible this year, but to fill the gap Nima Arkani-Hamed gave a livestreamed lecture the Friday before, which apparently clocked in at thirteen hours!

One aspect of the conference that really impressed me was the Slack space. The organizers wanted to replicate the “halls” part of the conference, with small groups chatting around blackboards between the talks. They set up a space on the platform Slack, and let attendees send private messages and make their own channels for specific topics. Soon the space was filled with lively discussion, including a #coffeebreak channel with pictures of everyone’s morning coffee. I think the organizers did a really good job of achieving the kind of “serendipity” I talked about in this post, where accidental meetings spark new ideas. More than that, this is the kind of thing I’d appreciate even in face-to-face conferences. The ability to message anyone at the conference from a shared platform, to have discussions that anyone can stumble on and read later, to post papers and links, all of this seems genuinely quite useful. As one of the organizers for Amplitudes 2021, I may soon get a chance to try this out.

Zoom itself worked reasonably well. A few people had trouble connecting or sharing screens, but overall things worked reliably, and the Zoom chat window is arguably better than people whispering to each other in the back of an in-person conference. One feature of the platform that confused people a bit is that co-hosts can’t raise their hands to ask questions: since speakers had to be made co-hosts to share their screens they had a harder time asking questions during other speakers’ talks.

A part I was more frustrated by was the scheduling. Fitting everyone who wanted to speak between 6am west coast and 10:30pm Europe must have been challenging, and the result was a tightly packed conference, with three breaks each no more than 45 minutes. That’s already a bit tight, but it ended up much tighter because most talks went long. The conference’s 30 minute slots regularly took 40 minutes, between speakers running over and questions going late. As a result, the conference’s “lunch break” (roughly dinner break for the Europeans) was often only 15 minutes. I appreciate the desire for lively discussion, especially since the conference is recorded and the question sessions can be a resource for others. But I worry that, as a pitfall of remote conferences, the inconveniences people suffer to attend can become largely invisible. Yes, we can always skip a talk, and watch the recording later. Yes, we can prepare food beforehand. Still, I don’t think a 15 minute lunch break was what the organizers had in mind, and if our community does more remote conferences we should brainstorm ways to avoid this problem next time.

I’m curious how other fields are doing remote conferences right now. Even after the pandemic, I suspect some fields will experiment with this kind of thing. It’s worth sharing and paying attention to what works and what doesn’t.

Bottomless Science

There’s an attitude I keep seeing among physics crackpots. It goes a little something like this:

“Once upon a time, physics had rules. You couldn’t just wave your hands and write down math, you had to explain the world with real physical things.”

What those “real physical things” were varies. Some miss the days when we explained things mechanically, particles like little round spheres clacking against each other. Some want to bring back absolutes: an absolute space, an absolute time, an absolute determinism. Some don’t like the proliferation of new particles, and yearn for the days when everything was just electrons, protons, and neutrons.

In each case, there’s a sense that physicists “cheated”. That, faced with something they couldn’t actually explain, they made up new types of things (fields, relativity, quantum mechanics, antimatter…) instead. That way they could pretend to understand the world, while giving up on their real job, explaining it “the right way”.

I get where this attitude comes from. It does make a certain amount of sense…for other fields.

As an economist, you can propose whatever mathematical models you want, but at the end of the day they have to boil down to actions taken by people. An economist who proposed some sort of “dark money” that snuck into the economy without any human intervention would get laughed at. Similarly, as a biologist or a chemist, you ultimately need a description that makes sense in terms of atoms and molecules. Your description doesn’t actually need to be in terms of atoms and molecules, and often it can’t be: you’re concerned with a different level of explanation. But it should be possible in terms of atoms and molecules, and that puts some constraints on what you can propose.

Why shouldn’t physics have similar constraints?

Suppose you had a mandatory bottom level like this. Maybe everything boils down to ball bearings, for example. What happens when you study the ball bearings?

Your ball bearings have to have some properties: their shape, their size, their weight. Where do those properties come from? What explains them? Who studies them?

Any properties your ball bearings have can be studied, or explained, by physics. That’s physics’s job: to study the fundamental properties of matter. Any “bottom level” is just as fit a subject for physics as anything else, and you can’t explain it using itself. You end up needing another level of explanation.

Maybe you’re objecting here that your favorite ball bearings aren’t up for debate: they’re self-evident, demanded by the laws of mathematics or philosophy.

Here for lack of space, I’ll only say that mathematics and philosophy don’t work that way. Mathematics can tell you whether you’ve described the world consistently, whether the conclusions you draw from your assumptions actually follow. Philosophy can see if you’re asking the right questions, if you really know what you think you know. Both have lessons for modern physics, and you can draw valid criticisms from either. But neither one gives you a single clear way the world must be. Not since the days of Descartes and Kant have people been that naive.

Because of this, physics is doing something a bit different from economics and biology. Each field wants to make models, wants to describe its observations. But in physics, ultimately, those models are all we have. We don’t have a “bottom level”, a backstop where everything has to make sense. That doesn’t mean we can just make stuff up, and whenever possible we understand the world in terms of physics we’ve already discovered. But when we can’t, all bets are off.

The Point of a Model

I’ve been reading more lately, partially for the obvious reasons. Mostly, I’ve been catching up on books everyone else already read.

One such book is Daniel Kahneman’s “Thinking, Fast and Slow”. With all the talk lately about cognitive biases, Kahneman’s account of his research on decision-making was quite familiar ground. The book turned out to be more interesting as a window into the culture of psychology research. While I had a working picture from psychologist friends in grad school, “Thinking, Fast and Slow” covered the other side, the perspective of a successful professor promoting his field.

Most of this wasn’t too surprising, but one passage struck me:

Several economists and psychologists have proposed models of decision making that are based on the emotions of regret and disappointment. It is fair to say that these models have had less influence than prospect theory, and the reason is instructive. The emotions of regret and disappointment are real, and decision makers surely anticipate these emotions when making their choices. The problem is that regret theories make few striking predictions that would distinguish them from prospect theory, which has the advantage of being simpler. The complexity of prospect theory was more acceptable in the competition with expected utility theory because it did predict observations that expected utility theory could not explain.

Richer and more realistic assumptions do not suffice to make a theory successful. Scientists use theories as a bag of working tools, and they will not take on the burden of a heavier bag unless the new tools are very useful. Prospect theory was accepted by many scholars not because it is “true” but because the concepts that it added to utility theory, notably the reference point and loss aversion, were worth the trouble; they yielded new predictions that turned out to be true. We were lucky.

Thinking, Fast and Slow, page 288

Kahneman is contrasting three theories of decision making here: the old proposal that people try to maximize their expected utility (roughly, the benefit they get in future), his more complicated “prospect theory” that takes into account not only what benefits people get but their attachment to what they already have, and other more complicated models based on regret. His theory ended up more popular, both than the older theory and than the newer regret-based models.

Why did his theory win out? Apparently, not because it was the true one: as he says, people almost certainly do feel regret, and make decisions based on it. No, his theory won because it was more useful. It made new, surprising predictions, while being simpler and easier to use than the regret-based models.
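For readers who haven’t met prospect theory, here’s a minimal sketch (my own illustration, not from the book) of its core value function, using the parameter estimates Tversky and Kahneman published in 1992. Gains and losses are measured relative to a reference point (what you already have), gains are valued concavely, and losses are scaled up by a loss-aversion factor:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point.

    alpha < 1 makes gains concave (diminishing returns); lam > 1
    weights losses more heavily than equal-sized gains (loss aversion).
    Parameter values are Tversky & Kahneman's 1992 estimates.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Losing $100 hurts more than gaining $100 feels good:
print(prospect_value(100))   # ≈ 57.5
print(prospect_value(-100))  # ≈ -129.5
```

The reference point and the asymmetry between gains and losses are exactly the “concepts that it added to utility theory” Kahneman mentions above; expected utility theory, by contrast, evaluates total wealth with a single curve and no reference point.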

This, a theory defeating another without being “more true”, might bug you. By itself, it doesn’t bug me. That’s because, as a physicist, I’m used to the idea that models should not just be true, but useful. If we want to test our theories against reality, we have a large number of “levels” of description to choose from. We can “zoom in” to quarks and gluons, or “zoom out” to look at atoms, or molecules, or polymers. We have to decide how much detail to include, and we have real pragmatic reasons for doing so: some details are just too small to measure!

It’s not clear Kahneman’s community was doing this, though. That is, it doesn’t seem like he’s saying that regret and disappointment are just “too small to be measured”. Instead, he’s saying that they don’t seem to predict much differently from prospect theory, and prospect theory is simpler to use.

Ok, we do that in physics too. We like working with simpler theories, when we have a good excuse. We’re just careful about it. When we can, we derive our simpler theories from more complicated ones, carving out complexity and estimating how much of a difference it would have made. Do this carefully, and we can treat black holes as if they were subatomic particles. When we can’t, we have what we call “phenomenological” models, models built up from observation and not from an underlying theory. We never take such models as the last word, though: a phenomenological model is always viewed as temporary, something to bridge a gap while we try to derive it from more basic physics.

Kahneman doesn’t seem to view prospect theory as temporary. It doesn’t sound like anyone is trying to derive it from regret theory, or to make regret theory easier to use, or to prove it always agrees with regret theory. Maybe they are, and Kahneman simply doesn’t think much of their efforts. Either way, it doesn’t sound like a major goal of the field.

That’s the part that bothered me. In physics, we can’t always hope to derive things from a more fundamental theory: some theories are as fundamental as we know. Psychology isn’t like that: any behavior people display has to be caused by what’s going on in their heads. What Kahneman seems to be saying here is that regret theory may well be closer to what’s going on in people’s heads, but he doesn’t care: it isn’t as useful.

And at that point, I have to ask: useful for what?

As a psychologist, isn’t your goal ultimately to answer that question? To find out “what’s going on in people’s heads”? Isn’t every model you build, every theory you propose, dedicated to that question?

And if not, what exactly is it “useful” for?

For technology? It’s true, “Thinking Fast and Slow” describes several groups Kahneman advised, most memorably the IDF. Is the advantage of prospect theory, then, its “usefulness”, that it leads to better advice for the IDF?

I don’t think that’s what Kahneman means, though. When he says “useful”, he doesn’t mean “useful for advice”. He means it’s good for giving researchers ideas, good for getting people talking. He means “useful for designing experiments”. He means “useful for writing papers”.

And this is when things start to sound worryingly familiar. Because if I’m accusing Kahneman’s community of giving up on finding the fundamental truth, just doing whatever they can to write more papers…well, that’s not an uncommon accusation in physics as well. If the people who spend their lives describing cognitive biases are really getting distracted like that, what chance does, say, string theory have?

I don’t know how seriously to take any of this. But it’s lurking there, in the back of my mind, that nasty, vicious, essential question: what are all of our models for?

Bonus quote, for the commenters to have fun with:

I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.

Thinking, Fast and Slow, page 264