Monthly Archives: November 2021

Searching for Stefano

On Monday, Quanta magazine released an article on a man who transformed the way we do particle physics: Stefano Laporta. I’d tipped them off that Laporta would make a good story: someone who came up with the bread-and-butter algorithm that fuels all of our computations, then vanished from the field for ten years, returning at the end with a 1,100-digit masterpiece. There’s a resemblance to Searching for Sugar Man: fans and supporters baffled that their hero is living in obscurity.

If anything, I worry I undersold the story. When Quanta interviewed me, it was clear they were looking for ties to well-known particle physics results: was Laporta’s work necessary for the Higgs boson discovery, or linked to the controversy over the magnetic moment of the muon? I was careful, perhaps too careful, in answering. The Higgs, to my understanding, didn’t require so much precision for its discovery. As for the muon, the controversial part is a kind of calculation that wouldn’t use Laporta’s methods, while the uncontroversial part was found numerically by a group that doesn’t use his algorithm either.

With more time now, I can make a stronger case. I can trace Laporta’s impact, show who uses his work and for what.

In particle physics, we have a lovely database called INSPIRE that lists all our papers. Here is Laporta’s page, his work sorted by number of citations. When I look today, I find his most cited paper, the one that first presented his algorithm, at the top, with a delightfully apt 1,001 citations. Let’s listen to a few of those 1,001 tales, and see what they tell us.
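(For the curious: INSPIRE also has a public API, so you can pull this kind of citation-sorted list programmatically. The sketch below shows roughly how, in Python; the endpoint, query syntax, and parameter names are my recollection of that API rather than anything stated here, so treat them as assumptions and check INSPIRE’s API documentation before relying on them.)

```python
# Rough sketch: fetch an author's most-cited papers from the INSPIRE-HEP API.
# The endpoint, query syntax, and field names are assumptions from memory;
# consult INSPIRE's API documentation before relying on them.
import requests

response = requests.get(
    "https://inspirehep.net/api/literature",
    params={
        "q": "a Laporta, Stefano",        # author query (assumed syntax)
        "sort": "mostcited",              # sort by citation count (assumed)
        "size": 5,                        # just the top five hits
        "fields": "titles,citation_count",
    },
    timeout=30,
)
response.raise_for_status()

for hit in response.json()["hits"]["hits"]:
    metadata = hit["metadata"]
    title = metadata["titles"][0]["title"]
    citations = metadata.get("citation_count", 0)
    print(f"{citations:>6}  {title}")
```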

Once again, we’ll sort by citations. The top paper, “Higgs boson production at hadron colliders in NNLO QCD”, is from 2002. It computes the chance that a particle collider like the LHC could produce a Higgs boson. It in turn has over a thousand citations, headlined by two from the ATLAS and CMS collaborations: “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” and “Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC”. Those are the papers that announced the discovery of the Higgs, each with more than twelve thousand citations. Later in the list, there are design reports: discussions of why the collider experiments are built a certain way. So while it’s true that the Higgs boson could be seen clearly from the data, Laporta’s work still had a crucial role: with his algorithm, we could reassure experimenters that they really found the Higgs (not something else), and even more importantly, help them design the experiment so that they could detect it.

The next paper tells a similar story. A different calculation, with almost as many citations, feeding again into planning and prediction for collider physics.

The next few touch on my own corner of the field. “New Relations for Gauge-Theory Amplitudes” triggered a major research topic in its own right, one with its own conference series. Meanwhile, “Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond” served as a foundation for my own career, among many others. None of this would have happened without Laporta’s algorithm.

After that, more applications: fundamental quantities for collider physics, pieces of math that are used again and again. In particular, they are referenced again and again by the Particle Data Group, who collect everything we know about particle physics.

Further down still, and we get to specific code: FIRE and Reduze, programs made by others to implement Laporta’s algorithm, each with many uses in its own right.

All that, just from one of Laporta’s papers.
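For a feel for what programs like FIRE and Reduze are automating, here is a deliberately tiny sketch in Python. Laporta’s approach, very roughly, is to generate enormous systems of linear identities among Feynman integrals, order the integrals by complexity, and solve by Gaussian elimination so that everything gets expressed in terms of a few simple “master” integrals. The identities, names, and coefficients below are invented for illustration, and nothing here reflects the real programs’ internals.

```python
# Toy illustration of a Laporta-style reduction: order the "integrals" by
# complexity, then use each linear identity to eliminate the most complicated
# integral appearing in it, until everything is written in terms of a few
# simple "masters". All names and coefficients are made up for illustration.
from fractions import Fraction

ORDER = ["I1", "I2", "I3", "I4"]          # later entries count as more complicated
RANK = {name: i for i, name in enumerate(ORDER)}

# Each identity asserts: sum of coefficient * integral = 0.
identities = [
    {"I4": Fraction(2), "I2": Fraction(1), "I1": Fraction(-3)},
    {"I3": Fraction(1), "I4": Fraction(1), "I1": Fraction(1)},
]

rules = {}  # reduced integral -> expression in terms of simpler integrals

def substitute(expr, rules):
    """Rewrite expr, replacing any integrals that already have reduction rules."""
    out = {}
    for name, coeff in expr.items():
        for sub_name, sub_coeff in rules.get(name, {name: Fraction(1)}).items():
            out[sub_name] = out.get(sub_name, Fraction(0)) + coeff * sub_coeff
    return {k: v for k, v in out.items() if v != 0}

for identity in identities:
    identity = substitute(identity, rules)
    if not identity:
        continue  # this identity adds nothing new
    target = max(identity, key=RANK.get)          # most complicated integral present
    pivot = identity.pop(target)
    rules[target] = {name: -coeff / pivot for name, coeff in identity.items()}
    # Push the new rule through the older ones.
    rules = {name: substitute(rule, rules) for name, rule in rules.items()}

for name in sorted(rules, key=RANK.get):
    terms = " + ".join(f"({c})*{n}" for n, c in sorted(rules[name].items()))
    print(f"{name} = {terms or '0'}")
```

The value of the real algorithm is that this elimination can be organized to tame millions of such identities, collapsing them onto a handful of master integrals; that bookkeeping is what FIRE and Reduze provide.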

His ten-year magnum opus is more recent, and has fewer citations: checking now, just 139. Still, there are stories to tell there too.

I mentioned 1,100 digits earlier, and this might confuse some of you. The most precise prediction in particle physics, the magnetic behavior of the electron, has ten digits of precision. Laporta’s calculation didn’t change that, because what he calculated isn’t the only contribution: he calculated Feynman diagrams with four “loops”, which is its own approximation, one limited in precision by what might be contributed by more loops. The current result has Feynman diagrams with five loops as well (known to much less than 1,100 digits), but the diagrams with six or more are unknown, and can only be estimated. The result also depends on measurements, which themselves can’t reach 1,100 digits of precision.
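To put rough numbers on that (a back-of-the-envelope sketch using the familiar value α ≈ 1/137, not anything from Laporta’s paper): each extra loop comes with another power of α/π, so the loop contributions shrink quickly, and the unknown pieces set a floor on the achievable precision no matter how many digits the four-loop coefficient itself has.

```python
# Back-of-the-envelope: each loop order contributes roughly (alpha/pi)^n times
# an order-one coefficient. The rapid shrinking shows why the unknown
# higher-loop pieces (and the measured value of alpha) cap the precision of
# the full prediction, not the number of digits in the four-loop coefficient.
# Purely illustrative, not a real error budget.
import math

alpha = 1 / 137.036            # fine-structure constant, roughly
x = alpha / math.pi            # the expansion parameter, about 0.0023

for loops in range(1, 7):
    print(f"{loops}-loop term ~ (alpha/pi)^{loops} ~ {x**loops:.1e}")
```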

So why would you want 1,100 digits, then? In a word, mathematics. The calculation involves exotic types of numbers called periods, more complicated cousins of numbers like pi. These numbers are related to each other, often in complicated and surprising ways, ways which are hard to verify without such extreme precision. An older result of Laporta’s inspired the physicist David Broadhurst and the mathematician Anton Mellit to conjecture new relations between numbers of this type, relations that were only later proven using cutting-edge mathematics. The new result has inspired mathematicians too: Oliver Schnetz found hints of a kind of “numerical footprint”, special types of numbers tied to the physics of electrons. It’s a topic I’ve investigated myself, something I think could lead to much more efficient particle physics calculations.
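To make “hard to verify without such extreme precision” a bit more concrete: a standard way to hunt for such relations is an integer-relation search, which only works if you know the numbers to many more digits than the relation itself contains. Here is a minimal sketch using mpmath’s PSLQ routine, “rediscovering” a relation everyone already knows, ζ(2) = π²/6. This is my own illustrative example of the general technique, not the specific method behind the conjectures mentioned above.

```python
# Minimal sketch of an integer-relation (PSLQ) search: given high-precision
# values, find small integers c with c[0]*zeta(2) + c[1]*pi^2 + c[2]*1 ~ 0.
# The interesting real-world cases involve far more exotic periods and far
# more digits, which is where results like Laporta's 1,100 digits come in.
from mpmath import mp, pslq, zeta, pi

mp.dps = 60                      # work with 60 decimal digits

values = [zeta(2), pi**2, mp.mpf(1)]
relation = pslq(values)
print(relation)                  # expect something proportional to [6, -1, 0]

residual = sum(c * v for c, v in zip(relation, values))
print(residual)                  # tiny: the relation holds to roughly 60 digits
```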

In addition to being inspired by Laporta’s work, Broadhurst has advocated for it. He was the one who first brought Laporta’s story to my attention, with a moving description of welcoming him back to the community after his ten-year silence, writing a letter to help him get funding. I don’t have all the details of the situation, but the impression I get is that Laporta had virtually no academic support for those ten years: no salary, no students, having to ask friends elsewhere for access to computer clusters.

When I ask why someone with such an impact didn’t have a professorship, the answer I keep hearing is that he didn’t want to move away from his home town of Bologna. If you aren’t an academic, that won’t sound like much of an explanation: Bologna has a university after all, the oldest in the world. But that isn’t actually a guarantee of anything. Universities hire rarely, according to their own mysterious agenda. I remember another colleague whose wife worked for a big company. They offered her positions in several cities, including New York, and told her that since New York has so many universities, surely her husband could find a job at one of them. We all had a sad chuckle at that.

For almost any profession, a contribution like Laporta’s would let you live anywhere you wanted. That’s not true for academia, and it’s to our loss. By demanding that each scientist be able to pick up and move, we’re cutting talented people out of the field, filtering by traits that have nothing to do with our contributions to knowledge. I don’t know Laporta’s full story. But I do know that doing the work you love in the town you love isn’t some kind of unreasonable request. It’s a request academia should be better at fulfilling.

Don’t Trust the Experiments, Trust the Science

I was chatting with an astronomer recently, and this quote by Arthur Eddington came up:

“Never trust an experimental result until it has been confirmed by theory.”

Arthur Eddington

At first, this sounds like just typical theorist arrogance, thinking we’re better than all those experimentalists. It’s not that, though, or at least not just that. Instead, it’s commenting on a trend that shows up again and again in science, but rarely makes the history books. Again and again an experiment or observation comes through with something fantastical, something that seems like it breaks the laws of physics or throws our best models into disarray. And after a few months, when everyone has checked, it turns out there was a mistake, and the experiment agrees with existing theories after all.

You might remember a recent example, when a lab claimed to have measured neutrinos moving faster than the speed of light, only for the result to turn out to be due to a loose cable. Mistakes like this aren’t just a product of modern hype: as Eddington’s quote shows, they were common in his day as well. In general, Eddington’s advice is good: when an experiment contradicts theory, theory tends to win in the end.

This may sound unscientific: surely we should care only about what we actually observe? If we defer to theory, aren’t we putting dogma ahead of the evidence of our senses? Isn’t that the opposite of good science?

To understand what’s going on here, we can use an old philosophical argument: David Hume’s argument against miracles. Hume wanted to understand how we use evidence to reason about the world. He argued that, for miracles in particular, we can never have good evidence. In Hume’s definition, a miracle was something that broke the established laws of science. Hume argued that, if you believe you observed a miracle, there are two possibilities: either the laws of science really were broken, or you made a mistake. The thing is, laws of science don’t just come from a textbook: they come from observations as well, many many observations in many different conditions over a long period of time. Some of those observations establish the laws in the first place, others come from the communities that successfully apply them again and again over the years. If your miracle was real, then it would throw into doubt many, if not all, of those observations. So the question you have to ask is: is it more likely that those observations were wrong, or that you made a mistake? Put another way, your evidence is only good enough for a miracle if it would be a bigger miracle if you were wrong.

Hume’s argument always struck me as a little bit too strict: if you rule out miracles like this, you also rule out new theories of science! A more modern approach would use numbers and statistics, weighing the past evidence for a theory against the precision of the new result. Most of the time you’d reach the same conclusion, but sometimes an experiment can be good enough to overthrow a theory.
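Here is a minimal sketch of the kind of weighing I mean, with numbers that are entirely made up: treat the accumulated evidence for the theory as a prior, and ask how unlikely a mistake would have to be before the theory-breaking explanation wins.

```python
# Minimal Bayesian sketch of Hume's weighing, with made-up numbers.
# "Miracle" = the established theory really is broken; the alternative is
# that the experiment made a mistake (loose cable, calibration slip, ...).

def posterior_miracle(prior_miracle, p_result_if_miracle, p_result_if_mistake):
    """Probability that the theory-breaking explanation is right, given the result."""
    prior_mistake = 1 - prior_miracle
    evidence = (prior_miracle * p_result_if_miracle
                + prior_mistake * p_result_if_mistake)
    return prior_miracle * p_result_if_miracle / evidence

# Centuries of observations back the theory, so "the theory is broken" starts
# with a very low prior probability. Subtle mistakes, meanwhile, produce
# anomalies surprisingly often.
print(posterior_miracle(prior_miracle=1e-6,
                        p_result_if_miracle=0.99,
                        p_result_if_mistake=0.01))   # ~1e-4: the prior wins

print(posterior_miracle(prior_miracle=1e-6,
                        p_result_if_miracle=0.99,
                        p_result_if_mistake=1e-9))   # ~0.999: a mistake would
                                                     # now be the bigger miracle
```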

Still, theory should always sit in the background, a kind of safety net for when your experiments screw up. That does mean that when you don’t have the safety net, you need to be extra careful. Physics is an interesting case of this: while we have “the laws of physics”, we don’t have any established theory that tells us what kinds of particles should exist. That puts physics in an unusual position, and it’s probably part of why we have such strict standards of statistical proof. If you’re going to be operating without the safety net of theory, you need that kind of proof.
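To give a sense of how strict those standards are, here is the usual “five sigma” discovery convention written as a tail probability. The specific convention is my addition for illustration; the paragraph above only says the standards are strict.

```python
# The conventional particle-physics discovery threshold, "five sigma",
# written as a one-sided Gaussian tail probability. (The convention itself is
# an illustrative addition, not something stated in the post above.)
import math

def one_sided_p_value(n_sigma):
    """Chance of a Gaussian fluctuation at least n_sigma above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 5):
    print(f"{n} sigma -> p ~ {one_sided_p_value(n):.1e}")
# 2 sigma -> p ~ 2.3e-02
# 3 sigma -> p ~ 1.3e-03   (usually only called "evidence")
# 5 sigma -> p ~ 2.9e-07   (the discovery threshold)
```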

This post was also inspired by some biological examples. The examples are politically controversial, so since this is a no-politics blog I won’t discuss them in detail. (I’ll also moderate out any comments that do.) All I’ll say is that I wonder if in that case the right heuristic is this kind of thing: not to “trust scientists” or “trust experts” or even “trust statisticians”, but just to trust the basic, cartoon-level biological theory.

The Irons in the Fire Metric

I remember, a while back, visiting a friend in his office. He had just become a professor, and was still setting things up. I noticed a list on the chalkboard, taking up almost one whole side. Taking a closer look, I realized it was a list of projects. To my young postdoc eyes, the list was amazing: how could one person be working on so many things?

There’s an idiom in English, “too many irons in the fire”. You can imagine a blacksmith forging many things at once, each piece of iron taking focus from the others. Too many, and a piece might break, or otherwise fail.

Perhaps the irons in the fire are fire irons

In theoretical physics, a typical PhD student publishes three papers before graduating. That usually means one project at a time, maybe two. For someone used to one or two irons in the fire, so many at once seems an impossible feat.

Scientists grow over their careers, though, and in more than one way. What seems impossible can eventually be business as usual.

First, as your skill grows, you become more efficient. A lot of scientific work is a kind of debugging: making mistakes, and figuring out how to fix them. The more experience you have, the more you know what kinds of mistakes you might make, and the better you will be at avoiding them. (Never perfect, of course: scientists always have to debug something.)

Second, your collaborations grow. The more people you work with, the more you can share these projects, each person contributing their own piece. With time, you start supervising as well: Masters students, PhD students, postdocs. Each one adds to the number of irons you can manage in your fire. While for bad supervisors this just means having their name on lots of projects, the good supervisors will be genuinely contributing to each one. That’s yet another kind of growth: as you get further along, you get a better idea of what works and what doesn’t, so even in a quick meeting you can solve meaningful problems.

Third, you grow your system. The ideas you explore early on blossom into full-fledged methods, tricks which you can pull out again and again when you need them. The tricks combine, forming new, bigger tricks, and eventually a long list of projects becomes second nature, a natural thing your system is able to do.

As you grow as a scientist, you become more than just one researcher, one debugger at a laptop or pipetter at a lab bench. You become a research program, one that manifests across many people and laptops and labs. As your expertise grows, you become a kind of living exchange of ideas, concepts flowing through you when needed, building your own scientific world.

Facts About Math Are Facts About Us

Each year, the Niels Bohr International Academy has a series of public talks. Part of Copenhagen’s Folkeuniversitet (“people’s university”), they attract a mix of older people who want to keep up with modern developments and young students looking for inspiration. I gave a talk a few days ago, as part of this year’s program. The last time I participated, back in 2017, I covered a topic that comes up a lot on this blog: “The Quest for Quantum Gravity”. This year, I was asked to cover something more unusual: “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”.

Some of you might notice that title is already taken: it’s a famous lecture by the physicist Wigner, from 1959. Wigner posed an interesting question: why is advanced mathematics so useful in physics? Time and time again, mathematicians develop an idea purely for its own sake, only for physicists to find it absolutely indispensable to describe some part of the physical world. Should we be surprised that this keeps working? Suspicious?

I talked a bit about this: some of the answers people have suggested over the years, and my own opinion. But as with most public talks, the premise was mostly a vehicle for cool examples: physicists through history bringing in new math, and surprising mathematical facts like the ones I talked about a few weeks back at Culture Night. Because of that, I was actually a bit unprepared to dive into the philosophical side of the topic (despite it being, in principle, a very philosophical topic!). When one of the audience members brought up mathematical Platonism, I floundered a bit, not wanting to say something that was too philosophically naive.

Well, if there’s anywhere I can be naive, it’s my own blog. I even have a label for Amateur Philosophy posts. So let’s do one.

Mathematical Platonism is the idea that mathematical truths “exist”: that they’re somewhere “out there” being discovered. On the other side, one might believe that mathematics is not discovered, but invented. For some reason, a lot of people with the latter opinion seem to think this has something to do with describing nature (for example, an essay a few years back by Lee Smolin defines mathematics as “the study of systems of evoked relationships inspired by observations of nature”).

I’m not a mathematical Platonist. I don’t even like to talk about which things do or don’t “exist”. But I also think that describing mathematics in terms of nature is missing the point. Mathematicians aren’t physicists. While there may have been a time when geometers argued over lines in the sand, these days mathematicians’ inspiration isn’t usually the natural world, at least not in the normal sense.

Instead, I think you can’t describe mathematics without describing mathematicians. A mathematical fact is, deep down, something a mathematician can say without other mathematicians shouting them down. It’s an allowed move in what my hazy secondhand memory of Wittgenstein wants to call a “language game”: something that gets its truth from a context of people interpreting and reacting to it, in the same way a move in chess matters only when everyone is playing by its rules.

This makes mathematics sound very subjective, and we’re used to the opposite: the idea that a mathematical fact is as objective as they come. The important thing to remember is that even with this kind of description, mathematics still ends up vastly less subjective than any other field. We care about subjectivity between different people: if a fact is “true” for Brits and “false” for Germans, then it’s a pretty limited fact. Mathematics is special because the “rules of its game” aren’t rules of one group or another. They’re rules that are in some sense our birthright. Any human who can read and write, or even just act and perceive, can act as a Turing Machine, a universal computer. With enough patience and paper, anything that you can prove to one person you can prove to another: you just have to give them the rules and let them follow them. It doesn’t matter how smart you are, or what you care about most: if something is mathematically true for others, it is mathematically true for you.
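To make “just give them the rules and let them follow them” a little more tangible, here is a toy rule-follower: a tiny Turing-machine-style simulator whose rules increment a binary number. Carrying out the rules takes patience, not insight or taste, which is the point. The specific machine is my own illustrative choice, not anything drawn from the talk.

```python
# A toy rule-follower: a tiny Turing-machine-style simulator whose rules
# increment a binary number. The machine itself is an illustrative choice;
# the point is only that following the rules takes patience, not judgment.
BLANK = "_"

# (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("seek_end", "0"): ("0", +1, "seek_end"),
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", BLANK): (BLANK, -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),   # flip trailing 1s to 0s
    ("carry", "0"): ("1", 0, "done"),     # first 0 becomes 1: carry absorbed
    ("carry", BLANK): ("1", 0, "done"),   # ran off the left end: new leading 1
}

def run(binary):
    tape = dict(enumerate(binary))
    head, state = 0, "seek_end"
    while state != "done":
        write, move, state = RULES[(state, tape.get(head, BLANK))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip(BLANK)

print(run("1011"))  # 11 + 1 -> "1100"
print(run("111"))   #  7 + 1 -> "1000"
```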

Some would argue that this is evidence for mathematical Platonism, that if something is a universal truth it should “exist”. Even if it does, though, I don’t think it’s useful to think of it in that way. Once you believe that mathematical truth is “out there”, you want to try to characterize it, to say something about it besides that it’s “out there”. You’ll be tempted to have an opinion on the Axiom of Choice, or the Continuum Hypothesis. And the whole point is that those aren’t sensible things to have opinions on, that having an opinion about them means denying the mathematical proofs that they are, in the “standard” axioms, undecidable. Whatever is “out there”, it has to include everything you can prove with every axiom system, whatever non-standard ones you can cook up, because mathematicians will happily work on any of them. The whole point of mathematics, the thing that makes it as close to objective as anything can be, is that openness: the idea that as long as an argument is good enough, as long as it can convince anyone prepared to wade through the pages, it is mathematics. Nothing, so long as it can convince in the long run, is excluded.

If we take this definition seriously, there are some awkward consequences. You could imagine a future in which every mind, everyone you might be able to do mathematics with, is crushed under some tyrant, forced to agree to something false. A real philosopher would dig into this corner case, try to salvage the definition or throw it out. I’m not a real philosopher though. So all I can say is that while I don’t think that tyrant gets to define mathematics, I also don’t think there are good alternatives to my argument. Our only access to mathematics, and to truth in general, is through the people who pursue it. I don’t think we can define one without the other.