Monthly Archives: December 2020

Newtonmas in Uncertain Times

Three hundred and seventy-eight years ago today (depending on which calendar you use), Isaac Newton was born. For a scientist, that’s a pretty good reason to celebrate.

Reason’s Greetings Everyone!

Last month, our local nest of science historians at the Niels Bohr Archive hosted a Zoom talk by Jed Z. Buchwald, a Newton scholar at Caltech. Buchwald had a story to tell about experimental uncertainty, one where Newton had an important role.

If you’ve ever had a lab course in school, you know experiments never quite go like they’re supposed to. Set a room of twenty students to find Newton’s constant, and you’ll get forty different answers. Whether you’re reading a ruler or clicking a stopwatch, you can never measure anything with perfect accuracy. Each time you measure, you introduce a little random error.

Textbooks’ worth of statistical know-how has cropped up over the centuries to compensate for this error and get closer to the truth. The simplest trick, though, is just to average over multiple experiments. It’s such an obvious choice, taking a thousand little errors and smoothing them out, that you might think people have been averaging this way throughout history.
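You can see the trick at work in a quick simulation (a minimal sketch in Python; the "true" value 9.81 m/s² and the size of the error are invented for illustration):

```python
import random
import statistics

# Simulate measuring g = 9.81 m/s^2 with a little Gaussian random error
# each time. The numbers here are illustrative, not real data.
rng = random.Random(0)  # fixed seed so the sketch is reproducible

def measure(true_value=9.81, sigma=0.05):
    """One simulated measurement, off by a random error of scale sigma."""
    return rng.gauss(true_value, sigma)

single = measure()                                    # one try, errors and all
averaged = statistics.mean(measure() for _ in range(1000))

print(f"one measurement: {single:.4f}")
print(f"average of 1000: {averaged:.4f}")  # the thousand errors largely cancel
```

The average lands within a few thousandths of the true value, while any single measurement can easily be off by its full error of 0.05.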

They haven’t though. As far as Buchwald has found, the first person to average experiments in this way was Isaac Newton.

What did people do before Newton?

Well, what might you do, if you didn’t have a concept of random error? You can still see that each time you measure you get a different result. But you would blame yourself: if you were more careful with the ruler, quicker with the stopwatch, you’d get it right. So you practice, you do the experiment many times, just as you would if you were averaging. But instead of averaging, you just take one result, the one you feel you did carefully enough to count.

Before Newton, this was almost always what scientists did. If you were an astronomer mapping the stars, the positions you published would be the last of a long line of measurements, not an average of the rest. Some other tricks existed. Tycho Brahe for example folded numbers together pair by pair, averaging the first two and then averaging that average with the next one, getting a final result weighted to the later measurements. But, according to Buchwald, Newton was the first to just add everything together.
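Brahe’s pair-by-pair folding is easy to contrast with a plain Newton-style average in code (a minimal sketch; the measurement values are invented so the two methods visibly disagree):

```python
from statistics import mean

def brahe_fold(measurements):
    """Tycho Brahe's trick: average the first two results, then keep
    averaging the running result with each new measurement in turn.
    With n measurements, the k-th ends up with weight 1/2**(n-k+1)
    (except the first, which shares the smallest weight with the second),
    so later measurements dominate."""
    running = measurements[0]
    for m in measurements[1:]:
        running = (running + m) / 2
    return running

# Invented data: the last measurement is the odd one out.
data = [10.0, 10.0, 10.0, 10.8]

print("plain average (Newton):", mean(data))       # 10.2, all weighted 1/4
print("pair-by-pair (Brahe): ", brahe_fold(data))  # 10.4, last weighted 1/2
```

With four measurements the weights come out 1/8, 1/8, 1/4, 1/2, which is why Brahe’s result is pulled toward whatever he measured last.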

Even Newton didn’t yet know why this worked. It would take later research, and theorems of statistics, to establish the full justification. It seems Newton and those who came after him had a vague physics analogy in mind, finding a sort of “center of mass” of different experiments. This doesn’t make much sense, but it worked well enough for physics as we know it to begin.

So this Newtonmas, let’s thank the scientists of the past. Working piece by piece, concept by concept, they gave us the tools to navigate our uncertain times.

Inevitably Arbitrary

Physics is universal…or at least, it aspires to be. Drop an apple anywhere on Earth, at any point in history, and it will accelerate at roughly the same rate. When we call something a law of physics, we expect it to hold everywhere in the universe. It shouldn’t depend on anything arbitrary.

Sometimes, though, something arbitrary manages to sneak in. Even if the laws of physics are universal, the questions we want to answer are not: they depend on our situation, on what we want to know.

The simplest example is when we have to use units. The mass of an electron is the same here as it is on Alpha Centauri, the same now as it was when the first galaxies formed. But what is that mass? We could write it as 9.1093837015×10⁻³¹ kilograms, if we wanted to, but kilograms aren’t exactly universal. Their modern definition is at least based on physical constants, but with some pretty arbitrary numbers. It defines the Planck constant as 6.62607015×10⁻³⁴ Joule-seconds. Chase that number back, and you’ll find references to the Earth’s circumference and the time it takes to turn round on its axis. The mass of the electron may be the same on Alpha Centauri, but they’d never write it as 9.1093837015×10⁻³¹ kilograms.

Units aren’t the only time physics includes something arbitrary. Sometimes, like with units, we make a choice of how we measure or calculate something. We choose coordinates for a plot, a reference frame for relativity, a zero for potential energy, a gauge for gauge theories and regularization and subtraction schemes for quantum field theory. Sometimes, the choice we make is instead what we measure. To do thermodynamics we must choose what we mean by a state, to call two substances water even if their atoms are in different places. Some argue a perspective like this is the best way to think about quantum mechanics. In a different context, I’d argue it’s why we say coupling constants vary with energy.
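The zero of potential energy is a handy example of how such a choice washes out of anything physical (a minimal sketch in Python; the heights and mass are invented):

```python
# The zero of potential energy is an arbitrary choice; only energy
# *differences* are physical. Numbers here are invented for illustration.
G_ACCEL = 9.81  # m/s^2, near Earth's surface

def potential(height, zero_at=0.0, mass=1.0):
    """Gravitational potential energy, measured from an arbitrary zero."""
    return mass * G_ACCEL * (height - zero_at)

# One person measures heights from the floor, another from a tabletop...
drop_from_floor = potential(3.0, zero_at=0.0) - potential(1.0, zero_at=0.0)
drop_from_table = potential(3.0, zero_at=1.0) - potential(1.0, zero_at=1.0)

print(drop_from_floor, drop_from_table)  # the same: the choice cancels
```

Both observers disagree about the potential energy at every height, but agree perfectly about the difference, which is all an experiment can measure.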

So what do we do, when something arbitrary sneaks in? We have a few options. I’ll illustrate each with the mass of the electron:

  • Make an arbitrary choice, and stick with it: There’s nothing wrong with measuring an electron in kilograms, if you’re consistent about it. You could even use ounces. You just have to make sure that everyone else you compare with is using the same units, or be careful to convert.
  • Make a “natural” choice: Why not set the speed of light and Planck’s constant to one? They come up a lot in particle physics, and all they do is convert between length and time, or time and energy. That way you can use the same units for all of them, and use something convenient, like electron-Volts. They even have electron in the name! Of course they also have “Volt” in the name, and Volts are as arbitrary as any other metric unit. A “natural” choice might make your life easier, but you should always remember it’s still arbitrary.
  • Make an efficient choice: This isn’t always the same as the “natural” choice. The units you choose have an effect on how difficult your calculation is. Sometimes, the best choice for the mass of an electron is “one electron-mass”, because it lets you calculate something else more easily. This is easier to illustrate with other choices: for example, if you have to pick a reference frame for a collision, picking one in which one of the objects is at rest, or where they move symmetrically, might make your job easier.
  • Stick to questions that aren’t arbitrary: No matter what units we use, the electron’s mass will be arbitrary. Its ratios to other masses won’t be though. No matter where we measure, dimensionless ratios like the mass of the muon divided by the mass of the electron, or the mass of the electron divided by the value of the Higgs field, will be the same. If we can make sure to ask only this kind of question, we can avoid arbitrariness. Note that we can think of even a mass in “kilograms” as this kind of question: what’s the ratio of the mass of the electron to “this arbitrary thing we’ve chosen”? In practice though, you want to compare things in the same theory, without the historical baggage of metric.
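The options above can be tried out numerically (a hedged sketch; the constants are standard CODATA-style values, and the eV conversion just uses E = mc², i.e. the “natural” choice of setting c to one):

```python
# Converting the electron mass between unit choices, and checking that a
# dimensionless ratio doesn't care which choice we made.
C = 2.99792458e8            # speed of light, m/s (exact by definition)
EV = 1.602176634e-19        # one electron-Volt in Joules (exact)

M_E_KG = 9.1093837015e-31   # electron mass, kilograms
M_MU_KG = 1.883531627e-28   # muon mass, kilograms

# "Natural" choice: with c = 1, a mass is an energy, E = m c^2.
m_e_ev = M_E_KG * C**2 / EV
m_mu_ev = M_MU_KG * C**2 / EV
print(f"electron mass: {m_e_ev / 1e6:.4f} MeV")  # ~0.511 MeV

# The dimensionless ratio is the same in kilograms or in eV.
print(f"m_mu / m_e: {M_MU_KG / M_E_KG:.3f}")
print(f"m_mu / m_e: {m_mu_ev / m_e_ev:.3f}")     # identical
```

The kilogram numbers and the eV numbers look nothing alike, but the muon-to-electron ratio (about 206.77) comes out the same either way, which is exactly the sense in which it is a question worth asking.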

This problem may seem silly, and if we just cared about units it might be. But at the cutting-edge of physics there are still areas where the arbitrary shows up. Our choices of how to handle it, or how to avoid it, can be crucial to further progress.

QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Since LIGO’s first announcement of a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included combining two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately, using an old quantum field theory trick, finding the gravitational wave directly from amplitudes, and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

From there, a grab-bag of talks covered other advancements. There were talks from string theorists and ambitwistor string theorists, from Effective Field Theorists working on gravity and the Standard Model, from calculations in N=4 super Yang-Mills, QCD, and scalar theories. Simon Caron-Huot delved into how causality constrains the theories we can write down, showing an interesting case where the common assumption that all parameters are close to one is actually justified. Nima Arkani-Hamed began his talk by saying he’d surprise us, which he certainly did (and not by keeping on time). It’s tricky to explain why his talk was exciting. Comparing to his earlier discovery of the Amplituhedron, which worked for a toy model, this is a toy calculation in a toy model. While the Amplituhedron wasn’t based on Feynman diagrams, this can’t even be compared with Feynman diagrams. Instead of expanding in a small coupling constant, this expands in a parameter that by all rights should be equal to one. And instead of positivity conditions, there are negativity conditions. All I can say is that with all of that in mind, it looks like real progress on an important and difficult problem from a totally unanticipated direction. In a speech summing up the conference, Zvi Bern mentioned a few exciting words from Nima’s talk: “nonplanar”, “integrated”, “nonperturbative”. I’d add “differential equations” and “infinite sums of ladder diagrams”. Nima and collaborators are trying to figure out what happens when you sum up all of the Feynman diagrams in a theory. I’ve made progress in the past for diagrams with one “direction”, a ladder that grows as you add more loops, but I didn’t know how to add “another direction” to the ladder. In very rough terms, Nima and collaborators figured out how to add that direction.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.

QCD Meets Gravity 2020

I’m at another Zoom conference this week, QCD Meets Gravity. This year it’s hosted by Northwestern.

The view of the campus from wonder.me

QCD Meets Gravity is a conference series focused on the often-surprising links between quantum chromodynamics on the one hand and gravity on the other. By thinking of gravity as the “square” of forces like the strong nuclear force, researchers have unlocked new calculation techniques and deep insights.

Last year’s conference was very focused on one particular topic, trying to predict the gravitational waves observed by LIGO and VIRGO. That’s still a core topic of the conference, but it feels like there is a bit more diversity in topics this year. We’ve seen a variety of talks on different “squares”: new theories that square to other theories, and new calculations that benefit from “squaring” (even surprising applications to the Navier-Stokes equation!). There are talks on subjects from String Theory to Effective Field Theory, and even a talk on a very different way that “QCD meets gravity”, in collisions of neutron stars.

With still a few more talks to go, expect me to say a bit more next week, probably discussing a few in more detail. (Several people presented exciting work in progress!) Until then, I should get back to watching!