Category Archives: Life as a Physicist

Sandbox Collaboration

In science, every project is different. Sometimes, my collaborators and I have a clear enough goal, and a clear enough way to get there. There are always surprises along the way, of course, but nonetheless we keep a certain amount of structure. That can mean dividing tasks (“you find the basis, I’ll find the constraints”), or it can mean everyone doing the same work in parallel, like a group of students helping each other with homework.

Recently, I’ve experienced a different kind of collaboration. The goals are less clear, and the methods are more…playful.

Oh, are you building a sandcastle? Or a polylogarithm?

A big task improves with collaboration: you can divide it up. A delicate task improves with collaboration: you can check each other’s work. An unclear task also improves with collaboration: you can explore more ground.

Picture a bunch of children playing in a sandbox. The children start out sitting by themselves, each digging in the sand. Some are building castles, others dig moats, or search for buried treasure, or dinosaur bones. As the children play, their games link up: the moat protects the castle, the knights leave for treasure, the dinosaur awakens and attacks. The stories feed back on one another, and the game grows.

The project I’m working on now is a bit like that sandbox. Each of us has our own ideas about what we’d like to build, and each experiments with them. We see what works and what doesn’t, which castles hold and which fall over. We keep an eye on what the others are doing, and adjust: if that castle is close to done, maybe a moat would improve the view. Piece by piece, the unclear task becomes clearer. Our individual goals draw us in different directions, but what we discover in the end brings us back together, richer for our distant discoveries.

Working this way requires a lot of communication! In the past, I was mystified when I saw other physicists spend hours talking at a blackboard. I thought that must be a waste of time: surely they’d get more done if they sat at their desks and worked things out, rather than having to talk through every step. Now I realize they were likely part of a different kind of collaboration: not dividing tasks or working in parallel on a clear calculation, but exploring different approaches. In these collaborations, those long chats are a kind of calibration: by explaining what you’re trying to do, you see whether it makes sense to your collaborators. You can drop the parts that don’t make sense and build in some of your collaborators’ ideas. In the end you begin to converge on something that everyone can endorse. Your sandcastles meet up, your stories become one story. When everything looks good, you’re ready to call over your mom (or in this case, the arXiv) and show it off.

Amplitudes 2021 Retrospective

Phew!

The conference photo

Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.

Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.

A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why; it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell YouTube wouldn’t let me re-open the previous stream, so each time I had to post a new link, which was probably frustrating for those following along there.

That said, this was less of a problem than it might have been, because attendance/”viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.

The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.

In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.

A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would be completely off from experiment and bringing it into line. These calculations are more useful the more of them can be varied: a result given as a function is more useful than a single number, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.

If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.

On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.

There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere” where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.

We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I somewhat cheesily describe as “good for the soul”: a look to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which after a hasty discussion during this year’s conference has now localized to Prague. Let’s hope it’s in person!

Busy Organizing Amplitudes 2021

I’m busy this week with Amplitudes 2021. Being behind the “organizer’s desk” for one of these conferences is an entirely different experience from attending one. There’s a lot to keep track of: keeping the Zoom going smoothly, the website up to date, and the YouTube stream running. Luckily we have good help, a team of students handling a lot of the more finicky details. I think we’ve been putting on a good conference, but there are definitely lessons I’ve learned for the next time I host something.

The content has been interesting too, of course, and despite being busy I’ve still gotten to watch the talks. I’ll say more about this after the conference; there have been quite a few interesting developments in the past year.

Next Week, Amplitudes 2021!

I calculate things called scattering amplitudes, the building-blocks of predictions in particle physics. I’m part of a community of “amplitudeologists” who try to find better ways to compute these things, to achieve more efficiency and deeper understanding. We meet once a year for our big conference, called Amplitudes. And this year, I’m one of the organizers.

This year also happens to be the 100th anniversary of the founding of the Niels Bohr Institute, so we wanted to do something special. We found a group of artists working on a rendering of Niels Bohr. The original idea was to do one of those celebrity holograms, but after the conference went online we decided to make a few short clips instead. I wrote a Bohr-esque script, and we got help from one of Bohr’s descendants to get the voice just so. Now, you can see the result, as our digital Bohr invites you to the conference.

We’ll be livestreaming the conference on the same YouTube channel, and posting videos of the talks each day. If you’re curious about the latest developments in scattering amplitudes, I encourage you to tune in. And if you’re an amplitudeologist yourself, registration is still open!

Digging for Buried Insight

The scientific method, as we usually learn it, starts with a hypothesis. The scientist begins with a guess, and asks a question with a clear answer: true, or false? That guess lets them design an experiment, observe the consequences, and improve our knowledge of the world.

But where did the scientist get the hypothesis in the first place? Often, through some form of exploratory research.

Exploratory research is research done, not to answer a precise question, but to find interesting questions to ask. Each field has its own approach to exploration. A psychologist might start with interviews, asking broad questions to find narrower questions for a future survey. An ecologist might film an animal, looking for changes in its behavior. A chemist might measure many properties of a new material, seeing if any stand out. Each approach is like digging for treasure: you’re never sure exactly what you will find.

Mathematicians and theoretical physicists don’t do experiments, but we still need hypotheses. We need an idea of what we plan to prove, or what kind of theory we want to build: like other scientists, we want to ask a question with a clear, true/false answer. And to find those questions, we still do exploratory research.

What does exploratory research look like, in the theoretical world? Often, it begins with examples and calculations. We can start with a known method, or a guess at a new one: a recipe for doing some specific kind of calculation. Recipe in hand, we proceed to do the same kind of calculation for a few different examples, covering different sorts of situations. Along the way, we notice patterns: maybe the same steps happen over and over, or the result always has some feature.

We can then ask, do those same steps always happen? Does the result really always have that feature? We have our guess, our hypothesis, and our attempt to prove it is much like an experiment. If we find a proof, our hypothesis was true. On the other hand, we might not be able to find a proof. Instead, exploring, we might find a counterexample – one where the steps don’t occur, the feature doesn’t show up. That’s one way to learn that our hypothesis was false.
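
To make that loop concrete, here’s a minimal toy sketch in Python (the famous polynomial n^2 + n + 41 is my own choice of illustration, not an example from any real project): compute a few cases, notice a pattern, promote it to a hypothesis, and keep exploring until a counterexample turns up.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check; fine for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Step 1: exploration. Compute a few examples and look for a pattern.
print([n * n + n + 41 for n in range(8)])                  # [41, 43, 47, 53, 61, 71, 83, 97]
print(all(is_prime(n * n + n + 41) for n in range(8)))     # True: they all look prime!

# Step 2: hypothesis. "n^2 + n + 41 is always prime." Push the exploration further.
counterexamples = [n for n in range(100) if not is_prime(n * n + n + 41)]
print(counterexamples[:2])   # [40, 41]: the hypothesis fails at n = 40 (1681 = 41**2)
```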

This kind of exploration is essential to discovery. As scientists, we all have to eventually ask clear yes/no questions, to submit our beliefs to clear tests. But we can’t start with those questions. We have to dig around first, to observe the world without a clear plan, to get to a point where we have a good question to ask.

Papers With Questions and Papers With Answers

I’ve found that when it comes to reading papers, there are two distinct things I look for.

Sometimes, I read a paper looking for an answer. Typically, this is a “how to” kind of answer: I’m trying to do something, and the paper I’m reading is supposed to explain how. More rarely, I’m directly using a result: the paper proved a theorem or computed a formula, and I just take it as written and use it to calculate something else. Either way, I’m seeking out the paper with a specific goal in mind, which typically means I’m reading it long after it came out.

Other times, I read a paper looking for a question. Specifically, I look for the questions the author couldn’t answer. Sometimes these are things they point out, limitations of their result or opportunities for further study. Sometimes, these are things they don’t notice, holes or patterns in their results that make me wonder “what if?” Either can be the seed of a new line of research, a problem I can solve with a new project. If I read a paper in this way, typically it just came out, and this is the first time I’ve read it. When that isn’t the case, it’s because I start out with another reason to read it: often I’m looking for an answer, only to realize the answer I need isn’t there. The missing answer then becomes my new question.

I’m curious about the balance of these two behaviors in different fields. My guess is that some fields read papers more for their answers, while others read them more for their questions. If you’re working in another field, let me know what you do in the comments!

A Week Among the Pedagogues

Pedagogy courses have a mixed reputation among physicists, and for once I don’t just mean “mixed” as a euphemism for “bad”. I’ve met people who found them very helpful, and I’ve been told that attending a Scandinavian pedagogy course looks really good on a CV. On the other hand, I’ve heard plenty of horror stories of classes that push a jumble of dogmatic requirements and faddish gimmicks, all based on research that, if anything, has more of a replication crisis going on than psychology does.

With that reputation in mind, I went into the pedagogy course last week hopeful, but skeptical. In part, I wasn’t sure whether pedagogy was the kind of thing that could be taught. Each class is different, and so much of what makes a bad or good teacher seems to be due to experience, which one can’t get much of in a one-week course. I couldn’t imagine what facts a pedagogy course could tell me that would actually improve my teaching, and wouldn’t just be ill-justified dogma.

The answer, it turned out, would be precisely the message of the course. A pedagogy course that drills you in “pedagogy facts” would indeed be annoying. But one of those “pedagogy facts” is that teaching isn’t just drilling students in facts. And because this course practiced what it preached, it ended up much less annoying than I worried it would be.

There were hints of that dogmatic approach in the course materials, but only hints. An early slide had a stark quote calling pure lecturing irresponsible. The teacher immediately and awkwardly distanced himself from it, almost literally saying “well that is a thing someone could say”. Instead, most of the class was made up of example lessons and student discussions. We’d be assembled into groups to discuss something, then watch a lesson intended to show off a particular technique. Only then would we get a brief lecture about the technique, giving a name and some justification, before being thrown into yet more discussion about it.

In the terminology we were taught, this made the course dialogical rather than authoritative, and inductive rather than deductive. We learned by reflecting on examples rather than deriving general truths, and discussed various perspectives rather than learning one canonical one.

Did we learn anything from that, besides the terms?

One criticism of both dialogical and inductive approaches to teaching is that students can only get out what they put in. If you learn by discussing and solving examples by yourself, you’d expect to learn only things you already know.

We weren’t given the evidence to refute this criticism in general, and honestly I wouldn’t have trusted it if we had (see above: replication crisis). But in this context, that criticism does miss something. Yes, pretty much every method I learned in this course was something I could come up with on my own in the right situation. But I wouldn’t be thinking of the methods systematically. I’d notice a good idea for one lesson or another, but miss others because I wouldn’t be thinking of the ideas as part of a larger pattern. With the patterns in mind, with terms to “hook” the methods on to, I can be more aware of when opportunities come up. I don’t have to think of dialogical as better than authoritative, or inductive as better than deductive, in general. All I have to do is keep an eye out for when a dialogical or inductive approach might prove useful. And that’s something I feel genuinely better at after taking this course.

Beyond that core, we got some extremely topical tips about online teaching and way too many readings (I think the teachers overestimated how easy it is to read papers from a different discipline…and a “theory paper” in education is about as far from a “theory paper” in physics as you can get). At times the dialogue aspect felt a little too open: we heard “do what works for you” often enough that it felt like the teachers were apologizing for their own field. But overall, the course worked, and I expect to teach better going forward because of it.

At a Pedagogy Course

I’m at a pedagogy course this week. It’s the first time I’ve taken a course like this, and it has been really interesting learning about different approaches to teaching (which, as I keep being reminded, is very different from outreach!). It’s also really time-consuming: seven hours of class a day, with readings and lecture prep in the evening. As such, I haven’t had time to do a full blog post. Next week I’ll likely post some reflections about the course. Until then, here’s a slide from the practice lecture I gave:

Building One’s Technology

There are theoretical physicists who can do everything they do with a pencil and a piece of paper. I’m not one of them. The calculations I do are long, complicated, or tedious enough that they’re often best done with a computer. For a calculation like that, I can’t just use existing software “out of the box”: I need to program special-purpose tools to do the kind of calculation I need. This means each project has its own kind of learning curve. If I already have the right code, or almost the right code, things go very smoothly: with a few tweaks I can do a lot of interesting calculations. If I don’t have the right code yet, things go much more slowly: I have to build up my technology, figuring out what I need piece by piece until I’m back up to my usual speed.

I don’t always need to use computers to do my calculations. Sometimes my work hinges on something more conceptual: understanding a mathematical proof, or the arguments from another physicist’s paper. While this seems different on the surface, I’ve found that it has the same kinds of learning curves. If I know the right papers and mathematical methods, I can go pretty quickly. If I don’t, I have to “build up my technology”, reading and practicing, a slow build-up to my goal.

The times when I have to “build my technology” are always a bit frustrating. I don’t work as fast as I’d like, and I get tripped up by dumb mistakes. I keep having to go back, almost to the beginning, realizing that some aspect of how I set things up needs to be changed to make the rest work. As I go, though, the work gets more and more satisfying. I find pieces (of the code, of my understanding) that become solid, that I can rely on. I build my technology, and I can do more and more, and feel better about myself in the bargain. Eventually, I get back up to my full abilities, my technology set up, and a wide variety of calculations become possible.

Theoretical Uncertainty and Uncertain Theory

Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number which describes how muons interact with magnetic fields. It might seem like a small technical detail, but physicists have been very excited about this measurement because it’s a detail the Standard Model seems to get wrong, making it a potential hint of new, undiscovered particles. Quanta magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes: which calculation was wrong? And why?

What does it mean for a prediction to match an experimental result? The simple, wrong, answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason why this is wrong is that in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms, measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1cm or as small as 2.9cm. You just don’t know.

This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3cm, but of 3cm plus or minus 1mm. If the prediction was 2.9cm, then you’re fine: it falls within your measurement uncertainty.

Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3cm plus or minus 1mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
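
As a very simplified sketch of what propagation of errors looks like in practice (my own illustration, reusing the ruler numbers from above and assuming independent, roughly Gaussian uncertainties):

```python
import math

# One measured length: 3.0 cm, plus or minus 0.1 cm (one ruler tick).
length, length_err = 3.0, 0.1

# A prediction built from that single measurement, e.g. four times the length:
# a constant factor simply scales the uncertainty along with the value.
quadruple = 4 * length
quadruple_err = 4 * length_err
print(f"4L = {quadruple:.1f} +/- {quadruple_err:.1f} cm")   # 12.0 +/- 0.4 cm

# A prediction built from two *independent* measurements, e.g. two rods laid
# end to end: independent uncertainties add in quadrature, not directly.
total = length + length
total_err = math.sqrt(length_err**2 + length_err**2)
print(f"L1 + L2 = {total:.1f} +/- {total_err:.2f} cm")      # 6.0 +/- 0.14 cm
```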

There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one time in a few million. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.
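
For a sense of what those standards mean numerically, here is a small sketch assuming purely Gaussian statistics (real analyses are more careful than this): it converts a number of standard deviations into the chance of the old theory fluctuating at least that far by accident. The usual particle-physics discovery standard is five standard deviations.

```python
import math

def one_sided_p(sigma: float) -> float:
    """Chance of a Gaussian fluctuation at least `sigma` standard deviations high."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for sigma in (1, 2, 3, 4, 5):
    p = one_sided_p(sigma)
    print(f"{sigma} sigma: p = {p:.1e}  (about 1 in {1 / p:,.0f})")
# 5 sigma comes out to roughly 3e-7: a few-in-ten-million chance under the old theory.
```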

The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.
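
As a back-of-the-envelope sketch of how one judges whether two results “just disagree due to chance” (with made-up numbers, not the real muon values): combine the quoted uncertainties and ask how many standard deviations apart the central values sit.

```python
import math

def tension(value_a: float, err_a: float, value_b: float, err_b: float) -> float:
    """Number of combined standard deviations separating two independent results."""
    return abs(value_a - value_b) / math.sqrt(err_a**2 + err_b**2)

# Hypothetical numbers in arbitrary units (purely illustrative).
data_driven      = (116.0, 0.4)   # older, experiment-based prediction
first_principles = (118.0, 0.5)   # newer, first-principles prediction

z = tension(*data_driven, *first_principles)
print(f"The two predictions differ by {z:.1f} combined standard deviations")   # ~3.1
# Much beyond 2-3 sigma, "just chance" becomes an increasingly poor explanation.
```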

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.