
Zoomplitudes Retrospective

During Zoomplitudes (my field’s big yearly conference, this year on Zoom) I didn’t have time to write a long blog post. I said a bit about the format, but didn’t get a chance to talk about the science. I figured this week I’d go back and give a few more of my impressions. As always, conference posts are a bit more technical than my usual posts, so regulars be warned!

The conference opened with a talk by Gavin Salam, there as an ambassador for LHC physics. Salam pointed out that, while a decent proportion of speakers at Amplitudes mention the LHC in their papers, that fraction has fallen over the years. (Another speaker jokingly wondered which of those mentions were just in the paper’s introduction.) He argued that there is still useful work for us, LHC measurements that will require serious amplitudes calculations to understand. He also brought up what seems like the most credible argument for a new, higher-energy collider: that there are important properties of the Higgs, in particular its interactions, that we still have not observed.

The next few talks hopefully warmed Salam’s heart, as they featured calculations for real-world particle physics. Nathaniel Craig and Yael Shadmi in particular covered the link between amplitudes and Standard Model Effective Field Theory (SMEFT), a method to systematically characterize corrections beyond the Standard Model. Shadmi’s talk struck me because the kind of work she described (building the SMEFT “amplitudes-style”, directly from observable information rather than more complicated proxies) is something I’d seen people speculate about for a while, but which hadn’t been done until quite recently. Now, several groups have managed it, and look like they’ve gotten essentially “all the way there”, rather than just partial results that only manage to replicate part of the SMEFT. Overall it’s much faster progress than I would have expected.

After Shadmi’s talk was a brace of talks on N=4 super Yang-Mills, featuring cosmic Galois theory and an impressively groan-worthy “origin story” joke. The final talk of the day, by Hofie Hannesdottir, covered work with some of my colleagues at the NBI. Due to coronavirus I hadn’t gotten to hear about this in person, so it was good to hear a talk on it, a blend of old methods and new priorities to better understand some old discoveries.

The next day focused on a topic that has grown in importance in our community, calculations for gravitational wave telescopes like LIGO. Several speakers focused on new methods for collisions of spinning objects, where a few different approaches are making good progress (Radu Roiban’s proposal to use higher-spin field theory was particularly interesting) but things still aren’t quite “production-ready”. The older, post-Newtonian method is still very much production-ready, as evidenced by Michele Levi’s talk that covered, among other topics, our recent collaboration. Julio Parra-Martinez discussed some interesting behavior shared by both supersymmetric and non-supersymmetric gravity theories. Thibault Damour had previously expressed doubts about use of amplitudes methods to answer this kind of question, and part of Parra-Martinez’s aim was to confirm the calculation with methods Damour would consider more reliable. Damour (who was actually in the audience, which I suspect would not have happened at an in-person conference) had already recanted some related doubts, but it’s not clear to me whether that extended to the results Parra-Martinez discussed (or whether Damour has stated the problem with his old analysis).

A couple of talks that day didn’t relate to gravitational waves, though this might have been an accident, since both speakers also work on that topic. Zvi Bern’s talk linked to the previous day’s SMEFT discussion, with a calculation using amplitudes methods of direct relevance to SMEFT researchers. Clifford Cheung’s talk proposed a rather strange/fun idea: conformal symmetry in negative dimensions!

Wednesday was “amplituhedron day”, with a variety of talks on positive geometries and cluster algebras. Featured in several talks was “tropicalization”, a mathematical procedure that can simplify complicated geometries while still preserving essential features. Here, it was used to trim down infinite “alphabets” conjectured for some calculations into a finite set, and in doing so understand the origin of “square root letters”. The day ended with a talk by Nima Arkani-Hamed, who, despite offering to bet that he could finish his talk within the half-hour slot, took almost twice that. The organizers seemed to have planned for this, since there was one fewer talk that day, and as such the day ended at roughly the usual time regardless.

We also took probably the most unique conference photo I will ever appear in.

For lack of a better name, I’ll call Thursday’s theme “celestial”. The day included talks by cosmologists (including approaches using amplitudes-ish methods from Daniel Baumann and Charlotte Sleight, and a curiously un-amplitudes-related talk from Daniel Green), talks on “celestial amplitudes” (amplitudes viewed from the surface of an infinitely distant sphere), and various talks with some link to string theory. I’m including in that last category intersection theory, which has really become its own thing. This included a talk by Simon Caron-Huot about using intersection theory more directly in understanding Feynman integrals, and a talk by Sebastian Mizera using intersection theory to investigate how gravity is Yang-Mills squared. Both gave me a much better idea of the speakers’ goals. In Mizera’s case he’s aiming for something very ambitious. He wants to use intersection theory to figure out when and how one can “double-copy” theories, and might figure out why the procedure “got stuck” at five loops. The day ended with a talk by Pedro Vieira, who gave an extremely lucid and well-presented “blackboard-style” talk on bootstrapping amplitudes.

Friday was a grab-bag of topics. Samuel Abreu discussed an interesting calculation using the numerical unitarity method. It was notable in part because renormalization played a bigger role than it does in most amplitudes work, and in part because they now have a cool logo for their group’s software, Caravel. Claude Duhr and Ruth Britto gave a two-part talk on their work on a Feynman integral coaction. I’d had doubts about the diagrammatic coaction they had worked on in the past because it felt a bit ad-hoc. Now, they’re using intersection theory, and have a clean story that seems to tie everything together. Andrew McLeod talked about our work on a Feynman diagram Calabi-Yau “bestiary”, while Cristian Vergu presented a more rigorous understanding of our “traintrack” integrals.

There are two key elements of a conference that are tricky to do on Zoom. You can’t do a conference dinner, so you can’t do the traditional joke-filled conference dinner speech. The end of the conference is also tricky: traditionally, this is when everyone applauds the organizers and the secretaries are given flowers. As chair for the last session, Lance Dixon stepped up to fill both gaps, with a closing speech that was both a touching tribute to the hard work of organizing the conference and a hilarious pile of in-jokes, including a participation award to Arkani-Hamed for his (unprecedented, as far as I’m aware) perfect attendance.

The Sum of Our Efforts

I got a new paper out last week, with Andrew McLeod, Henrik Munch, and Georgios Papathanasiou.

A while back, some collaborators and I found an interesting set of Feynman diagrams that we called “Omega”. These Omega diagrams were fun because they let us avoid one of the biggest limitations of particle physics: that we usually have to compute approximations, diagram by diagram, rather than finding an exact answer. For these Omegas, we figured out how to add up the whole infinite set of diagrams, with no approximation.

One implication of this was that, in principle, we now knew the answer for each individual Omega diagram, far past what had been computed before. However, writing down these answers was easier said than done. After some wrangling, we got the answer for each diagram in terms of an infinite sum. But despite tinkering with it for a while, even our resident infinite sum expert Georgios Papathanasiou couldn’t quite sum them up.

Naturally, this made me think the sums would make a great Master’s project.

When Henrik Munch showed up looking for a project, Andrew McLeod and I gave him several options, but he settled on the infinite sums. Impressively, he ended up solving the problem in two different ways!

First, he found an old paper none of us had seen before, that gave a general method for solving that kind of infinite sum. When he realized that method was really annoying to program, he took the principle behind it, called telescoping, and came up with his own, simpler method, for our particular case.

Picture an old-timey folding telescope. It might be long when fully extended, but when you fold it up each piece fits inside the previous one, resulting in a much smaller object. Telescoping a sum has the same spirit. If each pair of terms in a sum “fit together” (if their difference is simple), you can rearrange them so that most of the difficulty “cancels out” and you’re left with a much simpler sum.
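To give the flavor with a textbook example (much simpler than the sums we actually faced), consider

$$\sum_{n=1}^{N} \frac{1}{n(n+1)} = \sum_{n=1}^{N}\left(\frac{1}{n} - \frac{1}{n+1}\right) = 1 - \frac{1}{N+1},$$

where every intermediate term cancels against its neighbor and only the two ends survive. The sums from our paper are far more intricate than this, but the cancellation works in the same spirit.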

Henrik’s telescoping idea worked even better than expected. We found that we could do, not just the Omega sums, but other sums in particle physics as well. Infinite sums are a very well-studied field, so it was interesting to find something genuinely new.

The rest of us worked to generalize the result, to check the examples and to put it in context. But the core of the work was Henrik’s. I’m really proud of what he accomplished. If you’re looking for a PhD student, he’s on the market!

Zoomplitudes 2020

This week, I’m at Zoomplitudes!

My field’s big yearly conference, Amplitudes, was supposed to happen in Michigan this year, but with the coronavirus pandemic it was quickly clear that would be impossible. Luckily, Anastasia Volovich stepped in to Zoomganize the conference from Brown.

Obligatory photo of the conference venue

The conference is still going, so I’ll say more about the scientific content later. (Except to say there have been a lot of interesting talks!) Here, I’ll just write a bit about the novel experience of going to a conference on Zoom.

Time zones are always tricky in an online conference like this. Our field is spread widely around the world, but not evenly: there are a few areas with quite a lot of amplitudes research. As a result, Zoomganizing from the US east coast seems like it was genuinely the best compromise. It means the talks start a bit early for the west coast US (6am their time), but still end not too late for the Europeans (10:30pm CET). The timing is awkward for our colleagues in China and Taiwan, but they can still join in the morning session (their evening). Overall, I don’t think it was possible to do better there.

Usually, Amplitudes is accompanied by a one-week school for Master’s and PhD students. That wasn’t feasible this year, but to fill the gap Nima Arkani-Hamed gave a livestreamed lecture the Friday before, which apparently clocked in at thirteen hours!

One aspect of the conference that really impressed me was the Slack space. The organizers wanted to replicate the “halls” part of the conference, with small groups chatting around blackboards between the talks. They set up a space on the platform Slack, and let attendees send private messages and make their own channels for specific topics. Soon the space was filled with lively discussion, including a #coffeebreak channel with pictures of everyone’s morning coffee. I think the organizers did a really good job of achieving the kind of “serendipity” I talked about in this post, where accidental meetings spark new ideas. More than that, this is the kind of thing I’d appreciate even in face-to-face conferences. The ability to message anyone at the conference from a shared platform, to have discussions that anyone can stumble on and read later, to post papers and links, all of this seems genuinely quite useful. As one of the organizers for Amplitudes 2021, I may soon get a chance to try this out.

Zoom itself worked reasonably well. A few people had trouble connecting or sharing screens, but overall things worked reliably, and the Zoom chat window is arguably better than people whispering to each other in the back of an in-person conference. One feature of the platform that confused people a bit is that co-hosts can’t raise their hands to ask questions: since speakers had to be made co-hosts to share their screens they had a harder time asking questions during other speakers’ talks.

A part I was more frustrated by was the scheduling. Fitting everyone who wanted to speak between 6am west coast and 10:30pm Europe must have been challenging, and the result was a tightly plotted conference, with three breaks each no more than 45 minutes. That’s already a bit tight, but it ended up much tighter because most talks went long. The conference’s 30 minute slots regularly took 40 minutes, between speakers running over and questions going late. As a result, the conference’s “lunch break” (roughly dinner break for the Europeans) was often only 15 minutes. I appreciate the desire for lively discussion, especially since the conference is recorded and the question sessions can be a resource for others. But I worry that, as a pitfall of remote conferences, the inconveniences people suffer to attend can become largely invisible. Yes, we can always skip a talk, and watch the recording later. Yes, we can prepare food beforehand. Still, I don’t think a 15 minute lunch break was what the organizers had in mind, and if our community does more remote conferences we should brainstorm ways to avoid this problem next time.

I’m curious how other fields are doing remote conferences right now. Even after the pandemic, I suspect some fields will experiment with this kind of thing. It’s worth sharing and paying attention to what works and what doesn’t.

The Point of a Model

I’ve been reading more lately, partially for the obvious reasons. Mostly, I’ve been catching up on books everyone else already read.

One such book is Daniel Kahneman’s “Thinking, Fast and Slow”. With all the talk lately about cognitive biases, Kahneman’s account of his research on decision-making was quite familiar ground. The book turned out to be more interesting as a window into the culture of psychology research. While I had a working picture from psychologist friends in grad school, “Thinking, Fast and Slow” covered the other side, the perspective of a successful professor promoting his field.

Most of this wasn’t too surprising, but one passage struck me:

Several economists and psychologists have proposed models of decision making that are based on the emotions of regret and disappointment. It is fair to say that these models have had less influence than prospect theory, and the reason is instructive. The emotions of regret and disappointment are real, and decision makers surely anticipate these emotions when making their choices. The problem is that regret theories make few striking predictions that would distinguish them from prospect theory, which has the advantage of being simpler. The complexity of prospect theory was more acceptable in the competition with expected utility theory because it did predict observations that expected utility theory could not explain.

Richer and more realistic assumptions do not suffice to make a theory successful. Scientists use theories as a bag of working tools, and they will not take on the burden of a heavier bag unless the new tools are very useful. Prospect theory was accepted by many scholars not because it is “true” but because the concepts that it added to utility theory, notably the reference point and loss aversion, were worth the trouble; they yielded new predictions that turned out to be true. We were lucky.

Thinking, Fast and Slow, page 288

Kahneman is contrasting three theories of decision making here: the old proposal that people try to maximize their expected utility (roughly, the benefit they expect to get in the future), his more complicated “prospect theory” that takes into account not only what benefits people get but their attachment to what they already have, and other more complicated models based on regret. His theory ended up more popular than both the older theory and the newer regret-based models.

Why did his theory win out? Apparently, not because it was the true one: as he says, people almost certainly do feel regret, and make decisions based on it. No, his theory won because it was more useful. It made new, surprising predictions, while being simpler and easier to use than the regret-based models.

This, a theory defeating another without being “more true”, might bug you. By itself, it doesn’t bug me. That’s because, as a physicist, I’m used to the idea that models should not just be true, but useful. If we want to test our theories against reality, we have a large number of “levels” of description to choose from. We can “zoom in” to quarks and gluons, or “zoom out” to look at atoms, or molecules, or polymers. We have to decide how much detail to include, and we have real pragmatic reasons for doing so: some details are just too small to measure!

It’s not clear Kahneman’s community was doing this, though. That is, it doesn’t seem like he’s saying that regret and disappointment are just “too small to be measured”. Instead, he’s saying that they don’t seem to predict much differently from prospect theory, and prospect theory is simpler to use.

Ok, we do that in physics too. We like working with simpler theories, when we have a good excuse. We’re just careful about it. When we can, we derive our simpler theories from more complicated ones, carving out complexity and estimating how much of a difference it would have made. Do this carefully, and we can treat black holes as if they were subatomic particles. When we can’t, we have what we call “phenomenological” models, models built up from observation and not from an underlying theory. We never take such models as the last word, though: a phenomenological model is always viewed as temporary, something to bridge a gap while we try to derive it from more basic physics.

Kahneman doesn’t seem to view prospect theory as temporary. It doesn’t sound like anyone is trying to derive it from regret theory, or to make regret theory easier to use, or to prove it always agrees with regret theory. Maybe they are, and Kahneman simply doesn’t think much of their efforts. Either way, it doesn’t sound like a major goal of the field.

That’s the part that bothered me. In physics, we can’t always hope to derive things from a more fundamental theory, some theories are as fundamental as we know. Psychology isn’t like that: any behavior people display has to be caused by what’s going on in their heads. What Kahneman seems to be saying here is that regret theory may well be closer to what’s going on in people’s heads, but he doesn’t care: it isn’t as useful.

And at that point, I have to ask: useful for what?

As a psychologist, isn’t your goal ultimately to answer that question? To find out “what’s going on in people’s heads”? Isn’t every model you build, every theory you propose, dedicated to that question?

And if not, what exactly is it “useful” for?

For technology? It’s true, “Thinking Fast and Slow” describes several groups Kahneman advised, most memorably the IDF. Is the advantage of prospect theory, then, its “usefulness”, that it leads to better advice for the IDF?

I don’t think that’s what Kahneman means, though. When he says “useful”, he doesn’t mean “useful for advice”. He means it’s good for giving researchers ideas, good for getting people talking. He means “useful for designing experiments”. He means “useful for writing papers”.

And this is when things start to sound worryingly familiar. Because if I’m accusing Kahneman’s community of giving up on finding the fundamental truth, just doing whatever they can to write more papers…well, that’s not an uncommon accusation in physics as well. If the people who spend their lives describing cognitive biases are really getting distracted like that, what chance does, say, string theory have?

I don’t know how seriously to take any of this. But it’s lurking there, in the back of my mind, that nasty, vicious, essential question: what are all of our models for?

Bonus quote, for the commenters to have fun with:

I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.

Thinking, Fast and Slow, page 264

The Academic Workflow (Or Lack Thereof)

I was chatting with someone in biotech recently, who was frustrated with the current state of coronavirus research. The problem, in her view, was that researchers were approaching the problem in too “academic” a way. Instead of coordinating, trying to narrow down to a few approaches and make sure they get the testing they need, researchers were each focusing on their own approach, answering the questions they thought were interesting or important without fitting their work into a broader plan. She thought that a more top-down, corporate approach would do much better.

I don’t know anything about the current state of coronavirus research, what works and what doesn’t. But the conversation got me thinking about my own field.

Theoretical physics is about as far from “top-down” as you can get. As a graduate student, your “boss” is your advisor, but that “bossiness” can vary from telling you to do specific calculations to just meeting you every so often to discuss ideas. As a postdoc, even that structure evaporates: while you usually have an official “supervisor”, they won’t tell you what to do outside of the most regimented projects. Instead, they suggest, proposing ideas they’d like to collaborate on. As a professor, you don’t have this kind of “supervisor”: while there are people in charge of the department, they won’t tell you what to research. At most, you have informal hierarchies: senior professors influencing junior professors, or the hot-shots influencing the rest.

Even when we get a collaboration going, we don’t tend to have assigned roles. People do what they can, when they can, and if you’re an expert on one part of the work you’ll probably end up doing that part, but that won’t be “the plan” because there almost never is a plan. There’s very rarely a “person in charge”: if there’s a disagreement it gets solved by one person convincing another that they’re right.

This kind of loose structure is freeing, but it can also be frustrating. Even the question of who is on a collaboration can be up in the air, with a sometimes tacit assumption that if you were there for certain conversations you’re there for the paper. It’s possible to push for more structure, but push too hard and people will start ignoring you anyway.

Would we benefit from more structure? That depends on the project. Sometimes, when we have clear goals, a more “corporate” approach can work. Other times, when we’re exploring something genuinely new, any plan is going to fail: we simply don’t know what we’re going to run into, what will matter and what won’t. Maybe there are corporate strategies for that kind of research, ways to manage that workflow. I don’t know them.

The Wolfram Physics Project Makes Me Queasy

Stephen Wolfram is…Stephen Wolfram.

Once a wunderkind student of Feynman, Wolfram is now best known for his software, Mathematica, a tool used by everyone from scientists to lazy college students. Almost all of my work is coded in Mathematica, and while it has some flaws (can someone please speed up the linear solver? Maple’s is so much better!) it still tends to be the best tool for the job.

Wolfram is also known for being a very strange person. There’s his tendency to name, or rename, things after himself. (There’s a type of Mathematica file that used to be called “.m”. Now by default they’re “.wl”, “Wolfram Language” files.) There’s his live-streamed meetings. And then there’s his physics.

In 2002, Wolfram wrote a book, “A New Kind of Science”, arguing that computational systems called cellular automata were going to revolutionize science. A few days ago, he released an update: a sprawling website for “The Wolfram Physics Project”. In it, he claims to have found a potential “theory of everything”, unifying general relativity and quantum physics in a cellular automata-like form.

If that gets your crackpot klaxons blaring, yeah, me too. But Wolfram was once a very promising physicist. And he has collaborators this time, who are currently promising physicists. So I should probably give him a fair reading.

On the other hand, his introduction for a technical audience is 448 pages long. I may have more time now due to COVID-19, but I still have a job, and it isn’t reading that.

So I compromised. I didn’t read his 448-page technical introduction. I read his 90-ish page blog post. The post is written for a non-technical audience, so I know it isn’t 100% accurate. But by seeing how someone chooses to promote their work, I can at least get an idea of what they value.

I started out optimistic, or at least trying to be. Wolfram starts with simple mathematical rules, and sees what kinds of structures they create. That’s not an unheard of strategy in theoretical physics, including in my own field. And the specific structures he’s looking at look weirdly familiar, a bit like a generalization of cluster algebras.

Reading along, though, I got more and more uneasy. That unease peaked when I saw him describe how his structures give rise to mass.

Wolfram had already argued that his structures obey special relativity. (For a critique of this claim, see this twitter thread.) He found a way to define energy and momentum in his system, as “fluxes of causal edges”. He picks out a particular “flux of causal edges”, one that corresponds to “just going forward in time”, and defines it as mass. Then he “derives” E=mc^2, saying,

Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.

In “the standard formalism of physics”, E=mc^2 means “mass is the energy of an object at rest”. It means “mass is the energy of an object just going forward in time”. If the “standard formalism of physics” “just defines” E=mc^2, so does Wolfram.
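To spell out what that standard statement is: for a free particle, energy and momentum satisfy the relativistic dispersion relation

$$E^2 = (pc)^2 + (mc^2)^2,$$

and “at rest” just means setting $p=0$, which leaves $E = mc^2$. Identifying mass with the energy of an object that’s “just going forward in time” is precisely what the usual formalism already does.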

I haven’t read his technical summary. Maybe this isn’t really how his “derivation” works, maybe it’s just how he decided to summarize it. But it’s a pretty misleading summary, one that gives the reader entirely the wrong idea about some rather basic physics. It worries me, because both as a physicist and a blogger, he really should know better. I’m left wondering whether he meant to mislead, or whether instead he’s misleading himself.

That feeling kept recurring as I kept reading. There was nothing else as extreme as that passage, but a lot of pieces that felt like they were making a big deal about the wrong things, and ignoring what a physicist would find the most important questions.

I was tempted to get snarkier in this post, to throw in a reference to Lewis’s trilemma or some variant of the old quip that “what is new is not good; and what is good is not new”. For now, I’ll just say that I probably shouldn’t have read a 90 page pop physics treatise before lunch, and end the post with that.

Thoughts on Doing Science Remotely

In these times, I’m unusually lucky.

I’m a theoretical physicist. I don’t handle goods, or see customers. Other scientists need labs, or telescopes: I just need a computer and a pad of paper. As a postdoc, I don’t even teach. In the past, commenters have asked me why I don’t just work remotely. Why go to conferences, why even go to the office?

With COVID-19, we’re finding out.

First, the good: my colleagues at the Niels Bohr Institute have been hard at work keeping everyone connected. Our seminars have moved online, where we hold weekly Zoom seminars jointly with Iceland, Uppsala and Nordita. We have a “virtual coffee room”, a Zoom room that’s continuously open with “virtual coffee breaks” at 10 and 3:30 to encourage people to show up. We’re planning virtual colloquia, and even a virtual social night with Jackbox games.

Is it working? Partially.

The seminars are the strongest part. Remote seminars let us bring in speakers from all over the world (time zones permitting). They let one seminar serve the needs of several different institutes. Most of the basic things a seminar needs (slides, blackboards, ability to ask questions, ability to clap) are present on online platforms, particularly Zoom. And our seminar organizers had the bright idea to keep the Zoom room open after the talk, which allows the traditional “after seminar conversation with the speaker” for those who want it.

Still, the setup isn’t as good as it could be. If the audience turns off their cameras and mics, the speaker can feel like they’re giving a talk to an empty room. This isn’t just awkward, it makes the talk worse: speakers improve when they can “feel the room” and see what catches their audience’s interest. If the audience keeps their cameras or mics on instead, it takes a lot of bandwidth, and the speaker still can’t really feel the room. I don’t know if there’s a good solution here, but it’s worth working on.

The “virtual coffee room” is weaker. It was quite popular at first, but as time went on fewer and fewer people (myself included) showed up. In contrast, my wife’s friends at Waterloo do a daily cryptic crossword, and that seems to do quite well. What’s the difference? They have real crosswords, we don’t have real coffee.

I kid, but only a little. Coffee rooms and tea breaks work because of a core activity, a physical requirement that brings people together. We value them for their social role, but that role on its own isn’t enough to get us in the door. We need the excuse: the coffee, the tea, the cookies, the crossword. Without that shared structure, people just don’t show up.

Getting this kind of thing right is more important than it might seem. Social activities help us feel better, they help us feel less isolated. But more than that, they help us do science better.

That’s because science works, at least in part, through serendipity.

You might think of scientific collaboration as something we plan, and it can be sometimes. Sometimes we know exactly what we’re looking for: a precise calculation someone else can do, a question someone else can answer. Sometimes, though, we’re helped by chance. We have random conversations, different people in different situations, coffee breaks and conference dinners, and eventually someone brings up an idea we wouldn’t have thought of on our own.

Other times, chance helps by providing an excuse. I have a few questions rattling around in my head that I’d like to ask some of my field’s big-shots, but that don’t feel worth an email. I’ve been waiting to meet them at a conference instead. The advantage of those casual meetings is that they give an excuse for conversation: we have to talk about something, and it might as well be my dumb question. Without that kind of casual contact, it feels a lot harder to broach low-stakes topics.

None of this is impossible to do remotely. But I think we need new technology (social or digital) to make it work well. Serendipity is easy to find in person, but social networks can imitate it. Log in to facebook or tumblr looking for your favorite content, and you face a pile of ongoing conversations. Looking through them, you naturally “run into” whatever your friends are talking about. I could see something similar for academia. Take something like the list of new papers on arXiv, then run a list of ongoing conversations next to it. When we check the arXiv each morning, we could see what our colleagues were talking about, and join in if we see something interesting. It would be a way to stay connected that would keep us together more, giving more incentive and structure beyond simple loneliness, and lead to the kind of accidental meetings that science craves. You could even graft conferences on to that system, talks in the middle with conversation threads on the side.

None of us know how long the pandemic will last, or how long we’ll be asked to work from home. But even afterwards, it’s worth thinking about the kind of infrastructure science needs to work remotely. Some ideas may still be valuable after all this is over.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!
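For the schematically inclined (and with the caveat that this is the generic effective field theory setup, not the specific form used in our paper): each black hole gets treated as a point particle moving along a worldline, with an action roughly of the form

$$S_{\text{eff}} = -m \int d\tau \;+\; c_E \int d\tau\, E_{\mu\nu}E^{\mu\nu} \;+\; c_B \int d\tau\, B_{\mu\nu}B^{\mu\nu} \;+\; \cdots,$$

where the first term is just a massive particle, and the extra terms (built from the curvature the particle feels, with coefficients like $c_E$ and $c_B$) parametrize finite-size effects that are too small to see from far away. Spinning objects come with yet more terms along these lines.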

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

What I Was Not Saying in My Last Post

Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles aren’t: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of fields, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: None of that post was quantum.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality. Things we thought were waves, like light, also behave like particles, and things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: I am not saying that.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
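To make that concrete in the simplest case, a free scalar field (with conventions and normalization factors swept under the rug), the field can be written as a sum over modes:

$$\phi(\vec x, t) \sim \int \frac{d^3k}{(2\pi)^3}\, \frac{1}{\sqrt{2\omega_{\vec k}}} \left( a_{\vec k}\, e^{i(\vec k\cdot\vec x - \omega_{\vec k} t)} + a^\dagger_{\vec k}\, e^{-i(\vec k\cdot\vec x - \omega_{\vec k} t)} \right).$$

A single quantum in one mode, $a^\dagger_{\vec k}|0\rangle$, has a definite wavelength and frequency: that’s the “wave”. Pile many modes together into a localized wavepacket and you get something concentrated near a point: that’s the “particle”. Both are states of the same field.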

The second misunderstanding: This isn’t about on-shell vs. off-shell.

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; normally, quantum calculations involve “off-shell” particles that violate those equations.

To again be clear: I’m not arguing with Nima here.

Nima (and other people in our field) will sometimes talk about on-shell vs. off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.

These Ain’t Democritus’s Particles

Physicists talk a lot about fundamental particles. But what do we mean by fundamental?

The Ancient Greek philosopher Democritus thought the world was composed of fundamental indivisible objects, constantly in motion. He called these objects “atoms”, and believed they could never be created or destroyed, with every other phenomenon explained by different types of interlocking atoms.

The things we call atoms today aren’t really like this, as you probably know. Atoms aren’t indivisible: their electrons can be split from their nuclei, and with more energy their nuclei can be split into protons and neutrons. More energy yet, and protons and neutrons can in turn be split into quarks. Still, at this point you might wonder: could quarks be Democritus’s atoms?

In a word, no. Nonetheless, quarks are, as far as we know, fundamental particles. As it turns out, our “fundamental” is very different from Democritus’s. Our fundamental particles can transform.

Think about beta decay. You might be used to thinking of it in terms of protons and neutrons: an unstable neutron decays, becoming a proton, an electron, and an (electron-anti-)neutrino. You might think that when the neutron decays, it literally “decays”, falling apart into smaller pieces.

But when you look at the quarks, the neutron’s smallest pieces, that isn’t the picture at all. In beta decay, a down quark in the neutron changes, turning into an up quark and an unstable W boson. The W boson then decays into an electron and a neutrino, while the up quark becomes part of the new proton. Even looking at the most fundamental particles we know, Democritus’s picture of unchanging atoms just isn’t true.
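Written out side by side, the two descriptions of the same decay (standard textbook particle physics) are

$$n \;\to\; p + e^- + \bar\nu_e \qquad \text{versus} \qquad d \;\to\; u + W^-, \quad W^- \;\to\; e^- + \bar\nu_e .$$

Nothing on the right-hand side was sitting “inside” the down quark waiting to fall out; the quark simply turns into other particles.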

Could there be some even lower level of reality that works the way Democritus imagined? It’s not impossible. But the key insight of modern particle physics is that there doesn’t need to be.

As far as we know, up quarks and down quarks are both fundamental. Neither is “made of” the other, or “made of” anything else. But they also aren’t little round indestructible balls. They’re manifestations of quantum fields, “ripples” that slosh from one sort to another in complicated ways.

When we ask which particles are fundamental, we’re asking what quantum fields we need to describe reality. We’re asking for the simplest explanation, the simplest mathematical model, that’s consistent with everything we could observe. So “fundamental” doesn’t end up meaning indivisible, or unchanging. It’s fundamental like an axiom: used to derive the rest.