
Making More Nails

They say when all you have is a hammer, everything looks like a nail.

Academics are a bit smarter than that. Confidently predict a world of nails, and you fall to the first paper that shows evidence of a screw. There are limits to how long you can delude yourself when your job is supposed to be all about finding the truth.

You can make your own nails, though.

Suppose there’s something you’re really good at. Maybe, like many of my past colleagues, you can do particle physics calculations faster than anyone else, even when the particles are super-complicated hypothetical gravitons. Maybe you know more than anyone else about how to make a quantum computer, or maybe you just know how to build a “quantum computer”. Maybe you’re an expert in esoteric mathematics, who can re-phrase anything in terms of the arcane language of category theory.

That’s your hammer. Get good enough with it, and anyone with a nail-based problem will come to you to solve it. If nails are trendy, then you’ll impress grant committees and hiring committees, and your students will too.

When nails aren’t trendy, though, you need to try something else. If your job is secure, and you don’t have students with their own insecure jobs banging down your door, then you could spend a while retraining. You could form a reading group, pick up a textbook or two about screwdrivers and wrenches, and learn how to use different tools. Eventually, you might find a screwdriving task you have an advantage with, something you can once again do better than everyone else, and you’ll start getting all those rewards again.

Or, maybe you won’t. You’ll get less funding to hire people, so you’ll do less research, so your work will get less impressive and you’ll get less funding, and so on and so forth.

Instead of risking that, most academics take another path. They take what they’re good at, and invent new problems in the new trendy area to use that expertise.

If everyone is excited about gravitational waves, you turn a black hole calculation into a graviton calculation. If companies are investing in computation in the here-and-now, then you find ways those companies can use insights from your quantum research. If everyone wants to know how AI works, you build a mathematical picture that sort of looks like one part of how AI works, and do category theory to it.

At first, you won’t be competitive. Your hammer isn’t going to work nearly as well as the screwdrivers people have been using forever for these problems, and there will be all sorts of new issues you have to solve just to get your hammer in position in the first place. But that doesn’t matter so much, as long as you’re honest. Academic research is expected to take time, and applications aren’t supposed to be obvious. Grant committees care about what you’re trying to do, as long as you have a reasonably plausible story about how you’ll get there.

(Investors are also not immune to a nice story. Customers are also not immune to a nice story. You can take this farther than you might think.)

So, unlike the re-trainers, you survive. And some of the time, you make it work. Your hammer-based screwdriving ends up morphing into something that, some of the time, actually does something the screwdrivers can’t. Instead of delusionally imagining nails, you’ve added a real ersatz nail to the world, where previously there was just a screw.

Making nails is a better path for you. Is it a better path for the world? I’m not sure.

If all those grants you won, all those jobs you and your students got, all that money from investors or customers drawn in by a good story, if that all went to the people who had the screwdrivers in the first place, could they have done a better job?

Sometimes, no. Sometimes you happen upon some real irreproducible magic. Your hammer is Thor’s hammer, and when hefted by the worthy it can do great things.

Sometimes, though, your hammer was just the hammer that got the funding. Now every screwdriver kit has to have a space for a little hammer, when it could have had another specialized screwdriver that fit better in the box.

In the end, the world is built out of these kinds of ill-fitting toolkits. We all try to survive, both as human beings and by our sub-culture’s concept of the good life. We each have our hammers, and regardless of whether the world is full of screws, we have to convince people they want a hammer anyway. Everything we do is built on a vast rickety pile of consequences, the end-results of billions of people desperate to be wanted. For those of us who love clean solutions and ideal paths, this is maddening and frustrating and terrifying. But it’s life, and in a world where we never know the ideal path, screw-nails and nail-screws are the best way we’ve found to get things done.

An “Open-Source” Grant Proposal

Back in the Fall, I spent most of my time writing a grant proposal.

In Europe, getting a European Research Council (ERC) grant is how you know you’ve made it as a researcher. Covering both science and the humanities, ERC grants give a lump of funding big enough to hire a research group, turning you from a lone expert into a local big-shot. The grants last five years, and are organized by “academic age”, the number of years since your PhD. ERC Starting Grants give 1.5 million euros for those with academic age 2-7. At academic age 7-12, you need to apply for the Consolidator Grant. The competition is fiercer, but if you make it through you get 2 million euros. Finally, Advanced Grants give 2.5 million to more advanced researchers.

I’m old, at least in terms of academic age. I applied to the ERC Starting Grant in 2021, but this last year I was too academically old to qualify, so I applied to the Consolidator Grant instead.

I won’t know if they invite me for an interview until June…but since I’m leaving the field, there wouldn’t be much point in going anyway. So I figured, why not share the grant application with you guys?

That’s what I’m doing in this post. I think there are good ideas in here, a few research directions that fellow amplitudeologists might want to consider. (I’ve removed details on one of them, the second work package, because some friends of mine are already working on it.)

The format could also be helpful. My wife is more than a bit of a LaTeX wiz: she coded up Gantt charts and helped with the format of the headers and the color scheme. If you want an ERC proposal that doesn’t look like the default thing you could do with LaTeX or Word, then take a look.
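(For the curious: one common way to make a Gantt chart in LaTeX is the pgfgantt package. The sketch below is just a minimal illustration of that package, not the code from our actual proposal; for that, see the files linked further down.)

    \documentclass{article}
    \usepackage{pgfgantt}  % provides the ganttchart environment
    \begin{document}
    % Illustrative five-year project timeline, one column per half-year
    \begin{ganttchart}[hgrid, vgrid]{1}{10}
      \gantttitle{Year 1}{2}
      \gantttitle{Year 2}{2}
      \gantttitle{Year 3}{2}
      \gantttitle{Year 4}{2}
      \gantttitle{Year 5}{2} \\
      \ganttbar{Work package 1}{1}{4} \\
      \ganttbar{Work package 2}{3}{7} \\
      \ganttmilestone{Midterm review}{5} \\
      \ganttbar{Work package 3}{6}{10}
    \end{ganttchart}
    \end{document}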

Finally, I suspect some laymen in the audience are just curious what a scientific grant proposal looks like. While I’ve cut a few things (and a few of these were shorter than they ought to have been to begin with), this might satisfy your curiosity.

You can find the proposal in a zip file here: https://drive.proton.me/urls/WTVN0F16HG#mYaz0edaOGha . I’ve included pdfs of the two required parts, B1 and B2, as well as the LaTeX files used to generate them.

For those of you still in the game, good luck with your ERCs!


Update from November 2024:

I wanted to include a bit more information for those who want to build off some of the ideas in the proposal.

I did end up getting offered an interview for this grant, and since the ERC doesn’t give any way to withdraw in their system I ended up going through with the interview. I didn’t get the grant, but I think it would have had a solid chance if I had had the time to focus and prepare for it (rather than mostly being busy applying for industry jobs). If anyone wants to write their own proposal building on some of the research directions I’m proposing here, I’m happy to chat and give you advice. In particular, a few things to keep in mind:

  • You need a good list of pheno applications. In particular, unless you focus your proposal heavily on the gravitational wave side, you need a good list of particle physics applications, because the particle physicists generally won’t think that the gravitational wave side “counts”. I was asked in the interview to name three particle physics measurements this would help with; I had mentioned two in the proposal, and could only come up with one off the top of my head. You can do a lot better with preparation.
  • Relatedly, you need some idea of what the pipeline looks like, what these calculations eventually get used for, including the looming question of “why do this analytically rather than numerically?”
  • If you’re including the N=4 super Yang-Mills side of the story, you’ll have to overcome some skepticism. Some of that skepticism can be brushed aside by emphasizing the theory’s track record (canonical differential equations probably wouldn’t exist without research in N=4 symbols), but a meaningful source of skepticism is just whether you can work with dim reg. This is an issue currently facing a few other approaches, so it’s good to have a good answer for it!
  • If you’re relying a lot on the expertise of the people you plan on hiring (I definitely was, especially in planning to hire a mathematician), then ideally you should have some idea of who you could hire. I wasn’t in a position to do this for obvious reasons, but anyone who has a stable position should consider talking to potential hires in advance, so you have a list of names.
  • Have justifications in mind for your budget. Yes, you’ll be encouraged by your home institution to just increase every budget line as far as you can get. But you will be asked about anything unusually high, so you really need a picture for what you will spend it on. Along these lines, if your institution imposes any unusual expenses (since my budget was written for the CEA, it had to pay for Mathematica and Maple licenses since the CEA is technically a private business and doesn’t have access to site licenses at academic rates) then you need to be able to justify why it’s still a good host despite that.

What’s in a Subfield?

A while back, someone asked me what my subfield, amplitudeology, is really about. I wrote an answer to that here, a short-term and a long-term perspective that line up with the stories we often tell about the field. I talked about how we try to figure out ways to calculate probabilities faster, first for understanding the output of particle colliders like the LHC, then more recently for gravitational wave telescopes. I talked about how the philosophy we use for that carries us farther, how focusing on the minimal information we need to make a prediction gives us hope that we can generalize and even propose totally new theories.

The world doesn’t follow stories, though, not quite so neatly. Try to define something as simple as the word “game” and you run into trouble. Some games have a winner and a loser, some games everyone is on one team, and some games don’t have winners or losers at all. Games can involve physical exercise, computers, boards and dice, or just people telling stories. They can be played for fun, or for money, silly or deadly serious. Most have rules, but some don’t even have that. Instead, games are linked by history: a series of resemblances, people saying that “this” is a game because it’s kind of like “that”.

A subfield isn’t just a word, it’s a group of people. So subfields aren’t defined just by resemblance. Instead, they’re defined by practicality.

To ask what amplitudeology is really about, think about why you might want to call yourself an amplitudeologist. It could be a question of goals, certainly: you might care a lot about making better predictions for the LHC, or you could have some other grand story in mind about how amplitudes will save the world. Instead, though, it could be a matter of training: you learned certain methods, certain mathematics, a certain perspective, and now you apply it to your research, even if it goes further afield from what was considered “amplitudeology” before. It could even be a matter of community, joining with others who you think do cool stuff, even if you don’t share exactly the same goals or the same methods.

Calling yourself an amplitudeologist means you go to their conferences and listen to their talks, means you look to them to collaborate and pay attention to their papers. Those kinds of things define a subfield: not some grand mission statement, but practical questions of interest, what people work on and know and where they’re going with that. Instead of one story, like every other word, amplitudeology has a practical meaning that shifts and changes with time. That’s the way subfields should be: useful to the people who practice them.

What Referees Are For

This week, we had a colloquium talk by the managing editor of the Open Journal of Astrophysics.

The Open Journal of Astrophysics is an example of an arXiv overlay journal. In the old days, journals shouldered the difficult task of compiling scientists’ work into a readable format and sending it to university libraries all over the world, so people could stay up to date with the work of distant colleagues. They used to charge libraries for the journals; now some instead charge authors per paper they want to publish.

Now, most of that is unnecessary due to online resources, in my field the arXiv. We prepare our papers using free tools like LaTeX, then upload them to arXiv.org, a website that makes the papers freely accessible for everybody. I don’t think I’ve ever read a paper in a physical journal in my field, and I only check journal websites if I think there’s a mistake in the arXiv version. The rest of the time, I just use the arXiv.

Still, journals do one thing the arXiv doesn’t do, and that’s refereeing. Each paper a journal receives is sent out to a few expert referees. The referees read the paper, and either reject it, accept it as-is, or demand changes before they can accept it. The journal then publishes accepted papers only.

The goal of arXiv overlay journals is to make this feature of journals also unnecessary. To do this, they notice that if every paper is already on arXiv, they don’t need to host papers or print them or typeset them. They just need to find suitable referees, and announce which papers passed.

The Open Journal of Astrophysics is a relatively small arXiv overlay journal. They operate quite cheaply, in part because the people running it can handle most of it as a minor distraction from their day job. SciPost is much bigger, and has to spend more per paper to operate. Still, it spends a lot less than journals charge authors.

We had a spirited discussion after the talk, and someone brought up an interesting point: why do we need to announce which papers passed? Can’t we just publish everything?

What, in the end, are the referees actually for? Why do we need them?

One function of referees is to check for mistakes. This is most important in mathematics, where referees might spend years making sure every step in a proof works as intended. Other fields vary, from theoretical physics (where we can check some things sometimes, but often have to make do with spotting poorly explained parts of a calculation), to fields that do experiments in the real world (where referees can look for warning signs and shady statistics, but won’t actually reproduce the experiment). A mistake found by a referee can be a boon to not just the wider scientific community, but to the author as well. Most scientists would prefer their papers to be correct, so we’re often happy to hear about a genuine mistake.

If this were all referees were for, though, then you wouldn’t actually need to reject any papers. As a colleague of mine suggested, you would just need the referees to publish their reports. Then the papers could be published along with comments from the referees, and possibly also responses from the author. Readers could see any mistakes the referees found, and judge for themselves what they show about the result.

Referees already publish their reports in SciPost much of the time, though not currently in the Open Journal of Astrophysics. Both journals still reject some papers, though. In part, that’s because they serve another function: referees are supposed to tell us which papers are “good”.

Some journals are more prestigious and fancy than others. Nature and Science are the most famous, though people in my field almost never bother to publish in either. Still, we have a hierarchy in mind, with Physical Review Letters on the high end and JHEP on the lower one. Publishing in a fancier and more prestigious journal is supposed to say something about you as a scientist, to say that your work is fancier and more prestigious. If you can’t publish in any journal at all, then your work wasn’t interesting enough to merit getting credit for it, and maybe you should have worked harder.

What does that credit buy you? Ostensibly, everything. Jobs are more likely to hire you if you’ve published in more prestigious places, and grant agencies will be more likely to give you money.

In practice, though, this depends a lot on who’s making the decisions. Some people will weigh these kinds of things highly, especially if they aren’t familiar with a candidate’s work. Others will be able to rely on other things, from numbers of papers and citations to informal assessments of a scientist’s impact. I genuinely don’t know whether the journals I published in made any impact at all when I was hired, and I’m a bit afraid to ask. I haven’t yet sat on the kind of committee that makes these decisions, so I don’t know what things look like from the other side either.

But I do know that, on a certain level, journals and publications can’t matter quite as much as we think. As I mentioned, my field doesn’t use Nature or Science, while others do. A grant agency or hiring committee comparing two scientists would have to take that into account, just as they have to take into account the thousands of authors on every single paper by the ATLAS and CMS experiments. If a field started publishing every paper regardless of quality, they’d have to adapt there too, and find a new way to judge people compatible with that.

Can we just publish everything, papers and referee letters and responses and letters and reviews? Maybe. I think there are fields where this could really work well, and fields where it would collapse into the invective of a YouTube comments section. I’m not sure where my own field sits. Theoretical particle physics is relatively small and close-knit, but it’s also cool and popular, with many strong and dumb opinions floating around. I’d like to believe we could handle it, that we could prune back the professional cruft and turn our field into a real conversation between scholars. But I don’t know.

A Significant Calculation

Particle physicists have a weird relationship to journals. We publish all our results for free on a website called the arXiv, and when we need to read a paper that’s the first place we look. But we still submit our work to journals, because we need some way to vouch that we’re doing good work. Explicit numbers (h-index, impact factor) are falling out of favor, but we still need to demonstrate that we get published in good journals, that we do enough work, and that work has an impact on others. We need it to get jobs, to get grants to fund research at those jobs, and to get future jobs for the students and postdocs we hire with those grants. Our employers need it to justify their own funding, to summarize their progress so governments and administrators can decide who gets what.

This can create weird tensions. When people love a topic, they want to talk about it with each other. They want to say all sorts of things, big and small, to contribute new ideas and correct others and move things forward. But as professional physicists, we also have to publish papers. We can publish some “notes”, little statements on the arXiv that we don’t plan to make into a paper, but we don’t really get “credit” for those. So in practice, we try to force anything we want to say into a paper-sized chunk.

That wouldn’t be a problem if paper-sized chunks were flexible, and you can see when journals historically tried to make them that way. Some journals publish “letters”, short pieces a few pages long, to contrast with their usual papers that can run from twenty to a few hundred pages. These “letters” tend to be viewed as prestigious, though, so they end up being judged on roughly the same standards as the normal papers, if not more strictly.

What standards are those? For each journal, you can find an official list. The Journal of High-Energy Physics, for example, instructs reviewers to look for “high scientific quality, originality and relevance”. That rules out papers that just reproduce old results, but otherwise is frustratingly vague. What constitutes high scientific quality? Relevant to whom?

In practice, reviewers use a much fuzzier criterion: is this “paper-like”? Does this look like other things that get published, or not?

Each field will assess that differently. It’s a criterion of familiarity, of whether a paper looks like what people in the field generally publish. In my field, one rule of thumb is that a paper must contain a significant calculation.

A “significant calculation” is still quite fuzzy, but the idea is to make sure that a paper requires some amount of actual work. Someone has to do something challenging, and the work shouldn’t be half-done: as much as feasible, they should finish, and calculate something new. Ideally, this should be something that nobody had calculated before, but if the perspective is new enough it can be something old. It should “look hard”, though.

That’s a fine way to judge whether someone is working hard, which is something we sometimes want to judge. But since we’re incentivized to make everything into a paper, this means that every time we want to say something, we want to accompany it with some “significant calculation”, some concrete time-consuming work. This can happen even if we want to say something that’s quite direct and simple, a fact that can be quickly justified but nonetheless has been ignored by the field. If we don’t want it to be “just” an un-credited note, we have to find some way to turn it into a “significant calculation”. We do extra work, sometimes pointless work, in order to make something “paper-sized”.

I like to think about what academia would be like without the need to fill out a career. The model I keep imagining is that of a web forum or a blogging platform. There would be the big projects, the in-depth guides and effortposts. But there would also be shorter contributions, people building off each other, comments on longer pieces and quick alerts pinned to the top of the page. We’d have a shared record of knowledge, where everyone contributes what they want to whatever level of detail they want.

I think math is a bit closer to this ideal. Despite their higher standards for review, checking the logic of every paper to make sure it makes sense to publish, math papers can sometimes be very short, or on apparently trivial things. Physics doesn’t quite work this way, and I suspect part of it is our funding sources. If you’re mostly paid to teach, like many mathematicians, your research is more flexible. If you’re paid to research, like many physicists, then people want to make sure your research is productive, and that tends to cram it into measurable boxes.

In today’s world, I don’t think physics can shift cultures that drastically. Even as we build new structures to rival the journals, the career incentives remain. Physics couldn’t become math unless it shed most of the world’s physicists.

In the long run, though…well, we may one day find ourselves in a world where we don’t have to work all our days to keep each other alive. And if we do, hopefully we’ll change how scientists publish.

IPhT’s 60-Year Anniversary

This year is the 60th anniversary of my new employer, the Institut de Physique Théorique of CEA Paris-Saclay (or IPhT for short). In celebration, they’re holding a short conference, with a variety of festivities. They’ve been rushing to complete a film about the institute, and I hear there’s even a vintage arcade game decorated with Feynman diagrams. For me, it will be a chance to learn a bit more about the history of this place, which I currently know shamefully little about.

(For example, despite having his textbook on my shelf, I don’t know much about what our Auditorium’s namesake Claude Itzykson was known for.)

Since I’m busy with the conference this week, I won’t have time for a long blog post. Next week I’ll be able to say more, and tell you what I learned!

Physics’ Unique Nightmare

Halloween is coming up, so let’s talk about the most prominent monster of the physics canon, the nightmare scenario.

Not to be confused with the D&D Nightmare, which once was a convenient source of infinite consumable items for mid-level characters.

Right now, thousands of physicists search for more information about particle physics beyond our current Standard Model. They search data from the Large Hadron Collider for signs of new particles and unexpected behavior, they try to detect a wide range of possible dark matter particles, and they make very precise measurements to try to detect subtle deviations. And in the back of their minds, almost all of those physicists wonder if they’ll find anything at all.

It’s not that we think the Standard Model is right. We know it has problems, deep mathematical issues that make it give nonsense answers and an apparent big mismatch with what we observe about the motion of matter and light in the universe. (You’ve probably heard this mismatch called dark matter and dark energy.)

But none of those problems guarantee an answer soon. The Standard Model will eventually fail, but it may fail only for very difficult and expensive experiments, not a Large Hadron Collider but some sort of galactic-scale Large Earth Collider. It might be that none of the experiments or searches or theories those thousands of physicists are working on will tell them anything they didn’t already know. That’s the nightmare scenario.

I don’t know another field that has a nightmare scenario quite like this. In most fields, one experiment or another might fail, not just failing to give the expected evidence but failing to teach anything new. But most experiments teach us something new. We don’t have a theory, in almost any field, that has the potential to explain every observation up to the limits of our experiments, but which we still hope to disprove. Only the Standard Model is like that.

And while thousands of physicists are exposed to this nightmare scenario, the majority of physicists aren’t. Physics isn’t just the science of the reductionistic laws of the smallest constituents of matter. It’s also the study of physical systems, from the bubbling chaos of nuclear physics to the formation of planets and galaxies and black holes, to the properties of materials to the movement of bacteria on a petri dish and bees in a hive. It’s also the development of new methods, from better control of individual atoms and quantum states to powerful new tricks for calculation. For some, it can be the discovery, not of reductionistic laws of the smallest scales, but of general laws of the largest scales, of how systems with many different origins can show echoes of the same behavior.

Over time, more and more of those thousands of physicists break away from the nightmare scenario, “waking up” to new questions of these kinds. For some, motivated by puzzles and skill and the beauty of physics, the change is satisfying, a chance to work on ideas that are moving forward, connected with experiment or grounded in evolving mathematics. But if your motivation is really tied to those smallest scales, to that final reductionistic “why”, then such a shift won’t be satisfying, and this is a nightmare you won’t wake up from.

Me, I’m not sure. I’m a tool-builder, and I used to tell myself that tool-builders are always needed. But I find I do care, in the end, what my tools are used for. And as we approach the nightmare scenario, I’m not at all sure I know how to wake up.

Getting Started in Saclay

I started work this week in my new position, as a permanent researcher at the Institute for Theoretical Physics of CEA Paris-Saclay. I’m still settling in, figuring out how to get access to the online system and food at the canteen and healthcare. Things are slowly getting into shape, with a lot of running around involved. Until then, I don’t have a ton of time to write (and am dedicating most of it to writing grants!) But I thought, mirroring a post I made almost a decade ago, that I’d at least give you a view of my new office.

Amplitudes 2023 Retrospective

I’m back from CERN this week, with a bit more time to write, so I thought I’d share some thoughts about last week’s Amplitudes conference.

One thing I got wrong in last week’s post: I’ve now been told only 213 people actually showed up in person, as opposed to the 250-ish estimate I had last week. This may seem like fewer than Amplitudes in Prague had, but it seems likely that a few fewer people showed up there than appeared on the website. Overall, the field is at least holding steady from year to year, and has definitely grown since before the pandemic (2019’s attendance of 175 was already very big).

It was cool having a conference in CERN proper, surrounded by the history of European particle physics. The lecture hall had an abstract particle collision carved into the wood, and the visitor center would in principle have had Standard Model coffee mugs were they not sold out until next May. (There was still enough other particle physics swag, Swiss chocolate, and Swiss chocolate that was also particle physics swag.) I’d planned to stay on-site at the CERN hostel, but I ended up appreciating not doing that: the folks who did seemed to end up a bit cooped up by the end of the conference, even with the conference dinner as a chance to get out.

Past Amplitudes conferences have had associated public lectures. This time we had a not-supposed-to-be-public lecture, a discussion between Nima Arkani-Hamed and Beate Heinemann about the future of particle physics. Nima, prominent as an amplitudeologist, also has a long track record of reasoning about what might lie beyond the Standard Model. Beate Heinemann is an experimentalist, one who has risen through the ranks of a variety of different particle physics experiments, ending up well-positioned to take a broad view of all of them.

It would have been fun if the discussion erupted into an argument, but despite some attempts at provocative questions from the audience, that was not going to happen, as Beate and Nima have been friends for a long time. Instead, they exchanged perspectives: on what’s coming up experimentally, and what theories could explain it. Both argued that it was best to have many different directions, a variety of experiments covering a variety of approaches. (There wasn’t any evangelism for particular experiments, besides a joking sotto voce mention of a muon collider.) Nima in particular advocated that, whether theorist or experimentalist, you have to have some belief that what you’re doing could lead to a huge breakthrough. If you think of yourself as just a “foot soldier”, covering one set of checks among many, then you’ll lose motivation. I think Nima would agree that this optimism is irrational, but necessary, sort of like how one hears (maybe inaccurately) that most new businesses fail, but someone still needs to start businesses.

Michelangelo Mangano’s talk on Thursday covered similar ground, but with different emphasis. He agrees that there are still things out there worth discovering: that our current model of the Higgs, for instance, is in some ways just a guess: a simplest-possible answer that doesn’t explain as much as we’d like. But he also emphasized that Standard Model physics can be “new physics” too. Just because we know the model doesn’t mean we can calculate its consequences, and there are a wealth of results from the LHC that improve our models of protons, nuclei, and the types of physical situations they partake in, without changing the Standard Model.

We saw an impressive example of this in Gregory Korchemsky’s talk on Wednesday. He presented an experimental mystery, an odd behavior in the correlation of energies of jets of particles at the LHC. These jets can include a very large number of particles, enough to make it very hard to understand them from first principles. Instead, Korchemsky tried out our field’s favorite toy model, where such calculations are easier. By modeling the situation in the limit of a very large number of particles, he was able to reproduce the behavior of the experiment. The result was a reminder of what particle physics was like before the Standard Model, and what it might become again: partial models to explain odd observations, a quest to use the tools of physics to understand things we can’t just a priori compute.

On the other hand, amplitudes does a priori computations pretty well too. Fabrizio Caola’s talk opened the conference by reminding us just how much our precise calculations can do. He pointed out that the LHC has only gathered 5% of its planned data, and already it is able to rule out certain types of new physics to fairly high energies (by ruling out indirect effects that would show up in high-precision calculations). One of those precise calculations featured in the next talk, by Giulio Gambuti. (A FORM user, his diagrams were the basis for the header image of my Quanta article last winter.) Tiziano Peraro followed up with a technique meant to speed up these kinds of calculations, a trick to simplify one of the more computationally intensive steps in intersection theory.

The rest of Monday was more mathematical, with talks by Zeno Capatti, Jaroslav Trnka, Chia-Kai Kuo, Anastasia Volovich, Francis Brown, Michael Borinsky, and Anna-Laura Sattelberger. Borinsky’s talk felt the most practical, a refinement of his numerical methods complete with some actual claims about computational efficiency. Francis Brown discussed an impressively powerful result, a set of formulas that manages to unite a variety of invariants of Feynman diagrams under a shared explanation.

Tuesday began with what I might call “visitors”: people from adjacent fields with an interest in amplitudes. Alday described how the duality between string theory in AdS space and super Yang-Mills on the boundary can be used to get quite concrete information about string theory, calculating how the theory’s amplitudes are corrected by the curvature of AdS space using a kind of “bootstrap” method that felt nicely familiar. Tim Cohen talked about a kind of geometric picture of theories that extend the Standard Model, including an interesting discussion of whether it’s really “geometric”. Marko Simonovic explained how the integration techniques we develop in scattering amplitudes can also be relevant in cosmology, especially for the next generation of “sky mappers” like the Euclid telescope. This talk was especially interesting to me since this sort of cosmology has a significant presence at CEA Paris-Saclay. Along those lines an interesting paper, “Cosmology meets cohomology”, showed up during the conference. I haven’t had a chance to read it yet!

Just before lunch, we had David Broadhurst give one of his inimitable talks, complete with number theory, extremely precise numerics, and literary and historical references (apparently, Källén died flying his own plane). He also remedied a gap in our whimsically biological diagram naming conventions, renaming the pedestrian “double-box” as a (in this context, Orwellian) lobster. Karol Kampf described unusual structures in a particular Effective Field Theory, while Henriette Elvang’s talk addressed what would become a meaningful subtheme of the conference, where methods from the mathematical field of optimization help amplitudes researchers constrain the space of possible theories. Giulia Isabella covered another topic on this theme later in the day, though one of her group’s selling points is managing to avoid quite so heavy-duty computations.

The other three talks on Tuesday dealt with amplitudes techniques for gravitational wave calculations, as did the first talk on Wednesday. Several of the calculations only dealt with scattering black holes, instead of colliding ones. While some of the results can be used indirectly to understand the colliding case too, a method to directly calculate behavior of colliding black holes came up again and again as an important missing piece.

The talks on Wednesday had to start late, owing to a rather bizarre power outage (the lights in the room worked fine, but not the projector). Since Wednesday was the free afternoon (home of quickly sold-out CERN tours), this meant there were only three talks: Veneziano’s talk on gravitational scattering, Korchemsky’s talk, and Nima’s talk. Nima famously never finishes on time, and this time attempted to control his timing via the surprising method of presenting, rather than one topic, five “abstracts” on recent work that he had not yet published. Even more surprisingly, this almost worked, and he didn’t run too ridiculously over time, while still managing to hint at a variety of ways that the combinatorial lessons behind the amplituhedron are gradually yielding useful perspectives on more general realistic theories.

Thursday, Andrea Puhm began with a survey of celestial amplitudes, a topic that tries to build the same sort of powerful duality used in AdS/CFT but for flat space instead. They’re gradually tackling the weird, sort-of-theory they find on the boundary of flat space. The two next talks, by Lorenz Eberhardt and Hofie Hannesdottir, shared a collaborator in common, namely Sebastian Mizera. They also shared a common theme, taking a problem most people would have assumed was solved and showing that approaching it carefully reveals extensive structure and new insights.

Cristian Vergu, in turn, delved deep into the literature to build up a novel and unusual integration method. We’ve chatted quite a bit about it at the Niels Bohr Institute, so it was nice to see it get some attention on the big stage. We then had an afternoon of trips beyond polylogarithms, with talks by Anne Spiering, Christoph Nega, and Martijn Hidding, each pushing the boundaries of what we can do with our hardest-to-understand integrals. Einan Gardi and Ruth Britto finished the day, with a deeper understanding of the behavior of high-energy particles and a new more mathematically compatible way of thinking about “cut” diagrams, respectively.

On Friday, João Penedones gave us an update on a technique with some links to the effective field theory-optimization ideas that came up earlier, one that “bootstraps” whole non-perturbative amplitudes. Shota Komatsu talked about an intriguing variant of the “planar” limit, one involving large numbers of particles and a slick re-writing of infinite sums of Feynman diagrams. Grant Remmen and Cliff Cheung gave a two-parter on a bewildering variety of things that are both surprisingly like, and surprisingly unlike, string theory: important progress towards answering the question “is string theory unique?”

Friday afternoon brought the last three talks of the conference. James Drummond had more progress trying to understand the symbol letters of supersymmetric Yang-Mills, while Callum Jones showed how Feynman diagrams can apply to yet another unfamiliar field, the study of vortices and their dynamics. Lance Dixon closed the conference without any Greta Thunberg references, but with a result that explains last year’s mystery of antipodal duality. The explanation involves an even more mysterious property called antipodal self-duality, so we’re not out of work yet!

At Amplitudes 2023 at CERN

I’m at the big yearly conference of my sub-field this week, called Amplitudes. This year, surprisingly for the first time, it’s at the very appropriate location of CERN.

Somewhat overshadowed by the very picturesque Alps

Amplitudes keeps on growing. In 2019, we had 175 participants. We were on Zoom in 2020 and 2021, with many more participants, but that probably shouldn’t count. In Prague last year we had 222. This year, I’ve been told we have even more, something like 250 participants (the list online is bigger, but includes people joining on Zoom). We’ve grown due to new students, but also new collaborations: people from adjacent fields who find the work interesting enough to join along. This year we have mathematicians talking about D-modules, bootstrappers finding new ways to get at amplitudes in string theory, beyond-the-standard-model theorists talking about effective field theories, and cosmologists talking about the large-scale structure of the universe.

The talks have been great, from clear discussions of earlier results to fresh-off-the-presses developments, plenty of work in progress, and even one talk where the speaker’s opinion changed during the coffee break. As we’re at CERN, there’s also a through-line about the future of particle physics, with a chat between Nima Arkani-Hamed and the experimentalist Beate Heinemann on Tuesday and a talk by Michelangelo Mangano about the meaning of “new physics” on Thursday.

I haven’t had a ton of time to write, I keep getting distracted by good discussions! As such, I’ll do my usual thing, and say a bit more about specific talks in next week’s post.