Tag Archives: supersymmetry

Amplitudes 2025 This Week

Summer is conference season for academics, and this week my old sub-field held its big yearly conference, called Amplitudes. This year, it was in Seoul at Seoul National University, the first time the conference has been held in Asia.

(I wasn’t there; I don’t go to these anymore. But I’ve been skimming slides in my free time, to give you folks the updates you crave. Be forewarned that conference posts like these get technical fast; I’ll be back to my usual accessible self next week.)

There isn’t a huge amplitudes community in Korea, but it’s bigger than it was back when I got started in the field. Of the organizers, Kanghoon Lee of the Asia Pacific Center for Theoretical Physics and Sangmin Lee of Seoul National University have what I think of as “core amplitudes interests”, like recursion relations and the double-copy. The other Korean organizers are from adjacent areas, work that overlaps with amplitudes but doesn’t show up at the conference each year. There was also a sizeable group of organizers from Taiwan, where there has been a significant amplitudes presence for some time now. I do wonder if Korea was chosen as a compromise between hosting the conference in Taiwan and hosting it in mainland China, where there is also quite a substantial amplitudes community.

One thing that impresses me every year is how big, and how sophisticated, the gravitational-wave community in amplitudes has grown. Federico Buccioni’s talk began with a plot that illustrates this well (though that wasn’t his goal).

At the conference Amplitudes, dedicated to the topic of scattering amplitudes, there were almost as many talks with the phrase “black hole” in the title as there were with “scattering” or “amplitudes”! This is for a topic that did not even exist in the subfield when I got my PhD eleven years ago.

With that said, gravitational wave astronomy wasn’t quite as dominant at the conference as Buccioni’s bar chart suggests. There were a few talks each day on the topic: I counted seven in total, excluding any short talks on the subject in the gong show. Spinning black holes were a significant focus, central to Jung-Wook Kim’s, Andres Luna’s, and Mao Zeng’s talks (the latter two showing some interesting links between the amplitudes story and classic ideas in classical mechanics) and relevant in several others. The talks by Riccardo Gonzo, Miguel Correia, Ira Rothstein, and Enrico Herrmann showed not just a wide range of approaches, but an increasing depth of research in this area.

Herrmann’s talk in particular dealt with detector event shapes, a framework that lets physicists think more directly about what a specific particle detector or observer can see. He applied the idea not just to gravitational waves but to quantum gravity and collider physics as well. The latter is historically where this idea has been applied the most thoroughly, as highlighted in Hua Xing Zhu’s talk, where he used them to pick out particular phenomena of interest in QCD.

QCD is, of course, always of interest in the amplitudes field. Buccioni’s talk dealt with the theory’s behavior at high energies, with a nice example of the “maximal transcendentality principle”, where some quantities in QCD are identical to quantities in N=4 super Yang-Mills in their “most transcendental” pieces (loosely, those with the highest powers of pi). Andrea Guerrieri’s talk also dealt with high-energy behavior in QCD, trying to address an experimental puzzle where QCD results appeared to violate a fundamental bound that all sensible theories were expected to obey. Using S-matrix bootstrap techniques, they clarify the nature of the bound, finding that QCD still obeys it once it is correctly understood, and conjecture a weird theory that should sit right on the edge of the bound. The S-matrix bootstrap was also used by Alexandre Homrich, who talked about getting the framework to work for multi-particle scattering.

Heribertus Bayu Hartanto is another recent addition to Korea’s amplitudes community. He talked about a concrete calculation, two-loop five-particle scattering including top quarks, a tricky case that includes elliptic curves.

When amplitudes lead to integrals involving elliptic curves, many standard methods fail. Jake Bourjaily’s talk raised a question he has brought up again and again: what does it mean to do an integral for a new type of function? One possible answer is that it depends on what kind of numerics you can do, and since more general numerical methods can be cumbersome one often needs to understand the new type of function in more detail. In light of that, Stephen Jones’ talk was interesting in taking a common problem often cited with generic approaches (that they have trouble with the complex numbers introduced by Minkowski space) and finding a more natural way in a particular generic approach (sector decomposition) to take them into account. Giulio Salvatori talked about a much less conventional numerical method, linked to the latest trend in Nima-ology, surfaceology. One of the big selling points of the surface integral framework promoted by people like Salvatori and Nima Arkani-Hamed is that it’s supposed to give a clear integral to do for each scattering amplitude, one which should be amenable to a numerical treatment recently developed by Michael Borinsky. Salvatori can currently apply the method only to a toy model (up to ten loops!), but he has some ideas for how to generalize it, which will require handling divergences and numerators.

Other approaches to the “problem of integration” included Anna-Laura Sattelberger’s talk, which presented a method to find differential equations for the kinds of integrals that show up in amplitudes using the mathematical software Macaulay2, along with an accompanying package. Matthias Wilhelm talked about the work I did with him, using machine learning to find better methods for solving integrals with integration-by-parts, an area where two other groups have now also published. Pierpaolo Mastrolia talked about integration-by-parts’ up-and-coming contender, intersection theory, a method which appears to be delving into more mathematical tools in an effort to catch up with its competitor.

Sometimes, one is more specifically interested in the singularities of integrals than their numerics more generally. Felix Tellander talked about a geometric method to pin these down which largely went over my head, but he did have a very nice short description of the approach: “Describe the singularities of the integrand. Find a map representing integration. Map the singularities of the integrand onto the singularities of the integral.”

While QCD and gravity are the applications of choice, amplitudes methods germinate in N=4 super Yang-Mills. Ruth Britto’s talk opened the conference with an overview of progress along those lines before going into her own recent work with one-loop integrals and interesting implications of ideas from cluster algebras. Cluster algebras made appearances in several other talks, including Anastasia Volovich’s, which discussed how ideas from that corner, called flag cluster algebras, may give insights into QCD amplitudes, though some symbol letters still seem to be hard to track down. Matteo Parisi covered another idea, cluster promotion maps, which he thinks may help pin down algebraic symbol letters.

The link between cluster algebras and symbol letters is an ongoing mystery where the field is seeing progress. Another symbol letter mystery is antipodal duality, where flipping an amplitude like a palindrome somehow gives another valid amplitude. Lance Dixon has made progress in understanding where this duality comes from, finding a toy model where it can be understood and proved.

Others pushed the boundaries of methods specific to N=4 super Yang-Mills, looking for novel structures. Song He’s talk pushed an older approach by Bourjaily and collaborators up to twelve loops, finding new patterns and connections to other theories and observables. Qinglin Yang bootstrapped Wilson loops with a Lagrangian insertion, adding a side to the polygon used in previous efforts and finding that, much like when you add particles to amplitudes in a bootstrap, the method gets stricter and more powerful. Jaroslav Trnka talked about work he has been doing with “negative geometries”, an odd method descended from the amplituhedron that looks at amplitudes from a totally different perspective, probing a bit of their non-perturbative data. He’s finding more parts of that setup that can be accessed and re-summed, discovering, interestingly, that multiple-zeta-values show up in quantities where we know they ultimately cancel out. Livia Ferro also talked about a descendant of the amplituhedron, this time for cosmology, getting differential equations for cosmological observables in a particular theory from a combinatorial approach.

Outside of everybody’s favorite theories, some speakers talked about more general approaches to understanding the differences between theories. Andreas Helset covered work on the geometry of the space of quantum fields in a theory, applying the method to a general framework for characterizing deviations from the standard model called the SMEFT. Jasper Roosmale Nepveu also talked about a general space of theories, thinking about how positivity (a trait linked to fundamental constraints like causality and unitarity) gets tangled up with loop effects, and the implications this has for renormalization.

Soft theorems, universal behavior of amplitudes when a particle has low energy, continue to be a trendy topic, with Silvia Nagy showing how the story continues to higher orders and Sangmin Choi investigating loop effects. Callum Jones talked about one of the more powerful results from the soft limit, Weinberg’s theorem, which shows the uniqueness of gravity. Weinberg’s proof was set up in Minkowski space, but we may ultimately live in curved, de Sitter space. Jones showed how the ideas Weinberg explored generalize to de Sitter, using some tools from the soft-theorem-inspired field of dS/CFT. Julio Parra-Martinez, meanwhile, tied soft theorems to another trendy topic, higher symmetries, a more general notion of the usual types of symmetries that physicists have explored in the past. Lucia Cordova reported work that was not particularly connected to soft theorems but was connected to these higher symmetries, showing how they interact with crossing symmetry and the S-matrix bootstrap.

Finally, a surprisingly large number of talks linked to Kevin Costello and Natalie Paquette’s work with self-dual gauge theories, where they found exact solutions from a fairly mathy angle. Paquette gave an update on her work on the topic, while Alfredo Guevara talked about applications to black holes, comparing the power of expanding around a self-dual gauge theory to that of working with supersymmetry. Atul Sharma looked at scattering in self-dual backgrounds in work that merges older twistor space ideas with the new approach, while Roland Bittleston talked about calculating around an instanton background.


Also, I had another piece up this week at FirstPrinciples, based on an interview with the (outgoing) president of the Sloan Foundation. I won’t have a “bonus info” post for this one, as most of what I learned went into the piece. But if you don’t know what the Sloan Foundation does, take a look! I hadn’t known they funded Jupyter notebooks and Hidden Figures, or that they introduced Kahneman and Tversky.

Which String Theorists Are You Complaining About?

Do string theorists have an unfair advantage? Do they have an easier time getting hired, for example?

In one of the perennial arguments about this on Twitter, Martin Bauer posted a bar chart of faculty hires in the US by sub-field. The chart was compiled by Erich Poppitz from data in the US particle physics rumor mill, a website where people post information about who gets hired where for the US’s quite small number of permanent theoretical particle physics positions at research universities and national labs. The data covers 1994 to 2017, and shows one year, 1999, when there were more string theorists hired than all other topics put together. The years around then also had many string theorists hired, but the proportion starts falling around the mid 2000’s…around when Lee Smolin wrote a book, The Trouble With Physics, arguing that string theorists had strong-armed their way into academic dominance. After that, the percentage of string theorists falls, oscillating between a tenth and a quarter of total hires.

Judging from that, you get the feeling that string theory’s critics are treating a temporary hiring fad as if it was a permanent fact. The late 1990’s were a time of high-profile developments in string theory that excited a lot of people. Later, other hiring fads dominated, often driven by experiments: I remember when the US decided to prioritize neutrino experiments and neutrino theorists had a much easier time getting hired, and there seem to be similar pushes now with gravitational waves, quantum computing, and AI.

Thinking about the situation in this way, though, ignores what many of the critics have in mind. That’s because the “string” column on that bar chart is not necessarily what people think of when they think of string theory.

If you look at the categories on Poppitz’s bar chart, you’ll notice something odd. “String” is itself a category. Another category, “lattice”, refers to lattice QCD, a method to find the dynamics of quarks numerically. The third category, though, is a combination of three things: “ph/th/cosm”.

“Cosm” here refers to cosmology, another sub-field. “Ph” and “th” though aren’t really sub-fields. Instead, they’re arXiv categories, sections of the website arXiv.org where physicists post papers before they submit them to journals. The “ph” category is used for phenomenology, the type of theoretical physics where people try to propose models of the real world and make testable predictions. The “th” category is for “formal theory”, papers where theoretical physicists study the kinds of theories they use in more generality and develop new calculation methods, with insights that over time filter into “ph” work.

“String”, on the other hand, is not an arXiv category. When string theorists write papers, they’ll put them into “th” or “ph” or another relevant category (for example “gr-qc”, for general relativity and quantum cosmology). This means that when Poppitz distinguishes “ph/th/cosm” from “string”, he’s being subjective, using his own judgement to decide who counts as a string theorist.

So who counts as a string theorist? The simplest thing to do would be to check if their work uses strings. Failing that, they could use other tools of string theory and its close relatives, like Calabi-Yau manifolds, M-branes, and holography.

That might be what Poppitz was doing, but if he was, he was probably missing a lot of the people critics of string theory complain about. He even misses many people who describe themselves as string theorists. In an old post of mine I go through the talks at Strings, string theory’s big yearly conference, giving them finer-grained categories. The majority don’t use anything uniquely stringy.

Instead, I think critics of string theory have two kinds of things in mind.

First, most of the people who made their reputations on string theory are still in academia, and still widely respected. Some of them still work on string theory topics, but many now work on other things. Because they’re still widely respected, their interests have a substantial influence on the field. When one of them starts looking at connections between theories of two-dimensional materials, you get a whole afternoon of talks at Strings about theories of two-dimensional materials. Working on those topics probably makes it a bit easier to get a job, but also, many of the people working on them are students of these highly respected people, who just because of that have an easier time getting a job. If you’re a critic of string theory who thinks the founders of the field led physics astray, then you probably think they’re still leading physics astray even if they aren’t currently working on string theory.

Second, for many other people in physics, string theorists are their colleagues and friends. They’ll make fun of trends that seem overhyped and under-thought, like research on the black hole information paradox or the swampland, or hopes that a slightly tweaked version of supersymmetry will show up soon at the LHC. But they’ll happily use ideas developed in string theory when they prove handy, using supersymmetric theories to test new calculation techniques, string theory’s extra dimensions to inspire and ground new ideas for dark matter, or the math of strings themselves as interesting shortcuts to particle physics calculations. String theory is available as a reference to these people in a way that other quantum gravity proposals aren’t. That’s partly due to familiarity and shared language (I remember a talk at Perimeter where string theorists wanted to learn from practitioners from another area and the discussion got bogged down by how they were using the word “dimension”), but partly due to skepticism of the various alternate approaches. Most people have some idea in their heads of deep problems with various proposals: screwing up relativity, making nonsense out of quantum mechanics, or over-interpreting limited evidence. The most commonly believed criticisms are usually wrong, with objections long known to practitioners of the alternate approaches, and so those people tend to think they’re being treated unfairly. But the wrong criticisms are often simplified versions of correct criticisms, passed down by the few people who dig deeply into these topics, criticisms that the alternative approaches don’t have good answers to.

The end result is that while string theory itself isn’t dominant, a sort of “string friendliness” is. Most of the jobs aren’t going to string theorists in the literal sense. But the academic world string theorists created keeps turning. People still respect string theorists and the research directions they find interesting, and people are still happy to collaborate and discuss with string theorists. For research communities people are more skeptical of, it must feel very isolating, like the world is still being run by their opponents. But this isn’t the kind of hegemony that can be solved by a revolution. Thinking that string theory is a failed research program, and people focused on it should have a harder time getting hired, is one thing. Thinking that everyone who respects at least one former string theorist should have a harder time getting hired is a very different goal. And if what you’re complaining about is “string friendliness”, not actual string theorists, then that’s what you’re asking for.

What RIBs Could Look Like

The journal Nature recently published an opinion piece about a new concept for science funding called Research Impact Bonds (or RIBs).

Normally, when a government funds something, they can’t be sure it will work. They pay in advance, and have to guess whether a program will do what they expect, or whether a project will finish on time. Impact bonds are a way for them to pay afterwards, so they only pay for projects that actually deliver. Instead, the projects are funded by private investors, who buy “impact bonds” that guarantee them a share of government funding if the project is successful. Here’s an example given in the Nature piece:

For instance, say the Swiss government promises to pay up to one million Swiss francs (US$1.1 million) to service providers that achieve a measurable outcome, such as reducing illiteracy in a certain population by 5%, within a specified number of years. A broker finds one or more service providers that think they can achieve this at a cost of, say, 900,000 francs, as well as investors who agree to pay these costs up front — thus taking on the risk of the project — for a potential 10% gain if successful. If the providers achieve their goals, the government pays 990,000 francs: 900,000 francs for the work and a 90,000-franc investment return. If the project does not succeed, the investors lose their money, but the government does not.
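The arithmetic of that quoted example can be sketched in a few lines (a toy illustration; the figures come from the passage above, and the function name is my own):

```python
def impact_bond_payout(cost, return_rate, succeeded):
    """Government payout on a simple impact bond.

    Investors front `cost` to the service providers. If the project
    succeeds, the government repays the cost plus the agreed return;
    if it fails, the government pays nothing and the investors take
    the loss.
    """
    if not succeeded:
        return 0.0
    return cost * (1 + return_rate)

# Figures from the Nature example: 900,000 francs of work, a 10% return.
print(round(impact_bond_payout(900_000, 0.10, succeeded=True)))   # 990000
print(round(impact_bond_payout(900_000, 0.10, succeeded=False)))  # 0
```

Either way the payout stays under the one-million-franc cap the government committed to.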

The author of the piece, Michael Hill, thinks that this could be a new way for governments to fund science. In his model, scientists would apply to the government to propose new RIBs. The projects would have to have specific goals and time-frames: “measure the power of this cancer treatment to this accuracy in five years”, for example. If the government thinks the goal is valuable, they commit to paying some amount of money if the goal is reached. Then investors can decide whether the investment is worthwhile. The projects they expect to work get investor money, and if they do end up working the investors get government money. The government only has to pay if the projects work, but the scientists get paid regardless.

Ok, what’s the catch?

One criticism I’ve seen is that this kind of model could only work for very predictable research, maybe even just for applied research. While the author admits RIBs would only be suitable for certain sorts of projects, I think the range is wider than you might think. The project just has to have a measurable goal by a specified end date. Many particle physics experiments work that way: a dark matter detector, for instance, is trying to either rule out or detect dark matter to a certain level of statistical power within a certain run time. Even “discovery” machines, that we build to try to discover the unexpected, usually have this kind of goal: a bigger version of the LHC, for instance, might try to measure the coupling of Higgs bosons to a certain accuracy.

There are a few bigger issues with this model, though. If you go through the math in Hill’s example, you’ll notice that if the project works, the government ends up paying one million Swiss francs for a service that only cost the provider 900,000 Swiss francs. Under a normal system, the government would only have had to pay 900,000. This gets compensated by the fact that not every project works, so the government only pays for some projects and not others. But investors will be aware of this, and that means the government can’t offer too many unrealistic RIBs: the greater the risk investors are going to take, the more return they’ll expect. On average then, the government would have to pay about as much as they would normally: the cost of the projects that succeed, plus enough money to cover the risk that some fail. (In fact, they’d probably pay a bit more, to give the investors a return on the investment.)
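To make that expected-cost argument concrete, here is a small sketch (my own framing, not from Hill's piece): if each project succeeds with probability p and investors demand an expected return r on the money they risk, then risk-neutral investors need the success payout to satisfy p × payout ≥ cost × (1 + r), which means the government's expected payment per project is at least cost × (1 + r), the direct cost plus the investors' return.

```python
def min_success_payout(cost, p_success, required_return):
    """Smallest payout the government can promise on success such that
    risk-neutral investors break even at their required expected return.

    Investors pay `cost` up front and receive the payout only with
    probability `p_success`, so they need:
        p_success * payout >= cost * (1 + required_return)
    """
    return cost * (1 + required_return) / p_success

def expected_government_cost(cost, p_success, required_return):
    """Expected payment per project when the government promises the
    minimal payout: it pays only on success, with probability p."""
    return p_success * min_success_payout(cost, p_success, required_return)

# With a 90% success rate and investors wanting a 10% expected return
# (both numbers assumed for illustration):
print(round(min_success_payout(900_000, 0.9, 0.10)))       # 1100000
print(round(expected_government_cost(900_000, 0.9, 0.10)))  # 990000
```

So even with a high success rate, the government's expected bill per project exceeds the 900,000-franc direct cost, matching the point above: the premium is the price of shifting risk onto investors.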

So the government typically won’t save money, at least not if they want to fund the same amount of research. Instead, the idea is that they will avoid risk. But it’s not at all clear to me that the type of risk they avoid is one they want to.

RIBs might appeal to voters: it might sound only fair that a government funds only the research that actually works. That’s not really a problem for the government itself, though: because governments usually pay for many small projects, they still get roughly as much success overall as they want; they just don’t get to pick where. Instead, RIBs expose the government agency to a much bigger risk, the risk of unexpected success. As part of offering RIBs, the government would have to estimate how much money they would be able to pay when the projects end. They would want to fund enough projects so that, on average, they pay that amount of money. (Otherwise, they’d end up funding science much less than they do now!) But if the projects work out better than expected, then they’d have to pay much more than they planned. And government science agencies usually can’t do this. In many countries, they can’t plan far in advance at all: their budgets get decided by legislators year to year, and delays in decisions mean delays in funding. If an agency offered RIBs that were more successful than expected, they’d either have to cut funding somewhere else (probably firing a lot of people), or just default on their RIBs, weakening the concept for the next time they used them. These risks, unlike the risk of individual experiments not working, are risks that can really hurt government agencies.

Impact bonds typically have another advantage, in that they spread out decision-making. The Swiss government in Hill’s example doesn’t have to figure out which service providers can increase literacy, or how much it will cost them: it just puts up a budget, and lets investors and service providers figure out if they can make it work. This also serves as a hedge against corruption. If the government made the decisions, they might distribute funding for unrelated political reasons or even out of straight-up bribery. They’d also have to pay evaluators to figure things out. Investors won’t take bribes to lose money, so in theory would be better at choosing projects that will actually work, and would have a vested interest in paying for a good investigation.

This advantage doesn’t apply to Hill’s model of RIBs, though. In Hill’s model, scientists still need to apply to the government to decide which of their projects get offered as RIBs, so the government still needs to decide which projects are worth investing in. Then the scientists or the government need to take another step, and convince investors. The scientists in this equation effectively have to apply twice, which anyone who has applied for a government grant will realize is quite a lot of extra time and effort.

So overall, I don’t think Hill’s model of RIBs is useful, even for the purpose he imagines. It’s too risky for government science agencies to commit to payments like that, and it generates more, not less, work for scientists and the agency.

Hill’s model, though, isn’t the only way RIBs can work. And “avoiding risk” isn’t the only reason we might want them. There are two other reasons one might want RIBs, with very different-sounding motivations.

First, you might be pessimistic about mainstream science. Maybe you think scientists are making bad decisions, choosing ideas that either won’t pan out or won’t have sufficient impact, based more on fashion than on careful thought. You want to incentivize them to do better, to try to work out what impact they might have with some actual numbers and stand by their judgement. If that’s your perspective, you might be interested in RIBs for the same reason other people are interested in prediction markets: by getting investors involved, you have people willing to pay for an accurate estimate.

Second, you might instead be optimistic about mainstream science. You think scientists are doing great work, work that could have an enormous impact, but they don’t get to “capture that value”. Some projects might be essential to important, well-funded goals, but languish unrewarded. Others won’t see their value until long in the future, or will do so in unexpected ways. If scientists could fund projects based on their future impact, with RIBs, maybe they could fund more of this kind of work.

(I first started thinking about this perspective due to a talk by Sabrina Pasterski. The talk itself offended a lot of people, and had some pretty impractical ideas, like selling NFTs of important physics papers. But I think one part of the perspective, that scientists have more impact than we think, is worth holding on to.)

If you have either of those motivations, Hill’s model won’t help. But another kind of model perhaps could. Unlike Hill’s, it could fund much more speculative research, ideas where we don’t know the impact until decades down the line. To demonstrate, I’ll show how it could fund some very speculative research: the work of Peter van Nieuwenhuizen.

Peter van Nieuwenhuizen is one of the pioneers of the theory of supergravity, a theory that augments gravity with supersymmetric partner particles. From its beginnings in the 1970’s, the theory ended up having a major impact on string theory, and today they are largely thought of as part of the same picture of how the universe might work.

His work has, over time, had more practical consequences though. In the 2000’s, researchers working with supergravity noticed a calculational shortcut: they could do a complicated supergravity calculation as the “square” of a much simpler calculation in another theory, called Yang-Mills. Over time, they realized the shortcut worked not just for supergravity, but for ordinary gravity as well, and not just for particle physics calculations but for gravitational wave calculations. Now, their method may make an important contribution to calculations for future gravitational wave telescopes like the Einstein telescope, letting them measure properties of neutron stars.

With that in mind, imagine the following:

In 1967, Jocelyn Bell Burnell and Antony Hewish detected a pulsar, in one of the first direct pieces of evidence for the existence of neutron stars. Suppose that in the early 1970’s NASA decided to announce a future purchase of RIBs: in 2050, they would pay a certain amount to whoever was responsible for finding the equation of state of a neutron star, the formula that describes how its matter moves under pressure. They compute based on estimates of economic growth and inflation, and arrive at some suitably substantial number.

At the same time, but unrelatedly, van Nieuwenhuizen and collaborators sell RIBs. Maybe they use the proceeds to buy more computer time for their calculations, or to fund travel so they can more easily meet and discuss. They tell the buyers that, if some government later decides to reward their discoveries, the holders of the RIB would get a predetermined cut of the rewards.

The years roll by, and, barring some unexpected medical advances, the discoverers of supergravity die. In the meantime, researchers use their discovery to figure out how to make accurate predictions of gravitational waves from merging neutron stars. When the Einstein telescope turns on, it detects such a merger, and the accurate predictions let them compute the neutron star’s equation of state.

In 2050, then, NASA looks back. They make a list of everyone who contributed to the discovery of the neutron star’s equation of state, every result that was needed for the discovery, and try to estimate how important each contribution was. Then they spend the money they promised buying RIBs, up to the value for each contributor. This includes RIBs originally held by the investors in van Nieuwenhuizen and collaborators. Their current holders make some money, justifying whatever price they paid to the previous owners.

Imagine a world in which government agencies do this kind of thing all the time. Scientists could sell RIBs in their projects, without knowing exactly which agency would ultimately pay for them. Rather than long grant applications, they could write short summaries for investors, guessing at the range of their potential impact, and it would be up to the investors to decide whether the estimate made sense. Scientists could get some of the value of their discoveries, even when that value is quite unpredictable. And they would be incentivized to pick discoveries that could have high impact, and to put a bit of thought and math into what kind of impact that could be.

(Should I still be calling these things bonds, when the buyers don’t know how much they’ll be worth at the end? Probably not. These are more like research impact shares, on a research impact stock market.)

Are there problems with this model, then? Oh sure, loads!

I already mentioned that it’s hard for government agencies to commit to spending money five years down the line. A seventy-year commitment, from that perspective, sounds completely ridiculous.

But we don’t actually need that in the model. All we need is a good reason for investors to think that, eventually, NASA will buy some research impact shares. If government agencies do this regularly, investors would have that reason. They could buy shares in a variety of theoretical developments, a diversified pool that makes it more likely some government agency will eventually reward them. This version of the model would be riskier, though, so investors would want more return in exchange.

Another problem is the decision-making aspect. Government agencies wouldn’t have to predict the future, but they would have to accurately assess the past, fairly estimating who contributed to a project, and they would have to do it predictably enough that it could give rise to worthwhile investments. This is itself both controversial and a lot of work. If we figure out the neutron star equation of state, I’m not sure I trust NASA to reward van Nieuwenhuizen’s contribution to it.

This leads to the last modification of the model, and the most speculative one. Over time, government agencies will get better and better at assigning credit. Maybe they’ll have better models of how scientific progress works, maybe they’ll even have advanced AI. A future government (or benevolent AI, if you’re into that) might decide to buy research impact shares in order to validate important past work.

If you believe that might happen, then you don’t need a track record of government agencies buying research impact shares. As a scientist, you can find a sufficiently futuristically inclined investor, and tell them this story. You can sell them some shares, and tell them that, when the AI comes, they will have the right to whatever benefit it bestows upon your research.

I could imagine some people doing this. If you have an image of your work saving humanity in the distant future, you should be able to use that image to sell something to investors. It would be insanely speculative, a giant pile of what-ifs with no guarantee of any of it cashing out. But at least it’s better than NFTs.

Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?) and used it as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

The reasoning leading up to that admission contains a few misunderstandings, though. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs try to use physical principles and toy models to say fundamental things about quantum gravity, trying to think about space and time as being made up of entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to put them there. The amplituhedron is not a theory of everything, and it is not at present a theory of quantum gravity. It’s a research direction that might shed new insight on quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

Rooting out the Answer

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)

to factor our “alphabet” into pieces as simple as possible.
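(As a quick numerical sanity check, not part of the actual symbolic machinery, which works with abstract letters, the two identities above are easy to verify in a few lines of Python:)

```python
import math

# Numerical check of the two logarithm identities used to factor the "alphabet":
# log(ab) = log(a) + log(b)  and  log(a^n) = n*log(a).
a, b, n = 2.7, 5.3, 4

assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(a ** n), n * math.log(a))
print("both identities check out")
```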

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way. Expressed in the right variables, they’re rational. For example, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case. There was a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If they still had square roots in their alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.

Easy-peasy

So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose we have letters in our alphabet with \sqrt{-5}. Suppose another letter is the number 9. You might want to factor it like this:

9=3\times 3

Simple, right? But what if instead you did this:

9=(2+\sqrt{-5})\times(2-\sqrt{-5})

Once you allow \sqrt{-5} in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
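You can convince yourself that both factorizations really do give 9 using Python’s complex numbers. (This is just arithmetic, of course: proving that 3 and 2±√-5 can’t be factored any further, which is what makes the two factorizations genuinely different, is where the real number theory comes in.)

```python
import cmath

# sqrt(-5) is i*sqrt(5) as a complex number
root = cmath.sqrt(-5)

ordinary = 3 * 3
exotic = (2 + root) * (2 - root)  # 4 - (sqrt(-5))^2 = 4 + 5 = 9

assert ordinary == 9
assert abs(exotic - 9) < 1e-12
print("both factorizations give 9")
```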

Instead, we had to get a lot more mathematically sophisticated, factoring into something called prime ideals. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. As always in our field, sometimes surprising simplifications are just around the corner.

Breakthrough Prize for Supergravity

This week, $3 million was awarded by the Breakthrough Prize to Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen, the discoverers of the theory of supergravity, as a special award separate from the yearly Fundamental Physics Prize. There’s a nice interview with Peter van Nieuwenhuizen on the Stony Brook University website, about his reaction to the award.

The Breakthrough Prize was designed to complement the Nobel Prize, rewarding deserving researchers who wouldn’t otherwise get the Nobel. The Nobel Prize is only awarded to theoretical physicists when they predict something that is later observed in an experiment. Many theorists are instead renowned for their mathematical inventions, concepts that other theorists build on and use but that do not by themselves make testable predictions. The Breakthrough Prize celebrates these theorists, and while it has also been awarded to others who the Nobel committee could not or did not recognize (various large experimental collaborations, Jocelyn Bell Burnell), this has always been the physics prize’s primary focus.

The Breakthrough Prize website describes supergravity as a theory that combines gravity with particle physics. That’s a bit misleading: while the theory does treat gravity in a “particle physics” way, unlike string theory it doesn’t solve the famous problems with combining quantum mechanics and gravity. (At least, as far as we know.)

It’s better to say that supergravity is a theory that links gravity to other parts of particle physics, via supersymmetry. Supersymmetry is a relationship between two types of particles: bosons, like photons, gravitons, or the Higgs, and fermions, like electrons or quarks. In supersymmetry, each type of boson has a fermion “partner”, and vice versa. In supergravity, gravity itself gets a partner, called the gravitino. Supersymmetry links the properties of particles and their partners together: both must have the same mass and the same charge. In a sense, it can unify different types of particles, explaining both under the same set of rules.

In the real world, we don’t see bosons and fermions with the same mass and charge. If gravitinos exist, then supersymmetry would have to be “broken”, giving them a high mass that makes them hard to find. Some hoped that the Large Hadron Collider could find these particles, but now it looks like it won’t, so there is no evidence for supergravity at the moment.

Instead, supergravity’s success has been as a tool to understand other theories of gravity. When the theory was proposed in the 1970’s, it was thought of as a rival to string theory. Instead, over the years it consistently managed to point out aspects of string theory that the string theorists themselves had missed, for example noticing that the theory needed not just strings but higher-dimensional objects called “branes”. Now, supergravity is understood as one part of a broader string theory picture.

In my corner of physics, we try to find shortcuts for complicated calculations. We benefit a lot from toy models: simpler, unrealistic theories that let us test our ideas before applying them to the real world. Supergravity is one of the best toy models we’ve got, a theory that makes gravity simple enough that we can start to make progress. Right now, colleagues of mine are developing new techniques for calculations relevant to LIGO, the gravitational wave observatory. If they hadn’t worked with supergravity first, they would never have discovered these techniques.

The discovery of supergravity by Ferrara, Freedman, and van Nieuwenhuizen is exactly the kind of work the Breakthrough Prize was created to reward. Supergravity is a theory with deep mathematics, rich structure, and wide applicability. There is of course no guarantee that such a theory describes the real world. What is guaranteed, though, is that someone will find it useful.

Strings 2018

I’m at Strings this week, in tropical Okinawa. Opening the conference, organizer Hirosi Ooguri joked that they had carefully scheduled things for a sunny time of year, and since the rainy season had just ended “who says that string theorists don’t make predictions?”


There was then a rainstorm during lunch, falsifying string theory

This is the first time I’ve been to Strings. There are almost 500 people here, which might seem small for folks in other fields, but for me this is the biggest conference I’ve attended. The size is noticeable in the little things: this is the first conference I’ve been to with a diaper changing room, the first managed by a tour company, the first with a dedicated “Cultural Evening” featuring classical music from the region. With this in mind, the conference was impressively well-organized, but there were some substantial gaps (tightly packed tours before the Cultural Evening that didn’t leave time for dinner, and a talk by Morrison cut short by missing slides that threw off the schedule of the whole last day).

On the well-organized side, Strings has a particular structure for its talks, with Review Talks and Plenary Talks. The Review Talks each summarize a subject: mostly main focuses of the conference, but with a few (Ashoke Sen on String Field Theory, David Simmons-Duffin on the Conformal Bootstrap) that only covered the content of a few talks.

I’m not going to make another pie chart this year, if you want that kind of breakdown Daniel Harlow gave one during the “Golden Jubilee” at the end. If I did something like that this time, I’d divide it up not by sub-fields, but by goals. Talks here focused on a few big questions: “Can we classify all quantum field theories?” “What are the general principles behind quantum gravity?” “Can we make some of the murky aspects of string theory clearer?” “How can string theory give rise to sensible physics in four dimensions?”

Of those questions, classifying quantum field theories made up the bulk of the conference. I’ve heard people dismiss this work on the ground that much of it only works in supersymmetric theories. With that in mind, it was remarkable just how much of the conference was non-supersymmetric. Supersymmetry still played a role, but the assumption seemed to be that it was more of a sub-topic than something universal (to the extent that one of the Review Talks, Clay Cordova’s “What’s new with Q?”, was “the supersymmetry review talk”). Both supersymmetric and non-supersymmetric theories are increasingly understood as being part of a “landscape”, linked by duality and thinking at different scales. These links are sometimes understood in terms of string theory, but often not. So far it’s not clear if there is a real organizing principle here, especially for the non-supersymmetric cases, and people seem to be kept busy enough just proving the links they observe.

Finding general principles behind quantum gravity motivated a decent range of the talks, from Andrew Strominger to Jorge Santos. The topics that got the most focus, and two of the Review Talks, were by what I’ve referred to as “entanglers”, people investigating the structure of space and time via quantum entanglement and entropy. My main takeaway from these talks was perhaps a bit frivolous: between Maldacena’s talk (about an extremely small wormhole made from Standard Model-compatible building blocks) and Hartman’s discussion of the Average Null Energy Condition, it looks like a “useful sci-fi wormhole” (specifically, one that gets you there faster than going the normal way) has been conclusively ruled out in quantum field theory.

Only a minority of talks discussed using string theory to describe the real world, though I get the impression this was still more focus than in past years. In particular, there were several talks trying to discover properties of Calabi-Yaus, the geometries used to curl up string theory’s extra dimensions. Watching these talks I had a similar worry to Strominger’s question after Irene Valenzuela’s talk: it’s not clear that these investigations aren’t just examining a small range of possibilities, one that might become irrelevant if new dualities or types of compactification are found. Ironically, this objection seems to apply least to Valenzuela’s talk itself: characterizing the “swampland” of theories that don’t make sense as part of a theory of quantum gravity may start with examples from string compactifications, but its practitioners are looking for more general principles about quantum gravity and seem to manage at least reasonable arguments that don’t depend on string theory being true.

There wasn’t much from the amplitudes field at this conference, with just Yu-tin Huang’s talk carrying that particular flag. Despite that, amplitudes methods came up in several talks, with Silviu Pufu praising an amplitudes textbook and David Simmons-Duffin bringing up amplitudes several times (more than he did in his talk last week at Amplitudes).

The end of the conference featured a panel discussion in honor of String Theory’s 50th Anniversary, its “Golden Jubilee”. The panel was evenly split between founders of string theory, heroes of the string duality revolution, and the current crop of young theorists. The panelists started by each giving a short presentation. Michael Green joked that it felt like a “geriatric gong show”, and indeed a few of the presentations were gong show-esque. Still, some of the speeches were inspiring. I was particularly impressed by Juan Maldacena, Eva Silverstein, and Daniel Harlow, who each laid out a compelling direction for string theory’s future. The questions afterwards were collated by David Gross from audience submissions, and were largely what you would expect, with quite a lot of questions about whether string theory can ever connect with experiment. I was more than a little disappointed by the discussion of whether string theory can give rise to de Sitter space, which was rather botched: Maldacena was appointed as the defender of de Sitter, but (contra Gross’s summary) the quantum complexity-based derivation he proposed didn’t sound much like the flux compactifications that have inspired so much controversy, so everyone involved ended up talking past each other.

Edit: See Shamit’s comment below, I apparently misunderstood what Maldacena was referring to.

The State of Four Gravitons

This blog is named for a question: does the four-graviton amplitude in N=8 supergravity diverge?

Over the years, Zvi Bern and a growing cast of collaborators have been trying to answer that question. They worked their way up, loop by loop, until they stalled at five loops. Last year, they finally broke the stall, and last week, they published the result of the five-loop calculation. They find that N=8 supergravity does not diverge at five loops in four dimensions, but does diverge in 24/5 dimensions. I thought I’d write a brief FAQ about the status so far.

Q: Wait a minute, 24/5 dimensions? What does that mean? Are you talking about fractals, or…

Nothing so exotic. The number 24/5 comes from a regularization trick. When we’re calculating an amplitude that might be divergent, one way to deal with it is to treat the dimension like a free variable. You can then see what happens as you vary the dimension, and see when the amplitude starts diverging. If the dimension is an integer, then this ends up matching a more physics-based picture, where you start with a theory in eleven dimensions and curl up the extra ones until you get to the dimension you’re looking for. For fractional dimensions, it’s not clear that there’s any physical picture like this: it’s just a way to talk about how close something is to diverging.

Q: I’m really confused. What’s a graviton? What is supergravity? What’s a divergence?

I don’t have enough space to explain these things here, but that’s why I write handbooks. Here are explanations of gravitons, supersymmetry, and (N=8) supergravity, loops, and divergences. Please let me know if anything in those explanations is unclear, or if you have any more questions.

Q: Why do people think that N=8 supergravity will diverge at seven loops?

There’s a useful rule of thumb in quantum field theory: anything that can happen, will happen. In this case, that means if there’s a way for a theory to diverge that’s consistent with the symmetries of the theory, then it almost always does diverge. In the past, that meant that people expected N=8 supergravity to diverge at five loops. However, researchers found a previously unknown symmetry that looked like it would forbid the five-loop divergence, and only allow a divergence at seven loops (in four dimensions). Zvi and co.’s calculation confirms that the five-loop divergence doesn’t show up.

More generally, string theory not only avoids divergences but also clears up other puzzles, like those posed by black holes. These two things seem tied together: string theory cleans up the problems of quantum gravity in a consistent, unified way. There isn’t a clear way for N=8 supergravity on its own to clean up these kinds of problems, which makes some people skeptical that it can match string theory’s advantages. Either way, N=8 supergravity, unlike string theory, isn’t a candidate theory of nature by itself: it would need to be modified in order to describe our world, and no-one has suggested a way to do that.

Q: Why do people think that N=8 supergravity won’t diverge at seven loops?

There’s a useful rule of thumb in amplitudes: amplitudes are weird. In studying amplitudes we often notice unexpected simplifications, patterns that uncover new principles that weren’t obvious before.

Gravity in general seems to have a lot of these kinds of simplifications. Even without any loops, its behavior is surprisingly tame: it’s a theory that we can build up piece by piece from the three-particle interaction, even though naively we shouldn’t be able to (for the experts: I’m talking about large-z behavior in BCFW). This behavior seems to have an effect on one-loop amplitudes as well. There are other ways in which gravity seems better-behaved than expected; overall, this suggests that we still have a fair ways to go before we understand all of the symmetries of gravity theories.

Supersymmetric gravity in particular also seems unusually well-behaved. N=5 supergravity was expected to diverge at four loops, but doesn’t. N=4 supergravity does diverge at four loops, but that seems to be due to an effect that is specific to that case (for the experts: an anomaly).

For N=8 specifically, a suggestive hint came from varying the dimension. If you checked the dimension in which the theory diverged at each loop, you’d find it matched the divergences of another theory, N=4 super Yang-Mills. At l loops, N=4 super Yang-Mills diverges in dimension 4+6/l. From that formula, you can see that no matter how much you increase l, you’ll never get to four dimensions: in four dimensions, N=4 super Yang-Mills doesn’t diverge.

At five loops, N=4 super Yang-Mills diverges in 26/5 dimensions. Zvi Bern made a bet with supergravity expert Kelly Stelle that the dimension would be the same for N=8 supergravity: a bottle of California wine from Bern versus English wine from Stelle. Now that they’ve found a divergence in 24/5 dimensions instead, Stelle will likely be getting his wine soon.
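If you want to play with the formula yourself, here’s a minimal snippet that evaluates 4 + 6/l exactly (the formula is quoted above for the N=4 super Yang-Mills divergences, valid from two loops on):

```python
from fractions import Fraction

def critical_dimension(loops):
    # Lowest dimension in which N=4 super Yang-Mills diverges at a given
    # loop order, using the formula 4 + 6/l from the text (l >= 2).
    return Fraction(4) + Fraction(6, loops)

for l in range(2, 6):
    print(l, critical_dimension(l))
# l = 5 gives 26/5; the new five-loop N=8 supergravity divergence
# turned out to sit in 24/5 dimensions instead.
```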

Q: It sounds like the calculation was pretty tough. Can they still make it to seven loops?

I think so, yes. Doing the five-loop calculation they noticed simplifications, clever tricks uncovered by even more clever grad students. The end result is that if they just want to find out whether the theory diverges then they don’t have to do the “whole calculation”, just part of it. This simplifies things a lot. They’ll probably have to find a few more simplifications to make seven loops viable, but I’m optimistic that they’ll find them, and in the meantime the new tricks should have some applications in other theories.

Q: What do you think? Will the theory diverge?

I’m not sure.

To be honest, I’m a bit less optimistic than I used to be. The agreement of divergence dimensions between N=8 supergravity and N=4 super Yang-Mills wasn’t the strongest argument (there’s a reason why, though Stelle accepted the bet on five loops, string theorist Michael Green is waiting on seven loops for his bet). Fractional dimensions don’t obviously mean anything physically, and many of the simplifications in gravity seem specific to four dimensions. Still, it was suggestive, the kind of “motivation” that gets a conjecture started.

Without that motivation, none of the remaining arguments are specific to N=8. I still think unexpected simplifications are likely, that gravity overall behaves better than we yet appreciate. I still would bet on seven loops being finite. But I’m less confident about what it would mean for the theory overall. That’s going to take more serious analysis, digging in to the anomaly in N=4 supergravity and seeing what generalizes. It does at least seem like Zvi and co. are prepared to undertake that analysis.

Regardless, it’s still worth pushing for seven loops. Having that kind of heavy-duty calculation in our sub-field forces us to improve our mathematical technology, in the same way that space programs and particle colliders drive technology in the wider world. If you think your new amplitudes method is more efficient than the alternatives, the push to seven loops is the ideal stress test. Jacob Bourjaily likes to tell me how his prescriptive unitarity technique is better than what Zvi and co. are doing; this is our chance to find out!

Overall, I still stand by what I say in my blog’s sidebar. I’m interested in N=8 supergravity, I’d love to find out whether the four-graviton amplitude diverges…and now that the calculation is once again making progress, I expect that I will.

Amplitudes Papers I Haven’t Had Time to Read

Interesting amplitudes papers seem to come in groups. Several interesting papers went up this week, and I’ve been too busy to read any of them!

Well, that’s not quite true, I did manage to read this paper, by James Drummond, Jack Foster, and Omer Gurdogan. At six pages long, it wasn’t hard to fit in, and the result could be quite useful. The way my collaborators and I calculate amplitudes involves building up a mathematical object called a symbol, described in terms of a string of “letters”. What James and collaborators have found is a restriction on which “letters” can appear next to each other, based on the properties of a mathematical object called a cluster algebra. Oddly, the restriction seems to have the same effect as a more physics-based condition we’d been using earlier. This suggests that the abstract mathematical restriction and the physics-based restriction are somehow connected, but we don’t yet understand how. It also could be useful for letting us calculate amplitudes with more particles: previously we thought the number of “letters” we’d have to consider there was going to be infinite, but with James’s restriction we’d only need to consider a finite number.

I didn’t get a chance to read David Dunbar, John Godwin, Guy Jehu, and Warren Perkins’s paper. They’re computing amplitudes in QCD (which unlike N=4 super Yang-Mills actually describes the real world!) and doing so for fairly complicated arrangements of particles. They claim to get remarkably simple expressions: since that sort of claim was what jump-started our investigations into N=4, I should probably read this if only to see if there’s something there in the real world amenable to our technique.

I also haven’t read Rutger Boels and Hui Lui’s paper yet. From the abstract, I’m still not clear which parts of what they’re describing is new, or how much it improves on existing methods. It will probably take a more thorough reading to find out.

I really ought to read Burkhard Eden, Yunfeng Jiang, Dennis le Plat, and Alessandro Sfondrini’s paper. They’re working on a method referred to as the Hexagon Operator Product Expansion, or HOPE. It’s related to an older method, the Pentagon Operator Product Expansion (POPE), but applicable to trickier cases. I’ve been keeping an eye on the HOPE in part because my collaborators have found the POPE very useful, and the HOPE might enable something similar. It will be interesting to find out how Eden et al.’s paper modifies the HOPE story.

Finally, I’ll probably find the time to read my former colleague Sebastian Mizera’s paper. He’s found a connection between the string-theory-like CHY picture of scattering amplitudes and some unusual mathematical structures. I’m not sure what to make of it until I get a better idea of what those structures are.

One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.


Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.
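For the curious, here’s a toy implementation of Knuth’s up-arrow notation, just to illustrate how each level iterates the one below it. (The definition here is the standard textbook one, not anything specific to physics.)

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow: one arrow is exponentiation; each extra arrow
    # iterates the operation below it -- the same "new zero" jump as above.
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 1, 3))  # 2^3 = 8
print(up_arrow(2, 2, 3))  # 2^(2^2) = 16
print(up_arrow(2, 3, 3))  # 2^^(2^^2) = 2^^4 = 65536
```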

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI who work on this sort of thing.)

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.