I was on the debate team in high school. There’s a type of debate, called Policy, where one team proposes a government policy and the other team argues the policy is bad. The rules of Policy debate don’t say who the debaters are pretending to be: they could be congresspeople, cabinet members, or staff at a think tank. This creates ambiguity, and nerds are great at exploiting ambiguity. A popular strategy was to argue that the opponents had a perfectly good policy, but were wrong about who should implement it. This had reasonable forms (no, congress does not have the power to do X) but could also get very silly (the crux of one debate was whether the supreme court or the undersecretary of the TSA was the best authority to usher in a Malthusian dictatorship). When debating policy, “who” could be much more important than “what”.
Occasionally, when I see people argue that something needs to be done, I ask myself this question. Who, precisely, should do it?
Recently, I saw a tweet complaining about scientific publishing. Physicists put their work out for free on arXiv.org, then submit that work to journals, which charge huge fees either to the scientists themselves or to libraries that want access to the work. It’s a problem academics complain about frequently, but usually we act like it’s something we should fix ourselves, a kind of grassroots movement to change our publication and hiring culture.
This tweet, surprisingly, didn’t do that. Instead, it seemed to have a different “who” in mind. The tweet argued that the stranglehold of publishers like Elsevier on academic publishing is a waste of taxpayer money. The implication, maybe intended maybe not, is that the problem should be fixed by the taxpayers: that is, by the government.
Which in turn got me thinking, what could that look like?
I could imagine a few different options, from the kinds of things normal governments do to radical things that would probably never happen.
First, the most plausible strategy: collective negotiation. We particle physicists don’t pay out of our own grants to publish papers, and we don’t pay to read them. Instead, we have a collective agreement, called SCOAP3, where the big institutions pay together each year to guarantee open access. The University of California system tried to negotiate a similar agreement a few years back, not just for physicists but for all fields. You could imagine governments leaning on this, with the university systems of whole countries negotiating a fixed payment. The journals would still be getting paid, but less.
Second, less likely but not impossible: governments could use the same strategies against the big publishers that they use against other big companies. This could be antitrust action (if you have to publish in Nature to get hired, are they really competing with anybody?), or even some kind of price controls. The impression I get is that when governments do try to change scientific publishing they usually do it via restrictions on the scientists (such as requiring them to publish open-access), while this would involve restrictions on the publishers.
Third, governments could fund alternative institutions to journals. They could put more money into websites like arXiv.org and its equivalents in other fields or fund an alternate review process to vet papers like journal referees do. There are existing institutions they could build on, or they could create their own.
Fourth, you could imagine addressing the problem on the job market side, with universities told not to weigh the prestige of journals when considering candidates. This seems unlikely to happen, and that’s probably a good thing, because it’s very micromanagey. Still, I do think that both grants and jobs could do with less time and effort spent attempting to vet candidates and more explicit randomness.
Fifth, you could imagine governments essentially opting out of the game altogether. They could disallow spending any money from publicly funded grants or universities on open-access fees or subscription fees, pricing most scientists out of the journal system. Journals would either have to radically lower their prices so that scientists could pay for them out of pocket, or more likely go extinct. This does have the problem that if only some countries did it, their scientists would have a harder time in other countries’ job markets. And of course, many critics of journals just want the journals to make less obscene profits, and not actually go extinct.
Most academics I know agree that something is deeply wrong with how academic journals work. While the situation might be solved at the grassroots level, it’s worth imagining what governments might do. Realistically, I don’t expect them to do all that much. But stranger things have gotten political momentum before.
For science-minded folks who want to learn a bit more: I have a sentence in the article mentioning other uncertainties. In case you’re curious what those uncertainties are:
Gamma (Γ) here is the decay rate; its inverse gives the time it takes for a cubic gigaparsec of space to experience vacuum decay. The three uncertainties come from experiment: the uncertainties in our current knowledge of the Higgs mass, the top quark mass, and the strength of the strong force.
Occasionally, you see futurology-types mention “uncertainties in the exponent” to argue that some prediction (say, how long it will take till we have human-level AI) is so uncertain that estimates barely even make sense: it might be 10 years, or 1000 years. I find it fun that for vacuum decay, because of that Γ, there is actually uncertainty in the exponent! Vacuum decay might happen in as few as years or as many as years, and that’s the result of an actual, reasonable calculation!
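To make “uncertainty in the exponent” concrete, here is a rough sketch with placeholder symbols rather than the actual numbers from the calculation. The expected time for a cubic gigaparsec of space to decay is the inverse of the rate,

$$\tau \sim \frac{1}{\Gamma},$$

and what the calculation pins down, error bars and all, is essentially the exponent N when that time is written as 10 to the power N years. If the error bars on N are themselves in the hundreds, the possible lifetimes differ not by a factor of a few but by hundreds of orders of magnitude.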
For physicist readers, I should mention that I got a lot out of reading some slides from a 2016 talk by Matthew Schwartz. Not many details of the calculation made it into the piece, but the slides were helpful in dispelling a few misconceptions that could otherwise have crept in. There’s an instinct to think about the situation in terms of the energy, to think about how difficult it is for quantum uncertainty to get you over the energy barrier to the next vacuum. There are methods that sort of look like that, if you squint, but that’s not really how you do the calculation, and there end up being a lot of interesting subtleties in the actual story. There were also a few numbers that it was tempting to put on the plots in the article, but that turn out to be gauge-dependent!
Another thing I learned from those slides is how far you can actually take the uncertainties mentioned above. The higher-energy Higgs vacuum is pretty dang high-energy, to the point where quantum gravity effects could potentially matter. And at that point, all bets are off. The calculation, with all those nice uncertainties, is a calculation within the framework of the Standard Model. All of the things we don’t yet know about high-energy physics, especially quantum gravity, could freely mess with this. The universe as we know it could still be long-lived, but it could be a lot shorter-lived as well. That in turn makes this calculation more of a practice ground for honing techniques than an estimate you can actually rely on.
Quantum mechanics is famously unintuitive, but the most intuitive way to think about it is probably the path integral. In the path integral formulation, to find the chance a particle goes from point A to point B, you look at every path you can draw from one place to another. For each path you calculate a complex number, a “weight” for that path. Most of these weights cancel out, leaving the path the particle would travel under classical physics with the biggest contribution. They don’t perfectly cancel out, though, so the other paths still matter. In the end, the way the particle behaves depends on all of these possible paths.
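In equations, this is the standard textbook statement (nothing specific to this post): the weight for each path x(t) from A to B is a complex phase set by the classical action S,

$$\langle B \,|\, A \rangle \;=\; \int \mathcal{D}x(t)\; e^{\,i S[x(t)]/\hbar},$$

where the integral sign means “add up over every path from A to B”. Paths near the classical one have nearly the same action, so their phases reinforce; paths far from it have rapidly varying phases that mostly cancel.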
If you’ve heard this story, it might make you feel like you have some intuition for how quantum physics works. With each path getting less likely as it strays from the classical, you might have a picture of a nice orderly set of options, with physicists able to pick out the chance of any given thing happening based on the path.
In a world with just one particle swimming along, this might not be too hard. But our world doesn’t run on the quantum mechanics of individual particles. It runs on quantum field theory. And there, things stop being so intuitive.
First, the paths aren’t “paths”. For particles, you can imagine something in one place, traveling along. But particles are just ripples in quantum fields, which can grow, shrink, or change. For quantum fields instead of quantum particles, the path integral isn’t a sum over paths of a single particle, but a sum over paths traveled by fields. The fields start out in some configuration (which may look like a particle at point A) and then end up in a different configuration (which may look like a particle at point B). You have to add up weights, not for every path a single particle could travel, but for every different way the fields could have behaved in between configuration A and configuration B.
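Schematically, and again just in the standard textbook form, the sum over particle paths becomes a sum over field histories,

$$\langle \text{configuration B} \,|\, \text{configuration A} \rangle \;=\; \int \mathcal{D}\phi\; e^{\,i S[\phi]/\hbar},$$

with one weight for every way the fields φ could behave everywhere in space in between the two configurations.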
More importantly, though, there is more than one field! Maybe you’ve heard about electric and magnetic fields shifting back and forth in a wave of light, one generating the other. Other fields interact like this, including the fields behind things you might think of as particles like electrons. For any two fields that can affect each other, a disturbance in one can lead to a disturbance in the other. An electromagnetic field can disturb the electron field, which can disturb the Higgs field, and so on.
The path integral formulation tells you that all of these paths matter. Not just the path of one particle or one field chugging along by itself, but the path where the electromagnetic field kicks off a Higgs field disturbance down the line, only to become a disturbance in the electromagnetic field again. Reality is all of these paths at once, a Rube Goldberg machine of a universe.
A loose rule of thumb: PhD candidates in the US are treated like students. In Europe, they’re treated like employees.
This does exaggerate things a bit. In both Europe and the US, PhD candidates get paid a salary (at least in STEM). In both places, PhD candidates count as university employees, if sometimes officially part-time ones, with at least some of the benefits that entails.
On the other hand, PhD candidates in both places take classes (albeit more classes in the US). Universities in both places charge tuition, which is almost always paid by the candidate’s supervisor’s grants or department, not by the candidate. Both aim for a degree, capped off with a thesis defense.
But there is a difference. And it’s at its most obvious in how applications work.
In Europe, PhD applications are like job applications. You apply to a particular advisor, advertising a particular kind of project. You submit things like a CV, cover letter, and publication list, as well as copies of your previous degrees.
In the US, PhD applications are like applications to a school. You apply to the school, perhaps mentioning an advisor or topic you are interested in. You submit things like essays, test scores, and transcripts. And typically, you have to pay an application fee.
I don’t think I quite appreciated, back when I applied for PhD programs, just how much those fees add up. With each school charging a fee in the $100 range, and students commonly advised to apply to ten or so schools, just applying to US PhD programs can easily run to a thousand dollars, which quickly becomes unaffordable for many. Schools do offer fee waivers under certain conditions, but the standards vary from school to school. Most don’t seem to apply to non-Americans, so if you’re considering a US PhD from abroad, be aware that just applying can be an expensive thing to do.
Why the fee? I don’t really know. The existence of application fees, by itself, isn’t a US thing. If you want to get a Master’s degree from the University of Copenhagen and you’re coming from outside Europe, you have to pay an application fee of roughly the same size that US schools charge.
Based on that, I’d guess part of the difference is funding. It costs something for a university to process an application, and governments might be willing to cover it for locals (in the case of the Master’s in Copenhagen) or more specifically for locals in need (in the US PhD case). I don’t know whether it makes sense for that cost to be around $100, though.
It’s also an incentive, presumably. Schools don’t want too many applicants, so they attach a fee so only the most dedicated people apply.
Jobs don’t typically have an application fee, and I think it would piss a lot of people off if they did. Some jobs get a lot of applicants, enough that bigger and more well-known companies in some places use AI to filter applications. I have to wonder if US PhD schools are better off in this respect. Does charging a fee mean they have a reasonable number of applications to deal with? Or do they still have to filter through a huge pile, with nothing besides raw numbers to pare things down? (At least, because of the “school model” with test scores, they have some raw numbers to use.)
Overall, coming at this with a “theoretical physicist mentality”, I have to wonder if any of this is necessary. Surely there’s a way to make it easy for students to apply, and just filter them down to the few you want to accept? But the world is of course rarely that simple.
Last month, I had a post about a type of theory that is, in a certain sense, “immune to gravity”. These theories don’t allow you to build antigravity machines, and they aren’t totally independent of the overall structure of space-time. But they do ignore the core thing most people think of as gravity, the curvature of space that sends planets around the Sun and apples to the ground. And while that trait isn’t something we can use for new technology, it has led to extremely productive conversations between mathematicians and physicists.
After posting, I had some interesting discussions on twitter. A few people felt that I was over-hyping things. Given all the technical caveats, does it really make sense to say that these theories defy gravity? Isn’t a title like “Gravity-Defying Theories” just clickbait?
Obviously, I don’t think so.
There’s a concept in education called inductive teaching. We remember facts better when they come in context, especially the context of us trying to solve a puzzle. If you try to figure something out, and then find an answer, you’re going to remember that answer better than if you were just told the answer from the beginning. There are some similarities here to the concept of a Zen koan: by asking questions like “what is the sound of one hand clapping?” a Zen master is supposed to get you to think about the world in a different way.
When I post with a counterintuitive title, I’m aiming for that kind of effect. I know that you’ll read the title and think “that can’t be right!” Then you’ll read the post, and hear the explanation. That explanation will stick with you better because you asked that question, because “how can that be right?” is the solution to a puzzle that, in that span of words, you cared about.
Clickbait is bad for two reasons. First, it sucks you into reading things that aren’t actually interesting. I write my blog posts because I think they’re interesting, so I hope I avoid that. Second, it can spread misunderstandings. I try to be careful about these, and I have some tips for how you can be too:
Correct the misunderstanding early. If I’m worried a post might be misunderstood in a clickbaity way, I make sure that every time I post the link I include a sentence discouraging the misunderstanding. For example, for the post on Gravity-Defying Theories, before the link I wrote “No flying cars, but it is technically possible for something to be immune to gravity”. If I’m especially worried, I’ll also make sure that the first paragraph of the piece corrects the misunderstanding as well.
Know your audience. This means knowing both the people who normally read your work and how far something might go if it catches on. Your typical readers might be savvy enough to skip the misunderstanding, but if they latch on to the naive explanation immediately then the “koan” effect won’t happen. The wider your reach can be, the more careful you need to be about what you say. If you’re a well-regarded science news outlet, don’t run a headline saying that scientists have built a wormhole.
Have enough of a conclusion to be “worth it”. This is obviously a bit subjective. If your post introduces a mystery and the answer is that you just made some poetic word choice, your audience is going to feel betrayed, like the puzzle they were considering didn’t have a puzzly answer after all. Whatever you’re teaching in your post, it needs to have enough “meat” that solving it feels like a real discovery, like the reader did some real work to solve it.
I don’t think I always live up to these, but I do try. And I think trying is better than the conservative option, of never having catchy titles that make counterintuitive claims. One of the most fun aspects of science is that sometimes a counterintuitive fact is actually true, and that’s an experience I want to share.
I’ve now had time to look over the rest of the slides from the Amplitudes 2024 conference, so I can say something about Thursday and Friday’s talks.
Thursday was gravity-focused. Zvi Bern’s review talk was actually a review, a tour of the state of the art in using amplitudes techniques to make predictions for gravitational wave physics. Bern emphasized that future experiments will require much more precision: two more orders of magnitude, which in our lingo amounts to two more “loops”. The current state of the art is three loops, but they’ve been hacking away at four, doing things piece by piece in a way that cleverly also yields publications (for example, they can do just the integrals needed for supergravity, which are simpler). Four loops here is the first time that the Feynman diagrams involve Calabi-Yau manifolds, so they will likely need techniques from some of the folks I talked about last week. Once they have four loops, they’ll want to go to five, since that is the level of precision you need to learn something about the material in neutron stars. The talk covered a variety of other developments, some of which were talked about later on Thursday and some of which were only mentioned here.
Of that day’s other speakers, Stefano De Angelis, Lucile Cangemi, Mikhail Ivanov, and Alessandra Buonanno also focused on gravitational waves. De Angelis talked about the subtleties that show up when you try to calculate gravitational waveforms directly with amplitudes methods, showcasing various improvements to the pipeline there. Cangemi talked about a recurring question with its own list of subtleties, namely how the Kerr metric for spinning black holes emerges from the math of amplitudes of spinning particles. Gravitational waves were the focus of only the second half of Ivanov’s talk, where he talked about how amplitudes methods can clear up some of the subtler effects people try to take into account. The first half was about another gravitational application, that of using amplitudes methods to compute the correlations of galaxy structures in the sky, a field where it looks like a lot of progress can be made. Finally, Buonanno gave the kind of talk she’s given a few times at these conferences, a talk that puts these methods in context, explaining how amplitudes results are packaged with other types of calculations into the Effective-One-Body framework which then is more directly used at LIGO. This year’s talk went into more detail about what the predictions are actually used for, which I appreciated. I hadn’t realized that there have been a handful of black hole collisions discovered by other groups from LIGO’s data, a win for open science! Her slides had a nice diagram explaining what data from the gravitational wave is used to infer what black hole properties, quite a bit more organized than the statistical template-matching I was imagining. She explained the logic behind Bern’s statement that gravitational wave telescopes will need two more orders of magnitude, pointing out that that kind of precision is necessary to be sure that something that might appear to be a deviation from Einstein’s theory of gravity is not actually a subtle effect of known physics. Her method typically is adjusted to fit numerical simulations, but she shows that even without that adjustment they now fit the numerics quite well, thanks in part to contributions from amplitudes calculations.
Of the other talks that day, David Kosower’s was the only one that didn’t explicitly involve gravity. Instead, his talk focused on a more general question, namely how to find a well-defined basis of integrals for Feynman diagrams, which turns out to involve some rather subtle mathematics and geometry. This is a topic that my former boss Jake Bourjaily worked on in a different context for some time, and I’m curious whether there is any connection between the two approaches. Oliver Schlotterer gave the day’s second review talk, once again of the “actually a review” kind, covering a variety of recent developments in string theory amplitudes. These include some new pictures of how string theory amplitudes that correspond to Yang-Mills theories “square” to amplitudes involving gravity at higher loops and progress towards going past two loops, the current state of the art for most string amplitude calculations. (For the experts: this does not involve taking the final integral over the moduli space, which is still a big unsolved problem.) He also talked about progress by Sebastian Mizera and collaborators in understanding how the integrals that show up in string theory make sense in the complex plane. This is a problem that people had mostly managed to avoid dealing with because of certain simplifications in the calculations people typically did (no moduli space integration, expansion in the string length), but taking things seriously means confronting it, and Mizera and collaborators found a novel solution to the problem that has already passed a lot of checks. Finally, Tobias Hansen’s talk also related to string theory, specifically in anti-de-Sitter space, where the duality between string theory and N=4 super Yang-Mills lets him and his collaborators do Yang-Mills calculations and see markedly stringy-looking behavior.
Friday began with Kevin Costello, whose not-really-a-review talk dealt with his work with Natalie Paquette showing that one can use an exactly solvable system to learn something about QCD. This only works for certain rather specific combinations of particles: for example, in order to have three colors of quarks, they need to do the calculation for nine flavors. Still, they managed to do a calculation with this method that had not previously been done with more traditional means, and to me it’s impressive that anything like this works for a theory without supersymmetry. Mina Himwich and Diksha Jain both had talks related to a topic of current interest, “celestial” conformal field theory, a picture that tries to apply ideas from holography, in which a theory on the boundary of a space fully describes the interior, to the “boundary” of flat space, infinitely far away. Himwich talked about a symmetry observed in that research program, and how that symmetry can be seen using more normal methods, which also led to some suggestions of how the idea might be generalized. Jain covered a different approach, one in which one sets artificial boundaries in flat space and sees what happens when those boundaries move.
Yifei He described progress in the modern S-matrix bootstrap approach. Previously, this approach had yielded quite general constraints on amplitudes. She is trying to do something more specific: predict the S-matrix for the scattering of pions in the real world. By imposing compatibility with what we know at low and high energies, she was able to find a much more restricted space of consistent S-matrices, and these turn out to match experimental results quite well. Mathieu Giroux addressed a question important for many parts of amplitudes research: how to predict the singularities of Feynman diagrams. He explored a recursive approach to solving Landau’s equations for these singularities, one which seems impressively powerful, in one case finding a solution that in text form is approximately the length of Harry Potter. Finally, Juan Maldacena closed the conference by talking about some progress he’s made towards an old idea, that of defining M theory in terms of a theory involving actual matrices. This is a very challenging thing to do, but he is at least able to tackle the simplest possible case, involving correlations between three observations. This had a known answer, so his work serves mostly as a confirmation that the original idea makes sense at least at this level.
Arguably my biggest project over the last two years wasn’t a scientific paper, a journalistic article, or even a grant application. It was a conference.
Most of the time, when scientists organize a conference, they do it “at home”. Either they host the conference at their own university, or they rent out a nearby event venue. There is an alternative, though. Scattered around the world, often in out-of-the-way locations, are places dedicated to hosting scientific conferences. These places accept applications each year from scientists arguing that their conference would best serve the place’s scientific mission.
One of these places is the Banff International Research Station in Alberta, Canada. Since 2001, Banff has been hosting gatherings of mathematicians from around the world, letting them focus on their research in an idyllic Canadian ski resort.
If you don’t like skiing, though, Banff still has you covered! They have “affiliate centers”: one elsewhere in Canada, one in China, two on the way in India and Spain…and one, which particularly caught my interest, in Oaxaca, Mexico.
Back around this time of year in 2022, I started putting together a proposal for a conference at the Casa Matemática Oaxaca. The idea was a conference discussing the frontier of the field: how to express the strange mathematical functions that live in Feynman diagrams. I assembled a big team of co-organizers, five in total. At the time, I wasn’t sure whether I could find a permanent academic job, so I wanted to make sure there were enough people involved that they could run the conference without me.
Followers of the blog know I did end up finding that permanent job…only to give it up. In the end, I wasn’t able to make it to the conference. But my four co-organizers were (modulo some delays in the Houston airport). The conference was this week, with the last few talks happening over the next few hours.
I gave a short speech via Zoom at the beginning of the conference, a mix of welcome and goodbye. Since then I haven’t had the time to tune in to the talks, but they’re good folks and I suspect they’re having good discussions.
I do regret that, near the end, I wasn’t able to give the conference the focus it deserved. There were people we really hoped to have, but who couldn’t afford the travel. I’d hoped to find a source of funding that could support them, but the plan fell through. The week after Amplitudes 2024 was also a rough time to have a conference in this field, with many people who would have attended not able to go to both. (At least they weren’t the same week, thanks to some flexibility on the part of the Amplitudes organizers!)
Still, it’s nice to see something I’ve been working on for two years finally come to pass, to hopefully stir up conversations between different communities and give various researchers a taste of one of Mexico’s most beautiful places. I haven’t been to Oaxaca myself yet, but I suspect I will eventually. Danish companies do give at least five weeks of holiday per year, so I should get a chance at some point.
For over a decade, I studied scattering amplitudes, the formulas particle physicists use to find the probability that particles collide, or scatter, in different ways. I went to Amplitudes, the field’s big yearly conference, every year from 2015 to 2023.
This year is different. I’m on the way out of the field, looking for my next steps. Meanwhile, Amplitudes 2024 is going full speed ahead at the Institute for Advanced Study in Princeton.
With poster art that is, as the kids probably don’t say anymore, “on fleek”
The talks aren’t live-streamed this year, but they are posting slides, and they will be posting recordings. Since a few of my readers are interested in new amplitudes developments, I’ve been paging through the posted slides looking for interesting highlights. So far, I’ve only seen slides from the first few days: I will probably write about the later talks in a future post.
Each day of Amplitudes this year has two 45-minute “review talks”, one first thing in the morning and the other first thing after lunch. I put “review talks” in quotes because they vary a lot, between talks that try to introduce a topic for the rest of the conference to talks that mostly focus on the speaker’s own research. Lorenzo Tancredi’s talk was of the former type, an introduction to the many steps that go into making predictions for the LHC, with a focus on those topics where amplitudeologists have made progress. The talk opens with the type of motivation I’d been writing in grant and job applications over the last few years (we don’t know most of the properties of the Higgs yet! To measure them, we’ll need to calculate amplitudes with massive particles to high precision!), before moving into a review of the challenges and approaches in different steps of these calculations. While Tancredi apologizes in advance that the talk may be biased, I found it surprisingly complete: if you want to get an idea of the current state of the “LHC amplitudes pipeline”, his slides are a good place to start.
Tancredi’s talk serves as introduction for a variety of LHC-focused talks, some later that day and some later in the week. Federica Devoto discussed high-energy quarks while Chiara Signorile-Signorile and George Sterman showed advances in handling of low-energy particles. Xiaofeng Xu has a program that helps predict symbol letters, the building-blocks of scattering amplitudes that can be used to reconstruct or build up the whole thing, while Samuel Abreu talked about a tricky state-of-the-art case where Xu’s program misses part of the answer.
Later Monday morning veered away from the LHC to focus on more toy-model theories. Renata Kallosh’s talk in particular caught my attention. This blog is named after a long-standing question in amplitudes: will the four-graviton amplitude in N=8 supergravity diverge at seven loops in four dimensions? This seemingly arcane question is deep down a question about what is actually required for a successful theory of quantum gravity, and in particular whether some of the virtues of string theory can be captured by a simpler theory instead. Answering the question requires a prodigious calculation, and the more “loops” are involved the more difficult it is. Six years ago, the calculation got to five loops, and it hasn’t passed that mark since then. That five-loop calculation gave some reason for pessimism, a nice pattern at lower loops that stopped applying at five.
Kallosh thinks she has an idea of what to expect. She’s noticed a symmetry in supergravity, one that hadn’t previously been taken into account. She thinks that symmetry should keep N=8 supergravity from diverging on schedule…but only in exactly four dimensions. All of the lower-loop calculations in N=8 supergravity diverged in higher dimensions than four, and it seems like with this new symmetry she understands why. Her suggestion is to focus on other four-dimensional calculations. If seven loops is still too hard, then dialing back the amount of supersymmetry from N=8 to something lower should let her confirm her suspicions. Already a while back N=5 supergravity was found to diverge later than expected in four dimensions. She wants to know whether that pattern continues.
(Her backup slides also have a fun historical point: in dimensions greater than four, you can’t get elliptical planetary orbits. So four dimensions is special for our style of life.)
Other talks on Monday included a talk by Zahra Zahraee on progress towards “solving” the field’s favorite toy model, N=4 super Yang-Mills. Christian Copetti talked about the work I mentioned here, while Meta employee François Charton’s “review talk” dealt with his work applying machine learning techniques to “translate” between questions in mathematics and their answers. In particular, he reported progress with my current boss Matthias Wilhelm and frequent collaborator and mentor Lance Dixon on using transformers to guess high-loop formulas in N=4 super Yang-Mills. They have an interesting proof of principle now, but it will probably still be a while until they can use the method to predict something beyond the state of the art.
In the meantime at least they have some hilarious AI-generated images
Tuesday’s review by Ian Moult was genuinely a review, but of a topic not otherwise covered at the conference, that of “detector observables”. The idea is that rather than talking about which individual particles are detected, one can ask questions that make more sense in terms of the experimental setup, like asking about the amounts of energy deposited in different detectors. This type of story has gone from an idle observation by theorists to a full research program, with theorists and experimentalists in active dialogue.
Natalia Toro brought up that, while we say each particle has a definite spin, that may not actually be the case. Particles with so-called “continuous spins” can masquerade as particles with a definite integer spin at lower energies. Toro and Schuster promoted this view of particles ten years ago, but now can make a bit more sense of it, including understanding how continuous-spin particles can interact.
The rest of Tuesday continued to be a bit of a grab-bag. Yael Shadmi talked about applying amplitudes techniques to Effective Field Theory calculations, while Franziska Porkert talked about a Feynman diagram involving two different elliptic curves. Interestingly (well, to me at least), the curves never appear “together”: you can represent the diagram as a sum of terms involving one curve and terms involving the other, much simpler than it could have been!
Tuesday afternoon’s review talk by Iain Stewart was one of those “guest from an adjacent field” talks, in this case from an approach called SCET, and at first glance didn’t seem to do much to reach out to the non-SCET people in the audience. Frequent past collaborator of mine Andrew McLeod showed off a new set of relations between singularities of amplitudes, found by digging in to the structure of the equations discovered by Landau that control this behavior. He and his collaborators are proposing a new way to keep track of these things involving “minimal cuts”, a clear pun on the “maximal cuts” that have been of great use to other parts of the community. Whether this has more or less staying power than “negative geometries” remains to be seen.
Closing Tuesday, Shruti Paranjape showed there was more to discover about the simplest amplitudes, called “tree amplitudes”. By asking why these amplitudes are sometimes equal to zero, she was able to draw a connection to the “double-copy” structure that links the theory of the strong force and the theory of gravity. Johannes Henn described an intriguing pattern. A while back, I had looked into the circumstances under which amplitudes are positive. Henn found that “positive” is an understatement. In a certain region, the amplitudes we were looking at turn out to be not just positive, but always decreasing, with second derivative always positive. In fact, the derivatives appear to alternate, always with one sign or the other as one takes more derivatives. Henn is calling this unusual property “completely monotonous”, and trying to figure out how widely it holds.
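For reference, the pattern Henn is describing matches what mathematicians call a completely monotone function: writing the amplitude in that region as a function f of the relevant variable, the observation is that

$$(-1)^n \frac{d^n f}{dx^n} \;\geq\; 0 \qquad \text{for all } n = 0, 1, 2, \ldots,$$

so f is positive, decreasing, convex, and so on, with each successive derivative alternating in sign.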
Wednesday had a more mathematical theme. Bernd Sturmfels began with a “review talk” that largely focused on his own work on the space of curves with marked points, including a surprising analogy between amplitudes and the likelihood functions one needs to minimize in machine learning. Lauren Williams was the other “actual mathematician” of the day, and covered her work on various topics related to the amplituhedron.
The remaining talks on Wednesday were not literally by mathematicians, but were “mathematically informed”. Carolina Figueiredo and Hayden Lee talked about work with Nima Arkani-Hamed on different projects. Figueiredo’s talk covered recent developments in the “curve integral formalism”, a new step in Nima’s quest to geometrize everything in sight, this time in the context of more realistic theories. The talk, which, like those Nima gives, used tablet-written slides, described new insights one can gain from this picture, including new pictures of how more complicated amplitudes can be built up from simpler ones. If you want to understand the curve integral formalism further, I’d actually suggest instead looking at Mark Spradlin’s slides from later that day. The second part of Spradlin’s talk dealt with an area Figueiredo marked for future research, including fermions in the curve integral picture. I confess I’m still not entirely sure what the curve integral formalism is good for, but Spradlin’s talk gave me a better idea of what it’s doing. (The first part of his talk was on a different topic, exploring the space of string-like amplitudes to figure out which ones are actually consistent.)
Hayden Lee’s talk mentions the emergence of time, but the actual story is a bit more technical. Lee and collaborators are looking at cosmological correlators, observables like scattering amplitudes but for cosmology. Evaluating these is challenging with standard techniques, but can be approached with some novel diagram-based rules which let the results be described in terms of the measurable quantities at the end in a kind of “amplituhedron-esque” way.
Aidan Herderschee and Mariana Carrillo González had talks on Wednesday on ways of dealing with curved space. Herderschee talked about how various amplitudes techniques need to be changed to deal with amplitudes in anti-de-Sitter space, with difference equations replacing differential equations and sum-by-parts relations replacing integration-by-parts relations. Carrillo González looked at curved space through the lens of a special kind of toy model theory called a self-dual theory, which allowed her to do cosmology-related calculations using a double-copy technique.
Finally, Stephen Sharpe gave the second review talk on Wednesday. This was another “outside guest” talk, a discussion from someone who does Lattice QCD about how that community has been using its methods to calculate scattering amplitudes. They seem to count the number of particles a bit differently than we do; I’m curious whether this came up in the question session.
Universal gravitation was arguably Newton’s greatest discovery. Newton realized that the same laws could describe the orbits of the planets and the fall of objects on Earth, and that bodies like the Moon can be fully understood only if you take into account both the Earth’s and the Sun’s gravity. In a Newtonian world, every mass attracts every other mass in a tiny but detectable way.
Einstein, in turn, explained why. In Einstein’s general theory of relativity, gravity comes from the shape of space and time. Mass attracts mass, but energy affects gravity as well. Anything that can be measured has a gravitational effect, because the shape of space and time is nothing more than the rules by which we measure distances and times. So gravitation really is universal, and has to be universal.
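For the formula-minded, the two statements can sit side by side (standard textbook forms, nothing specific to this post): Newton’s attraction between any two masses, and Einstein’s equation tying the curvature of space-time to energy and momentum of any kind,

$$F = \frac{G\, m_1 m_2}{r^2}, \qquad G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}.$$

In Einstein’s version, anything that contributes to the energy-momentum tensor on the right-hand side, whether mass, light, or pressure, curves space-time, which is the precise sense in which gravitation has to be universal.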
…except when it isn’t.
It turns out, physicists can write down theories with some odd properties. Including theories where things are, in a certain sense, immune to gravity.
The story started with two mathematicians, Shiing-Shen Chern and Jim Simons. Chern and Simons weren’t trying to say anything in particular about physics. Instead, they cared about classifying different types of mathematical space. They found a formula that, when added up over one of these spaces, counted some interesting properties of that space. A bit more specifically, it told them about the space’s topology: rough details, like the number of holes in a donut, that stay the same even if the space is stretched or compressed. Their formula was called the Chern-Simons Form.
The physicist Albert Schwarz saw this Chern-Simons Form and realized it could be interpreted another way. He read it as a formula for a quantum field, like the electromagnetic field, describing how the field’s energy varied across space and time. He called the theory of that field Chern-Simons Theory, and it was one of the first examples of what would come to be known as topological quantum field theories.
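For readers who want the formula, here is the standard form for a gauge field A on a three-dimensional space M (the general shape of the thing, rather than a quote from Schwarz’s papers):

$$S_{\mathrm{CS}} = \frac{k}{4\pi} \int_M \mathrm{tr}\left( A \wedge dA + \tfrac{2}{3}\, A \wedge A \wedge A \right).$$

Notice what’s missing: there is no metric in it, nothing that measures distances or times, only the field A and the space M it lives on.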
In a topological field theory, every question you might want to ask can be answered in a topological way. Write down the chance you observe the fields at particular strengths in particular places, and you’ll find that the answer you get only depends on the topology of the space the fields occupy. The answers are the same if the space is stretched or squished together. That means that nothing you ask depends on the details of how you measure things, that nothing depends on the detailed shape of space and time. Your theory is, in a certain sense, independent of gravity.
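One precise way to say this, a standard statement about this class of theories rather than a quote from anywhere in particular: since the action makes no reference to the metric, varying it with respect to the metric gives zero,

$$T_{\mu\nu} \;\propto\; \frac{\delta S}{\delta g^{\mu\nu}} \;=\; 0.$$

The theory’s stress-energy tensor vanishes, so changing the geometry of space-time, the thing gravity actually does, changes nothing the theory can measure.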
Our world is, for the most part, not described by a topological theory: gravity matters! (Though topological theories can be a good approximation for describing certain materials.) These theories are most useful, though, in how they allow physicists and mathematicians to work together. Physicists don’t have a fully mathematically rigorous way of defining most of their theories, just a series of approximations and an overall picture that’s supposed to tie them together. For a topological theory, though, that overall picture has a rigorous mathematical meaning: it counts topological properties! As such, topological theories allow mathematicians to prove rigorous results about physical theories. They can take a theory of quantum fields or strings with a particular property physicists are curious about, and find a version of that property that they can study in full mathematical rigor. It’s been a boon both to mathematicians interested in topology and to physicists who want to know more about their theories.
So while you won’t have antigravity boots any time soon, theories that defy gravity are still useful!
As is traditional, twitter erupted into dumb arguments over this. Some made fun of Yann LeCun for implying that Elon Musk will be forgotten, which despite any other faults of his seems unlikely. Science popularizer Sabine Hossenfelder pointed out that there are two senses of “publish” getting confused here: publish as in “make public” and publish as in “put in a scientific journal”. The latter tends to be necessary for scientists in practice, but is not required in principle. (The way journals work has changed a lot over just the last century!) The former, Sabine argued, is still 100% necessary.
Plenty of people on twitter still disagreed (this always happens). It got me thinking a bit about the role of publication in science.
When we talk about what science requires or doesn’t require, what are we actually talking about?
“Science” is a word, and like any word its meaning is determined by how it is used. Scientists use the word “science”, of course, as do schools and governments and journalists. But if we’re getting into arguments about what does or does not count as science, then we’re wading into a philosophical problem, one philosophers of science have long wrestled with: what counts as science, and what doesn’t.
What do philosophers of science want? Many things, but a big one is to explain why science works so well. Over a few centuries, humanity went from understanding the world in terms of familiar materials and living creatures to decomposing them into molecules and atoms and cells and proteins. In doing this, we radically changed what we were capable of: computers beyond the reach of any blacksmith, cures for diseases that previously couldn’t even be told apart. And while other human endeavors have seen some progress over this time (democracy, human rights…), science’s accomplishment demands an explanation.
Part of that explanation, I think, has to include making results public. Alchemists were interested in many of the things later chemists were, and had started to get some valuable insights. But alchemists were fearful of what their knowledge would bring (especially the ones who actually thought they could turn lead into gold). They published almost exclusively in code. As such, the pieces of progress they made didn’t build up, didn’t aggregate, didn’t become overall progress. It was only when a new scientific culture emerged, when natural philosophers and physicists and chemists started writing to each other as clearly as they could, that knowledge began to build on itself.
Some on twitter pointed out the example of the Manhattan project during World War II. A group of scientists got together and made progress on something almost entirely in secret. Does that not count as science?
I’m willing to bite this bullet: I don’t think it does! When the Soviets tried to replicate the bomb, they mostly had to start from scratch, aside from some smuggled atomic secrets. Today, nations trying to build their own bombs know more, but they still must reinvent most of it. We may think this is a good thing; we may not want more countries to make progress in this way. But I don’t think we can deny that secrecy genuinely does slow progress!
At the same time, to contradict myself a bit: I think science can happen within a particular community. The scientists of the Manhattan project didn’t publish in journals the Soviets could read. But they did write internal reports; they did publish to each other. I don’t think science by its nature has to include the whole of humanity (if it does, then perhaps studying the inside of black holes really is unscientific). You probably can do science sticking to just your own little world. But it will be slower. Better, for progress’s sake, if you can include people from across the world.