Tag Archives: string theory

To Elliptics and Beyond!

I’ve been busy running a conference this week, Elliptics and Beyond.

After Amplitudes was held online this year, a few of us at the Niels Bohr Institute were inspired. We thought this would be the perfect time to hold a small online conference, focused on the Calabi-Yaus that have been popping up lately in Feynman diagrams. Then we heard from the organizers of Elliptics 2020. They had been planning to hold a conference in Mainz about elliptic integrals in Feynman diagrams, but had to postpone it due to the pandemic. We decided to team up and hold a joint conference on both topics: the elliptic integrals that are just starting to be understood, and the mysterious integrals that lie beyond. Hence, Elliptics and Beyond.

I almost suggested Buzz Lightyear for the logo but I chickened out

The conference has been fun thus far. There’s been a mix of review material bringing people up to speed on elliptic integrals and exciting new developments. Some are taking methods that have been successful in other areas and generalizing them to elliptic integrals, while others have been honing techniques for elliptics to make them “production-ready”. A few are looking ahead even further, to higher-genus amplitudes in string theory and Calabi-Yaus in Feynman diagrams.
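
(If you haven’t met the term before: an elliptic integral is, roughly speaking, an integral involving the square root of a cubic or quartic polynomial, one that can’t be reduced to logarithms or simpler functions. The classic example is the complete elliptic integral of the first kind,

\[ K(k) = \int_0^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2 t^2)}}, \]

which among other things gives the period of a pendulum swinging at large angles.)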

We organized the conference along similar lines to Zoomplitudes, but with a few experiments of our own. Like Zoomplitudes, we made a Slack space for the conference, so people could chat physics outside the talks. Ours was less active, though. I suspect that kind of space needs a critical mass of people, and with a smaller conference we may just not have gotten there. Having fewer people did allow us a more relaxed schedule, which in turn meant we could mostly keep things on-time. We had discussion sessions in the morning (European time), with talks in the afternoon, so almost everyone could at least make the talks. We also had a “conference dinner”, which went much better than I would have expected. We put people randomly into Zoom Breakout Rooms of five or six, to emulate the tables of an in-person conference, and folks chatted while eating their (self-brought of course) dinner. People seemed to really enjoy the chance to just chat casually with the other folks at the conference. If you’re organizing an online conference soon, I’d recommend trying it!

Holding a conference online means that a lot of people can attend who otherwise couldn’t. We had over a hundred people register, and while not all of them showed up there were typically fifty or sixty people on the Zoom session. Some of these were specialists in elliptics or Calabi-Yaus who wouldn’t ordinarily make it to a conference like this. Others were people from the rest of the amplitudes field who joined for parts of the conference that caught their eye. But surprisingly many weren’t even amplitudeologists, but students and young researchers in a variety of topics from all over the world. Some seemed curious and eager to learn; others, I suspect, just needed to say they had been to a conference. Both are responding to a situation where suddenly conference after conference is available online, free to join. It will be interesting to see if, and how, the world adapts.

Particles vs Waves, Particles vs Strings

On my “Who Am I?” page, I open with my background, calling myself a string theorist, then clarify: “in practice I’m more of a Particle Theorist, describing the world not in terms of short lengths of string but rather with particles that each occupy a single point in space”.

When I wrote that I didn’t think it would confuse people. Now that I’m older and wiser, I know people can be confused in a variety of ways. And since I recently saw someone confused about this particular phrase (yes I’m vagueblogging, but I suspect you’re reading this and know who you are 😉 ), I figured I’d explain it.

If you’ve learned a few things about quantum mechanics, maybe you have this slogan in mind:

“What we used to think of as particles are really waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”

With that in mind, my talk of “particles that each occupy a single point” doesn’t make sense. Doesn’t the slogan mean that particles don’t exist?

Here’s the thing: that’s the wrong slogan. The right slogan is just a bit different:

“What we used to think of as particles are ALSO waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”

The principle you were remembering is often called “wave-particle duality”. That doesn’t mean “particles don’t exist”. It means “waves and particles are the same thing”.

This matters, because just as wave-like properties are important, particle-like properties are important. And while it’s true that you can never know exactly where you will measure a particle, it’s also true that it’s useful, and even necessary, to think of it as occupying a single point.

That’s because particles can only affect each other when they’re at the same point. Physicists call this the principle of locality: the idea that there is no real “action at a distance”, that everything happens because of something traveling from point A to point B. Wave-particle duality doesn’t change that, it just makes the specific point uncertain. It means you have to add up over every specific point where the particles could have interacted, but each term in your sum still has to involve a specific point: quantum mechanics doesn’t let particles affect each other non-locally.
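
To make that a little more concrete, here is a schematic (my own notation, not a formula from any particular theory) of what such a sum looks like when two particles exchange a third:

\[ \mathcal{M} \;\sim\; \int d^4x \, d^4y \;\; J_{12}(x)\, G(x-y)\, J_{34}(y), \]

where \(J_{12}(x)\) stands for particles 1 and 2 meeting at the point \(x\), \(J_{34}(y)\) for particles 3 and 4 meeting at \(y\), and \(G(x-y)\) for the exchanged particle traveling between them. Every term ties each interaction to a definite point; the quantum-mechanical part is only the sum over all the points it could have been.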

Strings, in turn, are a little bit different. Strings have length, particles don’t. Particles interact at a point, strings can interact anywhere along the string. Strings introduce a teeny bit of non-locality.

When you compare particles and waves, you’re thinking pre-quantum mechanics, two classical things neither of which is the full picture. When you compare particles and strings, both are quantum, both are also waves. But in a meaningful sense one occupies a single point, and the other doesn’t.

Zoomplitudes Retrospective

During Zoomplitudes (my field’s big yearly conference, this year on Zoom) I didn’t have time to write a long blog post. I said a bit about the format, but didn’t get a chance to talk about the science. I figured this week I’d go back and give a few more of my impressions. As always, conference posts are a bit more technical than my usual posts, so regulars be warned!

The conference opened with a talk by Gavin Salam, there as an ambassador for LHC physics. Salam pointed out that, while a decent proportion of speakers at Amplitudes mention the LHC in their papers, that fraction has fallen over the years. (Another speaker jokingly wondered which of those mentions were just in the paper’s introduction.) He argued that there is still useful work for us, LHC measurements that will require serious amplitudes calculations to understand. He also brought up what seems like the most credible argument for a new, higher-energy collider: that there are important properties of the Higgs, in particular its interactions, that we still have not observed.

The next few talks hopefully warmed Salam’s heart, as they featured calculations for real-world particle physics. Nathaniel Craig and Yael Shadmi in particular covered the link between amplitudes and Standard Model Effective Field Theory (SMEFT), a method to systematically characterize corrections beyond the Standard Model. Shadmi’s talk struck me because the kind of work she described (building the SMEFT “amplitudes-style”, directly from observable information rather than more complicated proxies) is something I’d seen people speculate about for a while, but which hadn’t been done until quite recently. Now, several groups have managed it, and look like they’ve gotten essentially “all the way there”, rather than just partial results that only manage to replicate part of the SMEFT. Overall it’s much faster progress than I would have expected.
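
For context (a textbook-style schematic, not something from the talks themselves): the SMEFT organizes whatever new physics might be out there as a tower of corrections to the Standard Model Lagrangian, suppressed by powers of a heavy scale \(\Lambda\),

\[ \mathcal{L}_{\rm SMEFT} = \mathcal{L}_{\rm SM} + \sum_{d>4} \sum_i \frac{c_i^{(d)}}{\Lambda^{d-4}}\, \mathcal{O}_i^{(d)}. \]

The “amplitudes-style” construction Shadmi described aims to build up the possible corrections directly from the scattering amplitudes they produce, rather than from the operators \(\mathcal{O}_i^{(d)}\) themselves.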

After Shadmi’s talk was a brace of talks on N=4 super Yang-Mills, featuring cosmic Galois theory and an impressively groan-worthy “origin story” joke. The final talk of the day, by Hofie Hannesdottir, covered work with some of my colleagues at the NBI. Due to coronavirus I hadn’t gotten to hear about this in person, so it was good to hear a talk on it, a blend of old methods and new priorities to better understand some old discoveries.

The next day focused on a topic that has grown in importance in our community, calculations for gravitational wave telescopes like LIGO. Several speakers focused on new methods for collisions of spinning objects, where a few different approaches are making good progress (Radu Roiban’s proposal to use higher-spin field theory was particularly interesting) but things still aren’t quite “production-ready”. The older, post-Newtonian method is still very much production-ready, as evidenced by Michele Levi’s talk that covered, among other topics, our recent collaboration. Julio Parra-Martinez discussed some interesting behavior shared by both supersymmetric and non-supersymmetric gravity theories. Thibault Damour had previously expressed doubts about the use of amplitudes methods to answer this kind of question, and part of Parra-Martinez’s aim was to confirm the calculation with methods Damour would consider more reliable. Damour (who was actually in the audience, which I suspect would not have happened at an in-person conference) had already recanted some related doubts, but it’s not clear to me whether that extended to the results Parra-Martinez discussed (or whether Damour has stated the problem with his old analysis).

There were a few talks that day that didn’t relate to gravitational waves, though this might have been an accident, since both speakers also work on that topic. Zvi Bern’s talk linked to the previous day’s SMEFT discussion, with a calculation using amplitudes methods of direct relevance to SMEFT researchers. Clifford Cheung’s talk proposed a rather strange/fun idea, conformal symmetry in negative dimensions!

Wednesday was “amplituhedron day”, with a variety of talks on positive geometries and cluster algebras. Featured in several talks was “tropicalization”, a mathematical procedure that can simplify complicated geometries while still preserving essential features. Here, it was used to trim down infinite “alphabets” conjectured for some calculations into a finite set, and in doing so understand the origin of “square root letters”. The day ended with a talk by Nima Arkani-Hamed, who despite offering to bet that he could finish his talk within the half-hour slot took almost twice that. The organizers seemed to have planned for this, since there was one fewer talk that day, and as such the day ended at roughly the usual time regardless.
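
As a toy illustration of the idea (far simpler than anything in the talks), tropicalization trades a polynomial’s operations for “tropical” ones: multiplication becomes addition and addition becomes taking a minimum, so

\[ x^2 y + x y^3 + 1 \;\;\longrightarrow\;\; \min\!\big(2X + Y,\; X + 3Y,\; 0\big), \]

a piecewise-linear function whose “creases” retain combinatorial information about the original polynomial while forgetting most of its detail.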

We also took probably the most unique conference photo I will ever appear in.

For lack of a better name, I’ll call Thursday’s theme “celestial”. The day included talks by cosmologists (including approaches using amplitudes-ish methods from Daniel Baumann and Charlotte Sleight, and a curiously un-amplitudes-related talk from Daniel Green), talks on “celestial amplitudes” (amplitudes viewed from the surface of an infinitely distant sphere), and various talks with some link to string theory. I’m including in that last category intersection theory, which has really become its own thing. This included a talk by Simon Caron-Huot about using intersection theory more directly in understanding Feynman integrals, and a talk by Sebastian Mizera using intersection theory to investigate how gravity is Yang-Mills squared. Both gave me a much better idea of the speakers’ goals. In Mizera’s case he’s aiming for something very ambitious. He wants to use intersection theory to figure out when and how one can “double-copy” theories, and might figure out why the procedure “got stuck” at five loops. The day ended with a talk by Pedro Vieira, who gave an extremely lucid and well-presented “blackboard-style” talk on bootstrapping amplitudes.

Friday was a grab-bag of topics. Samuel Abreu discussed an interesting calculation using the numerical unitarity method. It was notable in part because renormalization played a bigger role than it does in most amplitudes work, and in part because they now have a cool logo for their group’s software, Caravel. Claude Duhr and Ruth Britto gave a two-part talk on their work on a Feynman integral coaction. I’d had doubts about the diagrammatic coaction they had worked on in the past because it felt a bit ad-hoc. Now, they’re using intersection theory, and have a clean story that seems to tie everything together. Andrew McLeod talked about our work on a Feynman diagram Calabi-Yau “bestiary”, while Cristian Vergu had a more rigorous understanding of our “traintrack” integrals.

There are two key elements of a conference that are tricky to do on Zoom. You can’t do a conference dinner, so you can’t do the traditional joke-filled conference dinner speech. The end of the conference is also tricky: traditionally, this is when everyone applauds the organizers and the secretaries are given flowers. As chair for the last session, Lance Dixon stepped up to fill both gaps, with a closing speech that was both a touching tribute to the hard work of organizing the conference and a hilarious pile of in-jokes, including a participation award to Arkani-Hamed for his (unprecedented, as far as I’m aware) perfect attendance.

Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s a space we kept finding for our “Calabi-Yau bestiary”, but it turns out one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

And that actually has a sensible name

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are if anything more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

In Defense of the Streetlight

If you read physics blogs, you’ve probably heard this joke before:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is”.

The drunk’s line of thinking has a name, the streetlight effect, and while it may seem ridiculous it’s a common error, even among experts. When it gets too tough to research something, scientists will often be tempted by an easier problem even if it has little to do with the original question. After all, it’s “where the light is”.

Physicists get accused of this all the time. Dark matter could be completely undetectable on Earth, but physicists still build experiments to search for it. Our universe appears to be curved one way, but string theory makes it much easier to study universes curved the other way, so physicists write a lot of nice proofs about a universe we don’t actually inhabit. In my own field, we spend most of our time studying a very nice theory that we know can’t describe the real world.

I’m not going to defend this behavior in general. There are real cases where scientists trick themselves into thinking they can solve an easy problem when they need to solve a hard one. But there is a crucial difference between scientists and drunkards looking for their keys, one that makes this behavior a lot more reasonable: scientists build technology.

As scientists, we can’t just grope around in the dark for our keys. The spaces we’re searching, from the space of all theories of gravity to actual outer space, are much too vast to search randomly. We need new ideas, new mathematics or new equipment, to do the search properly. If we were the drunkard of the story, we’d need to invent night-vision goggles.

Is the light better here, or is it just me?

Suppose you wanted to design new night-vision goggles, to search for your keys in the park. You could try to build them in the dark, but you wouldn’t be able to see what you were doing: you’d lose pieces, miss screws, and break lenses. Much better to build the goggles under that convenient streetlight.

Of course, if you build and test your prototype goggles under the streetlight, you risk that they aren’t good enough for the dark. You’ll have calibrated them in an unrealistic case. In all likelihood, you’ll have to go back and fix your goggles, tweaking them as you go, and you’ll run into the same problem: you can’t see what you’re doing in the dark.

At that point, though, you have an advantage: you now know how to build night-vision goggles. You’ve practiced building goggles in the light, and now even if the goggles aren’t good enough, you remember how you put them together. You can tweak the process, modify your goggles, and make something good enough to find your keys. You’re good enough at making goggles that you can modify them now, even in the dark.

Sometimes scientists really are like the drunk, searching under the most convenient streetlight. Sometimes, though, scientists are working where the light is for a reason. Instead of wasting their time lost in the dark, they’re building new technology and practicing new methods, getting better and better at searching until, when they’re ready, they can go back and find their keys. Sometimes, the streetlight is worth it.

Breakthrough Prize for Supergravity

This week, $3 Million was awarded by the Breakthrough Prize to Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen, the discoverers of the theory of supergravity, part of a special award separate from their yearly Fundamental Physics Prize. There’s a nice interview with Peter van Nieuwenhuizen on the Stony Brook University website, about his reaction to the award.

The Breakthrough Prize was designed to complement the Nobel Prize, rewarding deserving researchers who wouldn’t otherwise get the Nobel. The Nobel Prize is only awarded to theoretical physicists when they predict something that is later observed in an experiment. Many theorists are instead renowned for their mathematical inventions, concepts that other theorists build on and use but that do not by themselves make testable predictions. The Breakthrough Prize celebrates these theorists, and while it has also been awarded to others who the Nobel committee could not or did not recognize (various large experimental collaborations, Jocelyn Bell Burnell), this has always been the physics prize’s primary focus.

The Breakthrough Prize website describes supergravity as a theory that combines gravity with particle physics. That’s a bit misleading: while the theory does treat gravity in a “particle physics” way, unlike string theory it doesn’t solve the famous problems with combining quantum mechanics and gravity. (At least, as far as we know.)

It’s better to say that supergravity is a theory that links gravity to other parts of particle physics, via supersymmetry. Supersymmetry is a relationship between two types of particles: bosons, like photons, gravitons, or the Higgs, and fermions, like electrons or quarks. In supersymmetry, each type of boson has a fermion “partner”, and vice versa. In supergravity, gravity itself gets a partner, called the gravitino. Supersymmetry links the properties of particles and their partners together: both must have the same mass and the same charge. In a sense, it can unify different types of particles, explaining both under the same set of rules.
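
Very roughly, you can picture the supersymmetry generator \(Q\) as an operator that turns one kind of particle into the other,

\[ Q\,|\text{boson}\rangle = |\text{fermion}\rangle, \qquad Q\,|\text{fermion}\rangle = |\text{boson}\rangle . \]

Because \(Q\) commutes with the operators that measure mass and charge, a particle and its partner have to agree on both.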

In the real world, we don’t see bosons and fermions with the same mass and charge. If gravitinos exist, then supersymmetry would have to be “broken”, giving them a high mass that makes them hard to find. Some hoped that the Large Hadron Collider could find these particles, but now it looks like it won’t, so there is no evidence for supergravity at the moment.

Instead, supergravity’s success has been as a tool to understand other theories of gravity. When the theory was proposed in the 1970s, it was thought of as a rival to string theory. Instead, over the years it consistently managed to point out aspects of string theory that the string theorists themselves had missed, for example noticing that the theory needed not just strings but higher-dimensional objects called “branes”. Now, supergravity is understood as one part of a broader string theory picture.

In my corner of physics, we try to find shortcuts for complicated calculations. We benefit a lot from toy models: simpler, unrealistic theories that let us test our ideas before applying them to the real world. Supergravity is one of the best toy models we’ve got, a theory that makes gravity simple enough that we can start to make progress. Right now, colleagues of mine are developing new techniques for calculations at LIGO, the gravitational wave telescope. If they hadn’t worked with supergravity first, they would never have discovered these techniques.

The discovery of supergravity by Ferrara, Freedman, and van Nieuwenhuizen is exactly the kind of work the Breakthrough Prize was created to reward. Supergravity is a theory with deep mathematics, rich structure, and wide applicability. There is of course no guarantee that such a theory describes the real world. What is guaranteed, though, is that someone will find it useful.

Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline, with challenging QCD calculations that required powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality, and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that had previously been represented by Lorenzo Tancredi and collaborators as elliptic, but turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.
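
For those unfamiliar with the last step of that pipeline, a rough sketch: one collects the remaining “master” integrals into a vector \(\vec{f}\) and writes down how it changes with the kinematic variables, in favorable cases in the “canonical” form

\[ d\,\vec{f}(x;\epsilon) = \epsilon \, dA(x)\, \vec{f}(x;\epsilon), \]

which can then be solved order by order in the dimensional-regularization parameter \(\epsilon\).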

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on doing a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka, which overall is a pretty impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account in the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. Overall, I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always I compensated by accidentally going under-time. Overall, I think folks had fun regardless.

Though I don’t know how many people recognized this guy

Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to have useful calculations to compare to the LHC. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up their calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they are doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!

Things I’d Like to Know More About

This is an accountability post, of sorts.

As a kid, I wanted to know everything. Eventually, I realized this was a little unrealistic. Doomed to know some things and not others, I picked physics as a kind of triage. Other fields I could learn as an outsider: not well enough to compete with the experts, but enough to at least appreciate what they were doing. After watching a few string theory documentaries, I realized this wasn’t the case for physics: if I was going to ever understand what those string theorists were up to, I would have to go to grad school in string theory.

Over time, this goal lost focus. I’ve become a very specialized creature, an “amplitudeologist”. I didn’t have time or energy for my old questions. In an irony that will surprise no-one, a career as a physicist doesn’t leave much time for curiosity about physics.

One of the great things about this blog is how you guys remind me of those old questions, bringing me out of my overspecialized comfort zone. In that spirit, in this post I’m going to list a few things in physics that I really want to understand better. The idea is to make a public commitment: within a year, I want to understand one of these topics at least well enough to write a decent blog post on it.

Wilsonian Quantum Field Theory:

When you first learn quantum field theory as a physicist, you learn how unsightly infinite results get covered up via an ad-hoc-looking process called renormalization. Eventually you learn a more modern perspective, that these infinite results show up because we’re ignorant of the complete theory at high energies. You learn that you can think of theories at a particular scale, and characterize them by what happens when you “zoom” in and out, in an approach codified by the physicist Kenneth Wilson.

While I understand the basics of Wilson’s approach, the courses I took in grad school skipped the deeper implications. This includes the idea of theories that are defined at all energies, “flowing” from an otherwise scale-invariant theory perturbed with extra pieces. Other physicists are much more comfortable thinking in these terms, and the topic is important for quite a few deep questions, including what it means to properly define a theory and where laws of nature “live”. If I’m going to have an informed opinion on any of those topics, I’ll need to go back and learn the Wilsonian approach properly.
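
The core of that “zooming” picture can be stated in one line: each coupling \(g_i\) in the theory changes with the energy scale \(\mu\) according to its beta function,

\[ \mu \frac{d g_i}{d\mu} = \beta_i(g), \]

and the deeper questions above are really about what happens to this flow at its far ends, as \(\mu\) is taken to zero or to infinity.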

Wormholes:

If you’re a fan of science fiction, you probably know that wormholes are the most realistic option for faster-than-light travel, something that is at least allowed by the equations of general relativity. “Most realistic” isn’t the same as “realistic”, though. Opening a wormhole and keeping it stable requires some kind of “exotic matter”, and that matter needs to violate a set of restrictions, called “energy conditions”, that normal matter obeys. Some of these energy conditions are just conjectures, some we even know how to violate, while others are proven to hold for certain types of theories. Some energy conditions don’t rule out wormholes, but instead restrict their usefulness: you can have non-traversable wormholes (basically, two inescapable black holes that happen to meet in the middle), or traversable wormholes where the distance through the wormhole is always longer than the distance outside.
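
To give a flavor of what these restrictions look like, the null energy condition demands that the stress-energy tensor of matter satisfy

\[ T_{\mu\nu}\, k^\mu k^\nu \geq 0 \]

for every light-like vector \(k^\mu\); the science-fiction sort of traversable wormhole needs “exotic matter” that violates conditions of this type.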

I’ve seen a few talks on this topic, but I’m still confused about the big picture: which conditions have been proven, what assumptions were needed, and what do they all imply? I haven’t found a publicly-accessible account that covers everything. I owe it to myself as a kid, not to mention everyone who’s a kid now, to get a satisfactory answer.

Quantum Foundations:

Quantum Foundations is a field that many physicists think is a waste of time. It deals with the questions that troubled Einstein and Bohr, questions about what quantum mechanics really means, or why the rules of quantum mechanics are the way they are. These tend to be quite philosophical questions, where it’s hard to tell if people are making progress or just arguing in circles.

I’m more optimistic about philosophy than most physicists, at least when it’s pursued with enough analytic rigor. I’d like to at least understand the leading arguments for different interpretations, what the constraints on interpretations are and the main loopholes. That way, if I end up concluding the field is a waste of time at least I’d be making an informed decision.

Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one was funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.
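
As a quick reminder of the mathematics involved: a modular form of weight \(k\) is a function of a complex parameter \(\tau\) that transforms in a tightly controlled way under the modular group,

\[ f\!\left(\frac{a\tau + b}{c\tau + d}\right) = (c\tau + d)^k\, f(\tau), \qquad ad - bc = 1, \quad a,b,c,d \in \mathbb{Z}, \]

along with suitable analyticity conditions, and it is this rigidity that makes the symmetry and duality constraints so powerful.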

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”
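
The simplest examples of this single-valued replacement are the zeta values that appear as coefficients: as I understand it, the even ones are sent to zero and the odd ones are doubled,

\[ \zeta_{\rm sv}(2) = 0, \qquad \zeta_{\rm sv}(3) = 2\,\zeta(3), \qquad \zeta_{\rm sv}(5) = 2\,\zeta(5), \;\ldots \]

with the more complicated single-valued polylogarithms built up along similar lines.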

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a certain context of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) manifest particularly nicely. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past; in particular, it’s not yet possible to do this just by guessing the answer without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit farfetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.