It’s a question I’ve now heard several times, in different forms. People hear that I’ll be hired as a researcher at an institute of theoretical physics, and they ask, “what, exactly, are they paying you to research?”
The answer, with some caveats: “Whatever I want.”
When a company hires a researcher, they want to accomplish specific things: to improve their products, to make new ones, to cut down on fraud or out-think the competition. Some government labs are the same: if you work for NIST, for example, your work should contribute in some way to achieving more precise measurements and better standards for technology.
Other government labs, and universities, are different. They pursue basic research, research not on any specific application but on the general principles that govern the world. Researchers doing basic research are given a lot of freedom, and that freedom increases as their careers go on.
As a PhD student, a researcher is a kind of apprentice, working for their advisor. Even then, they have some independence: an advisor may suggest projects, but PhD students usually need to decide how to execute them on their own. In some fields, there can be even more freedom: in theoretical physics, it’s not unusual for the more independent students to collaborate with people other than just their advisor.
Postdocs, in turn, have even more freedom. In some fields they get hired to work on a specific project, but they tend to have more freedom as to how to execute it than a PhD student would. Other fields give them more or less free rein: in theoretical physics, a postdoc will have some guidance, but often will be free to work on whatever they find interesting.
Professors, and other long-term researchers, have the most freedom of all. Over the climb from PhD to postdoc to professor, researchers build judgement, demonstrating a track record of tackling worthwhile scientific problems. Universities, and institutes of basic research, trust that judgement. They hire for that judgement. They give their long-term researchers free rein to investigate whatever questions they think are valuable.
In practice, there are some restrictions. Usually, you’re supposed to research in a particular field: at an institute for theoretical physics, I should probably research theoretical physics. (But that can mean many things: one of my future colleagues studies the science of cities.) Further pressure comes from grant funding: money you need in order to hire other researchers or buy equipment, and money that can come with restrictions attached. When you apply for a grant, you have to describe what you plan to do. (In practice, grant agencies are more flexible about this than you might expect, allowing all sorts of changes if you have a good reason…but you still can’t completely reinvent yourself.) Your colleagues themselves also have an impact: it’s much easier to work on something when you can walk down the hall and ask an expert when you get stuck. It’s why we seek out colleagues who care about the same big questions as we do.
Overall, though, research is one of the freest professions there is. If you can get a job learning for a living, and do it well enough, then people will trust your judgement. They’ll set you free to ask your own questions, and seek your own answers.
My blog began, almost eleven years ago, with the title “Four Gravitons and a Grad Student”. Since then, I finished my PhD. The “Grad Student” dropped from the title, and the mysterious word “postdoc” showed up on a few pages. For three years I worked as a postdoc at the Perimeter Institute in Canada, before hopping the pond and starting another three-year postdoc job in Denmark. With a grant from the EU, three years became four. More funding got me to five (with a fancier title), and now it’s nearing six. At each step, my contract has been temporary: at first three years at a time, then one-year extensions. Each year I applied, all over the world, looking for a permanent job: for a chance to settle down somewhere, to build my own research group without worrying about having to move the next year.
This year, things have finally worked out. In the Fall I will be moving to France, starting a junior permanent position with L’Institut de Physique Théorique (or IPhT) at CEA Paris-Saclay.
A photo of the entryway to the Institute, taken when I interviewed
It’s been a long journey to get here, with a lot of soul-searching. This year in particular has been a year of reassessment: of digging deep and figuring out what matters to me, what I hope to accomplish and what clues I have to guide the way. Sometimes I feel like I’ve matured more as a physicist in the last year than in the last three put together.
The CEA (originally Commissariat à l’énergie atomique, now Commissariat à l’énergie atomique et aux énergies alternatives, or Alternative Energies and Atomic Energy Commission, and yes that means they’re using the “A” for two things at the same time), is roughly a parallel organization to the USA’s Department of Energy. Both organizations began as a way to manage their nation’s nuclear program, but both branched out, both into other forms of energy and into scientific research. Both run a nationwide network of laboratories, lightly linked but independent from their nations’ universities, both with notable facilities for particle physics. The CEA’s flagship site is in Saclay, on the outskirts of Paris, and it’s their Institute for Theoretical Physics where I’ll be working.
My new position is genuinely permanent: unlike a tenure-track position in the US, I don’t go up for review after a fixed span of time, with the expectation that if I don’t get promoted I lose the job altogether. It’s also not a university, which in particular means I’m not required to teach. I’ll have the option of teaching, working with nearby universities. In the long run, I think I’ll pursue that option. I’ve found teaching helpful the past couple years: it’s helped me think about physics, and think about how to communicate physics. But it’s good not to have to rush into preparing a new course when I arrive, as new professors often do.
Working temporary positions year after year, not knowing where I’ll be the next year, has been stressful. Others have had it worse, though. Some of you might have seen a recent post by Bret Deveraux, a military historian with a much more popular blog who has been in a series of adjunct positions. Deveraux describes the job market for the humanities in the US quite well. I’m in theoretical physics in Europe, so while my situation hasn’t been easy, it has been substantially better.
First, there’s the physics component. Physics has “adjunctified” much less than other fields. I don’t think I know a single physicist who has taken an adjunct teaching position, the kind of thing where you’re paid per course and only to teach. I know many who have left physics for other kinds of work, for Wall Street or Silicon Valley or to do data science for a bank or to teach high school. On the other side, I know people in other fields who do work as adjuncts, particularly in mathematics.
Deveraux blames the culture of his field, but I think funding must also play an important role. Physicists, and scientists in many other areas, rarely get professor positions right after their PhDs, but that doesn’t mean they leave the field entirely, because most can find postdoc positions. Those postdocs are focused on research, and are often paid for by government grants: in my field in the US, that usually means the Department of Energy. People can go through two or sometimes even three such positions before finding something permanent, if they don’t leave the field before that. Without something like the Department of Energy or National Institutes of Health providing funding, I don’t know if the humanities could imitate that structure even if they wanted to.
Europe, in turn, has a different situation than the US. Most European countries don’t have a tenure track: just permanent positions and fixed-term positions. Funding also works quite differently. Department of Energy funding in the US is spread widely and lightly: grants are shared by groups of theorists at a given university, each getting funding for a few postdocs and PhDs across the group. In Europe, a lot of the funding is much more concentrated: big grants from the European Research Council going to individual professors, with various national and private grants supplementing or mirroring that structure. That kind of funding, and the rarity of tenure, in turn leads to a different kind of temporary position: one hired not to teach a course, but to do research for as long as the funding lasts. The Danish word for my current title is Adjunkt, but that is, as they say in France, a faux ami: the official English translation is Assistant Professor, and it’s nothing like a US adjunct. I know people in a variety of forms of that kind of position in a variety of countries, people who landed a five-year grant where they could act like a professor, hire people and so on, but who in the end were expected to move when the grant was over. It’s a stressful situation, but at least it lets us further our research and make progress, unlike a US adjunct in the humanities or math who needs to spend much of their time on teaching.
I do hope Deveraux finds a permanent position; he’s got a great blog. And to return to the theme of the post, I am extremely grateful and happy that I have managed to find a permanent position. I’m looking forward to joining the group at Saclay: to learning more about physics from them, but also to having a place where I can start to build something, and make a lasting impact on the world around me.
Scientists want to know everything, and we’ve been trying to get there since the dawn of science. So why aren’t we there yet? Why are there things we still don’t know?
Sometimes, the reason is obvious: we can’t do the experiments yet. Victorian London had neither the technology nor the wealth to build a machine like Fermilab, so they couldn’t discover the top quark. Even if Newton had somehow had the idea for General Relativity, the telescopes of his era wouldn’t have let astronomers see its effect on the motion of Mercury. As we grow (in technology, in resources, in knowledge, in raw number of human beings), we can test more things and learn more about the world.
But I’m a theoretical physicist, not an experimental physicist. I still want to understand the world, but what I contribute aren’t new experiments, but new ideas and new calculations. This brings back the question in a new form: why are there calculations we haven’t done yet? Why are there ideas we haven’t had yet?
Sometimes, we can track the reason down to bottlenecks. A bottleneck is a step in a calculation that, for some reason, is harder than the rest. As you try to push a calculation to new heights, the bottleneck is the first thing that slows you down, like the way liquid bubbles through the neck of a literal bottle. If you can clear the bottleneck, you can speed up your calculation and accomplish more.
In the clearest cases, we can see how these bottlenecks could be solved with more technology. As computers get faster and more powerful, calculations become possible that weren’t possible before, in the same way new experiments become possible with new equipment. This is essentially what has happened recently with machine learning, where relatively old ideas are finally feasible to apply on a massive scale.
In physics, a subtlety is that we rarely have access to the most powerful computers available. Some types of physics are done on genuine supercomputers, but for more speculative or lower-priority research we have to use small computer clusters, or even our laptops. Something can be a bottleneck not because it can’t be done on any computer, but because it can’t be done on the computers we can afford.
Most of the time, bottlenecks aren’t quite so obvious. That’s because in theoretical physics, often, we don’t know what we want to calculate. If we want to know why something happens, and not merely that it happens, then we need a calculation that we can interpret, that “makes sense” and that thus, hopefully, we can generalize. We might have some ideas for how that calculation could work: some property a mathematical theory might have that we already know how to understand. Some of those ideas are easy to check, so we check, and make progress. Others are harder, and we have to decide: is the calculation worth it, if we don’t know if it will give us the explanation we need?
Those decisions provide new bottlenecks, often hidden ones. As we get better at calculation, the threshold for an “easy” check gets easier and easier to meet. We put aside fewer possibilities, so we notice more things, which inspire yet more ideas. We make more progress, not because the old calculations were impossible, but because they weren’t easy enough, and now they are. Progress fuels progress, a virtuous cycle that gets us closer and closer to understanding everything we want to understand (which is everything).
Nowadays, we have telescopes that detect not just light, but gravitational waves. We’ve already learned quite a bit about astrophysics from these telescopes. They observe ripples coming from colliding black holes, giving us a better idea of what kinds of black holes exist in the universe. But the coolest thing a gravitational wave telescope could discover is something that hasn’t been seen yet: a cosmic string.
This art is from an article in Symmetry magazine which is, as far as I can tell, not actually about cosmic strings.
You might have heard of cosmic strings, but unless you’re a physicist you probably don’t know much about them. They’re a prediction, coming from cosmology, of giant string-like objects floating out in space.
That might sound like it has something to do with string theory, but it doesn’t have to: you can have these things without any string theory at all. Instead, you might have heard that cosmic strings are some kind of “cracks” or “wrinkles” in space-time. Some articles describe this as like what happens when ice freezes, cracks forming as water settles into a crystal.
That description, in terms of ice forming cracks between crystals, is great…if you’re a physicist who already knows how ice forms cracks between crystals. If you’re not, I’m guessing reading those kinds of explanations isn’t helpful. I’m guessing you’re still wondering why there ought to be any giant strings floating in space.
The real explanation has to do with a type of mathematical gadget physicists use, called a scalar field. You can think of a scalar field as described by a number, like a temperature, that can vary in space and time. The field carries potential energy, and that energy depends on what the scalar field’s “number” is. Left alone, the field settles into a situation with as little potential energy as it can, like a ball rolling down a hill. That situation is one of the field’s default values, something we call a “vacuum” value. Changing the field away from its vacuum value can take a lot of energy. The Higgs boson is one example of a scalar field. Its vacuum value is the value it has in day to day life. In order to make a detectable Higgs boson at the Large Hadron Collider, they needed to change the field away from its vacuum value, and that took a lot of energy.
In the very early universe, almost back at the Big Bang, the world was famously in a hot dense state. That hot dense state meant that there was a lot of energy to go around, so scalar fields could vary far from their vacuum values, pretty much randomly. As the universe expanded and cooled, there was less and less energy available for these fields, and they started to settle down.
Now, the thing about these default, “vacuum” values of a scalar field is that there doesn’t have to be just one of them. Depending on the mathematical form of the field’s potential energy, there could be several different possibilities, each with equal energy.
Let’s imagine a simple example, of a field with two vacuum values: +1 and -1. As the universe cooled down, some parts of the universe would end up with that scalar field number equal to +1, and some to -1. But what happens in between?
The scalar field can’t just jump from -1 to +1; that’s not allowed in physics. It has to pass through 0 in between. But, unlike -1 and +1, 0 is not a vacuum value. When the scalar field number is equal to 0, the field has more energy than it does when it’s equal to -1 or +1. Usually, a lot more energy.
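To picture where that extra energy comes from, the standard textbook example is a “double well” potential (the λ here is just a positive constant I’m introducing to set the overall scale; nothing else in this post depends on it):

$$V(\phi) \;=\; \lambda\,\left(\phi^2 - 1\right)^2$$

This is zero at the two vacuum values φ = +1 and φ = -1, and positive everywhere in between, peaking at V(0) = λ right at the midpoint the field has to cross.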
That means the region of scalar field number 0 can’t spread very far: the further it spreads, the more energy it takes to keep it that way. On the other hand, the region can’t vanish altogether: something needs to happen to transition between the numbers -1 and +1.
The thing that happens is called a domain wall. A domain wall is a thin sheet, as thin as it can physically be, where the scalar field doesn’t take its vacuum value. You can roughly think of it as made up of the scalar field, a churning zone of the kind of bosons the LHC was trying to detect.
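For a double-well potential like the sketch above, the wall’s shape is a standard textbook result, the so-called “kink” solution (quoted here in simplified units, with the same constant λ as before):

$$\phi(x) \;=\; \tanh\!\left(\sqrt{2\lambda}\,x\right)$$

Here x measures distance across the wall: far to one side the field sits at -1, far to the other at +1, and only in a thin slab around x = 0, with thickness of order $1/\sqrt{2\lambda}$, does it stray from its vacuum values. That slab is the domain wall.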
This sheet still has a lot of energy, bound up in the unusual value of the scalar field, like an LHC collision in every proton-sized chunk. As such, like any object with a lot of energy, it has a gravitational field. For a domain wall, the effect of this gravity would be very very dramatic: so dramatic, that we’re pretty sure they’re incredibly rare. If they were at all common, we would have seen evidence of them long before now!
Ok, I’ve shown you a wall, that’s weird, sure. What does that have to do with cosmic strings?
The number representing a scalar field doesn’t have to be a real number: it can be imaginary instead, or even complex. Now I’d like you to imagine a field with vacuum values on the unit circle, in the complex plane. That means that +1 and -1 are still vacuum values, but so are i and -i, and everything else you can write as e^(iθ). However, 0 is still not a vacuum value. Neither is, for example, 2, which sits off the circle.
With vacuum values like this, you can’t form domain walls. You can make a path between -1 and +1 that only goes through the unit circle, through i for example. The field will be at its vacuum value throughout, taking no extra energy.
However, imagine the different regions form a circle. In the picture above, suppose that the blue area at the bottom is at vacuum value -1 and red is at +1. You might have i in the green region, and -i in the purple region, covering the whole circle smoothly as you go around.

Now, think about what happens in the middle of the circle. On one side of the circle, you have -1. On the other, +1. (Or, on one side i, on the other, -i.) No matter what, different sides of the circle are not allowed to be next to each other; you can’t just jump between them. So in the very middle of the circle, something else has to happen.
Once again, that something else is a field that goes away from its vacuum value, that passes through 0. Once again, that takes a lot of energy, so it occupies as little space as possible. But now, that space isn’t a giant wall. Instead, it’s a squiggly line: a cosmic string.
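One way to see why that middle region is trapped is to count how many times the field winds around the circle of vacuum values as you walk around a loop. Here’s a minimal numerical sketch of that bookkeeping (my own toy illustration, not code from any real cosmology simulation): if the winding comes out nonzero, the field has to leave the circle, and pass through 0, somewhere inside the loop.

```python
import numpy as np

def winding_number(phases):
    """Net number of times a closed loop of phases wraps around the circle."""
    closed = np.append(phases, phases[0])            # close the loop
    steps = np.diff(closed)
    steps = (steps + np.pi) % (2 * np.pi) - np.pi    # wrap each step to (-pi, pi]
    return steps.sum() / (2 * np.pi)

angle = np.linspace(0, 2 * np.pi, 200, endpoint=False)

# Field values that wrap once around the vacuum circle: a string must sit inside.
print(round(winding_number(angle)))                  # prints 1

# Field values that wobble near +1 without wrapping: no string required.
print(round(winding_number(0.3 * np.sin(angle))))    # prints 0
```

The winding around a given loop can only change if the field passes through 0 somewhere on that loop, that is, if a string crosses it.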
Cosmic strings don’t have as dramatic a gravitational effect as domain walls. That means they might not be super-rare. There might be some we haven’t seen yet. And if we do see them, it could be because they wiggle space and time, making gravitational waves.
Cosmic strings don’t require string theory; they come from a much more basic gadget, scalar fields. We know there is one quite important scalar field, the Higgs field. The Higgs vacuum values aren’t like +1 and -1, or like the unit circle, though, so the Higgs by itself won’t make domain walls or cosmic strings. But there are a lot of proposals for scalar fields, things we haven’t discovered but that physicists think might answer lingering questions in particle physics, and some of those could have the right kind of vacuum values to give us cosmic strings. Thus, if we manage to detect cosmic strings, we could learn something about one of those lingering questions.
As a kid who watched far too much educational television, I dimly remember learning about the USA’s first transcontinental railroad. Somehow, parts of the story stuck with me. Two companies built the railroad from different directions, one from California and the other from the middle of the country, aiming for a mountain in between. Despite the US Civil War happening around this time, the two companies kept building, in the end racing to the spot where the final tracks were joined with a golden spike.
I’m a theoretical physicist, so of course I don’t build railroads. Instead, I build new mathematical methods, ways to check our theories of particle physics faster and more efficiently. Still, something of that picture resonates with me.
You might think someone who develops new mathematical methods would be a mathematician, not a physicist. But while there are mathematicians who work on the problems I work on, their goals are a bit different. They care about rigor, about stating only things they can carefully prove. As such, they often need to work with simplified examples, “toy models” well-suited to the kinds of theorems they can build.
Physicists can be a bit messier. We don’t always insist on the same rigor the mathematicians do. This makes our results less reliable, but it makes our “toy models” a fair amount less “toy”. Our goal is to try to tackle questions closer to the actual real world.
What happens when physicists and mathematicians work on the same problem?
If the physicists worked alone, they might build and build, and end up with an answer that isn’t actually true. The mathematicians, keeping rigor in mind, would be safe in the truth of what they built, but might not end up anywhere near the physicists’ real-world goals.
Together, though, physicists and mathematicians can build towards each other. The physicists can keep their eyes on the mathematicians, correcting when they notice something might go wrong and building more and more rigor into their approach. The mathematicians can keep their eyes on the physicists, building more and more complex applications of their rigorous approaches to get closer and closer to the real world. Eventually, like the transcontinental railroad, the two groups meet: the mathematicians prove a rigorous version of the physicists’ approach, or the physicists adopt the mathematicians’ ideas and apply them to their own theories.
A sort of conference photo
In practice, it isn’t just two teams, physicists and mathematicians, building towards each other. Different physicists themselves work with different levels of rigor, aiming to understand different problems in different theories, and the mathematicians do the same. Each of us is building our own track, watching the other tracks build towards us on the horizon. Eventually, we’ll meet, and science will chug along over what we’ve built.
I’m at a workshop this week. It’s part of a series of “Bethe Forums”, cozy little conferences run by the Bethe Center for Theoretical Physics in Bonn.
You can tell it’s an institute for theoretical physics because they have one of these, but not a “doing room”
The workshop’s title, “Geometries and Special Functions for Physics and Mathematics”, covers a wide range of topics. There are talks on Calabi-Yau manifolds, elliptic (and hyper-elliptic) polylogarithms, and cluster algebras and cluster polylogarithms. Some of the talks are by mathematicians, others by physicists.
In addition to the talks, this conference added a fun, innovative element: “my favorite problem” sessions. The idea is that a speaker spends fifteen minutes introducing their “favorite problem”, then the audience spends fifteen minutes discussing it. Some treated these sessions roughly like short talks describing their work, with the open directions at the end framed as their favorite problem. Others aimed more broadly, trying to describe a general problem and motivate interest among people in other sub-fields.
This was a particularly fun conference for me, because the seemingly distinct topics all connect in one way or another to my own favorite problem. In our “favorite theory” of N=4 super Yang-Mills, we can describe our calculations in terms of an “alphabet” of pieces that let us figure out predictions almost “by guesswork”. These alphabets, at least in the cases we know how to handle, turn out to correspond to mathematical structures called cluster algebras. If we look at interactions of six or seven particles, these cluster algebras are a powerful guide. For eight or nine, they still seem to matter, but are much harder to use.
We don’t know what an “alphabet” should look like for these Calabi-Yau manifolds (but I’m working on it). Because of that, we don’t know how these cluster algebras should appear.
In my view, any explanation for the role of cluster algebras in our calculations has to extend to these cases, to elliptic polylogarithms and Calabi-Yau manifolds. Without knowing how to frame an alphabet for these things, we won’t be able to solve the lingering mysteries that fill our field.
Because of that, “my favorite problem” is one of my biggest motivations, the question that drives a large chunk of what I do. It’s what’s made this conference so much fun, and so stimulating: almost every talk had something I wanted to learn.
In physics, we sometimes say that an idea “breaks down”. What do we mean by that?
When a theory “breaks down”, we mean that it stops being accurate. Newton’s theory of gravity is excellent most of the time, but for objects under strong enough gravity or at high enough speed its predictions stop matching reality and a new theory (relativity) is needed. This is the sense in which we say that Newtonian gravity breaks down for the orbit of Mercury, or breaks down much more severely in the area around a black hole.
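To put a number on the Mercury example (this is the standard textbook figure, not a new calculation): general relativity predicts that an orbit’s point of closest approach slowly rotates, by an extra angle per orbit of roughly

$$\Delta\phi \;\approx\; \frac{6\pi G M_\odot}{c^2\, a\,(1 - e^2)},$$

where a and e are the orbit’s size and eccentricity. For Mercury this works out to the famous 43 arcseconds per century that Newton’s theory can’t account for: a tiny breakdown, but a real one.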
When a symmetry is “broken”, we mean that it stops holding true. Most of physics looks the same when you flip it in a mirror, a property called parity symmetry. Take a pile of electric and magnetic fields, currents and wires, and you’ll find their mirror reflection is also a perfectly reasonable pile of electric and magnetic fields, currents and wires. This isn’t true for all of physics, though: the weak nuclear force isn’t the same when you flip it in a mirror. We say that the weak force breaks parity symmetry.
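In equations, the statement is that flipping every position to its opposite, and letting the fields flip along with it,

$$\vec{x} \to -\vec{x}, \qquad \vec{E} \to -\vec{E}, \qquad \vec{B} \to +\vec{B},$$

leaves Maxwell’s equations with exactly the same form. (That’s the textbook way of saying electromagnetism respects parity; for the weak force, no assignment like this works.)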
What about when a more general “idea” breaks down? What about space-time?
In order for space-time to break down, there needs to be a good reason to abandon the idea. And depending on how stubborn you are about it, that reason can come at different times.
You might think of space-time as just Einstein’s theory of general relativity. In that case, you could say that space-time breaks down as soon as the world deviates from that theory. In that view, any modification to general relativity, no matter how small, corresponds to space-time breaking down. You can think of this as the “least stubborn” option, the one with barely any stubbornness at all, that will let space-time break down with a tiny nudge.
But if general relativity breaks down, a slightly more stubborn person could insist that space-time is still fine. You can still describe things as located at specific places and times, moving across curved space-time. They just obey extra forces, on top of those built into the space-time.
Such a person would be happy as long as general relativity was a good approximation of what was going on, but they might admit space-time has broken down when general relativity becomes a bad approximation. If there are only small corrections on top of the usual space-time picture, then space-time would be fine, but if those corrections got so big that they overwhelmed the original predictions of general relativity then that’s quite a different situation. In that situation, space-time may have stopped being a useful description, and it may be much better to describe the world in another way.
But we could imagine an even more stubborn person who still insists that space-time is fine. Ultimately, our predictions about the world are mathematical formulas. No matter how complicated they are, we can always subtract a piece off of those formulas corresponding to the predictions of general relativity, and call the rest an extra effect. That may be a totally useless thing to do that doesn’t help you calculate anything, but someone could still do it, and thus insist that space-time still hasn’t broken down.
To convince such a person, space-time would need to break down in a way that made some important concept behind it invalid. There are various ways this could happen, corresponding to different concepts. For example, one unusual proposal is that space-time is non-commutative. If that were true then, in addition to the usual Heisenberg uncertainty principle between position and momentum, there would be an uncertainty principle between different directions in space-time. That would mean that you can’t define the position of something in all directions at once, which many people would agree is an important part of having a space-time!
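In the simplest versions of that proposal, the statement is an explicit commutation relation between the coordinates themselves,

$$[\hat{x}^\mu, \hat{x}^\nu] \;=\; i\,\theta^{\mu\nu},$$

with θ a fixed set of constants. Just as with position and momentum in ordinary quantum mechanics, this implies an uncertainty relation, roughly $\Delta x^\mu\, \Delta x^\nu \gtrsim |\theta^{\mu\nu}|/2$, so the different coordinates of a single event can’t all be pinned down at once.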
Ultimately, physics is concerned with practicality. We want our concepts not just to be definable, but to do useful work in helping us understand the world. Our stubbornness should depend on whether a concept, like space-time, is still useful. If it is, we keep it. But if the situation changes, and another concept is more useful, then we can confidently say that space-time has broken down.
Sometimes, scientists work alone. But mostly, scientists collaborate. We team up, getting more done together than we could alone.
Over the years, I’ve realized that theoretical physicists like me collaborate in a bit of a weird way, compared to other scientists. Most scientists do experiments, and those experiments require labs. Each lab typically has one principal investigator, or “PI”, who hires most of the other people in that lab. For any given project, scientists from the lab will be organized into particular roles. Some will be involved in the planning, some not. Some will do particular tests, gather data, manage lab animals, or do statistics. The whole experiment is at least roughly planned out from the beginning, and everyone has their own responsibility, to the extent that journals will sometimes ask scientists to list everyone’s roles when they publish papers. In this system, it’s rare for scientists from two different labs to collaborate. Usually it happens for a reason: a lab needs a statistician for a particularly subtle calculation, or one lab must process a sample so another lab can analyze it.
In contrast, theoretical physicists don’t have labs. Our collaborators sometimes come from the same university, but often they’re from a different one, frequently even in a different country. The way we collaborate is less like other scientists, and more like artists.
Sometimes, theoretical physicists have collaborations with dedicated roles and a detailed plan. This can happen when there is a specific calculation that needs to be done, that really needs to be done right. Some of the calculations that go into making predictions at the LHC are done in this way. I haven’t been in a collaboration like that (though in retrospect one collaborator may have had something like that in mind).
Instead, most of the collaborations I’ve been in have been more informal. They tend to start with a conversation. We chat by the coffee machine, or after a talk, anywhere there’s a blackboard nearby. It starts with “I’ve noticed something odd”, or “here’s something I don’t understand”. Then, we jam. We go back and forth, doing our thing and building on each other. Sometimes this happens in person, a barrage of questions and doubts until we hammer out something solid. Sometimes we go back to our offices, to calculate and look up references. Coming back the next day, we compare results: what did you manage to show? Did you get what I did? If not, why?
I make this sound spontaneous, but it isn’t completely. That starting conversation can be totally unplanned, but usually one of the scientists involved is trying to make it happen. There’s a different way you talk when you’re trying to start a collaboration, compared to when you just want to talk. If you’re looking for a collaboration, you go into more detail. If the other person is on the same wavelength, you start using “we” instead of “I”, or you start suggesting plans of action: “you could do X, while I do Y”. If you just want someone’s opinion, or just want to show off, then your conversation is less detailed, and less personal.
This is easiest to do with our co-workers, but we do it with people from other universities too. Sometimes this happens at conferences, more often during short visits for seminars. I’ve been on almost every end of this. As a visitor, I’ve arrived to find my hosts with a project in mind. As a host, I’ve invited a visitor with the goal of getting them involved in a collaboration, and I’ve received a visitor who came with their own collaboration idea.
After an initial flurry of work, we’ll have a rough idea of whether the project is viable. If it is, things get a bit more organized, and we sort out what needs to be done and a rough idea of who will do it. While the early stages really benefit from being done in person, this part is easier to do remotely. The calculations get longer but the concepts are clear, so each of us can work by ourselves, emailing when we make progress. If we get confused again, we can always schedule a Zoom to sort things out.
Once things are close (but often not quite done), it’s time to start writing the paper. In the past, I used Dropbox for this: my collaborators shared a folder with a draft, and we’d pass “control” back and forth as we wrote and edited. Now, I’m more likely to use something built for this purpose. For some collaborations that’s Git, a tool programmers use to collaborate on code: it lets you roll back edits you don’t like, and merge edits from two people to make sure they’re consistent. For other collaborations I use Overleaf, an online interface for the document-writing language LaTeX that lets multiple people edit in real time. Either way, this part is also more or less organized, with a lot of “can you write this section?” that can shift around depending on how busy people end up being.
Finally, everything comes together. The edits stabilize, everyone agrees that the paper is good (or at least, that any dissatisfaction they have is too minor to be worth arguing over). We send it to a few trusted friends, then a few days later up on the arXiv it goes.
Then, the cycle begins again. If the ideas are still clear enough, the same collaboration might keep going, planning follow-up work and follow-up papers. We meet new people, or meet up with old ones, and establish new collaborations as we go. Our fortunes ebb and flow based on the conversations we have, the merits of our ideas and the strengths of our jams. Sometimes there’s more, sometimes less, but it keeps bubbling up if you let it.
There are infinitely many of these ladder diagrams, but they’re all beautifully simple, variations on a theme that can be written down in a precise mathematical way.
Change things a little bit, though, and the situation gets wildly more intractable. Let the rungs of the ladder peek through the sides, and you get something looking more like the tracks for a train:
These traintrack integrals are much more complicated. Describing them requires the mathematics of Calabi-Yau manifolds, involving higher and higher dimensions as the tracks get longer. I don’t think there’s any hope of understanding these things for all loops, at least not any time soon.
What if we aimed somewhere in between? A ladder that just started to turn traintrack?
Add just a single pair of rungs, and things remain relatively simple. It turns out we don’t need any complicated Calabi-Yau manifolds, just the simplest Calabi-Yau manifold, called an elliptic curve. It’s actually the same curve for every version of the diagram. And the situation is simple enough that, with some extra cleverness, it looks like we’ve found a trick to calculate these diagrams to any number of loops we’d like.
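For readers who want something concrete: an elliptic curve can be written as the set of solutions of a simple polynomial equation, whose standard form is

$$y^2 \;=\; x^3 + a\,x + b,$$

with a and b constants. (That’s just the generic textbook form; I won’t try to reproduce the specific curve these diagrams single out, which is determined by the details of the calculation.)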
These developments are exciting, because Feynman diagrams involving elliptic curves are still tough to deal with. We still have whole conferences about them. These new elliptic diagrams can be a long list of test cases, things we can experiment with at any number of loops. With time, we might truly understand them as well as the ladder diagrams!
The problem of quantum gravity is one of the most famous problems in physics. You’ve probably heard someone say that quantum mechanics and general relativity are fundamentally incompatible. Most likely, this was narrated over pictures of a foaming, fluctuating grid of space-time. Based on that, you might think that all we have to do to solve this problem is to measure some quantum property of gravity. Maybe we could make a superposition of two different gravitational fields, see what happens, and solve the problem that way.
I mean, we could do that; some people are trying to. But it won’t solve the problem. That’s because the problem of quantum gravity isn’t just the problem of quantum gravity. It’s the problem of high-energy quantum gravity.
Merging quantum mechanics and general relativity is actually pretty easy. General relativity is a big conceptual leap, certainly, a theory in which gravity is really just the shape of space-time. At the same time, though, it’s also a field theory, the same general type of theory as electromagnetism. It’s a weirder field theory than electromagnetism, to be sure, one with deeper implications. But if we want to describe low energies, and weak gravitational fields, then we can treat it just like any other field theory. We know how to write down some pretty reasonable-looking equations, we know how to do some basic calculations with them. This part is just not that scary.
The scary part happens later. The theory we get from these reasonable-looking equations continues to look reasonable for a while. It gives formulas for the probability of things happening: things like gravitational waves bouncing off each other, as they travel through space. The problem comes when those waves have very high energy, and the nice reasonable probability formula now says that the probability is greater than one.
For those of you who haven’t taken a math class in a while, probabilities greater than one don’t make sense. A probability of one is a certainty, something guaranteed to happen. A probability greater than one isn’t more certain than certain, it’s just nonsense.
So we know something needs to change, we know we need a new theory. But we only know we need that theory when the energy is very high: when it reaches the Planck energy. Below that, there might still be a different theory, but there doesn’t have to be: it’s not a “problem” yet.
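You can see roughly why the Planck energy is the threshold with dimensional analysis (a heuristic, not the careful calculation). Newton’s constant carries units, so the dimensionless strength of gravitational scattering grows with energy:

$$\frac{G\,E^2}{\hbar\, c^5} \;=\; \left(\frac{E}{E_{\text{Planck}}}\right)^2, \qquad E_{\text{Planck}} = \sqrt{\frac{\hbar\, c^5}{G}}.$$

The nice reasonable probability formulas are built from this ratio, and they stop making sense once it gets close to one, which is exactly when the energy reaches the Planck energy.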
Now, a few of you understand this part, but still have a misunderstanding. The Planck energy seems high for particle physics, but it isn’t high in an absolute sense: it’s about the energy in a tank of gasoline. Does that mean that all we have to do to measure quantum gravity is to make a quantum state out of your car?
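(That tank-of-gasoline comparison is easy to check, by the way. Here’s a quick back-of-the-envelope script, with round numbers of my own choosing rather than anything precise:)

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's constant, m^3 / (kg s^2)

# Planck energy in joules: sqrt(hbar * c^5 / G), roughly 2e9 J.
planck_energy = math.sqrt(hbar * c**5 / G)
print(f"Planck energy: {planck_energy:.2e} J")

# Gasoline stores roughly 34 MJ per litre; a car's tank holds around 50-60 L.
tank_energy = 34e6 * 57
print(f"Energy in ~57 L of gasoline: {tank_energy:.2e} J")
```

Both come out around two billion joules.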
Again, no. That’s because the problem of quantum gravity isn’t just the problem of high-energy quantum gravity either.
Energy seems objective, but it’s not. It’s subjective, or more specifically, relative. Due to special relativity, observers moving at different speeds observe different energies. Because of that, high energy alone can’t be the requirement: it isn’t something either general relativity or quantum field theory can “care about” by itself.
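A single free particle already makes the point. Its energy is

$$E \;=\; \gamma\, m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

where v is the particle’s speed relative to whoever is measuring. Boost to a frame where the particle streaks by close to the speed of light and E is as large as you like, even though nothing about the particle itself has changed.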
Instead, the real thing that matters is something that’s invariant under special relativity. This is hard to define in general terms, but it’s best to think of it as a requirement not on energy, but on energy density.
(For the experts: I’m justifying this phrasing in part because of how you can interpret the quantity appearing in energy conditions as the energy density measured by an observer. This still isn’t the correct way to put it, but I can’t think of a better way that would be understandable to a non-technical reader. If you have one, let me know!)
Why do we need quantum gravity to fully understand black holes? Not just because they have a lot of mass, but because they have a lot of mass concentrated in a small area, a high energy density. Ditto for the Big Bang, when the whole universe had a very large energy density. Particle colliders are useful not just because they give particles high energy, but because they give particles high energy and put them close together, creating a situation with very high energy density.
Once you understand this, you can use it to think about whether some experiment or observation will help with the problem of quantum gravity. Does the experiment involve very high energy density, much higher than anything we can do in a particle collider right now? Is that telescope looking at something created in conditions of very high energy density, or just something nearby?
It’s not impossible for an experiment that doesn’t meet these conditions to find something. Whatever the correct quantum gravity theory is, it might be different from our current theories in a more dramatic way, one that’s easier to measure. But the only guarantee, the only situation where we know we need a new theory, is for very high energy density.