At Geometries and Special Functions for Physics and Mathematics in Bonn

I’m at a workshop this week. It’s part of a series of “Bethe Forums”, cozy little conferences run by the Bethe Center for Theoretical Physics in Bonn.

(Photo caption: You can tell it’s an institute for theoretical physics because they have one of these, but not a “doing room”.)

The workshop’s title, “Geometries and Special Functions for Physics and Mathematics”, covers a wide range of topics. There are talks on Calabi-Yau manifolds, elliptic (and hyper-elliptic) polylogarithms, and cluster algebras and cluster polylogarithms. Some of the talks are by mathematicians, others by physicists.

In addition to the talks, this conference added a fun, innovative element, “my favorite problem sessions”. The idea is that a speaker spends fifteen minutes introducing their “favorite problem”, then the audience spends fifteen minutes discussing it. Some treated these sessions roughly like short talks describing their work, with the open directions at the end framed as their favorite problem. Others aimed more broadly, trying to describe a general problem and spark interest among people from other sub-fields.

This was a particularly fun conference for me, because the seemingly distinct topics all connect in one way or another to my own favorite problem. In our “favorite theory” of N=4 super Yang-Mills, we can describe our calculations in terms of an “alphabet” of pieces that let us figure out predictions almost “by guesswork”. These alphabets, at least in the cases we know how to handle, turn out to correspond to mathematical structures called cluster algebras. If we look at interactions of six or seven particles, these cluster algebras are a powerful guide. For eight or nine, they still seem to matter, but are much harder to use.

For ten particles, though, things get stranger. That’s because ten particles is precisely where elliptic curves, and their related elliptic polylogarithms, show up. Things then get yet more strange, and with twelve particles or more we start seeing Calabi-Yau manifolds magically show up in our calculations.

We don’t know what an “alphabet” should look like for these Calabi-Yau manifolds (but I’m working on it). Because of that, we don’t know how these cluster algebras should appear.

In my view, any explanation for the role of cluster algebras in our calculations has to extend to these cases, to elliptic polylogarithms and Calabi-Yau manifolds. Without knowing how to frame an alphabet for these things, we won’t be able to solve the lingering mysteries that fill our field.

Because of that, “my favorite problem” is one of my biggest motivations, the question that drives a large chunk of what I do. It’s what’s made this conference so much fun, and so stimulating: almost every talk had something I wanted to learn.

Talking and Teaching

Someone recently shared with me an article written by David Mermin in 1992 about physics talks. Some aspects are dated (our slides are no longer sheets of plastic, and I don’t think anyone writing an article like that today would feel the need to put it in the mouth of a fictional professor (which is a shame honestly)), but most of it still holds true. I particularly recognized the self-doubt of being a young physicist sitting in a talk and thinking “I’m supposed to enjoy this?”

Mermin’s basic point is to keep things as light as possible. You want to convey motivation more than content, and background more than your own contributions. Slides should be sparse, both because people won’t be able to see everything and because people can get frustrated “reading ahead” of what you say.

Mermin’s suggestion that people read from a prepared text was probably good advice for him, but maybe not for others. It can be good if you can write like he does, but I don’t think most people’s writing is that much better than what they say in talks (you can judge this by reading people’s papers!) Some are much clearer speaking impromptu. I agree with him that in practice people end up just reading from their slides, which indeed is bad, but reading from a normal physics paper isn’t any better.

I also don’t completely agree with him about the value of speech over text. Yes, putting text on your slides means people can read ahead (unless you hide some of the text, which is easier to do these days than in the days of overhead transparencies). But if you only say something aloud, then anyone whose attention lapses for just a moment will be lost. Unless you repeat yourself a lot (good practice in any case), you shouldn’t rely on speech alone for anything you need your audience to remember: make sure they can also read it somewhere, in case they need it.

That said, “if they need it” is doing a lot of work here, and this is where I agree again with Mermin. Fundamentally, you don’t need to convey everything you think you do. (I don’t usually need to convey everything I think I do!) It’s a lesson I’ve been learning this year from pedagogy courses, a message they try to instill in everyone who teaches at the university. If you want to really convey something well, then you just can’t convey that much. You need to focus, pick a few things and try to get them across, and structure the rest of what you say to reinforce those things. When teaching, or when speaking, less is more.

On Stubbornness and Breaking Down

In physics, we sometimes say that an idea “breaks down”. What do we mean by that?

When a theory “breaks down”, we mean that it stops being accurate. Newton’s theory of gravity is excellent most of the time, but for objects under strong enough gravity or at high enough speed its predictions stop matching reality and a new theory (relativity) is needed. This is the sense in which we say that Newtonian gravity breaks down for the orbit of Mercury, or breaks down much more severely in the area around a black hole.

When a symmetry is “broken”, we mean that it stops holding true. Most of physics looks the same when you flip it in a mirror, a property called parity symmetry. Take a pile of electric and magnetic fields, currents and wires, and you’ll find their mirror reflection is also a perfectly reasonable pile of electric and magnetic fields, currents and wires. This isn’t true for all of physics, though: the weak nuclear force isn’t the same when you flip it in a mirror. We say that the weak force breaks parity symmetry.

What about when a more general “idea” breaks down? What about space-time?

In order for space-time to break down, there needs to be a good reason to abandon the idea. And depending on how stubborn you are about it, that reason can come at different times.

You might think of space-time as just Einstein’s theory of general relativity. In that case, you could say that space-time breaks down as soon as the world deviates from that theory. In that view, any modification to general relativity, no matter how small, corresponds to space-time breaking down. You can think of this as the “least stubborn” option, the one with barely any stubbornness at all, that will let space-time break down with a tiny nudge.

But if general relativity breaks down, a slightly more stubborn person could insist that space-time is still fine. You can still describe things as located at specific places and times, moving across curved space-time. They just obey extra forces, on top of those built into the space-time.

Such a person would be happy as long as general relativity was a good approximation of what was going on, but they might admit space-time has broken down when general relativity becomes a bad approximation. If there are only small corrections on top of the usual space-time picture, then space-time would be fine, but if those corrections got so big that they overwhelmed the original predictions of general relativity then that’s quite a different situation. In that situation, space-time may have stopped being a useful description, and it may be much better to describe the world in another way.

But we could imagine an even more stubborn person who still insists that space-time is fine. Ultimately, our predictions about the world are mathematical formulas. No matter how complicated they are, we can always subtract a piece off of those formulas corresponding to the predictions of general relativity, and call the rest an extra effect. That may be a totally useless thing to do that doesn’t help you calculate anything, but someone could still do it, and thus insist that space-time still hasn’t broken down.

To convince such a person, space-time would need to break down in a way that made some important concept behind it invalid. There are various ways this could happen, corresponding to different concepts. For example, one unusual proposal is that space-time is non-commutative. If that were true then, in addition to the usual Heisenberg uncertainty principle between position and momentum, there would be an uncertainty principle between different directions in space-time. That would mean that you can’t define the position of something in all directions at once, which many people would agree is an important part of having a space-time!
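For the curious, the most commonly studied version of this proposal (a detail beyond what’s described above, so take it as illustration) replaces the space-time coordinates with operators that fail to commute:

[\hat{x}^\mu, \hat{x}^\nu] = i\theta^{\mu\nu}

where \theta^{\mu\nu} is a fixed set of constants. Just as [\hat{x},\hat{p}]=i\hbar leads to the usual position-momentum uncertainty principle, this relation implies an uncertainty principle between different space-time directions, so you can’t pin down all of something’s coordinates at once.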

Ultimately, physics is concerned with practicality. We want our concepts not just to be definable, but to do useful work in helping us understand the world. Our stubbornness should depend on whether a concept, like space-time, is still useful. If it is, we keep it. But if the situation changes, and another concept is more useful, then we can confidently say that space-time has broken down.

Visiting CERN

So, would you believe I’ve never visited CERN before?

I was at CERN for a few days this week, visiting friends and collaborators and giving an impromptu talk. Surprisingly, this is the first time I’ve been, a bit of an embarrassing admission for someone who’s ostensibly a particle physicist.

Despite that, CERN felt oddly familiar. The maze of industrial buildings and winding roads, the security gates and cards (and work-arounds for when you arrive outside of card-issuing hours, assisted by friendly security guards), the constant construction and remodeling, all of it reminded me of the times I visited SLAC during my PhD. This makes a lot of sense, of course: one accelerator is at least somewhat like another. But besides a visit to Fermilab for a conference several years ago, I haven’t been in many other places like that since then.

(One thing that might have also been true of SLAC and Fermilab but I never noticed: CERN buildings not only have evacuation instructions for the building in case of a fire, but also evacuation instructions for the whole site.)

CERN is a bit less “pretty” than SLAC on average, without the nice grassy area in the middle or the California sun that goes with it. It makes up for it with what seem to be more extensive outreach resources, including a big wooden dome of a mini-museum sponsored by Rolex, and a larger visitor center still under construction.

(Photo caption: The outside, including a sculpture depicting the history of science with the Higgs boson discovery on the “cutting edge”.)
(Photo caption: The inside. Bubbles on the ground contain either touchscreens or small objects (detectors, papers, a blackboard with the string theory genus expansion for some reason). Bubbles in the air were too high for me to check.)

CERN hosts theoretical physicists doing many different types of work. I was hosted by the “QCD group”, but the string theorists just down the hall include a few people I know as well. The lounge had a few cardboard signs hidden under the table, leftovers of CERN’s famous yearly Christmas play directed by John Ellis.

It’s been a fun, if brief, visit. I’ll likely get to see a bit more this summer, when they host Amplitudes 2023. Until then, it was fun reconnecting with that “accelerator feel”.

The Temptation of Spinoffs

Read an argument for a big scientific project, and you’ll inevitably hear mention of spinoffs. Whether it’s NASA bringing up velcro or CERN and the World Wide Web, scientists love to bring up times when a project led to some unrelated technology that improved people’s lives.

Just as inevitably as they show up, though, these arguments face criticism. Advocates of the projects argue that promoting spinoffs misses the point, training the public to think about science in terms of unrelated near-term gadgets rather than the actual point of the experiments. They think promoters should focus on the scientific end-goals, justifying them either in terms of benefit to humanity or as a broader, “it makes the country worth defending” human goal. It’s a perspective that shows up in education too, where even when students ask “when will I ever use this in real life?” it’s not clear that’s really what they mean.

On the other side, opponents of the projects will point out that the spinoffs aren’t good enough to justify the science. Some, like velcro, weren’t actually spinoffs to begin with. Others seem like tiny benefits compared to the vast cost of the scientific projects, or like things that would have been much easier to get with funding that was actually dedicated to achieving the spinoff.

With all these downsides, why do people keep bringing spinoffs up? Are they just a cynical attempt to confuse people?

I think there’s something less cynical going on here. Things make a bit more sense when you listen to what scientists say, not to the public, but to scientists in other disciplines.

Scientists speaking to fellow scientists still mention spinoffs, but they mention scientific spinoffs. The speaker in a talk I saw recently pointed out that the LHC doesn’t just help with particle physics: by exploring the behavior of collisions of high-energy atomic nuclei it provides essential information for astrophysicists understanding neutron stars and cosmologists studying the early universe. When these experiments study situations we can’t model well, they improve the approximations we use to describe those situations in other contexts. By knowing more, we know more. Knowledge builds on knowledge, and the more we know about the world the more we can do, often in surprising and un-planned ways.

I think that when scientists promote spinoffs to the public, they’re trying to convey this same logic. Like promoting an improved understanding of stars to astrophysicists, they’re modeling the public as “consumer goods scientists” and trying to pick out applications they’d find interesting.

Knowing more does help us know more, that much is true. And eventually that knowledge can translate to improving people’s lives. But in a public debate, people aren’t looking for these kinds of principles, let alone a scientific “I’ll scratch your back if you’ll scratch mine”. They’re looking for something like a cost-benefit analysis, “why are we doing this when we could do that?”

(This is not to say that most public debates involve especially good cost-benefit analysis. Just that it is, in the end, what people are trying to do.)

Simply listing spinoffs doesn’t really get at this. The spinoffs tend to be either small enough that they don’t really argue the point (velcro, even if NASA had invented it, could probably have been developed more cheaply without a space program), or big but extremely unpredictable (it’s not like we’re going to invent another World Wide Web).

Focusing on the actual end-products of the science should do a bit better. That can include “scientific spinoffs”, if not the “consumer goods spinoffs”. Those collisions of heavy nuclei change our understanding of how we model complex systems. That has applications in many areas of science, from how we model stars to materials to populations, and those applications in turn could radically improve people’s lives.

Or, well, they could not. Basic science is very hard to do cost-benefit analyses with. It’s the fabled explore/exploit dilemma, whether to keep trying to learn more or focus on building on what you have. If you don’t know what’s out there, if you don’t know what you don’t know, then you can’t really solve that dilemma.

So I get the temptation of reaching for spinoffs, of pointing to something concrete in everyday life and saying “science did that!” Science does radically improve people’s lives, but it doesn’t always do it especially quickly. You want to teach people that knowledge leads to knowledge, and you try to communicate it the way you would to other scientists, by saying how your knowledge and theirs intersect. But if you want to justify science to the public, you want something with at least the flavor of cost-benefit analysis. And you’ll get more mileage out of that if you think about where the science itself can go than if you focus on the consumer goods it accidentally spins off along the way.

Valentine’s Day Physics Poem 2023

Since Valentine’s Day was this week, it’s time for the next installment of my traditional Valentine’s Day Physics Poems. New readers, don’t let this drive you off, I only do it once a year! And if you actually like it, you can take a look at poems from previous years here.

Married to a Model

If you ever face a physics class distracted,
Rappers and footballers twinkling on their phones,
Then like an awkward youth pastor, interject,
“You know who else is married to a Model?”

Her name is Standard, you see,
Wife of fifty years to Old Man Physics,
Known for her beauty, charm, and strangeness too.
But Old Man Physics has a wandering eye,
and dreams of Models Beyond.

Let the old man bend your ear,
you’ll hear
a litany of Problems.

He’ll never understand her, so he starts.
Some matters she holds weighty, some feather-light
with nary rhyme or reason
(which he is owed, he’s sure).

She’s unnatural, he says,
(echoing Higgins et al.),
a set of rules he can’t predict.
(But with those rules, all else is possible.)

Some regularities she holds to fast, despite room for exception,
others breaks, like an ill-lucked bathroom mirror.

And then, he says, she’ll just blow up
(when taken to extremes),
while singing nonsense in the face of Gravity.

He’s been keeping a careful eye
and noticing anomalies
(and each time, confronting them,
finds an innocent explanation,
but no matter).

And he imagines others
with yet wilder curves
and more sensitive reactions
(and nonsense, of course,
that he’s lived fifty years without).

Old man physics talks,
that’s certain.
But beyond the talk,
beyond the phases and phrases,
(conscious uncoupling, non-empirical science),
he stays by her side.

He knows Truth, 
in this world,
is worth fighting for.

Why Dark Matter Feels Like Cheating (And Why It Isn’t)

I’ve never met someone who believed the Earth was flat. I’ve met a few who believed it was six thousand years old, but not many. Occasionally, I run into crackpots who rail against relativity or quantum mechanics, or more recent discoveries like quarks or the Higgs. But for one conclusion of modern physics, the doubters are common. For this one idea, the average person may not insist that the physicists are wrong, but they’ll usually roll their eyes a little bit, ask the occasional “really?”

That idea is dark matter.

For the average person, dark matter doesn’t sound like normal, responsible science. It sounds like cheating. Scientists try to explain the universe, using stars and planets and gravity, and eventually they notice the equations don’t work, so they just introduce some new matter nobody can detect. It’s as if a budget didn’t add up, so the accountant just introduced some “dark expenses” to hide the problem.

Part of what’s going on here is that fundamental physics, unlike other fields, doesn’t have to reduce to something else. An accountant has to explain the world in terms of transfers of money, a chemist in terms of atoms and molecules. A physicist has to explain the world in terms of math, with no more restrictions than that. Whatever the “base level” of another field is, physics can, and must, go deeper.

But that doesn’t explain everything. Physics may have to explain things in terms of math, but we shouldn’t just invent new math whenever we feel like it. Surely, we should prefer explanations in terms of things we know to explanations in terms of things we don’t know. The question then becomes, what justifies the preference? And when do we get to break it?

Imagine you’re camping in your backyard. You’ve brought a pack of jumbo marshmallows. You wake up to find a hole torn in the bag, a few marshmallows strewn on a trail into the bushes, the rest gone. You’re tempted to imagine a new species of ant, with enormous jaws capable of ripping open plastic and hauling the marshmallows away. Then you remember your brother likes marshmallows, and it’s probably his fault.

Now imagine instead you’re camping in the Amazon rainforest. Suddenly, the ant explanation makes sense. You may not have a particular species of ants in mind, but you know the rainforest is full of new species no-one has yet discovered. And you’re pretty sure your brother couldn’t have flown to your campsite in the middle of the night and stolen your marshmallows.

We do have a preference against introducing new types of “stuff”, like new species of ants or new particles. We have that preference because these new types of stuff are unlikely, based on our current knowledge. We don’t expect new species of ants in our backyards, because we think we have a pretty good idea of what kinds of ants exist, and we think a marshmallow-stealing brother is more likely. That preference gets dropped, however, based on the strength of the evidence. If it’s very unlikely our brother stole the marshmallows, and if we’re somewhere our knowledge of ants is weak, then the marshmallow-stealing ants are more likely.

Dark matter is a massive leap. It’s not a massive leap because we can’t see it, but simply because it involves new particles, particles not in our Standard Model of particle physics. (Or, for the MOND-ish fans, new fields not present in Einstein’s theory of general relativity.) It’s hard to justify physics beyond the Standard Model, and our standards for justifying it are in general very high: we need very precise experiments to conclude that the Standard Model is well and truly broken.

For dark matter, we keep those standards. The evidence for some kind of dark matter, for something that can’t be explained by just the Standard Model and Einstein’s gravity, is at this point very strong. Far from a vague force that appears everywhere, dark matter is something we can map, something whose effects we can systematically describe on everything from the motion of galaxies, to clusters of galaxies, to the early history of the universe. We’ve checked whether something we’ve left out, like black holes or unseen planets, might account for it, and they can’t. It’s still possible we’ve missed something, just like it’s possible your brother flew to the Amazon to steal your marshmallows, but it’s less likely than the alternatives.

Also, much like ants in the rainforest, we don’t know every type of particle. We know there are things we’re missing: new types of neutrinos, or new particles to explain quantum gravity. These don’t have to have anything to do with dark matter, they might be totally unrelated. But they do show that we should expect, sometimes, to run into particles we don’t already know about. We shouldn’t expect that we already know all the particles.

If physicists did what that caricature suggests, it really would be cheating. If we proposed dark matter because our equations didn’t match up, and stopped checking, we’d be no better than an accountant adding “dark money” to a budget. But we didn’t do that. When we argue that dark matter exists, it’s because we’ve actually tried to put together the evidence, because we’ve weighed it against the preference to stick with the Standard Model and found the evidence tips the scales. The instinct to call it cheating is a good instinct, one you should cultivate. But here, it’s an instinct physicists have already taken into account.

All About the Collab

Sometimes, some scientists work alone. But mostly, scientists collaborate. We team up, getting more done together than we could alone.

Over the years, I’ve realized that theoretical physicists like me collaborate in a bit of a weird way, compared to other scientists. Most scientists do experiments, and those experiments require labs. Each lab typically has one principal investigator, or “PI”, who hires most of the other people in that lab. For any given project, scientists from the lab will be organized into particular roles. Some will be involved in the planning, some not. Some will do particular tests, gather data, manage lab animals, or do statistics. The whole experiment is at least roughly planned out from the beginning, and everyone has their own responsibility, to the extent that journals will sometimes ask scientists to list everyone’s roles when they publish papers. In this system, it’s rare for scientists from two different labs to collaborate. When it does happen, it’s usually for a specific reason: a lab needs a statistician for a particularly subtle calculation, or one lab must process a sample so another lab can analyze it.

In contrast, theoretical physicists don’t have labs. Our collaborators sometimes come from the same university, but often they’re from a different one, frequently even in a different country. The way we collaborate is less like other scientists, and more like artists.

Sometimes, theoretical physicists have collaborations with dedicated roles and a detailed plan. This can happen when there is a specific calculation that needs to be done, that really needs to be done right. Some of the calculations that go into making predictions at the LHC are done in this way. I haven’t been in a collaboration like that (though in retrospect one collaborator may have had something like that in mind).

Instead, most of the collaborations I’ve been in have been more informal. They tend to start with a conversation. We chat by the coffee machine, or after a talk, anywhere there’s a blackboard nearby. It starts with “I’ve noticed something odd”, or “here’s something I don’t understand”. Then, we jam. We go back and forth, doing our thing and building on each other. Sometimes this happens in person, a barrage of questions and doubts until we hammer out something solid. Sometimes we go back to our offices, to calculate and look up references. Coming back the next day, we compare results: what did you manage to show? Did you get what I did? If not, why?

I make this sound spontaneous, but it isn’t completely. That starting conversation can be totally unplanned, but usually one of the scientists involved is trying to make it happen. There’s a different way you talk when you’re trying to start a collaboration, compared to when you just want to talk. If you’re looking for a collaboration, you go into more detail. If the other person is on the same wavelength, you start using “we” instead of “I”, or you start suggesting plans of action: “you could do X, while I do Y”. If you just want someone’s opinion, or just want to show off, then your conversation is less detailed, and less personal.

This is easiest to do with our co-workers, but we do it with people from other universities too. Sometimes this happens at conferences, more often during short visits for seminars. I’ve been on almost every end of this. As a visitor, I’ve arrived to find my hosts with a project in mind. As a host, I’ve invited a visitor with the goal of getting them involved in a collaboration, and I’ve received a visitor who came with their own collaboration idea.

After an initial flurry of work, we’ll have a rough idea of whether the project is viable. If it is, things get a bit more organized, and we sort out what needs to be done and a rough idea of who will do it. While the early stages really benefit from being done in person, this part is easier to do remotely. The calculations get longer but the concepts are clear, so each of us can work by ourselves, emailing when we make progress. If we get confused again, we can always schedule a Zoom to sort things out.

Once things are close (but often not quite done), it’s time to start writing the paper. In the past, I used Dropbox for this: my collaborators shared a folder with a draft, and we’d pass “control” back and forth as we wrote and edited. Now, I’m more likely to use something built for the purpose. For some collaborations that’s Git, a tool programmers use to collaborate on code: it lets you roll back edits you don’t like, and merge edits from two people to make sure they’re consistent. For others it’s Overleaf, an online interface for the document-writing language LaTeX that lets multiple people edit in real-time. Either way, this part is also more or less organized, with a lot of “can you write this section?” that can shift around depending on how busy people end up being.

Finally, everything comes together. The edits stabilize, everyone agrees that the paper is good (or at least, that any dissatisfaction they have is too minor to be worth arguing over). We send it to a few trusted friends, then a few days later up on the arXiv it goes.

Then, the cycle begins again. If the ideas are still clear enough, the same collaboration might keep going, planning follow-up work and follow-up papers. We meet new people, or meet up with old ones, and establish new collaborations as we go. Our fortunes ebb and flow based on the conversations we have, the merits of our ideas and the strengths of our jams. Sometimes there’s more, sometimes less, but it keeps bubbling up if you let it.

LHC Black Holes for the Terminally Un-Reassured

Could the LHC have killed us all?

No, no it could not.

But…

I’ve had this conversation a few times over the years. Usually, the people I’m talking to are worried about black holes. They’ve heard that the Large Hadron Collider speeds up particles to amazingly high energies before colliding them together. They worry that these colliding particles could form a black hole, which would fall into the center of the Earth and busily gobble up the whole planet.

This pretty clearly hasn’t happened. But also, physicists were pretty confident that it couldn’t happen. That isn’t to say they thought it was impossible to make a black hole with the LHC. Some physicists actually hoped to make a black hole: it would have been evidence for extra dimensions, curled-up dimensions much larger than the tiny ones required by string theory. They figured out the kind of evidence they’d see if the LHC did indeed create a black hole, and we haven’t seen that evidence. But even before running the machine, they were confident that such a black hole wouldn’t gobble up the planet. Why?

The best argument is also the most unsatisfying. The LHC speeds up particles to high energies, but not unprecedentedly high energies. High-energy particles called cosmic rays enter the atmosphere every day, some of them at energies comparable to those of the LHC. The LHC just puts the high-energy particles in front of a bunch of sophisticated equipment so we can measure everything about them. If the LHC could destroy the world, cosmic rays would have already done so.

That’s a very solid argument, but it doesn’t really explain why. Also, it may not be true for future colliders: we could build a collider at energies so high that comparably energetic cosmic rays are rare. So I should give another argument.

The next argument is Hawking radiation. In Stephen Hawking’s most famous accomplishment, he argued that because of quantum mechanics, black holes are not truly black. Instead, they give off a constant radiation of every type of particle mixed together, shrinking as they do so. The radiation is faintest for large black holes, but gets more and more intense the smaller the black hole is, until the smallest black holes explode into a shower of particles and disappear. This means that a black hole small enough for the LHC to produce would radiate away to nothing in almost an instant: not long enough to leave the machine, let alone fall to the center of the Earth.
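To put a number on “almost an instant”, here’s a rough sketch in Python, using the standard back-of-the-envelope evaporation-time estimate t \approx 5120\pi G^2 M^3/(\hbar c^4) and, for the mass, the worst-case figure computed later in this post:

```python
# Rough Hawking evaporation time, t ~ 5120*pi*G^2*M^3 / (hbar*c^4),
# for a black hole made from all 14 TeV of an LHC collision.
import math

G = 6.67e-11      # Newton's constant, m^3 kg^-1 s^-2
c = 3e8           # speed of light, m/s
hbar = 1.05e-34   # reduced Planck constant, J*s
M = 2.49e-23      # black hole mass in kg (computed below)

t = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
print(f"evaporation time: {t:.0e} s")  # ~1e-84 seconds
```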

This is a good argument, but maybe you aren’t as sure as I am about Hawking radiation. As it turns out, we’ve never measured Hawking radiation; it’s just a theoretical expectation. Remember that the radiation gets fainter the larger the black hole is: for a black hole in space with the mass of a star, the radiation is so faint it would be almost impossible to detect even right next to the black hole. From here, with our telescopes, we have no chance of seeing it.

So suppose tiny black holes didn’t radiate, and suppose the LHC could indeed produce them. Wouldn’t that have been dangerous?

Here, we can do a calculation. I want you to appreciate how tiny these black holes would be.

From science fiction and cartoons, you might think of a black hole as a kind of vacuum cleaner, sucking up everything nearby. That’s not how black holes work, though. The “sucking” black holes do is due to gravity, no stronger than the gravity of any other object with the same mass at the same distance. The only difference comes when you get close to the event horizon, an invisible sphere surrounding the black hole. Pass within it, and gravity becomes strong enough that you will never escape.

We know how to calculate the position of the event horizon of a black hole. It’s called the Schwarzschild radius, and we can write it in terms of Newton’s constant G, the mass of the black hole M, and the speed of light c, as follows:

r_s = \frac{2GM}{c^2}

The Large Hadron Collider accelerates the particles in each of its two beams to an energy around seven tera-electron-volts, or TeV, so there are 14 TeV of energy in total in each collision. Imagine all of that energy being converted into mass, and that mass forming a black hole. That isn’t how it would actually happen: some of the energy would create other particles, and some would give the black hole a “kick”, some momentum in one direction or another. But we’re going to imagine a “worst-case” scenario, so let’s assume all the energy goes to form the black hole. Electron-volts are a weird physicist unit, but if we divide them by the speed of light squared (as we should if we’re using E=mc^2 to turn energy into mass), then Wikipedia tells us that each electron-volt will give us 1.78\times 10^{-36} kilograms. “Tera” is the SI prefix for 10^{12}. Thus our tiny black hole starts with a mass of

14\times 10^{12}\times 1.78\times 10^{-36} = 2.49\times 10^{-23} \textrm{kg}

Plugging in Newton’s constant (6.67\times 10^{-11} meters cubed per kilogram per second squared) and the speed of light (3\times 10^8 meters per second), we get a radius of,

\frac{2\times 6.67\times 10^{-11}\times 14\times 10^{12}\times 1.78\times 10^{-36}}{\left(3\times 10^8\right)^2} = 3.7\times 10^{-50} \textrm{m}

That, by the way, is amazingly tiny. The size of an atom is about 10^{-10} meters. If every atom was a tiny person, and each of that person’s atoms was itself a person, and so on for five levels down, then the atoms of the smallest person would be the same size as this event horizon.
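If you’d like to verify these numbers yourself, here’s the same arithmetic as a short Python sketch:

```python
# "Worst-case" LHC black hole: all 14 TeV of collision energy as mass,
# and the Schwarzschild radius that mass corresponds to.
eV_to_kg = 1.78e-36        # 1 eV / c^2, in kilograms
M0 = 14e12 * eV_to_kg      # 14 TeV as mass: ~2.49e-23 kg

G = 6.67e-11               # Newton's constant, m^3 kg^-1 s^-2
c = 3e8                    # speed of light, m/s
r_s = 2 * G * M0 / c**2    # Schwarzschild radius

print(f"M0 = {M0:.2e} kg, r_s = {r_s:.1e} m")  # ~3.7e-50 m
```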

Now, we let this little tiny black hole fall. Let’s imagine it falls directly towards the center of the Earth. The only force affecting it would be gravity (if it had an electrical charge, it would quickly attract a few electrons and become neutral). That means you can think of it as if it were falling through a tiny hole, with no friction, gobbling up anything unfortunate enough to fall within its event horizon.

For our first estimate, we’ll treat the black hole as if it stays the same size through its journey. Imagine the black hole travels through the entire earth, absorbing a cylinder of matter. Using the Earth’s average density of 5515 kilograms per cubic meter, and the Earth’s maximum radius of 6378 kilometers, our cylinder adds a mass of,

\pi \times \left(3.7\times 10^{-50}\right)^2 \times 2 \times 6378\times 10^3\times 5515 = 3\times 10^{-88} \textrm{kg}

That’s absurdly tiny. That’s much, much, much tinier than the mass we started out with. Absorbing an entire cylinder through the Earth makes barely any difference.
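Here’s that step as a quick Python check too:

```python
# Mass swept up crossing the whole Earth: a cylinder of radius r_s
# and length one Earth diameter, at Earth's average density.
import math

r_s = 3.7e-50       # event horizon radius from above, m
rho = 5515          # Earth's average density, kg/m^3
R_earth = 6378e3    # Earth's radius, m

swept = math.pi * r_s**2 * (2 * R_earth) * rho
print(f"swept-up mass: {swept:.0e} kg")  # ~3e-88 kg
```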

You might object, though, that the black hole is gaining mass as it goes. So really we ought to use a differential equation. If the black hole travels a distance r, absorbing mass as it goes at average Earth density \rho, then we find,

\frac{dM}{dr}=\pi\rho\left(\frac{2GM(r)}{c^2}\right)^2

Solving this, we get

M(r)=\frac{M_0}{1- M_0 \pi\rho\left(\frac{2G}{c^2}\right)^2 r }

where M_0 is the mass we started out with.

Plug in the distance through the Earth for r, and we find the mass gained is…still about 3\times 10^{-88} \textrm{kg}! It didn’t change very much from our first estimate, which makes sense: the correction is a very, very small one!
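Here’s that check as a sketch in Python. One wrinkle: the correction M_0 \pi\rho\left(2G/c^2\right)^2 r is of order 10^{-65}, so in ordinary double-precision arithmetic, 1 minus it rounds to exactly 1. To see the gained mass we expand the solution to first order, an excellent approximation here:

```python
# Mass gained per the closed-form solution M(r) = M0 / (1 - M0*k*r),
# with k = pi*rho*(2G/c^2)^2. Since M0*k*r ~ 1e-65, floating point
# rounds 1 - M0*k*r to 1, so use the first-order expansion
# M(r) - M0 ~ M0^2 * k * r instead.
import math

G, c = 6.67e-11, 3e8
rho = 5515                # Earth's average density, kg/m^3
M0 = 2.49e-23             # starting mass, kg
k = math.pi * rho * (2 * G / c**2)**2   # units: 1/(kg*m)
r = 2 * 6378e3            # one trip through the Earth, m

gained = M0**2 * k * r
print(f"mass gained: {gained:.0e} kg")  # still ~3e-88 kg
```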

But you might still object. A black hole falling through the Earth wouldn’t just go straight through. It would pass through, then fall back in. In fact, it would oscillate, from one side to the other, like a pendulum. This is actually a common problem to give physics students: drop an object through a hole in the Earth, neglect air resistance, and what does it do? It turns out that the motion is independent of the object’s mass, and a full oscillation, through the Earth and back again, takes roughly 84.5 minutes.
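If you want to check that 84.5-minute figure, here’s a minimal sketch under the usual idealization of a uniform-density Earth, in which the falling object undergoes simple harmonic motion with period 2\pi\sqrt{R^3/GM_\oplus}:

```python
# Oscillation period of an object dropped through a uniform-density Earth:
# simple harmonic motion with T = 2*pi*sqrt(R^3 / (G * M_earth)).
import math

G = 6.67e-11        # Newton's constant, m^3 kg^-1 s^-2
M_earth = 5.97e24   # Earth's mass, kg
R = 6.371e6         # Earth's mean radius, m

T = 2 * math.pi * math.sqrt(R**3 / (G * M_earth))
print(f"round-trip period: {T / 60:.1f} minutes")  # ~84.4 minutes
```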

So let’s ask a question: how long would it take for a black hole, oscillating like this, to double its mass?

We want to solve,

2=\frac{1}{1- M_0 \pi\rho\left(\frac{2G}{c^2}\right)^2 r }

so we need the black hole to travel a total distance of

r=\frac{1}{2M_0 \pi\rho\left(\frac{2G}{c^2}\right)^2} = 5.3\times 10^{71} \textrm{m}

That’s a huge distance! Remember, each 84.5-minute oscillation only covers two Earth diameters, about 25,500 kilometers. So traveling that far would take

\frac{5.3\times 10^{71}}{2.55\times 10^{7}}\times 84.5/60/24/365 \approx 3\times 10^{60} \textrm{y}

Three times ten to the sixty years. Our universe is only about ten to the ten years old. In another five times ten to the nine years, the Sun will enter its red giant phase, and swallow the Earth. There simply isn’t enough time for this tiny tiny black hole to gobble up the world before everything is already gobbled up by something else. Even in the most pessimistic way to walk through the calculation, it’s just not dangerous.
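And the last step, as one more Python sketch:

```python
# Distance for the black hole to double its mass, and the travel time
# at 84.5 minutes per oscillation (two Earth diameters per period).
import math

G, c = 6.67e-11, 3e8
rho, M0 = 5515, 2.49e-23
k = math.pi * rho * (2 * G / c**2)**2   # growth constant, 1/(kg*m)

r_double = 1 / (2 * M0 * k)             # ~5.3e71 m
meters_per_period = 4 * 6378e3          # two Earth diameters
years = r_double / meters_per_period * 84.5 / (60 * 24 * 365)
print(f"distance: {r_double:.1e} m, time: {years:.0e} years")  # ~3e60 years
```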

I hope that, if you were worried about black holes at the LHC, you’re not worried any more. But more than that, I hope you’ve learned three lessons. First, that even the highest-energy particle physics involves tiny energies compared to day-to-day experience. Second, that gravitational effects are tiny in the context of particle physics. And third, that with Wikipedia access, you too can answer questions like this. If you’re worried, you can make an estimate, and check!

Cabinet of Curiosities: The Train-Ladder

I’ve got a new paper out this week, with Andrew McLeod, Roger Morales, Matthias Wilhelm, and Chi Zhang. It’s yet another entry in this year’s “cabinet of curiosities”, quirky Feynman diagrams with interesting traits.

A while back, I talked about a set of Feynman diagrams I could compute with any number of “loops”, bypassing the approximations we usually need to use in particle physics. That wasn’t the first time someone did that. Back in the ’90s, some folks figured out how to do this for so-called “ladder” diagrams. These diagrams have two legs on one end for two particles coming in, two legs on the other end for two particles going out, and a ladder in between, like so:

There are infinitely many of these diagrams, but they’re all beautifully simple, variations on a theme that can be written down in a precise mathematical way.

Change things a little bit, though, and the situation gets wildly more intractable. Let the rungs of the ladder peek through the sides, and you get something looking more like the tracks for a train:

These traintrack integrals are much more complicated. Describing them requires the mathematics of Calabi-Yau manifolds, involving higher and higher dimensions as the tracks get longer. I don’t think there’s any hope of understanding these things for all loops, at least not any time soon.

What if we aimed somewhere in between? A ladder that just started to turn traintrack?

Add just a single pair of rungs, and it turns out that things remain relatively simple. We don’t need any complicated Calabi-Yau manifolds, just the simplest Calabi-Yau manifold, called an elliptic curve. It’s actually the same curve for every version of the diagram. And the situation is simple enough that, with some extra cleverness, it looks like we’ve found a trick to calculate these diagrams to any number of loops we’d like.

(Another group figured out the curve, but not the calculation trick. They’ve solved different problems, though, studying all sorts of different traintrack diagrams. They sorted out some confusion I used to have about one of those diagrams, showing it actually behaves precisely the way we expected it to. All in all, it’s been a fun example of the way different scientists sometimes home in on the same discovery.)

These developments are exciting, because Feynman diagrams with elliptic curves are still tough to deal with. We still have whole conferences about them. These new elliptic diagrams can provide a long list of test cases, things we can experiment with at any number of loops. With time, we might truly understand them as well as the ladder diagrams!