
At New Ideas in Cosmology

The Niels Bohr Institute is hosting a conference this week on New Ideas in Cosmology. I’m no cosmologist, but it’s a pretty cool field, so as a local I’ve been sitting in on some of the talks. So far they’ve had a selection of really interesting speakers with quite a variety of interests, including a talk by Roger Penrose with his trademark hand-stippled drawings.

[Image: one of Penrose’s classic hand-stippled drawings]

One thing that has impressed me has been the “interdisciplinary” feel of the conference. By all rights this should be one “discipline”, cosmology. But in practice, each speaker came at the subject from a different direction. They all had a shared core of knowledge, common models of the universe to compare against. But the knowledge they brought to the subject varied: some had deep knowledge of the mathematics of gravity, others worked with string theory, or particle physics, or numerical simulations. Each talk, aware of the varied audience, was a bit “colloquium-style”, introducing a framework before diving into the latest research. Each speaker knew enough to talk to the others, but not so much that they couldn’t learn from them. It’s been unexpectedly refreshing, a real interdisciplinary conference done right.

At Mikefest

I’m at a conference this week of a very particular type: a birthday conference. When folks in my field turn 60, their students and friends organize a special conference for them, celebrating their research legacy. With COVID restrictions just loosening, my advisor Michael Douglas is getting a last-minute conference. And as one of the last couple students he graduated at Stony Brook, I naturally showed up.

The conference, Mikefest, is at the Institut des Hautes Études Scientifiques, just outside of Paris. Mike was a big supporter of the IHES, putting in a lot of fundraising work for them. Another big supporter, James Simons, was Mike’s employer for a little while after his time at Stony Brook. The conference center we’re meeting in is named for him.

[Photo: the conference center, named for Simons. You might have to zoom in to see that, though.]

I wasn’t involved in organizing the conference, so it was interesting seeing differences between this and other birthday conferences. Other conferences focus on the birthday prof’s “family tree”: their advisor, their students, and some of their postdocs. We’ve had several talks from Mike’s postdocs, and one from his advisor, but only one from a student. Including that speaker and me, three of Mike’s students are here; another two have had their work mentioned but aren’t speaking or attending.

Most of the speakers have collaborated with Mike, but only for a few papers each. All of them emphasized a broader debt though, for discussions and inspiration outside of direct collaboration. The message, again and again, is that Mike’s work has been broad enough to touch a wide range of people. He’s worked on branes and the landscape of different string theory universes, pure mathematics and computation, neuroscience and recently even machine learning. The talks generally begin with a few anecdotes about Mike, before pivoting into research talks on the speakers’ recent work. The recent-ness of the work is perhaps another difference from some birthday conferences: as one speaker said, this wasn’t just a celebration of Mike’s past, but a “welcome back” after his return from the finance world.

One thing I don’t know is how much this conference might have been limited by coming together on short notice. For other birthday conferences impacted by COVID (and I’m thinking of one in particular), it might be nice to have enough time to have most of the birthday prof’s friends and “academic family” there in person. As-is, though, Mike seems to be having fun regardless.

Happy Birthday Mike!

You Are a Particle Detector

I mean that literally. True, you aren’t a 7,000 ton assembly of wires and silicon, like the ATLAS experiment inside the Large Hadron Collider. You aren’t managed by thousands of scientists and engineers, trying to sift through data from a billion pairs of protons smashing into each other every second. Nonetheless, you are a particle detector. Your senses detect particles.

[Image: the ATLAS detector. Like you, and not like you]

Your ears take vibrations in the air and magnify them, vibrating the fluid of your inner ear. Tiny hairs communicate that vibration to your nerves, which signal your brain. Particle detectors, too, magnify signals: photomultipliers take a single particle of light (called a photon) and set off a cascade, multiplying the signal one hundred million times so it can be registered by a computer.

Your nose and tongue are sensitive to specific chemicals, recognizing particular shapes and ignoring others. A particle detector must also be picky. A detector like ATLAS measures far more particle collisions than it could ever record. Instead, it learns to recognize particular “shapes”, collisions that might hold evidence of something interesting. Only those collisions are recorded, passed along to computer centers around the world.

Your sense of touch tells you something about the energy of a collision: specifically, the energy things have when they collide with you. Particle detectors do this with calorimeters, which generate signals based on a particle’s energy. Different parts of your body are more sensitive than others: your mouth and hands are much more sensitive than your back and shoulders. Different parts of a particle detector have different calorimeters: an electromagnetic calorimeter for particles like electrons, and a less sensitive hadronic calorimeter that can catch particles like protons.

You are most like a particle detector, though, in your eyes. The cells of your eyes, rods and cones, detect light, and thus detect photons. Your eyes are more sensitive than you think: you are likely able to detect even a single photon. In an experiment, three people sat in darkness for forty minutes, then heard two sounds, one of which might be accompanied by a single photon of light flashed into their eye. The three didn’t notice the photons every time (that isn’t possible for such a faint sensation), but they did much better than random guessing.

(You can be even more literal than that. An older professor here told me stories of the early days of particle physics. To check that a machine was on, sometimes physicists would come close, and watch for flashes in the corner of their vision: a sign of electrons flying through their eyeballs!)

You are a particle detector, but you aren’t just a particle detector. A particle detector can’t move: its thousands of tons are fixed in place. That gives it blind spots: for example, the tube that the particles travel through is kept clear, with no detectors in it, so the particles can get through. Physicists have to account for this, correcting for the missing space in their calculations. In contrast, if you have a blind spot, you can act: move, and see the world from a new point of view. You observe not merely a series of particles, but the results of your actions: what happens when you turn one way or another, when you make one choice or another.

So while you are a particle detector, you’re more than that: you’re a particle experiment. You can learn a lot more than those big heaps of wires and silicon could on their own. You’re like the whole scientific effort: colliders and detectors, data centers and scientists around the world. May you learn as much in your life as the experiments do in theirs.

Things Which Are Fluids

For overambitious apes like us, adding integers is the easiest thing in the world. Take one berry, add another, and you have two. Each remains separate, you can lay them in a row and count them one by one, each distinct thing adding up to a group of distinct things.

Other things in math are less like berries. Add two real numbers, like pi and the square root of two, and you get another real number, bigger than the first two, something you can write as an infinite messy decimal. You know in principle you can separate it out again (subtract pi, get the square root of two), but you can’t just stare at it and see the parts. This is less like adding berries, and more like adding fluids. Pour some water into some other water, and you certainly have more water. You don’t have “two waters”, though, and you can’t tell which part started as which.

[Image: water being poured. More waters, please!]

Some things in math look like berries, but are really like fluids. Take a polynomial, say 5x^2 + 6x + 8. It looks like three types of things, like three berries: five x^2, six x, and eight 1. Add another polynomial, and the illusion continues: add x^2 + 3x + 2 and you get 6x^2 + 9x + 10. You’ve just added more x^2, more x, more 1, like adding more strawberries, blueberries, and raspberries.

But those berries were a choice you made, and not the only one. You can rewrite that first polynomial, for example as 5(x^2+2x+1) - 4(x+1) + 7. That’s the same thing, you can check. But now it looks like five of x^2+2x+1, negative four of x+1, and seven of 1. It’s different numbers of different things, blackberries or gooseberries or something. And you can do this in many ways, infinitely many in fact. The polynomial isn’t really a collection of berries, for all it looked like one. It’s much more like a fluid, a big sloshing mess you can pour into buckets of different sizes. (Technically, it’s a vector space. Your berries were a basis.)
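
If you want the check spelled out, here’s the expansion written out in LaTeX, just verifying that the two ways of writing the polynomial are the same thing:

```latex
\begin{align*}
  5(x^2+2x+1) - 4(x+1) + 7
    &= (5x^2 + 10x + 5) - (4x + 4) + 7 \\
    &= 5x^2 + 6x + 8 .
\end{align*}
```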

Even smart, advanced students can get tripped up on this. You can be used to treating polynomials as a fluid, and forget that directions in space are a fluid, one you can rotate as you please. If you’re used to directions in space, you’ll get tripped up by something else. You’ll find that types of particles can be more fluid than berry, the question of which quark is which not as simple as how many strawberries and blueberries you have. The laws of physics themselves are much more like a fluid, which should make sense if you take a moment, because they are made of equations, and equations are like a fluid.

So my fellow overambitious apes, do be careful. Not many things are like berries in the end. A whole lot are like fluids.

W is for Why???

Have you heard the news about the W boson?

The W boson is a fundamental particle, part of the Standard Model of particle physics. It is what we call a “force-carrying boson”, a particle related to the weak nuclear force in the same way photons are related to electromagnetism. Unlike photons, W bosons are “heavy”: they have a mass. We can’t usually predict masses of particles, but the W boson is a bit different, because its mass comes from the Higgs boson in a special way, one that ties it to the masses of other particles like the Z boson. The upshot is that if you know the mass of a few other particles, you can predict the mass of the W.
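
To give a rough idea of how that works, here’s the simplest (tree-level) version of the relation; the actual precision prediction adds small quantum corrections, involving particles like the top quark and the Higgs, which I’m skipping here:

```latex
% Tree-level Standard Model relation between the W and Z masses:
m_W = m_Z \cos\theta_W
% where \theta_W is the weak mixing angle. Pin down m_Z and \theta_W
% (plus the loop corrections) from other measurements, and m_W is predicted.
```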

And according to a recent publication, that prediction is wrong. A team analyzed results from an old experiment called the Tevatron, the biggest predecessor of today’s Large Hadron Collider. They treated the data with groundbreaking care, mind-bogglingly even taking into account the shape of the machine’s wires. And after all that analysis, they found that the W bosons detected by the Tevatron had a different mass than the mass predicted by the Standard Model.

How different? Here’s where precision comes in. In physics, we decide whether to trust a measurement with a statistical tool. We calculate how likely the measurement would be, if it were just an accident. In this case: how likely would it be that, if the Standard Model were correct, the measurement would still come out this way? To discover a new particle, we require this chance to be smaller than about one in 3.5 million, or in our jargon, five sigma. That was the requirement for discovering the Higgs boson. This super-precise measurement of the W boson doesn’t have five sigma…it has seven sigma. That means, if we trust the analysis team, then a measurement like this could come accidentally out of the Standard Model only about one in a trillion times.
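
If you want to check the conversion from sigmas to odds yourself, here’s a minimal sketch in Python; using scipy is my choice for illustration, not anything from the analysis itself:

```python
from scipy.stats import norm

# One-sided tail probability: the chance that a pure statistical fluke lands
# at least this many standard deviations away from the expected value.
for sigma in (5, 7):
    p = norm.sf(sigma)  # survival function, 1 - CDF
    print(f"{sigma} sigma: p = {p:.2e} (roughly 1 in {1 / p:,.0f})")

# Prints roughly: 5 sigma -> 1 in 3.5 million, 7 sigma -> 1 in 780 billion,
# close to the "one in a trillion" quoted above.
```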

Ok, should we trust the analysis team?

If you want to know that, I’m the wrong physicist to ask. The right physicists are experimental particle physicists. They do analyses like that one, and they know what can go wrong. Everyone I’ve heard from in that field emphasized that this was a very careful group, who did a lot of things impressively right…but there is still room for mistakes. One pointed out that the new measurement isn’t just inconsistent with the Standard Model, but with many previous measurements too. Those measurements are less precise, but still precise enough that we should be a bit skeptical. Another went into more detail about specific clues as to what might have gone wrong.

If you can’t find a particle experimentalist, the next best choice is a particle phenomenologist. These are the people who try to make predictions for new experiments, who use theoretical physics to propose new models that future experiments can test. Here’s one giving a first impression, and discussing some ways to edit the Standard Model to agree with the new measurement. Here’s another discussing what to me is an even more interesting question: if we take these measurements seriously, both the new one and the old ones, then what do we believe?

I’m not an experimentalist or a phenomenologist. I’m an “amplitudeologist”. I work not on the data, or the predictions, but the calculational tools used to make those predictions, called “scattering amplitudes”. And that gives me a different view on the situation.

See, in my field, precision is one of our biggest selling points. If you want theoretical predictions to match precise experiments, you need our tricks to compute them. We believe (and argue to grant agencies) that this precision will be important: if a precise experiment and a precise prediction disagree, it could be the first clue to something truly new. New solid evidence of something beyond the Standard Model would revitalize all of particle physics, giving us a concrete goal and killing fruitless speculation.

This result shakes my faith in that a little. Probably, the analysis team got something wrong. Possibly, all previous analyses got something wrong. Either way, a lot of very careful smart people tried to estimate their precision, got very confident…and got it wrong.

(There’s one more alternative: maybe million-to-one chances really do crop up nine times out of ten.)

If some future analysis digs down deep in precision, and finds another deviation from the Standard Model, should we trust it? What if it’s measuring something new, and we don’t have the prior experiments to compare to?

(This would happen if we build a new even higher-energy collider. There are things the collider could measure, like the chance one Higgs boson splits into two, that we could not measure with any earlier machine. If we measured that, we couldn’t compare it to the Tevatron or the LHC, we’d have only the new collider to go on.)

Statistics are supposed to tell us whether to trust a result. Here, they’re not doing their job. And that creates the scary possibility that some anomaly shows up, some real deviation deep in the sigmas that hints at a whole new path for the field…and we just end up bickering about who screwed it up. Or the equally scary possibility that we find a seven-sigma signal of some amazing new physics, build decades of new theories on it…and it isn’t actually real.

We don’t just trust statistics. We also trust the things normal people trust. Do other teams find the same result? (I hope that they’re trying to get to this same precision here, and see what went wrong!) Does the result match other experiments? Does it make predictions, which then get tested in future experiments?

All of those are heuristics of course. Nothing can guarantee that we measure the truth. Each trick just corrects for some of our biases, some of the ways we make mistakes. We have to hope that’s good enough, that if there’s something to see we’ll see it, and if there’s nothing to see we won’t. Precision, my field’s raison d’être, can’t be enough to convince us by itself. But it can help.

The Undefinable

If I can teach one lesson to all of you, it’s this: be precise. In physics, we try to state what we mean as precisely as we can. If we can’t state something precisely, that’s a clue: maybe what we’re trying to state doesn’t actually make sense.

Someone recently reached out to me with a question about black holes. He was confused about how they were described, about what would happen when you fall into one versus what we could see from outside. Part of his confusion boiled down to a question: “is the center really an infinitely small point?”

I remembered a commenter a while back who had something interesting to say about this. Trying to remind myself of the details, I dug up this question on Physics Stack Exchange. user4552 has a detailed, well-referenced answer, with subtleties of General Relativity that go significantly beyond what I learned in grad school.

According to user4552, the reason this question is confusing is that the usual setup of general relativity cannot answer it. In general relativity, singularities like the singularity in the middle of a black hole aren’t treated as points, or collections of points: they’re not part of space-time at all. So you can’t count their dimensions, you can’t see whether they’re “really” infinitely small points, or surfaces, or lines…

This might surprise people (like me) who have experience with simpler equations for these things, like the Schwarzschild metric. The Schwarzschild metric describes space-time around a black hole, and in the usual coordinates it sure looks like the singularity is at a single point where r=0, just like the point where r=0 is a single point in polar coordinates in flat space. The thing is, though, that’s just one choice of coordinates. You can re-write a metric in many different sorts of coordinates, and the singularity in the center of a black hole might look very different in those coordinates. In general relativity, you need to stick to things you can say independent of coordinates.
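
For reference, here is the metric in those usual (Schwarzschild) coordinates, in units where G and c are one; it’s standard textbook material, shown only to make clear which coordinates I mean:

```latex
ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1}dr^2
       + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)
```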

Ok, you might say, so the usual mathematics can’t answer the question. Can we use more unusual mathematics? If our definition of dimensions doesn’t tell us whether the singularity is a point, maybe we just need a new definition!

According to user4552, people have tried this…and it only sort of works. There are several different ways you could define the dimension of a singularity. They all seem reasonable in one way or another. But they give different answers! Some say they’re points, some say they’re three-dimensional. And crucially, there’s no obvious reason why one definition is “right”. The question we started with, “is the center really an infinitely small point?”, looked like a perfectly reasonable question, but it actually wasn’t: the question wasn’t precise enough.

This is the real problem. The problem isn’t that our question was undefined; after all, we can always add new definitions. The problem was that our question didn’t specify well enough which definitions we needed. That is why the question doesn’t have an answer.

Once you understand the difference, you see these kinds of questions everywhere. If you’re baffled by how mass could have come out of the Big Bang, or how black holes could radiate particles in Hawking radiation, maybe you’ve heard a physicist say that energy isn’t always conserved. Energy conservation is a consequence of symmetry, specifically, symmetry in time. If your space-time itself isn’t symmetric (the expanding universe making the past different from the future, a collapsing star making a black hole), then you shouldn’t expect energy to be conserved.
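
For those who want the statement behind that, here’s the simplest mechanical version of Noether’s theorem, a sketch rather than the full general-relativistic story:

```latex
% For a system with Lagrangian L(q, \dot{q}, t), define the energy
E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L .
% Along solutions of the equations of motion it changes as
\frac{dE}{dt} = -\frac{\partial L}{\partial t} ,
% so E is conserved exactly when L has no explicit time dependence,
% i.e. when the laws are symmetric under shifts in time.
```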

I sometimes hear people object to this. They ask, is it really true that energy isn’t conserved when space-time isn’t symmetric? Shouldn’t we just say that space-time itself contains energy?

And well yes, you can say that, if you want. It isn’t part of the usual definition, but you can make a new definition, one that gives energy to space-time. In fact, you can make more than one new definition…and like the situation with the singularity, these definitions don’t always agree! Once again, you asked a question you thought was sensible, but it wasn’t precise enough to have a definite answer.

Keep your eye out for these kinds of questions. If scientists seem to avoid answering the question you want, and keep answering a different question instead…it might be that their question is the only one with a precise answer. You can define a method to answer your question, sure…but it won’t be the only way. You need to ask precise enough questions to get good answers.

Gateway Hobbies

When biologists tell stories of their childhoods, they’re full of trails of ants and fireflies in jars. Lots of writers start young, telling stories on the playground and making skits with their friends. And the mere existence of “chemistry sets” tells you exactly how many chemists get started. Many fields have these “gateway hobbies”, like gateway drugs for careers, ways that children and teenagers get hooked and gain experience.

Physics is a little different, though. While kids can play with magnets and electricity, there aren’t a whole lot of other “physics hobbies”, especially for esoteric corners like particle physics. Instead, the “gateway hobbies” of physics are more varied, drawing from many different fields.

First, of course, even if a child can’t “do physics”, they can always read about it. Kids will memorize the names of quarks, read about black holes, or watch documentaries about string theory. I’m not counting this as a “physics hobby” because it isn’t really: physics isn’t a collection of isolated facts, but of equations: frameworks you can use to make predictions. Reading about the Big Bang is a good way to get motivated and excited, it’s a great thing to do…but it doesn’t prepare you for the “science part” of the science.

A few efforts at physics popularization get a bit more hands-on. Many come in the form of video games. You can get experience with relativity through Velocity Raptor, quantum mechanics through Quantum Chess, or orbital mechanics through Kerbal Space Program. All of these get just another bit closer to “doing physics” rather than merely reading about it.

One can always gain experience in other fields, and that can be surprisingly relevant. Playing around with a chemistry set gives first-hand experience of the kinds of things that motivated quantum mechanics, and some things that still motivate condensed matter research. Circuits are physics, more directly, even if they’re also engineering: and for some physicists, designing electronic sensors is a huge part of what they do.

Astronomy has a special place, both in the history of physics and the pantheon of hobbies. There’s a huge amateur astronomy community, one that both makes real discoveries and reaches out to kids of all ages. Many physicists got their start looking at the heavens, using it, as Newton’s contemporaries did, as a first glimpse into the mechanisms of nature.

More and more research in physics involves at least some programming, and programming is another activity kids have access to in spades, from Logo to robotics competitions. Learning how to program isn’t just an important skill: it’s also a way for young people to experience a world bound by clear laws and logic, another motivation to study physics.

Of course, if you’re interested in rules and logic, why not go all the way? Plenty of physicists grew up doing math competitions. I have fond memories of Oregon’s Pentagames, and the more “serious” activities go all the way up to the famously challenging Putnam Competition.

Finally, there are physics competitions too, at least in the form of the International Physics Olympiad, where high school students compete in physics prowess.

Not every physicist did these sorts of things, of course: some got hooked later. Others did more than one. A friend of mine who’s always been “Mr. Science” got almost the whole package, with a youth spent exploring the wild west of the early internet, working at a planetarium, and discovering just how easy it is to get legal access to dangerous and radioactive chemicals. There are many paths into physics, so even if kids can’t “do physics” the same way they “do chemistry”, there’s still plenty to do!

Keeping It Colloquial

In the corners of academia where I hang out, a colloquium is a special kind of talk. Most talks we give are part of weekly seminars for specific groups. For example, the theoretical particle physicists here have a seminar. Each week we invite a speaker, who gives a talk on their recent work. Since they expect an audience of theoretical particle physicists, they can go into more detail.

A colloquium isn’t like that. Colloquia are talks for the whole department: theorists and experimentalists, particle physicists and biophysicists. They’re more prestigious, for big famous professors (or sometimes, for professors interviewing for jobs…). The different audience, and different context, means that the talk plays by different rules.

Recently, I saw a conference full of “colloquium-style” talks, trying to play by these rules. Some succeeded, some didn’t…and I think I now have a better idea of how those rules work.

First, in a colloquium, you’re not just speaking for yourself. You’re an ambassador for your field. For some of the audience, this might be the first time they’ve heard a talk by someone who does your kind of research. You want to give them a good impression, not just about you, but about the whole topic. So while you definitely want to mention your own work, you want to tell a full story, one that gives more than a glimpse of what others are doing as well.

Second, you want to connect to something the audience already knows. With an audience of physicists, you can assume a certain baseline, but not much more than that. You need to make the beginning accessible and start with something familiar. For the conference I mentioned, a talk that did this well was the talk on exoplanets, which started with the familiar planets of the solar system, classifying them in order to show what you might expect exoplanets to look like. In contrast, ’t Hooft’s talk did this poorly. His work exploits a loophole in a quantum-mechanical argument called Bell’s theorem, which most physicists have heard of. Instead of mentioning Bell’s theorem, he referred vaguely to “criteria from philosophers”, and even that only near the end of the talk, instead starting with properties of quantum mechanics his audience was much less familiar with.

Moving on, then, you want to present a mystery. So far, everything in the talk has made sense, and your audience feels like they understand. Now, you show them something that doesn’t fit, something their familiar model can’t accommodate. This activates your audience’s scientist instincts: they’re curious now, they want to know the answer. A good example from the conference was a talk on chemistry in space. The speaker emphasized that we can see evidence of complex molecules in space, but that space dust is so absurdly dilute that it seems impossible such molecules could form: two atoms could go a billion years without meeting each other.

You can’t just leave your audience mystified, though. You next have to solve the mystery. Ideally, your solution will be something smart, but simple: something your audience can intuitively understand. This has two benefits. First, it makes you look smart: you describe a mysterious problem, and then you show how to solve it! Second, it makes the audience feel smart: they felt the problem was hard, but now they understand how to solve it too. The audience will have good feelings about you as a result, and good feelings about the topic: in some sense, you’ve tied a piece of their self-esteem to knowing the solution to your problem. This was done well by the speaker discussing space chemistry, who explained that the solution was chemistry on surfaces: if two atoms are on the surface of a dust grain or meteorite, they’re much more likely to react. It was also done well by a speaker discussing models of diseases like diabetes: he explained the challenge of controlling processes with cells, when cells replicate exponentially, and showed one way they could be controlled, when the immune system kills off any cells that replicate much faster than their neighbors. (He also played the guitar to immune-system-themed songs…also a good colloquium strategy for those who can pull it off!)

Finally, a picture is worth a thousand words…as long as it’s a clear one. For an audience that won’t follow most of your equations, it’s crucial to show them something visual: graphics, puns, pictures of equipment or graphs. Crucially, though, your graphics should be something the audience can understand. If you put up a graph with a lot of incomprehensible detail (parameters you haven’t explained, or a setup your audience doesn’t get), then your audience gets stuck. Much like an unfamiliar word, a mysterious graph will have members of the audience scratching their heads, trying to figure out what it means. They’ll be so busy trying, they’ll miss what you say next, and you’ll lose them! So yes, put in graphs, put in pictures: but make sure that the ones you use, you have time to explain.

Answering Questions: Virtue or Compulsion?

I was talking to a colleague about this blog. I mentioned worries I’ve had about email conversations with readers: worries about whether I’m communicating well, whether the readers are really understanding. For the colleague though, something else stood out:

“You sure are generous with your time.”

Am I?

I’d never really thought about it that way before. It’s not like I drop everything to respond to a comment, or a message. I leave myself a reminder, and get to it when I have time. To the extent that I have a time budget, I don’t spend it freely: I prioritize work before chatting with my readers, as nice as you folks are.

At the same time, though, I think my colleague was getting at a real difference there. It’s true that I don’t answer questions right away. But I do answer them eventually. I can’t imagine being asked a question, and just never answering it.

There are exceptions, of course. If you’re obviously just trolling, just insulting me or messing with me or asking the same question over and over, yeah I’ll skip your question. And if I don’t understand what you’re asking, there’s only so much effort I’m going to put in to try to decipher it. Even in those cases, though, I have a certain amount of regret. I have to take a deep breath and tell myself no, I can really skip this one.

On the one hand, this feels like a moral obligation, a kind of intellectual virtue. If knowledge, truth, information are good regardless of anything else, then answering questions is just straightforwardly good. People ought to know more, asking questions is how you learn, and that can’t work unless we’re willing to teach. Even if there’s something you need to keep secret, you can at least say something, if only to explain why you can’t answer. Just leaving a question hanging feels like something bad people do.

On the other hand, I think this might just be a compulsion, a weird quirk of my personality. It may even be more bad than good, an urge that makes me “waste my time”, or makes me too preoccupied with what others say, drafting responses in my head until I find release by writing them down. I think others are much more comfortable just letting a question lie, and moving on. It feels a bit like the urge to have the last word in a conversation, just more specific: if someone asks me to have the last word, I feel like I have to oblige!

I know this has to have its limits. The more famous bloggers get so many questions they can’t possibly respond to all of them. I’ve seen how people like Neil Gaiman describe responding to questions on tumblr, just opening a giant pile of unread messages, picking a few near the top, and then going back to their day. I can barely stand leaving unread messages in my email. If I got that famous, I don’t know how I’d deal with that. But I’d probably figure something out.

Am I too generous with you guys? Should people always answer questions? And does the fact that I ended this post with questions mean I’ll get more comments?

Of Snowmass and SAGEX

arXiv-watchers might have noticed an avalanche of papers with the word Snowmass in the title. (I contributed to one of them.)

Snowmass is a place, an area in Colorado known for its skiing. It’s also an event in that place, the Snowmass Community Planning Exercise for the American Physical Society’s Division of Particles and Fields. In plain terms, it’s what happens when particle physicists from across the US get together in a ski resort to plan their future.

Usually someone like me wouldn’t be involved in that. (And not because it’s a ski resort.) In the past, these meetings focused on plans for new colliders and detectors. They got contributions from experimentalists, and from a few theorists whose work focuses heavily on those machines, but not from the more “formal” theorists beyond that.

This Snowmass is different. It’s different because of Corona, which changed it from a big meeting in a resort to a spread-out series of meetings and online activities. It’s also different because they invited theorists to contribute, and not just those interested in particle colliders. The theorists involved study everything from black holes and quantum gravity to supersymmetry and the mathematics of quantum field theory. Groups focused on each topic submit “white papers” summarizing the state of their area. These white papers in turn get organized and summarized into a few subfields, which in turn contribute to the planning exercise. No-one I’ve talked to is entirely clear on how this works, how much the white papers will actually be taken into account or by whom. But it seems like a good chance to influence US funding agencies, like the Department of Energy, and see if we can get them to prioritize our type of research.

Europe has something similar to Snowmass, called the European Strategy for Particle Physics. It also has smaller-scale groups, with their own purposes, goals, and funding sources. One such group is called SAGEX: Scattering Amplitudes: from Geometry to EXperiment. SAGEX is an Innovative Training Network, an organization funded by the EU to train young researchers, in this case in scattering amplitudes. Its fifteen students are finishing their PhDs and ready to take the field by storm. Along the way, they spent a little time in industry internships (mostly at Maple and Mathematica), and quite a bit of time working on outreach.

They have now summed up that outreach work in an online exhibition. I’ve had fun exploring it over the last couple days. They’ve got a lot of good content there, from basic explanations of relativity and quantum mechanics, to detailed games involving Feynman diagrams and associahedra, to a section that uses solitons as a gentle introduction to integrability. If you’re in the target audience, you should check it out!