Tag Archives: DoingScience

Conferences Are Work! Who Knew?

I’ve been traveling for over a month now, from conference to conference, with a bit of vacation thrown in at the end.

(As such, I haven’t had time to read up on the recent announcement of the detection of neutrinos and high-energy photons from a blazar; Matt Strassler has a nice piece on it.)

One thing I didn’t expect was how exhausting going to three conferences in a row would be. I didn’t give any talks this time around, so I thought I was skipping the “work” part. But sitting in a room for talk after talk, listening and taking notes, turns out to still be work! There’s effort involved in paying attention, especially in a scientific talk where the details matter. You assess the talks in your head, turning concepts around and thinking about what you might do with them. It’s the kind of thing you don’t notice for a seminar or two, but at a conference, after a while, it really builds up. After three, let’s just say I’ve really needed this vacation. I’ll be back at work next week, and maybe I’ll have a longer blog post for you folks. Until then, I ought to get some rest!

Amplitudes 2018

This week, I’m at Amplitudes, my field’s big yearly conference. The conference is at SLAC National Accelerator Laboratory this year, a familiar and lovely place.

[Image: IMG_20180620_183339441_HDR]

Welcome to the Guest House California

It’s been a packed conference, with a lot of interesting talks. Recordings and slides of most of them should be up at this point, for those following at home. I’ll comment on a few that caught my attention; I might do a more in-depth post later.

The first morning was dedicated to gravitational waves. At the QCD Meets Gravity conference last December I noted that amplitudes folks were very eager to do something relevant to LIGO, but that it was still a bit unclear how we could contribute (aside from Pierpaolo Mastrolia, who had already figured it out). The following six months appear to have cleared things up considerably, and Clifford Cheung and Donal O’Connell’s talks laid out quite concrete directions for this kind of research.

I’d seen Erik Panzer talk about the Hepp bound two weeks ago at Les Houches, but that was for a much more mathematically-inclined audience. It’s been interesting seeing people here start to see the implications: a simple method to classify and estimate (within 1%!) Feynman integrals could be a real game-changer.

Brenda Penante’s talk made me rethink a slogan I like to quote, that N=4 super Yang-Mills is the “most transcendental” part of QCD. While this is true in some cases, in many ways it’s actually least true for amplitudes, with quite a few counterexamples. For other quantities (like the form factors that were the subject of her talk) it’s true more often, and it’s still unclear when we should expect it to hold, or why.

Nima Arkani-Hamed has a reputation for talks that end up much longer than scheduled. Lately, it seems to be due to the sheer number of projects he’s working on. He had to rush at the end of his talk, which would have been about cosmological polytopes. I’ll have to ask his collaborator Paolo Benincasa for an update when I get back to Copenhagen.

Tuesday afternoon was a series of talks on the “NNLO frontier”, two-loop calculations that form the state of the art for realistic collider physics predictions. These talks brought home to me that the LHC really does need two-loop precision, and that the methods to get it are still pretty cumbersome. For those of us off in the airy land of six-loop N=4 super Yang-Mills, this is the challenge: can we make what these people do simpler?

Wednesday cleared up a few things for me, from what kinds of things you can write down in “fishnet theory” to how broad Ashoke Sen’s soft theorem is, to how fast John Joseph Carrasco could show his villanelle slide. It also gave me a clearer idea of just what simplifications are available for pushing to higher loops in supergravity.

Wednesday was also the poster session. It’s amazing how fast the field keeps growing; the sheer number of new faces was quite inspiring. One of those new faces pointed me to a paper I had missed, suggesting that elliptic integrals could end up trickier than most of us had thought.

Thursday featured two talks by people who work on the Conformal Bootstrap, one of our subfield’s closest relatives. (We’re both “bootstrappers” in some sense.) The talks were interesting, but there wasn’t a lot of engagement from the audience, so if the intent was to make a bridge between the subfields I’m not sure it panned out. Overall, I think we’re mostly just united by how we feel about Simon Caron-Huot, who David Simmons-Duffin described as “awesome and mysterious”. We also had an update on attempts to extend the Pentagon OPE to ABJM, a three-dimensional analogue of N=4 super Yang-Mills.

I’m looking forward to Friday’s talks, promising elliptic functions among other interesting problems.

Quelques Houches

For the last two weeks I’ve been at Les Houches, a village in the French Alps, for the Summer School on Structures in Local Quantum Field Theory.

[Image: IMG_20180614_104537425]

To assist, we have a view of some very large structures in local quantum field theory

Les Houches has a long history of prestigious summer schools in theoretical physics, going back to the activity of Cécile DeWitt-Morette after the Second World War. This was more of a workshop than a “school”, though: each speaker gave one talk, and they weren’t really geared for students.

The workshop was organized by Dirk Kreimer and Spencer Bloch, who both have a long track record of work on scattering amplitudes with a high level of mathematical sophistication. The group they invited was an even mix of physicists interested in mathematics and mathematicians interested in physics. The result was a series of talks that managed to both be thoroughly technical and ask extremely deep questions, including “is quantum electrodynamics really an asymptotic series?”, “are there simple graph invariants that uniquely identify Feynman integrals?”, and several talks about something called the Spine of Outer Space, which still sounds a bit like a bad sci-fi novel. Along the way there were several talks showcasing the growing understanding of elliptic polylogarithms, giving me an opportunity to quiz Johannes Broedel about his recent work.

While some of the more mathematical talks went over my head, they spurred a lot of productive dialogues between physicists and mathematicians. Several talks had last-minute slides, added as a result of collaborations that happened right there at the workshop. There was even an entire extra talk, by David Broadhurst, based on work he did just a few days before.

We also had a talk by Jaclyn Bell, a former student of one of the participants who was on a BBC reality show about training to be an astronaut. She’s heavily involved in outreach now, and honestly I’m a little envious of how good she is at it.

An Omega for Every Alpha

In particle physics, we almost always use approximations.

Often, we assume the forces we consider are weak. We use a “coupling constant”, some number written g or a or \alpha, and we assume it’s small, so that \alpha > \alpha^2 > \alpha^3. With this assumption, we can start drawing Feynman diagrams, and each “loop” we add to a diagram gives us a higher power of \alpha.

If \alpha isn’t small, then the trick stops working, the diagrams stop making sense, and we have to do something else.
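In formulas, the expansion just described looks like this (a schematic sketch, with A_L standing for the sum of all the L-loop diagrams):

```latex
\mathcal{A}(\alpha) \;=\; A_1\,\alpha \;+\; A_2\,\alpha^2 \;+\; A_3\,\alpha^3 \;+\; \cdots
```

Truncating after a few terms is a good approximation precisely when \alpha is small, so that each extra loop is a smaller correction.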

Except sometimes, when everything keeps working fine. This week, along with Simon Caron-Huot, Lance Dixon, Andrew McLeod, and Georgios Papathanasiou, I published what turned out to be a pretty cute example.

[Image: omegapic]

We call this fellow \Omega. It’s a family of diagrams that we can write down for any number of loops: to get more loops, just extend the “…”, adding more boxes in the middle. Count the number of lines sticking out, and you get six: these are “hexagon functions”, the type of function I’ve used to calculate six-particle scattering in N=4 super Yang-Mills.

The fun thing about \Omega is that we don’t have to think about it this way, one loop at a time. We can add up all the loops, \alpha times one loop plus \alpha^2 times two loops plus \alpha^3 times three loops, all the way up to infinity. And we’ve managed to figure out what those loops sum to.
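As a toy illustration of this kind of resummation (not the actual formula for \Omega, just a made-up series with a closed form): if the L-loop coefficient happened to be c^L/L! for some constant c, the sum over loops would be

```latex
\sum_{L=1}^{\infty} \frac{c^L}{L!}\,\alpha^L \;=\; e^{c\alpha} - 1,
```

a closed-form answer that makes sense for any \alpha, not just small ones.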

[Image: omegaeqnpic]

The result ends up beautifully simple. This formula isn’t just true for small coupling constants, it’s true for any number you care to plug in, making the forces as strong as you’d like.

We can do this with \Omega because we have equations relating different loops together. Solving those equations with a few educated guesses, we can figure out the full sum. We can also go back, and use those equations to take the \Omegas at each loop apart, finding a basis of functions needed to describe them.

That basis is the real reward here. It’s not the full basis of “hexagon functions”: if you wanted to do a full six-particle calculation, you’d need more functions than the ones \Omega is made of. What it is, though, is a basis we can describe completely, stating exactly what it’s made of for any number of loops.

We can’t do that with the hexagon functions, at least not yet: we have to build them loop by loop, one at a time before we can find the next ones. The hope, though, is that we won’t have to do this much longer. The \Omega basis covers some of the functions we need. Our hope is that other nice families of diagrams can cover the rest. If we can identify more functions like \Omega, things that we can sum to any number of loops, then perhaps we won’t have to think loop by loop anymore. If we know the right building blocks, we might be able to guess the whole amplitude, to find a formula that works for any \alpha you’d like.

That would be a big deal. N=4 super Yang-Mills isn’t the real world, but it’s complicated in some of the same ways. If we can calculate there without approximations, it should at least give us an idea of what part of the real-world answer can look like. And for a field that almost always uses approximations, that’s some pretty substantial progress.

Be Rational, Integrate Our Way!

I’ve got another paper up this week with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, about integrating Feynman diagrams.

If you’ve been following this blog for a while, you might be surprised: most of my work avoids Feynman diagrams at all costs. I’ve changed my mind, in part, because it turns out integrating Feynman diagrams can be a lot easier than I had thought.

At first, I thought Feynman integrals would be hard purely because they’re integrals. Those of you who’ve taken calculus might remember that, while taking derivatives was just a matter of following the rules, doing integrals required a lot more thought. Rather than one set of instructions, you had a set of tricks, meant to try to match your integral to the derivative of some known function. Sometimes the tricks worked, sometimes you just ended up completely lost.

As it turns out, that’s not quite the problem here. When I integrate a Feynman diagram, most of the time I’m expecting a particular kind of result, called a polylogarithm. If you know that’s the end goal, then you really can just follow the rules, using partial-fractioning to break your integral up into simpler integrations, linear pieces that you can match to the definition of polylogarithms. There are even programs that do this for you: Erik Panzer’s HyperInt is an especially convenient one.
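As a toy version of that recipe (a sketch using sympy rather than HyperInt, with an integrand invented purely for illustration): partial-fraction the integrand into linear pieces, integrate each piece into a log, and note that iterating the same kind of integration is what builds polylogarithms.

```python
import sympy as sp

x, t = sp.symbols('x t')

# An invented integrand whose denominator has only linear factors:
f = 1 / (x * (1 - x))

# Partial-fractioning breaks it into simpler, linear pieces...
pieces = sp.apart(f, x)

# ...each of which integrates to an ordinary logarithm:
F = sp.integrate(pieces, x)

# Iterating this kind of integration builds polylogarithms; for example,
# the dilogarithm Li_2(z) is exactly such an iterated integral:
#     Li_2(z) = Integral(-log(1 - t)/t, (t, 0, z))
```

Real Feynman integrands are of course far messier, but when the answer is polylogarithmic this is, at heart, the loop that tools like HyperInt automate.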

[Image: maplewhining]

Or it would be convenient, if Maple’s GUI wasn’t cursed…

Still, I wouldn’t have expected Feynman integrals to work particularly well, because they require too many integrations. You need to integrate a certain number of times to define a polylogarithm: for the ones we get out of Feynman diagrams, it’s two integrations for each loop the diagram has. The usual ways we calculate Feynman diagrams lead to a lot more integrations: the systematic method, using something called Symanzik polynomials, involves one integration per particle line in the diagram, which usually adds up to a lot more than two per loop.

When I arrived at the Niels Bohr Institute, I assumed everyone in my field knew about Symanzik polynomials. I was surprised when it turned out Jake Bourjaily hadn’t even heard of them. He was integrating Feynman diagrams by what seemed like a plodding, unsystematic method, taking the intro example from textbooks and just applying it over and over, gaining no benefit from all of the beautiful graph theory that goes into the Symanzik polynomials.

I was even more surprised when his method turned out to be the better one.

Avoid Symanzik polynomials, and you can manage with a lot fewer integrations. Suddenly we were pretty close to the “two integrations per loop” sweet spot, with only one or two “extra” integrations to do.

A few more advantages, and Feynman integrals were actually looking reasonable. The final insight came when we realized that just writing the problem in the right variables made a huge difference.

HyperInt, as I mentioned, tries to break a problem up into simpler integrals. Specifically, it’s trying to make things linear in the integration variable. In order to do this, sometimes it has to factor quadratic polynomials, like so:

[Image: partialfractionformula]
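Schematically, the step in question looks like this (an illustrative reconstruction of the standard partial-fractioning identity, not necessarily the exact formula pictured): factoring a quadratic denominator introduces its roots, and with them, square roots of the discriminant,

```latex
\frac{1}{x^2 + b\,x + c} \;=\; \frac{1}{x_+ - x_-}\left(\frac{1}{x - x_+} \;-\; \frac{1}{x - x_-}\right),
\qquad x_\pm \;=\; \frac{-b \pm \sqrt{b^2 - 4c}}{2}.
```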

Notice the square roots in this formula? Those can make your life a good deal trickier. Once you’ve got irrational functions in the game, HyperInt needs extra instructions for how to handle them, and integration is a lot more cumbersome.

The last insight, then, and the key point in our paper, is to avoid irrational functions. To do that, we use variables that rationalize the square roots.
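A classic one-variable example of what “rationalizing” means (an illustration, not one of the actual changes of variables from our paper): the substitution x = (1 + t^2)/(2t) gives

```latex
x^2 - 1 \;=\; \left(\frac{1 - t^2}{2\,t}\right)^{\!2},
\qquad\text{so}\qquad
\sqrt{x^2 - 1} \;=\; \pm\,\frac{1 - t^2}{2\,t},
```

turning every appearance of the square root into a rational function of t.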

We get these variables from one of the mainstays of our field, called momentum twistors. These variables are most useful in our favorite theory of N=4 super Yang-Mills, but they’re useful in other contexts too. By parametrizing them with a good “chart”, one with only the minimum number of variables we need to capture the integral, we can rationalize most of the square roots we encounter.

That “most” is going to surprise some people. We rationalized all of the expected square roots, letting us do integrals all the way to four loops in a few cases. But there were some unexpected square roots, and those we couldn’t rationalize.

These unexpected square roots don’t just make our life more complicated: if they stick around in a physically meaningful calculation, they’ll upset a few other conjectures as well. People had expected that these integrals were made of certain kinds of “letters”, organized by a mathematical structure called a cluster algebra. That cluster algebra structure doesn’t have room for square roots, which suggests that it can’t be the full story here.

The integrals that we can do, though, with no surprise square roots? They’re much easier than anyone expected, much easier than with any other method. Rather than running around doing something fancy, we just integrated things the simple, rational way…and it worked!

Writing the Paper Changes the Results

You spent months on your calculation, but finally it’s paid off. Now you just have to write the paper. That’s the easy part, right?

Not quite. Even if writing itself is easy for you, writing a paper is never just writing. To write a paper, you have to make your results as clear as possible, to fit them into one cohesive story. And often, doing that requires new calculations.

It’s something that first really struck me when talking to mathematicians, who may be the most extreme case. For them, a paper needs to be a complete, rigorous proof. Even when they have a result solidly plotted out in their head, when they’re sure they can prove something and they know what the proof needs to “look like”, actually getting the details right takes quite a lot of work.

Physicists don’t have quite the same standards of rigor, but we have a similar paper-writing experience. Often, trying to make our work clear raises novel questions. As we write, we try to put ourselves in the mind of a potential reader. Sometimes our imaginary reader is content and quiet. Other times, though, they object:

“Does this really work for all cases? What about this one? Did you make sure you can’t do this, or are you just assuming? Where does that pattern come from?”

Addressing those objections requires more work, more calculations. Sometimes, it becomes clear we don’t really understand our results at all! The paper takes a new direction, flows with new work to a new, truer message, one we wouldn’t have discovered if we didn’t sit down and try to write it out.

Proofs and Insight

Hearing us talking about the Amplituhedron, the professor across the table chimed in.

“The problem with you amplitudes people, I never know what’s a conjecture and what’s proven. The Amplituhedron, is that still a conjecture?”

The Amplituhedron, indeed, is still a conjecture (although a pretty well-supported one at this point). After clearing that up, we got to talking about the role proofs play in theoretical physics.

The professor was worried that we weren’t being direct enough in stating which ideas in amplitudes had been proven. While I agreed that we should be clearer, one of his points stood out to me: he argued that one benefit of clearly labeling conjectures is that it motivates people to go back and prove things. That’s a good thing to do in general, to be sure that your conjecture is really true, but often it has an added benefit: even if you’re pretty sure your conjecture is true, proving it can show you why it’s true, leading to new and valuable insight.

There’s a long history of important physics only becoming clear when someone took the time to work out a proof. But in amplitudes right now, I don’t think our lack of proofs is leading to a lack of insight. That’s because the kinds of things we’d like to prove often require novel insight themselves.

It’s not clear what it would take to prove the Amplituhedron. Even if you’ve got a perfectly clear, mathematically nice definition for it, you’d still need to prove that it does what it’s supposed to do: that it really calculates scattering amplitudes in N=4 super Yang-Mills. In order to do that, you’d need a very complete understanding of how those calculations work. You’d need to be able to see how known methods give rise to something like the Amplituhedron, or to find the Amplituhedron buried deep in the structure of the theory.

If you had that kind of insight? Then yeah, you could prove the Amplituhedron, and accomplish remarkable things along the way. But more than that, if you had that sort of insight, you would prove the Amplituhedron. Even if you didn’t know about the Amplituhedron to begin with, or weren’t sure whether or not it was a conjecture, once you had that kind of insight proving something like the Amplituhedron would be the inevitable next step. The signpost, “this is a conjecture” is helpful for other reasons, but it doesn’t change circumstances here: either you have what you need, or you don’t.

This contrasts with how progress works in other parts of physics, and how it has worked at other times. Sometimes, a field is moving so fast that conjectures get left by the wayside, even when they’re provable. You get situations where everyone busily assumes something is true and builds off it, and no-one takes the time to work out why. In that sort of field, it can be really valuable to clearly point out conjectures, so that someone gets motivated to work out the proof (and to hopefully discover something along the way).

I don’t think amplitudes is in that position though. It’s still worthwhile to signal our conjectures, to make clear what needs a proof and what doesn’t. But our big conjectures, like the Amplituhedron, aren’t the kind of thing someone can prove just by taking some time off and working on it. They require new, powerful insight. Because of that, our time is typically best served looking for that insight, finding novel examples and unusual perspectives that clear up what’s really going on. That’s a fair bit broader an activity than just working out a proof.

An Elliptical Workout

I study scattering amplitudes, probabilities that particles scatter off each other.

In particular, I’ve studied them using polylogarithmic functions. Polylogarithmic functions can be taken apart into “logs”, which obey identities much like logarithms do. They’re convenient and nice, and for my favorite theory of N=4 super Yang-Mills they’re almost all you need.

Well, until ten particles get involved, anyway.

That’s when you start needing elliptic integrals, and elliptic polylogarithms. These integrals substitute one of the “logs” of a polylogarithm with an integration over an elliptic curve.
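To make that slightly more concrete (a sketch in one common notation; conventions differ between papers): multiple polylogarithms are built by iterated integration over “d log” kernels, with the empty case G(;z) = 1,

```latex
G(a_1, a_2, \ldots, a_n;\, z) \;=\; \int_0^z \frac{dt}{t - a_1}\, G(a_2, \ldots, a_n;\, t),
```

and the elliptic versions replace one of the dt/(t - a_i) kernels by a kernel built from dt/y, where y^2 is a cubic or quartic polynomial in t, that is, an elliptic curve.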

And with Jacob Bourjaily, Andrew McLeod, Marcus Spradlin, and Matthias Wilhelm, I’ve now computed one.

[Image: tenpointimage]

This one, to be specific

Our paper, The Elliptic Double-Box Integral, went up on the arXiv last night.

The last few weeks have been a frenzy of work, finishing up our calculations and writing the paper. It’s the fastest I’ve ever gotten a paper out, which has been a unique experience.

Computing this integral required new, so far unpublished tricks by Jake Bourjaily, as well as some rather powerful software and Marcus Spradlin’s extensive expertise in simplifying polylogarithms. In the end, we got the integral into a “canonical” form, one other papers had proposed as the right way to represent it, with the elliptic curve in a form standardized by Weierstrass.

One of the advantages of fixing a “canonical” form is that it should make identities obvious. If two integrals are actually the same, then writing them according to the same canonical rules should make that clear. This is one of the nice things about polylogarithms, where these identities are really just identities between logs and the right form is comparatively easy to find.

Surprisingly, the form we found doesn’t do this. We can write down an integral in our “canonical” form that looks different, but really is the same as our original integral. The form other papers had suggested, while handy, can’t be the final canonical form.

What the final form should be, we don’t yet know. We have some ideas, but we’re also curious what other groups are thinking. We’re relatively new to elliptic integrals, and there are other groups with much more experience with them, some with papers coming out soon. As far as we know they’re calculating slightly different integrals, ones more relevant for the real world than for N=4 super Yang-Mills. It’s going to be interesting seeing what they come up with. So if you want to follow this topic, don’t just watch for our names on the arXiv: look for Claude Duhr and Falko Dulat, Luise Adams and Stefan Weinzierl. In the elliptic world, big things are coming.

Post-Scarcity Academia

Anyone will tell you that academia is broken.

The why varies, of course: some blame publication pressure, or greedy journals. Some think it’s the fault of grant committees, or tenure committees, or grad admission committees. Some argue we’re driving away the wrong people, others that we’re letting in the wrong people. Some place the fault with the media, or administrators, or the government, or the researchers themselves. Some believe the problem is just a small group of upstarts, others want to tear the whole system down.

If there’s one common theme to every “academia is broken” take, it’s limited resources. There are only so many people who can make a living doing research. Academia has to pick and choose who these people are and what they get to do, and anyone who thinks the system is broken thinks those choices could be made better.

As I was writing my version of the take, I started wondering. What if we didn’t have to choose? What would academia look like in a world without limited resources, where no-one needed to work for a living? Can we imagine what that world might look like?

Then I realized I didn’t need to imagine it. I’d already seen it.

[Image: giantitpscreenshot]

And it was glorious

Let me tell you a bit about Dungeons and Dragons.

Dungeons and Dragons doesn’t have “pro gamers”; nobody makes money playing it. It isn’t even really the kind of game you can win or lose. It’s collaborative storytelling, backed up with a pile of dice and rulebooks. Nonetheless, Dungeons and Dragons has an active community dedicated to thinking about the game. They call themselves “optimizers”, and they focus on figuring out the best way the rules allow to do what they want to do.

Sometimes, the goal is practical: “what’s the best archer I can make?” “how can I make a character that has something useful to do no matter what?” Sometimes it’s more farfetched: “can I deal infinite damage?” “how can I make a god at level one?” Optimizing for these goals requires seeking out obscure rules, debating loopholes and the meaning of the text, and calculating probabilities.

I like to joke that Dungeons and Dragons was my first academic community, and that isn’t too far from the truth. These are people obsessed with understanding a complex system, who “publish” their research in forum posts, who collaborate and compete and care about finding the truth. While these people do have day jobs, that wasn’t a real limit. Dungeons and Dragons, I am forced to admit, is easier than theoretical physics. Even with day jobs or school, most of the D&D optimization community had plenty of time to do all the “research” they wanted. In a very real sense, they’re a glimpse at a post-scarcity academia.

There’s another parallel, one relevant to the current situation in theoretical physics. When I was most active in optimization, we played an edition of the game that was out of print. Normally there’s a sort of feedback between game designers and optimizers. As new expansions and errata are released, debates in the optimization community get resolved or re-ignited. With an out-of-print edition, though, that feedback isn’t available. The optimization community was left by itself, examining whatever evidence it already had. This feels a lot like the current situation in physics, when so many experiments are just confirming the Standard Model. Without much feedback, the community has to evolve on its own.

 

So what did post-scarcity academia look like?

First, the good: this was a community highly invested in education. The best way to gain status wasn’t to build the strongest character, or discover a new trick. Instead, the most respected members of the community were the handbook writers, people who wrote long, clearly written forum posts summarizing optimization knowledge for newer players. I’m still not at the point where I read physics textbooks for fun, but back when I was active I would absolutely read optimization handbooks for fun. For those who wanted to get involved, the learning curve was about as well-signposted as it could be.

It was a community that could display breathtaking creativity, as well as extreme diligence. Some optimization was off-the-cuff and easy, but a lot of it took real work or real insight, and it showed. People would write short stories about the characters they made, or spend weeks cataloging every book that mentioned a particular rule. Despite not having to do their “research” for a living, motivation was never in short supply.

All that said, I think people yearning for a post-scarcity academia would be disappointed. If you think people do derivative, unoriginal work just because of academic careers, then I regret to inform you that a lot of optimization was unoriginal. There were a lot of posts that were just remixes of old ideas, packaged into a “new” build. There were also plenty of repetitive, pointless arguments, to the point that we’d joke about “Monkday” and “Wizard Wednesday”.

There was also a lot of attention-seeking behavior. There’s no optimization media, no optimization jobs that look for famous candidates, but people still cared about being heard, and pitched their work accordingly. We’d get a lot of overblown posts: “A Fighter that can beat any Wizard!” (because he’s been transformed by a spell into an all-powerful shapeshifter), “A Sorcerer that can beat any Wizard!” (using houserules which change every time someone points out a flaw in the idea).

(Wizards, as you may be noticing, were kind of the String Theory of that community.)

 

Some problems in academia are caused by bad incentives, by the structure of academic careers. Some, though, are caused because academics are human beings. If we didn’t have to work for a living, academics would probably have different priorities, and we might work on a wider range of projects. But I suspect we’d still have good days and bad, that we’d still puff ourselves up for attention and make up dubious solutions to famous problems.

Of course, Dungeons and Dragons optimizers aren’t the only example of “post-scarcity academia”, or even a perfect example. They’ve got their own pressures, due to the structure of the community, that shape them in particular ways. I’d be interested to learn about other “amateur academics”, and how they handle things. My guess is that the groups whose work is closer to “real academia” (for example, the Society for Creative Anachronism) are more limited by their day jobs, but otherwise might be more informative. If there’s a “post-scarcity academia” you’re familiar with, mention it in the comments!

The Quantum Kids

I gave a pair of public talks at the Niels Bohr International Academy this week on “The Quest for Quantum Gravity” as part of their “News from the NBIA” lecture series. The content should be familiar to long-time readers of this blog: I talked about renormalization, and gravitons, and the whole story leading up to them.

(I wanted to title the talk “How I Learned to Stop Worrying and Love Quantum Gravity”, like my blog post, but was told Danes might not get the Dr. Strangelove reference.)

I also managed to work in some history, which made its way into the talk after Poul Damgaard, the director of the NBIA, told me I should ask the Niels Bohr Archive about Gamow’s Thought Experiment Device.

“What’s a Thought Experiment Device?”

[Image: einsteinbox]

This, apparently

If you’ve heard of George Gamow, you’ve probably heard of the Alpher-Bethe-Gamow paper, his work with grad student Ralph Alpher on the origin of atomic elements in the Big Bang, where he added Hans Bethe to the paper purely for an alpha-beta-gamma pun.

As I would learn, Gamow’s sense of humor was prominent quite early on. As a research fellow at the Niels Bohr Institute (essentially a postdoc) he played with Bohr’s kids, drew physics cartoons…and made Thought Experiment Devices. These devices were essentially toy experiments, apparatuses that couldn’t actually work but that symbolized some physical argument. The one I used in my talk, pictured above, commemorated Bohr’s triumph over one of Einstein’s objections to quantum theory.

Learning more about the history of the institute, I kept noticing the young researchers, the postdocs and grad students.

[Image: h155]

Lev Landau, George Gamow, Edward Teller. The kids are Aage and Ernest Bohr. Picture from the Niels Bohr Archive.

We don’t usually think about historical physicists as grad students. The only exception I can think of is Feynman, with his stories about picking locks at the Manhattan project. But in some sense, Feynman was always a grad student.

This was different. This was Lev Landau, patriarch of Russian physics, crowning name in a dozen fields and author of a series of textbooks of legendary rigor…goofing off with Gamow. This was Edward Teller, father of the Hydrogen Bomb, skiing on the institute lawn.

These were the children of the quantum era. They came of age when the laws of physics were being rewritten, when everything was new. Starting there, they could do anything, from Gamow’s cosmology to Landau’s superconductivity, spinning off whole fields in the new reality.

On one level, I envy them. It’s possible they were the last generation to be on the ground floor of a change quite that vast, a shift that touched all of physics, the opportunity to each become gods of their own academic realms.

I’m glad to know about them too, though, to see them as rambunctious grad students. It’s all too easy to feel like there’s an unbridgeable gap between postdocs and professors, to worry that the only people who make it through seem to have always been professors at heart. Seeing Gamow and Landau and Teller as “quantum kids” dispels that: these are all-too-familiar grad students and postdocs, joking around in all-too-familiar ways, who somehow matured into some of the greatest physicists of their era.