Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one was funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”
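
To give a taste of what “single-valued” means here (my gloss, not anything from the talks): the ordinary logarithm is multivalued in the complex plane, since log z shifts by 2πi every time z loops around the origin, but the combination

$$\log|z|^2 = \log z + \log\bar{z}$$

is single-valued, because the shifts from the two terms cancel. Brown’s construction extends this idea to the whole family of polylogarithms, and on numbers it does things like sending ζ(2) to zero while sending the odd zeta values ζ(2n+1) to 2ζ(2n+1).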

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a particular case of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) especially manifest. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak, but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past; in particular, it’s not yet possible to get the answer just by guessing its form, without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit far-fetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.

Pi Day Alternatives

On Pi Day, fans of the number pi gather to recite its digits and eat pies. It is the most famous of numerical holidays, but not the only one. Have you heard of the holidays for other famous numbers?

Tau Day: Celebrated on June 28. Observed by sitting around gloating about how much more rational one is than everyone else, then getting treated with high-energy tau leptons for terminal pedantry.

Canadian Modular Pi Day: Celebrated on February 3. Observed by confusing your American friends.

e Day: Celebrated on February 7. Observed in middle school classrooms, explaining the wonders of exponential functions and eating foods like eggs and eclairs. Once the students leave, drop tabs of ecstasy instead.

Golden Ratio Day: Celebrated on January 6. Rub crystals on pyramids and write vaguely threatening handwritten letters to every physicist you’ve heard of.

Euler Gamma Day: Celebrated on May 7 by dropping on the floor and twitching.

Riemann Zeta Daze: The first year, forget about it. The second, celebrate on January 6. The next year, January 2. After that, celebrate on New Year’s Day earlier and earlier in the morning each year until you can’t tell the difference any more.

A Field That Doesn’t Read Its Journals

Last week, the University of California system ended negotiations with Elsevier, one of the top academic journal publishers. UC had been trying to get Elsevier to switch to a new type of contract, one in which instead of paying for access to journals they pay for their faculty to publish, then make all the results openly accessible to the public. In the end they couldn’t reach an agreement and thus didn’t renew their contract, cutting Elsevier off from millions of dollars and cutting their faculty off from reading certain (mostly recent) Elsevier journal articles. There’s a nice interview here with one of the librarians who was sent to negotiate the deal.

I’m optimistic about what UC was trying to do. Their proposal sounds like it addresses some of the concerns raised here with open-access systems. Currently, journals that offer open access often charge fees directly to the scientists publishing in them, fees that have to be scrounged up from somebody’s grant at the last minute. By setting up a deal for all their faculty together, UC would have avoided that. While the deal fell through, having an organization as big as the whole University of California system advocating open access (and putting the squeeze on Elsevier’s profits) seems like it can only lead to progress.

The whole situation feels a little surreal, though, when I compare it to my own field.

At the risk of jinxing it, my field’s relationship with journals is even weirder than xkcd says.

arXiv.org is a website that hosts what are called “preprints”, which originally meant papers that haven’t been published yet. They’re online, freely accessible to anyone who wants to read them, and will be for as long as arXiv exists to host them. Essentially everything anyone publishes in my field ends up on arXiv.

Journals don’t mind, in part, because many of them are open-access anyway. There’s an organization, SCOAP3, that runs what is in some sense a large-scale version of what UC was trying to set up: instead of paying for subscriptions, university libraries pay SCOAP3 and it covers the journals’ publication costs.

This means that there are two coexisting open-access systems, the journals themselves and arXiv. But in practice, arXiv is the one we actually use.

If I want to show a student a paper, I don’t send them to the library or the journal website, I tell them how to find it on arXiv. If I’m giving a talk, there usually isn’t room for a journal reference, so I’ll give the arXiv number instead. In a paper, we do give references to journals…but they’re most useful when they have arXiv links as well. I think the only times I’ve actually read an article in a journal were for articles so old that arXiv didn’t exist when they were published.

We still submit our papers to journals, though. Peer review still matters: we still want to determine whether our results are cool enough for the fancy journals or only good enough for the ordinary ones. We still put journal citations on our CVs so employers and grant agencies know not only what we’ve done, but which reviewers liked it.

But the actual copy-editing and formatting and publishing, that the journals still employ people to do? Mostly, it never gets read.

In my experience, that editing isn’t too impressive. Often, it’s about changing things to fit the journal’s preferences: its layout, its conventions, its inconvenient proprietary document formats. I haven’t seen them try to fix grammar, or improve phrasing. Maybe my papers have unusually good grammar, maybe they do more for other papers. And maybe they used to do more, when journals had a more central role. But now, they don’t change much.

Sometimes the journal version ends up on arXiv, if the authors put it there. Sometimes it doesn’t. And sometimes the result is in between. For my last paper about Calabi-Yau manifolds in Feynman diagrams, we got several helpful comments from the reviewers, but the journal also weighed in to get us to remove our more whimsical language, down to the word “bestiary”. For the final arXiv version, we updated for the reviewer comments, but kept the whimsical words. In practice, that version is the one people in our field will read.

This has some awkward effects. It means that sometimes important corrections don’t end up on arXiv, and people don’t see them. It means that technically, if someone wanted to insist on keeping an incorrect paper online, they could, even if a corrected version was published in a journal. And of course, it means that a large amount of effort is dedicated to publishing journal articles that very few people read.

I don’t know whether other fields could get away with this kind of system. Physics is small. It’s small enough that it’s not so hard to get corrections from authors when one needs to, small enough that social pressure can get wrong results corrected. It’s small enough that arXiv and SCOAP3 can exist, funded by universities and private foundations. A bigger field might not be able to do any of that.

As physicists, we should keep in mind that our system can and should still be improved. For other fields, it’s worth considering whether you can move in this direction, and what it would cost to do so. Academic publishing is in a pretty bizarre place right now, but hopefully we can get it to a better one.

Hadronic Strings and Large-N Field Theory at NBI

One of string theory’s early pioneers, Michael Green, is currently visiting the Niels Bohr Institute as part of a program by the Simons Foundation. The program includes a series of conferences. This week we are having the first such conference, on Hadronic Strings and Large-N Field Theory.

The bulk of the conference focused on new progress on an old subject, using string theory to model the behavior of quarks and gluons. There were a variety of approaches on offer, some focused on particular approximations and others attempting to construct broader, “phenomenological” models.

The other talks came from a variety of subjects, loosely tied together by the topic of “large N field theories”. “N” here is the number of colors: while the real world has three “colors” of quarks, you can imagine a world with more. This leads to simpler calculations, and often to connections with string theory. Some talks dealt with attempts to “solve” certain large-N theories exactly. Others ranged farther afield, even to discussions of colliding black holes.

Changing the Question

I’ve recently been reading Why Does the World Exist?, a book by the journalist Jim Holt. In it he interviews a range of physicists and philosophers, asking each the question in the title. As the book goes on, he concludes that physicists can’t possibly give him the answer he’s looking for: even if physicists explain the entire universe from simple physical laws, they still would need to explain why those laws exist. A bit disappointed, he turns back to the philosophers.

Something about Holt’s account rubs me the wrong way. Yes, it’s true that physics can’t answer this kind of philosophical problem, at least not in a logically rigorous way. But I think we do have a chance of answering the question nonetheless…by eclipsing it with a better question.

How would that work? Let’s consider a historical example.

Does the Earth go around the Sun, or does the Sun go around the Earth? We learn in school that this is a solved question: Copernicus was right, the Earth goes around the Sun.

The details are a bit more subtle, though. The Sun and the Earth both attract each other: while it is a good approximation to treat the Sun as fixed, in reality both it and the Earth move in elliptical orbits around a shared focus, their common center of mass (which is close to, but not exactly, the center of the Sun). Furthermore, this is all dependent on your choice of reference frame: if you wish you can choose coordinates in which the Earth stays still while the Sun moves.
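
To put numbers on “close to, but not exactly”: here’s a quick back-of-the-envelope sketch in Python, using standard textbook values and ignoring the other planets (which in reality pull the center of mass around far more than the Earth does).

```python
# Rough estimate: distance from the Sun's center to the Sun-Earth
# barycenter, the shared focus of the two elliptical orbits.
# Standard textbook values; other planets ignored.

M_SUN = 1.989e30    # mass of the Sun, kg
M_EARTH = 5.972e24  # mass of the Earth, kg
AU = 1.496e11       # mean Sun-Earth distance, meters
R_SUN = 6.957e8     # radius of the Sun, meters

# The center of mass sits a fraction M_EARTH / (M_SUN + M_EARTH)
# of the way from the Sun's center to the Earth.
offset = AU * M_EARTH / (M_SUN + M_EARTH)

print(f"Barycenter offset: {offset / 1000:.0f} km")           # ~450 km
print(f"Fraction of the Sun's radius: {offset / R_SUN:.2%}")  # ~0.06%
```

The shared focus sits only about 450 kilometers from the Sun’s center, buried deep inside the Sun itself, which is why treating the Sun as fixed works as well as it does.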

So what stops a modern-day Tycho Brahe from arguing that the Sun and the stars and everything else orbit around the Earth?

The reason we aren’t still debating the Copernican versus the Tychonic system isn’t that we proved Copernicus right. Instead, we replaced the old question with a better one. We don’t actually care which object is the center of the universe. What we care about is whether we can make predictions, and what mathematical laws we need to do so. Newton’s law of universal gravitation lets us calculate the motion of the solar system. It’s easier to teach it by talking about the Earth going around the Sun, so we talk about it that way. The “philosophical” question, about the “center of the universe”, has been explained away by the more interesting practical question.

My suspicion is that other philosophical questions will be solved in this way. Maybe physicists can’t solve the ultimate philosophical question, of why the laws of physics are one way and not another. But if we can predict unexpected laws and match observations of the early universe, then we’re most of the way to making the question irrelevant. Similarly, perhaps neuroscientists will never truly solve the mystery of consciousness, at least the way philosophers frame it today. Nevertheless, if they can describe brains well enough to understand why we act like we’re conscious, if they have something in their explanation that looks sufficiently “consciousness-like”, then it won’t matter whether they meet the philosophical requirements; people simply won’t care. The question will have been eaten by a more interesting question.

This can happen in physics by itself, without reference to philosophy. Indeed, it may happen again soon. In the New Yorker this week, Natalie Wolchover has an article in which she talks to Nima Arkani-Hamed about the search for better principles to describe the universe. In it, Nima talks about looking for a deep mathematical question that the laws of physics answer. Peter Woit has expressed confusion that Nima can both believe this and pursue various complicated, far-fetched, and at times frankly ugly ideas for new physics.

I think the way to reconcile these two perspectives is to understand that Nima takes naturalness seriously. The naturalness argument in physics states that physics as we currently see it is “unnatural”: in particular, that we can’t get it cleanly from the kinds of physical theories we understand. If you accept the argument as stated, then you get driven down a rabbit hole of increasingly strange solutions: versions of supersymmetry that cleverly hide from all experiments, hundreds of copies of the Standard Model, or even a multiverse.

Taking naturalness seriously doesn’t just mean accepting the argument as stated, though. It can also mean believing the argument is wrong, but wrong in an interesting way.

One interesting way naturalness could be wrong would be if our reductionist picture of the world, where the ultimate laws live on the smallest scales, breaks down. I’ve heard vague hints from physicists over the years that this might be the case, usually based on the way that gravity seems to mix small and large scales. (Wolchover’s article also hints at this.) In that case, you’d want to find not just a new physical theory, but a new question to ask, something that could eclipse the old question with something more interesting and powerful.

Nima’s search for better questions seems to drive most of his research now. But I don’t think he’s 100% certain that the old questions are wrong, so you can still occasionally see him talking about multiverses and the like.

Ultimately, we can’t predict when a new question will take over. It’s a mix of the social and the empirical, of new predictions and observations but also of which ideas are compelling and beautiful enough to get people to dismiss the old question as irrelevant. It feels like we’re due for another change…but we might not be, and even if we are it might be a long time coming.

Valentine’s Day Physics Poem 2019

It’s that time of year again! Time for me to dig in to my files and bring you yet another of my old physics poems.

Plagued with Divergences

“The whole scheme of local field theory is plagued with divergences”

Is divergence ever really unexpected?

If you asked a computer, what would it tell you?

You’d hear a whirring first, lungs and heart of the machine beating faster and faster.

And you’d dismiss it.
You knew this wasn’t going to be an easy interaction.
It doesn’t mean you’re going to diverge.

And perhaps it would try to warn you, write it there on the page.
It might even notice, its built-in instincts telling you, by the book,
“This will diverge.”

But instincts lie, and builders cheat.
And it doesn’t mean you’re going to diverge.

Now, you do everything the slow way,
Numerically.
You need a different answer.
Dismiss your instincts and force yourself through
Piece by piece.

And now, you can’t stop hearing the whir
The machine’s beating heart
Even when it should be at rest

And step by step, it tries to minimize its errors
And step by step, the errors grow

And exhausted, in the end, you see splashed across the screen
Something bigger than it should ever have been.

But sometimes things feel big and strange.
That’s just the way of the big wide world.
And it doesn’t mean you’re going to diverge.

You could have seen the signs,
Power-counted, seen what could overwhelm.
And you could have regulated, with an epsilon of flexibility.

But this one, this time, was supposed to be
Needed to be
Physical Truth
And truth doesn’t diverge

So you keep going,
Wheezing breath and painstaking calculation,
And every little thing blowing up

It’s not like there’s a better way to live.


The Particle Physics Curse of Knowledge

There’s a debate raging right now in particle physics, about whether and how to build the next big collider. CERN’s Future Circular Collider group has been studying different options, some more expensive and some less (Peter Woit has a nice summary of these here). This year, the European particle physics community will debate these proposals, deciding whether to include them in an updated European Strategy for Particle Physics. After that, it will be up to the various countries that are members of CERN to decide whether to fund the proposal. With the costs of the more expensive options hovering around $20 billion, this has led to substantial controversy.

I’m not going to offer an opinion here one way or another. Weighing this kind of thing requires knowing the alternatives: what else the European particle physics community might lobby for in the next few years, and once they decide, what other budget priorities each individual country has. I know almost nothing about either.

Instead of an opinion, I have an observation:

Imagine that primatologists had proposed a $20 billion primate center, able to observe gorillas in greater detail than ever before. The proposal might be criticized in any number of ways: there could be much cheaper ways to accomplish the same thing; the project might fail; it might be that we simply don’t care enough about primate behavior to spend $20 billion on it.

What you wouldn’t expect is the claim that a $20 billion primate center would teach us nothing new.

It probably wouldn’t teach us “$20 billion worth of science”, whatever that means. But a center like that would be guaranteed to discover something. That’s because we don’t expect primatologists’ theories to be exact. Even if gorillas behaved roughly as primatologists expected, the center would still see new behaviors, just as a consequence of looking at a new level of detail.

To pick a physics example, consider the gravitational wave observatory LIGO. Before its first observation of two black holes merging, announced in 2016, LIGO faced substantial criticism. After the initial runs didn’t detect anything, many physicists thought that the project was doomed to fail: that it would never be sensitive enough to detect the faint signals of gravitational waves past the messy vibrations of everyday life on Earth.

When it finally worked, though, LIGO did teach us something new. Not the existence of gravitational waves: we already knew about them. Rather, LIGO taught us new things about the kinds of black holes that exist. LIGO observed much bigger black holes than astronomers expected, a surprise big enough that it left some people skeptical. Even if it hadn’t, though, we almost certainly would have observed something new: there’s no reason to expect astronomers to perfectly predict the size of the universe’s black holes.

Particle physics is different.

I don’t want to dismiss the work that goes into collider physics (far too many people have dismissed it recently). Much, perhaps most, of the work on the LHC is dedicated not to detecting new particles, but to confirming and measuring the Standard Model. A new collider would call for heroic scientific effort. We’d learn revolutionary new things about how to build colliders, how to analyze data from colliders, and how to use the Standard Model to make predictions for colliders.

In the end, though, we expect those predictions to work. And not just to work reasonably well, but to work perfectly. While we might see something beyond the Standard Model, the default expectation is that we won’t, that after doing the experiments and analyzing the data and comparing to predictions we’ll get results that are statistically indistinguishable from an equation we can fit on a T-shirt. We’ll fix the constants on that T-shirt to an unprecedented level of precision, yes, but the form of the equation may well stay completely the same.

I don’t think there’s another field where that’s even an option. Nowhere else in all of science could we observe the world in unprecedented detail, capturing phenomena that had never been seen before…and end up perfectly matching our existing theory. There’s no other science where anyone would even expect that to happen.

That makes the argument here different from any argument we’ve faced before. It forces people to consider their deep priorities, to think not just about the best way to carry out this test or that but about what science is supposed to be for. I don’t think there are any easy answers. We’re in what may well be a genuinely new situation, and we have to figure out how to navigate it together.

Postscript: I still don’t want to give an opinion, but given that I didn’t have room for this above let me give a fragment of an opinion: Higgs triple couplings!!!

Grant Roulette

Sometimes, it feels like applying for funding in science is a form of high-stakes gambling. You put in weeks of work assembling a grant application, making sure that it’s exciting and relevant and contains all the obnoxious buzzwords you’re supposed to use…and in the end, it gets approved or rejected for reasons that seem entirely out of your control.

What if, instead, you were actually gambling?

Put all my money on post-Newtonian corrections…

That’s the philosophy behind a 2016 proposal by Ferric Fang and Arturo Casadevall, recently summarized in an article on Vox by Kelsey Piper. The goal is to cut down on the time scientists waste applying for money from various government organizations (for them, the US’s National Institutes of Health) by making part of the process random. Applications would be reviewed to make sure they met a minimum standard, but past that point every grant would have an equal chance of getting funded. That way scientists wouldn’t spend so much time perfecting grant applications, and could focus on the actual science.
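
As a mechanism, the proposal is simple enough to sketch in a few lines of Python. Everything below, from the titles to the scores to the cutoff, is made up for illustration:

```python
import random

def grant_lottery(applications, min_score, n_fundable, seed=None):
    """Two-stage funding: screen for a minimum standard, then draw at random.

    applications: list of (title, screening_score) pairs.
    Returns the titles of the funded proposals.
    """
    rng = random.Random(seed)
    # Stage 1: review only enough to check the minimum standard.
    eligible = [title for title, score in applications if score >= min_score]
    # Stage 2: past the screening, every proposal has an equal chance.
    rng.shuffle(eligible)
    return eligible[:n_fundable]

# Hypothetical applications and screening scores:
apps = [("Post-Newtonian corrections", 8), ("Tardigrade integrals", 7),
        ("Yet another supersymmetry search", 6), ("Perpetual motion", 1)]
print(grant_lottery(apps, min_score=5, n_fundable=2, seed=2019))
```

The point of the design is what it removes: past the screening threshold, nothing in the application changes the odds, so there’s no payoff for weeks of extra polishing.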

It’s an idea that seems, on its face, a bit too cute. Yes, grant applications are exhausting, but surely you still want some way to prioritize better ideas over worse ones? For all its flaws, one would hope the grant review process at least does that.

Well, maybe not. The Vox piece argues that, at least in medicine, grants are almost random already. Each grant is usually reviewed by multiple experts. Several studies cited in the piece looked at the variability between these experts: do they usually agree, or disagree? Measuring this in a variety of ways, they came to the same conclusion: there is almost no consistency among ratings by different experts. In effect, the NIH appears to already be using a lottery, one in which grants are randomly accepted or rejected depending on who reviews them.

What encourages me about these studies is that there really is a concrete question to ask. You could argue that physics shouldn’t suffer from the same problems as medicine, that grant review is really doing good work in our field. If you want to argue that, you can test it! Look at old reviews by different people, or get researchers to do “mock reviews”, and test statistical measures like inter-rater reliability. If there really is no consistency between reviews then we have a real problem in need of fixing.
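
For anyone tempted to run that test, the statistics involved are easy to compute. Here’s a minimal sketch of one standard measure, Cohen’s kappa, which scores the agreement between two reviewers after correcting for the agreement they’d reach by pure chance (the verdicts below are invented):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater reliability for two raters scoring the same items:
    1.0 means perfect agreement, 0.0 means no better than chance."""
    n = len(rater_a)
    # Observed: how often the two raters actually agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected: how often they'd agree rating at random, each with
    # their own observed frequency of verdicts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Invented verdicts (1 = fund, 0 = reject) on the same ten grants:
reviewer_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
reviewer_2 = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # -0.20
```

Kappa near zero across many real reviews would be the “it’s already a lottery” result; kappa near one would vindicate the review process.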

I genuinely don’t know what to expect from that kind of study in my field. But the way people talk about grants makes me suspicious. Everyone seems to feel like grant agencies are biased against their sub-field. Grant-writing advice is full of weird circumstantial tips. (“I heard so-and-so is reviewing this year, so don’t mention QCD!”) It could all be true…but it’s also the kind of superstition people come up with when they look for patterns in a random process. If all the grant-writing advice in the world boils down to “bet on red”, we might as well admit which game we’re playing.

Book Review: Thirty Years That Shook Physics and Mr Tompkins in Paperback

George Gamow was one of the “quantum kids” who got their start at the Niels Bohr Institute in the ’30s. He’s probably best known for the Alpher, Bethe, Gamow paper, which managed to combine one of the best sources of evidence we have for the Big Bang with a gratuitous Greek alphabet pun. He was the group jester in a lot of ways: the historians here have archives full of his cartoons and in-jokes.

Naturally, he also did science popularization.

I recently read two of Gamow’s science popularization books, “Mr Tompkins” and “Thirty Years That Shook Physics”. Reading them was a trip back in time, to when people thought about physics in surprisingly different ways.

“Mr Tompkins” started as a series of articles in Discovery, a popular science magazine. They were published as a book in 1940, with a sequel in 1945 and an update in 1965. Apparently they were quite popular among a certain generation: the edition I’m reading has a foreword by Roger Penrose.

(As an aside: Gamow mentions that the editor of Discovery was C. P. Snow…that C. P. Snow?)

Mr Tompkins himself is a bank clerk who decides on a whim to go to a lecture on relativity. Unable to keep up, he falls asleep, and dreams of a world in which the speed of light is much slower than it is in our world. Bicyclists visibly redshift, and travelers lead much longer lives than those who stay at home. As the book goes on he meets the same professor again and again (eventually marrying his daughter) and sits through frequent lectures on physics, inevitably falling asleep and experiencing it first-hand: jungles where Planck’s constant is so large that tigers appear as probability clouds, micro-universes that expand and collapse in minutes, and electron societies kept strictly monogamous by “Father Paulini”.

The structure definitely feels dated, and not just because these days people don’t often go to physics lectures for fun. Gamow actually includes the full text of the lectures that send Mr Tompkins to sleep, and while they’re not quite boring enough to send the reader to sleep they are written on a higher level than the rest of the text, with more technical terms assumed. In the later additions to the book the “lecture” aspect grows: the last two chapters involve a dream of Dirac explaining antiparticles to a dolphin in basically the same way he would explain them to a human, and a discussion of mesons in a Japanese restaurant where the only fantastical element is a trio of geishas acting out pion exchange.

Some aspects of the physics will also feel strange to a modern audience. Gamow presents quantum mechanics in a way that I don’t think I’ve seen in a modern text: while modern treatments start with uncertainty and think of quantization as a consequence, Gamow starts with the idea that there is a minimum unit of action, and derives uncertainty from that. Some of the rest is simply limited by timing: quarks weren’t fully understood even by the 1965 printing, and in 1945 they weren’t even a gleam in a theorist’s eye. Thus Tompkins’ professor says that protons and neutrons are really two states of the same particle and goes on to claim that “in my opinion, it is quite safe to bet your last dollar that the elementary particles of modern physics [electrons, protons/neutrons, and neutrinos] will live up to their name.” Neutrinos also have an amusing status: they hadn’t been detected when the earlier chapters were written, and they come across rather the way some people write about dark matter today, as a silly theorists’ hypothesis that is all too conveniently impossible to observe.
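
A rough version of Gamow’s minimum-action argument, in my own gloss rather than his words: position times momentum has the units of action, so if nature refuses to deal in actions smaller than Planck’s constant, then no process can pin both down at once more finely than

$$\Delta x \, \Delta p \gtrsim h,$$

which is Heisenberg’s relation, up to the usual numerical factors.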

“Thirty Years That Shook Physics”, published in 1966, is a more usual sort of popular science book, describing the history of the quantum revolution. While mostly focused on the scientific concepts, Gamow does spend some time on anecdotes about the people involved. If you’ve read much about the time period, you’ll probably recognize many of the anecdotes (for example, the Pauli effect, by which a theorist can break experimental equipment just by walking into the room, or Dirac’s “discovery” of purling); even the ones specific to Gamow have by now been spread far and wide.

As in Mr Tompkins, the level in this book is not particularly uniform. Gamow will spend a paragraph carefully defining an average, and then drop the word “electroscope” as if everyone should know what it is. The historical perspective taught me a few things I perhaps should have already known, but found surprising anyway. (The plum-pudding model was an actual mathematical model, and people calculated its consequences! Muons were originally thought to be mesons!)

Both books are filled with Gamow’s whimsical illustrations, something he was very much known for. Apparently he liked to imitate other art styles as well, which is visible in the portraits of physicists at the front of each chapter.

Pictured: the electromagnetic spectrum as an infinite piano

1966 was late enough that this book doesn’t have the complacency of the earlier chapters in Mr Tompkins: Gamow knew that there were more particles than just electrons, nucleons, and neutrinos. It was still early enough, though, that the new particles were not fully understood. It’s interesting seeing how Gamow reacted to this: his expectation was that physics was on the cusp of another massive change, a new theory built on new fundamental principles. He speculated that there might be a minimum length scale (although oddly enough he didn’t expect it to be related to gravity).

It’s only natural that someone who lived through the dawn of quantum mechanics should expect a similar revolution to follow. Instead, the revolution of the late 60’s and early 70’s was in our understanding: not new laws of nature so much as new comprehension of just how much quantum field theory can actually do. I wonder if the generation who lived through that later revolution left it with the reverse expectation: that the next crisis should be solved in a similar way, that the world is quantum field theory (or close cousins, like string theory) all the way down and our goal should be to understand the capabilities of these theories as well as possible.

The final section of the book is well worth waiting for. In 1932, Gamow directed Bohr’s students in staging a play, the “Blegdamsvej Faust”. A parody of Faust, it features Bohr as god, Pauli as Mephistopheles, and Ehrenfest as the “erring Faust” (Gamow’s pun, not mine) that he tempts to sin with the promise of the neutrino, Gretchen. The piece, translated to English by Gamow’s wife Barbara, is filled with in-jokes on topics as obscure as Bohr’s habitual mistakes when speaking German. It’s gloriously weird and well worth a read. If you’ve ever seen someone do a revival performance, let me know!

Made of Quarks Versus Made of Strings

When you learn physics in school, you learn it in terms of building blocks.

First, you learn about atoms. Indivisible elements, as the Greeks foretold…until you learn that they aren’t indivisible. You learn that atoms are made of electrons, protons, and neutrons. Then you learn that protons and neutrons aren’t indivisible either, they’re made of quarks. They’re what physicists call composite particles, particles made of other particles stuck together.

Hearing this story, you notice a pattern. Each time physicists find a more fundamental theory, they find that what they thought were indivisible particles are actually composite. So when you hear physicists talking about the next, more fundamental theory, you might guess it has to work the same way. If quarks are made of, for example, strings, then each quark is made of many strings, right?

Nope! As it turns out, there are two different things physicists can mean when they say a particle is “made of” a more fundamental particle. Sometimes they mean the particle is composite, like the proton is made of quarks. But sometimes, like when they say particles are “made of strings”, they mean something different.

To understand what this “something different” is, let’s go back to quarks for a moment. You might have heard there are six types, or flavors, of quarks: up and down, strange and charm, top and bottom. The different types have different mass and electric charge. You might have also heard that quarks come in different colors: red, green, and blue. You might wonder, then: aren’t there really eighteen types of quark? Red up quarks, green top quarks, and so forth?

Physicists don’t think about it that way. Unlike the different flavors, the different colors of quark have a more unified mathematical description. Changing the color of a quark doesn’t change its mass or electric charge. All it changes is how the quark interacts with other particles via the strong nuclear force. Know how one color works, and you know how the other colors work. Different colors can also “mix” together, similarly to how different situations can mix together in quantum mechanics: just as Schrodinger’s cat can be both alive and dead, a quark can be both red and green.
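
In symbols, that mixing is ordinary quantum superposition. As a sketch, a quark’s color state can be any combination

$$|q\rangle = \alpha\,|\text{red}\rangle + \beta\,|\text{green}\rangle + \gamma\,|\text{blue}\rangle, \qquad |\alpha|^2 + |\beta|^2 + |\gamma|^2 = 1,$$

and the strong force treats every such combination on the same footing.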

This same kind of thing is involved in another example, electroweak unification. You might have heard that electromagnetism and the weak nuclear force are secretly the same thing. Each force has corresponding particles: the familiar photon for electromagnetism, and W and Z bosons for the weak nuclear force. Unlike the different colors of quarks, photons and W and Z bosons have different masses from each other. It turns out, though, that they still come from a unified mathematical description: they’re “mixtures” (in the same Schrodinger’s cat-esque sense) of the particles from two more fundamental forces (sometimes called “weak isospin” and “weak hypercharge”). The reason they have different masses isn’t their own fault, but the fault of the Higgs: the Higgs field we have in our universe interacts with different parts of this unified force differently, so the corresponding particles end up with different masses.
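
Schematically, the textbook form of that mixing is just a rotation: the photon A and the Z boson are combinations of the weak hypercharge boson B and the neutral weak isospin boson W³, tilted by the weak mixing angle θ_W,

$$\begin{pmatrix} A \\ Z \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B \\ W^3 \end{pmatrix}.$$

The Higgs field leaves the A combination massless, which is why the photon travels at the speed of light, while the orthogonal Z combination picks up a mass.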

A physicist might say that electromagnetism and the weak force are “made of” weak isospin and weak hypercharge. And it’s that kind of thing that physicists mean when they say that quarks might be made of strings, or the like: not that quarks are composite, but that quarks and other particles might have a unified mathematical description, and look different only because they’re interacting differently with something else.

This isn’t to say that quarks and electrons can’t be composite as well. They might be; we don’t know for sure. If they are, the forces binding them together must be very strong, strong enough that our most powerful colliders can’t make them wiggle even a little out of shape. The tricky part is that composite particles get mass from the energy holding them together. A particle held together by very powerful forces would normally be very massive; if you want it to end up lighter, you have to construct your theory carefully to do that. So while occasionally people will suggest theories where quarks or electrons are composite, these theories aren’t common. Most of the time, if a physicist says that quarks or electrons are “made of” something else, they mean something more like “particles are made of strings” than like “protons are made of quarks”.