Tag Archives: science communication

Of Snowmass and SAGEX

arXiv-watchers might have noticed an avalanche of papers with the word Snowmass in the title. (I contributed to one of them.)

Snowmass is a place, an area in Colorado known for its skiing. It’s also an event in that place, the Snowmass Community Planning Exercise for the American Physical Society’s Division of Particles and Fields. In plain terms, it’s what happens when particle physicists from across the US get together in a ski resort to plan their future.

Usually someone like me wouldn’t be involved in that. (And not because it’s a ski resort.) In the past, these meetings focused on plans for new colliders and detectors. They got contributions from experimentalists, and from the few theorists whose work was closely tied to those experiments, but not from the more “formal” theorists beyond.

This Snowmass is different. It’s different because of the COVID-19 pandemic, which changed it from a big meeting in a resort to a spread-out series of meetings and online activities. It’s also different because they invited theorists to contribute, and not just those interested in particle colliders. The theorists involved study everything from black holes and quantum gravity to supersymmetry and the mathematics of quantum field theory. Groups focused on each topic submit “white papers” summarizing the state of their area. These white papers in turn get organized and summarized by subfield, and those summaries feed into the planning exercise. No one I’ve talked to is entirely clear on how this works: how much the white papers will actually be taken into account, or by whom. But it seems like a good chance to influence US funding agencies, like the Department of Energy, and see if we can get them to prioritize our type of research.

Europe has something similar to Snowmass, called the European Strategy for Particle Physics. It also has smaller-scale groups, with their own purposes, goals, and funding sources. One such group is called SAGEX: Scattering Amplitudes: from Geometry to EXperiment. SAGEX is an Innovative Training Network, an organization funded by the EU to train young researchers, in this case in scattering amplitudes. Its fifteen students are finishing their PhDs and are ready to take the field by storm. Along the way, they spent a little time in industry internships (mostly at the companies behind Maple and Mathematica), and quite a bit of time working on outreach.

They have now summed up that outreach work in an online exhibition. I’ve had fun exploring it over the last couple days. They’ve got a lot of good content there, from basic explanations of relativity and quantum mechanics, to detailed games involving Feynman diagrams and associahedra, to a section that uses solitons as a gentle introduction to integrability. If you’re in the target audience, you should check it out!

The Only Speed of Light That Matters

A couple weeks back, someone asked me about a Veritasium video with the provocative title “Why No One Has Measured The Speed Of Light”. Veritasium is a science popularization YouTube channel, and usually a fairly good one…so it was a bit surprising to see it make a claim usually reserved for crackpots. Many, many people have measured the speed of light, including Ole Rømer all the way back in 1676. To argue otherwise seems like it demands a massive conspiracy.

Veritasium wasn’t proposing a conspiracy, though, just a technical point. Yes, many experiments have measured the speed of light. However, the speed they measure is in fact a “two-way speed”: the speed of light on a round trip, out to somewhere and back again. They leave open the possibility that light travels differently in different directions, and only has the measured speed on average: that there are different “one-way speeds” of light.

The loophole is clearest using some of the more vivid measurements of the speed of light, timing how long it takes to bounce off a mirror and return. It’s less clear using other measurements of the speed of light, like Rømer’s. Rømer measured the speed of light using the moons of Jupiter, noticing that the time they took to orbit appeared to change based on whether Jupiter was moving towards or away from the Earth. For this measurement Rømer didn’t send any light to Jupiter…but he did have to make assumptions about the orbits of Jupiter’s moons, using them like a distant clock. Those assumptions also leave the door open to a loophole, one where the different one-way speeds of light are compensated by different speeds for distant clocks. You can watch the Veritasium video for more details about how this works, or see the Wikipedia page for the mathematical details.

When we think of the speed of light as the same in all directions, in some sense we’re making a choice. We’ve chosen a convention, called the Einstein synchronization convention, that lines up distant clocks in a particular way. We didn’t have to choose that convention, though we prefer to (the math gets quite a bit more complicated if we don’t). And crucially for any such choice, it is impossible for any experiment to tell the difference.
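If you like seeing that numerically, here is a minimal sketch (my own, not from the video) using Reichenbach’s way of parameterizing the possible one-way speeds. Pick any split you like between the outbound and return speeds; as long as they are consistent with the measured two-way speed, every round trip comes out exactly the same:

```python
# A toy "light clock": light travels a distance L out to a mirror and back.
# Reichenbach's parameter epsilon (0 < epsilon < 1) assigns an outbound
# speed c/(2*epsilon) and a return speed c/(2*(1-epsilon)); epsilon = 1/2
# is Einstein's convention, with the same speed in both directions.

c = 299_792_458.0  # the measured two-way speed of light, in m/s
L = 1_000.0        # distance to the mirror, in meters

for epsilon in [0.25, 0.5, 0.75]:
    c_out = c / (2 * epsilon)         # hypothetical one-way speed, outbound
    c_back = c / (2 * (1 - epsilon))  # hypothetical one-way speed, return
    round_trip = L / c_out + L / c_back
    print(epsilon, round_trip)        # identical every time: 2 * L / c
```

The round-trip time is all an experiment like this ever records, which is why the choice of epsilon is a convention rather than something you can measure.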

So far, Veritasium is doing fine here. But if the video were totally fine, I wouldn’t have written this post. The technical argument is fine, but the video screws up its implications.

Near the end of the video, the host speculates whether this ambiguity is a clue. What if a deeper theory of physics could explain why we can’t tell the difference between different synchronizations? Maybe that would hint at something important.

Well, it does hint at something important, but not something new. What it hints at is that “one-way speeds” don’t matter. Not for light, or really for anything else.

Think about measuring the speed of something, anything. There are two ways to do it. One is to time it against something else, like the signal in a wire, and assume we know that speed. Veritasium shows an example of this, measuring the speed of a baseball that hits a target and sends a signal back. The other way is to send it somewhere with a clock we trust, and compare it to our clock. Each of these requires that something goes back and forth, even if it’s not the same thing each time. We can’t measure the one-way speed of anything because we’re never in two places at once. Everything we measure, every conclusion we come to about the world, rests on something “two-way”: our actions go out, our perceptions come in. Even our depth perception is an inference we inherited from our ancestors, whose experience of seeing food and traveling to it calibrated our notion of distance.

Synchronization of clocks is a convention because the external world is a convention. What we have really, objectively, truly, are our perceptions and our memories. Everything else is a model we build to fill the gaps in between. Some features of that model are essential: if you change them, you no longer match our perceptions. Other features, though, are just convenience: ways we arrange the model to make it easier to use, to make it not “sound dumb”, to tell a coherent story. Synchronization is one of those things: the notion that you can compare times in distant places is convenient, but as relativity already tells us in other contexts, not necessary. It’s part of our storytelling, not an essential part of our model.

Book Review: The Joy of Insight

There’s something endlessly fascinating about the early days of quantum physics. In a century, we went from a few odd, inexplicable experiments to a practically complete understanding of the fundamental constituents of matter. Along the way the new ideas ended a world war, almost fueled another, and touched almost every field of inquiry. The people lucky enough to be part of this went from familiarly dorky grad students to architects of a new reality. Victor Weisskopf was one of those people, and The Joy of Insight: Passions of a Physicist is his autobiography.

Less well-known today than his contemporaries, Weisskopf made up for it with a front-row seat to basically everything that happened in particle physics. In the late 1920s and early 1930s he went from studying in Göttingen (including a crush on Maria Göppert before a car-owning Joe Mayer snatched her up) to a series of postdoctoral positions that would exhaust even a modern-day physicist, working in Leipzig, Berlin, Copenhagen, Cambridge, Zurich, and Copenhagen again, before fleeing Europe for a faculty position in Rochester, New York. During that time he worked for, studied under, collaborated with, or partied with basically everyone you might have heard of from that period. As a result, this section of the autobiography was my favorite, chock-full of stories, from the well-known (Pauli’s rudeness and mythical tendency to break experimental equipment), to the less well-known (a lab in Milan planned to prank Pauli with a door that would trigger a fake explosion when opened, which worked every time they tested it…and failed when Pauli showed up), to the more personal (including an in-retrospect terrifying visit to the Soviet Union, where they asked him to critique a farming collective!). That era also saw his “almost Nobel”, in his case almost discovering the Lamb shift.

Despite an “almost Nobel”, Weisskopf was paid pretty poorly when he arrived in Rochester. His story there puts something I’d learned before about another refugee physicist, Hertha Sponer, in a new light. Sponer’s university also didn’t treat her well, and it seemed reminiscent of modern academia. Weisskopf, though, thinks his treatment was tied to his refugee status: that, aware that they had nowhere else to go, universities gave the scientists who fled Europe worse deals than they would have in a Nazi-less world, snapping up talent for cheap. I could imagine this was true for Sponer as well.

Like almost everyone with the relevant expertise, Weisskopf was swept up in the Manhattan project at Los Alamos. There he rose in importance, both in the scientific effort (becoming deputy leader of the theoretical division) and the local community (spending some time on and chairing the project’s “town council”). Like the first sections, this surreal time leads to a wealth of anecdotes, all fascinating. In his descriptions of the life there I can see the beginnings of the kinds of “hiking retreats” physicists would build in later years, like the one at Aspen, that almost seem like attempts to recreate that kind of intense collaboration in an isolated natural place.

After the war, Weisskopf worked at MIT before a stint as director of CERN. He shepherded the facility’s early days, when they were building their first accelerators and deciding what kinds of experiments to pursue. I’d always thought that the “nuclear” in CERN’s name was an artifact of the times, when “nuclear” and “particle” physics were thought of as the same field, but according to Weisskopf the fields were separate and it was already a misnomer when the place was founded. Here the book’s supply of anecdotes becomes a bit thinner, and instead he spends pages on glowing descriptions of people he befriended. The pattern continues after the directorship as his duties get more administrative: he spends time as head of the physics department at MIT and works on arms control, some of the latter as a member of the Pontifical Academy of Sciences (which apparently even a Jewish atheist can join). He does work on some science, though, collaborating on the “bag of quarks” model of protons and neutrons. He lives to see the fall of the Berlin Wall, and the end of the book has a bit of ’90s optimism to it, the feeling that finally the conflicts of his life would be resolved. Finally, the last chapter abandons chronology altogether, and is mostly a list of his opinions of famous composers, capped off with a Bohr-inspired musing on the complementary nature of science and the arts, humanities, and religion.

One of the things I found most interesting in this book was actually something that went unsaid. Weisskopf’s most famous student was Murray Gell-Mann, a key player in the development of the theory of quarks (including coining the name). Gell-Mann was famously cultured (in contrast to the boorish-almost-as-affectation Feynman) with wide interests in the humanities, and he seems like exactly the sort of person Weisskopf would have gotten along with. Surprisingly though, he gets no anecdotes in this book, and no glowing descriptions: just a few paragraphs, mostly emphasizing how smart he was. I have to wonder if there was some coldness between them. Maybe Weisskopf had difficulty with a student who became so famous in his own right, or maybe they just never connected. Maybe Weisskopf was just trying to be generous: the other anecdotes in that part of the book are of much less famous people, and maybe Weisskopf wanted to prioritize promoting them, feeling that they were underappreciated.

Weisskopf keeps the physics light to try to reach a broad audience. This means he opts for short explanations, and often these are whatever is easiest to reach for. It creates some interesting contradictions: the way he describes his “almost Nobel” work in quantum electrodynamics is very much the way someone would have described it at the time, but very much not how it would be understood later, and by the time he talks about the bag of quarks model his more modern descriptions don’t cleanly link with what he said earlier. Overall, his goal isn’t really to explain the physics, but to explain the physicists. I enjoyed the book for that: people do it far too rarely, and the result was a really fun read.

What Are Students? We Just Don’t Know

I’m taking a pedagogy course at the moment, a term-long follow-up to the one-week intro course I took in the spring. The course begins with yet another pedagogical innovation, a “pre-project”. Before the course has properly started, we are assembled into groups and told to investigate our students. We are supposed to do interviews on a few chosen themes, all with the objective of getting to know our students better. I’m guessing the point is to sharpen our goals, so that when we start learning pedagogy we’ll have a clearer idea of what problems we’d like to solve.

The more I think about this the more I’m looking forward to it. When I TAed in the past, some of the students were always a bit of a mystery. They sat in the back, skipped assignments, and gradually I saw less and less of them. They didn’t go to office hours or the help room, and I always wondered what happened. When in the course did they “turn off”, when did we lose them? They seemed like a kind of pedagogical dark matter, observable only by their presence on the rosters. I’m hoping to detect a little of that dark matter here.

As it’s a group project, we came up with a theme as a group, and questions to support that theme (in particular, we’re focusing on the different experiences between Danes and international students). Since the topic is on my mind in general though, I thought it would be fun to reach out to you guys. Educators in the comments: if you could ask your students one question, what would it be? Students, what is one thing you think your teachers are missing?

The arXiv SciComm Challenge

Fellow science communicators, think you can explain everything that goes on in your field? If so, I have a challenge for you. Pick a day, and go through all the new papers on arXiv.org in a single area. For each one, try to give a general-audience explanation of what the paper is about. To make it easier, you can ignore cross-listed papers. If your field doesn’t use arXiv, consider if you can do the challenge with another appropriate site.

I’ll start. I’m looking at papers in the “High Energy Physics – Theory” area, announced 6 Jan, 2022. I’ll warn you in advance that I haven’t read these papers, just their abstracts, so apologies if I get your paper wrong!

arXiv:2201.01303 : Holographic State Complexity from Group Cohomology

This paper says it is a contribution to a proceedings volume. That means it is based on a talk given at a conference. In my field, a talk like this usually doesn’t present new results, but instead summarizes results from a previous paper. So keep that in mind.

There is an idea in physics called holography, where two theories are secretly the same even though they describe the world with different numbers of dimensions. Usually this involves a gravitational theory in a “box”, and a theory without gravity that describes the sides of the box. The sides turn out to fully describe the inside of the box, much like a hologram looks 3D but can be printed on a flat sheet of paper. Using this idea, physicists have connected some properties of gravity to properties of the theory on the sides of the box. One of those properties is complexity: the complexity of the theory on the sides of the box says something about gravity inside the box, in particular about the size of wormholes. The trouble is, “complexity” is a bit subjective: it’s not clear how to give a good definition of it for this type of theory. In this paper, the author studies a theory with a precise mathematical definition, called a topological theory. This theory turns out to have mathematical properties that suggest a well-defined notion of complexity for it.

arXiv:2201.01393 : Nonrelativistic effective field theories with enhanced symmetries and soft behavior

We sometimes describe quantum field theory as quantum mechanics plus relativity. That’s not quite true though, because it is possible to define a quantum field theory that doesn’t obey special relativity, a non-relativistic theory. Physicists do this if they want to describe a system moving much slower than the speed of light: it gets used sometimes for nuclear physics, and sometimes for modeling colliding black holes.

In particle physics, a “soft” particle is one with almost no momentum. We can classify theories based on how they behave when a particle becomes more and more soft. In normal quantum field theories, if they have special behavior when a particle becomes soft it’s often due to a symmetry of the theory, where the theory looks the same even if something changes. This paper shows that this is not true for non-relativistic theories: they have more requirements to have special soft behavior, not just symmetry. They “bootstrap” a few theories, using some general restrictions to find them without first knowing how they work (“pulling them up by their own bootstraps”), and show that the theories they find are in a certain sense unique, the only theories of that kind.

arXiv:2201.01552 : Transmutation operators and expansions for 1-loop Feynman integrands

In recent years, physicists in my sub-field have found new ways to calculate the probability that particles collide. One of these methods describes ordinary particles in a way resembling string theory, and from this discovered a whole “web” of theories that were linked together by small modifications of the method. This method originally worked only for the simplest Feynman diagrams, the “tree” diagrams that correspond to classical physics, but was extended to the next-simplest diagrams, diagrams with one “loop” that start incorporating quantum effects.

This paper concerns a particular spinoff of this method, that can find relationships between certain one-loop calculations in a particularly efficient way. It lets you express calculations of particle collisions in a variety of theories in terms of collisions in a very simple theory. Unlike the original method, it doesn’t rely on any particular picture of how these collisions work, either Feynman diagrams or strings.

arXiv:2201.01624 : Moduli and Hidden Matter in Heterotic M-Theory with an Anomalous U(1) Hidden Sector

In string theory (and its more sophisticated cousin M theory), our four-dimensional world is described as a world with more dimensions, where the extra dimensions are twisted up so that they cannot be detected. The shape of the extra dimensions influences the kinds of particles we can observe in our world. That shape is described by variables called “moduli”. If those moduli are stable, then the properties of particles we observe would be fixed, otherwise they would not be. In general it is a challenge in string theory to stabilize these moduli and get a world like what we observe.

This paper discusses shapes that give rise to a “hidden sector”, a set of particles that are disconnected from the particles we know, so that they are hard to observe. Such particles are often proposed as a possible explanation for dark matter. The paper calculates, for a particular kind of shape, what the masses of different particles are, as well as how different kinds of particles can decay into each other. For example, a particle that causes inflation (the rapid expansion of the very early universe) can decay into the moduli and dark matter. The paper also shows how some of the moduli are made stable in this picture.

arXiv:2201.01630 : Chaos in Celestial CFT

One variant of the holography idea I mentioned earlier is called “celestial” holography. In this picture, the sides of the box are an infinite distance away: a “celestial sphere” depicting the directions particles travel after they collide, in the same way a star chart depicts the angles between stars. Recent work has shown that there is something like a sensible theory that describes physics on this celestial sphere, one that contains all the information about what happens inside.

This paper shows that the celestial theory has a property called quantum chaos. In physics, a theory is said to be chaotic if it depends very sensitively on its initial conditions, so that even a small change will result in a large change later (the usual metaphor is a butterfly flapping its wings and causing a hurricane). This kind of behavior appears to be present in this theory.

arXiv:2201.01657 : Calculations of Delbrück scattering to all orders in αZ

Delbrück scattering is an effect where the nuclei of heavy elements like lead can deflect high-energy photons, as a consequence of quantum field theory. This effect is apparently tricky to calculate, and previous calculations have involved approximations. This paper finds a way to calculate the effect without those approximations, which should let it match better with experiments.

(As an aside, I’m a little confused by the claim that they’re going to all orders in αZ when it looks like they just consider one-loop diagrams…but this is probably just my ignorance, this is a corner of the field quite distant from my own.)

arXiv:2201.01674 : On Unfolded Approach To Off-Shell Supersymmetric Models

Supersymmetry is a relationship between two types of particles: fermions, which typically make up matter, and bosons, which are usually associated with forces. In realistic theories this relationship is “broken” and the two types of particles have different properties, but theoretical physicists often study models where supersymmetry is “unbroken” and the two types of particles have the same mass and charge. This paper finds a new way of describing some theories of this kind that reorganizes them in an interesting way, using an “unfolded” approach in which aspects of the particles that would normally be combined are given their own separate variables.

(This is another one I don’t know much about, this is the first time I’d heard of the unfolded approach.)

arXiv:2201.01679 : Geometric Flow of Bubbles

String theorists have conjectured that only some types of theories can be consistently combined with a full theory of quantum gravity, while others live in a “swampland” of non-viable theories. One set of conjectures characterizes this swampland in terms of “flows” in which theories with different geometry can flow into each other. The properties of these flows are supposed to be related to which theories are or are not in the swampland.

This paper writes down equations describing these flows, and applies them to some toy model “bubble” universes.

arXiv:2201.01697 : Graviton scattering amplitudes in first quantisation

This paper is a pedagogical one, introducing graduate students to a topic rather than presenting new research.

Usually in quantum field theory we do something called “second quantization”, thinking about the world not in terms of particles but in terms of fields that fill all of space and time. However, sometimes one can instead use “first quantization”, which is much more similar to ordinary quantum mechanics. There you think of a single particle traveling along a “world-line”, and calculate the probability it interacts with other particles in particular ways. This approach has recently been used to calculate interactions of gravitons, particles related to the gravitational field in the same way photons are related to the electromagnetic field. The approach has some advantages in terms of simplifying the results, which are described in this paper.

Facts About Math Are Facts About Us

Each year, the Niels Bohr International Academy has a series of public talks. Part of Copenhagen’s Folkeuniversitet (“people’s university”), they attract a mix of older people who want to keep up with modern developments and young students looking for inspiration. I gave a talk a few days ago, as part of this year’s program. The last time I participated, back in 2017, I covered a topic that comes up a lot on this blog: “The Quest for Quantum Gravity”. This year, I was asked to cover something more unusual: “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”.

Some of you might notice that title is already taken: it’s a famous lecture by the physicist Wigner, from 1959. Wigner posed an interesting question: why is advanced mathematics so useful in physics? Time and time again, mathematicians develop an idea purely for its own sake, only for physicists to find it absolutely indispensable to describe some part of the physical world. Should we be surprised that this keeps working? Suspicious?

I talked a bit about this: some of the answers people have suggested over the years, and my own opinion. But like most public talks, the premise was mostly a vehicle for cool examples: physicists through history bringing in new math, and surprising mathematical facts like the ones I talked about a few weeks back at Culture Night. Because of that, I was actually a bit unprepared to dive into the philosophical side of the topic (despite it being in principle a very philosophical topic!). When one of the audience members brought up mathematical Platonism, I floundered a bit, not wanting to say something that was too philosophically naive.

Well, if there’s anywhere I can be naive, it’s my own blog. I even have a label for Amateur Philosophy posts. So let’s do one.

Mathematical Platonism is the idea that mathematical truths “exist”: that they’re somewhere “out there” being discovered. On the other side, one might believe that mathematics is not discovered, but invented. For some reason, a lot of people with the latter opinion seem to think this has something to do with describing nature (for example, an essay a few years back by Lee Smolin defines mathematics as “the study of systems of evoked relationships inspired by observations of nature”).

I’m not a mathematical Platonist. I don’t even like to talk about which things do or don’t “exist”. But I also think that describing mathematics in terms of nature is missing the point. Mathematicians aren’t physicists. While there may have been a time when geometers argued over lines in the sand, these days mathematicians’ inspiration isn’t usually the natural world, at least not in the normal sense.

Instead, I think you can’t describe mathematics without describing mathematicians. A mathematical fact is, deep down, something a mathematician can say without other mathematicians shouting them down. It’s an allowed move in what my hazy secondhand memory of Wittgenstein wants to call a “language game”: something that gets its truth from a context of people interpreting and reacting to it, in the same way a move in chess matters only when everyone is playing by its rules.

This makes mathematics sound very subjective, and we’re used to the opposite: the idea that a mathematical fact is as objective as they come. The important thing to remember is that even with this kind of description, mathematics still ends up vastly less subjective than any other field. We care about subjectivity between different people: if a fact is “true” for Brits and “false” for Germans, then it’s a pretty limited fact. Mathematics is special because the “rules of its game” aren’t rules of one group or another. They’re rules that are in some sense our birthright. Any human who can read and write, or even just act and perceive, can act as a Turing Machine, a universal computer. With enough patience and paper, anything that you can prove to one person you can prove to another: you just have to give them the rules and let them follow them. It doesn’t matter how smart you are, or what you care about most: if something is mathematically true for others, it is mathematically true for you.

Some would argue that this is evidence for mathematical Platonism, that if something is a universal truth it should “exist”. Even if it does, though, I don’t think it’s useful to think of it in that way. Once you believe that mathematical truth is “out there”, you want to try to characterize it, to say something about it besides that it’s “out there”. You’ll be tempted to have an opinion on the Axiom of Choice, or the Continuum Hypothesis. And the whole point is that those aren’t sensible things to have opinions on, that having an opinion about them means denying the mathematical proofs that they are, in the “standard” axioms, undecidable. Whatever is “out there”, it has to include everything you can prove with every axiom system, whichever non-standard ones you can cook up, because mathematicians will happily work on any of them. The whole point of mathematics, the thing that makes it as close to objective as anything can be, is that openness: the idea that as long as an argument is good enough, as long as it can convince anyone prepared to wade through the pages, then it is mathematics. Nothing, so long as it can convince in the long run, is excluded.

If we take this definition seriously, there are some awkward consequences. You could imagine a future in which every mind, everyone you might be able to do mathematics with, is crushed under some tyrant, forced to agree to something false. A real philosopher would dig in to this corner case, try to salvage the definition or throw it out. I’m not a real philosopher though. So all I can say is that while I don’t think that tyrant gets to define mathematics, I also don’t think there are good alternatives to my argument. Our only access to mathematics, and to truth in general, is through the people who pursue it. I don’t think we can define one without the other.

Outreach Talk on Math’s Role in Physics

Tonight is “Culture Night” in Copenhagen, the night when the city throws open its doors and lets the public in. Museums and hospitals, government buildings and even the Freemasons, all have public events. The Niels Bohr Institute does too, of course: an evening of physics exhibits and demos, capped off with a public lecture by Denmark’s favorite bow-tie wearing weirder-than-usual string theorist, Holger Bech Nielsen. In between, there are a number of short talks by various folks at the institute, including yours truly.

In my talk, I’m going to try and motivate the audience to care about math. Math is dry of course, and difficult for some, but we physicists need it to do our jobs. If you want to be precise about a claim in physics, you need math simply to say what you want clearly enough.

Since you guys likely don’t overlap with my audience tonight, it should be safe to give a little preview. I’ll be using a few examples, but this one is the most complicated:

I’ll be telling a story I stole from chapter seven of the web serial Almost Nowhere. (That link is to the first chapter, by the way, in case you want to read the series without spoilers. It’s very strange, very unique, and at least in my view quite worth reading.) You follow a warrior carrying a spear around a globe along two different paths. The warrior tries to keep the spear pointing in the same direction, but finds that the two different paths result in spears pointing in different directions when they meet. The story illustrates that such a simple concept as “what direction you are pointing” isn’t actually so simple: if you want to think about directions in curved space (like the surface of the Earth, but also curved space-time in general relativity) then you need more sophisticated mathematics (a notion called parallel transport) to make sense of it.
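For the curious, here is a minimal numerical sketch of the same trick (my own aside, not something from the talk or the serial). It uses one standard fact: parallel transport along a great-circle arc is just the three-dimensional rotation that carries the arc’s start point to its end point. Carrying a “spear” around a triangle covering one octant of the globe, which is the same comparison as following two different paths to the same meeting point, brings it back rotated by ninety degrees:

```python
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues' formula: rotate vector v about a unit axis by the given angle."""
    k = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1 - np.cos(angle)))

def transport(v, start, end):
    """Parallel-transport a tangent vector v along the great-circle arc from
    `start` to `end` (both unit vectors): the rotation taking start to end."""
    axis = np.cross(start, end)
    angle = np.arccos(np.clip(np.dot(start, end), -1.0, 1.0))
    return rotate(v, axis, angle)

# Walk one octant of the globe: north pole -> equator -> a quarter of the
# way around the equator -> back up to the pole.
north = np.array([0.0, 0.0, 1.0])
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

spear = np.array([1.0, 0.0, 0.0])  # tangent at the pole, pointing toward a
for leg_start, leg_end in [(north, a), (a, b), (b, north)]:
    spear = transport(spear, leg_start, leg_end)

print(spear)  # ~[0, 1, 0]: back at the pole, but turned by 90 degrees
```

The ninety-degree turn is no accident: on a unit sphere, the angle the spear turns equals the area enclosed by the loop, which is exactly the kind of information parallel transport is designed to track.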

It’s kind of an advanced concept for a public talk. But seeing it show up in Almost Nowhere inspired me to try to get it across. I’ll let you know how it goes!

By the way, if you are interested in learning the kinds of mathematics you need for theoretical physics, and you happen to be a Bachelor’s student planning to pursue a PhD, then consider the Perimeter Scholars International Master’s Program! It’s a one-year intensive at the Perimeter Institute in Waterloo, Ontario, in Canada. In a year it gives you a crash course in theoretical physics, giving you tools that will set you ahead of other beginning PhD students. I’ve witnessed it in action, and it’s really remarkable how much the students learn in a year, and what they go on to do with it. Their early registration deadline is on November 15, just a month away, so if you’re interested you may want to start thinking about it.

Breaking Out of “Self-Promotion Voice”

What do TED talks and grant applications have in common?

Put a scientist on a stage, and what happens? Some of us panic and mumble. Others are as smooth as a movie star. Most, though, fall back on a well-practiced mode: “self-promotion voice”.

A scientist doing self-promotion voice is easy to recognize. We focus on ourselves, of course (that’s in the name!), talking about all the great things we’ve done. If we have to mention someone else, we make sure to link it in some way: “my colleague”, “my mentor”, “which inspired me to”. All vulnerability is “canned” in one way or another: “challenges we overcame”, light touches on the most sympathetic of issues. Usually, we aren’t negative towards our colleagues either: apart from the occasional very distant enemy, everyone is working with great scientific virtue. If we talk about our past, we tell the same kinds of stories, mentioning our youthful curiosity and deep buzzwordy motivations. Any jokes or references are carefully pruned, made accessible to the lowest-common-denominator. This results in a standard vocabulary: see a metaphor, a quote, or a turn of phrase, and you’re bound to see it in talks again and again and again. Things get even more repetitive when you take into account how often we lean on the voice: a given speech or piece will be assembled from elementary pieces, snippets of practiced self-promotion that we pour in like packing peanuts after a minimal edit, filling all available time and word count.

“My passion for teaching manifests…”

Packing peanuts may not be glamorous, but they get the job done. A scientist who can’t do “the voice” is going to find life a lot harder, their negativity or clumsiness turning away support when they need it most. Except for the greatest of geniuses, we all have to learn a bit of self-promotion to stay employed.

We don’t have to stop there, though. Self-promotion voice works, but it’s boring and stilted, and it all looks basically the same. If we can do something a bit more authentic, then we stand out from the crowd.

I’ve been learning this more and more lately. My blog posts have always run the gamut: some are pure formula, but the ones I’m most proud of have a voice all their own. Over the years, I’ve been pushing my applications in that direction. Each grant and job application has a bit of the standard self-promotion voice pruned away, and a bit of another voice (my own voice?) sneaking in. This year, as I send out applications, I’ve been tweaking things. I almost hope the best jobs come late in the year: my applications will be better then!

The Winding Path of a Physics Conversation

In my line of work, I spend a lot of time explaining physics. I write posts here of course, and give the occasional public lecture. I also explain physics when I supervise Master’s students, and in a broader sense whenever I chat with my collaborators or write papers. I’ll explain physics even more when I start teaching. But of all the ways to explain physics, there’s one that has always been my favorite: the one-on-one conversation.

Talking science one-on-one is validating in a uniquely satisfying way. You get instant feedback, questions when you’re unclear and comprehension when you’re close. There’s a kind of puzzle to it, discovering what you need to fill in the gaps in one particular person’s understanding. As a kid, I’d chase this feeling with imaginary conversations: I’d plot out a chat with Democritus or Newton, trying to explain physics or evolution or democracy. It was a game, seeing how I could ground our modern understanding in concepts someone from history already knew.

Way better than Parcheesi

I’ll never get a chance in real life to explain physics to a Democritus or a Newton, to bridge a gap quite that large. But, as I’ve discovered over the years, everyone has bits and pieces they don’t yet understand. Even focused on the most popular topics, like black holes or elementary particles, everyone has gaps in what they’ve managed to pick up. I do too! So any conversation can be its own kind of adventure, discovering what that one person knows, what they don’t, and how to connect the two.

Of course, there’s fun in writing and public speaking too (not to mention, of course, research). Still, I sometimes wonder if there’s a career out there in just the part I like best: just one conversation after another, delving deep into one person’s understanding, making real progress, then moving on to the next. It wouldn’t be efficient by any means, but it sure sounds fun.

Alice Through the Parity Glass

When you look into your mirror in the morning, the face looking back at you isn’t exactly your own. Your mirror image is flipped: left-handed if you’re right-handed, and right-handed if you’re left-handed. Your body is not symmetric in the mirror: we say it does not respect parity symmetry. Zoom in, and many of the molecules in your body also have a “handedness” to them: biology is not the same when flipped in a mirror.

What about physics? At first, you might expect the laws of physics themselves to respect parity symmetry. Newton’s laws are the same when reflected in a mirror, and so are Maxwell’s. But one part of physics breaks this rule: the weak nuclear force, the force that causes nuclear beta decay. The weak nuclear force interacts differently with “right-handed” and “left-handed” particles (shorthand for particles that spin counterclockwise or clockwise with respect to their motion). This came as a surprise to most physicists, but it was predicted by Tsung-Dao Lee and Chen-Ning Yang and demonstrated in 1956 by Chien-Shiung Wu, known in her day as the “Queen of Nuclear Research”. The world really does look different when flipped in a mirror.

I gave a lecture on the weak force for the pedagogy course I took a few weeks back. One piece of feedback I got was that the topic wasn’t very relatable. People wanted to know why they should care about the handedness of the weak force, they wanted to hear about “real-life” applications. Once scientists learned that the weak force didn’t respect parity, what did that let us do?

Thinking about this, I realized this is actually a pretty tricky story to tell. With enough time and background, I could explain that the “handedness” of the Standard Model is a major constraint on attempts to unify physics, ruling out a lot of the simpler options. That’s hard to fit in a short lecture though, and it still isn’t especially close to “real life”.

Then I realized I don’t need to talk about “real life” to give a “real-life example”. People explaining relativity get away with science fiction scenarios, spaceships on voyages to black holes. The key isn’t to be familiar, just relatable. If I can tell a story (with people in it), then maybe I can make this work.

All I need, then, is a person who cares a lot about the world behind a mirror.

Curiouser and curiouser…

When Alice goes through the looking glass in the novel of that name, she enters a world flipped left-to-right, a world with its parity inverted. Following Alice, we have a natural opportunity to explore such a world. Others have used this to explore parity symmetry in biology: for example, a side-plot in Alan Moore’s League of Extraordinary Gentlemen sees Alice come back flipped, and starve when she can’t process mirror-reversed nutrients. I haven’t seen it explored for physics, though.

In order to make this story work, we have to get Alice to care about the weak nuclear force. The most familiar thing the weak force does is cause beta decay. And the most familiar thing that undergoes beta decay is a banana. Bananas contain radioactive potassium (potassium-40), which can transform into calcium by emitting an electron and an anti-electron-neutrino.

The radioactive potassium from a banana doesn’t stay in the body very long, only a few hours at most. But if Alice was especially paranoid about radioactivity, maybe she would want to avoid eating bananas. (We shouldn’t tell her that other foods contain potassium too.) If so, she might view the looking glass as a golden opportunity, a chance to eat as many bananas as she likes without worrying about radiation.

Does this work?

A first problem: can Alice even eat mirror-reversed bananas? I told you many biological molecules have handedness, which led Alan Moore’s version of Alice to starve. If we assume, unlike Moore, that Alice comes back in her original configuration and survives, we should still ask if she gets any benefit out of the bananas in the looking glass.

Researching this, I found that the main thing that makes bananas taste “banana-ish”, isoamyl acetate, does not have handedness: mirror bananas will still taste like bananas. Fructose, a sugar in bananas, does have handedness however: it isn’t the same when flipped in a mirror. Chatting with a chemist, the impression I got was that this isn’t a total loss: often, flipping a sugar results in another, different sugar. A mirror banana might still taste sweet, but less so. Overall, it may still be worth eating.

The next problem is a tougher one: flipping a potassium atom doesn’t actually make it immune to the weak force. The weak force only interacts with left-handed particles and right-handed antiparticles: in beta decay, it transforms a left-handed down quark to a left-handed up quark, producing a left-handed electron and a right-handed anti-neutrino.

Alice would have been fine if all of the quarks in potassium were left-handed, but they aren’t: an equal number are right-handed, so the mirror weak force will still act on them, and they will still undergo beta decay. Actually, it’s worse than that: quarks, and massive particles in general, don’t actually have a definite handedness. If you speed up enough to catch up to a quark and pass it, then from your perspective it’s now going in the opposite direction, and its handedness is flipped. The only particles with definite handedness are massless particles: those go at the speed of light, so you can never catch up to them. Another way to think about this is that quarks get their mass from the Higgs field, and this happens because the Higgs lets left- and right-handed quarks interact. What we call the quark’s mass is in some sense just left- and right-handed quarks constantly mixing back and forth.
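For readers who have seen a bit of quantum field theory, that last sentence is just a plain-English reading of a Dirac mass term. Schematically (my shorthand, glossing over the Standard Model’s doublet structure), once the Higgs field settles to its vacuum value v, a Yukawa coupling y becomes a term that turns left-handed quarks into right-handed ones and back:

```latex
\mathcal{L}_{\text{mass}} = -\,m\left(\bar{\psi}_L\,\psi_R + \bar{\psi}_R\,\psi_L\right),
\qquad m = \frac{y\,v}{\sqrt{2}} .
```

The mass term treats left and right symmetrically, which is the technical way of saying a massive quark is always a mix of both, so there is no way for Alice to get a potassium atom made of left-handed quarks only.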

Alice does have the opportunity to do something interesting here, if she can somehow capture the anti-neutrinos from those bananas. Our world appears to only have left-handed neutrinos and right-handed anti-neutrinos. This seemed reasonable when we thought neutrinos were massless, but now we know neutrinos have a (very small) mass. As a result, the hunt is on for right-handed neutrinos or left-handed anti-neutrinos: if we can measure them, we could fix one of the lingering mysteries of the Standard Model. With this in mind, Alice has the potential to really confuse some particle physicists, giving them some left-handed anti-neutrinos from beyond the looking-glass.

It turns out there’s a problem with even this scheme, though. The problem is a much wider one: the whole story is physically inconsistent.

I’d been acting like Alice can pass back and forth through the mirror, carrying all her particles with her. But what are “her particles”? If she carries a banana through the mirror, you might imagine the quarks in the potassium atoms carry over. But those quarks are constantly exchanging other quarks and gluons, as part of the strong force holding them together. They’re also exchanging photons with electrons via the electromagnetic force, and W bosons via beta decay. In quantum field theory, all of this is in some sense happening at once, an infinite sum over all possible exchanges. It doesn’t make sense to just carve out one set of particles and plug them into different fields somewhere else.

If we actually wanted to describe a mirror like Alice’s looking glass in physics, we’d want to do it consistently. This is similar to how physicists think of time travel: you can’t go back in time and murder your grandparents because your whole path in space-time has to stay consistent. You can only go back and do things you “already did”. We treat space in a similar way to time. A mirror like Alice’s imposes a condition, that fields on one side are equal to their mirror image on the other side. Conditions like these get used in string theory on occasion, and they have broad implications for physics on the whole of space-time, not just near the boundary. The upshot is that a world with a mirror like Alice’s in it would be totally different from a world without the looking glass: the weak force as we know it would not exist.

So unfortunately, I still don’t have a good “real life” story for a class about parity symmetry. It’s fun trying to follow Alice through a parity transformation, but there are a few too many problems for the tale to make any real sense. Feel free to suggest improvements!