Tag Archives: science communication

Talking and Teaching

Someone recently shared with me an article written by David Mermin in 1992 about physics talks. Some aspects are dated (our slides are no longer sheets of plastic, and I don’t think anyone writing an article like that today would feel the need to put it in the mouth of a fictional professor (which is a shame, honestly)), but most of it still holds true. I particularly recognized the self-doubt of being a young physicist sitting in a talk and thinking “I’m supposed to enjoy this?”

Mermin’s basic point is to keep things as light as possible. You want to convey motivation more than content, and background more than your own contributions. Slides should be sparse, both because people won’t be able to see everything and because people can get frustrated “reading ahead” of what you say.

Mermin’s suggestion that people read from a prepared text was probably good advice for him, but maybe not for others. It can work if you write like he does, but I don’t think most people’s writing is that much better than what they say in talks (you can judge this by reading people’s papers!). Some are much clearer speaking impromptu. I agree with him that in practice people end up just reading from their slides, which indeed is bad, but reading from a normal physics paper isn’t any better.

I also don’t completely agree with him about the value of speech over text. Yes, putting text on your slides means people can read ahead (unless you hide some of the text, which is easier to do these days than in the days of overhead transparencies). But relying on speech alone means that if someone’s attention lapses for just a moment, they’ll be lost. Unless you repeat yourself a lot (good practice in any case), anything you need your audience to remember shouldn’t only be spoken: make sure they can also read it somewhere if they need it.

That said, “if they need it” is doing a lot of work here, and this is where I agree again with Mermin. Fundamentally, you don’t need to convey everything you think you do. (I don’t usually need to convey everything I think I do!) It’s a lesson I’ve been learning this year from pedagogy courses, a message they try to instill in everyone who teaches at the university. If you want to really convey something well, then you just can’t convey that much. You need to focus, pick a few things and try to get them across, and structure the rest of what you say to reinforce those things. When teaching, or when speaking, less is more.

The Temptation of Spinoffs

Read an argument for a big scientific project, and you’ll inevitably hear mention of spinoffs. Whether it’s NASA bringing up velcro or CERN and the World-Wide Web, scientists love to bring up times when a project led to some unrelated technology that improved people’s lives.

Just as inevitably as they show up, though, these arguments face criticism. Advocates of the projects argue that promoting spinoffs misses the point, training the public to think about science in terms of unrelated near-term gadgets rather than the actual point of the experiments. They think promoters should focus on the scientific end-goals, justifying them either in terms of benefit to humanity or as a broader, “it makes the country worth defending” human goal. It’s a perspective that shows up in education too, where even when students ask “when will I ever use this in real life?” it’s not clear that’s really what they mean.

On the other side, opponents of the projects will point out that the spinoffs aren’t good enough to justify the science. Some, like velcro, weren’t actually spinoffs to begin with. Others seem like tiny benefits compared to the vast cost of the scientific projects, or like things that would have been much easier to get with funding that was actually dedicated to achieving the spinoff.

With all these downsides, why do people keep bringing spinoffs up? Are they just a cynical attempt to confuse people?

I think there’s something less cynical going on here. Things make a bit more sense when you listen to what the scientists say, not to the public, but when talking to scientists in other disciplines.

Scientists speaking to fellow scientists still mention spinoffs, but they mention scientific spinoffs. The speaker in a talk I saw recently pointed out that the LHC doesn’t just help with particle physics: by exploring the behavior of collisions of high-energy atomic nuclei it provides essential information for astrophysicists trying to understand neutron stars and cosmologists studying the early universe. When these experiments study situations we can’t model well, they improve the approximations we use to describe those situations in other contexts. By knowing more, we know more. Knowledge builds on knowledge, and the more we know about the world the more we can do, often in surprising and unplanned ways.

I think that when scientists promote spinoffs to the public, they’re trying to convey this same logic. Like promoting an improved understanding of stars to astrophysicists, they’re modeling the public as “consumer goods scientists” and trying to pick out applications they’d find interesting.

Knowing more does help us know more, that much is true. And eventually that knowledge can translate to improving people’s lives. But in a public debate, people aren’t looking for these kinds of principles, let alone a scientific “I’ll scratch your back if you’ll scratch mine”. They’re looking for something like a cost-benefit analysis, “why are we doing this when we could do that?”

(This is not to say that most public debates involve especially good cost-benefit analysis. Just that it is, in the end, what people are trying to do.)

Simply listing spinoffs doesn’t really get at this. The spinoffs tend to be either small enough that they don’t really argue the point (velcro, even if NASA had invented it, could probably have been more cheaply found without a space program), or big but extremely unpredictable (it’s not like we’re going to invent another world-wide web).

Focusing on the actual end-products of the science should do a bit better. That can include “scientific spinoffs”, if not the “consumer goods spinoffs”. Those collisions of heavy nuclei change our understanding of how we model complex systems. That has applications in many areas of science, from how we model stars to materials to populations, and those applications in turn could radically improve people’s lives.

Or, well, they could not. Basic science is very hard to do cost-benefit analyses with. It’s the fabled explore/exploit dilemma, whether to keep trying to learn more or focus on building on what you have. If you don’t know what’s out there, if you don’t know what you don’t know, then you can’t really solve that dilemma.

So I get the temptation of reaching to spinoffs, of pointing to something concrete in everyday life and saying “science did that!” Science does radically improve people’s lives, but it doesn’t always do it especially quickly. You want to teach people that knowledge leads to knowledge, and you try to communicate it the way you would to other scientists, by saying how your knowledge and theirs intersect. But if you want to justify science to the public, you want something with at least the flavor of cost-benefit analysis. And you’ll get more mileage out of that if you think about where the science itself can go than if you focus on the consumer goods it accidentally spins off along the way.

The Problem of Quantum Gravity Is the Problem of High-Energy (Density) Quantum Gravity

I’ve said something like this before, but here’s another way to say it.

The problem of quantum gravity is one of the most famous problems in physics. You’ve probably heard someone say that quantum mechanics and general relativity are fundamentally incompatible. Most likely, this was narrated over pictures of a foaming, fluctuating grid of space-time. Based on that, you might think that all we have to do to solve this problem is to measure some quantum property of gravity. Maybe we could make a superposition of two different gravitational fields, see what happens, and solve the problem that way.

I mean, we could do that, some people are trying to. But it won’t solve the problem. That’s because the problem of quantum gravity isn’t just the problem of quantum gravity. It’s the problem of high-energy quantum gravity.

Merging quantum mechanics and general relativity is actually pretty easy. General relativity is a big conceptual leap, certainly, a theory in which gravity is really just the shape of space-time. At the same time, though, it’s also a field theory, the same general type of theory as electromagnetism. It’s a weirder field theory than electromagnetism, to be sure, one with deeper implications. But if we want to describe low energies, and weak gravitational fields, then we can treat it just like any other field theory. We know how to write down some pretty reasonable-looking equations, we know how to do some basic calculations with them. This part is just not that scary.

The scary part happens later. The theory we get from these reasonable-looking equations continues to look reasonable for a while. It gives formulas for the probability of things happening: things like gravitational waves bouncing off each other, as they travel through space. The problem comes when those waves have very high energy, and the nice reasonable probability formula now says that the probability is greater than one.

For those of you who haven’t taken a math class in a while, probabilities greater than one don’t make sense. A probability of one is a certainty, something guaranteed to happen. A probability greater than one isn’t more certain than certain, it’s just nonsense.
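For readers who want a slightly more quantitative picture, here’s a rough sketch of where those bad probabilities come from (schematic only: I’m suppressing all numerical factors, and using the standard dimensional-analysis argument rather than anything specific to a particular calculation):

```latex
% Graviton interactions couple through Newton's constant G, which in
% natural units has dimensions of inverse energy squared. Dimensional
% analysis then forces the scattering amplitude to grow with energy:
\mathcal{M}(E) \;\sim\; \frac{E^{2}}{M_{\mathrm{Pl}}^{2}},
\qquad
M_{\mathrm{Pl}} \;=\; \sqrt{\frac{\hbar c^{5}}{G}} \;\approx\; 1.2\times 10^{19}\ \mathrm{GeV}.
% Probabilities go like |M|^2, so once E approaches the Planck energy
% the "reasonable" formula returns numbers greater than one.
```

The growth itself is harmless at low energies, where $E^2/M_{\mathrm{Pl}}^2$ is tiny; the nonsense only kicks in when the energy reaches the Planck scale.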

So we know something needs to change, we know we need a new theory. But we only know we need that theory when the energy is very high: when it’s the Planck energy. Before then, we might still have a different theory, but we might not: it’s not a “problem” yet.

Now, a few of you understand this part, but still have a misunderstanding. The Planck energy seems high for particle physics, but it isn’t high in an absolute sense: it’s about the energy in a tank of gasoline. Does that mean that all we have to do to measure quantum gravity is to make a quantum state out of your car?
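The gasoline comparison is easy to check for yourself. Here’s a quick back-of-the-envelope script (the physical constants are standard values; the tank size and gasoline energy density are my own rough assumptions):

```python
import math

# Planck energy from fundamental constants (standard CODATA values).
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 / (kg s^2)

planck_energy = math.sqrt(hbar * c**5 / G)  # in joules

# Rough assumed figures: a ~50 liter tank, ~34 MJ of chemical
# energy per liter of gasoline.
tank_liters = 50
gasoline_joules_per_liter = 3.4e7
tank_energy = tank_liters * gasoline_joules_per_liter

print(f"Planck energy:     {planck_energy:.2e} J")
print(f"Tank of gasoline:  {tank_energy:.2e} J")
# Both come out around 2 * 10^9 joules: the same order of magnitude.
```

A couple of gigajoules: enormous for a single particle collision, utterly mundane for everyday machinery.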

Again, no. That’s because the problem of quantum gravity isn’t just the problem of high-energy quantum gravity either.

Energy seems objective, but it’s not. It’s subjective, or more specifically, relative. Due to special relativity, observers moving at different speeds observe different energies. Because of that, high energy alone can’t be the requirement: it isn’t something either general relativity or quantum field theory can “care about” by itself.
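To make the frame-dependence concrete, here’s a toy calculation (my own illustrative numbers, nothing more): the energy of a single particle at rest, as measured by observers moving past it at different speeds.

```python
import math

c = 2.99792458e8  # m/s
m = 1.0e-27       # kg, a roughly proton-scale mass (assumed)

def energy_seen_by_observer(v):
    """Energy of a particle of mass m at rest, measured by an
    observer moving past at speed v: E = gamma * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * c**2

for frac in (0.0, 0.9, 0.999, 0.999999):
    print(f"v = {frac}c -> E = {energy_seen_by_observer(frac * c):.3e} J")
# The measured energy grows without bound as the observer's speed
# approaches c, so "high energy" alone can't be a frame-independent
# criterion for when a new theory is needed.
```

The same particle, doing nothing at all, can have any energy you like depending on who’s looking at it.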

Instead, the real thing that matters is something that’s invariant under special relativity. This is hard to define in general terms, but it’s best to think of it as a requirement not for energy, but for energy density.

(For the experts: I’m justifying this phrasing in part because of how you can interpret the quantity appearing in energy conditions as the energy density measured by an observer. This still isn’t the correct way to put it, but I can’t think of a better way that would be understandable to a non-technical reader. If you have one, let me know!)

Why do we need quantum gravity to fully understand black holes? Not just because they have a lot of mass, but because they have a lot of mass concentrated in a small area, a high energy density. Ditto for the Big Bang, when the whole universe had a very large energy density. Particle colliders are useful not just because they give particles high energy, but because they give particles high energy and put them close together, creating a situation with very high energy density.

Once you understand this, you can use it to think about whether some experiment or observation will help with the problem of quantum gravity. Does the experiment involve very high energy density, much higher than anything we can do in a particle collider right now? Is that telescope looking at something created in conditions of very high energy density, or just something nearby?

It’s not impossible for an experiment that doesn’t meet these conditions to find something. Whatever the correct quantum gravity theory is, it might be different from our current theories in a more dramatic way, one that’s easier to measure. But the only guarantee, the only situation where we know we need a new theory, is for very high energy density.

Simulated Wormholes for My Real Friends, Real Wormholes for My Simulated Friends

Maybe you’ve recently seen a headline like this:

Actually, I’m more worried that you saw that headline before it was edited, when it looked like this:

If you’ve seen either headline, and haven’t read anything else about it, then please at least read this:

Physicists have not created an actual wormhole. They have simulated a wormhole on a quantum computer.

If you’re willing to read more, then read the rest of this post. There’s a more subtle story going on here, both about physics and about how we communicate it. And for the experts, hold on, because when I say the wormhole was a simulation I’m not making the same argument everyone else is.

[And for the mega-experts, there’s an edit later in the post where I soften that claim a bit.]

The headlines at the top of this post come from an article in Quanta Magazine. Quanta is a web-based magazine covering many fields of science. They’re read by the general public, but they aim for a higher standard than many science journalists, with stricter fact-checking and a goal of covering more challenging and obscure topics. Scientists in turn have tended to be quite happy with them: often, they cover things we feel are important but that the ordinary media isn’t able to cover. (I even wrote something for them recently.)

Last week, Quanta published an article about an experiment with Google’s Sycamore quantum computer. By arranging the quantum bits (qubits) in a particular way, they were able to observe behaviors one would expect out of a wormhole, a kind of tunnel linking different points in space and time. They published it with the second headline above, claiming that physicists had created a wormhole with a quantum computer and explaining how, using a theoretical picture called holography.

This pissed off a lot of physicists. After push-back, Quanta’s twitter account published this statement, and they added the word “Holographic” to the title.

Why were physicists pissed off?

It wasn’t because the Quanta article was wrong, per se. As far as I’m aware, all the technical claims they made are correct. Instead, it was about two things. One was the title, and the implication that physicists “really made a wormhole”. The other was the tone, the excited “breaking news” framing complete with a video comparing the experiment with the discovery of the Higgs boson. I’ll discuss each in turn:

The Title

Did physicists really create a wormhole, or did they simulate one? And why would that be at all confusing?

The story rests on a concept from the study of quantum gravity, called holography. Holography is the idea that in quantum gravity, certain gravitational systems like black holes are fully determined by what happens on a “boundary” of the system, like the event horizon of a black hole. It’s supposed to be a hologram in analogy to 3d images encoded in 2d surfaces, rather than like the hard-light constructions of science fiction.

The best-studied version of holography is something called AdS/CFT duality. AdS/CFT duality is a relationship between two different theories. One of them is a CFT, or “conformal field theory”, a type of particle physics theory with no gravity and no mass. (The first example of the duality used my favorite toy theory, N=4 super Yang-Mills.) The other one is a version of string theory in an AdS, or anti-de Sitter space, a version of space-time curved so that objects shrink as they move outward, approaching a boundary. (In the first example, this space-time had five dimensions curled up in a sphere and the rest in the anti-de Sitter shape.)

These two theories are conjectured to be “dual”. That means that, for anything that happens in one theory, you can give an alternate description using the other theory. We say the two theories “capture the same physics”, even though they appear very different: they have different numbers of dimensions of space, and only one has gravity in it.

Many physicists would claim that if two theories are dual, then they are both “equally real”. Even if one description is more familiar to us, both descriptions are equally valid. Many philosophers are skeptical, but honestly I think the physicists are right about this one. Philosophers try to figure out which things are real or not real, to make a list of real things and explain everything else as made up of those in some way. I think that whole project is misguided, that it’s clarifying how we happen to talk rather than the nature of reality. In my mind, dualities are some of the clearest evidence that this project doesn’t make any sense: two descriptions can look very different, but in a quite meaningful sense be totally indistinguishable.

That’s the sense in which Quanta and Google and the string theorists they’re collaborating with claim that physicists have created a wormhole. They haven’t created a wormhole in our own space-time, one that, were it bigger and more stable, we could travel through. It isn’t progress towards some future where we actually travel the galaxy with wormholes. Rather, they created some quantum system, and that system’s dual description is a wormhole. That’s a crucial point to remember: even if they created a wormhole, it isn’t a wormhole for you.

If that were the end of the story, this post would still be full of warnings, but the title would be a bit different. It was going to be “Dual Wormholes for My Real Friends, Real Wormholes for My Dual Friends”. But there’s a list of caveats. Most of them arguably don’t matter, but the last was what got me to change the word “dual” to “simulated”.

  1. The real world is not described by N=4 super Yang-Mills theory. N=4 super Yang-Mills theory was never intended to describe the real world. And while the real world may well be described by string theory, those strings are not curled up around a five-dimensional sphere with the remaining dimensions in anti-de Sitter space. Nor can we create either theory in a lab.
  2. The Standard Model probably has a quantum gravity dual too, see this cute post by Matt Strassler. But they still wouldn’t have been able to use that to make a holographic wormhole in a lab.
  3. Instead, they used a version of AdS/CFT with fewer dimensions. It relates a weird form of gravity in one space and one time dimension (called JT gravity), to a weird quantum mechanics theory called SYK, with an infinite number of quantum particles or qubits. This duality is a bit more conjectural than the original one, but still reasonably well-established.
  4. Quantum computers don’t have an infinite number of qubits, so they had to use a version with a finite number: seven, to be specific. They trimmed the model down so that it would still show the wormhole-dual behavior they wanted. At this point, you might say that they’re definitely just simulating the SYK theory, using a small number of qubits to simulate the infinite number. But I think they could argue that this system, too, has a quantum gravity dual. The dual would have to be even weirder than JT gravity, and even more conjectural, but the signs of wormhole-like behavior they observed (mostly through simulations on an ordinary computer, which is still better at this kind of thing than a quantum computer) could be seen as evidence that this limited theory has its own gravity partner, with its own “real dual” wormhole.
  5. But those seven qubits don’t just have the interactions they were programmed to have, the ones with the dual. They are physical objects in the real world, so they interact with all of the forces of the real world. That includes, though very weakly, the force of gravity.

And that’s where I think things break, and you have to call the experiment a simulation. You can argue, if you really want to, that the seven-qubit SYK theory has its own gravity dual, with its own wormhole. There are people who expect duality to be broad enough to include things like that.

But you can’t argue that the seven-qubit SYK theory, plus gravity, has its own gravity dual. Theories that already have gravity are not supposed to have gravity duals. If you pushed hard enough on any of the string theorists on that team, I’m pretty sure they’d admit that.

That is what decisively makes the experiment a simulation. It approximately behaves like a system with a dual wormhole, because you can approximately ignore gravity. But if you’re making some kind of philosophical claim, that you “really made a wormhole”, then “approximately” doesn’t cut it: if you don’t exactly have a system with a dual, then you don’t “really” have a dual wormhole: you’ve just simulated one.

Edit: mitchellporter in the comments points out something I didn’t know: that there are in fact proposals for gravity theories with gravity duals. They are in some sense even more conjectural than the series of caveats above, but at minimum my claim above, that any of the string theorists on the team would agree that the system’s gravity means it can’t have a dual, is probably false.

I think at this point, I’d soften my objection to the following:

Describing the system of qubits in the experiment as a limited version of the SYK theory is in one way or another an approximation. It approximates them as not having any interactions beyond those they programmed, it approximates them as not affected by gravity, and because it’s a quantum mechanical description it even approximates the speed of light as small. Those approximations don’t guarantee that the system doesn’t have a gravity dual. But for it to have one, our reality, overall, would have to have a gravity dual. There would have to be a dual gravity interpretation of everything, not just the inside of Google’s quantum computer, and it would have to be exact, not just an approximation. Then the approximate SYK would be dual to an approximate wormhole, but that approximate wormhole would be an approximation of some “real” wormhole in the dual space-time.

That’s not impossible, as far as I can tell. But it piles conjecture upon conjecture upon conjecture, to the point that I don’t think anyone has explicitly committed to the whole tower of claims. If you want to believe that this experiment literally created a wormhole, you thus can, but keep in mind the largest asterisk known to mankind.

End edit.

If it weren’t for that caveat, then I would be happy to say that the physicists really created a wormhole. It would annoy some philosophers, but that’s a bonus.

But even if that were true, I wouldn’t say that in the title of the article.

The Title, Again

These days, people get news in two main ways.

Sometimes, people read full news articles. Reading that Quanta article is a good way to understand the background of the experiment, what was done and why people care about it. As I mentioned earlier, I don’t think anything said there was wrong, and they cover essentially all of the caveats you’d care about (except for that last one 😉 ).

Sometimes, though, people just see headlines. They get forwarded on social media, observed at a glance passed between friends. If you’re popular enough, then many more people will see your headline than will actually read the article. For many people, their whole understanding of certain scientific fields is formed by these glancing impressions.

Because of that, if you’re popular and news-y enough, you have to be especially careful with what you put in your headlines, especially when it implies a cool science fiction story. People will almost inevitably see them out of context, and it will impact their view of where science is headed. In this case, the headline may have given many people the impression that we’re actually making progress towards travel via wormholes.

Some of my readers might think this is ridiculous, that no-one would believe something like that. But as a kid, I did. I remember reading popular articles about wormholes, describing how you’d need energy moving in a circle, and other articles about optical physicists finding ways to bend light and make it stand still. Putting two and two together, I assumed these ideas would one day merge, allowing us to travel to distant galaxies faster than light.

If I had seen Quanta’s headline at that age, I would have taken it as confirmation. I would have believed we were well on the way to making wormholes, step by step. Even the New York Times headline, “the Smallest, Crummiest Wormhole You Can Imagine”, wouldn’t have fazed me.

(I’m not sure even the extra word “holographic” would have. People don’t know what “holographic” means in this context, and while some of them would assume it meant “fake”, others would think about the many works of science fiction, like Star Trek, where holograms can interact physically with human beings.)

Quanta has a high-brow audience, many of whom wouldn’t make this mistake. Nevertheless, I think Quanta is popular enough, and respectable enough, that they should have done better here.

At minimum, they could have used the word “simulated”. Even if they go on to argue in the article that the wormhole is real, and not just a simulation, the word in the title does no real harm. It would be a lie, but a beneficial “lie to children”, the basic stock-in-trade of science communication. I think they could have defended it to the string theorists they interviewed on those grounds.

The Tone

Honestly, I don’t think people would have been nearly so pissed off were it not for the tone of the article. There are a lot of physics bloggers who view themselves as serious-minded people, opposed to hype and publicity stunts. They view the research program aimed at simulating quantum gravity on a quantum computer as just an attempt to link a dying and un-rigorous research topic to an over-hyped and over-funded one, pompous storytelling aimed at promoting the careers of people who are already extremely successful.

These people tend to view Quanta favorably, because it covers serious-minded topics in a thorough way. And so many of them likely felt betrayed, seeing this Quanta article as a massive failure of that serious-mindedness, falling for or even endorsing the hypiest of hype.

To those people, I’d like to politely suggest you get over yourselves.

Quanta’s goal is to cover things accurately, to represent all the facts in a way people can understand. But “how exciting something is” is not a fact.

Excitement is subjective. Just because most of the things Quanta finds exciting you also find exciting, does not mean that Quanta will find the things you find unexciting unexciting. Quanta is not on “your side” in some war against your personal notion of unexciting science, and you should never have expected it to be.

In fact, Quanta tends to find things exciting, in general. They were more excited than I was about the amplituhedron, and I’m an amplitudeologist. Part of what makes them consistently excited about the serious-minded things you appreciate them for is that they listen to scientists and get excited about the things they’re excited about. That is going to include, inevitably, things those scientists are excited about for what you think are dumb groupthinky hype reasons.

I think the way Quanta titled the piece was unfortunate, and probably did real damage. I think the philosophical claim behind the title is wrong, though for subtle and weird enough reasons that I don’t really fault anybody for ignoring them. But I don’t think the tone they took was a failure of journalistic integrity or research or anything like that. It was a matter of taste. It’s not my taste, it’s probably not yours, but we shouldn’t have expected Quanta to share our tastes in absolutely everything. That’s just not how taste works.

This Week at Quanta Magazine

I’ve got an article in Quanta Magazine this week, about a program called FORM.

Quanta has come up a number of times on this blog, they’re a science news outlet set up by the Simons Foundation. Their goal is to enhance the public understanding of science and mathematics. They cover topics other outlets might find too challenging, and they cover the topics others cover with more depth. Most people I know who’ve worked with them have been impressed by their thoroughness: they take fact-checking to a level I haven’t seen with other science journalists. If you’re doing a certain kind of mathematical work, then you hope that Quanta decides to cover it.

A while back, as I was chatting with one of their journalists, I had a startling realization: if I want Quanta to cover something, I can send them a tip, and if they’re interested they’ll write about it. That realization resulted in the article I talked about here. Chatting with the journalist interviewing me for that article, though, I learned something if anything even more startling: if I want Quanta to cover something, and I want to write about it, I can pitch the article to Quanta, and if they’re interested they’ll pay me to write about it.

Around the same time, I happened to talk to a few people in my field, who had a problem they thought Quanta should cover. A piece of software called FORM was used in all the most serious collider physics calculations. Despite that, it wasn’t being supported: its future was unclear. You can read the article to learn more.

One thing I didn’t mention in that article: I hadn’t used FORM before I started writing it. I don’t do those “most serious collider physics calculations”, so I’d never bothered to learn FORM. I mostly use Mathematica, a common choice among physicists who want something easy to learn, even if it’s not the strongest option for many things.

(By the way, it was surprisingly hard to find quotes about FORM that didn’t compare it specifically to Mathematica. In the end I think I included one, but believe me, there could have been a lot more.)

Now, I wonder if I should have been using FORM all along. Many times I’ve pushed to the limits of what Mathematica could comfortably handle, the limits of what my computer’s memory could hold, equations long enough that just expanding them out took complicated work-arounds.
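To give a feel for why “just expanding things out” becomes a bottleneck (this is my own toy illustration, not anything from the article), the number of distinct terms in a fully expanded polynomial grows combinatorially:

```python
from math import comb

# "Expression swell": the number of distinct monomials in the full
# expansion of (x1 + x2 + ... + xn)^d is the stars-and-bars count
# C(d + n - 1, n - 1). Even modestly sized expressions explode into
# far more terms than fit comfortably in memory.
def term_count(n_vars, degree):
    return comb(degree + n_vars - 1, n_vars - 1)

for degree in (10, 20, 40, 80):
    print(f"(sum of 10 variables)^{degree}: "
          f"{term_count(10, degree):,} terms")
```

This is exactly the regime FORM was designed for: it processes expressions term by term, streaming them through disk, instead of holding the whole expansion in memory at once the way a general-purpose system typically does.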

I’d love it if this article gets FORM more attention, and more support. But also, I’d love it if it gives a window on the nuts and bolts of hard-core particle physics: the things people have to do to turn those T-shirt equations into predictions for actual colliders. It’s a world in between physics and computer science and mathematics, a big part of the infrastructure of how we know what we know that, precisely because it’s infrastructure, often ends up falling through the cracks.

Edit: For researchers interested in learning more about FORM, the workshop I mentioned at the end of the article is now online, with registrations open.

From Journal to Classroom

As part of the pedagogy course I’ve been taking, I’m doing a few guest lectures in various courses. I’ve got one coming up in a classical mechanics course (“intermediate”-level, so not Newton’s laws, but stuff the general public doesn’t know much about like Hamiltonians). They’ve been speeding through the core content, so I got to cover a “fun” topic, and after thinking back to my grad school days I chose a topic I think they’ll have a lot of fun with: Chaos theory.

Getting the obligatory Warhammer reference out of the way now

Chaos is one of those things everyone has a vague idea about. People have heard stories where a butterfly flaps its wings and causes a hurricane. Maybe they’ve heard of the rough concept, determinism with strong dependence on the initial conditions, so a tiny change (like that butterfly) can have huge consequences. Maybe they’ve seen pictures of fractals, and got the idea these are somehow related.
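The “strong dependence” part is easy to see numerically. Here’s a minimal sketch (a toy example of my own, not from any course material) using the logistic map, a standard one-line chaotic system:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x), which is chaotic at r = 4.
def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)  # a butterfly-sized nudge

# The tiny initial difference roughly doubles each step; after a few dozen
# steps the two trajectories bear no resemblance, despite identical rules.
for step in (0, 10, 30, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.3e}")
```

The rule is completely deterministic, yet a perturbation of one part in ten billion eventually changes everything: that’s the butterfly, in three lines of arithmetic.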

Its role in physics is a bit more detailed. It’s one of those concepts that “intermediate classical mechanics” is good for, one that can be much better understood once you’ve been introduced to some of the nineteenth century’s mathematical tools. It felt like a good way to show this class that the things they’ve learned aren’t just useful for dusty old problems, but for understanding something the public thinks is sexy and mysterious.

As luck would have it, the venerable textbook the students are using includes a (2000s-era) chapter on chaos. I read through it, and it struck me that it’s a very different chapter from most of the others. This hit me particularly when I noticed a section describing a famous early study of chaos, and I realized that all the illustrations were based on the actual original journal article.

I had surprisingly mixed feelings about this.

On the one hand, there’s a big fashion right now for something called research-based teaching. That doesn’t mean “using teaching methods that are justified by research” (though you’re supposed to do that too), but rather, “tying your teaching to current scientific research”. This is a fashion that makes sense, because learning about cutting-edge research in an undergraduate classroom feels pretty cool. It lets students feel more connected with the scientific community, it inspires them to get involved, and it gets them more used to what “real research” looks like.

On the other hand, structuring your textbook based on the original research papers feels kind of lazy. There’s a reason we don’t teach Newtonian mechanics the way Newton would have. Pedagogy is supposed to be something we improve at over time: we come up with better examples and better notation, more focused explanations that teach what we want students to learn. If we just summarize a paper, we’re not really providing “added value”: we should hope, at this point, that we can do better.

Thinking about this, I think the distinction boils down to why you’re teaching the material in the first place.

With a lot of research-based teaching, the goal is to show the students how to interact with current literature. You want to show them journal papers, not because the papers are the best way to teach a concept or skill, but because reading those papers is one of the skills you want to teach.

That makes sense for very current topics, but it seems a bit weird for the example I’ve been looking at, an early study of chaos from the 1960s. It’s great if students can read current papers, but they don’t necessarily need to read older ones. (At least, not yet.)

What, then, is the textbook trying to teach? Here things get a bit messy. For a relatively old topic, you’d ideally want to teach not just a vague impression of what was discovered, but concrete skills. Here, though, those skills are just a bit beyond the students’ reach: chaos is more approachable than you’d think, but still not 100% something the students can work with. Instead they’re learning to appreciate concepts. This can be quite valuable, but it doesn’t give the kind of structure that a concrete skill does. In particular, it makes it hard to know what to emphasize, beyond just summarizing the original article.

In this case, I’ve come up with my own way forward. There are actually concrete skills I’d like to teach. They’re skills that link up with what the textbook is teaching, skills grounded in the concepts it’s trying to convey, and that makes me think I can convey them. It will give some structure to the lesson, a focus on not merely what I’d like the students to think but what I’d like them to do.

I won’t go into too much detail: I suspect some of the students may be reading this, and I don’t want to spoil the surprise! But I’m looking forward to class, and to getting to try another pedagogical experiment.

The Folks With the Best Pictures

Sometimes I envy astronomers. Particle physicists can write books full of words and pages of colorful graphs and charts, and the public won’t retain any of it. Astronomers can mesmerize the world with a single picture.

NASA just released the first images from its James Webb Space Telescope. They’re impressive, and not merely visually: in twelve hours, they probe deeper than the Hubble Space Telescope managed in weeks on the same patch of sky, as well as gathering data that can show what kinds of molecules are present in the galaxies.

(If you’re curious how the James Webb images compare to Hubble ones, here’s a nice site comparing them.)

Images like this enter the popular imagination. The Hubble telescope’s deep field has appeared on essentially every artistic product one could imagine. As of writing this, searching for “Hubble” on Etsy gives almost 5,000 results. “JWST”, the acronym for the James Webb Space Telescope, already gives over 1,000, including several on the front page featuring the just-released images. Despite the Large Hadron Collider having operated for over a decade, searching “LHC” also leads to just around 1,000 results…and a few on the front page are actually pictures of the JWST!

It would be great as particle physicists to have that kind of impact…but I think we shouldn’t stress ourselves too much about it. Ultimately astronomers will always have this core advantage. Space is amazing, visually stunning and mind-bogglingly vast. It has always had a special place for human cultures, and I’m happy for astronomers to inherit that place.

Gateway Hobbies

When biologists tell stories of their childhoods, they’re full of trails of ants and fireflies in jars. Lots of writers start young, telling stories on the playground and making skits with their friends. And the mere existence of “chemistry sets” tells you exactly how many chemists get started. Many fields have these “gateway hobbies”, like gateway drugs for careers, ways that children and teenagers get hooked and gain experience.

Physics is a little different, though. While kids can play with magnets and electricity, there aren’t a whole lot of other “physics hobbies”, especially for esoteric corners like particle physics. Instead, the “gateway hobbies” of physics are more varied, drawing from many different fields.

First, of course, even if a child can’t “do physics”, they can always read about it. Kids will memorize the names of quarks, read about black holes, or watch documentaries about string theory. I’m not counting this as a “physics hobby” because it isn’t really: physics isn’t a collection of isolated facts, but of equations: frameworks you can use to make predictions. Reading about the Big Bang is a good way to get motivated and excited, it’s a great thing to do…but it doesn’t prepare you for the “science part” of the science.

A few efforts at physics popularization get a bit more hands-on. Many come in the form of video games. You can get experience with relativity through Velocity Raptor, quantum mechanics through Quantum Chess, or orbital mechanics through Kerbal Space Program. All of these get just another bit closer to “doing physics” rather than merely reading about it.

One can always gain experience in other fields, and that can be surprisingly relevant. Playing around with a chemistry set gives first-hand experience of the kinds of things that motivated quantum mechanics, and some things that still motivate condensed matter research. Circuits are physics, more directly, even if they’re also engineering: and for some physicists, designing electronic sensors is a huge part of what they do.

Astronomy has a special place, both in the history of physics and the pantheon of hobbies. There’s a huge amateur astronomy community, one that both makes real discoveries and reaches out to kids of all ages. Many physicists got their start looking at the heavens, using it, like Newton’s contemporaries did, as a first glimpse into the mechanisms of nature.

More and more research in physics involves at least some programming, and programming is another activity kids have access to in spades, from Logo to robotics competitions. Learning how to program isn’t just an important skill: it’s also a way for young people to experience a world bound by clear laws and logic, another motivation to study physics.

Of course, if you’re interested in rules and logic, why not go all the way? Plenty of physicists grew up doing math competitions. I have fond memories of Oregon’s Pentagames, and the more “serious” activities go all the way up to the famously challenging Putnam Competition.

Finally, there are physics competitions too, at least in the form of the International Physics Olympiad, where high school students compete in physics prowess.

Not every physicist did these sorts of things, of course: some got hooked later. Others did more than one. A friend of mine who’s always been “Mr. Science” got almost the whole package, with a youth spent exploring the wild west of the early internet, working at a planetarium, and discovering just how easy it is to get legal access to dangerous and radioactive chemicals. There are many paths in to physics, so even if kids can’t “do physics” the same way they “do chemistry”, there’s still plenty to do!

Keeping It Colloquial

In the corners of academia where I hang out, a colloquium is a special kind of talk. Most talks we give are part of weekly seminars for specific groups. For example, the theoretical particle physicists here have a seminar. Each week we invite a speaker, who gives a talk on their recent work. Since they expect an audience of theoretical particle physicists, they can go into more detail.

A colloquium isn’t like that. Colloquia are talks for the whole department: theorists and experimentalists, particle physicists and biophysicists. They’re more prestigious, for big famous professors (or sometimes, for professors interviewing for jobs…). The different audience, and different context, means that the talk plays by different rules.

Recently, I saw a conference full of “colloquium-style” talks, trying to play by these rules. Some succeeded, some didn’t…and I think I now have a better idea of how those rules work.

First, in a colloquium, you’re not just speaking for yourself. You’re an ambassador for your field. For some of the audience, this might be the first time they’ve heard a talk by someone who does your kind of research. You want to give them a good impression, not just about you, but about the whole topic. So while you definitely want to mention your own work, you want to tell a full story, one that gives more than a glimpse of what others are doing as well.

Second, you want to connect to something the audience already knows. With an audience of physicists, you can assume a certain baseline, but not much more than that. You need to make the beginning accessible and start with something familiar. For the conference I mentioned, a talk that did this well was the talk on exoplanets, which started with the familiar planets of the solar system, classifying them in order to show what you might expect exoplanets to look like. In contrast, ’t Hooft’s talk did this poorly. His work exploits a loophole in a quantum-mechanical argument called Bell’s theorem, which most physicists have heard of. Instead of mentioning Bell’s theorem, he referred vaguely to “criteria from philosophers”, and only even mentioned that near the end of the talk, instead starting with properties of quantum mechanics his audience was much less familiar with.

Moving on, then, you want to present a mystery. So far, everything in the talk has made sense, and your audience feels like they understand. Now, you show them something that doesn’t fit, something their familiar model can’t accommodate. This activates your audience’s scientist instincts: they’re curious now, they want to know the answer. A good example from the conference was a talk on chemistry in space. The speaker emphasized that we can see evidence of complex molecules in space, but that space dust is so absurdly dilute that it seems impossible such molecules could form: two atoms could go a billion years without meeting each other.

You can’t just leave your audience mystified, though. You next have to solve the mystery. Ideally, your solution will be something smart, but simple: something your audience can intuitively understand. This has two benefits. First, it makes you look smart: you describe a mysterious problem, and then you show how to solve it! Second, it makes the audience feel smart: they felt the problem was hard, but now they understand how to solve it too. The audience will have good feelings about you as a result, and good feelings about the topic: in some sense, you’ve tied a piece of their self-esteem to knowing the solution to your problem. This was done well by the speaker discussing space chemistry, who explained that the solution was chemistry on surfaces: if two atoms are on the surface of a dust grain or meteorite, they’re much more likely to react. It was also done well by a speaker discussing models of diseases like diabetes: he explained the challenge of controlling processes with cells, when cells replicate exponentially, and showed one way they could be controlled, when the immune system kills off any cells that replicate much faster than their neighbors. (He also played the guitar to immune system-themed songs…also a good colloquium strategy for those who can pull it off!)

Finally, a picture is worth a thousand words, as long as it’s a clear one. For an audience that won’t follow most of your equations, it’s crucial to show them something visual: graphics, puns, pictures of equipment or graphs. Crucially, though, your graphics should be something the audience can understand. If you put up a graph with a lot of incomprehensible detail: parameters you haven’t explained, or just set up in a way your audience doesn’t get, then your audience gets stuck. Much like an unfamiliar word, a mysterious graph will have members of the audience scratching their heads, trying to figure out what it means. They’ll be so busy trying, they’ll miss what you say next, and you’ll lose them! So yes, put in graphs, put in pictures: but make sure that the ones you use, you have time to explain.

Answering Questions: Virtue or Compulsion?

I was talking to a colleague about this blog. I mentioned worries I’ve had about email conversations with readers: worries about whether I’m communicating well, whether the readers are really understanding. For the colleague though, something else stood out:

“You sure are generous with your time.”

Am I?

I’d never really thought about it that way before. It’s not like I drop everything to respond to a comment, or a message. I leave myself a reminder, and get to it when I have time. To the extent that I have a time budget, I don’t spend it freely: I prioritize work before chatting with my readers, as nice as you folks are.

At the same time, though, I think my colleague was getting at a real difference there. It’s true that I don’t answer questions right away. But I do answer them eventually. I can’t imagine being asked a question, and just never answering it.

There are exceptions, of course. If you’re obviously just trolling, just insulting me or messing with me or asking the same question over and over, yeah I’ll skip your question. And if I don’t understand what you’re asking, there’s only so much effort I’m going to put in to try to decipher it. Even in those cases, though, I have a certain amount of regret. I have to take a deep breath and tell myself no, I can really skip this one.

On the one hand, this feels like a moral obligation, a kind of intellectual virtue. If knowledge, truth, information are good regardless of anything else, then answering questions is just straightforwardly good. People ought to know more, asking questions is how you learn, and that can’t work unless we’re willing to teach. Even if there’s something you need to keep secret, you can at least say something, if only to explain why you can’t answer. Just leaving a question hanging feels like something bad people do.

On the other hand, I think this might just be a compulsion, a weird quirk of my personality. It may even be more bad than good, an urge that makes me “waste my time”, or makes me too preoccupied with what others say, drafting responses in my head until I find release by writing them down. I think others are much more comfortable just letting a question lie, and moving on. It feels a bit like the urge to have the last word in a conversation, just more specific: if someone asks me to have the last word, I feel like I have to oblige!

I know this has to have its limits. The more famous bloggers get so many questions they can’t possibly respond to all of them. I’ve seen how people like Neil Gaiman describe responding to questions on tumblr, just opening a giant pile of unread messages, picking a few near the top, and then going back to their day. I can barely stand leaving unread messages in my email. If I got that famous, I don’t know how I’d deal with that. But I’d probably figure something out.

Am I too generous with you guys? Should people always answer questions? And does the fact that I ended this post with questions mean I’ll get more comments?