Tag Archives: science communication

At Quanta This Week, and Some Bonus Material

When I moved back to Denmark, I mentioned that I was planning to do more science journalism work. The first fruit of that plan is up this week: I have a piece at Quanta Magazine about a perennially trendy topic in physics, the S-matrix.

It’s been great working with Quanta again. They’ve been thorough, attentive to the science, and patient with my still-uncertain life situation. I’m quite likely to have more pieces there in future, and I’ve got ideas cooking with other outlets as well, so stay tuned!

My piece with Quanta is relatively short, the kind of thing they used to label a “blog” rather than, say, a “feature”. Since the S-matrix is a pretty broad topic, there were a few things I couldn’t cover there, so I thought it would be nice to discuss them here. You can think of this as a kind of “bonus material” section for the piece. So before reading on, read my piece at Quanta first!

Welcome back!

At Quanta I wrote a kind of cartoon of the S-matrix, asking you to think about it as a matrix of probabilities, with rows for input particles and columns for output particles. There are a couple different simplifications I snuck in there, the pop physicist’s “lies to children”. One, I already flag in the piece: the entries aren’t really probabilities, they’re complex numbers, probability amplitudes.

There’s another simplification that I didn’t have space to flag. The rows and columns aren’t just lists of particles, they’re lists of particles in particular states.

What do I mean by states? A state is a complete description of a particle. A particle’s state includes its energy and momentum, including the direction it’s traveling in. It includes its spin, and the direction of its spin: for example, clockwise or counterclockwise? It also includes any charges, from the familiar electric charge to the color of a quark.

This makes the matrix even bigger than you might have thought. I was already describing an infinite matrix, one where you can have as many columns and rows as you can imagine numbers of colliding particles. But the number of rows and columns isn’t just infinite, it’s uncountable: as many rows and columns as there are different numbers you can use for energy and momentum.

For some of you, an uncountably infinite matrix doesn’t sound much like a matrix. But for mathematicians familiar with vector spaces, this is totally reasonable. Even if your matrix is infinite, or even uncountably infinite, it can still be useful to think about it as a matrix.
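To make the matrix picture concrete, here’s a minimal toy model in Python. This is my own illustration, not anything from the piece: a made-up two-state “S-matrix” whose entries are complex amplitudes, small enough to check by hand.

```python
import math

# A toy two-state "S-matrix" (an illustration, not a real theory):
# rows label the incoming state, columns the outgoing state, and each
# entry is a complex probability amplitude, not a probability.
theta = 0.3
c, s = math.cos(theta), math.sin(theta)
S = [[c, 1j * s],
     [1j * s, c]]

# Probabilities are the squared magnitudes of the amplitudes.
probs = [[abs(amp) ** 2 for amp in row] for row in S]

# Because this matrix is unitary, each row of probabilities sums to 1:
# whatever comes in, *something* comes out.
for row in probs:
    assert abs(sum(row) - 1.0) < 1e-12
```

The real S-matrix works the same way, just with (uncountably) many more rows and columns.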

Another subtlety, which I’m sure physicists will be howling at me about: the Higgs boson is not supposed to be in the S-matrix!

In the article, I alluded to the idea that the S-matrix lets you “hide” particles that only exist momentarily inside of a particle collision. The Higgs is precisely that sort of particle, an unstable particle. And normally, the S-matrix is supposed to only describe interactions between stable particles, particles that can survive all the way to infinity.

In my defense, if you want a nice table of probabilities to put in an article, you need an unstable particle: interactions between stable particles depend on their energy and momentum, sometimes in complicated ways, while a single unstable particle will decay into a reliable set of options.

More technically, there are also contexts in which it’s totally fine to think about an S-matrix between unstable particles, even if it’s not usually how we use the idea.

My piece also didn’t have a lot of room to discuss new developments. I thought at minimum I’d say a bit more about the work of the young people I mentioned. You can think of this as an appetizer: there are a lot of people working on different aspects of this subject these days.

Part of the initial inspiration for the piece was when an editor at Quanta noticed a recent paper by Christian Copetti, Lucía Cordova, and Shota Komatsu. The paper shows an interesting case, where one of the “logical” conditions imposed in the original S-matrix bootstrap doesn’t actually apply. It ended up being too technical for the Quanta piece, but I thought I could say a bit about it, and related questions, here.

Some of the conditions imposed by the original bootstrappers seem unavoidable. Quantum mechanics makes no sense if it doesn’t compute probabilities, and probabilities can’t be negative, or larger than one, so we’d better have an S-matrix that obeys those rules. Causality is another big one: we probably shouldn’t have an S-matrix that lets us send messages back in time and change the past.

Other conditions came from a mixture of intuition and observation. Crossing is a big one here. Crossing tells you that you can take an S-matrix entry with in-coming particles, and relate it to a different S-matrix entry with out-going anti-particles, using techniques from the calculus of complex numbers.

Crossing may seem quite obscure, but after some experience with S-matrices it feels obvious and intuitive. That’s why for an expert, results like the paper by Copetti, Cordova, and Komatsu seem so surprising. What they found was that a particularly exotic type of symmetry, called a non-invertible symmetry, was incompatible with crossing symmetry. They could find consistent S-matrices for theories with these strange non-invertible symmetries, but only if they threw out one of the basic assumptions of the bootstrap.

This was weird, but upon reflection not too weird. In theories with non-invertible symmetries, the behaviors of different particles are correlated. One can’t treat far-away particles as separate, the way one usually does with the S-matrix. So trying to “cross” a particle from one side of a process to another changes more than it usually would, and you need a more sophisticated approach to keep track of it. When I talked to Cordova and Komatsu, they related this to another concept called soft theorems, aspects of which have been getting a lot of attention and funding of late.

In the meantime, others have been trying to figure out where the crossing rules come from in the first place.

There were attempts in the 1970s to understand crossing in terms of other fundamental principles. They slowed in part because, as the original S-matrix bootstrap was overtaken by QCD, there was less motivation to do this type of work anymore. But they also ran into a weird puzzle. When they tried to use the rules of crossing more broadly, only some of the things they found looked like S-matrices. Others looked like stranger, meaningless calculations.

A recent paper by Simon Caron-Huot, Mathieu Giroux, Holmfridur Hannesdottir, and Sebastian Mizera revisited these meaningless calculations, and showed that they aren’t so meaningless after all. In particular, some of them match well to the kinds of calculations people wanted to do to predict gravitational waves from colliding black holes.

Imagine a pair of black holes passing close to each other, then scattering away in different directions. Unlike particles in a collider, we have no hope of catching the black holes themselves. They’re big classical objects, and they will continue far away from us. We do catch gravitational waves, emitted from the interaction of the black holes.

This different setup turns out to give the problem a very different character. It ends up meaning that instead of the S-matrix, you want a subtly different mathematical object, one related to the original S-matrix by crossing relations. Using crossing, Caron-Huot, Giroux, Hannesdottir and Mizera found many different quantities one could observe in different situations, linked by the same rules that the original S-matrix bootstrappers used to relate S-matrix entries.

The work of these two groups is just some of the work done in the new S-matrix program, but it’s typical of where the focus is going. People are trying to understand the general rules found in the past. They want to know where they came from, and as a consequence, when they can go wrong. They have a lot to learn from the older papers, and a lot of new insights come from diligent reading. But they also have a lot of new insights to discover, based on the new tools and perspectives of the modern day. For the most part, they don’t expect to find a new unified theory of physics from bootstrapping alone. But by learning how S-matrices work in general, they expect to find valuable knowledge no matter how the future goes.

The Impact of Jim Simons

The obituaries have been weirdly relevant lately.

First, a couple weeks back, Daniel Dennett died. Dennett was someone who could have had a huge impact on my life. Growing up combatively atheist in the early 2000s, Dennett seemed to be exploring every question that mattered: how the semblance of consciousness could come from non-conscious matter, how evolution gives rise to complexity, how to raise a new generation to grow beyond religion and think seriously about the world around them. I went to Tufts to get my bachelor’s degree based on a glowing description he wrote in the acknowledgements of one of his books, and after getting there, I asked him to be my advisor.

(One of three, because the US education system, like all good games, can be min-maxed.)

I then proceeded to be far too intimidated to have a conversation with him more meaningful than “can you please sign my registration form?”

I heard a few good stories about Dennett while I was there, and I saw him debate once. I went into physics for my PhD, not philosophy.

Jim Simons died on May 10. I never spoke to him at all, not even to ask him to sign something. But he had a much bigger impact on my life.

I began my PhD at SUNY Stony Brook with a small scholarship from the Simons Foundation. The university’s Simons Center for Geometry and Physics had just opened, a shining edifice of modern glass next to the concrete blocks of the physics and math departments.

For a student aspiring to theoretical physics, the Simons Center virtually shouted a message. It taught me that physics, and especially theoretical physics, was something prestigious, something special. That if I kept going down that path I could stay in that world of shiny new buildings and daily cookie breaks with the occasional fancy jar-based desserts, of talks by artists and a café with twenty-dollar lunches (half-price once a week for students, the only time we could afford it, and still about twice what we paid elsewhere on campus). There would be garden parties with sushi buffets and late conference dinners with cauliflower steaks and watermelon salads. If I was smart enough (and I longed to be smart enough), that would be my future.

Simons and his foundation clearly wanted to say something along those lines, if not quite as filtered by the stars in a student’s eyes. He thought that theoretical physics, and research more broadly, should be something prestigious. That his favored scholars deserved more, and should demand more.

This did have weird consequences sometimes. One year, the university charged us an extra “academic excellence fee”. The story we heard was that Simons had demanded Stony Brook increase its tuition in order to accept his donations, so that it would charge more similarly to more prestigious places. As a state university, Stony Brook couldn’t do that…but it could add an extra fee. And since PhD students got their tuition, but not fees, paid by the department, we were left with an extra dent in our budgets.

The Simons Foundation created Quanta Magazine. If the Simons Center used food to tell me physics mattered, Quanta delivered the same message to professors through journalism. Suddenly, someone was writing about us, not just copying press releases but with the research and care of an investigative reporter. And they wrote about everything: not just sci-fi stories and cancer cures but abstract mathematics and the space of quantum field theories. Professors who had spent their lives straining to capture the public’s interest were suddenly shown an audience that actually wanted the real story.

In practice, the Simons Foundation made its decisions through the usual experts and grant committees. But the way we thought about it, the decisions always had a Jim Simons flavor. When others in my field applied for funding from the Foundation, they debated what Simons would want: would he support research on predictions for the LHC and LIGO? Or would he favor links to pure mathematics, or hints towards quantum gravity? Simons Collaboration Grants have an enormous impact on theoretical physics, dwarfing many other sources of funding. A grant funds an army of postdocs across the US, shifting the priorities of the field for years at a time.

Denmark has big foundations that have an outsize impact on science. Carlsberg, Villum, and the bigger-than-Denmark’s-GDP Novo Nordisk have foundations with a major influence on scientific priorities. But Denmark is a country of six million. It’s much harder to have that influence on a country of three hundred million. Despite that, Simons came surprisingly close.

While we did like to think of the Foundation’s priorities as Simons’, I suspect that it will continue largely on the same track without him. Quanta Magazine is editorially independent, and clearly puts its trust in the journalists that made it what it is today.

I didn’t know Simons, I don’t think I even ever smelled one of his famous cigars. Usually, that would be enough to keep me from writing a post like this. But, through the Foundation, and now through Quanta, he’s been there with me the last fourteen years. That’s worth a reflection, at the very least.

If That Measures the Quantum Vacuum, Anything Does

Sabine Hossenfelder has gradually transitioned from critical written content about physics to YouTube videos, mostly short science news clips with the occasional longer piece. Luckily for us in the unable-to-listen-to-podcasts demographic, the transcripts of these videos are occasionally published on her organization’s Substack.

Unluckily, it feels like the short news format is leading to some lazy metaphors. There are stories science journalists sometimes tell because they’re easy and familiar, even if they don’t really make sense. Scientists often tell them too, for the same reason. But the more careful voices avoid them.

Hossenfelder has been that careful before, but one of her recent pieces falls short. The piece is titled “This Experiment Will Measure Nothing, But Very Precisely”.

The “nothing” in the title is the oft-mythologized quantum vacuum. The story goes that in quantum theory, empty space isn’t really empty. It’s full of “virtual” particles that pop in and out of existence, jostling things around.

This…is not a good way to think about it. Really, it’s not. If you want to understand what’s going on physically, it’s best to think about measurements, and measurements involve particles: you can’t measure anything in pure empty space, because you don’t have anything to measure with. Instead, every story you can tell about the “quantum vacuum” and virtual particles, you can tell about interactions between particles that actually exist.

(That post I link above, by the way, was partially inspired by a more careful post by Hossenfelder. She does know this stuff. She just doesn’t always use it.)

Let me tell the story Hossenfelder’s piece is telling, in a less silly way:

In the earliest physics classes, you learn that light does not affect other light. Shine two flashlight beams across each other, and they’ll pass right through. You can trace the rays of each source, independently, keeping track of how they travel and bounce around the room.

In quantum theory, that’s not quite true. Light can interact with light, through subtle quantum effects. This effect is tiny, so tiny it hasn’t been measured before. But with ingenious tricks involving tuning three different lasers in exactly the right way, a team of physicists in Dresden has figured out how it could be done.

And see, that’s already cool, right? It’s cool when people figure out how to see things that have never been seen before, full stop.

But the way Hossenfelder presents it, the cool thing about this is that they are “measuring nothing”. That they’re measuring “the quantum vacuum”, really precisely.

And I mean, you can say that, I guess. But it’s equally true of every subtle quantum effect.

In classical physics, electrons should have a very specific behavior in a magnetic field, called their magnetic moment. Quantum theory changes this: electrons have a slightly different magnetic moment, an anomalous magnetic moment. And people have measured this subtle effect: it’s famously the most precisely confirmed prediction in all of science.
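To put a rough number on that precision, the leading quantum correction, first computed by Schwinger in 1948, is the fine-structure constant divided by 2π. You can check it in a couple of lines of Python (a back-of-envelope sketch, using an approximate value of the constant):

```python
import math

# Schwinger's 1948 leading-order result: the anomalous magnetic moment
# a = (g - 2)/2 of the electron gets a quantum correction of
# alpha / (2 * pi), where alpha is the fine-structure constant.
alpha = 1 / 137.035999  # fine-structure constant (approximate)
a_leading = alpha / (2 * math.pi)

# The measured value is about 0.0011597; higher-order corrections
# close the remaining gap.
print(round(a_leading, 6))
```

One term already agrees with experiment to about a tenth of a percent; the full calculation, with many more terms, does spectacularly better.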

That effect can equally well be described as an effect of the quantum vacuum. You can draw the same pictures, if you really want to, with virtual particles popping in and out of the vacuum. One effect (light bouncing off light) doesn’t exist at all in classical physics, while the other (electrons moving in a magnetic field) exists, but is subtly different. But both, in exactly the same sense, are “measurements of nothing”.

So if you really want to stick to the idea that, whenever you measure any subtle quantum effect, you measure “the quantum vacuum”…then we’re already doing that, all the time. Using it to popularize some stuff (say, this experiment) and not other stuff (the LHC is also measuring the quantum vacuum) is just inconsistent.

Better, in my view, to skip the silly talk about nothing. Talk about what we actually measure. It’s cool enough that way.

Cause and Effect and Stories

You can think of cause and effect as the ultimate story. The world is filled with one damn thing happening after another, but to make sense of it we organize it into a narrative: this happened first, and it caused that, which caused that. We tie this to “what if” stories, stories about things that didn’t happen: if this hadn’t happened, then it wouldn’t have caused that, so that wouldn’t have happened.

We also tell stories about cause and effect. Physicists use cause and effect as a tool, a criterion to make sense of new theories: does this theory respect cause and effect, or not? And just like everything else in science, there is more than one story they tell about it.

As a physicist, how would you think about cause and effect?

The simplest and most obvious requirement is that effects should follow their causes. Cause and effect shouldn’t go backwards in time: the cause should come before the effect.

This all sounds sensible, until you remember that in physics “before” and “after” are relative. If you try to describe the order of two distant events, your description will be different from that of someone moving with a different velocity. You might think two things happened at the same time, while they think one happened first, and someone else thinks the other happened first.

You’d think this makes a total mess of cause and effect, but actually everything remains fine, as long as nothing goes faster than the speed of light. If someone can travel between two events at less than the speed of light, then everybody will agree on their order, and so everyone can agree on which one caused the other. Cause and effect only get screwed up if they can happen faster than light.

(If the two events are two different times you observed something, then cause and effect will always be fine, since you yourself can’t go faster than the speed of light. So nobody will contradict what you observe, they just might interpret it differently.)
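Here’s a quick numerical check of that claim, using the standard Lorentz transformation in units where the speed of light is 1 (the event coordinates are made up for illustration):

```python
import math

def boosted_dt(dt, dx, v):
    """Time separation of two events as seen by an observer moving at
    speed v along x (units with c = 1): the Lorentz transformation."""
    gamma = 1 / math.sqrt(1 - v ** 2)
    return gamma * (dt - v * dx)

# Spacelike separation (dx > dt): the time order can flip.
print(boosted_dt(1.0, 2.0, 0.0))   # positive: event B after event A
print(boosted_dt(1.0, 2.0, 0.9))   # negative: order reversed!

# Timelike separation (dx < dt): something slower than light could
# travel between the events, and every observer agrees on the order.
print(boosted_dt(1.0, 0.5, 0.9))   # still positive
print(boosted_dt(1.0, 0.5, -0.9))  # still positive
```

The sign flip in the spacelike case is exactly the ambiguity described above; the timelike case is where cause and effect stay safe.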

So if you want to make sure that your theory respects cause and effect, you’d better be sure that nothing goes faster than light. It turns out, this is not automatic! In general relativity, an effect called Shapiro time delay makes light take longer to pass a heavy object than to go through empty space. If you modify general relativity, you can accidentally get a theory with a Shapiro time advance, where light arrives sooner than it would through empty space. In such a theory, at least some observers will see effects happen before their causes!
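For the record, the standard leading-order expression for the Shapiro delay of a signal passing a mass M with impact parameter b, traveling between points at distances r₁ and r₂ from the mass, has the textbook form (up to convention-dependent constant terms):

```latex
\Delta t \approx \frac{2GM}{c^{3}}\,\ln\!\left(\frac{4\,r_{1} r_{2}}{b^{2}}\right)
```

The sign out front is the point: in general relativity the correction is a delay, and a modified theory that flips it to an advance is the kind that gets causality in trouble.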

Once you know how to check this, as a physicist, there are two kinds of stories you can tell. I’ve heard different people in the field tell both.

First, you can say that cause and effect should be a basic physical principle. Using this principle, you can derive other restrictions, demands on what properties matter and energy can have. You can carve away theories that violate these rules, making sure that we’re testing for theories that actually make sense.

On the other hand, there are a lot of stories about time travel. Time travel screws up cause and effect in a very direct way. When Harry Potter and Hermione travel back in time at the end of Harry Potter and the Prisoner of Azkaban, they cause the event that saves Harry’s life earlier in the book. Science fiction and fantasy are full of stories like this, and many of them are perfectly consistent. How can we be so sure that we don’t live in such a world?

The other type of story positions the physics of cause and effect as a search for evidence. We’re looking for physics that violates cause and effect, because if it exists, then on some small level it should be possible to travel back in time. By writing down the consequences of cause and effect, we get to describe what evidence we’d need to see it breaking down, and if we see it, whole new possibilities open up.

These are both good stories! And like all other stories in science, they only capture part of what the scientists are up to. Some people stick to one or the other, some go between them, driven by the actual research, not the story itself. Like cause and effect itself, the story is just one way to describe the world around us.

Stories Backwards and Forwards

You can always start with “once upon a time”…

I come up with tricks to make calculations in particle physics easier. That’s my one-sentence story, or my most common one. If I want to tell a longer story, I have more options.

Here’s one longer story:

I want to figure out what Nature is telling us. I want to take all the data we have access to that has anything to say about fundamental physics, every collider and gravitational wave telescope and ripple in the overall structure of the universe, and squeeze it as hard as I can until something comes out. I want to make sure we understand the implications of our current best theories as well as we can, to as high precision as we can, because I want to know whether they match what we see.

To do that, I am starting with a type of calculation I know how to do best. That’s both because I can make progress with it, and because it will be important for making these inferences, for testing our theories. I am following a hint in a theory that definitely does not describe the real world, one that is both simpler to work with and surprisingly complex, one that has a good track record, both for me and others, for advancing these calculations. And at the end of the day, I’ll make our ability to infer things from Nature that much better.

Here’s another:

Physicists, unknowing, proposed a kind of toy model, one often simpler to work with but not necessarily simpler to describe. Using this model, they pursued increasingly elaborate calculations, and time and time again, those calculations surprised them. The results were not random, not a disorderly mess of everything they could plausibly have gotten. Instead, they had structure, symmetries and patterns and mathematical properties that the physicists can’t seem to explain. If we can explain them, we will advance our knowledge of models and theories and ideas, geometry and combinatorics, learning more about the unexpected consequences of the rules we invent.

We can also help the physicists advance physics, of course. That’s a happy accident, but one that justifies the money and time, showing the rest of the world that understanding consequences of rules is still important and valuable.

These seem like very different stories, but they’re not so different. They change in order, physics then math or math then physics, backwards and forwards. By doing that, they change in emphasis, in where they’re putting glory and how they’re catching your attention. But at the end of the day, I’m investigating mathematical mysteries, and I’m advancing our ability to do precision physics.

(Maybe you think that my motivation must lie with one of these stories and not the other. One is “what I’m really doing”, the other is a lie made up for grant agencies.
Increasingly, I don’t think people work like that. If we are at heart stories, we’re retroactive stories. Our motivation day to day doesn’t follow one neat story or another. We move forward, we maybe have deep values underneath, but our accounts of “why” can and will change depending on context. We’re human, and thus as messy as that word entails.)

I can tell more than two stories if I want to. I won’t here. But this is largely what I’m working on at the moment. In applying for grants, I need to get the details right, to sprinkle the right references and the right scientific arguments, but the broad story is equally important. I keep shuffling that story, a pile of not-quite-literal index cards, finding different orders and seeing how they sound, imagining my audience and thinking about what stories would work for them.

Small Shifts for Specificity

Cosmologists are annoyed at a recent spate of news articles claiming the universe is 26.7 billion years old (rather than 13.8 billion as based on the current best measurements). To some of the science-reading public, the news sounds like a confirmation of hints they’d already heard: about an ancient “Methuselah” star that seemed to be older than the universe (later estimates put it younger), and recent observations from the James Webb Space Telescope of early galaxies that look older than they ought.

The news doesn’t come from a telescope, though, or a new observation of the sky. Instead, it comes from this press release from the University of Ottawa: “Reinventing cosmology: uOttawa research puts age of universe at 26.7 — not 13.7 — billion years”.

(If you look, you’ll find many websites copying this press release almost word-for-word. This is pretty common in science news, where some websites simply aggregate press releases and others base most of their science news on them rather than paying enough for actual journalism.)

The press release, in turn, is talking about a theory, not an observation. The theorist, Rajendra Gupta, was motivated by examples like the early galaxies observed by JWST and the Methuselah star. Since the 13.8-billion-year age of the universe is based on a mathematical model, he tried to find a different mathematical model that led to an older universe. Eventually, by hypothesizing what seems like every unproven physics effect he could think of, he found one that gives a different estimate, 26.7 billion. He probably wasn’t the first person to do this: coming up with different models to explain odd observations is a standard thing cosmologists do all the time. Until one of those models explains a wider range of observations (our best theories explain a lot, so they’re hard to replace), it’s treated as speculation, not newsworthy science.

This is a pretty clear case of hype, and as such most of the discussion has been about what went wrong. Should we blame the theorist? The university? The journalists? Elon Musk?

Rather than blame, I think it’s more productive to offer advice. And in this situation, the person I think could use some advice is the person who wrote the press release.

So suppose you work for a university, writing their press releases. One day, you hear that one of your professors has done something very cool, something worthy of a press release: they’ve found a new estimate for the age of the universe. What do you do?

One thing you absolutely shouldn’t do is question the science. That just isn’t your job, and even if it were, you don’t have the expertise to do it. Anyone who’s hoping that you will only write articles about good science and not bad science is being unrealistic; that’s just not an option.

If you can’t be more accurate, though, you can still be more precise. You can write your article, and in particular your headline, so that you express what you do know as clearly and specifically as possible.

(I’m assuming here you write your own headlines. This is not normal in journalism, where most headlines are written by an editor, not by the writer of a piece. But university press offices are small enough that I’m assuming, perhaps incorrectly, that you can choose how to title your piece.)

Let’s take a look at the title, “Reinventing cosmology: uOttawa research puts age of universe at 26.7 — not 13.7 — billion years”, and see if we can make some small changes to improve it.

One very general word in that title is “research”. Lots of people do research: astronomers do research when they collect observations, theorists do research when they make new models. If you say “research”, some people will think you’re reporting a new observation, a new measurement that gives a radically different age for the universe.

But you know that’s not true, it’s not what the scientist you’re talking to is telling you. So to avoid the misunderstanding, you can get a bit more specific, and replace the word “research” with a more precise one: “Reinventing cosmology: uOttawa theory puts age of universe at 26.7 — not 13.7 — billion years”.

“Theory” is just as familiar a word as “research”. You won’t lose clicks, you won’t confuse people. But now, you’ve closed off a big potential misunderstanding. By a small shift, you’ve gotten a lot clearer. And you didn’t need to question the science to do it!

You can do more small shifts, if you understand a bit more of the science. “Puts” is kind of ambiguous: a theory could put an age somewhere because it computes it from first principles, or because it dialed some parameter to get there. Here, the theory was intentionally chosen to give an older universe, so the title should hint at this in some way. Instead of “puts”, then, you can use “allows”: “Reinventing cosmology: uOttawa theory allows age of universe to be 26.7 — not 13.7 — billion years”.

These kinds of little tricks can be very helpful. If you’re trying to avoid being misunderstood, then it’s good to be as specific as you can, given what you understand. If you do it carefully, you don’t have to question your scientists’ ideas or downplay their contributions. You can do your job, promote your scientists, and still contribute to responsible journalism.

Solutions and Solutions

The best misunderstandings are detective stories. You can notice when someone is confused, but digging up why can take some work. If you manage, though, you learn much more than just how to correct the misunderstanding. You learn something about the words you use, and the assumptions you make when using them.

Recently, someone was telling me about a book they’d read on Karl Schwarzschild. Schwarzschild is famous for discovering the equations that describe black holes, based on Einstein’s theory of gravitation. To make the story more dramatic, he did so only shortly before dying from a disease he caught fighting in the First World War. But this person had the impression that Schwarzschild had done even more. According to this person, the book said that Schwarzschild had done something to prove Einstein’s theory, or to complete it.

Another Schwarzschild accomplishment: that mustache

At first, I thought the book this person had read was wrong. But after some investigation, I figured out what happened.

The book said that Schwarzschild had found the first exact solution to Einstein’s equations. That’s true, and as a physicist I know precisely what it means. But I now realize that the average person does not.

In school, the first equations you solve are algebraic, like x+y=z. Some equations, like x^2=4, have solutions. Others, like x^2=-4, seem not to, until you learn about new types of numbers that solve them. Either way, you get used to equations being like a kind of puzzle, a question for which you need to find an answer.
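If you want to see those “new types of numbers” in action, here’s a quick sketch (my own illustration, not anything from the book) using Python’s built-in complex-number support:

```python
import cmath

# x^2 = 4: solvable with ordinary real numbers
root = 4 ** 0.5
print(root, -root)        # both 2.0 and -2.0 satisfy x^2 = 4

# x^2 = -4: no real solution, but complex numbers provide one
croot = cmath.sqrt(-4)
print(croot)              # an imaginary number whose square is -4
```

The “new type of number” here is the imaginary unit, which Python writes as `j`: squaring `2j` gives back -4, so the puzzle has an answer after all, just not among the numbers you started with.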

If you’re thinking of equations like that, then it probably sounds like Schwarzschild “solved the puzzle”. If Schwarzschild found the first solution to Einstein’s equations, that means that Einstein did not. That makes it sound like Einstein’s work was incomplete: he had asked the right question, but didn’t yet know the right answer.

Einstein’s equations aren’t algebraic equations, though. They’re differential equations. Instead of equations for a variable, they’re equations for a mathematical function, a formula that, in this case, describes the curvature of space and time.

Scientists in many fields use differential equations, but they use them in different ways. If you’re a chemist or a biologist, it might be that you’re most used to differential equations with simple solutions, like sines, cosines, or exponentials. You learn how to solve these equations, and they feel a bit like the algebraic ones: you have a puzzle, and then you solve the puzzle.

Other fields, though, have tougher differential equations. If you’re a physicist or an engineer, you’ve likely met differential equations that you can’t treat in this way. If you’re dealing with fluid mechanics, or general relativity, or even just Newtonian gravity in an odd situation, you can’t usually solve the problem by writing down known functions like sines and cosines.

That doesn’t mean you can’t solve the problem at all, though!

Even if you can’t write down a solution to a differential equation with sines and cosines, a solution can still exist. (In some cases, we can even prove a solution exists!) It just won’t be written in terms of sines and cosines, or other functions you’ve learned in school. Instead, the solution will involve some strange functions, functions no-one has heard of before.

If you want, you can make up names for those functions. But unless you’re going to classify them in a useful way, there’s not much point. Instead, you work with these functions by approximation. You calculate them in a way that doesn’t give you the full answer, but that does let you estimate how close you are. That’s good enough to give you numbers, which in turn is good enough to compare to experiments. With just such an approximate solution, Einstein could check whether his equations described the orbit of Mercury.
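Einstein did his approximations by hand, perturbatively, not on a computer. But as a toy illustration of the idea, here’s a sketch of the simplest numerical approximation scheme, Euler’s method, applied to a differential equation whose exact solution we happen to know. The answer isn’t exact, but you can see how close it is, and get closer by working harder:

```python
import math

def euler(f, y0, t_end, n):
    """Approximate y(t_end) for dy/dt = f(t, y), y(0) = y0, using n Euler steps."""
    t, y = 0.0, y0
    h = t_end / n
    for _ in range(n):
        y += h * f(t, y)   # step forward along the slope the equation dictates
        t += h
    return y

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t),
# so here we can check the approximation against the real answer.
exact = math.exp(-1.0)
for n in (10, 100, 1000):
    approx = euler(lambda t, y: -y, 1.0, 1.0, n)
    print(n, approx, abs(approx - exact))   # the error shrinks as n grows
```

For Einstein’s equations there’s no `exp(-t)` to compare against, but the logic is the same: you can’t write the answer down, yet you can squeeze the approximation as tight as your patience (or your telescope’s precision) demands.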

Once you know you can find these approximate solutions, you have a different perspective on equations. An equation isn’t just a mysterious puzzle. If you can approximate the solution, then you already know how to solve that puzzle. So we wouldn’t think of Einstein’s theory as incomplete because he was only able to find approximate solutions: for a theory as complicated as Einstein’s, that’s perfectly normal. Most of the time, that’s all we need.

But it’s still pretty cool when you don’t have to do this. Sometimes, we can not just approximate, but actually “write down” the solution, either using known functions or well-classified new ones. We call a solution like that an analytic solution, or an exact solution.

That’s what Schwarzschild managed. These kinds of exact solutions often only work in special situations, and Schwarzschild’s is no exception. His solution works for matter in a special arrangement: a perfect sphere. If matter happened to be arranged in that way, then the shape of space and time would be exactly as Schwarzschild described it.
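For the curious, and in modern notation Schwarzschild himself didn’t use: the exact solution fits on one line. Here M is the mass of the sphere, G is Newton’s constant, c is the speed of light, and r, θ, φ are spherical coordinates:

```latex
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right)c^2\,dt^2
     + \left(1 - \frac{2GM}{rc^2}\right)^{-1}dr^2
     + r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right)
```

The radius r = 2GM/c^2, where the first factor vanishes, is where the strange behavior shows up: today we call it the event horizon.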

That’s actually pretty cool! Einstein’s equations are complicated enough that no-one was sure there were any solutions like that, even in very special situations. Einstein himself expected it would be a long time before anyone could find anything but approximate solutions.

(If Schwarzschild’s solution only describes matter arranged in a perfect sphere, why do we think it describes real black holes? This took later work, by people like Roger Penrose, who figured out that matter compressed far enough will always end up described by a solution like Schwarzschild’s.)

Schwarzschild intended to describe stars with his solution, or at least a kind of imaginary perfect star. What he found was indeed a good approximation to real stars, but also the possibility that a star shoved into a sufficiently small space would become something weird and new, something we would come to describe as a black hole. That’s a pretty impressive accomplishment, especially for someone on the front lines of World War One. And if you know the difference between an exact solution and an approximate one, you have some idea of what kind of accomplishment that is.

Traveling This Week

I’m traveling this week, so this will just be a short post. This isn’t a scientific trip exactly: I’m in Poland, at an event connected to the 550th anniversary of the birth of Copernicus.

Not this one, but they do have nice posters!

Part of this event involved visiting the Copernicus Science Center, the local children’s science museum. The place was sold out completely. For any tired science communicators, I recommend going to a sold-out science museum: the sheer enthusiasm you’ll find there is balm for the most jaded soul.

Whatever Happened to the Nonsense Merchants?

I was recently reminded that Michio Kaku exists.

In the past, Michio Kaku made important contributions to string theory, but he’s best known for what could charitably be called science popularization. He’s an excited promoter of physics and technology, but that excitement often strays into inaccuracy. Pretty much every time I’ve heard him mentioned, it’s for some wildly overenthusiastic statement about physics: not just simplified for a general audience, but flat-out wrong, conflating a bunch of different developments in a way that makes no actual sense.

Michio Kaku isn’t unique in this. There’s a whole industry in making nonsense statements about science, overenthusiastic books and videos hinting at science fiction or mysticism. Deepak Chopra is a famous figure from deeper on this spectrum, known for peddling loosely quantum-flavored spirituality.

There was a time I was worried about this kind of thing. Super-popular misinformation is the bogeyman of the science popularizer, the worry that for every nice, careful explanation we give, someone else will give a hundred explanations that are way more exciting and total baloney. Somehow, though, I hear less and less from these people over time, and thus worry less and less about them.

Should I be worried more? I’m not sure.

Are these people less popular than they used to be? Is that why I’m hearing less about them? Possibly, but I’d guess not. Michio Kaku has eight hundred thousand Twitter followers. Deepak Chopra has three million. On the other hand, the usually-careful Brian Greene has a million followers, and Neil deGrasse Tyson, about whom the worst I’ve heard is that he can be superficial, has fourteen million.

(But then in practice, I’m more likely to reflect on content with even smaller audiences.)

If misinformation is this popular, shouldn’t I be doing more to combat it?

Popular misinformation is also going to be popular among critics. For every big-time nonsense merchant, there are dozens of people breaking down and debunking every false statement they make, every piece of hype they release. Often, these critics end up saying the same kinds of things over and over again.

If I can be useful, I don’t think it will be by saying the same thing over and over again. I come up with new metaphors, new descriptions, new explanations. I clarify things others haven’t clarified, I clear up misinformation others haven’t addressed. That feels more useful to me, especially in a world where others are already countering the big problems. I write, and writing lasts, and can be used again and again when needed. I don’t need to keep up with the Kakus and Chopras of the world to do that.

(Which doesn’t imply I’ll never address anything one of those people says…but if I do, it will be because I have something new to say back!)

Talking and Teaching

Someone recently shared with me an article written by David Mermin in 1992 about physics talks. Some aspects are dated: our slides are no longer sheets of plastic, and I don’t think anyone writing an article like that today would feel the need to put it in the mouth of a fictional professor (which is honestly a shame). But most of it still holds true. I particularly recognized the self-doubt of being a young physicist sitting in a talk and thinking “I’m supposed to enjoy this?”

Mermin’s basic point is to keep things as light as possible. You want to convey motivation more than content, and background more than your own contributions. Slides should be sparse, both because people won’t be able to see everything and because people can get frustrated “reading ahead” of what you say.

Mermin’s suggestion that people read from a prepared text was probably good advice for him, but maybe not for others. It works if you can write like he does, but I don’t think most people’s writing is that much better than what they say in talks (you can judge this by reading people’s papers!). Some are much clearer speaking impromptu. I agree with him that in practice people end up just reading from their slides, which indeed is bad, but reading from a normal physics paper isn’t any better.

I also don’t completely agree with him about the value of speech over text. Yes, putting text on your slides means people can read ahead (unless you hide some of the text, which is easier to do these days than in the days of overhead transparencies). But relying on speech alone means that if someone’s attention lapses for just a moment, they’ll be lost. Unless you repeat yourself a lot (good practice in any case), you shouldn’t only say aloud anything you need your audience to remember: make sure they can also read it somewhere if they need it.

That said, “if they need it” is doing a lot of work here, and this is where I agree again with Mermin. Fundamentally, you don’t need to convey everything you think you do. (I don’t usually need to convey everything I think I do!) It’s a lesson I’ve been learning this year from pedagogy courses, a message they try to instill in everyone who teaches at the university. If you want to really convey something well, then you just can’t convey that much. You need to focus, pick a few things and try to get them across, and structure the rest of what you say to reinforce those things. When teaching, or when speaking, less is more.