
Freelancing in [Country That Includes Greenland]

(Why mention Greenland? It’s a movie reference.)

I figured I’d give an update on my personal life.

A year ago, I resigned from my position in France and moved back to Denmark. I had planned to spend a few months as a visiting researcher in my old haunts at the Niels Bohr Institute, courtesy of the spare funding of a generous friend. There turned out to be more funding than expected, and what was planned as just a few months was extended to almost a year.

I spent that year learning something new. It was still an amplitudes project, trying to make particle physics predictions more efficient. But this time I used Python. I looked into reinforcement learning and PyTorch, played with using a locally hosted Large Language Model to generate random code, and ended up getting good results from a classic genetic programming approach. Along the way I set up a SQL database, configured Docker containers, and puzzled out interactions with CUDA. I’ve got a paper in the works; I’ll post about it when it’s out.

All the while, on the side, I’ve been seeking out stories. I’ve not just been a writer, but a journalist, tracking down leads and interviewing experts. I had three pieces in Quanta Magazine and one in Ars Technica.

Based on that, I know I can make money doing science journalism. What I don’t know yet is whether I can make a living doing it. This year, I’ll figure that out. With the project at the Niels Bohr Institute over, I’ll have more time to seek out leads and pitch to more outlets. I’ll see whether I can turn a skill into a career.

So if you’re a scientist with a story to tell, if you’ve discovered something or accomplished something or just know something that the public doesn’t, and that you want to share: do reach out. There’s a lot out there that can interest the public, and a lot of passion that can be shared.

At the same time, I don’t know yet whether I can make a living as a freelancer. Many people try and don’t succeed. So I’m keeping my CV polished and my eyes open. I have more experience now with data science tools, and I’ve got a few side projects cooking that should give me a bit more experience still. I have a few directions in mind, but ultimately, I’m flexible. I like being part of a team, and with enthusiastic and competent colleagues I can get excited about pretty much anything. So if you’re hiring in Copenhagen, if you’re open to someone with ten years of STEM experience who’s just starting to see what industry has to offer, then let’s chat. Even if we’re not a good fit, I bet you’ve got a good story to tell.

At Ars Technica Last Week, With a Piece on How Wacky Ideas Become Big Experiments

I had a piece last week at Ars Technica about the path ideas in physics take to become full-fledged experiments.

My original idea for the story was a light-hearted short news piece. A physicist at the University of Kansas, Steven Prohira, had just posted a proposal for wiring up a forest to detect high-energy neutrinos, using the trees like giant antennas.

As I chatted with experts, what at first seemed silly started to feel like a hook for something more. Prohira has a strong track record, and the experts I talked to took his idea seriously. They had significant doubts, but I was struck by how answerable those doubts were, how rather than dismissing the whole enterprise they had in mind a list of questions one could actually test. I wrote a blog post laying out that impression here.

The editor at Ars was interested, so I dug deeper. Prohira’s story became a window on a wider-ranging question: how do experiments happen? How does a scientist convince the community to work on a project, and the government to fund it? How do ideas get tested before these giant experiments get built?

I tracked down researchers from existing experiments and got their stories. They told me how detecting particles from space takes ingenuity, with wacky ideas involving the natural world being surprisingly common. They walked me through tales of prototypes and jury-rigging and feasibility studies and approval processes.

The highlights of those tales ended up in the piece, but there was a lot I couldn’t include. In particular, I had a long chat with Sunil Gupta about the twists and turns taken by the GRAPES experiment in India. Luckily for you, some of the most interesting stories have already been covered, for example their measurement of the voltage of a thunderstorm or repurposing used building materials to keep costs down. I haven’t yet found his story about stirring wavelength-shifting chemicals all night using a propeller mounted on a power drill, but I suspect it’s out there somewhere. If not, maybe it can be the start of a new piece!

Replacing Space-Time With the Space in Your Eyes

Nima Arkani-Hamed thinks space-time is doomed.

That doesn’t mean he thinks it’s about to be destroyed by a supervillain. Rather, Nima, like many physicists, thinks that space and time are just approximations to a deeper reality. In order to make sense of gravity in a quantum world, seemingly fundamental ideas, like that particles move through particular places at particular times, will probably need to become more flexible.

But while most people who think space-time is doomed research quantum gravity, Nima’s path is different. Nima has been studying scattering amplitudes, formulas used by particle physicists to predict how likely particles are to collide in particular ways. He has been trying to find ways to calculate these scattering amplitudes without referring directly to particles traveling through space and time. In the long run, the hope is that knowing how to do these calculations will help suggest new theories beyond particle physics, theories that can’t be described with space and time at all.

Ten years ago, Nima figured out how to do this in a particular theory, one that doesn’t describe the real world. For that theory he was able to find a new picture of how to calculate scattering amplitudes based on a combinatorial, geometric space with no reference to particles traveling through space-time. He gave this space the catchy name “the amplituhedron”. In the years since, he found a few other “hedra” describing different theories.

Now, he’s got a new approach. The new approach doesn’t have the same kind of catchy name: people sometimes call it surfaceology, or curve integral formalism. Like the amplituhedron, it involves concepts from combinatorics and geometry. It isn’t quite as “pure” as the amplituhedron: it uses a bit more from ordinary particle physics, and while it avoids specific paths in space-time it does care about the shape of those paths. Still, it has one big advantage: unlike the amplituhedron, Nima’s new approach looks like it can work for at least a few of the theories that actually describe the real world.

The amplituhedron was mysterious. Instead of space and time, it described the world in terms of a geometric space whose meaning was unclear. Nima’s new approach also describes the world in terms of a geometric space, but this space’s meaning is a lot clearer.

The space is called “kinematic space”. That probably still sounds mysterious. “Kinematic” in physics refers to motion. In the beginning of a physics class, when you study velocity and acceleration before you’ve introduced a single force, you’re studying kinematics. In particle physics, kinematics refers to the motion of the particles you detect. If you see an electron going up and to the right at a tenth the speed of light, those are its kinematics.

Kinematic space, then, is the space of observations. By saying that his approach is based on ideas in kinematic space, what Nima is saying is that it describes colliding particles not based on what they might be doing before they’re detected, but based on mathematics that asks only about observable facts about the particles.

(For the experts: this isn’t quite true, because he still needs a concept of loop momenta. He’s getting the actual integrands from his approach, rather than the dual definition he got from the amplituhedron. But he does still have to integrate one way or another.)

Quantum mechanics famously has many interpretations. In my experience, Nima’s favorite interpretation is the one known as “shut up and calculate”. Instead of arguing about the nature of an indeterminately philosophical “real world”, Nima thinks quantum physics is a tool to calculate things people can observe in experiments, and that’s the part we should care about.

From a practical perspective, I agree with him. And I think if you have this perspective, then ultimately, kinematic space is where your theories have to live. Kinematic space is nothing more or less than the space of observations, the space defined by where things land in your detectors, or if you’re a human and not a collider, in your eyes. If you want to strip away all the speculation about the nature of reality, this is all that is left over. Any theory, of any reality, will have to be described in this way. So if you think reality might need a totally new weird theory, it makes sense to approach things like Nima does, and start with the one thing that will always remain: observations.

At Quanta This Week, With a Piece on Multiple Imputation

I’ve got another piece in Quanta Magazine this week.

While my past articles in Quanta have been about physics, this time I’m stretching my science journalism muscles in a new direction. I was chatting with a friend who works for a pharmaceutical company, and he told me about a statistical technique that sounded ridiculous. Luckily, he’s a patient person, and after annoying him and a statistician family member for a while I understood that the technique actually made sense. Since I love sharing counterintuitive facts, I thought this would be a great story to share with Quanta’s readers. I then tracked down more statisticians, and annoyed them in a more professional way, finally resulting in the Quanta piece.

The technique is called multiple imputation, and is a way to deal with missing data. By filling in (“imputing”) missing information with good enough guesses, you can treat a dataset with missing data as if it were complete. If you do this imputation multiple times with the help of a source of randomness, you can also model how uncertain those guesses are, so your final statistical estimates are as uncertain as they ought to be. That, in a nutshell, is multiple imputation.
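To make that concrete, here’s a minimal sketch in Python. It’s my own toy example, not anything from the Quanta piece, and the regression-based imputation model is just one choice among many: it fills in a partially missing variable several times with random noise, analyzes each completed copy as if nothing were missing, and pools the results with Rubin’s rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x, but 30% of the y values are missing at random.
n = 200
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)
y[rng.random(n) < 0.3] = np.nan
obs = ~np.isnan(y)

# A simple imputation model: regress y on x using the observed cases,
# and note the scatter around that line.
slope, intercept = np.polyfit(x[obs], y[obs], 1)
resid_sd = np.std(y[obs] - (intercept + slope * x[obs]))

m = 20  # number of imputed copies of the dataset
estimates, variances = [], []
for _ in range(m):
    y_imp = y.copy()
    # Fill each missing y with a prediction *plus random noise*; the noise
    # is what lets the m copies reflect how uncertain the guesses are.
    y_imp[~obs] = (intercept + slope * x[~obs]
                   + rng.normal(scale=resid_sd, size=(~obs).sum()))
    # Analyze each completed dataset as if it were complete: here, the
    # estimate is the mean of y, with its usual squared standard error.
    estimates.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / n)

# Pool with Rubin's rules: total variance is the average within-copy
# variance plus (1 + 1/m) times the between-copy variance.
q_bar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
print(f"pooled mean of y: {q_bar:.3f} +/- {np.sqrt(total_var):.3f}")
```

The between-copy spread is the part a single imputation would silently drop: impute only once, and your error bars come out too small.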

In the piece, I try to cover the key points: how the technique came to be, how it spread, and why people use it. To complement that, in this post I wanted to get a little bit closer to the technical details, and say a bit about why some of the workarounds a naive physicist would come up with don’t actually work.

If you’re anything like me, multiple imputation sounds like a very weird way to deal with missing data. In order to fill in missing data, you have to use statistical techniques to find good guesses. Why can’t you just use the same techniques to analyze the data in the first place? And why do you have to use a random number generator to model your uncertainty, instead of just doing propagation of errors?

It turns out, you can sort of do both of these things. Full Information Maximum Likelihood is a method where you use all the data you have, and only the data you have, without imputing anything or throwing anything out. The catch is that you need a model, one with parameters you can try to find the most likely values for. Physicists usually do have a model like this (for example, the Standard Model), so I assumed everyone would. But for many things you want to measure in social science and medicine, you don’t have any such model, so multiple imputation ends up being more versatile in practice.
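For contrast, here’s a minimal sketch of full information maximum likelihood, under a toy modeling assumption I chose purely for illustration (the data are bivariate normal): each case contributes the likelihood of whatever variables it actually has, with nothing imputed and nothing thrown out.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
n = 100
x = rng.normal(1.0, 1.0, size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)
y[rng.random(n) < 0.3] = np.nan  # 30% of y missing at random

def neg_log_lik(params):
    """Negative log-likelihood of a bivariate normal model for (x, y)."""
    mx, my, log_sx, log_sy, atanh_rho = params
    sx, sy, rho = np.exp(log_sx), np.exp(log_sy), np.tanh(atanh_rho)
    cov = [[sx ** 2, rho * sx * sy],
           [rho * sx * sy, sy ** 2]]
    ll = 0.0
    for xi, yi in zip(x, y):
        if np.isnan(yi):
            # y is missing: the case still contributes, via the marginal of x.
            ll += norm.logpdf(xi, loc=mx, scale=sx)
        else:
            ll += multivariate_normal.logpdf([xi, yi], mean=[mx, my], cov=cov)
    return -ll

start = [x.mean(), np.nanmean(y), np.log(x.std()), np.log(np.nanstd(y)), 0.5]
fit = minimize(neg_log_lik, x0=start, method="Nelder-Mead",
               options={"maxiter": 5000})
print(f"estimated means: x = {fit.x[0]:.3f}, y = {fit.x[1]:.3f}")
```

The catch from the paragraph above is sitting right there in the code: everything hinges on `neg_log_lik` describing the right model, which a physicist often has and a social scientist often doesn’t.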

(If you want more detail on this, you need to read something written by actual statisticians. The aforementioned statistician family member has a website here that compares and contrasts multiple imputation with full information maximum likelihood.)

What about the randomness? It turns out there is yet another technique, called Fractional Imputation. While multiple imputation randomly chooses different values to impute, fractional imputation gives each value a weight based on the chance for it to come up. This gives the same result…if you can compute the weights, and store all the results. The impression I’ve gotten is that people are working on this, but it isn’t very well-developed.
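As a cartoon of the difference, with toy numbers I made up: where multiple imputation draws one candidate value at random in each copy of the dataset, fractional imputation keeps every candidate, weighted by its chance of coming up.

```python
import numpy as np

observed = np.array([1.0, 2.0, 3.0])    # the data we actually have
candidates = np.array([1.8, 2.3, 2.9])  # plausible fill-ins for one missing value
weights = np.array([0.2, 0.5, 0.3])     # chance of each candidate coming up

# Fractional imputation: keep all candidates, weighted (no randomness needed).
frac_mean = (observed.sum() + weights @ candidates) / (len(observed) + 1)

# Multiple imputation: draw one candidate per copy; for many copies it agrees.
rng = np.random.default_rng(0)
draws = rng.choice(candidates, size=10_000, p=weights)
mi_mean = np.mean((observed.sum() + draws) / (len(observed) + 1))

print(frac_mean, mi_mean)  # nearly identical
```

The catch mentioned above is visible too: with many missing cells, the number of weighted combinations to compute and store explodes.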

“Just do propagation of errors”, the thing I wanted to suggest as a physicist, is much less of an option. In many of these datasets, you don’t attribute errors to the base data points to begin with. And on the other hand, if you want to be more sophisticated, then something like propagation of errors is too naive. You have a variety of different variables, correlated with each other in different ways, giving a complicated multivariate distribution. Propagation of errors is already pretty fraught when you go beyond linear relationships (something they don’t tend to tell baby physicists), using it for this would be pushing it rather too far.
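For reference, here’s the formula I was tempted to reach for. Standard propagation of errors keeps only the linear term of a Taylor expansion of the quantity you’re computing:

```latex
\sigma_f^2 \approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^2 \sigma_{x_i}^2
  + 2 \sum_{i<j} \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,
    \mathrm{cov}(x_i, x_j)
```

Everything beyond that linear term is simply dropped, which is the source of the fraughtness mentioned above.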

The thing I next wanted to suggest, “just carry the distribution through the calculation”, turns out to relate to something I’ve called the “one philosophical problem of my sub-field”. In the area of physics I’ve worked in, a key question is what it means to have “done” an integral. Here, one can ask what it means to do a calculation on a distribution. In both cases, the end goal is to get numbers out: physics predictions on the one hand, statistical estimates on the other. You can get those numbers by “just” doing numerics, using randomness and approximations to estimate the number you’re interested in. And in a way, that’s all you can do. Any time you “just do the integral” or “just carry around the distribution”, the thing you get in the end is some function: it could be a well-understood function like a sine or log, or it could be an exotic function someone defined for that purpose. But whatever function you get, you get numbers out of it the same way. A sine or a log, on a computer, is just an approximation scheme, a program that outputs numbers.
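To make that last point concrete, here’s a sine function as “just an approximation scheme”: a truncated Taylor series in Python, which is morally (if not literally) what a math library does under the hood.

```python
import math

def my_sin(x, terms=12):
    """Approximate sin(x) by summing its Taylor series: 'just a program
    that outputs numbers', like any special function in practice."""
    x = math.fmod(x, 2 * math.pi)  # reduce the argument so the series converges fast
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Each Taylor term is the previous one times -x^2 / ((2n+2)(2n+3)).
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(my_sin(1.0), math.sin(1.0))  # agree to machine precision
```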

(But we do still care about analytic results, we don’t “just” do numerics. That’s because understanding the analytics helps us do numerics better, we can get more precise numbers faster and more stably. If you’re just carrying around some arbitrarily wiggly distribution, it’s not clear you can do that.)

So at this point, I get it. I’m still curious to see how Fractional Imputation develops, and when I do have an actual model I’d lean toward using Full Information Maximum Likelihood instead. (And there are probably some other caveats I may need to learn at some point!) But I’m comfortable with the idea that Multiple Imputation makes sense for the people using it.

Clickbait or Koan

Last month, I had a post about a type of theory that is, in a certain sense, “immune to gravity”. These theories don’t allow you to build antigravity machines, and they aren’t totally independent of the overall structure of space-time. But they do ignore the core thing most people think of as gravity, the curvature of space that sends planets around the Sun and apples to the ground. And while that trait isn’t something we can use for new technology, it has led to extremely productive conversations between mathematicians and physicists.

After posting, I had some interesting discussions on twitter. A few people felt that I was over-hyping things. Given all the technical caveats, does it really make sense to say that these theories defy gravity? Isn’t a title like “Gravity-Defying Theories” just clickbait?

Obviously, I don’t think so.

There’s a concept in education called inductive teaching. We remember facts better when they come in context, especially the context of us trying to solve a puzzle. If you try to figure something out, and then find an answer, you’re going to remember that answer better than if you were just told the answer from the beginning. There are some similarities here to the concept of a Zen koan: by asking questions like “what is the sound of one hand clapping?” a Zen master is supposed to get you to think about the world in a different way.

When I post with a counterintuitive title, I’m aiming for that kind of effect. I know that you’ll read the title and think “that can’t be right!” Then you’ll read the post, and hear the explanation. That explanation will stick with you better because you asked that question, because “how can that be right?” is the solution to a puzzle that, in that span of words, you cared about.

Clickbait is bad for two reasons. First, it sucks you into reading things that aren’t actually interesting. I write my blog posts because I think they’re interesting, so I hope I avoid that. Second, it can spread misunderstandings. I try to be careful about these, and I have some tips for how you can be too:

  1. Correct the misunderstanding early. If I’m worried a post might be misunderstood in a clickbaity way, I make sure that every time I post the link I include a sentence discouraging the misunderstanding. For example, for the post on Gravity-Defying Theories, before the link I wrote “No flying cars, but it is technically possible for something to be immune to gravity”. If I’m especially worried, I’ll also make sure that the first paragraph of the piece corrects the misunderstanding as well.
  2. Know your audience. This means both knowing the normal people who read your work, and how far something might go if it catches on. Your typical readers might be savvy enough to skip the misunderstanding, but if they latch on to the naive explanation immediately then the “koan” effect won’t happen. The wider your reach can be, the more careful you need to be about what you say. If you’re a well-regarded science news outlet, don’t write a title saying that scientists have built a wormhole.
  3. Have enough of a conclusion to be “worth it”. This is obviously a bit subjective. If your post introduces a mystery and the answer is that you just made some poetic word choice, your audience is going to feel betrayed, like the puzzle they were considering didn’t have a puzzly answer after all. Whatever you’re teaching in your post, it needs to have enough “meat” that solving it feels like a real discovery, like the reader did some real work to solve it.

I don’t think I always live up to these, but I do try. And I think trying is better than the conservative option, of never having catchy titles that make counterintuitive claims. One of the most fun aspects of science is that sometimes a counterintuitive fact is actually true, and that’s an experience I want to share.

Does Science Require Publication?

Seen on Twitter:

[embedded tweet: Yann LeCun arguing that if you do research and don’t publish it, it isn’t science]

As is traditional, twitter erupted into dumb arguments over this. Some made fun of Yann LeCun for implying that Elon Musk will be forgotten, which despite any other faults of his seems unlikely. Science popularizer Sabine Hossenfelder pointed out that there are two senses of “publish” getting confused here: publish as in “make public” and publish as in “put in a scientific journal”. The latter tends to be necessary for scientists in practice, but is not required in principle. (The way journals work has changed a lot over just the last century!) The former, Sabine argued, is still 100% necessary.

Plenty of people on twitter still disagreed (this always happens). It got me thinking a bit about the role of publication in science.

When we talk about what science requires or doesn’t require, what are we actually talking about?

“Science” is a word, and like any word its meaning is determined by how it is used. Scientists use the word “science”, of course, as do schools and governments and journalists. But if we’re getting into arguments about what does or does not count as science, then we’re wading into a philosophical problem, the one philosophers of science work on when they try to pin down where the boundary lies.

What do philosophers of science want? Many things, but a big one is to explain why science works so well. Over a few centuries, humanity went from understanding the world in terms of familiar materials and living creatures to decomposing them into molecules and atoms and cells and proteins. In doing this, we radically changed what we were capable of: computers far beyond the reach of any blacksmith, cures for diseases that earlier ages couldn’t even tell apart. And while other human endeavors have seen some progress over this time (democracy, human rights…), science’s accomplishment demands an explanation.

Part of that explanation, I think, has to include making results public. Alchemists were interested in many of the things later chemists were, and had started to get some valuable insights. But alchemists were fearful of what their knowledge would bring (especially the ones who actually thought they could turn lead into gold). They published almost exclusively in code. As such, the pieces of progress they made didn’t build up, didn’t aggregate, didn’t become overall progress. It was only when a new scientific culture emerged, when natural philosophers and physicists and chemists started writing to each other as clearly as they could, that knowledge began to build on itself.

Some on twitter pointed out the example of the Manhattan project during World War II. A group of scientists got together and made progress on something almost entirely in secret. Does that not count as science?

I’m willing to bite this bullet: I don’t think it does! When the Soviets tried to replicate the bomb, they mostly had to start from scratch, aside from some smuggled atomic secrets. Today, nations trying to build their own bombs know more, but they still must reinvent most of it. We may think this is a good thing, we may not want more countries to make progress in this way. But I don’t think we can deny that it genuinely does slow progress!

At the same time, to contradict myself a bit: I think you can think of science that happens within a particular community. The scientists of the Manhattan project didn’t publish in journals the Soviets could read. But they did write internal reports, they did publish to each other. I don’t think science by its nature has to include the whole of humanity (if it does, then perhaps studying the inside of black holes really is unscientific). You probably can do science sticking to just your own little world. But it will be slower. Better, for progress’s sake, if you can include people from across the world.

At Quanta This Week, and Some Bonus Material

When I moved back to Denmark, I mentioned that I was planning to do more science journalism work. The first fruit of that plan is up this week: I have a piece at Quanta Magazine about a perennially trendy topic in physics, the S-matrix.

It’s been great working with Quanta again. They’ve been thorough, attentive to the science, and patient with my still-uncertain life situation. I’m quite likely to have more pieces there in future, and I’ve got ideas cooking with other outlets as well, so stay tuned!

My piece with Quanta is relatively short, the kind of thing they used to label a “blog” rather than say a “feature”. Since the S-matrix is a pretty broad topic, there were a few things I couldn’t cover there, so I thought it would be nice to discuss them here. You can think of this as a kind of “bonus material” section for the piece. So before reading on, read my piece at Quanta first!

Welcome back!

At Quanta I wrote a kind of cartoon of the S-matrix, asking you to think about it as a matrix of probabilities, with rows for input particles and columns for output particles. There are a couple different simplifications I snuck in there, the pop physicist’s “lies to children”. One, I already flag in the piece: the entries aren’t really probabilities, they’re complex numbers, probability amplitudes.
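In symbols, that first correction reads: each entry of the S-matrix is a complex amplitude, and the probability is its absolute square,

```latex
S_{fi} = \langle f | \hat{S} | i \rangle \in \mathbb{C},
\qquad
P(i \to f) = \left| S_{fi} \right|^2
```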

There’s another simplification that I didn’t have space to flag. The rows and columns aren’t just lists of particles, they’re lists of particles in particular states.

What do I mean by states? A state is a complete description of a particle. A particle’s state includes its energy and momentum, including the direction it’s traveling in. It includes its spin, and the direction of its spin: for example, clockwise or counterclockwise? It also includes any charges, from the familiar electric charge to the color of a quark.
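If it helps to see this spelled out, here’s a toy Python structure for the kind of label I mean. This is purely my own cartoon, not anything from the piece: a real state is a vector in a Hilbert space, and this is only the classical tag that names one row or column.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ParticleState:
    """A drastically simplified label for one particle's state."""
    species: str     # e.g. "electron" or "up quark"
    momentum: tuple  # three-momentum (p_x, p_y, p_z), a *continuous* label
    spin_z: float    # spin along a chosen axis, e.g. +0.5 or -0.5
    charge: int      # electric charge, in units of e
    color: Optional[str] = None  # color charge, for quarks and gluons

# A row or column of the S-matrix is labeled by a *list* of such states:
in_state = (ParticleState("electron", (0.0, 0.0, 0.9), +0.5, -1),
            ParticleState("positron", (0.0, 0.0, -0.9), -0.5, +1))
```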

This makes the matrix even bigger than you might have thought. I was already describing an infinite matrix, one where you can have as many columns and rows as you can imagine numbers of colliding particles. But the number of rows and columns isn’t just infinite, but uncountable, as many rows and columns as there are different numbers you can use for energy and momentum.

For some of you, an uncountably infinite matrix doesn’t sound much like a matrix. But for mathematicians familiar with vector spaces, this is totally reasonable. Even if your matrix is infinite, or even uncountably infinite, it can still be useful to think about it as a matrix.

Another subtlety, which I’m sure physicists will be howling at me about: the Higgs boson is not supposed to be in the S-matrix!

In the article, I alluded to the idea that the S-matrix lets you “hide” particles that only exist momentarily inside of a particle collision. The Higgs is precisely that sort of particle, an unstable particle. And normally, the S-matrix is supposed to only describe interactions between stable particles, particles that can survive all the way to infinity.

In my defense, if you want a nice table of probabilities to put in an article, you need an unstable particle: interactions between stable particles depend on their energy and momentum, sometimes in complicated ways, while a single unstable particle will decay into a reliable set of options.

More technically, there are also contexts in which it’s totally fine to think about an S-matrix between unstable particles, even if it’s not usually how we use the idea.

My piece also didn’t have a lot of room to discuss new developments. I thought at minimum I’d say a bit more about the work of the young people I mentioned. You can think of this as an appetizer: there are a lot of people working on different aspects of this subject these days.

Part of the initial inspiration for the piece was when an editor at Quanta noticed a recent paper by Christian Copetti, Lucía Cordova, and Shota Komatsu. The paper shows an interesting case, where one of the “logical” conditions imposed in the original S-matrix bootstrap doesn’t actually apply. It ended up being too technical for the Quanta piece, but I thought I could say a bit about it, and related questions, here.

Some of the conditions imposed by the original bootstrappers seem unavoidable. Quantum mechanics makes no sense if it doesn’t compute probabilities, and probabilities can’t be negative, or larger than one, so we’d better have an S-matrix that obeys those rules. Causality is another big one: we probably shouldn’t have an S-matrix that lets us send messages back in time and change the past.

Other conditions came from a mixture of intuition and observation. Crossing is a big one here. Crossing tells you that you can take an S-matrix entry with incoming particles, and relate it to a different S-matrix entry with outgoing anti-particles, using techniques from the calculus of complex numbers.
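Schematically, and glossing over the analytic fine print, crossing says the same function describes two different processes, evaluated in different regions of its variables: move a particle from the incoming side to the outgoing side as its antiparticle, and

```latex
\mathcal{M}_{\,1\,2 \to 3\,4}(s, t)
\;\longleftrightarrow\;
\mathcal{M}_{\,1\,\bar{3} \to \bar{2}\,4}(t, s),
\qquad
s = (p_1 + p_2)^2, \quad t = (p_1 - p_3)^2
```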

Crossing may seem quite obscure, but after some experience with S-matrices it feels obvious and intuitive. That’s why for an expert, results like the paper by Copetti, Cordova, and Komatsu seem so surprising. What they found was that a particularly exotic type of symmetry, called a non-invertible symmetry, was incompatible with crossing symmetry. They could find consistent S-matrices for theories with these strange non-invertible symmetries, but only if they threw out one of the basic assumptions of the bootstrap.

This was weird, but upon reflection not too weird. In theories with non-invertible symmetries, the behaviors of different particles are correlated together. One can’t treat far away particles as separate, the way one usually does with the S-matrix. So trying to “cross” a particle from one side of a process to another changes more than it usually would, and you need a more sophisticated approach to keep track of it. When I talked to Cordova and Komatsu, they related this to another concept called soft theorems, aspects of which have been getting a lot of attention and funding of late.

In the meantime, others have been trying to figure out where the crossing rules come from in the first place.

There were attempts in the 1970s to understand crossing in terms of other fundamental principles. They slowed in part because, as the original S-matrix bootstrap was overtaken by QCD, there was less motivation to do this type of work anymore. But they also ran into a weird puzzle. When they tried to use the rules of crossing more broadly, only some of the things they found looked like S-matrices. Others looked like stranger, meaningless calculations.

A recent paper by Simon Caron-Huot, Mathieu Giroux, Holmfridur Hannesdottir, and Sebastian Mizera revisited these meaningless calculations, and showed that they aren’t so meaningless after all. In particular, some of them match well to the kinds of calculations people wanted to do to predict gravitational waves from colliding black holes.

Imagine a pair of black holes passing close to each other, then scattering away in different directions. Unlike particles in a collider, we have no hope of catching the black holes themselves. They’re big classical objects, and they will continue far away from us. We do catch gravitational waves, emitted from the interaction of the black holes.

This different setup turns out to give the problem a very different character. It ends up meaning that instead of the S-matrix, you want a subtly different mathematical object, one related to the original S-matrix by crossing relations. Using crossing, Caron-Huot, Giroux, Hannesdottir and Mizera found many different quantities one could observe in different situations, linked by the same rules that the original S-matrix bootstrappers used to relate S-matrix entries.

The work of these two groups is just some of the work done in the new S-matrix program, but it’s typical of where the focus is going. People are trying to understand the general rules found in the past. They want to know where they came from, and as a consequence, when they can go wrong. They have a lot to learn from the older papers, and a lot of new insights come from diligent reading. But they also have a lot of new insights to discover, based on the new tools and perspectives of the modern day. For the most part, they don’t expect to find a new unified theory of physics from bootstrapping alone. But by learning how S-matrices work in general, they expect to find valuable knowledge no matter how the future goes.

Small Shifts for Specificity

Cosmologists are annoyed at a recent spate of news articles claiming the universe is 26.7 billion years old (rather than the 13.8 billion implied by the current best measurements). To some of the science-reading public, the news sounds like a confirmation of hints they’d already heard: about an ancient “Methuselah” star that seemed to be older than the universe (later estimates put it younger), and recent observations from the James Webb Space Telescope of early galaxies that look older than they ought.

The news doesn’t come from a telescope, though, or a new observation of the sky. Instead, it comes from this press release from the University of Ottawa: “Reinventing cosmology: uOttawa research puts age of universe at 26.7 — not 13.7 — billion years”.

(If you look, you’ll find many websites copying this press release almost word-for-word. This is pretty common in science news, where some websites simply aggregate press releases and others base most of their science news on them rather than paying enough for actual journalism.)

The press release, in turn, is talking about a theory, not an observation. The theorist, Rajendra Gupta, was motivated by examples like the early galaxies observed by JWST and the Methuselah star. Since the 13.8 billion year age of the universe is based on a mathematical model, he tried to find a different mathematical model that led to an older universe. Eventually, by hypothesizing what seems like every unproven physics effect he could think of, he found one that gives a different estimate, 26.7 billion. He probably wasn’t the first person to do this. Coming up with different models to explain odd observations is a standard thing cosmologists do all the time, and until one of those models is shown to explain a wider range of observations (our best theories explain a lot, so they’re hard to replace), they’re treated as speculation, not newsworthy science.

This is a pretty clear case of hype, and as such most of the discussion has been about what went wrong. Should we blame the theorist? The university? The journalists? Elon Musk?

Rather than blame, I think it’s more productive to offer advice. And in this situation, the person I think could use some advice is the person who wrote the press release.

So suppose you work for a university, writing their press releases. One day, you hear that one of your professors has done something very cool, something worthy of a press release: they’ve found a new estimate for the age of the universe. What do you do?

One thing you absolutely shouldn’t do is question the science. That just isn’t your job, and even if it were, you don’t have the expertise to do it. Anyone hoping that you will only write articles about good science and not bad science is being unrealistic; that’s just not an option.

If you can’t be more accurate, though, you can still be more precise. You can write your article, and in particular your headline, so that you express what you do know as clearly and specifically as possible.

(I’m assuming here you write your own headlines. This is not normal in journalism, where most headlines are written by an editor, not by the writer of a piece. But university press offices are small enough that I’m assuming, perhaps incorrectly, that you can choose how to title your piece.)

Let’s take a look at the title, “Reinventing cosmology: uOttawa research puts age of universe at 26.7 — not 13.7 — billion years”, and see if we can make some small changes to improve it.

One very general word in that title is “research”. Lots of people do research: astronomers do research when they collect observations, theorists do research when they make new models. If you say “research”, some people will think you’re reporting a new observation, a new measurement that gives a radically different age for the universe.

But you know that’s not true, it’s not what the scientist you’re talking to is telling you. So to avoid the misunderstanding, you can get a bit more specific, and replace the word “research” with a more precise one: “Reinventing cosmology: uOttawa theory puts age of universe at 26.7 — not 13.7 — billion years”.

“Theory” is just as familiar a word as “research”. You won’t lose clicks, you won’t confuse people. But now, you’ve closed off a big potential misunderstanding. By a small shift, you’ve gotten a lot clearer. And you didn’t need to question the science to do it!

You can do more small shifts, if you understand a bit more of the science. “Puts” is kind of ambiguous: a theory could put an age somewhere because it computes it from first principles, or because it dialed some parameter to get there. Here, the theory was intentionally chosen to give an older universe, so the title should hint at this in some way. Instead of “puts”, then, you can use “allows”: “Reinventing cosmology: uOttawa theory allows age of universe to be 26.7 — not 13.7 — billion years”.

These kinds of little tricks can be very helpful. If you’re trying to avoid being misunderstood, then it’s good to be as specific as you can, given what you understand. If you do it carefully, you don’t have to question your scientists’ ideas or downplay their contributions. You can do your job, promote your scientists, and still contribute to responsible journalism.

Whatever Happened to the Nonsense Merchants?

I was recently reminded that Michio Kaku exists.

In the past, Michio Kaku made important contributions to string theory, but he’s best known for what could charitably be called science popularization. He’s an excited promoter of physics and technology, but that excitement often strays into inaccuracy. Pretty much every time I’ve heard him mentioned, it’s for some wildly overenthusiastic statement about physics that, rather than just being simplified for a general audience, is generally flat-out wrong, conflating a bunch of different developments in a way that makes zero actual sense.

Michio Kaku isn’t unique in this. There’s a whole industry in making nonsense statements about science, overenthusiastic books and videos hinting at science fiction or mysticism. Deepak Chopra is a famous figure from deeper on this spectrum, known for peddling loosely quantum-flavored spirituality.

There was a time I was worried about this kind of thing. Super-popular misinformation is the bogeyman of the science popularizer, the worry that for every nice, careful explanation we give, someone else will give a hundred explanations that are way more exciting and total baloney. Somehow, though, I hear less and less from these people over time, and thus worry less and less about them.

Should I be worried more? I’m not sure.

Are these people less popular than they used to be? Is that why I’m hearing less about them? Possibly, but I’d guess not. Michio Kaku has eight hundred thousand twitter followers. Deepak Chopra has three million. On the other hand, the usually-careful Brian Greene has a million followers, and Neil deGrasse Tyson, where the worst I’ve heard is that he can be superficial, has fourteen million.

(But then in practice, I’m more likely to reflect on content with even smaller audiences.)

If misinformation is this popular, shouldn’t I be doing more to combat it?

Popular misinformation is also going to be popular among critics. For every big-time nonsense merchant, there are dozens of people breaking down and debunking every false statement they say, every piece of hype they release. Often, these people will end up saying the same kinds of things over and over again.

If I can be useful, I don’t think it will be by saying the same thing over and over again. I come up with new metaphors, new descriptions, new explanations. I clarify things others haven’t clarified, I clear up misinformation others haven’t addressed. That feels more useful to me, especially in a world where others are already countering the big problems. I write, and writing lasts, and can be used again and again when needed. I don’t need to keep up with the Kakus and Chopras of the world to do that.

(Which doesn’t imply I’ll never address anything one of those people says…but if I do, it will be because I have something new to say back!)

Simulated Wormholes for My Real Friends, Real Wormholes for My Simulated Friends

Maybe you’ve recently seen a headline like this:

[screenshot of Quanta’s edited headline, which includes the word “holographic”]

Actually, I’m more worried that you saw that headline before it was edited, when it looked like this:

[screenshot of Quanta’s original headline, announcing that physicists had created a wormhole using a quantum computer]

If you’ve seen either headline, and haven’t read anything else about it, then please at least read this:

Physicists have not created an actual wormhole. They have simulated a wormhole on a quantum computer.

If you’re willing to read more, then read the rest of this post. There’s a more subtle story going on here, both about physics and about how we communicate it. And for the experts, hold on, because when I say the wormhole was a simulation I’m not making the same argument everyone else is.

[And for the mega-experts, there’s an edit later in the post where I soften that claim a bit.]

The headlines at the top of this post come from an article in Quanta Magazine. Quanta is a web-based magazine covering many fields of science. They’re read by the general public, but they aim for a higher standard than many science journalists, with stricter fact-checking and a goal of covering more challenging and obscure topics. Scientists in turn have tended to be quite happy with them: often, they cover things we feel are important but that the ordinary media isn’t able to cover. (I even wrote something for them recently.)

Last week, Quanta published an article about an experiment with Google’s Sycamore quantum computer. By arranging the quantum bits (qubits) in a particular way, they were able to observe behaviors one would expect out of a wormhole, a kind of tunnel linking different points in space and time. They published it with the second headline above, claiming that physicists had created a wormhole with a quantum computer and explaining how, using a theoretical picture called holography.

This pissed off a lot of physicists. After push-back, Quanta’s twitter account published this statement, and they added the word “Holographic” to the title.

Why were physicists pissed off?

It wasn’t because the Quanta article was wrong, per se. As far as I’m aware, all the technical claims they made are correct. Instead, it was about two things. One was the title, and the implication that physicists “really made a wormhole”. The other was the tone, the excited “breaking news” framing complete with a video comparing the experiment with the discovery of the Higgs boson. I’ll discuss each in turn:

The Title

Did physicists really create a wormhole, or did they simulate one? And why would that be at all confusing?

The story rests on a concept from the study of quantum gravity, called holography. Holography is the idea that in quantum gravity, certain gravitational systems like black holes are fully determined by what happens on a “boundary” of the system, like the event horizon of a black hole. It’s supposed to be a hologram in analogy to 3d images encoded in 2d surfaces, rather than like the hard-light constructions of science fiction.

The best-studied version of holography is something called AdS/CFT duality. AdS/CFT duality is a relationship between two different theories. One of them is a CFT, or “conformal field theory”, a type of particle physics theory with no gravity and no mass. (The first example of the duality used my favorite toy theory, N=4 super Yang-Mills.) The other one is a version of string theory in an AdS, or anti-de Sitter space, a version of space-time curved so that objects shrink as they move outward, approaching a boundary. (In the first example, this space-time had five dimensions curled up in a sphere and the rest in the anti-de Sitter shape.)

These two theories are conjectured to be “dual”. That means that, for anything that happens in one theory, you can give an alternate description using the other theory. We say the two theories “capture the same physics”, even though they appear very different: they have different numbers of dimensions of space, and only one has gravity in it.

Many physicists would claim that if two theories are dual, then they are both “equally real”. Even if one description is more familiar to us, both descriptions are equally valid. Many philosophers are skeptical, but honestly I think the physicists are right about this one. Philosophers try to figure out which things are real or not real, to make a list of real things and explain everything else as made up of those in some way. I think that whole project is misguided, that it’s clarifying how we happen to talk rather than the nature of reality. In my mind, dualities are some of the clearest evidence that this project doesn’t make any sense: two descriptions can look very different, but in a quite meaningful sense be totally indistinguishable.

That’s the sense in which Quanta and Google and the string theorists they’re collaborating with claim that physicists have created a wormhole. They haven’t created a wormhole in our own space-time, one that, were it bigger and more stable, we could travel through. It isn’t progress towards some future where we actually travel the galaxy with wormholes. Rather, they created some quantum system, and that system’s dual description is a wormhole. That’s a crucial point to remember: even if they created a wormhole, it isn’t a wormhole for you.

If that were the end of the story, this post would still be full of warnings, but the title would be a bit different. It was going to be “Dual Wormholes for My Real Friends, Real Wormholes for My Dual Friends”. But there’s a list of caveats. Most of them arguably don’t matter, but the last was what got me to change the word “dual” to “simulated”.

  1. The real world is not described by N=4 super Yang-Mills theory. N=4 super Yang-Mills theory was never intended to describe the real world. And while the real world may well be described by string theory, those strings are not curled up around a five-dimensional sphere with the remaining dimensions in anti-de Sitter space. In any case, we can’t create either theory in a lab.
  2. The Standard Model probably has a quantum gravity dual too, see this cute post by Matt Strassler. But they still wouldn’t have been able to use that to make a holographic wormhole in a lab.
  3. Instead, they used a version of AdS/CFT with fewer dimensions. It relates a weird form of gravity in one space and one time dimension (called JT gravity), to a weird quantum mechanics theory called SYK, with an infinite number of quantum particles or qubits. This duality is a bit more conjectural than the original one, but still reasonably well-established.
  4. Quantum computers don’t have an infinite number of qubits, so they had to use a version with a finite number: seven, to be specific. They trimmed the model down so that it would still show the wormhole-dual behavior they wanted. At this point, you might say that they’re definitely just simulating the SYK theory, using a small number of qubits to simulate the infinite number. But I think they could argue that this system, too, has a quantum gravity dual. The dual would have to be even weirder than JT gravity, and even more conjectural, but the signs of wormhole-like behavior they observed (mostly through simulations on an ordinary computer, which is still better at this kind of thing than a quantum computer) could be seen as evidence that this limited theory has its own gravity partner, with its own “real dual” wormhole.
  5. But those seven qubits don’t just have the interactions they were programmed to have, the ones with the dual. They are physical objects in the real world, so they interact with all of the forces of the real world. That includes, though very weakly, the force of gravity.

And that’s where I think things break, and you have to call the experiment a simulation. You can argue, if you really want to, that the seven-qubit SYK theory has its own gravity dual, with its own wormhole. There are people who expect duality to be broad enough to include things like that.

But you can’t argue that the seven-qubit SYK theory, plus gravity, has its own gravity dual. Theories that already have gravity are not supposed to have gravity duals. If you pushed hard enough on any of the string theorists on that team, I’m pretty sure they’d admit that.

That is what decisively makes the experiment a simulation. It approximately behaves like a system with a dual wormhole, because you can approximately ignore gravity. But if you’re making some kind of philosophical claim, that you “really made a wormhole”, then “approximately” doesn’t cut it: if you don’t exactly have a system with a dual, then you don’t “really” have a dual wormhole: you’ve just simulated one.

Edit: mitchellporter in the comments points out something I didn’t know: that there are in fact proposals for gravity theories with gravity duals. They are in some sense even more conjectural than the series of caveats above, but at minimum my claim above, that any of the string theorists on the team would agree that the system’s gravity means it can’t have a dual, is probably false.

I think at this point, I’d soften my objection to the following:

Describing the system of qubits in the experiment as a limited version of the SYK theory is in one way or another an approximation. It approximates them as not having any interactions beyond those they programmed, it approximates them as not affected by gravity, and because it’s a quantum mechanical description it even approximates the speed of light as infinite. Those approximations don’t guarantee that the system doesn’t have a gravity dual. But for it to have one, our reality, overall, would have to have a gravity dual. There would have to be a dual gravity interpretation of everything, not just the inside of Google’s quantum computer, and it would have to be exact, not just an approximation. Then the approximate SYK would be dual to an approximate wormhole, but that approximate wormhole would be an approximation of some “real” wormhole in the dual space-time.

That’s not impossible, as far as I can tell. But it piles conjecture upon conjecture upon conjecture, to the point that I don’t think anyone has explicitly committed to the whole tower of claims. If you want to believe that this experiment literally created a wormhole, you thus can, but keep in mind the largest asterisk known to mankind.

End edit.

If it weren’t for that caveat, then I would be happy to say that the physicists really created a wormhole. It would annoy some philosophers, but that’s a bonus.

But even if that were true, I wouldn’t say that in the title of the article.

The Title, Again

These days, people get news in two main ways.

Sometimes, people read full news articles. Reading that Quanta article is a good way to understand the background of the experiment, what was done and why people care about it. As I mentioned earlier, I don’t think anything said there was wrong, and they cover essentially all of the caveats you’d care about (except for that last one 😉 ).

Sometimes, though, people just see headlines. They get forwarded on social media, observed at a glance passed between friends. If you’re popular enough, then many more people will see your headline than will actually read the article. For many people, their whole understanding of certain scientific fields is formed by these glancing impressions.

Because of that, if you’re popular and news-y enough, you have to be especially careful with what you put in your headlines, especially when it implies a cool science fiction story. People will almost inevitably see them out of context, and it will impact their view of where science is headed. In this case, the headline may have given many people the impression that we’re actually making progress towards travel via wormholes.

Some of my readers might think this is ridiculous, that no-one would believe something like that. But as a kid, I did. I remember reading popular articles about wormholes, describing how you’d need energy moving in a circle, and other articles about optical physicists finding ways to bend light and make it stand still. Putting two and two together, I assumed these ideas would one day merge, allowing us to travel to distant galaxies faster than light.

If I had seen Quanta’s headline at that age, I would have taken it as confirmation. I would have believed we were well on the way to making wormholes, step by step. Even the New York Times headline, “the Smallest, Crummiest Wormhole You Can Imagine”, wouldn’t have fazed me.

(I’m not sure even the extra word “holographic” would have. People don’t know what “holographic” means in this context, and while some of them would assume it meant “fake”, others would think about the many works of science fiction, like Star Trek, where holograms can interact physically with human beings.)

Quanta has a high-brow audience, many of whom wouldn’t make this mistake. Nevertheless, I think Quanta is popular enough, and respectable enough, that they should have done better here.

At minimum, they could have used the word “simulated”. Even if they go on to argue in the article that the wormhole is real, and not just a simulation, the word in the title does no real harm. It would be a lie, but a beneficial “lie to children”, the basic stock-in-trade of science communication. I think they could have defended it to the string theorists they interviewed on those grounds.

The Tone

Honestly, I don’t think people would have been nearly so pissed off were it not for the tone of the article. There are a lot of physics bloggers who view themselves as serious-minded people, opposed to hype and publicity stunts. They view the research program aimed at simulating quantum gravity on a quantum computer as just an attempt to link a dying and un-rigorous research topic to an over-hyped and over-funded one, pompous storytelling aimed at promoting the careers of people who are already extremely successful.

These people tend to view Quanta favorably, because it covers serious-minded topics in a thorough way. And so many of them likely felt betrayed, seeing this Quanta article as a massive failure of that serious-mindedness, falling for or even endorsing the hypiest of hype.

To those people, I’d like to politely suggest you get over yourselves.

Quanta’s goal is to cover things accurately, to represent all the facts in a way people can understand. But “how exciting something is” is not a fact.

Excitement is subjective. Just because most of the things Quanta finds exciting you also find exciting, does not mean that Quanta will find the things you find unexciting unexciting. Quanta is not on “your side” in some war against your personal notion of unexciting science, and you should never have expected it to be.

In fact, Quanta tends to find things exciting, in general. They were more excited than I was about the amplituhedron, and I’m an amplitudeologist. Part of what makes them consistently excited about the serious-minded things you appreciate them for is that they listen to scientists and get excited about the things they’re excited about. That is going to include, inevitably, things those scientists are excited about for what you think are dumb groupthinky hype reasons.

I think the way Quanta titled the piece was unfortunate, and probably did real damage. I think the philosophical claim behind the title is wrong, though for subtle and weird enough reasons that I don’t really fault anybody for ignoring them. But I don’t think the tone they took was a failure of journalistic integrity or research or anything like that. It was a matter of taste. It’s not my taste, it’s probably not yours, but we shouldn’t have expected Quanta to share our tastes in absolutely everything. That’s just not how taste works.