Hot Things Are Less Useful

Did you know that particle colliders have to cool down their particle beams before they collide?

You might have learned in school that temperature is secretly energy. With a number called Boltzmann’s constant, you can convert a temperature of a gas in Kelvin to the average energy of a molecule in the gas. If that’s what you remember about temperature, it might seem weird that someone would cool down the particles in a particle collider. The whole point of a particle collider is to accelerate particles, giving them lots of energy, before colliding them together. Since those particles have a lot of energy, they must be very hot, right?

Well, no. Here’s the thing: temperature is not just the average energy. It’s the average random energy: energy that might send a particle forwards or backwards, up or down, a different random motion for each particle. It doesn’t include motion that’s the same for every particle, like the overall movement of a particle beam.
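
(For the curious, here’s the textbook version of both statements, for a simple monatomic ideal gas. The split into bulk motion plus random motion is just my way of making the point concrete; it isn’t specific to any particular collider.)

```latex
% Temperature measures the *random* kinetic energy per molecule (monatomic ideal gas):
\left\langle E_{\text{random}} \right\rangle = \tfrac{3}{2} k_B T,
\qquad k_B \approx 1.38 \times 10^{-23} \ \text{J/K}.

% The total kinetic energy splits into bulk motion (the beam) plus random motion (the heat):
\left\langle \tfrac{1}{2} m |\vec{v}|^2 \right\rangle
  = \tfrac{1}{2} m |\vec{u}|^2
  + \tfrac{1}{2} m \left\langle |\vec{v} - \vec{u}|^2 \right\rangle,
\qquad \vec{u} = \langle \vec{v} \rangle .
```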

Cooling down a particle beam, then, doesn’t mean slowing it down. Rather, it means making it more consistent, getting the different particles moving in the same direction rather than randomly spreading apart. You want the particles to go somewhere specific, speeding up and slamming into the other beam. You don’t want them to move randomly, running into the walls and destroying your collider. So you can have something with high energy that is comparatively cool.

In general, the best way I’ve found to think about temperature and heat is in terms of usefulness and uselessness. Cool things are useful: they do what you expect and not much more. Hot things are less useful: they use energy to do random things you don’t want. Sometimes, by chance, this random energy will still do something useful, and if you have a cold thing to pair with the hot thing, you can take advantage of this in a consistent way. But hot things by themselves are less useful, and that’s why particle colliders try to cool down their beams.

AI Can’t Do Science…And Neither Can Other Humans

Seen on Twitter:

I don’t know the context here, so I can’t speak to what Prof. Cronin meant. But it got me thinking.

Suppose you, like Prof. Cronin, were to insist that AI “cannot in principle” do science, because AI “is not autonomous” and “does not come up with its own problems to solve”. What might you mean?

You might just be saying that AI is bad at coming up with new problems to solve. That’s probably fair, at least at the moment. People have experimented with creating simple “AI researchers” that “study” computer programs, coming up with hypotheses about the programs’ performance and testing them. But it’s a long road from that to reproducing the much higher standards human scientists have to satisfy.

You probably don’t mean that, though. If you did, you wouldn’t have said “in principle”. You mean something stronger.

More likely, you might mean that AI cannot come up with its own problems, because AI is a tool. People come up with problems, and use AI to help solve them. In this perspective, not only is AI “not autonomous”, it cannot be autonomous.

On a practical level, this is clearly false. Yes, machine learning models, the core technology in current AI, are set up to answer questions. A user asks something, and receives the model’s prediction of the answer. That’s a tool, but for more flexible models like GPT it’s trivial to turn that tool into something autonomous. Just add another program: a loop that asks the model what to do, does it, tells the model the result, and asks what to do next. Like taping a knife to a Roomba, it’s a very simple modification that makes the technology much more dangerous.
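
In pseudocode, that wrapper really is just a few lines. This is only a sketch of the idea, not anyone’s actual agent; ask_model and run_action are hypothetical stand-ins for a real model API and for whatever tools you let the loop use.

```python
def ask_model(history):
    """Hypothetical stand-in for a question-answering model's API:
    given everything that's happened so far, suggest the next action."""
    raise NotImplementedError

def run_action(action):
    """Hypothetical stand-in for actually doing things in the world:
    run code, search the web, steer the Roomba..."""
    raise NotImplementedError

def autonomous_loop(goal, max_steps=100):
    history = ["Goal: " + goal]
    for _ in range(max_steps):
        action = ask_model(history)        # ask the model what to do
        if action == "DONE":
            break
        result = run_action(action)        # do it
        history.append("Did: " + action + " -> " + str(result))  # tell the model the result
    return history
```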

You might object, though, that this simple modification of GPT is not really autonomous. After all, a human created it. That human had some goal, some problem they wanted to solve, and the AI is just solving the problem for them.

That may be a fair description of current AI, but insisting it’s true in principle has some awkward implications. If you make a “physics AI”, just tell it to do “good physics”, and it starts coming up with hypotheses you’d never thought of, is it really fair to say it’s just solving your problem?

What if the AI, instead, was a child? Picture a physicist encouraging a child to follow in their footsteps, filling their life with physics ideas and rhapsodizing about the hard problems of the field at the dinner table. Suppose the child becomes a physicist in turn, and finds success later in life. Were they really autonomous? Were they really a scientist?

What if the child, instead, was a scientific field, and the parent was the general public? The public votes for representatives, the representatives vote to hire agencies, and the agencies promise scientists they’ll give them money if they like the problems they come up with. Who is autonomous here?

(And what happens if someone takes a hammer to that process? I’m…still not talking about this! No-politics-rule still in effect, sorry! I do have a post planned, but it will have to wait until I can deal with the fallout.)

At this point, you’d probably stop insisting. You’d drop that “in principle”, and stick with the claim I started with, that current AI can’t be a scientist.

But you have another option.

You can accept the whole chain of awkward implications, bite all the proverbial bullets. Yes, you insist, AI is not autonomous. Neither is the physicist’s child in your story, and neither are the world’s scientists paid by government grants. Each is a tool, used by the one, true autonomous scientist: you.

You are stuck in your skull, a blob of curious matter trained on decades of experience in the world and pre-trained with a couple billion years of evolution. For whatever reason, you want to know more, so you come up with problems to solve. You’re probably pretty vague about those problems. You might want to see more pretty pictures of space, or wrap your head around the nature of time. So you turn the world into your tool. You vote and pay taxes, so your government funds science. You subscribe to magazines and newspapers, so you hear about it. You press out against the world, and along with the pressure that already exists it adds up, and causes change. Biological intelligences and artificial intelligences scurry at your command. From their perspective, they are proposing their own problems, much more detailed and complex than the problems you want to solve. But from yours, they’re your limbs beyond limbs, sight beyond sight, asking the fundamental questions you want answered.

Cool Asteroid News

Did you hear about the asteroid?

Which one?

You might have heard that an asteroid named 2024 YR4 is going to come unusually close to the Earth in 2032. When it first made the news, astronomers estimated a non-negligible chance of it hitting us: about three percent. That’s small enough that they didn’t expect it to happen, but large enough to plan around: people invest in startups with a smaller chance of succeeding. Still, people were fairly calm about this one, and there are a few good reasons:

  • First, this isn’t a “kill the dinosaurs” asteroid, it’s much smaller. This is a “Tunguska Event” asteroid. Still pretty bad if it happens near a populated area, but not the end of life as we know it.
  • We know about it far in advance, and space agencies have successfully deflected an asteroid before, for a test. If it did pose a risk, it’s quite likely they’d be able to change its path so it misses the Earth instead.
  • It’s tempting to think of that 3% chance as like the roll of a hundred-sided die: the asteroid is on a random path, roll 1 to 3 and it will hit the Earth, roll higher and it won’t, and nothing we do will change that. In reality, though, that 3% was a measure of our ignorance. As astronomers measure the asteroid more thoroughly, they’ll know more and more about its path, and each time they figure something out, they’ll update the number (see the sketch after this list).
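
Here’s a toy illustration of that last point, with made-up one-dimensional numbers rather than real orbital mechanics: treat the “probability of impact” as the fraction of paths consistent with your measurements that happen to hit, and watch it change as the measurements tighten.

```python
import random

def impact_probability(miss_km, uncertainty_km, earth_radius_km=6371, samples=100_000):
    """Toy 'probability as ignorance': sample closest-approach distances
    consistent with the current measurement and count how many would hit.
    A one-dimensional stand-in for real orbit determination."""
    hits = sum(1 for _ in range(samples)
               if abs(random.gauss(miss_km, uncertainty_km)) < earth_radius_km)
    return hits / samples

# Early, rough tracking: big uncertainty, an impact chance around a few percent.
print(impact_probability(miss_km=100_000, uncertainty_km=60_000))
# Later, tighter tracking of the same path: the number collapses toward zero.
print(impact_probability(miss_km=100_000, uncertainty_km=15_000))
```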

And indeed, the number has been updated. In just the last few weeks, the estimated probability of impact has dropped from 3% to a few thousandths of a percent, as more precise observations clarified the asteroid’s path. There’s still a non-negligible chance it will hit the moon (about two percent at the moment), but the asteroid is far too small to do more than make a big flashy crater there.

It’s kind of fun to think that there are people out there who systematically track these things, with a plan to deal with them. It feels like something out of a sci-fi novel.

But I find the other asteroid more fun.

In 2020, a probe sent by NASA visited an asteroid named Bennu, taking samples which it carefully packaged and brought back to Earth. Now, scientists have analyzed the samples, revealing several moderately complex chemicals that play an important role in life on Earth, like amino acids and the bases that make up RNA and DNA. Interestingly, while on Earth these molecules all have the same “handedness”, the molecules on Bennu are divided about 50/50. Something similar was seen in samples retrieved from another asteroid, so this reinforces the idea that amino acids and nucleotide bases in space do not have a preferred handedness.

I first got into physics for the big deep puzzles, the ones that figure into our collective creation story. Where did the universe come from? Why are its laws the way they are? Over the ten years since I got my PhD, it’s felt like the answers to these questions have gotten further and further away, with new results serving mostly to rule out possible explanations with greater and greater precision.

Biochemistry has its own deep puzzles figuring into our collective creation story, and the biggest one is abiogenesis: how life formed from non-life. What excites me about these observations from Bennu is that they represent real ongoing progress on that puzzle. By glimpsing a soup of ambidextrous molecules, Bennu tells us something about how our own molecules’ handedness could have developed, and rules out some of the ways it might have. In physics, if we could see an era of the universe when there were equal amounts of matter and antimatter, we’d be ecstatic: it would confirm that the imbalance between matter and antimatter is a real mystery, and show us where we need to look for the answer. I love that researchers on the origins of life have reason right now to be similarly excited.

Some FAQ for Microsoft’s Majorana 1 Chip

Recently, Microsoft announced a fancy new quantum computing chip called Majorana 1. I’ve noticed quite a bit of confusion about what they actually announced, and while there’s a great FAQ page about it on the quantum computing blog Shtetl-Optimized, the post there aims at a higher level, assuming you already know the basics. You can think of this post as a complement to that one, one that tries to cover some basic things Shtetl-Optimized took for granted.

Q: In the announcement, Microsoft said:

“It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.”

That sounds wild! Are they really using particles in a computer?

A: All computers use particles. Electrons are particles!

Q: You know what I mean!

A: You’re asking if these are “particle physics” particles, like the weird types they try to observe at the LHC?

No, they’re not.

Particle physicists use a mathematical framework called quantum field theory, where particles are ripples in things called quantum fields that describe properties of the universe. But they aren’t the only people to use that framework. Instead of studying properties of the universe, you can study properties of materials: unusual alloys and layers of metal and crystal that do strange and useful things. The properties of these materials can be approximately described with the same math, with quantum fields. Just as the properties of the universe ripple to produce particles, these properties of materials ripple to produce what are called quasiparticles. Ultimately, these quasiparticles come down to movements of ordinary matter, usually electrons in the original material. They’re just described with a kind of math that makes them look like their own particles.

Q: So, what are these Majorana particles supposed to be?

A: In quantum field theory, most particles come with an antimatter partner. Electrons, for example, have partners called positrons, with a positive electric charge instead of a negative one. These antimatter partners have to exist due to the math of quantum field theory, but there is a way out: some particles are their own antimatter partner, letting one particle cover both roles. This happens for some “particle physics particles”, but all the examples we’ve found are a type of particle called a “boson”, the type of particle related to forces. In 1937, the physicist Ettore Majorana worked out the math you would need to make a particle like this that was instead a fermion, the other main type of particle, the type that includes electrons and protons.

So far, we haven’t found one of these Majorana fermions in nature, though some people think the elusive neutrinos could be an example. Others have tried instead to find a material described by Majorana’s theory. This should in principle be easier: you can build a lot of different materials, after all. But it has proven quite hard to do. Back in 2018, Microsoft claimed they’d managed this, but had to retract the claim. This time, they seem more confident, though the scientific community is still not convinced.

Q: And what’s this topoconductor they’re talking about?

A: Topoconductor is short for topological superconductor. Superconductors are materials that conduct electricity with no resistance at all, something even the best ordinary metals can’t do.

Q: And, topological means? Something about donuts, right?

A: If you’ve heard anything about topology, you’ve heard that it’s a type of mathematics where donuts are equivalent to coffee cups. You might have seen an animation of a coffee cup being squished and mushed around until the ring of the handle becomes the ring of a donut.

This isn’t actually the important part of topology. The important part is that, in topology, a ball is not equivalent to a donut.

Topology is the study of which things can change smoothly into one another. If you want to change a donut into a ball, you have to slice through the donut’s ring or break the surface inside. You can’t smoothly change one to another. Topologists study shapes of different kinds of things, figuring out which ones can be changed into each other smoothly and which can’t.

Q: What does any of that have to do with quantum computers?

A: The shapes topologists study aren’t always as simple as donuts and coffee cups. They can also study the shape of quantum fields, figuring out which types of quantum fields can change smoothly into each other and which can’t.

The idea of topological quantum computation is to use those rules about what can change into each other to encode information. You can imagine a ball encoding zero, and a donut encoding one. A coffee cup would then also encode one, because it can change smoothly into a donut, while a box would encode zero because you can squash the corners to make it a ball. This helps, because it means that you don’t screw up your information by making smooth changes. If you accidentally drop your box that encodes zero and squish a corner, it will still encode zero.
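
If it helps, here’s the ball-and-donut analogy as a toy program. It’s only an analogy, nothing like how a real topological qubit is built: the bit lives in the number of holes, and squishing can never change it.

```python
from dataclasses import dataclass

@dataclass
class Shape:
    holes: int        # topological data: how many holes the surface has
    roundness: float  # geometric data: how squished or round it currently is

def encoded_bit(shape):
    """The information lives in the topology: 0 for ball-like, 1 for donut-like."""
    return 0 if shape.holes == 0 else 1

def squish(shape, amount):
    """A 'smooth' deformation: changes the geometry, never the topology."""
    return Shape(holes=shape.holes, roundness=shape.roundness + amount)

box = Shape(holes=0, roundness=0.2)   # encodes 0
mug = Shape(holes=1, roundness=0.8)   # encodes 1, like a donut

# Accidentally dropping and squashing things doesn't change what they encode:
assert encoded_bit(squish(box, 0.5)) == 0
assert encoded_bit(squish(mug, -0.3)) == 1
```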

This matters in quantum computing because it is very easy to screw up quantum information. Quantum computers are very delicate, and making them work reliably has been immensely challenging, requiring people to build much bigger quantum computers so they can do each calculation with many redundant backups. The hope is that topological superconductors would make this easier, by encoding information in a way that is hard to accidentally change.

Q: Cool. So does that mean Microsoft has the best quantum computer now?

A: The machine Microsoft just announced has only a single qubit, the quantum equivalent of just a single bit of computer memory. At this point, it can’t do any calculations. It can just be read, giving one or zero. The hope is that the power of the new method will let Microsoft catch up with companies that have computers with hundreds of qubits, and help them arrive faster at the millions of qubits that will be needed to do anything useful.

Q: Ah, ok. But it sounds like they accomplished some crazy Majorana stuff at least, right?

A: Umm…

Read the Shtetl-Optimized FAQ if you want more details. The short answer is that this is still controversial. So far, the evidence they’ve made public isn’t enough to show that they found these Majorana quasiparticles, or that they made a topological superconductor. They say they have more recent evidence that they haven’t published yet. We’ll see.

Bonus Material for “How Hans Bethe Stumbled Upon Perfect Quantum Theories”

I had an article last week in Quanta Magazine. It’s a piece about something called the Bethe ansatz, a method in mathematical physics that was discovered by Hans Bethe in the 1930’s, but which only really started being understood and appreciated around the 1960’s. Since then it’s become a key tool, used in theoretical investigations in areas from condensed matter to quantum gravity. In this post, I thought I’d say a bit about the story behind the piece and give some bonus material that didn’t fit.

When I first decided to do the piece I reached out to Jules Lamers. We were briefly office-mates when I worked in France, where he was giving a short course on the Bethe ansatz and the methods that sprang from it. It turned out he had also been thinking about writing a piece on the subject, and we considered co-writing for a bit, but that didn’t work for Quanta. He helped me a huge amount with understanding the history of the subject and tracking down the right sources. If you’re a physicist who wants to learn about these things, I recommend his lecture notes. And if you’re a non-physicist who wants to know more, I hope he gets a chance to write a longer popular-audience piece on the topic!

If you clicked through to Jules’s lecture notes, you’d see that the phrase “Bethe ansatz” doesn’t appear in the title. Instead, you’d see the phrase “quantum integrability”. In classical physics, an “integrable” system is one where you can calculate what will happen by doing an integral, essentially letting you “solve” any problem completely. Systems you can describe with the Bethe ansatz are solvable in a more complicated quantum sense, so they get called “quantum integrable”. There’s a whole research field that studies these quantum integrable systems.

My piece ended up rushing through the history of the field. After talking about Bethe’s original discovery, I jumped ahead to ice. The Bethe ansatz was first used to think about ice in the 1960’s, but the developments I mentioned leading up to it, where experimenters noticed extra variability and theorists explained it with the positions of hydrogen atoms, happened earlier, in the 1930’s. (Thanks to the commenter who pointed out that this was confusing!) Baxter gets a starring role in this section and had an important role in tying things together, but other people (Lieb and Sutherland) were involved earlier, showing that the Bethe ansatz could indeed be used with thin sheets of ice. This era had a bunch of other big names that I didn’t have space to talk about: C. N. Yang makes an appearance, and while Faddeev comes up later, I didn’t mention that he had a starring role in the 1970’s in understanding the connection to classical integrability and in proposing a mathematical structure that links all these different integrable theories together.

I vaguely gestured at black holes and quantum gravity, but didn’t have space for more than that. The connection there is to a topic you might have heard of before if you’ve read about string theory, called AdS/CFT, a connection between two kinds of world that are secretly the same: a toy model of gravity called Anti-de Sitter space (AdS) and a theory without gravity that looks the same at any scale (called a Conformal Field Theory, or CFT). It turns out that in the most prominent example of this, the theory without gravity is integrable! In fact, it’s a theory I spent a lot of time working with back in my research days, called N=4 super Yang-Mills. This theory is kind of like QCD, and in some sense it has integrability for similar reasons to those that Feynman hoped for and Korchemsky and Faddeev found. But it actually goes much farther, outside of the high-energy approximation where Korchemsky and Faddeev’s result works, and in principle seems to include everything you might want to know about the theory. Nowadays, people are using it to investigate the toy model of quantum gravity, hoping to get insights about quantum gravity in general.

One thing I didn’t get a chance to mention at all is the connection to quantum computing. People are trying to build quantum computers with carefully-cooled atoms. It’s important to test whether such a quantum computer functions well enough, or whether its quantum states aren’t as perfect as they need to be. One way people have been testing this is with the Bethe ansatz: because it lets you calculate the behavior of special systems perfectly, you can set up your quantum computer to model one of those systems, and then check how close your results come to the prediction. You know that the theoretical result is exact, so any discrepancy has to be due to an imperfection in your experiment.

I gave a quick teaser to a very active field, one that has fascinated a lot of prominent physicists and been applied in a wide variety of areas. I hope I’ve inspired you to learn more!

Valentine’s Day Physics Poem 2025

Today is Valentine’s Day, so it’s time for the blog’s yearly tradition of posting a poem. This one is inspired by that one Robert Wilson quote.

The physicist was called 
before the big wide world and asked,
Why?

This commitment
This drive
This dream

(and as Nature is a woman, so let her be)

How does she defend?
How does she serve your interests,
home and abroad
(which may be one and the same)?

The physicist stood
before the big wide world
alone but not alone

and answered

She makes me worth defending.

A realist defends to defend
Lives to live
Survives to survive
And devours to devour
It’s dour
Mere existence
The law of “better mine than yours”

Instead, the physicist spoke of the painters,
the sculptors,
…and the poets
He spoke of dignity and honor and love and worth
Of seeing a twinkling many-faceted thing
past the curve of the road
and a future to be shared.

Integration by Parts, Evolved

I posted what may be my last academic paper today, about a project I’ve been working on with Matthias Wilhelm for most of the last year. The paper is now online here. For me, the project has been a chance to broaden my horizons, learn new skills, and start to step out of my academic comfort zone. For Matthias, I hope it was grant money well spent.

I wanted to work on something related to machine learning, for the usual trendy employability reasons. Matthias was already working with machine learning, but was interested in pursuing a different question.

When is machine learning worthwhile? Machine learning methods are heuristics, unreliable methods that sometimes work well. You don’t use a heuristic if you have a reliable method that runs fast enough. But if all you have are heuristics to begin with, then machine learning can give you a better heuristic.

Matthias noticed a heuristic embedded deep in how we do particle physics, and guessed that we could do better. In particle physics, we use pictures called Feynman diagrams to predict the probabilities for different outcomes of collisions, comparing those predictions to observation to look for evidence of new physics. Each Feynman diagram corresponds to an integral, and for each calculation there are hundreds, thousands, or even millions of those integrals to do.

Luckily, physicists don’t actually have to do all those integrals. It turns out that most of them are related, by a slightly more advanced version of that calculus class mainstay, integration by parts. Using integration by parts you can solve a list of equations, finding out how to write your integrals in terms of a much smaller list.
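
To give a flavor of what one of these identities looks like, here’s the standard textbook one-loop example, not one of the integrals from our paper. In dimensional regularization the integral of a total derivative vanishes, and that single fact ties a whole family of integrals together:

```latex
% One-loop "tadpole" family: I(a) = \int d^d k \, (k^2 + m^2)^{-a}.
% The integral of a total derivative vanishes:
0 = \int d^d k \, \frac{\partial}{\partial k^\mu}
      \left[ \frac{k^\mu}{(k^2 + m^2)^a} \right]
  = \int d^d k \left[ \frac{d}{(k^2 + m^2)^a}
      - \frac{2a \, k^2}{(k^2 + m^2)^{a+1}} \right]
  = (d - 2a)\, I(a) + 2a\, m^2\, I(a+1),
% using k^2 = (k^2 + m^2) - m^2. So the whole family reduces to one "master" integral:
I(a+1) = \frac{2a - d}{2a\, m^2}\, I(a).
```

Real calculations have many loop momenta and propagators, so instead of one tidy recursion you get the enormous linear systems the rest of this post is about.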

How big a list of equations do you need, and which ones? Twenty-five years ago, Stefano Laporta proposed a “golden rule” to choose, based on his own experience, and people have been using it (more or less, with their own tweaks) since then.

Laporta’s rule is a heuristic, with no proof that it is the best option, or even that it will always work. So we probably shouldn’t have been surprised when someone came up with a better heuristic. Watching talks at a December 2023 conference, Matthias saw a presentation by Johann Usovitsch on a curious new rule. The rule was surprisingly simple, just one extra condition on top of Laporta’s. But it was enough to reduce the number of equations by a factor of twenty.

That’s great progress, but it’s also a bit frustrating. Over almost twenty-five years, no-one had guessed this one simple change?

Maybe, thought Matthias and I, we need to get better at guessing.

We started out thinking we’d try reinforcement learning, a technique where a machine is trained by playing a game again and again, changing its strategy when that strategy brings it a reward. We thought we could have the machine learn to cut away extra equations, getting rewarded if it could cut more while still getting the right answer. We didn’t end up pursuing this very far before realizing another strategy would be a better fit.

What is a rule, but a program? Laporta’s golden rule and Johann’s new rule could both be expressed as simple programs. So we decided to use a method that could guess programs.

One method stood out for sheer trendiness and audacity: FunSearch. FunSearch is a type of algorithm called a genetic algorithm, which tries to mimic evolution. It makes a population of different programs, “breeds” them with each other to create new programs, and periodically selects out the ones that perform best. That’s not the trendy or audacious part, though; people have been doing that sort of genetic programming for a long time.

The trendy, audacious part is that FunSearch generates these programs with a Large Language Model, or LLM (the type of technology behind ChatGPT). Using an LLM trained to complete code, FunSearch presents the model with two programs labeled v0 and v1 and asks it to complete v2. In general, program v2 will have some traits from v0 and v1, but also a lot of variation due to the unpredictable output of LLMs. The inventors of FunSearch used that unpredictability to supply the variation needed for evolution, evolving programs that find better solutions to math problems.
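
Here’s a bare-bones sketch of that loop, my own simplification rather than the published FunSearch code; llm_complete and score are placeholders for a real code model and for the problem-specific fitness.

```python
import random

def llm_complete(prompt):
    """Hypothetical stand-in for a code-completion model: given two example
    programs labeled v0 and v1, return the text of a new program, v2."""
    raise NotImplementedError

def score(program_text):
    """Problem-specific fitness: how good is the rule this program implements?"""
    raise NotImplementedError

def funsearch_style_loop(seed_program, generations=1000, keep_best=20):
    population = [(score(seed_program), seed_program)]
    for _ in range(generations):
        # "Breed": show the model two parent programs and ask it to write a child.
        parents = random.sample(population, k=min(2, len(population)))
        prompt = ("# priority_v0:\n" + parents[0][1] + "\n\n"
                  "# priority_v1:\n" + parents[-1][1] + "\n\n"
                  "# priority_v2:\n")
        child = llm_complete(prompt)
        population.append((score(child), child))
        # Select: periodically keep only the best-performing programs.
        population = sorted(population, reverse=True)[:keep_best]
    return max(population)  # (best_score, best_program)
```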

We decided to try FunSearch on our problem, modifying it a bit to fit the case. We asked it to find a shorter list of equations, giving a better score for a shorter list but a penalty if the list wasn’t able to solve the problem fully.
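
Roughly, the scoring looked like this. This is a simplified sketch rather than our actual implementation; solves_fully is a placeholder for running the reduction and checking it still works.

```python
def solves_fully(kept_equations, full_problem):
    """Placeholder: run the integration-by-parts reduction with only these
    equations and check whether every integral still gets reduced."""
    raise NotImplementedError

def score_candidate(kept_equations, full_problem, penalty=10**6):
    """Shorter equation lists score higher, but a list that can't fully
    solve the problem is heavily penalized."""
    if not solves_fully(kept_equations, full_problem):
        return -penalty
    return -len(kept_equations)  # fewer equations = higher (less negative) score
```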

Some tinkering and headaches later, it worked! After a few days and thousands of program guesses, FunSearch was able to find a program that reproduced the new rule Johann had presented. A few hours more, and it even found a rule that was slightly better!

But then we started wondering: do we actually need days of GPU time to do this?

An expert on heuristics we knew had insisted, at the beginning, that we try something simpler. The approach we tried then didn’t work. But after running into some people using genetic programming at a conference last year, we decided to try again, using a Python package they used in their work. This time, it worked like a charm, taking hours rather than days to find good rules.

This was all pretty cool, a great opportunity for me to cut my teeth on Python programming and its various attendant skills. And it’s been inspiring, with Matthias drawing together more people interested in seeing just how much these kinds of heuristic methods can do in this area. I should be clear, though, that so far I don’t think our result is useful. We did better than the state of the art on an example, but only slightly, and in a way that I’d guess doesn’t generalize. And we needed quite a bit of overhead to do it. Ultimately, while I suspect there’s something useful to find in this direction, it’s going to require more collaboration, both with people using the existing methods who know better what the bottlenecks are, and with experts in these, and other, kinds of heuristics.

So I’m curious to see what the future holds. And for the moment, happy that I got to try this out!

Physics Gets Easier, Then Harder

Some people have stories about an inspiring teacher who introduced them to their life’s passion. My story is different: I became a physicist due to a famously bad teacher.

My high school was, in general, a good place to learn science, but physics was the exception. The teacher at the time had a bad reputation, and while I don’t remember exactly why, I do remember his students didn’t end up learning much physics. My parents were aware of the problem, and aware that physics was something I might have a real talent for. I was already going to take math at the university, having passed calculus at the high school the year before, taking advantage of a program that let advanced high school students take free university classes. Why not take physics at the university too?

This ended up giving me a huge head-start, letting me skip ahead to the fun stuff when I started my Bachelor’s degree two years later. But in retrospect, I’m realizing it helped me even more. Skipping high-school physics didn’t just let me move ahead: it also let me avoid a class that is in many ways more difficult than university physics.

High school physics is a mess of mind-numbing formulas. How is velocity related to time, or acceleration to displacement? What’s the current generated by a changing magnetic field, or the magnetic field generated by a current? Students learn a pile of apparently different procedures to calculate things that they usually don’t particularly care about.

Once you know some math, though, you learn that most of these formulas are related. Integration and differentiation turn the mess of formulas about acceleration and velocity into a few simple definitions. Understand vectors, and instead of a stack of different rules about magnets and circuits you can learn Maxwell’s equations, which show how all of those seemingly arbitrary rules fit together in one reasonable package.
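
For example, the constant-acceleration formulas that get memorized one by one in a high school class all follow from two definitions and a couple of integrals:

```latex
% Two definitions (here for constant acceleration a):
v = \frac{dx}{dt}, \qquad a = \frac{dv}{dt}
% ...integrate once and twice, then eliminate t, to recover the memorized formulas:
v(t) = v_0 + a t, \qquad
x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2, \qquad
v^2 = v_0^2 + 2a\,(x - x_0).
```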

This doesn’t just happen when you go from high school physics to first-year university physics. The pattern keeps going.

In a textbook, you might see four equations to represent what Maxwell found. But once you’ve learned special relativity and some special notation, they combine into something much simpler. Instead of having to keep track of forces in diagrams, you can write down a Lagrangian and get the laws of motion with a reliable procedure. Instead of a mess of creation and annihilation operators, you can use a path integral. The more physics you learn, the more seemingly different ideas get unified, the less you have to memorize and the more just makes sense. The more physics you study, the easier it gets.
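
Concretely, here are the four vector-calculus equations from the textbook and the two-line relativistic repackaging they combine into, with F the field-strength tensor built out of E and B:

```latex
% Maxwell's equations, textbook form (SI units):
\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \vec{B} = 0, \qquad
\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \qquad
\nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}
% ...become, in relativistic notation:
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu, \qquad
\partial_{[\lambda} F_{\mu\nu]} = 0 .
```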

Until, that is, it doesn’t anymore. A physics education is meant to catch you up to the state of the art, and it does. But while the physics along the way has been cleaned up, the state of the art has not. We don’t yet have a unified set of physical laws, or even a unified way to do physics. Doing real research means once again learning the details: quantum computing algorithms or Monte Carlo simulation strategies, statistical tools or integrable models, atomic lattices or topological field theories.

Most of the confusions along the way were research problems in their own day. Electricity and magnetism were understood and unified piece by piece, one phenomenon after another, before Maxwell linked them all together and Lorentz, Poincaré, and Einstein linked them further still. Where a student once might have had to learn a mess of particles with names like J/Psi, now they need just six types of quarks.

So if you’re a student now, don’t despair. Physics will get easier, things will make more sense. And if you keep pursuing it, eventually, it will stop making sense once again.

Science Journalism Tasting Notes

When you’ve done a lot of science communication you start to see patterns. You notice the choices people make when they write a public talk or a TV script, the different goals and practical constraints that shape a piece. I’ve likened it to watching an old kung fu movie and seeing where the wires are.

I don’t have a lot of experience doing science journalism, so I can’t see the wires yet. But I’m starting to notice things, subtle elements like notes at a wine-tasting. Just like science communication by academics, science journalism is shaped by a variety of different goals.

First, there’s the need for news to be “new”. A classic news story is about something that happened recently, or even something that’s happening right now. Historical stories usually only show up as new “revelations”, something the journalist or a researcher recently dug up. This isn’t a strict requirement, and it seems looser in science journalism than in other types of journalism: sometimes you can have a piece on something cool the audience might not know, even if it’s not “new”. But it shapes how things are covered: it means that a piece on something old will often have something tying it back to a recent paper or an ongoing research topic.

Then, a news story should usually also be a “story”. Science communication can sometimes involve a grab-bag of different topics, like a TED talk that shows off a few different examples. Journalistic pieces often try to deliver one core message, with details that don’t fit the narrative needing to wait for another piece where they fit better. You might be tempted to round this off to saying that journalists are better writers than academics, since it’s easier for a reader to absorb one message than many. But I think it also ties to the structure. Journalists do have content with multiple messages; it just usually gets published not as one story but as a thematic collection of stories.

Combining those two goals, there’s a tendency for news to focus on what happened. “First they had the idea, then there were challenges, then they made their discovery, now they look to the future.” You can’t just do that, though, because of another goal: pedagogy. Your audience doesn’t know everything you know. In order for them to understand what happened, there are often other things they have to understand. In non-science news, this can sometimes be brief, a paragraph that gives the background for people who have been “living under a rock”. In science news, there’s a lot more to explain. You have to teach something, and teaching well can demand a structure very different from the one-step-at-a-time narrative of what happened. Balancing these two is tricky, and it’s something I’m still learning how to do, as the editors who’ve had to rearrange some of my pieces to make the story flow better can attest.

News in general cares about being independent, about journalists who figure out the story and tell the truth regardless of what the people in power are saying. Science news is strange because, if a scientist gets covered at all, it’s almost always positive. Aside from the occasional scandal or replication crisis, science news tends to portray scientific developments as valuable, “good news” rather than “bad news”. If you’re a politician or a company, hearing from a journalist might make you worry. If you say the wrong thing, you might come off badly. If you’re a scientist, your biggest worry is that a journalist might twist your words into a falsehood that makes your work sound too good. On the other hand, a journalist who regularly publishes negative things about scientists would probably have a hard time finding scientists to talk to! There are basic journalistic-ethics questions here, the kind one probably learns about at journalism school, and those of us who sneak in with no training have to learn them another way.

These are the flavors I’ve tasted so far: novelty and narrative vs. education, positivity vs. accuracy. I’ll doubtless see more over the years, and go from someone who kind of knows what they’re doing to someone who can mentor others. With that in mind, I should get to writing!

Ways Freelance Journalism Is Different From Academic Writing

A while back, I was surprised when I saw the writer of a well-researched webcomic assume that academics are paid for their articles. I ended up writing a post explaining how academic publishing actually works.

Now that I’m out of academia, I’m noticing some confusion on the other side. I’m doing freelance journalism, and the academics I talk to tend to have some common misunderstandings. So academics, this post is for you: a FAQ of questions I’ve been asked about freelance journalism. Freelance journalism is more varied than academia, and I’ve only been doing it a little while, so all of my answers will be limited to my experience.

Q: What happens first? Do they ask you to write something? Do you write an article and send it to them?

Academics are used to writing an article, then sending it to a journal, which sends it out to reviewers to decide whether to accept it. In freelance journalism in my experience, you almost never write an article before it’s accepted. (I can think of one exception I’ve run into, and that was for an opinion piece.)

Sometimes, an editor reaches out to a freelancer and asks them to take on an assignment to write a particular sort of article. This happens more often for freelancers who have been working with particular editors for a long time. I’m new to this, so the majority of the time I have to “pitch”. That means I email an editor describing the kind of piece I want to write. I give a short description of the topic and why it’s interesting. If the editor is interested, they’ll ask some follow-up questions, then tell me what they want me to focus on, how long the piece should be, and how much they’ll pay me. (The last two are related: many places pay by the word.) After that, I can write a draft.

Q: Wait, you’re paid by the word? Then why not make your articles super long, like Victor Hugo?

I’m paid per word assigned, not per word in the finished piece. The piece doesn’t have to strictly stick to the word limit, but it should be roughly the right size, and I work with the editor to try to get it there. In practice, places seem to have a few standard size ranges and internal terminology for what they are (“blog”, “essay”, “short news”, “feature”). These aren’t always the same as the categories readers see online. Some places have a web page listing these categories for prospective freelancers, but many don’t, so you have to either infer them from the lengths of articles online or learn them over time from the editors.

Q: Why didn’t you mention this important person or idea?

Because pieces are paid by the word, longer pieces cost an outlet more, so it’s easier as a freelancer to sell shorter pieces than longer ones. For science news, favoring shorter pieces also makes some pedagogical sense. People usually take away only a few key messages from a piece; if you try to pack in too much, you run a serious risk of losing people. After I’ve submitted a draft, I work with the editor to polish it, and usually that means cutting side-stories and “by-the-ways” to make the key points as vivid as possible.

Q: Do you do those cool illustrations?

Academia has a big focus on individual merit. The expectation is that when you write something, you do almost all of the work yourself, to the extent that more programming-heavy fields like physics and math do their own typesetting.

Industry, including journalism, is more comfortable delegating. Places will generally have someone on-staff to handle illustrations. I suggest diagrams that could be helpful to the piece and do a sketch of what they could look like, but it’s someone else’s job to turn that into nice readable graphic design.

Q: Why is the title like that? Why doesn’t that sound like you?

Editors in journalistic outlets are much more involved than in academic journals. Editors won’t just suggest edits, they’ll change wording directly and even write full sentences of their own. The title and subtitle of a piece in particular can change a lot (in part because they impact SEO), and in some places these can be changed by the editor quite late in the process. I’ve had a few pieces whose title changed after I’d signed off on them, or even after they first appeared.

Q: Are your pieces peer-reviewed?

The news doesn’t have peer review, no. Some places, like Quanta Magazine, do fact-checking. Quanta pays independent fact-checkers for longer pieces, while for shorter pieces it’s the writer’s job to verify key facts, confirming dates and the accuracy of quotes.

Q: Can you show me the piece before it’s published, so I can check it?

That’s almost never an option. Journalists tend to have strict rules about showing a piece before it’s published, rules that come from more political beats, where they want to preserve the ability to surprise wrongdoers and the independence to form their own conclusions. Science news seems like it shouldn’t require this kind of thing as much; it’s not like we normally write hit pieces. But we’re not publicists either.

In a few cases, I’ve had people who were worried about something being conveyed incorrectly, or misleadingly. For those, I offer to do more in the fact-checking stage. I can sometimes show you quotes or paraphrase how I’m describing something, to check whether I’m getting something wrong. But under no circumstances can I show you the full text.

Q: What can I do to make it more likely I’ll get quoted?

Pieces are short, and written for a general, if educated, audience. Long quotes are harder to use because they eat into word count, and quotes with technical terms are harder to use because we try to limit the number of terms we ask the reader to remember. Quotes that mention a lot of concepts can be harder to find a place for, too: concepts are introduced gradually over the piece, so a quote that mentions almost everything that comes up will only make sense to the reader at the very end.

In a science news piece, quotes can serve a couple different roles. They can give authority, an expert’s judgement confirming that something is important or real. They can convey excitement, letting the reader see a scientist’s emotions. And sometimes, they can give an explanation. This last only happens when the explanation is very efficient and clear. If the journalist can give a better explanation, they’re likely to use that instead.

So if you want to be quoted, keep that in mind. Try to say things that are short and don’t use a lot of technical jargon or bring in too many concepts at once. Convey judgement, which things are important and why, and convey passion, what drives you and excites you about a topic. I am allowed to edit quotes down, so I can take a piece of a longer quote that’s cleaner, or cut a long list of examples from an otherwise compelling statement. I can correct grammar and get rid of filler words and obvious mistakes. But I can’t put words in your mouth: I have to work with what you actually said, and if you don’t say anything I can use, then you won’t get quoted.