
To Measure Something or to Test It

Black holes have been in the news a couple times recently.

On one end, there was the observation of an extremely large black hole in the early universe, when no black holes of the kind were expected to exist. My understanding is this is very much a “big if true” kind of claim, something that would have dramatic implications if it holds up, but may simply be a misunderstanding. At the moment, I’m not going to try to work out which one it is.

In between, you have a piece by me in Quanta Magazine a couple weeks ago, about tests of whether black holes deviate from general relativity. They don’t, by the way, according to the tests so far.

And on the other end, you have the coverage last week of a “confirmation” (or even “proof”) of the black hole area law.

The black hole area law states that the total area of the event horizons of all black holes will always increase. It’s also known as the second law of black hole thermodynamics, paralleling the second law of thermodynamics that entropy always increases. Hawking proved this as a theorem in 1971, assuming that general relativity holds true.

(That leaves out quantum effects, which indeed can make black holes shrink, as Hawking himself famously later argued.)

The black hole area law is supposed to hold even when two black holes collide and merge. While the combination may lose energy (leading to gravitational waves that carry energy to us), it will still have greater area, in the end, than the combined area of the black holes that merged to make it.
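As a rough illustration of why the combined area still grows (my own back-of-the-envelope numbers, not the collaboration's), consider non-spinning black holes: a Schwarzschild horizon's area scales as the mass squared, so even though a GW150914-like merger of roughly 36 and 29 solar masses radiates about 3 solar masses away as gravitational waves, 62² comfortably exceeds 36² + 29².

```python
import math

# Illustrative check of the area law for non-spinning (Schwarzschild)
# black holes, using GW150914-like masses as an example. These numbers
# are my own illustration, not taken from the LIGO/Virgo/KAGRA paper.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def horizon_area(mass_solar):
    """Event-horizon area of a Schwarzschild black hole, in m^2."""
    r = 2 * G * mass_solar * M_sun / c**2   # Schwarzschild radius
    return 4 * math.pi * r**2

a1, a2 = horizon_area(36), horizon_area(29)
a_final = horizon_area(62)   # ~3 solar masses radiated away

print(a_final > a1 + a2)     # area grows even though total mass shrank
```

Real merging black holes spin, so the actual test uses the Kerr horizon area instead, but the qualitative point is the same.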

Ok, so that’s the area law. What’s this paper that’s supposed to “finally prove” it?

The LIGO, Virgo, and KAGRA collaborations recently published a paper based on gravitational waves from one particularly clear collision of black holes, which they measured back in January. They compared their measurements to predictions from general relativity, checking two things: whether the measurements agreed with predictions based on the Kerr metric (how space-time around a rotating black hole is supposed to behave), and whether they obeyed the area law.

The first check isn’t so different in purpose from the work I wrote about in Quanta Magazine, just using different methods. In both studies, physicists are looking for deviations from the laws of general relativity, triggered by the highly curved environments around black holes. These deviations could show up in one way or another in any black hole collision, so while you would ideally look for them by scanning over many collisions (as the paper I reported on did), you could do a meaningful test even with just one collision. That kind of check may not be very strenuous (if general relativity is wrong, it’s likely wrong by a very small amount), but it’s still an opportunity, diligently sought, to be proven wrong.

The second check is the one that got the headlines. It also got first billing in the paper title, and a decent amount of verbiage in the paper itself. And if you think about it for more than five minutes, it doesn’t make a ton of sense as presented.

Suppose the black hole area law is wrong, and sometimes black holes lose area when they collide. Even if this happened sometimes, you wouldn’t expect it to happen every time. It’s not like anyone is pondering a reverse black hole area law, where black holes only shrink!

Because of that, I think it’s better to say that LIGO measured the black hole area law for this collision, while they tested whether black holes obey the Kerr metric. In one case, they’re just observing what happened in this one situation. In the other, they can try to draw implications for other collisions.

That doesn’t mean their work wasn’t impressive, but it was impressive for reasons that don’t seem to be getting emphasized. It’s impressive because, prior to this paper, they had not managed to measure the areas of colliding black holes well enough to confirm that they obeyed the area law! The previous collisions looked like they obeyed the law, but when you factor in the experimental error they couldn’t say it with confidence. The current measurement is better, and can. So the new measurement is interesting not because it confirms a fundamental law of the universe or anything like that…it’s interesting because previous measurements were so bad that they couldn’t even confirm this kind of fundamental law!

That, incidentally, feels like a “missing mood” in pop science. Some things are impressive not because of their amazing scale or awesome implications, but because they are unexpectedly, unintuitively, really really hard to do. These measurements shouldn’t be thought of, or billed, as tests of nature’s fundamental laws. Instead they’re interesting because they highlight what we’re capable of, and what we still need to accomplish.

Newsworthiness Bias

I had a chat about journalism recently, and I had a realization about just how weird science journalism, in particular, is.

Journalists aren’t supposed to be cheerleaders. Journalism and PR have very different goals (which is why I keep those sides of my work separate). A journalist is supposed to be uncompromising, to write the truth even if it paints the source in a bad light.

Norms are built around this. Serious journalistic outlets usually don’t let sources see pieces before they’re published. The source doesn’t have the final say in how they’re portrayed: the journalist reserves the right to surprise them if justified. Investigative journalists can be superstars, digging up damning secrets about the powerful.

When a journalist starts a project, the piece might turn out positive, or negative. A politician might be the best path forward, or a disingenuous grifter. A business might be a great investment opportunity, or a total scam. A popular piece of art might be a triumph, or a disappointment.

And a scientific result?

It might be a fraud, of course. Scientific fraud does exist, and is a real problem. But it’s not common, really. Pick a random scientific paper, filter by papers you might consider reporting on in the first place, and you’re very unlikely to find a fraudulent result. Science journalists occasionally report on spectacularly audacious scientific frauds, or frauds in papers that have already made the headlines. But you don’t expect fraud in the average paper you cover.

It might be scientifically misguided: flawed statistics, a gap in a proof, a misuse of concepts. Journalists aren’t usually equipped to ferret out these issues, though. Instead, this is handled in principle by peer review, and in practice by the scientific community outside of the peer review process.

Instead, for a scientific result, the most common negative judgement isn’t that it’s a lie, or a mistake. It’s that it’s boring.

And certainly, a good science journalist can judge a paper as boring. But there is a key difference between doing that, and judging a politician as crooked or a popular work of art as mediocre. You can write an article about the lying candidate for governor, or the letdown Tarantino movie. But if a scientific result is boring, and nobody else has covered it…then it isn’t newsworthy.

In science, people don’t usually publish their failures, their negative results, their ho-hum obvious conclusions. That fills the literature with only the successes, a phenomenon called publication bias. It also means, though, that scientists try to make their results sound more successful, more important and interesting, than they actually are. Some of the folks fighting the replication crisis have coined a term for this: they call it importance hacking.

The same incentives apply to journalists, especially freelancers. Starting out, it was far from clear that I could make enough to live on. I felt like I had to make every lead count, to find a newsworthy angle on every story idea I could find, because who knew when I would find another one? Over time, I learned to balance that pull better. Now that I’m making most of my income from consulting instead, the pressure has eased almost entirely: there are things I’m tempted to importance-hack for the sake of friends, but nothing that I need to importance-hack to stay in the black.

Doing journalism on the side may be good for me personally at the moment, but it’s not really a model. Much like we need career scientists, even if their work is sometimes boring, we need career journalists, even if they’re sometimes pressured to overhype.

So if we don’t want to incentivize science journalists to be science cheerleaders, what can we do instead?

In science, one way to address publication bias is with pre-registered studies. A scientist sets out what they plan to test, and a journal agrees to publish the result, no matter what it is. You could imagine something like this for science journalism. I once proposed a recurring column where every month I would cover a random paper from arXiv.org, explaining what it meant to accomplish. I get why the idea was turned down, but I still think about it.

In journalism, the closest parallel comes from the arts, which take a different approach. There are many negative reviews of books, movies, and music, and most of them merely accuse the art of being boring, not evil. These reviews exist because they focus on popular works that people pay attention to anyway, so any negative coverage has someone to convince. You could imagine applying this model to science, though it could be a bit silly. I’m envisioning a journalist who writes an article every time Witten publishes, rating some papers impressive and others disappointing, the same way a music journalist might cover every Taylor Swift album.

Neither of these models is really satisfactory. You could imagine an even more adversarial model, where journalists run around accusing random scientists of wasting the government’s money, but that seems dramatically worse.

So I’m not sure. Science is weird, and hard to accurately value: if we knew how much something mattered already, it would be engineering, not science. Journalism is weird: it’s public-facing research, where the public facing is the whole point. Their combination? Even weirder.

Bonus Info on the LHC and Beyond

Three of my science journalism pieces went up last week!

(This is a total coincidence. One piece was a general explainer “held in reserve” for a nice slot in the schedule, one was a piece I drafted in February, while the third I worked on in May. In journalism, things take as long as they take.)

The shortest piece, at Quanta Magazine, was an explainer about the two types of particles in physics: bosons, and fermions.

I don’t have a ton of bonus info here, because of how tidy the topic is, so just two quick observations.

First, I have the vague impression that Bose, bosons’ namesake, is “claimed” by both modern-day Bangladesh and India. I had friends in grad school who were proud of their fellow physicist from Bangladesh, but while he did his most famous work in Dhaka, he was born and died in Calcutta. Since both cities were part of British India for most of his life, these things likely get complicated.

Second, at the end of the piece I mention a “world on a wire” where fermions and bosons are the same. One example of such a “wire” is a string, like in string theory. One thing all young string theorists learn is “bosonization”: the idea that, in a 1+1-dimensional world like a string, you can re-write any theory with fermions as a theory with bosons, as well as vice versa. This has important implications for how string theory is set up.
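For the physicists: schematically, the bosonization dictionary in 1+1 dimensions looks like the following (normalizations and signs vary by convention; this is meant to give the flavor of the map, not a precise statement):

```latex
% Schematic Abelian bosonization in 1+1 dimensions.
% Conventions for normalizations and signs differ between references.
\psi_{\pm}(x) \;\sim\; :\!e^{\pm i\sqrt{4\pi}\,\phi_{\pm}(x)}\!:\,,
\qquad
\bar\psi\gamma^{\mu}\psi \;\sim\; \frac{1}{\sqrt{\pi}}\,\epsilon^{\mu\nu}\partial_{\nu}\phi\,,
\qquad
\bar\psi\psi \;\sim\; \cos\!\bigl(\sqrt{4\pi}\,\phi\bigr)\,.
```

The chiral fermion becomes a vertex operator of the boson, the fermion current becomes a derivative of the boson, and a fermion mass term becomes a cosine potential, the correspondence behind the famous Thirring/sine-Gordon duality.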

Next, in Ars Technica, I had a piece about how LHC physicists are using machine learning to untangle the implications of quantum interference.

As a journalist, it’s really easy to fall into a trap where you give the main person you interview too much credit: after all, you’re approaching the story from their perspective. I tried to be cautious about this, only to be stymied when literally everyone else I interviewed praised Aishik Ghosh to the skies and credited him with being the core motivating force behind the project. So I shrugged my shoulders and followed suit. My understanding is that he has been appropriately rewarded and will soon be a professor at Georgia Tech.

I didn’t list the inventors of the NSBI method that Ghosh and co. used, but names like Kyle Cranmer and Johann Brehmer tend to get bandied about. It’s a method that was originally explored for a more general goal, trying to characterize what the Standard Model might be missing, while the work I talk about in the piece takes it in a new direction, closer to the typical things the ATLAS collaboration looks for.

I also did not say nearly as much as I was tempted to about how the ATLAS collaboration publishes papers, which was honestly one of the most intriguing parts of the story for me. There is a huge amount of review that goes on inside ATLAS before one of their papers reaches the outside world, far more than there ever is in a journal’s peer review process. This is especially true for “physics papers”, where ATLAS is announcing a new conclusion about the physical world, as ATLAS’s reputation stands on those conclusions being reliable. That means starting with an “internal note” that’s hundreds of pages long (and sometimes over a thousand), convening an editorial board that manages the editing process, disseminating the paper to the entire collaboration for comment, and getting specific experts and institute groups within the collaboration to read through the paper in detail.

The process is a bit less onerous for “technical papers”, which describe a new method, not a new conclusion about the world. Still, it’s cumbersome enough that scientists often don’t publish those papers “within ATLAS” at all, instead releasing them independently.

The results I reported on are special because they involved a physics paper and a technical paper, both within the ATLAS collaboration process. Instead of just working with partial or simplified data, the team wanted to demonstrate the method on a “full analysis”, with all the computation and human coordination that requires. Normally, ATLAS wouldn’t go through the whole process of publishing a physics paper without basing it on new data, but this was different: the method had the potential to be so powerful that the more precise results would be worth stating as physics results alone.

(Also, for the people in the comments worried about training a model on old data: that’s not what they did. In physics, they don’t try to train a neural network to predict the results of colliders; such a model wouldn’t tell us anything useful. They run colliders to tell us whether what they see matches the analytic Standard Model. The neural network is trained to predict not what the experiment will say, but what the Standard Model will say, since we can usually only figure that out through time-consuming simulations. So it’s trained on (new) simulations, not on experimental data.)

Finally, on Friday I had a piece in Physics Today about the European Strategy for Particle Physics (or ESPP), and in particular, plans for the next big collider.

Before I even started working on this piece, I saw a thread by Patrick Koppenburg on some of the 263 documents submitted for the ESPP update. While my piece ended up mostly focused on the big circular collider plan that most of the field is converging on (the future circular collider, or FCC), Koppenburg’s thread was more wide-ranging, meant to illustrate the breadth of ideas under discussion. Some of that discussion is about the LHC’s current plans, like its “high-luminosity” upgrade that will see it gather data at much higher rates up until 2040. Some of it is assessing broader concerns, which it may surprise some of you to learn includes sustainability: yes, there are more or less sustainable ways to build giant colliders.

The most fun part of the discussion, though, concerns all of the other collider proposals.

Some report progress on new technologies. Muon colliders are the most famous of these, but there are other proposals that would specifically help with a linear collider. I never did end up understanding what Cooled Copper Colliders are all about, beyond that they let you get more energy in a smaller machine without super-cooling. If you know about them, chime in in the comments! Meanwhile, plasma wakefield acceleration could accelerate electrons on a wave of plasma. This has a disadvantage: you want to collide electrons and positrons, and if you try to stick a positron in plasma it will happily annihilate with the first electron it meets. So what do you do? You go half-and-half, with the HALHF project: speed up the electron with a plasma wakefield, accelerate the positron normally, and have them meet in the middle.

Others are backup plans, or “budget options”, where CERN could get somewhat better measurements of some parameters if they can’t stir up the funding to measure the things they really want. They could put electrons and positrons into the LHC tunnel instead of building a new one, for a weaker machine that could still study the Higgs boson to some extent. They could use a similar experiment to produce Z bosons instead, which could serve as a bridge to a different collider project. Or, they could collide the LHC’s proton beam with an electron beam, for an experiment that mixes advantages and disadvantages of some of the other approaches.

While working on the piece, one resource I found invaluable was this colloquium talk by Tristan du Pree, where he goes through the 263 submissions and digs up a lot of interesting numbers and commentary. Read the slides for quotes from the different national inputs and “solo inputs” with comments from particular senior scientists. I used that talk to get a broad impression of what the community was feeling, and it was interesting how well it was reflected in the people I interviewed. The physicist based in Switzerland felt the most urgency for the FCC plan, while the Dutch sources were more cautious, with other Europeans firmly in the middle.

Going over the FCC report itself, one thing I decided to leave out of the discussion was the cost-benefit analysis. There’s the potential for a cute sound-bite there, “see, the collider is net positive!”, but I’m pretty skeptical of the kind of analysis they’re doing there, even if it is standard practice for government projects. Between the biggest benefits listed being industrial benefits to suppliers and early-career researcher training (is a collider unusually good for either of those things, compared to other ways we spend money?) and the fact that about 10% of the benefit is the science itself (where could one possibly get a number like that?), it feels like whatever reasoning is behind this is probably the kind of thing that makes rigor-minded economists wince. I wasn’t able to track down the full calculation though, so I really don’t know, maybe this makes more sense than it looks.

I think a stronger argument than anything along those lines is a much more basic point, about expertise. Right now, we have a community of people trying to do something that is not merely difficult, but fundamental. This isn’t like sending people to space, where many of the engineering concerns will go away when we can send robots instead. This is fundamental engineering progress in how to manipulate the forces of nature (extremely powerful magnets, high voltages) and process huge streams of data. Pushing those technologies to the limit seems like it’s going to be relevant, almost no matter what we end up doing. That’s still not putting the science first and foremost, but it feels a bit closer to an honest appraisal of what good projects like this do for the world.

Bonus info for Reversible Computing and Megastructures

After some delay, a bonus info post!

At FirstPrinciples.org, I had a piece covering work by engineering professor Colin McInnes on stability of Dyson spheres and ringworlds. This was a fun one to cover, mostly because of how it straddles the borderline between science fiction and practical physics and engineering. McInnes’s claim to fame is work on solar sails, which seem like a paradigmatic example of that kind of thing: a common sci-fi theme that’s surprisingly viable. His work on stability was interesting to me because it’s the kind of work that a century and a half ago would have been paradigmatic physics. Now, though, very few physicists work on orbital mechanics, and a lot of the core questions have passed on to engineering. It’s fascinating to see how these classic old problems can still have undiscovered solutions, and how the people best equipped to find them now are tinkerers practicing their tools instead of cutting-edge mathematicians.

At Quanta Magazine, I had a piece about reversible computing. Readers may remember I had another piece on that topic at the end of March, a profile of the startup Vaire Computing at FirstPrinciples.org. The bonus post for that piece talked about FirstPrinciples itself, but didn’t say much about reversible computing. I figured I’d combine the “bonus info” for both posts here.

Neither piece went into much detail about the engineering involved, as it didn’t really make sense in either venue. One thing that amused me a bit is that the core technology that drove Vaire into action is something that should actually be very familiar to a physics or engineering student: a resonator. Theirs is obviously quite a bit more sophisticated than the base model, but at its heart it’s doing the same thing: storing charge and controlling frequency. It turns out that both are essential to making reversible computers work: you need to store charge so it isn’t lost to ground when you empty a transistor, and you need to control the frequency so the voltage changes gradually, with gentle transitions instead of the sharper corners of the waves used in normal computers, wasting less heat in rapid changes of voltage. Vaire recently announced they’re getting 50% charge recovery from their test chips, and they’re working on raising that number.
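The “gentle transitions” intuition can be made concrete with a toy calculation (my own sketch, not Vaire’s actual circuit): charging a capacitor through a resistor with an abrupt voltage step dissipates about ½CV² as heat no matter what the resistance is, while ramping the supply up slowly dissipates only a fraction of that, shrinking as the ramp gets slower.

```python
# A toy numerical illustration of adiabatic charging. The component
# values are arbitrary demo numbers, not anyone's real chip.
R, C, V = 1.0, 1e-6, 1.0       # ohms, farads, volts

def dissipated(ramp_time, t_total, steps=200_000):
    """Heat dissipated in R while charging C from a supply that ramps
    from 0 to V over ramp_time (0 means an abrupt step)."""
    dt = t_total / steps
    vc, heat = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        vs = V if ramp_time == 0 else min(V, V * t / ramp_time)
        i_r = (vs - vc) / R               # current through the resistor
        heat += i_r**2 * R * dt           # Joule heating this step
        vc += i_r / C * dt                # capacitor charges up
    return heat

abrupt = dissipated(0.0, 50 * R * C)          # step: close to 0.5*C*V**2
gentle = dissipated(50 * R * C, 100 * R * C)  # slow ramp over 50 RC times
print(abrupt, gentle)  # the ramp dissipates a small fraction of the step
```

Roughly, the ramped version dissipates on the order of (RC/T)·CV² for ramp time T, so doubling the ramp time halves the waste heat; recovering the stored charge afterward, rather than dumping it to ground, is the other half of the trick.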

Originally, the Quanta piece was focused more on reversible programming than energy use, as the energy angle seemed a bit more physics-focused than their computer science desk usually goes. The emphasis ended up changing as I worked on the draft, but it meant that an interesting parallel story got lost on the cutting-room floor. There’s a community of people who study reversible computing not from the engineering side, but from the computer science side, studying reversible logic and reversible programming languages. It’s a pursuit that goes back to the 1980s, when a group of students at Caltech, around the time Feynman was teaching his course on the physics of computing, were figuring out how to set up a reversible programming language. They called their creation Janus and sent a description to Landauer; the letter ended up with Michael Frank after Landauer died. There’s a lovely quote from it regarding their motivation: “We did it out of curiosity over whether such an odd animal as this was possible, and because we were interested in knowing where we put information when we programmed. Janus forced us to pay attention to where our bits went since none could be thrown away.”

Being forced to pay attention to information, in turn, is what has animated the computer science side of the reversible computing community. There are applications to debugging, where you can run code backwards when it gets stuck, to encryption and compression, where you want to be able to recover the information you hid away, and to security, where you want to keep track of information to make sure a hacker can’t figure out things they shouldn’t. Also, for a lot of these people, it’s just a fun puzzle. Early on my attention was caught by a paper by Hannah Earley describing a programming language called Alethe, a word you might recognize from the Greek word for truth, which literally means something like “not-forgetting”.

(Compression is particularly relevant for the “garbage data” you need to output in a reversible computation. If you want to add two numbers reversibly, naively you need to keep both input numbers and their output, but you can be more clever than that and just keep one of the inputs since you can subtract to find the other. There are a lot of substantially more clever tricks in this vein people have figured out over the years.)
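The trick described above can be sketched in a couple of lines: the map (x, y) → (x, x + y) is a bijection, so keeping one input alongside the sum loses no information.

```python
# A minimal sketch of the "clever" reversible addition described above:
# instead of keeping both inputs and the sum, keep one input and the
# sum, since the other input can always be recovered by subtraction.
def add_forward(x, y):
    return x, x + y          # keep x and the sum; y is recoverable

def add_backward(x, s):
    return x, s - x          # subtract to "un-compute" the other input

x, s = add_forward(3, 4)
assert add_backward(x, s) == (3, 4)   # no information was thrown away
```

The naive version, which keeps (x, y, x + y), works too, but carries around a redundant "garbage" number; cleverness like this is about shrinking that garbage.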

I didn’t say anything about the other engineering approaches to reversible computing, which try to do something outside of traditional computer chips. There’s DNA computing, which tries to compute with a bunch of DNA in solution. There’s the old concept of ballistic reversible computing, where you imagine a computer that runs like a bunch of colliding billiard balls, conserving energy. Coordinating such a computer can be a nightmare, and early theoretical ideas were shown to be disrupted by something as tiny as a few stray photons from a distant star. But people like Frank figured out ways around the coordination problem, and groups have experimented with superconductors as places to toss those billiard balls around. The early billiard-inspired designs also had a big impact on quantum computing, where you need reversible gates and the only irreversible operation is the measurement. The name “Toffoli” comes up a lot in quantum computing discussions; I hadn’t known before this that Toffoli gates were originally for reversible computing in general, not specifically quantum computing.
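For the curious, the Toffoli gate (a controlled-controlled-NOT) is easy to check by brute force: it permutes the eight possible three-bit inputs and is its own inverse, which is exactly what a reversible gate needs.

```python
from itertools import product

# The Toffoli gate flips the third bit only when the first two are 1.
# Note that toffoli(a, b, 0) = (a, b, a & b): a reversible AND gate.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

inputs = list(product([0, 1], repeat=3))
outputs = [toffoli(*bits) for bits in inputs]

assert sorted(outputs) == inputs                                  # a bijection
assert all(toffoli(*toffoli(*bits)) == bits for bits in inputs)   # self-inverse
print("Toffoli is reversible")
```

Since AND (plus the ability to copy and flip bits) is enough to build any classical circuit, this one reversible gate is universal, which is why it shows up in both classical reversible computing and quantum computing.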

Finally, I only gestured at the sci-fi angle. For reversible computing’s die-hards, it isn’t just a way to make efficient computers now. It’s the ultimate future of the technology, the kind of energy-efficiency civilization will need when we’re covering stars with shells of “computronium” full of busy joyous artificial minds.

And now that I think about it, they should chat with McInnes. He can tell them the kinds of stars they should build around.

Branching Out, and Some Ground Rules

In January, my time at the Niels Bohr Institute ended. Instead of supporting myself by doing science, as I’d done for the last thirteen or so years, I started making a living by writing, doing science journalism.

That work picked up. My readers here have seen a few of the pieces already, but there are lots more in the pipeline, getting refined by editors or waiting to be published. It’s given me a bit of income, and a lot of visibility.

That visibility, in turn, has given me new options. It turns out that magazines aren’t the only companies interested in science writing, and journalism isn’t the only way to write for a living. Companies that invest in science want a different kind of writing, one that builds their reputation both with the public and with the scientific community. And as I’ve discovered, if you have enough of a track record, some of those companies will reach out to you.

So I’m branching out, from science journalism to science communications consulting, advising companies how to communicate science. I’ve started working with an exciting client, with big plans for the future. If you follow me on LinkedIn, you’ll have seen a bit about who they are and what I’ll be doing for them.

Here on the blog, I’d like to maintain a bit more separation. Blogging is closer to journalism, and in journalism, one ought to be careful about conflicts of interest. The advice I’ve gotten is that it’s good to establish some ground rules, separating my communications work from my journalistic work, since I intend to keep doing both.

So without further ado, my conflict of interest rules:

  • I will not write in a journalistic capacity about my consulting clients, or their direct competitors.
  • I will not write in a journalistic capacity about the technology my clients are investing in, except in extremely general terms. (For example, most businesses right now are investing in AI. I’ll still write about AI in general, but not about any particular AI technologies my clients are pursuing.)
  • I will more generally maintain a distinction between areas I cover journalistically and areas where I consult. Right now, this means I avoid writing in a journalistic capacity about:
    • Health/biomedical topics
    • Neuroscience
    • Advanced sensors for medical applications

I plan to update these rules over time as I get a better feeling for what kinds of conflict of interest risks I face and what my clients are comfortable with. I now have a Page for this linked in the top menu; clients and editors can check there to see my current conflict of interest rules.

In Scientific American, With a Piece on Vacuum Decay

I had a piece in Scientific American last week. It’s paywalled, but if you’re a subscriber there you can see it, or you can buy the print magazine.

(I also had two pieces out in other outlets this week. I’ll be saying more about them…in a couple weeks.)

The Scientific American piece is about an apocalyptic particle physics scenario called vacuum decay. It’s a topic I covered last year in Quanta Magazine: an unlikely event in which the Higgs field, which gives fundamental particles their mass, changes value, suddenly making all other particles much more massive and changing physics as we know it. It’s a change that physicists think would start as a small bubble and spread at (almost) the speed of light, covering the universe.

What I wrote for Quanta was a short news piece covering a small adjustment to the calculation, one that made the chance of vacuum decay slightly more likely. (But still mind-bogglingly small, to be clear.)

Scientific American asked for a longer piece, and that gave me space to dig deeper. I was able to say more about how vacuum decay works, with a few metaphors that I think should make it a lot easier to understand. I also got to learn about some new developments, in particular, an interesting story about how tiny primordial black holes could make vacuum decay dramatically more likely.

One thing that was a bit too complicated to talk about was the set of puzzles involved in trying to calculate these chances. In the article, I mention a calculation of the chance of vacuum decay by a team including Matthew Schwartz. That calculation wasn’t the first to estimate the chance of vacuum decay, and it’s not the most recent update either. Instead, I picked it because Schwartz’s team approached the question in what struck me as a more reliable way, trying to cut through confusion by asking the most basic question you can in a quantum theory: given that now you observe X, what’s the chance that later you observe Y? Figuring out how to turn vacuum decay into that kind of question correctly is tricky (for example, you need to include the possibility that vacuum decay happens, then reverses, then happens again).

The calculations of black holes speeding things up weren’t worked out in quite as much detail. I like to think I’ve made a small contribution by motivating them to look at Schwartz’s work, which might spawn a more rigorous calculation in future. When I talked to Schwartz, he wasn’t even sure whether the picture of a bubble forming in one place and spreading at light speed is correct: he’d calculated the chance of the initial decay, but hadn’t found a similarly rigorous way to think about the aftermath. So even more than the uncertainty I talk about in the piece, the questions about new physics and probability, there is even some doubt about whether the whole picture really works the way we’ve been imagining it.

That makes for a murky topic! But it’s also a flashy one, a compelling story for science fiction and the public imagination, and yeah, another motivation to get high-precision measurements of the Higgs and top quark from future colliders! (If maybe not quite the way this guy said it.)

Post on the Weak Gravity Conjecture for FirstPrinciples.org

I have another piece this week on the FirstPrinciples.org Hub. If you’d like to know who they are, I say a bit about my impressions of them in my post on the last piece I had there. They’re still finding their niche, so there may be shifts in the kind of content they cover over time, but for now they’ve given me an opportunity to cover a few topics that are off the beaten path.

This time, the piece is what we in the journalism biz call an “explainer”. Instead of interviewing people about cutting-edge science, I wrote a piece to explain an older idea. It’s an idea that’s pretty cool, in a way I think a lot of people can actually understand: a black hole puzzle that might explain why gravity is the weakest force. It’s an idea that’s had an enormous influence, both in the string theory world where it originated and on people speculating more broadly about the rules of quantum gravity. If you want to learn more, read the piece!
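As a quick sanity check on the “weakest force” claim, here is a back-of-the-envelope Python calculation (my own illustration, not something from the piece) comparing the electric and gravitational attraction between two electrons:

```python
import math

# Standard SI constants (CODATA values).
E_CHARGE = 1.602176634e-19   # elementary charge (C)
EPS0     = 8.8541878128e-12  # vacuum permittivity (F/m)
G        = 6.67430e-11       # Newton's constant (m^3 kg^-1 s^-2)
M_E      = 9.1093837015e-31  # electron mass (kg)

def electric_to_gravity_ratio():
    """Coulomb force / Newtonian gravity for two electrons.
    The 1/r^2 distance dependence cancels in the ratio."""
    coulomb = E_CHARGE**2 / (4 * math.pi * EPS0)
    gravity = G * M_E**2
    return coulomb / gravity

print(f"{electric_to_gravity_ratio():.2e}")  # roughly 4e42
```

Electromagnetism wins by some forty-two orders of magnitude, which gives a sense of the disparity the conjecture is trying to make sense of.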

Since I didn’t interview anyone for this piece, I don’t have the same sort of “bonus content” I sometimes give. Instead of interviewing, I brushed up on the topic, and the best resource I found was this review article written by Dan Harlow, Ben Heidenreich, Matthew Reece, and Tom Rudelius. It gave me a much better idea of the subtleties: how many different ways there are to interpret the original conjecture, and how different attempts to build on it reflect different facets of the idea and highlight different implications. If you are a physicist curious what the whole thing is about, I recommend reading that review: while I try to give a flavor of some of the subtleties, a piece for a broad audience can only do so much.

This Week, at FirstPrinciples.org

I’ve got a piece out this week in a new venue: FirstPrinciples.org, where I’ve written a profile of a startup called Vaire Computing.

Vaire works on reversible computing, an idea that tries to leverage thermodynamics to make a computer that wastes as little heat as possible. While I learned a lot of fun things that didn’t make it into the piece…I’m not going to tell you them this week! That’s because I’m working on another piece about reversible computing, focused on a different aspect of the field. When that piece is out I’ll have a big “bonus material post” talking about what I learned writing both pieces.

This week, instead, the bonus material is about FirstPrinciples.org itself, where you’ll be seeing me write more often in future. The First Principles Foundation was founded by Ildar Shar, a Canadian tech entrepreneur who thinks that physics is pretty cool. (Good taste, that!) His foundation aims to support scientific progress, especially in addressing the big, fundamental questions. They give grants, analyze research trends, build scientific productivity tools…and most relevantly for me, publish science news on their website, in a section called the Hub.

The first time I glanced through the Hub, it was clear that FirstPrinciples and I have a lot in common. Like me, they’re interested both in scientific accomplishments and in the human infrastructure that makes them possible. They’ve interviewed figures in the open access movement, like the creators of arXiv and SciPost. On the science side, they mix coverage of the mainstream and reputable with outsiders challenging the status quo, and hot news topics with explainers of key concepts. They’re still new, and still figuring out what they want to be. But from what I’ve glimpsed so far, it looks like they’re going somewhere good.

Science Journalism Tasting Notes

When you’ve done a lot of science communication you start to see patterns. You notice the choices people make when they write a public talk or a TV script, the different goals and practical constraints that shape a piece. I’ve likened it to watching an old kung fu movie and seeing where the wires are.

I don’t have a lot of experience doing science journalism, so I can’t see the wires yet. But I’m starting to notice things, subtle elements like notes at a wine-tasting. Just like science communication by academics, science journalism is shaped by a variety of different goals.

First, there’s the need for news to be “new”. A classic news story is about something that happened recently, or even something that’s happening right now. Historical stories usually only show up as new “revelations”, something the journalist or a researcher recently dug up. This isn’t a strict requirement, and it seems looser in science journalism than in other types of journalism: sometimes you can have a piece on something cool the audience might not know, even if it’s not “new”. But it shapes how things are covered: it means a piece on something old will often include something tying it back to a recent paper or an ongoing research topic.

Then, a news story should usually also be a “story”. Science communication can sometimes involve a grab-bag of different topics, like a TED talk that shows off a few different examples. Journalistic pieces often try to deliver one core message, with details that don’t fit the narrative needing to wait for another piece where they fit better. You might be tempted to round this off to saying that journalists are better writers than academics, since it’s easier for a reader to absorb one message than many. But I think it also ties to the structure. Journalists do produce content with multiple messages; it just usually isn’t published as one story, but as a thematic collection of stories.

Combining those two goals, there’s a tendency for news to focus on what happened. “First they had the idea, then there were challenges, then they made their discovery, now they look to the future.” You can’t just do that, though, because of another goal: pedagogy. Your audience doesn’t know everything you know. In order for them to understand what happened, there are often other things they have to understand. In non-science news, this can sometimes be brief, a paragraph that gives the background for people who have been “living under a rock”. In science news, there’s a lot more to explain. You have to teach something, and teaching well can demand a structure very different from the one-step-at-a-time narrative of what happened. Balancing these two is tricky, and it’s something I’m still learning how to do, as the editors who’ve had to rearrange some of my pieces to make the story flow better can attest.

News in general cares about being independent, about journalists who figure out the story and tell the truth regardless of what the people in power are saying. Science news is strange because, if a scientist gets covered at all, it’s almost always positive. Aside from the occasional scandal or replication crisis, science news tends to portray scientific developments as valuable, “good news” rather than “bad news”. If you’re a politician or a company, hearing from a journalist might make you worry: if you say the wrong thing, you might come off badly. If you’re a scientist, your biggest worry is that a journalist might twist your words into a falsehood that makes your work sound too good. On the other hand, a journalist who regularly publishes negative things about scientists would probably have a hard time finding scientists to talk to! There are basic journalistic ethics questions here, ones people presumably learn about at journalism school, that those of us who sneak in with no training have to learn another way.

These are the flavors I’ve tasted so far: novelty and narrative vs. education, positivity vs. accuracy. I’ll doubtless see more over the years, and go from someone who kind of knows what they’re doing to someone who can mentor others. With that in mind, I should get to writing!

Ways Freelance Journalism Is Different From Academic Writing

A while back, I was surprised when I saw the writer of a well-researched webcomic assume that academics are paid for their articles. I ended up writing a post explaining how academic publishing actually works.

Now that I’m out of academia, I’m noticing some confusion on the other side. I’m doing freelance journalism, and the academics I talk to tend to have some common misunderstandings. So academics, this post is for you: a FAQ of questions I’ve been asked about freelance journalism. Freelance journalism is more varied than academia, and I’ve only been doing it a little while, so all of my answers will be limited to my experience.

Q: What happens first? Do they ask you to write something? Do you write an article and send it to them?

Academics are used to writing an article, then sending it to a journal, which sends it out to reviewers to decide whether to accept it. In freelance journalism in my experience, you almost never write an article before it’s accepted. (I can think of one exception I’ve run into, and that was for an opinion piece.)

Sometimes, an editor reaches out to a freelancer and asks them to take on an assignment to write a particular sort of article. This happens more often for freelancers who have been working with particular editors for a long time. I’m new to this, so the majority of the time I have to “pitch”. That means I email an editor describing the kind of piece I want to write. I give a short description of the topic and why it’s interesting. If the editor is interested, they’ll ask some follow-up questions, then tell me what they want me to focus on, how long the piece should be, and how much they’ll pay me. (The last two are related: many places pay by the word.) After that, I can write a draft.

Q: Wait, you’re paid by the word? Then why not make your articles super long, like Victor Hugo?

I’m paid per word assigned, not per word in the finished piece. The piece doesn’t have to strictly stick to the word limit, but it should be roughly the right size, and I work with the editor to try to get it there. In practice, places seem to have a few standard size ranges and internal terminology for what they are (“blog”, “essay”, “short news”, “feature”). These aren’t always the same as the categories readers see online. Some places have a web page listing these categories for prospective freelancers, but many don’t, so you have to either infer them from the lengths of articles online or learn them over time from the editors.

Q: Why didn’t you mention this important person or idea?

Because pay scales with length, it’s easier as a freelancer to sell shorter pieces than longer ones. For science news, favoring shorter pieces also makes some pedagogical sense. People usually take away only a few key messages from a piece; if you try to pack in too much, you run a serious risk of losing people. After I’ve submitted a draft, I work with the editor to polish it, and usually that means cutting side-stories and “by-the-ways” to make the key points as vivid as possible.

Q: Do you do those cool illustrations?

Academia has a big focus on individual merit. The expectation is that when you write something, you do almost all of the work yourself, to the extent that more programming-heavy fields like physics and math do their own typesetting.

Industry, including journalism, is more comfortable delegating. Places will generally have someone on-staff to handle illustrations. I suggest diagrams that could be helpful to the piece and do a sketch of what they could look like, but it’s someone else’s job to turn that into nice readable graphic design.

Q: Why is the title like that? Why doesn’t that sound like you?

Editors in journalistic outlets are much more involved than in academic journals. Editors won’t just suggest edits, they’ll change wording directly and even insert full sentences of their own. The title and subtitle of a piece in particular can change a lot (in part because they impact SEO), and in some places these can be changed by the editor quite late in the process. I’ve had a few pieces whose title changed after I’d signed off on them, or even after they first appeared.

Q: Are your pieces peer-reviewed?

The news doesn’t have peer review, no. Some places, like Quanta Magazine, do fact-checking. Quanta pays independent fact-checkers for longer pieces, while for shorter pieces it’s the writer’s job to verify key facts, confirming dates and the accuracy of quotes.

Q: Can you show me the piece before it’s published, so I can check it?

That’s almost never an option. Journalists tend to have strict rules against showing a piece before it’s published, rooted in more political beats, where they want to preserve the ability to surprise wrongdoers and the independence to form their own conclusions. Science news seems like it shouldn’t require this kind of thing as much; it’s not like we normally write hit pieces. But we’re not publicists either.

In a few cases, I’ve had people worry about something being conveyed incorrectly or misleadingly. For them, I offer to do more in the fact-checking stage. I can sometimes show you quotes, or paraphrase how I’m describing something, to check whether I’m getting it wrong. But under no circumstances can I show you the full text.

Q: What can I do to make it more likely I’ll get quoted?

Pieces are short, and written for a general, if educated, audience. Long quotes are harder to use because they eat into word count, and quotes with technical terms are harder to use because we try to limit the number of terms we ask the reader to remember. Quotes that mention a lot of concepts can be harder to find a place for, too: concepts are introduced gradually over the piece, so a quote that mentions almost everything that comes up will only make sense to the reader at the very end.

In a science news piece, quotes can serve a couple different roles. They can give authority, an expert’s judgement confirming that something is important or real. They can convey excitement, letting the reader see a scientist’s emotions. And sometimes, they can give an explanation. This last only happens when the explanation is very efficient and clear. If the journalist can give a better explanation, they’re likely to use that instead.

So if you want to be quoted, keep that in mind. Try to say things that are short and don’t use a lot of technical jargon or bring in too many concepts at once. Convey judgement, which things are important and why, and convey passion, what drives and excites you about a topic. I am allowed to edit quotes down, so I can take a piece of a longer quote that’s cleaner, or cut a long list of examples from an otherwise compelling statement. I can correct grammar and get rid of filler words and obvious mistakes. But I can’t put words in your mouth: I have to work with what you actually said, and if you don’t say anything I can use, you won’t get quoted.