Tag Archives: science communication

Post on the Weak Gravity Conjecture for FirstPrinciples.org

I have another piece this week on the FirstPrinciples.org Hub. If you’d like to know who they are, I say a bit about my impressions of them in my post on the last piece I had there. They’re still finding their niche, so there may be shifts in the kind of content they cover over time, but for now they’ve given me an opportunity to cover a few topics that are off the beaten path.

This time, the piece is what we in the journalism biz call an “explainer”. Instead of interviewing people about cutting-edge science, I wrote a piece to explain an older idea. It’s an idea that’s pretty cool, in a way I think a lot of people can actually understand: a black hole puzzle that might explain why gravity is the weakest force. It’s an idea that’s had an enormous influence, both in the string theory world where it originated and on people speculating more broadly about the rules of quantum gravity. If you want to learn more, read the piece!

Since I didn’t interview anyone for this piece, I don’t have the same sort of “bonus content” I sometimes give. Instead of interviewing, I brushed up on the topic, and the best resource I found was this review article written by Dan Harlow, Ben Heidenreich, Matthew Reece, and Tom Rudelius. It gave me a much better idea of the subtleties: how many different ways there are to interpret the original conjecture, and how different attempts to build on it reflect different facets and highlight different implications. If you are a physicist curious what the whole thing is about, I recommend reading that review: while I try to give a flavor of some of the subtleties, a piece for a broad audience can only do so much.

There Is No Shortcut to Saying What You Mean

Blogger Andrew Oh-Willeke of Dispatches from Turtle Island pointed me to an editorial in Science about the phrase “scientific consensus”.

The editorial argues that by referring to conclusions like the existence of climate change or vaccine safety as “the scientific consensus”, communicators have inadvertently fanned the flames of distrust. By emphasizing agreement between scientists, the phrase “scientific consensus” leaves open the question of how that consensus was reached. More conspiracy-minded people imagine shady backroom deals and corrupt payouts, while the more realistic blame incentives and groupthink. If you disagree with “the scientific consensus”, you may thus decide the best way forward is to silence those pesky scientists.

(The link to current events is left as an exercise to the reader, to comment on elsewhere. As usual, please no explicit discussion of politics on this blog!)

Instead of “scientific consensus”, the editorial suggests another term, “convergence of evidence”. The idea is that by centering the evidence instead of the scientists, the phrase would make it clear that these conclusions are justified by something more than social pressures, and will remain even if the scientists promoting them are silenced.

Oh-Willeke pointed me to another blog post responding to the editorial, which has a nice discussion of how the terms were used historically, showing their popularity over time. “Convergence of evidence” was more popular in the 1950’s, with a small surge in the late 90’s and early 2000’s. “Scientific consensus” rose in the 1980’s and 90’s, lining up with a time when social scientists were skeptical about science’s objectivity and wanted to explore the social reasons why scientists come to agreement. It then fell around the year 2000, before rising again, this time used instead by professional groups of scientists to emphasize their agreement on issues like climate change.

(The blog post then goes on to try to motivate the word “consilience” instead, on the rather thin basis that “convergence of evidence” isn’t interdisciplinary enough, which seems like a pretty silly objection. “Convergence” implies coming in from multiple directions, it’s already interdisciplinary!)

I appreciate “convergence of evidence”, it seems like a useful phrase. But I think the editorial is working from the wrong perspective, in trying to argue for which terms “we should use” in the first place.

Sometimes, as a scientist or an organization or a journalist, you want to emphasize evidence. Is it “a preponderance of evidence”, most but not all? Is it “overwhelming evidence”, evidence so powerful it is unlikely to ever be defeated? Or is it a “convergence of evidence”, evidence that came in slowly from multiple paths, each independent route making a coincidence that much less likely?

But sometimes, you want to emphasize the judgement of the scientists themselves.

Sometimes when scientists agree, they’re working not from evidence but from personal experience: feelings of which kinds of research pan out and which don’t, or shared philosophies that sit deep in how they conceive their discipline. Describing physicists’ reasons for expecting supersymmetry before the LHC turned on as a convergence of evidence would be inaccurate. Describing it as having been a (not unanimous) consensus gets much closer to the truth.

Sometimes, scientists do have evidence, but as a journalist, you can’t evaluate its strength. You note some controversy, you can follow some of the arguments, but ultimately you have to be honest about how you got the information. And sometimes, that will be because it’s what most of the responsible scientists you talked to agreed on: scientific consensus.

As science communicators, we care about telling the truth (as much as we ever can, at any rate). As a result, we cannot adopt blanket rules of thumb. We cannot say, “we as a community are using this term now”. The only responsible thing we can do is to think about each individual word. We need to decide what we actually mean, to read widely and learn from experience, to find which words express our case in a way that is both convincing and accurate. There’s no shortcut to that, no formula where you just “use the right words” and everything turns out fine. You have to do the work, and hope it’s enough.

Antimatter Isn’t Magic

You’ve heard of antimatter, right?

For each type of particle, there is a rare kind of evil twin with the opposite charge, called an anti-particle. When an anti-proton meets a proton, they annihilate each other in a giant blast of energy.

I see a lot of questions online about antimatter. One recurring theme is people asking a very general question: how does antimatter work?

If you’ve just heard the pop physics explanation, antimatter probably sounds like magic. What about antimatter lets it destroy normal matter? Does it need to touch? How long does it take? And what about neutral particles like neutrons?

You find surprisingly few good explanations of this online, but I can explain why. Physicists like me don’t expect antimatter to be confusing in this way, because to us, antimatter isn’t doing anything all that special. When a particle and an antiparticle annihilate, they’re doing the same thing that any other pair of particles do when they do…basically anything else.

Instead of matter and antimatter, let’s talk about one of the oldest pieces of evidence for quantum mechanics, the photoelectric effect. Scientists shone light at a metal, and found that if the wavelength of the light was short enough, electrons would spring free, causing an electric current. If the wavelength was too long, the metal wouldn’t emit any electrons, no matter how much light they shone. Einstein won his Nobel prize for the explanation: the light hitting the metal comes in particle-sized pieces, called photons, whose energy is determined by the wavelength of the light. If the individual photons don’t have enough energy to get an electron to leave the metal, then no electron will move, no matter how many photons you use.
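The threshold is easy to play with numerically. Here's a quick sketch of my own (not from any textbook; I've picked an illustrative work function of about 2.1 eV, roughly that of cesium):

```python
H_EV_S = 4.135667696e-15   # Planck's constant in eV*s
C_NM_S = 2.99792458e17     # speed of light in nm/s

def photon_energy_ev(wavelength_nm):
    # Einstein's relation: one photon carries E = h*c / wavelength
    return H_EV_S * C_NM_S / wavelength_nm

def ejects_electron(wavelength_nm, work_function_ev):
    # An electron escapes only if a single photon carries enough energy;
    # the intensity (number of photons) doesn't matter.
    return photon_energy_ev(wavelength_nm) > work_function_ev

W = 2.1                             # illustrative work function, in eV
short = ejects_electron(400, W)     # violet light: ~3.1 eV per photon
long_ = ejects_electron(700, W)     # red light: ~1.8 eV per photon
```

However bright you make the 700 nm beam, `ejects_electron` stays false: the check only ever looks at one photon at a time, which is exactly Einstein's point.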

What happens to the photons after they hit the metal?

They go away. We say they are absorbed: an electron absorbs a photon and speeds up, increasing its kinetic energy so it can escape.

But we could just as easily say the photon is annihilated, if we wanted to.

In the photoelectric effect, you start with one electron and one photon, they come together, and you end up with one electron and no photon. In proton-antiproton annihilation, you start with a proton and an antiproton, they come together, and you end up with no protons or antiprotons, but instead “energy”…which in practice, usually means two photons.

That’s all that happens, deep down at the root of things. The laws of physics are rules about inputs and outputs. Start with these particles, they come together, you end up with these other particles. Sometimes one of the particles stays the same. Sometimes particles seem to transform, and different kinds of particles show up. Sometimes some of the particles are photons, and you think of them as “just energy”, and easy to absorb. But particles are particles, and nothing is “just energy”. Each thing, absorption, decay, annihilation, each one is just another type of what we call interactions.

What makes annihilation of matter and antimatter seem unique comes down to charges. Interactions have to obey the laws of physics: they conserve energy, they conserve momentum, and they conserve charge.

So why can an antiproton and a proton annihilate to pure photons, while two protons can’t? A proton and an antiproton have opposite charges, adding up to zero, and a photon has zero charge. You could combine two protons to make something else, but it would have to have the same charge as two protons.

What about neutrons? A neutron has no electric charge, so you might think it wouldn’t need antimatter. But a neutron has another type of charge, called baryon number. In order to annihilate one, you’d need an anti-neutron, which would still have zero electric charge but would have the opposite baryon number. (By the way, physicists have been making anti-neutrons since 1956.)

On the other hand, photons have no charge of any kind. Neither do Higgs bosons. So one Higgs boson can become two photons, without annihilating with anything else. Each of these particles can be called its own antiparticle: a photon is also an antiphoton, a Higgs is also an anti-Higgs.
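The charge bookkeeping above can be sketched in a few lines of code. This is just my own toy illustration: a real interaction also has to conserve energy and momentum, and there are more charges and quantum numbers than the two I track here.

```python
# Each particle carries a pair of charges: (electric charge, baryon number).
CHARGES = {
    "proton":      (+1, +1),
    "antiproton":  (-1, -1),
    "neutron":     ( 0, +1),
    "antineutron": ( 0, -1),
    "photon":      ( 0,  0),
    "higgs":       ( 0,  0),
}

def totals(particles):
    # Add up each kind of charge across a list of particles.
    electric = sum(CHARGES[p][0] for p in particles)
    baryon = sum(CHARGES[p][1] for p in particles)
    return (electric, baryon)

def allowed(inputs, outputs):
    # A candidate interaction must conserve every charge.
    # (Real physics also demands energy and momentum conservation.)
    return totals(inputs) == totals(outputs)

ok1 = allowed(["proton", "antiproton"], ["photon", "photon"])    # annihilation
ok2 = allowed(["proton", "proton"], ["photon", "photon"])        # forbidden
ok3 = allowed(["neutron", "antineutron"], ["photon", "photon"])  # annihilation
ok4 = allowed(["higgs"], ["photon", "photon"])                   # Higgs decay
```

The two-proton case fails because its total charge is (+2, +2), and no pile of photons can match that, just as described above.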

Because particle-antiparticle annihilation follows the same rules as other interactions between particles, it also takes place via the same forces. When a proton and an antiproton annihilate each other, they typically do this via the electromagnetic force. This is why you end up with light, which is an electromagnetic wave. Like everything in the quantum world, this annihilation isn’t certain. It has a chance to happen, proportional to the strength of the interaction force involved.

What about neutrinos? They also appear to have a kind of charge, called lepton number. That might not really be a conserved charge, and neutrinos might be their own antiparticles, like photons. However, they are much less likely to be annihilated than protons and antiprotons, because they don’t have electric charge, and thus their interaction doesn’t depend on the electromagnetic force, but on the much weaker weak nuclear force. A weaker force means a less likely interaction.

Antimatter might seem like the stuff of science fiction. But it’s not really harder to understand than anything else in particle physics.

(I know, that’s a low bar!)

It’s just interactions. Particles go in, particles go out. If it follows the rules, it can happen, if it doesn’t, it can’t. Antimatter is no different.

AI Can’t Do Science…And Neither Can Other Humans

Seen on Twitter:

I don’t know the context here, so I can’t speak to what Prof. Cronin meant. But it got me thinking.

Suppose you, like Prof. Cronin, were to insist that AI “cannot in principle” do science, because AI “is not autonomous” and “does not come up with its own problems to solve”. What might you mean?

You might just be saying that AI is bad at coming up with new problems to solve. That’s probably fair, at least at the moment. People have experimented with creating simple “AI researchers” that “study” computer programs, coming up with hypotheses about the programs’ performance and testing them. But it’s a long road from that to reproducing the much higher standards human scientists have to satisfy.

You probably don’t mean that, though. If you did, you wouldn’t have said “in principle”. You mean something stronger.

More likely, you might mean that AI cannot come up with its own problems, because AI is a tool. People come up with problems, and use AI to help solve them. In this perspective, not only is AI “not autonomous”, it cannot be autonomous.

On a practical level, this is clearly false. Yes, machine learning models, the core technology in current AI, are set up to answer questions. A user asks something, and receives the model’s prediction of the answer. That’s a tool, but for the more flexible models like GPT it’s trivial to turn it into something autonomous. Just add another program: a loop that asks the model what to do, does it, tells the model the result, and asks what to do next. Like taping a knife to a Roomba, you’ve made a very simple modification to make your technology much more dangerous.
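That loop really is as simple as it sounds. Here's a minimal sketch of the idea (my own illustration; `ask_model` and `run_action` stand in for calls to a real model API and a real environment, faked here with toy functions):

```python
def agent_loop(ask_model, run_action, goal, max_steps=10):
    # Turn a question-answering model into something autonomous:
    # repeatedly ask what to do, do it, and report the result back.
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = ask_model("\n".join(history) + "\nWhat should I do next?")
        if action == "DONE":
            break
        result = run_action(action)
        history.append(f"Did: {action} -> Result: {result}")
    return history

# Stand-ins: a "model" that proposes three steps then stops,
# and an "environment" that just acknowledges each action.
def fake_model(prompt):
    steps_so_far = prompt.count("Did:")
    return "DONE" if steps_so_far >= 3 else f"step {steps_so_far + 1}"

def fake_env(action):
    return f"ok ({action})"

log = agent_loop(fake_model, fake_env, "explore")
```

Swap the toy functions for a real language model and real tools, and you have the knife-taped Roomba: the loop itself adds no intelligence, it just removes the human from between question and action.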

You might object, though, that this simple modification of GPT is not really autonomous. After all, a human created it. That human had some goal, some problem they wanted to solve, and the AI is just solving the problem for them.

That may be a fair description of current AI, but insisting it’s true in principle has some awkward implications. If you make a “physics AI”, just tell it to do “good physics”, and it starts coming up with hypotheses you’d never thought of, is it really fair to say it’s just solving your problem?

What if the AI, instead, was a child? Picture a physicist encouraging a child to follow in their footsteps, filling their life with physics ideas and rhapsodizing about the hard problems of the field at the dinner table. Suppose the child becomes a physicist in turn, and finds success later in life. Were they really autonomous? Were they really a scientist?

What if the child, instead, was a scientific field, and the parent was the general public? The public votes for representatives, the representatives vote to hire agencies, and the agencies promise scientists they’ll give them money if they like the problems they come up with. Who is autonomous here?

(And what happens if someone takes a hammer to that process? I’m…still not talking about this! No-politics-rule still in effect, sorry! I do have a post planned, but it will have to wait until I can deal with the fallout.)

At this point, you’d probably stop insisting. You’d drop that “in principle”, and stick with the claim I started with, that current AI can’t be a scientist.

But you have another option.

You can accept the whole chain of awkward implications, bite all the proverbial bullets. Yes, you insist, AI is not autonomous. Neither is the physicist’s child in your story, and neither are the world’s scientists paid by government grants. Each is a tool, used by the one, true autonomous scientist: you.

You are stuck in your skull, a blob of curious matter trained on decades of experience in the world and pre-trained with a couple billion years of evolution. For whatever reason, you want to know more, so you come up with problems to solve. You’re probably pretty vague about those problems. You might want to see more pretty pictures of space, or wrap your head around the nature of time. So you turn the world into your tool. You vote and pay taxes, so your government funds science. You subscribe to magazines and newspapers, so you hear about it. You press out against the world, and along with the pressure that already exists it adds up, and causes change. Biological intelligences and artificial intelligences scurry at your command. From their perspective, they are proposing their own problems, much more detailed and complex than the problems you want to solve. But from yours, they’re your limbs beyond limbs, sight beyond sight, asking the fundamental questions you want answered.

Bonus Material for “How Hans Bethe Stumbled Upon Perfect Quantum Theories”

I had an article last week in Quanta Magazine. It’s a piece about something called the Bethe ansatz, a method in mathematical physics that was discovered by Hans Bethe in the 1930’s, but which only really started being understood and appreciated around the 1960’s. Since then it’s become a key tool, used in theoretical investigations in areas from condensed matter to quantum gravity. In this post, I thought I’d say a bit about the story behind the piece and give some bonus material that didn’t fit.

When I first decided to do the piece I reached out to Jules Lamers. We were briefly office-mates when I worked in France, where he was giving a short course on the Bethe ansatz and the methods that sprung from it. It turned out he had also been thinking about writing a piece on the subject, and we considered co-writing for a bit, but that didn’t work for Quanta. He helped me a huge amount with understanding the history of the subject and tracking down the right sources. If you’re a physicist who wants to learn about these things, I recommend his lecture notes. And if you’re a non-physicist who wants to know more, I hope he gets a chance to write a longer popular-audience piece on the topic!

If you clicked through to Jules’s lecture notes, you’d see the phrase “Bethe ansatz” doesn’t appear in the title. Instead, you’d see the phrase “quantum integrability”. In classical physics, an “integrable” system is one where you can calculate what will happen by doing an integral, essentially letting you “solve” any problem completely. Systems you can describe with the Bethe ansatz are solvable in a more complicated quantum sense, so they get called “quantum integrable”. There’s a whole research field that studies these quantum integrable systems.

My piece ended up rushing through the history of the field. After talking about Bethe’s original discovery, I jumped ahead to ice. The Bethe ansatz was first used to think about ice in the 1960’s, but the developments I mentioned leading up to it, where experimenters noticed extra variability and theorists explained it with the positions of hydrogen atoms, happened earlier, in the 1930’s. (Thanks to the commenter who pointed out that this was confusing!) Baxter gets a starring role in this section, and he was important in tying things together, but other people (Lieb and Sutherland) were involved earlier, showing that the Bethe ansatz could indeed be used with thin sheets of ice. This era had a bunch of other big names that I didn’t have space to talk about: C. N. Yang makes an appearance, and while Faddeev comes up later, I didn’t mention that he had a central role in the 1970’s in understanding the connection to classical integrability and proposing a mathematical structure to understand what links all these different integrable theories together.

I vaguely gestured at black holes and quantum gravity, but didn’t have space for more than that. The connection there is to a topic you might have heard of before if you’ve read about string theory, called AdS/CFT, a connection between two kinds of world that are secretly the same: a toy model of gravity called Anti-de Sitter space (AdS) and a theory without gravity that looks the same at any scale (called a Conformal Field Theory, or CFT). It turns out that in the most prominent example of this, the theory without gravity is integrable! In fact, it’s a theory I spent a lot of time working with back in my research days, called N=4 super Yang-Mills. This theory is kind of like QCD, and in some sense it has integrability for similar reasons to those that Feynman hoped for and Korchemsky and Faddeev found. But it actually goes much farther, outside of the high-energy approximation where Korchemsky and Faddeev’s result works, and in principle seems to include everything you might want to know about the theory. Nowadays, people are using it to investigate the toy model of quantum gravity, hoping to get insights about quantum gravity in general.

One thing I didn’t get a chance to mention at all is the connection to quantum computing. People are trying to build quantum computers with carefully-cooled atoms. It’s important to test whether such a quantum computer functions well enough, or if the quantum states aren’t as perfect as they need to be. One way people have been testing this is with the Bethe ansatz: because it lets you calculate the behavior of special systems perfectly, you can set up your quantum computer to model a system the Bethe ansatz can solve, and then check how close your results are to the prediction. You know that the theoretical result is complete, so any failure has to be due to an imperfection in your experiment.
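To give a flavor of the kind of exact prediction involved, here's a toy check of my own (not from the piece): for the spin chain Bethe originally studied, a single flipped spin (a “magnon”) on a ring of N sites has energy J(1 - cos k) above the ground state, and you can verify directly that plane waves are exact eigenstates of the one-magnon Hamiltonian.

```python
import cmath
import math

def magnon_energy(J, k):
    # Bethe's exact one-magnon dispersion for the Heisenberg ferromagnet.
    return J * (1 - math.cos(k))

def apply_chain(J, v):
    # One-magnon block of the Heisenberg chain, with energies measured
    # relative to the all-spins-up ground state: diagonal J, nearest-neighbor
    # hopping -J/2, periodic boundary conditions.
    N = len(v)
    return [J * v[n] - 0.5 * J * (v[(n - 1) % N] + v[(n + 1) % N])
            for n in range(N)]

def plane_wave_residual(J, N, m):
    # A plane wave exp(i*k*n) with k = 2*pi*m/N should be an exact
    # eigenvector with eigenvalue J*(1 - cos k); return the worst deviation.
    k = 2 * math.pi * m / N
    v = [cmath.exp(1j * k * n) for n in range(N)]
    Hv = apply_chain(J, v)
    lam = magnon_energy(J, k)
    return max(abs(Hv[n] - lam * v[n]) for n in range(N))

# Every momentum sector on an 8-site ring matches the exact formula.
residuals = [plane_wave_residual(1.0, 8, m) for m in range(8)]
```

The residuals here are zero up to floating-point noise: that total absence of approximation error is what makes Bethe-ansatz-solvable systems such clean benchmarks, since any deviation in an experiment must come from the hardware, not the theory.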

I gave a quick teaser to a very active field, one that has fascinated a lot of prominent physicists and been applied in a wide variety of areas. I hope I’ve inspired you to learn more!

Science Journalism Tasting Notes

When you’ve done a lot of science communication you start to see patterns. You notice the choices people make when they write a public talk or a TV script, the different goals and practical constraints that shape a piece. I’ve likened it to watching an old kung fu movie and seeing where the wires are.

I don’t have a lot of experience doing science journalism, I can’t see the wires yet. But I’m starting to notice things, subtle elements like notes at a wine-tasting. Just like science communication by academics, science journalism is shaped by a variety of different goals.

First, there’s the need for news to be “new”. A classic news story is about something that happened recently, or even something that’s happening right now. Historical stories usually only show up as new “revelations”, something the journalist or a researcher recently dug up. This isn’t a strict requirement, and it seems looser in science journalism than in other types of journalism: sometimes you can have a piece on something cool the audience might not know, even if it’s not “new”. But it shapes how things are covered, it means that a piece on something old will often have something tying it back to a recent paper or an ongoing research topic.

Then, a news story should usually also be a “story”. Science communication can sometimes involve a grab-bag of different topics, like a TED talk that shows off a few different examples. Journalistic pieces often try to deliver one core message, with details that don’t fit the narrative needing to wait for another piece where they fit better. You might be tempted to round this off to saying that journalists are better writers than academics, since it’s easier for a reader to absorb one message than many. But I think it also ties to the structure. Journalists do have content with multiple messages, it just usually isn’t published as one story, but as a thematic collection of stories.

Combining those two goals, there’s a tendency for news to focus on what happened. “First they had the idea, then there were challenges, then they made their discovery, now they look to the future.” You can’t just do that, though, because of another goal: pedagogy. Your audience doesn’t know everything you know. In order for them to understand what happened, there are often other things they have to understand. In non-science news, this can sometimes be brief, a paragraph that gives the background for people who have been “living under a rock”. In science news, there’s a lot more to explain. You have to teach something, and teaching well can demand a structure very different from the one-step-at-a-time narrative of what happened. Balancing these two is tricky, and it’s something I’m still learning how to do, as the editors who’ve had to rearrange some of my pieces to make the story flow better can attest.

News in general cares about being independent, about journalists who figure out the story and tell the truth regardless of what the people in power are saying. Science news is strange because, if a scientist gets covered at all, it’s almost always positive. Aside from the occasional scandal or replication crisis, science news tends to portray scientific developments as valuable, “good news” rather than “bad news”. If you’re a politician or a company, hearing from a journalist might make you worry. If you say the wrong thing, you might come off badly. If you’re a scientist, your biggest worry is that a journalist might twist your words into a falsehood that makes your work sound too good. On the other hand, a journalist who regularly publishes negative things about scientists would probably have a hard time finding scientists to talk to! There are basic journalistic ethics questions here that one probably learns about at journalism school, and that we who sneak in with no training have to learn another way.

These are the flavors I’ve tasted so far: novelty and narrative vs. education, positivity vs. accuracy. I’ll doubtless see more over the years, and go from someone who kind of knows what they’re doing to someone who can mentor others. With that in mind, I should get to writing!

Ways Freelance Journalism Is Different From Academic Writing

A while back, I was surprised when I saw the writer of a well-researched webcomic assume that academics are paid for their articles. I ended up writing a post explaining how academic publishing actually works.

Now that I’m out of academia, I’m noticing some confusion on the other side. I’m doing freelance journalism, and the academics I talk to tend to have some common misunderstandings. So academics, this post is for you: a FAQ of questions I’ve been asked about freelance journalism. Freelance journalism is more varied than academia, and I’ve only been doing it a little while, so all of my answers will be limited to my experience.

Q: What happens first? Do they ask you to write something? Do you write an article and send it to them?

Academics are used to writing an article, then sending it to a journal, which sends it out to reviewers to decide whether to accept it. In freelance journalism in my experience, you almost never write an article before it’s accepted. (I can think of one exception I’ve run into, and that was for an opinion piece.)

Sometimes, an editor reaches out to a freelancer and asks them to take on an assignment to write a particular sort of article. This happens more often with freelancers who have been working with particular editors for a long time. I’m new to this, so the majority of the time I have to “pitch”. That means I email an editor describing the kind of piece I want to write. I give a short description of the topic and why it’s interesting. If the editor is interested, they’ll ask some follow-up questions, then tell me what they want me to focus on, how long the piece should be, and how much they’ll pay me. (The last two are related, many places pay by the word.) After that, I can write a draft.

Q: Wait, you’re paid by the word? Then why not make your articles super long, like Victor Hugo?

I’m paid per word assigned, not per word in the finished piece. The piece doesn’t have to strictly stick to the word limit, but it should be roughly the right size, and I work with the editor to try to get it there. In practice, places seem to have a few standard size ranges and internal terminology for what they are (“blog”, “essay”, “short news”, “feature”). These aren’t always the same as the categories readers see online. Some places have a web page listing these categories for prospective freelancers, but many don’t, so you have to either infer them from the lengths of articles online or learn them over time from the editors.

Q: Why didn’t you mention this important person or idea?

Because pieces pay by the word, it’s easier as a freelancer to sell shorter pieces than longer ones. For science news, favoring shorter pieces also makes some pedagogical sense. People usually take away only a few key messages from a piece; if you try to pack in too much you run a serious risk of losing people. After I’ve submitted a draft, I work with the editor to polish it, and usually that means cutting side-stories and “by-the-ways” to make the key points as vivid as possible.

Q: Do you do those cool illustrations?

Academia has a big focus on individual merit. The expectation is that when you write something, you do almost all of the work yourself, to the extent that more programming-heavy fields like physics and math do their own typesetting.

Industry, including journalism, is more comfortable delegating. Places will generally have someone on-staff to handle illustrations. I suggest diagrams that could be helpful to the piece and do a sketch of what they could look like, but it’s someone else’s job to turn that into nice readable graphic design.

Q: Why is the title like that? Why doesn’t that sound like you?

Editors in journalistic outlets are much more involved than in academic journals. Editors won’t just suggest edits, they’ll change wording directly and even write in full sentences of their own. The title and subtitle of a piece in particular can change a lot (in part because they impact SEO), and in some places these can be changed by the editor quite late in the process. I’ve had a few pieces whose title changed after I’d signed off on them, or even after they first appeared.

Q: Are your pieces peer-reviewed?

The news doesn’t have peer review, no. Some places, like Quanta Magazine, do fact-checking. Quanta pays independent fact-checkers for longer pieces, while for shorter pieces it’s the writer’s job to verify key facts, confirming dates and the accuracy of quotes.

Q: Can you show me the piece before it’s published, so I can check it?

That’s almost never an option. Journalists tend to have strict rules about showing a piece before it’s published, rooted in more political areas where they want to preserve the ability to surprise wrongdoers and the independence to form their own opinions. Science news seems like it shouldn’t require this kind of thing as much, it’s not like we normally write hit pieces. But we’re not publicists either.

In a few cases, I’ve had people who were worried about something being conveyed incorrectly, or misleadingly. For those, I offer to do more in the fact-checking stage. I can sometimes show you quotes or paraphrase how I’m describing something, to check whether I’m getting something wrong. But under no circumstances can I show you the full text.

Q: What can I do to make it more likely I’ll get quoted?

Pieces are short, and written for a general, if educated, audience. Long quotes are harder to use because they eat into word count, and quotes with technical terms are harder to use because we try to limit the number of terms we ask the reader to remember. Quotes that mention a lot of concepts can be harder to find a place for, too: concepts are introduced gradually over the piece, so a quote that mentions almost everything that comes up will only make sense to the reader at the very end.

In a science news piece, quotes can serve a couple different roles. They can give authority, an expert’s judgement confirming that something is important or real. They can convey excitement, letting the reader see a scientist’s emotions. And sometimes, they can give an explanation. This last only happens when the explanation is very efficient and clear. If the journalist can give a better explanation, they’re likely to use that instead.

So if you want to be quoted, keep that in mind. Try to say things that are short and don’t use a lot of technical jargon or bring in too many concepts at once. Convey judgement, which things are important and why, and convey passion, what drives you and excites you about a topic. I am allowed to edit quotes down, so I can take a piece of a longer quote that’s cleaner or cut a long list of examples from an otherwise compelling statement. I can correct grammar and get rid of filler words and obvious mistakes. But I can’t put words in your mouth, I have to work with what you actually said, and if you don’t say anything I can use then you won’t get quoted.

Freelancing in [Country That Includes Greenland]

(Why mention Greenland? It’s a movie reference.)

I figured I’d give an update on my personal life.

A year ago, I resigned from my position in France and moved back to Denmark. I had planned to spend a few months as a visiting researcher in my old haunts at the Niels Bohr Institute, courtesy of the spare funding of a generous friend. There turned out to be more funding than expected, and what was planned as just a few months was extended to almost a year.

I spent that year learning something new. It was still an amplitudes project, trying to make particle physics predictions more efficient. But this time I used Python. I looked into reinforcement learning and PyTorch, played with using a locally hosted Large Language Model to generate random code, and ended up getting good results from a classic genetic programming approach. Along the way I set up a SQL database, configured Docker containers, and puzzled out interactions with CUDA. I’ve got a paper in the works; I’ll post about it when it’s out.
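For readers who haven’t met genetic programming before, here is a minimal, hypothetical sketch of the general idea (not the actual project code, and a toy symbolic-regression target rather than anything amplitudes-related): a population of random programs, here arithmetic expression trees, is repeatedly scored against a target, and the best performers survive and spawn mutated copies.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Expression trees: either a terminal ('x' or a constant) or ('op', left, right).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0]

def random_expr(depth=3):
    # Grow a random expression tree, bounded by `depth`.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return expr  # a numeric constant

def fitness(expr, target, points):
    # Sum of squared errors against the target function; lower is better.
    try:
        return sum((evaluate(expr, x) - target(x)) ** 2 for x in points)
    except OverflowError:
        return float('inf')  # runaway trees are simply unfit

def mutate(expr):
    # Replace a randomly chosen subtree with a fresh random one.
    if not isinstance(expr, tuple) or random.random() < 0.2:
        return random_expr(2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(target, points, pop_size=50, generations=40):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda e: fitness(e, target, points))
        survivors = pop[:pop_size // 5]  # keep the best fifth (elitism)
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda e: fitness(e, target, points))

# Try to rediscover f(x) = x^2 + x from samples on [-2, 2].
best = evolve(lambda x: x * x + x, [i / 4 for i in range(-8, 9)])
```

Real genetic programming systems add crossover (swapping subtrees between two parents) and more careful size control, but selection plus mutation is already enough to see the method work on toy targets like this one.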

All the while, on the side, I’ve been seeking out stories. I’ve not just been a writer, but a journalist, tracking down leads and interviewing experts. I had three pieces in Quanta Magazine and one in Ars Technica.

Based on that, I know I can make money doing science journalism. What I don’t know yet is whether I can make a living doing it. This year, I’ll figure that out. With the project at the Niels Bohr Institute over, I’ll have more time to seek out leads and pitch to more outlets. I’ll see whether I can turn a skill into a career.

So if you’re a scientist with a story to tell, if you’ve discovered something or accomplished something or just know something that the public doesn’t, and that you want to share: do reach out. There’s a lot that can be of interest, passion that can be shared.

At the same time, I don’t know yet whether I can make a living as a freelancer. Many people try and don’t succeed. So I’m keeping my CV polished and my eyes open. I have more experience now with Data Science tools, and I’ve got a few side projects cooking that should give me a bit more. I have a few directions in mind, but ultimately, I’m flexible. I like being part of a team, and with enthusiastic and competent colleagues I can get excited about pretty much anything. So if you’re hiring in Copenhagen, if you’re open to someone with ten years of STEM experience who’s just starting to see what industry has to offer, then let’s chat. Even if we’re not a good fit, I bet you’ve got a good story to tell.

At Ars Technica Last Week, With a Piece on How Wacky Ideas Become Big Experiments

I had a piece last week at Ars Technica about the path ideas in physics take to become full-fledged experiments.

My original idea for the story was a light-hearted short news piece. A physicist at the University of Kansas, Steven Prohira, had just posted a proposal for wiring up a forest to detect high-energy neutrinos, using the trees like giant antennas.

Chatting with experts, I found that what at first seemed silly started feeling like a hook for something more. Prohira has a strong track record, and the experts I talked to took his idea seriously. They had significant doubts, but I was struck by how answerable those doubts were: rather than dismissing the whole enterprise, they had in mind a list of questions one could actually test. I wrote a blog post laying out that impression here.

The editor at Ars was interested, so I dug deeper. Prohira’s story became a window on a wider-ranging question: how do experiments happen? How does a scientist convince the community to work on a project, and the government to fund it? How do ideas get tested before these giant experiments get built?

I tracked down researchers from existing experiments and got their stories. They told me how detecting particles from space takes ingenuity, with wacky ideas involving the natural world being surprisingly common. They walked me through tales of prototypes and jury-rigging and feasibility studies and approval processes.

The highlights of those tales ended up in the piece, but there was a lot I couldn’t include. In particular, I had a long chat with Sunil Gupta about the twists and turns taken by the GRAPES experiment in India. Luckily for you, some of the most interesting stories have already been covered, for example their measurement of the voltage of a thunderstorm or repurposing used building materials to keep costs down. I haven’t yet found his story about stirring wavelength-shifting chemicals all night using a propeller mounted on a power drill, but I suspect it’s out there somewhere. If not, maybe it can be the start of a new piece!

Replacing Space-Time With the Space in Your Eyes

Nima Arkani-Hamed thinks space-time is doomed.

That doesn’t mean he thinks it’s about to be destroyed by a supervillain. Rather, Nima, like many physicists, thinks that space and time are just approximations to a deeper reality. In order to make sense of gravity in a quantum world, seemingly fundamental ideas, like that particles move through particular places at particular times, will probably need to become more flexible.

But while most people who think space-time is doomed research quantum gravity, Nima’s path is different. Nima has been studying scattering amplitudes, formulas used by particle physicists to predict how likely particles are to collide in particular ways. He has been trying to find ways to calculate these scattering amplitudes without referring directly to particles traveling through space and time. In the long run, the hope is that knowing how to do these calculations will help suggest new theories beyond particle physics, theories that can’t be described with space and time at all.

Ten years ago, Nima figured out how to do this in a particular theory, one that doesn’t describe the real world. For that theory he was able to find a new picture of how to calculate scattering amplitudes, based on a combinatorial, geometric space with no reference to particles traveling through space-time. He gave this space the catchy name “the amplituhedron”. In the years since, he found a few other “hedra” describing different theories.

Now, he’s got a new approach. The new approach doesn’t have the same kind of catchy name: people sometimes call it surfaceology, or curve integral formalism. Like the amplituhedron, it involves concepts from combinatorics and geometry. It isn’t quite as “pure” as the amplituhedron: it uses a bit more from ordinary particle physics, and while it avoids specific paths in space-time it does care about the shape of those paths. Still, it has one big advantage: unlike the amplituhedron, Nima’s new approach looks like it can work for at least a few of the theories that actually describe the real world.

The amplituhedron was mysterious. Instead of space and time, it described the world in terms of a geometric space whose meaning was unclear. Nima’s new approach also describes the world in terms of a geometric space, but this space’s meaning is much clearer.

The space is called “kinematic space”. That probably still sounds mysterious. “Kinematic” in physics refers to motion. At the beginning of a physics class, when you study velocity and acceleration before you’ve introduced a single force, you’re studying kinematics. In particle physics, “kinematics” refers to the motion of the particles you detect. If you see an electron going up and to the right at a tenth the speed of light, those are its kinematics.

Kinematic space, then, is the space of observations. By saying that his approach is based on ideas in kinematic space, Nima is saying that it describes colliding particles not based on what they might be doing before they’re detected, but on mathematics that deals only with properties of the particles that can be observed.

(For the experts: this isn’t quite true, because he still needs a concept of loop momenta. He’s getting the actual integrands from his approach, rather than the dual definition he got from the amplituhedron. But he does still have to integrate one way or another.)
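(Also for the experts: one textbook-style way to make “kinematic space” concrete, sketched here in general terms rather than as Nima’s precise construction, is as the space of external momenta modulo constraints, coordinatized by Lorentz invariants:

\[
p_i^2 = 0 \quad (i = 1, \dots, n), \qquad \sum_{i=1}^{n} p_i^\mu = 0,
\qquad s_{ij} = (p_i + p_j)^2 = 2\, p_i \cdot p_j ,
\]

that is, on-shell conditions and momentum conservation for $n$ massless particles, with the Mandelstam invariants $s_{ij}$ serving as the observable coordinates. The curve-integral papers work with particular combinations of such invariants.)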

Quantum mechanics famously has many interpretations. In my experience, Nima’s favorite interpretation is the one known as “shut up and calculate”. Instead of arguing about the nature of an indeterminately philosophical “real world”, Nima thinks quantum physics is a tool to calculate things people can observe in experiments, and that’s the part we should care about.

From a practical perspective, I agree with him. And I think if you have this perspective, then ultimately, kinematic space is where your theories have to live. Kinematic space is nothing more or less than the space of observations, the space defined by where things land in your detectors, or if you’re a human and not a collider, in your eyes. If you want to strip away all the speculation about the nature of reality, this is all that is left over. Any theory, of any reality, will have to be described in this way. So if you think reality might need a totally new weird theory, it makes sense to approach things like Nima does, and start with the one thing that will always remain: observations.