Three of my science journalism pieces went up last week!
(This is a total coincidence. One piece was a general explainer “held in reserve” for a nice slot in the schedule, one was a piece I drafted in February, while the third I worked on in May. In journalism, things take as long as they take.)
The shortest piece, at Quanta Magazine, was an explainer about the two types of particles in physics: bosons and fermions.
I don’t have a ton of bonus info here, because of how tidy the topic is, so just two quick observations.
First, I have the vague impression that Bose, bosons’ namesake, is “claimed” by both modern-day Bangladesh and India. I had friends in grad school who were proud of their fellow physicist from Bangladesh, but while he did his most famous work in Dhaka, he was born and died in Calcutta. Since both cities were part of British India for most of his life, these claims understandably get complicated.
Second, at the end of the piece I mention a “world on a wire” where fermions and bosons are the same. One example of such a “wire” is a string, like in string theory. One thing all young string theorists learn is “bosonization”: the idea that, in a 1+1-dimensional world like a string, you can re-write any theory with fermions as a theory with bosons, as well as vice versa. This has important implications for how string theory is set up.
Next, in Ars Technica, I had a piece about how LHC physicists are using machine learning to untangle the implications of quantum interference.
As a journalist, it’s really easy to fall into a trap where you give the main person you interview too much credit: after all, you’re approaching the story from their perspective. I tried to be cautious about this, only to be stymied when literally everyone else I interviewed praised Aishik Ghosh to the skies and credited him with being the core motivating force behind the project. So I shrugged my shoulders and followed suit. My understanding is that he has been appropriately rewarded and will soon be a professor at Georgia Tech.
I didn’t list the inventors of the NSBI method that Ghosh and co. used, but names like Kyle Cranmer and Johann Brehmer tend to get bandied about. It’s a method that was originally explored for a more general goal, trying to characterize what the Standard Model might be missing, while the work I talk about in the piece takes it in a new direction, closer to the typical things the ATLAS collaboration looks for.
I also did not say nearly as much as I was tempted to about how the ATLAS collaboration publishes papers, which was honestly one of the most intriguing parts of the story for me. There is a huge amount of review that goes on inside ATLAS before one of their papers reaches the outside world, way more than there ever is in a journal’s peer review process. This is especially true for “physics papers”, where ATLAS is announcing a new conclusion about the physical world, as ATLAS’s reputation stands on those conclusions being reliable.

That means starting with an “internal note” that’s hundreds of pages long (and sometimes over a thousand), an editorial board that manages the editing process, disseminating the paper to the entire collaboration for comment, and getting specific experts and institute groups within the collaboration to read through the paper in detail. The process is a bit less onerous for “technical papers”, which describe a new method, not a new conclusion about the world. Still, it’s cumbersome enough that for those papers, often scientists don’t publish them “within ATLAS” at all, instead releasing them independently.

The results I reported on are special because they involved a physics paper and a technical paper, both within the ATLAS collaboration process. Instead of just working with partial or simplified data, they wanted to demonstrate the method on a “full analysis”, with all the computation and human coordination that requires. Normally, ATLAS wouldn’t go through the whole process of publishing a physics paper without basing it on new data, but this was different: the method had the potential to be so powerful that the more precise results would be worth stating as physics results alone.
(Also, for the people in the comments worried about training a model on old data: that’s not what they did. In physics, nobody tries to train a neural network to predict the results of colliders; such a model wouldn’t tell us anything useful. They run colliders to tell us whether what they see matches the Standard Model’s analytic predictions. The neural network is trained to predict not what the experiment will say, but what the Standard Model will say, since we can usually only figure that out through time-consuming simulations. So it’s trained on (new) simulations, not on experimental data.)
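For the curious, here is a toy illustration of the “likelihood-ratio trick” at the heart of neural simulation-based inference. This is my own sketch, not code from the ATLAS work: a logistic regression stands in for the neural network, and two one-dimensional Gaussians stand in for the Standard Model and alternative-hypothesis simulations. The idea is that a classifier trained to distinguish samples from two simulators can be converted into an estimate of the ratio of their probability densities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "simulators" standing in for Standard Model vs. alternative hypothesis.
# (Toy example: 1-D Gaussians; real analyses use full detector simulation.)
x_a = rng.normal(0.0, 1.0, 20000)  # samples from hypothesis A
x_b = rng.normal(0.5, 1.0, 20000)  # samples from hypothesis B

def feats(x):
    # bias + x suffice here: the true log-ratio of two
    # equal-width Gaussians is linear in x
    return np.stack([np.ones_like(x), x], axis=1)

X = np.concatenate([feats(x_a), feats(x_b)])
y = np.concatenate([np.ones_like(x_a), np.zeros_like(x_b)])  # 1 = from A

# Logistic regression trained by gradient descent
# (a minimal stand-in for a neural network classifier).
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def lr_estimate(x):
    # With balanced training samples, s/(1-s) estimates p_A(x)/p_B(x).
    s = 1.0 / (1.0 + np.exp(-feats(np.atleast_1d(x)) @ w))
    return s / (1.0 - s)

def lr_true(x):
    # Exact density ratio for the two Gaussians, for comparison.
    return np.exp(-0.5 * x**2 + 0.5 * (x - 0.5) ** 2)
```

With balanced training samples, the classifier output s(x) estimates p_A(x)/(p_A(x)+p_B(x)), so s/(1-s) recovers the ratio of the two simulated densities, which is exactly the quantity a likelihood-based analysis needs, without ever evaluating the simulators’ (intractable) densities directly.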
Finally, on Friday I had a piece in Physics Today about the European Strategy for Particle Physics (or ESPP), and in particular, plans for the next big collider.
Before I even started working on this piece, I saw a thread by Patrick Koppenburg on some of the 263 documents submitted for the ESPP update. While my piece ended up mostly focused on the big circular collider plan that most of the field is converging on (the future circular collider, or FCC), Koppenburg’s thread was more wide-ranging, meant to illustrate the breadth of ideas under discussion. Some of that discussion is about the LHC’s current plans, like its “high-luminosity” upgrade that will see it gather data at much higher rates up until 2040. Some of it is assessing broader concerns, which it may surprise some of you to learn includes sustainability: yes, there are more or less sustainable ways to build giant colliders.
The most fun part of the discussion, though, concerns all of the other collider proposals.
Some report progress on new technologies. Muon colliders are the most famous of these, but there are other proposals that would specifically help with a linear collider. I never did end up understanding what Cool Copper Colliders are all about, beyond that they let you get more energy in a smaller machine without super-cooling. If you know about them, chime in in the comments! Meanwhile, plasma wakefield acceleration could accelerate electrons on a wave of plasma. This has a disadvantage: you want to collide electrons and positrons, but if you try to stick a positron in plasma it will happily annihilate with the first electron it meets. So what do you do? You go half-and-half, with the HALHF project: speed up the electron with a plasma wakefield, accelerate the positron normally, and have them meet in the middle.
Others are backup plans, or “budget options”, where CERN could get somewhat better measurements of some parameters if they can’t drum up the funding to measure the things they really want. They could put electrons and positrons into the LHC tunnel instead of building a new one, for a weaker machine that could still study the Higgs boson to some extent. They could use a similar experiment to produce Z bosons instead, which could serve as a bridge to a different collider project. Or, they could collide the LHC’s proton beam with an electron beam, for an experiment that mixes advantages and disadvantages of some of the other approaches.
While working on the piece, one resource I found invaluable was this colloquium talk by Tristan du Pree, where he goes through the 263 submissions and digs up a lot of interesting numbers and commentary. Read the slides for quotes from the different national inputs and “solo inputs” with comments from particular senior scientists. I used that talk to get a broad impression of what the community was feeling, and it was interesting how well it was reflected in the people I interviewed. The physicist based in Switzerland felt the most urgency for the FCC plan, while the Dutch sources were more cautious, with other Europeans firmly in the middle.
Going over the FCC report itself, one thing I decided to leave out of the discussion was the cost-benefit analysis. There’s the potential for a cute sound-bite there, “see, the collider is net positive!”, but I’m pretty skeptical of the kind of analysis they’re doing there, even if it is standard practice for government projects. Between the biggest benefits listed being industrial benefits to suppliers and early-career researcher training (is a collider unusually good for either of those things, compared to other ways we spend money?) and the fact that about 10% of the benefit is the science itself (where could one possibly get a number like that?), it feels like whatever reasoning is behind this is probably the kind of thing that makes rigor-minded economists wince. I wasn’t able to track down the full calculation though, so I really don’t know, maybe this makes more sense than it looks.
I think a stronger argument than anything along those lines is a much more basic point, about expertise. Right now, we have a community of people trying to do something that is not merely difficult, but fundamental. This isn’t like sending people to space, where many of the engineering concerns will go away when we can send robots instead. This is fundamental engineering progress in how to manipulate the forces of nature (extremely powerful magnets, high voltages) and process huge streams of data. Pushing those technologies to the limit seems like it’s going to be relevant, almost no matter what we end up doing. That’s still not putting the science first and foremost, but it feels a bit closer to an honest appraisal of what good projects like this do for the world.

Dr. von Hippel —
Quantum physics is nice. Accelerators are nice. But in case you haven’t noticed, the world is not very nice these days. With all the money going to STEM, nothing is going to the humanities, and we have lost our ability to be human with each other.
Basic science with daily application, like weather satellites and medical research, is going unfunded. Worse, data gathering itself is being cut back and that data which exists is subject to political censorship. We are asked no longer to seek the truth but to believe true that which is said by self-interested empowered parties.
The internationalism of science is amazing to see, it has existed even in wartime. There are leading edge scientists from many countries of the world, many working outside their home country. I don’t believe there has ever been an article in Quanta that has featured only US citizens, and certainly not only white males. Yet universities are being punished for their foreign students and diversity. Students are being denied student visas because of their nationality and political beliefs.
It’s time to have articles about scientists who protected other scientists from threats in their home country. It is time we honored scientists for their work as POWs, like Jean-Victor Poncelet. If memory serves, a recent article mentioned such a scientist who worked on the Navier-Stokes problem.
You may have come down from the ivory tower, but only to the 47th floor.
Saludos, Roger
This blog has a no-politics policy. Your comment is ok, but just barely, and only because you phrased things quite vaguely.
There are a number of reasons for the policy, some of them personal. One reason is that, even restricted to the causes you care about, given my small but varied audience I do more good on the “47th floor” than I would “on the ground”. I occasionally make exceptions to my no-politics policy when I think there is something very specific people are misunderstanding. I don’t think that’s the right way to think about what’s going on right now.
Dr. von Hippel —
Certainly you cannot contend that science budgeting is not political. Whether to build a collider or a space-based gravity wave detector or map the connections of the human brain or search for room temperature superconductors is a political choice. At a more applied level, whether to have satellites detecting earth’s weather and climate or the sun’s or Jupiter’s is a political choice. Whether to search for UFOs or extraterrestrial life is a political choice. We used to have disciplinary research prioritization panels and NSF peer-review funding process, but that process has been brought under political control, isn’t that an underlying issue for all research programs? Isn’t that a news peg? The choice between manned and unmanned space missions has always been a political process, marked by a conflict between ideologies of scientific data collection and humans as a space-faring species, in support of hidden agendas of the military-industrial complex. And now we have Elon Musk’s satellites interfering with earth-based astronomy. If you’re serious about science writing, the issue of what gets researched, what doesn’t, and why is much more impactful than your click-bait about Higgs field shift.
Roger
“it feels like whatever reasoning is behind this is probably the kind of thing that makes rigor-minded economists wince.”
A more honest approach would probably be to report the costs less any obvious and clear economic benefits (e.g. excess power generation sold back to the region where it is located), and then to juxtapose this net cost against potential scientific benefits described only qualitatively and leaving the reader or the decision makers to evaluate the value of that personally.
Your discussion in this post of the internal ATLAS process, however, is great! I’ve been meaning to write a post about what peer review adds to the value of reported scientific results in different contexts, and this post provides some real meat to add to that discussion.