Monthly Archives: November 2023

A Significant Calculation

Particle physicists have a weird relationship with journals. We publish all our results for free on a website called the arXiv, and when we need to read a paper that’s the first place we look. But we still submit our work to journals, because we need some way to vouch that we’re doing good work. Explicit numbers (h-index, impact factor) are falling out of favor, but we still need to demonstrate that we get published in good journals, that we do enough work, and that our work has an impact on others. We need it to get jobs, to get grants to fund research at those jobs, and to get future jobs for the students and postdocs we hire with those grants. Our employers need it to justify their own funding, to summarize their progress so governments and administrators can decide who gets what.

This can create weird tensions. When people love a topic, they want to talk about it with each other. They want to say all sorts of things, big and small, to contribute new ideas and correct others and move things forward. But as professional physicists, we also have to publish papers. We can publish some “notes”, little statements on the arXiv that we don’t plan to make into a paper, but we don’t really get “credit” for those. So in practice, we try to force anything we want to say into a paper-sized chunk.

That wouldn’t be a problem if paper-sized chunks were flexible, and you can see how journals have historically tried to make them that way. Some journals publish “letters”, short pieces a few pages long, to contrast with their usual papers, which can run from twenty to a few hundred pages. These “letters” tend to be viewed as prestigious, though, so they end up being judged by roughly the same standards as the normal papers, if not more strictly.

What standards are those? For each journal, you can find an official list. The Journal of High-Energy Physics, for example, instructs reviewers to look for “high scientific quality, originality and relevance”. That rules out papers that just reproduce old results, but otherwise is frustratingly vague. What constitutes high scientific quality? Relevant to whom?

In practice, reviewers use a much fuzzier criterion: is this “paper-like”? Does this look like other things that get published, or not?

Each field will assess that differently. It’s a criterion of familiarity, of whether a paper looks like what people in the field generally publish. In my field, one rule of thumb is that a paper must contain a significant calculation.

A “significant calculation” is still quite fuzzy, but the idea is to make sure that a paper requires some amount of actual work. Someone has to do something challenging, and the work shouldn’t be half-done: as much as feasible, they should finish, and calculate something new. Ideally, this should be something that nobody had calculated before, but if the perspective is new enough it can be something old. It should “look hard”, though.

That’s a fine way to judge whether someone is working hard, which is something we sometimes want to judge. But since we’re incentivized to make everything into a paper, every time we want to say something, we want to accompany it with some “significant calculation”, some concrete time-consuming work. This can happen even if we want to say something quite direct and simple, a fact that can be quickly justified but has nonetheless been ignored by the field. If we don’t want it to be “just” an uncredited note, we have to find some way to turn it into a “significant calculation”. We do extra work, sometimes pointless work, in order to make something “paper-sized”.

I like to think about what academia would be like without the need to fill out a career. The model I keep imagining is that of a web forum or a blogging platform. There would be the big projects, the in-depth guides and effortposts. But there would also be shorter contributions, people building off each other, comments on longer pieces and quick alerts pinned to the top of the page. We’d have a shared record of knowledge, where everyone contributes what they want to whatever level of detail they want.

I think math is a bit closer to this ideal. Despite mathematicians’ higher standards for review, with referees checking the logic of every paper to make sure it holds up, math papers can sometimes be very short, or on apparently trivial things. Physics doesn’t quite work this way, and I suspect part of the reason is our funding sources. If you’re mostly paid to teach, like many mathematicians, your research is more flexible. If you’re paid to research, like many physicists, then people want to make sure your research is productive, and that tends to cram it into measurable boxes.

In today’s world, I don’t think physics can shift cultures that drastically. Even as we build new structures to rival the journals, the career incentives remain. Physics couldn’t become math unless it shed most of the world’s physicists.

In the long run, though…well, we may one day find ourselves in a world where we don’t have to work all our days to keep each other alive. And if we do, hopefully we’ll change how scientists publish.

IPhT-60 Retrospective

Last week, my institute had its 60th anniversary party, which, like every party in academia, took the form of a conference.

For unclear reasons, this one also included a physics-themed arcade game machine.

Going in, I knew very little about the history of the Institute of Theoretical Physics, of the CEA it’s part of (the Commissariat for Atomic Energy, now for Atomic Energy and Alternative Energies), or of French physics in general, so I found the first few talks very interesting. I learned that in France in the early 1950s, theoretical physics was quite neglected. Key developments, like relativity and statistical mechanics, were seen as “too German” due to their origins with Einstein and Boltzmann (never mind that this was precisely why the Nazis thought they were “not German enough”), while de Broglie suppressed investigation of quantum mechanics. It took French people educated abroad to come back and jumpstart progress.

The CEA is, in a sense, the French equivalent of some of the US’s national labs, and like them it got its start as part of a national push towards nuclear weapons and nuclear power.

(Unlike the US’s national labs, the CEA is technically a private company. It’s not even a non-profit: there are for-profit components that sell services and technology to the energy industry. Never fear, my work remains strictly useless.)

My official title is Ingénieur Chercheur, research engineer. In the early days, that title was more literal. Most of the CEA’s first permanent employees didn’t have PhDs, but were hired straight out of undergraduate studies. The director, Claude Bloch, was in his 40s, but most of the others were in their 20s. There was apparently quite a bit of imposter syndrome back then, with very young people struggling to catch up to the global state of the art.

They did manage to catch up, though, and even excel. In the ’60s and ’70s, researchers at the institute laid the groundwork for a lot of ideas that are popular in my field at the moment. Stora’s work established a new way to think about symmetry that became the textbook approach we all learn in school, while Froissart figured out a consistency condition for high-energy physics whose consequences we’re still teasing out. Pham was another major figure at the institute in that era. With my rudimentary French, I started reading his work back in Copenhagen, looking for new insights. I didn’t go nearly as fast as my partner in the reading group, though, whose mastery of French and mathematics has seen him use Pham’s work in surprising new ways.

Hearing about my institute’s past, I felt a bit of pride in the physicists of the era, not just for the science they accomplished but for the tools they built to do it. This was the era of preprints, first as physical papers, orange folders mailed to lists around the world, and later online via the arXiv. Physicists here were early adopters of some aspects, though late adopters of others (they were still mailing orange folders well into the ’90s). They also adopted computation, with giant punch-card-reading, sheets-of-output-producing computers staffed at all hours of the night. A few physicists dove deep into the new machines and guided the others as capabilities changed and evolved, while others were mostly just annoyed by the noise!

When the institute began, scientific papers were still typed on actual typewriters, with equations handwritten in or typeset in ingenious ways. A pool of secretaries handled much of the typing, many of whom were able to come to the conference! I wonder what they felt, seeing what the institute has become since.

I also got to learn a bit about the institute’s present, and by implication its future. I saw talks covering different areas, from multiple angles on mathematical physics to simulations of large numbers of particles, quantum computing, and machine learning. I even learned a bit from talks on my own area of high-energy physics, highlighting how much one can learn from talking to new people.

IPhT’s 60-Year Anniversary

This year is the 60th anniversary of my new employer, the Institut de Physique Théorique of CEA Paris-Saclay (or IPhT for short). In celebration, they’re holding a short conference, with a variety of festivities. They’ve been rushing to complete a film about the institute, and I hear there’s even a vintage arcade game decorated with Feynman diagrams. For me, it will be a chance to learn a bit more about the history of this place, which I currently know shamefully little about.

(For example, despite having his textbook on my shelf, I don’t know much about what our auditorium’s namesake, Claude Itzykson, was known for.)

Since I’m busy with the conference this week, I won’t have time for a long blog post. Next week I’ll be able to say more, and tell you what I learned!

Theorems About Reductionism

A reductionist would say that the behavior of the big is due to the behavior of the small. Big things are made up of small things, so anything the big things do must be explicable in terms of what the small things are doing. It may be very hard to explain things this way: you wouldn’t want to describe the economy in terms of the motion of carbon atoms. But in principle, if you could calculate everything, you’d find the small things are enough: there are no fundamental “new rules” that only apply to big things.

A physicist reductionist would have to amend this story. Zoom in far enough, and it doesn’t really make sense to talk about “small things”, “big things”, or even “things” at all. The world is governed by interactions of quantum fields, ripples spreading and colliding and changing form. Some of these ripples (like the ones we call “protons”) are made up of smaller things…but ultimately most aren’t. They just are what they are.

Still, a physicist can rescue the idea of reductionism by thinking about renormalization. If you’ve heard of renormalization, you probably think of it as a trick physicists use to hide inconvenient infinite results in their calculations. But an arguably better way to think about it is as a kind of “zoom” dial for quantum field theories. Starting with a theory, we can use renormalization to “zoom out”, ignoring the smallest details and seeing what picture emerges. As we “zoom”, different forces will seem to get stronger or weaker: electromagnetism matters less when we zoom out, while the strong nuclear force matters more.
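To make “stronger or weaker” concrete: each force has a beta function, which tracks how its coupling changes as you turn the “zoom” dial. Here is a sketch of the standard one-loop results (the exact coefficients depend on which particles the theory contains):

```latex
% One-loop beta functions, with \mu the energy scale ("zooming in" = raising \mu).
% QED with a single charged Dirac fermion: the coupling e grows with energy,
% so electromagnetism weakens as we zoom out:
\mu \frac{\mathrm{d}e}{\mathrm{d}\mu} = +\frac{e^3}{12\pi^2}
% QCD with n_f quark flavors: negative for n_f <= 16 (asymptotic freedom),
% so the strong force strengthens as we zoom out:
\mu \frac{\mathrm{d}g}{\mathrm{d}\mu} = -\frac{g^3}{16\pi^2}\left(11 - \tfrac{2}{3}\,n_f\right)
```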

(Why, then, is electromagnetism so much more important in everyday life? The strong force gets so strong as we zoom out that we stop seeing individual particles, and only see them bound into protons and neutrons. Electromagnetism is like this too, binding electrons and protons into neutral atoms. In both cases, it can be better, once we’ve zoomed out, to use a new description: you don’t want to do chemistry keeping track of quarks and gluons.)

A physicist reductionist, then, would expect renormalization to always go “one way”. As we “zoom out”, we should find that our theories in a meaningful sense get simpler and simpler. Maybe they’re still hard to work with: it’s easier to think about gluons and quarks when zoomed in than about the zoo of different nuclear particles we need to consider when zoomed out. But at each step, we’re ignoring some details. And if you’re a reductionist, you shouldn’t expect “zooming out” to show you anything truly fundamentally new.

Can you prove that, though?

Surprisingly, yes!

In 2011, Zohar Komargodski and Adam Schwimmer proved a result called the a-theorem. “The a-theorem” is probably the least google-able name in the universe, which has made it hard to popularize. It is named after a quantity, labeled “a”, that gives a particular way to add up energy in a quantum field theory. Komargodski and Schwimmer proved that, when you do the renormalization procedure and “zoom out”, this quantity “a” will always get smaller.
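For the curious, the statement can be written in symbols. In four dimensions, “a” is one of two numbers that show up when you place a conformal field theory on curved spacetime and compute the trace of its energy-momentum tensor (a sketch, in a schematic normalization):

```latex
% Trace anomaly of a four-dimensional conformal field theory on a curved
% background (schematic normalization): W is the Weyl tensor, E_4 the Euler density.
\langle T^{\mu}{}_{\mu} \rangle \;=\; c\, W_{\mu\nu\rho\sigma} W^{\mu\nu\rho\sigma} \;-\; a\, E_4
% The a-theorem: zooming out, from short distances ("UV") to long distances
% ("IR"), always decreases a:
a_{\mathrm{UV}} \;>\; a_{\mathrm{IR}}
```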

Why does this say anything about reductionism?

Suppose you have a theory that violates reductionism. You zoom out, and see something genuinely new: a fact about big things that isn’t due to facts about small things. If you had a theory like that, then you could imagine “zooming in” again, and using your new fact about big things to predict something about the small things that you couldn’t before. You’d find that renormalization doesn’t just go “one way”: with new facts able to show up at every scale, zooming out isn’t necessarily ignoring more and zooming in isn’t necessarily ignoring less. Which way the renormalization procedure ran would depend on the situation.

The a-theorem puts a stop to this. It says that, when you “zoom out”, there is a number that always gets smaller. In some ways it doesn’t matter what that number is (as long as you’re not cheating and using the scale directly). In this case, it is a number that loosely counts “how much is going on” in a given space. And because it always decreases when you do renormalization, it means that renormalization can never “go backwards”. You can never renormalize back from your “zoomed out” theory to the “zoomed in” one.
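One way to see that “a” loosely counts “how much is going on”: for free fields it reduces, up to an overall constant, to a weighted tally of the field content. Here is a rough worked example, treating both ends of the flow as approximately free and assuming the standard weights of 1 per real scalar, 11 per Dirac fermion, and 62 per vector:

```latex
% For free fields, a is proportional to a weighted count of degrees of freedom:
% N_s real scalars, N_f Dirac fermions, N_v vector fields.
a \;\propto\; N_s + 11\, N_f + 62\, N_v
% Zoomed in, the strong force with 2 light quark flavors has 8 gluons
% and 2 x 3 = 6 Dirac quarks:
a_{\mathrm{UV}} \;\propto\; 11 \times 6 + 62 \times 8 = 562
% Zoomed out, we see only the 3 pions, which count as scalars:
a_{\mathrm{IR}} \;\propto\; 3
% 562 > 3: a decreased, so this flow can never run in reverse.
```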

The a-theorem, like every theorem, is based on assumptions. Here, the assumptions are mostly that quantum field theory works in the normal way, and that the theory we’re dealing with isn’t some totally new type of theory. One assumption I find interesting is the assumption of locality: that no signals can travel faster than the speed of light. On a naive level, this makes a lot of sense to me. If you can send signals faster than light, then you can’t control your “zoom lens”. Physics in a small area might be changed by something happening very far away, so you can’t “zoom in” in a way that lets you keep including everything that could possibly be relevant. If you have signals that go faster than light, you could transmit information between different parts of big things without it having to “go through” small things first. You’d screw up reductionism, and have surprises show up on every scale.

Personally, I find it really cool that it’s possible to prove a theorem that says something about a seemingly philosophical topic like reductionism. Even with assumptions (and even with the above speculations about the speed of light), it’s quite interesting that one can say anything at all about this kind of thing from a physics perspective. I hope you find it interesting too!