
Confidence and Friendliness in Science

I’ve seen three kinds of scientific cultures.

First, there are folks who are positive about almost everyone. Ask them about someone else’s lab, even a competitor, and they’ll be polite at worst, and often downright excited. Anyone they know, they’ll tell you how cool the work they’re doing is, how it’s important and valuable and worth doing. They might tell you they prefer a different approach, but they’ll almost never bash someone’s work.

I’ve heard this comes out of American culture, and I can kind of see it. There’s an attitude in the US that everything needs to be described as positively as possible. This is especially true in a work context. Negativity is essentially a death sentence, doled out extremely rarely: if you explicitly say someone or their work is bad, you’re trying to get them fired. You don’t do that unless someone really really deserves it.

That style of scientific culture is growing, but it isn’t universal. There’s still a big cultural group that is totally ok with negativity: as long as it’s directed at other people, anyway.

This scientific culture prides itself on “telling it like it is”. They’ll happily tell you about how everything everyone else is doing is bullshit. Sometimes, they claim their ideas are the only ways forward. Others will have a small number of other people who they trust, who have gained their respect in one way or another. This sort of culture is most stereotypically associated with Russians: a “Russian-style” seminar, for example, is one where the speaker is aggressively questioned for hours.

It may sound like those are the only two options, but there is a third. While “American-style” scientists don’t criticize anyone, and “Russian-style” scientists criticize everyone else, there are also scientists who criticize almost everyone, including themselves.

With a light touch, this culture can be one of the best. There can be a real focus on "epistemic humility", on always being clear about how much we still don't know.

However, it can be worryingly easy to spill past that light touch, into something toxic. When the criticism goes past humility and into a lack of confidence in your own work, you risk falling into a black hole, where nothing is going well and nobody has a way out. This kind of culture can spread, filling a workplace and infecting anyone who spends too long there with the conviction that nothing will ever measure up again.

If you can’t manage that light skeptical touch, then your options are American-style or Russian-style. I don’t think either is obviously better. Both have their blind spots: the Americans can let bad ideas slide to avoid rocking the boat, while the Russians can be blind to their own flaws, confident that because everyone else seems wrong they don’t need to challenge their own worldview.

You have one more option, though. Now that you know this, you can recognize each for what it is: not the one true view of the world, but just one culture’s approach to the truth. If you can do that, you can pick up each culture as you need, switching between them as you meet different communities and encounter different things. If you stay aware, you can avoid fighting over culture and discourse, and use your energy on what matters: the science.

Visiting the IAS

I’m at the Institute for Advanced Study, or IAS, this week.

There isn’t a conference going on, but if you looked at the visitor list you’d be forgiven for thinking there was. We have talks in my subfield almost every day this week, two professors from my subfield here on sabbatical, and extra visitors on top of that.

The IAS is a bit of an odd place. Partly, that’s due to its physical isolation: tucked away in the woods behind Princeton, a half-hour’s walk from the nearest restaurant, it’s supposed to be a place for contemplation away from the hustle and bustle of the world.

Since the last time I visited they’ve added a futuristic new building, seen here out of my office window. The building is most notable for one wild promise: someday, they will serve dinner there.

Mostly, though, the weirdness of the IAS is due to the kind of institution it is.

Within a given country, most universities are pretty similar. Each may emphasize different teaching styles, and the US has a distinction between public and private, but (neglecting scammy for-profit universities) they share some commonalities of structure: both in how they're organized and in how they're funded. Even between countries, different university systems have quite a bit of overlap.

The IAS, though, is not a university. It’s an independent institute. Neighboring Princeton supplies it with PhD students, but otherwise the IAS runs, and funds, itself.

There are a few other places like that around the world. The Perimeter Institute in Canada is also independent, and also borrows students from a neighboring university. CERN pools resources from several countries across Europe and beyond, Nordita from just the Nordic countries. Generalizing further, many countries have some sort of national labs or other nation-wide systems, from US Department of Energy labs like SLAC to Germany’s Max Planck Institutes.

And while universities share a lot in common, non-university institutes can be very different. Some are closely tied to a university, located inside university buildings, their members holding university affiliations. Others sit at a greater remove, less linked to a university or not linked at all. Some have their own funding, investments or endowments or donations, while others are mostly funded by governments, or groups of governments. I've heard that the IAS gets about 10% of its budget from the government, while Perimeter gets its everyday operating expenses entirely from the Canadian government and uses donations for infrastructure and the like.

So ultimately, the IAS is weird because every organization like it is weird. There are a few templates and systems, but by and large each independent research organization is different. Understanding one doesn't necessarily help with understanding another.

Fields and Scale

I am a theoretical particle physicist, and every morning I check the arXiv.

arXiv.org is a type of website called a preprint server. It's where we post papers before they are submitted to (and printed by) a journal. In practice, everything in our field shows up on arXiv, publicly accessible, before it appears anywhere else. There's no peer review process on arXiv (the journals still handle that), but in our field peer review doesn't often catch substantive errors anyway. So in practice, we almost never read the journals: we just check arXiv.

And so every day, I check the arXiv. I go to the section on my sub-field, and I click on a link that lists all of the papers that were new that day. I skim the titles, and if I see an interesting paper I’ll read the abstract, and maybe download the full thing. Checking as I’m writing this, there were ten papers posted in my field, and another twenty “cross-lists” were posted in other fields but additionally classified in mine.

Other fields use arXiv: mathematicians and computer scientists and even economists use it in roughly the same way physicists do. For biology and medicine, though, there are different, newer sites: bioRxiv and medRxiv.

One thing you may notice is the different capitalization. When physicists write arXiv, the “X” is capitalized. In the logo, it looks like a Greek letter chi, thus saying “archive”. The biologists and medical researchers capitalize the R instead. The logo still has an X that looks like a chi, but positioned with the R it looks like the Rx of medical prescriptions.

Something I noticed, but you might not, was the lack of a handy link to see new papers. You can search medRxiv and bioRxiv, and filter by date. But there’s no link that directly takes you to the newest papers. That suggests that biologists aren’t using bioRxiv like we use arXiv, and checking the new papers every day.

I was curious if this had to do with the scale of the field. I have the impression that physics and mathematics are smaller fields than biology, and that much less physics and mathematics research goes on than medical research. Certainly, theoretical particle physics is a small field. So I might have expected arXiv to be smaller than bioRxiv and medRxiv, and I certainly would expect fewer papers in my sub-field than papers in a medium-sized subfield of biology.

On the other hand, arXiv in my field is universal. In biology, bioRxiv and medRxiv are still quite controversial. More and more people are using them, but not every journal accepts papers posted to a preprint server. Many people still don’t use these services. So I might have expected bioRxiv and medRxiv to be smaller.

Checking now, neither answer is quite right. I looked between November 1 and November 2, and asked each site how many papers were uploaded between those dates. arXiv had the most, 604 papers. bioRxiv had roughly half that many, 348. medRxiv had 97.

arXiv represents multiple fields, while bioRxiv is "just" biology. Specializing, on that day arXiv had 235 physics papers, 135 mathematics papers, and 250 computer science papers. So each individual field had fewer papers than biology in this period.

Specializing even further, I can look at a subfield. My subfield, which is fairly small, had 20 papers between those dates. Cell biology, which I would expect to be quite a big subfield, had 33.
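If you want to poke at numbers like these yourself, the arXiv side at least can be queried programmatically. Below is a rough Python sketch against arXiv's public query API. The helper name, the submittedDate range filter, and the example categories (hep-th, math.CO, cs.LG) are my own choices for illustration; the filter is an assumption based on common usage, and it counts new submissions in a window, which is close to (but not identical to) the daily "new papers" listing. bioRxiv and medRxiv have their own, different interfaces.

import re
import urllib.parse
import urllib.request

# Rough sketch: count papers submitted to a given arXiv category in a date
# window, via arXiv's public query API. The submittedDate syntax is an
# assumption based on common usage of the API.
def arxiv_count(category, start="202211010000", end="202211020000"):
    query = f"cat:{category} AND submittedDate:[{start} TO {end}]"
    url = ("http://export.arxiv.org/api/query?"
           + urllib.parse.urlencode({"search_query": query, "max_results": 1}))
    with urllib.request.urlopen(url) as response:
        feed = response.read().decode("utf-8")
    # The Atom feed reports the total number of matching papers.
    match = re.search(r"<opensearch:totalResults[^>]*>(\d+)", feed)
    return int(match.group(1)) if match else None

for cat in ["hep-th", "math.CO", "cs.LG"]:
    print(cat, arxiv_count(cat))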

Overall, the numbers were weirdly comparable, with medRxiv unexpectedly small compared to both arXiv and bioRxiv. I’m not sure whether there are more biologists than physicists, but I’m pretty sure there should be more cell biologists than theoretical particle physicists. This suggests that many still aren’t using bioRxiv. It makes me wonder: will bioRxiv grow dramatically in future? Are the people running it ready for if it does?

No, PhD Students Are Not Just Cheap Labor

Here’s a back-of-the-envelope calculation:

In 2019, there were 83,050 unionized graduate students in the US. Let's assume these are mostly PhD students, since other graduate students are not usually university employees. I can't find an estimate of the total number of PhD students in the US, but in 2019, 55,614 of them graduated. In 2020, the average US doctorate took 7.5 years to complete. That implies that 83,050/(55,614 × 7.5) ≈ 0.2, so about one-fifth of PhD students in the US are part of a union.
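To spell the arithmetic out, here's a minimal sketch in Python, using only the numbers quoted above (the variable names are just for illustration):

unionized_grad_students = 83_050   # unionized graduate students in the US, 2019
phds_awarded_per_year = 55_614     # doctorates awarded in the US, 2019
avg_years_to_degree = 7.5          # average time to a US doctorate, 2020

# Rough steady-state estimate of the total PhD student population:
total_phd_students = phds_awarded_per_year * avg_years_to_degree
union_fraction = unionized_grad_students / total_phd_students

print(f"Estimated PhD students: {total_phd_students:,.0f}")   # about 417,000
print(f"Unionized fraction: {union_fraction:.1%}")            # about 19.9%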

That makes PhD student unions common, but not the majority. It means they’re not unheard of and strange, but a typical university still isn’t unionized. It’s the sweet spot for controversy. It leads to a lot of dumb tweets.

I saw one such dumb tweet recently, from a professor arguing that PhD students shouldn’t unionize. The argument was that if PhD students were paid more, then professors would prefer to hire postdocs, researchers who already have a doctoral degree.

(I won’t link to the tweet, in part because this person is probably being harassed enough already.)

I don’t know how things work in this professor’s field. But the implication, that professors primarily take on PhD students because they’re cheaper, not only doesn’t match my experience: it also just doesn’t make very much sense.

Imagine a neighborhood where the children form a union. They decide to demand a higher allowance, and to persuade any new children in the neighborhood to follow their lead.

Now imagine a couple in that neighborhood, deciding whether to have a child. Do you think that they might look at the fees the “children’s union” charges, and decide to hire an adult to do their chores instead?

Maybe there’s a price where they’d do that. If neighborhood children demanded thousands of dollars in allowance, maybe the young couple would decide that it’s too expensive to have a child. But a small shift is unlikely to change things very much: people have kids for many reasons, and those reasons don’t usually include cheap labor.

The reasons professors take on PhD students are similar to the reasons parents decide to have children. Some people have children because they want a legacy, something of theirs that survives to the next generation. For professors, PhD students are our legacy, our chance to raise someone on our ideas and see how they build on them. Some people have children because they love the act of child-raising: helping someone grow and learn about the world. The professors who take on students like taking on students: teaching is fun, after all.

That doesn't mean there won't be cases "on the margin", where a professor finds they can't afford a student they previously could. (And to be fair to the tweet I'm criticizing, they did indeed use the word "marginal".) But they would have to be in a very tight funding situation, with very little flexibility.

And even for situations like that, long-term, I’m not sure anything would change.

I did my PhD in the US. I was part of a union, and in part because of that (though mostly because I was in a physics department), I was paid relatively decently for a PhD student. Relatively decently is still not that great, though. This was the US, where universities still maintain the fiction that PhD students only work 20 hours a week and pay them accordingly, and where salaries in a university can change dramatically from student to postdoc to professor.

One thing I learned during my PhD is that despite our low-ish salaries, we cost our professors about as much as postdocs did. The reason why is tuition: PhD students don't pay their own tuition, but that tuition still exists, and the professors who hire those students pay it out of their grants. A PhD salary plus a PhD tuition ended up roughly equal to a postdoc salary.

Now, I’m working in a very different system. In a Danish university, wages are very flat. As a postdoc, a nice EU grant put me at almost the same salary as the professors. As a professor, my salary is pretty close to that of one of the better-paying schoolteacher jobs.

At the same time, tuition is much less relevant. Undergraduates don’t pay tuition at all, so PhD tuition isn’t based on theirs. Instead, it’s meant to cover costs of the PhD program as a whole.

I’ve filled out grants here in Denmark, so I know how much PhD students cost, and how much postdocs cost. And since the situation is so different, you might expect a difference here too.

There isn’t one. Hiring a PhD student, salary plus tuition, costs about as much as hiring a postdoc.

Two very different systems, with what seem to be very different rules, end up with the same equation. PhD students and postdocs cost about as much as each other, even when every assumption you might think would affect the outcome turns out completely different.

This is why I expect that, even if PhD students get paid substantially more, they still won’t end up that out of whack with postdocs. There appears to be an iron law of academic administration keeping these two numbers in line, one that holds across nations and cultures and systems. The proportion of unionized PhD students in the US will keep working its way upwards, and I don’t expect it to have any effect on whether professors take on PhDs.

From Journal to Classroom

As part of the pedagogy course I've been taking, I'm doing a few guest lectures in various courses. I've got one coming up in a classical mechanics course ("intermediate"-level, so not Newton's laws, but stuff the general public doesn't know much about, like Hamiltonians). They've been speeding through the core content, so I got to cover a "fun" topic, and after thinking back to my grad school days I chose a topic I think they'll have a lot of fun with: chaos theory.

Getting the obligatory Warhammer reference out of the way now

Chaos is one of those things everyone has a vague idea about. People have heard stories where a butterfly flaps its wings and causes a hurricane. Maybe they’ve heard of the rough concept, determinism with strong dependence on the initial conditions, so a tiny change (like that butterfly) can have huge consequences. Maybe they’ve seen pictures of fractals, and got the idea these are somehow related.
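If you want to see that sensitivity for yourself, here's a minimal sketch in Python using the logistic map, a standard chaotic toy model. This is my own choice of illustration, not anything from the course or the textbook: two trajectories that start a hair apart end up completely unrelated.

# Two runs of the logistic map x -> r*x*(1-x) at r = 4, a standard chaotic
# toy model, starting from initial conditions that differ by one part in 10^10.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # the "butterfly flap"

for n in range(0, 51, 10):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")
# The tiny difference grows roughly exponentially, and within a few dozen
# steps the two trajectories are no more alike than two random points in [0, 1].

That runaway growth of an imperceptible difference is the whole "butterfly" story in a few lines of arithmetic.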

Its role in physics is a bit more detailed. It’s one of those concepts that “intermediate classical mechanics” is good for, one that can be much better understood once you’ve been introduced to some of the nineteenth century’s mathematical tools. It felt like a good way to show this class that the things they’ve learned aren’t just useful for dusty old problems, but for understanding something the public thinks is sexy and mysterious.

As luck would have it, the venerable textbook the students are using includes a (2000s-era) chapter on chaos. I read through it, and it struck me that it's a very different chapter from most of the others. This hit me particularly when I noticed a section describing a famous early study of chaos, and I realized that all the illustrations were based on the actual original journal article.

I had surprisingly mixed feelings about this.

On the one hand, there’s a big fashion right now for something called research-based teaching. That doesn’t mean “using teaching methods that are justified by research” (though you’re supposed to do that too), but rather, “tying your teaching to current scientific research”. This is a fashion that makes sense, because learning about cutting-edge research in an undergraduate classroom feels pretty cool. It lets students feel more connected with the scientific community, it inspires them to get involved, and it gets them more used to what “real research” looks like.

On the other hand, structuring your textbook based on the original research papers feels kind of lazy. There’s a reason we don’t teach Newtonian mechanics the way Newton would have. Pedagogy is supposed to be something we improve at over time: we come up with better examples and better notation, more focused explanations that teach what we want students to learn. If we just summarize a paper, we’re not really providing “added value”: we should hope, at this point, that we can do better.

Thinking about this, I think the distinction boils down to why you’re teaching the material in the first place.

With a lot of research-based teaching, the goal is to show the students how to interact with current literature. You want to show them journal papers, not because the papers are the best way to teach a concept or skill, but because reading those papers is one of the skills you want to teach.

That makes sense for very current topics, but it seems a bit weird for the example I’ve been looking at, an early study of chaos from the 60’s. It’s great if students can read current papers, but they don’t necessarily need to read older ones. (At least, not yet.)

What then, is the textbook trying to teach? Here things get a bit messy. For a relatively old topic, you’d ideally want to teach not just a vague impression of what was discovered, but concrete skills. Here though, those skills are just a bit beyond the students’ reach: chaos is more approachable than you’d think, but still not 100% something the students can work with. Instead they’re learning to appreciate concepts. This can be quite valuable, but it doesn’t give the kind of structure that a concrete skill does. In particular, it makes it hard to know what to emphasize, beyond just summarizing the original article.

In this case, I've come up with my own way forward. There are actually concrete skills I'd like to teach. They're skills that link up with what the textbook is teaching, skills grounded in the concepts it's trying to convey, and that makes me think I can teach them. It will give some structure to the lesson, a focus on not merely what I'd like the students to think but what I'd like them to do.

I won’t go into too much detail: I suspect some of the students may be reading this, and I don’t want to spoil the surprise! But I’m looking forward to class, and to getting to try another pedagogical experiment.

At Bohr-100: Current Themes in Theoretical Physics

During the pandemic, some conferences went online. Others went dormant.

Every summer before the pandemic, the Niels Bohr International Academy hosted a conference called Current Themes in High Energy Physics and Cosmology. Current Themes is a small, cozy conference, a gathering of close friends, some of whom happen to have Nobel prizes. Holding it online would be almost missing the point.

Instead, we waited. Now, at least in Denmark, the pandemic is quiet enough to hold this kind of gathering. And it’s a special year: the 100th anniversary of Niels Bohr’s Nobel, the 101st of the Niels Bohr Institute. So it seemed like the time for a particularly special Current Themes.

For one, it lets us use remarkably simple signs

A particularly special Current Themes means some unusually special guests. Our guests are usually pretty special already (Gerard 't Hooft and David Gross are regulars, to name just the Nobelists), but this year we also had Alexander Polyakov. Polyakov's talk had a magical air to it. In a quiet voice, broken by an impish grin when he surprised us with a joke, Polyakov began to lay out five unsolved problems he considered interesting. In the end, he only had time to present one, related to turbulence. When Gross asked him to name the remaining four, the second involved a term most of us didn't recognize (striction, familiar from magnetism, which he wanted to explore in a gravitational context); the discussion hung on defining it long enough that we never did learn what the other three problems were.

At the big 100th anniversary celebration earlier in the spring, the Institute awarded a few years' worth of its Niels Bohr Institute Medal of Honor. One of the recipients, Paul Steinhardt, couldn't make it then, so he got his medal now. After the obligatory publicity photos were taken, Steinhardt entertained us all with a colloquium about his work on quasicrystals, including the many adventures involved in finding the first example "in the wild". I can't do the story justice in a short blog post, but if you don't have the opportunity to watch him speak about it, I hear his book is good.

An anniversary conference should have some historical elements as well. For this one, these were ably provided by David Broadhurst, who gave an after-dinner speech cataloguing things he liked about Bohr. Some of it was based on public information, but the real draw was the anecdotes: his own reminiscences, and those of people he knew who knew Bohr well.

The other talks covered interesting ground: from deep approaches to quantum field theory, to new tools to understand black holes, to the implications of causality itself. One out-of-the-ordinary talk was by Sabrina Pasterski, who advocated a new model of physics funding. I liked some elements (endowed organizations to further a subfield) and am more skeptical of others (mostly involving NFTs). Regardless, it, and the rest of the conference more broadly, spurred a lot of good debate.

Covering the Angles

One way to think of science is of a lot of interesting little problems. Some scientists are driven by questions like “how does this weird cell work?” or “how accurately can I predict the chance these particles collide?” If the puzzles are fun enough and the questions are interesting enough, then that can be enough motivation on its own.

Another perspective thinks of science as the pursuit of a few big problems. Physicists want to write down the laws of nature, to know where the universe came from, to reconcile gravity and quantum mechanics. Biologists want to understand how life works and how to manipulate it; psychologists want the same for the human mind. For some scientists, these big questions are at the heart of why they do science. Someone in my field once joked he can't get up in the morning without telling himself "spacetime is doomed".

Even if you care about the big questions, though, you can’t neglect the small ones. That’s because modern science is collaborative. A big change, like a new particle or a whole new theory of physics, requires confirmation. It’s not enough for one person to propose it. The ideas that last in science last because they crop up in many different places, with many different methods. They last because we check all the angles, compulsively, looking for any direction that might be screwed up.

In those checks, any and all science can be useful. We need the big conceptual leaps from people like Einstein and the careful and systematic measurements of Brahe. We need people who look for the wackiest ideas, not just because they might be true, but to rule them out when they’re false, to make us all the more confident we’re on the right path. We need people pushing tried-and-true theories to the next leap of precision, to show that nothing is hiding in the gaps and make it clearer when something is. We need many people pushing many different paths: all are necessary, and any one might be crucial.

Often, one of these paths gets the lion’s share of the glory: the press, the Nobel, the mention in the history books. But the other paths still matter: we wouldn’t be confident in the science if they didn’t exist. Most working scientists will be on those other paths, as a matter of course. But we still need them to get science done.

The Conference Dilemma: Freshness vs. Breadth

Back in 2017, I noticed something that should have struck me as a little odd. My sub-field has a big yearly conference, called Amplitudes, that brings in everyone who works on our kind of research. Amplitudes 2017 was fun, but not “fresh”: most people talked about work they had already published. A smaller conference I went to that year, called QCD Meets Gravity, was much “fresher”: a lot of discussion of work in progress and work “hot off the presses”.

At the time, I chalked the difference up to timing: it was a few months later, and people happened to have projects that matured around then. But I recently realized there's another reason, one that would lead you to expect bigger conferences to have less fresh content.

See, I’ve recently been on the other “side of the curtain”: I was an organizer for Amplitudes last year. And I noticed one big obstacle to having fresh content: the timeframe.

The bigger a conference is, the longer in advance you need to invite speakers. It’s a bigger task to organize everyone, to make sure travel and hotels and raw availability works, that everyone has time to prepare their talks and you have a nice full (but not too full) schedule. So when we started asking people, we didn’t know what the “freshest” work was going to be. We had recommendations from our scientific committee (a group of experts in the subfield whose job is to suggest speakers), but in practice the goal is more one of breadth than freshness: we needed to make sure that everybody in our community was represented.

A smaller conference can get around this. It can be organized a bit later, so the organizers have more information about new developments. It covers a smaller area, so the organizers have more information about new hot topics and unpublished results. And it typically invites most of the sub-community anyway, so you’re guaranteed to cover the hot new stuff just by raw completeness.

This doesn't mean small conferences are "just better" or anything like that. Breadth is genuinely useful: a big conference covering a whole subfield is great for bringing a community together, getting everyone on a shared page and expanding their horizons. There's a real tradeoff between those goals and getting a conference with the latest progress. It's not a fixed tradeoff; we can improve both goals at once (I think at Amplitudes we as organizers could have been better at highlighting unpublished work), but we still have to make choices about what to emphasize.

Proxies for Proxies

Why pay scientists?

Maybe you care about science itself. You think that exploring the world should be one of our central goals as human beings, that it “makes our country worth defending”.

Maybe you care about technology. You support science because, down the line, you think it will give us new capabilities that improve people’s lives. Maybe you expect this to happen directly, or maybe indirectly as “spinoff” inventions like the internet.

Maybe you just think science is cool. You want the stories that science tells: they entertain you, they give you a place in the world, they help distract you from the mundane day-to-day grind.

Maybe you just think that the world ought to have scientists in it. You can think of it as a kind of bargain, maintaining expertise so that society can tackle difficult problems. Or you can be more cynical, paying early-career scientists on the assumption that most will leave academia and cheapen labor costs for tech companies.

Maybe you want to pay the scientists to teach, to be professors at universities. You notice that they don’t seem to be happy if you don’t let them research, so you throw a little research funding at them, as a treat.

Maybe you just want to grow your empire: your department, your university, the job numbers in your district.

In most jobs, you’re supposed to do what people pay you to do. As a scientist, the people who pay you have all of these motivations and more. You can’t simply choose to do what people pay you to do.

So you come up with a proxy. You sum up all of these ideas into a vague picture of what all those people want. You have some idea of scientific quality: not just a matter of doing science correctly and carefully, but of doing interesting science. It's not something you ever articulate; it's likely even contradictory, since the goals it approximates often are. Nonetheless, it's your guide, and not just your guide: it's the guide of those who hire you, those who choose whether you get promoted or whether you get more funding. All of these people have some vague idea in their head of what makes good science, their own proxy for the desires of the vast mass of voters and decision-makers and funders.

But of course, the standard is still vague. Should good science be deep? Which topics are deeper than others? Should it be practical? Practical for whom? Should it be surprising? What do you expect to happen, and what would surprise you? Should it get the community excited? Which community?

As a practicing scientist, you have to build your own proxy for these proxies. The same work that could get you hired in one place might meet blank stares at another, and you can't build your life around those unpredictable quirks. So you make your own vague idea of what you're supposed to do, an alchemy of what excites you and what makes an impact and what your friends are doing. You build a stand-in in your head, on the expectation that no-one else will have quite the same stand-in, then go out and convince the other stand-ins to give money to your version. You stand on a shifting pile of unwritten rules, subtler even than the ones some artists face, because at the end of the day there's never a real client to be seen. Just another proxy.

Types of Undergrad Projects

I saw a discussion on twitter recently, about PhD programs in the US. Apparently universities are putting more and more weight on whether prospective students published a paper during their Bachelor's degree. For some, it's even an informal requirement. Some of those in the discussion were skeptical that the students were really contributing much to these papers, and thought that most of the work must have been done by the papers' other authors. If so, this would mean universities are relying more and more on a metric that depends on whether students can charm their professors enough to be "included" in this way, rather than on their own abilities.

I won’t say all that much about the admissions situation in the US. (Except to say that if you find yourself making up new criteria to carefully sift out a few from a group of already qualified-enough candidates, maybe you should consider not doing that.) What I did want to say a bit about is what undergraduates can typically actually do, when it comes to research in my field.

First, I should clarify that I’m talking about students in the US system here. Undergraduate degrees in Europe follow a different path. Students typically take three years to get a Bachelor’s degree, often with a project at the end, followed by a two-year Master’s degree capped with a Master’s thesis. A European Master’s thesis doesn’t have to result in a paper, but is often at least on that level, while a European Bachelor project typically isn’t. US Bachelor’s degrees are four years, so one might expect a Bachelor’s thesis to be in between a European Bachelor’s project and Master’s thesis. In practice, it’s a bit different: courses for Master’s students in Europe will generally cover material taught to PhD students in the US, so a typical US Bachelor’s student won’t have had some courses that have a big role in research in my field, like Quantum Field Theory. On the other hand, the US system is generally much more flexible, with students choosing more of their courses and having more opportunities to advance ahead of the default path. So while US Bachelor’s students don’t typically take Quantum Field Theory, the more advanced students can and do.

Because of that, how advanced a given US Bachelor’s student is varies. A small number are almost already PhD students, and do research to match. Most aren’t, though. Despite that, it’s still possible for such a student to complete a real research project in theoretical physics, one that results in a real paper. What does that look like?

Sometimes, it’s because the student is working with a toy model. The problems we care about in theoretical physics can be big and messy, involving a lot of details that only an experienced researcher will know. If we’re lucky, we can make a simpler version of the problem, one that’s easier to work with. Toy models like this are often self-contained, the kind of thing a student can learn without all of the background we expect. The models may be simpler than the real world, but they can still be interesting, suggesting new behavior that hadn’t been considered before. As such, with a good choice of toy model an undergraduate can write something that’s worthy of a real physics paper.

Other times, the student is doing something concrete in a bigger collaboration. This isn’t quite the same as the “real scientists” doing all the work, because the student has a real task to do, just one that is limited in scope. Maybe there is particular computer code they need to get working, or a particular numerical calculation they need to do. The calculation may be comparatively straightforward, but in combination with other results it can still merit a paper. My first project as a PhD student was a little like that, tackling one part of a larger calculation. Once again, the task can be quite self-contained, the kind of thing you can teach a student over a summer project.

Undergraduate projects in the US won’t always result in a paper, and I don’t think anyone should expect, or demand, that they do. But a nontrivial number do, and not because the student is “cheating”. With luck, a good toy model or a well-defined sub-problem can lead a Bachelor’s student to make a real contribution to physics, and get a paper in the bargain.