Category Archives: Life as a Physicist

Papers With Questions and Papers With Answers

I’ve found that when it comes to reading papers, there are two distinct things I look for.

Sometimes, I read a paper looking for an answer. Typically, this is a “how to” kind of answer: I’m trying to do something, and the paper I’m reading is supposed to explain how. More rarely, I’m directly using a result: the paper proved a theorem or computed a formula, and I just take it as written and use it to calculate something else. Either way, I’m seeking out the paper with a specific goal in mind, which typically means I’m reading it long after it came out.

Other times, I read a paper looking for a question. Specifically, I look for the questions the author couldn’t answer. Sometimes these are things they point out, limitations of their result or opportunities for further study. Sometimes, these are things they don’t notice, holes or patterns in their results that make me wonder “what if?” Either can be the seed of a new line of research, a problem I can solve with a new project. If I read a paper in this way, typically it just came out, and this is the first time I’ve read it. When that isn’t the case, it’s because I start out with another reason to read it: often I’m looking for an answer, only to realize the answer I need isn’t there. The missing answer then becomes my new question.

I’m curious about the balance of these two behaviors in different fields. My guess is that some fields read papers more for their answers, while others read them more for their questions. If you’re working in another field, let me know what you do in the comments!

A Week Among the Pedagogues

Pedagogy courses have a mixed reputation among physicists, and for once I don’t just mean “mixed” as a euphemism for “bad”. I’ve met people who found them very helpful, and I’ve been told that attending a Scandinavian pedagogy course looks really good on a CV. On the other hand, I’ve heard plenty of horror stories of classes that push a jumble of dogmatic requirements and faddish gimmicks, all based on research that, if anything, has more of a replication crisis going on than psychology does.

With that reputation in mind, I went into the pedagogy course last week hopeful, but skeptical. In part, I wasn’t sure whether pedagogy was the kind of thing that could be taught. Each class is different, and so much of what makes a bad or good teacher seems to be due to experience, which one can’t get much of in a one-week course. I couldn’t imagine what facts a pedagogy course could tell me that would actually improve my teaching, and wouldn’t just be ill-justified dogma.

The answer, it turned out, would be precisely the message of the course. A pedagogy course that drills you in “pedagogy facts” would indeed be annoying. But one of those “pedagogy facts” is that teaching isn’t just drilling students in facts. And because this course practiced what it preached, it ended up much less annoying than I worried it would be.

There were hints of that dogmatic approach in the course materials, but only hints. An early slide had a stark quote calling pure lecturing irresponsible. The teacher immediately and awkwardly distanced himself from it, almost literally saying “well that is a thing someone could say”. Instead, most of the class was made up of example lessons and student discussions. We’d be assembled into groups to discuss something, then watch a lesson intended to show off a particular technique. Only then would we get a brief lecture about the technique, giving a name and some justification, before being thrown into yet more discussion about it.

In the terminology we were taught, this made the course dialogical rather than authoritative, and inductive rather than deductive. We learned by reflecting on examples rather than deriving general truths, and discussed various perspectives rather than learning one canonical one.

Did we learn anything from that, besides the terms?

One criticism of both dialogical and inductive approaches to teaching is that students can only get out what they put in. If you learn by discussing and solving examples on your own, you’d expect to learn only things you already know.

We weren’t given the evidence to refute this criticism in general, and honestly I wouldn’t have trusted it if we had (see above: replication crisis). But in this context, that criticism does miss something. Yes, pretty much every method I learned in this course was something I could come up with on my own in the right situation. But I wouldn’t be thinking of the methods systematically. I’d notice a good idea for one lesson or another, but miss others because I wouldn’t be thinking of the ideas as part of a larger pattern. With the patterns in mind, with terms to “hook” the methods on to, I can be more aware of when opportunities come up. I don’t have to think of dialogical as better than authoritative, or inductive as better than deductive, in general. All I have to do is keep an eye out for when a dialogical or inductive approach might prove useful. And that’s something I feel genuinely better at after taking this course.

Beyond that core, we got some extremely topical tips about online teaching and way too many readings (I think the teachers overestimated how easy it is to read papers from a different discipline…and a “theory paper” in education is about as far from a “theory paper” in physics as you can get). At times the dialogue aspect felt a little too open: we heard “do what works for you” often enough that it felt like the teachers were apologizing for their own field. But overall, the course worked, and I expect to teach better going forward because of it.

At a Pedagogy Course

I’m at a pedagogy course this week. It’s the first time I’ve taken a course like this, and it has been really interesting learning about different approaches to teaching (which, as I keep being reminded, is very different from outreach!). It’s also really time-consuming: seven hours of class a day, with readings and lecture prep in the evening. As such, I haven’t had time to do a full blog post. Next week I’ll likely post some reflections about the course. Until then, here’s a slide from the practice lecture I gave:

Building One’s Technology

There are theoretical physicists who can do everything they do with a pencil and a piece of paper. I’m not one of them. The calculations I do are long, complicated, or tedious enough that they’re often best done with a computer. For a calculation like that, I can’t just use existing software “out of the box”: I need to program special-purpose tools to do the kind of calculation I need. This means each project has its own kind of learning curve. If I already have the right code, or almost the right code, things go very smoothly: with a few tweaks I can do a lot of interesting calculations. If I don’t have the right code yet, things go much more slowly: I have to build up my technology, figuring out what I need piece by piece until I’m back up to my usual speed.

I don’t always need to use computers to do my calculations. Sometimes my work hinges on something more conceptual: understanding a mathematical proof, or the arguments from another physicist’s paper. While this seems different on the surface, I’ve found that it has the same kinds of learning curves. If I know the right papers and mathematical methods, I can go pretty quickly. If I don’t, I have to “build up my technology”, reading and practicing, a slow build-up to my goal.

The times when I have to “build my technology” are always a bit frustrating. I don’t work as fast as I’d like, and I get tripped up by dumb mistakes. I keep having to go back, almost to the beginning, realizing that some aspect of how I set things up needs to be changed to make the rest work. As I go, though, the work gets more and more satisfying. I find pieces (of the code, of my understanding) that become solid, that I can rely on. I build my technology, and I can do more and more, and feel better about myself in the bargain. Eventually, I get back up to my full abilities, my technology set up, and a wide variety of calculations become possible.

Theoretical Uncertainty and Uncertain Theory

Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number which describes how muons interact with magnetic fields. It might seem like a small technical detail, but physicists have been very excited about this measurement because it’s a small technical detail that the Standard Model seems to get wrong, making it a potential hint of new undiscovered particles. Quanta magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes, which calculation was wrong? And why?

What does it mean for a prediction to match an experimental result? The simple, wrong, answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason why this is wrong is that in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms, measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1cm or as small as 2.9 cm. You just don’t know.

This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3cm, but of 3cm plus or minus 1mm. If the prediction was 2.9cm, then you’re fine: it falls within your measurement uncertainty.
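To make the idea concrete, here’s a minimal sketch (in Python, using the made-up numbers from the ruler example) of the kind of check involved: divide the difference between prediction and measurement by the uncertainty, and see how many “error bars” separate them.

```python
measurement, uncertainty = 3.0, 0.1  # cm: the made-up ruler numbers above
prediction = 2.9                     # cm

# How many "error bars" separate the prediction from the measurement?
pull = abs(measurement - prediction) / uncertainty
print(f"difference = {pull:.1f} times the uncertainty")

# Here the answer is 1.0: comfortably consistent. A difference several
# times the uncertainty would start to look like a real disagreement.
```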

Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3cm plus or minus 1mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
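Propagation of errors follows a standard recipe: work out how sensitive your prediction is to each uncertain input, and add the contributions in quadrature. Here’s a minimal sketch, with a made-up formula and made-up inputs just to show the mechanics.

```python
import math

# Hypothetical prediction depending on two measured inputs:
# a length L = 3.0 +/- 0.1 cm and a time T = 2.0 +/- 0.05 s,
# combined (arbitrarily, for illustration) as prediction = L / T**2.
L, dL = 3.0, 0.1    # cm
T, dT = 2.0, 0.05   # s

prediction = L / T**2

# Sensitivity of the prediction to each input (partial derivatives):
dP_dL = 1 / T**2
dP_dT = -2 * L / T**3

# Independent uncertainties add in quadrature:
uncertainty = math.sqrt((dP_dL * dL) ** 2 + (dP_dT * dT) ** 2)

print(f"prediction = {prediction:.3f} +/- {uncertainty:.3f}")
```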

There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one in a million times. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.
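For a sense of how strict that standard is, here’s a quick sketch assuming the random errors follow a normal (bell-curve) distribution. The usual discovery threshold is phrased as “five sigma”, five standard deviations; the exact odds depend on conventions (such as whether you count fluctuations in one direction or both), but they land in the rough ballpark quoted above.

```python
import math

def tail_probability(n_sigma: float) -> float:
    """Chance of a purely random fluctuation of at least n_sigma
    standard deviations in one direction, for a normal distribution."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (3, 4, 5):
    p = tail_probability(n)
    print(f"{n} sigma: about 1 in {1 / p:,.0f}")

# Counting fluctuations in either direction doubles these probabilities.
```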

The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

The Grant-Writing Moment

When a scientist applies for a grant to fund their research, there’s a way it’s supposed to go. The scientist starts out with a clear idea, a detailed plan for an experiment or calculation they’d like to do, and an expectation of what they could learn from it. Then they get the grant, do their experiment or calculation, and make their discovery. The world smiles upon them.

There’s also a famous way it actually goes. Like the other way, the scientist has a clear idea and detailed plan. Then they do their experiment, or calculation, and see what they get, making their discovery. Finally, they write their grant application, proposing to do the experiment they already did. Getting the grant, they then spend the money on their next idea instead, which they will propose only in the next grant application, and so on.

This is pretty shady behavior. But there’s yet another way things can go, one that flips the previous method on its head. And after considering it, you might find the shady method more understandable.

What happens if a scientist is going to run out of funding, but doesn’t yet have a clear idea? Maybe they don’t know enough yet to have a detailed plan for their experiment or their calculation. Maybe they have an idea, but they’re still foggy about what they can learn from it.

Well, they’re still running out of funding. They still have to write that grant. So they start writing. Along the way, they’ll manage to find some of that clarity: they’ll have to write a detailed plan, they’ll have to describe some expected discovery. If all goes well, they tell a plausible story, and they get that funding.

When they actually go do that research, though, there’s no guarantee it sticks to the plan. In fact, it’s almost guaranteed not to: neither the scientist nor the grant committee typically knows what experiment or calculation needs to be done: that’s what makes the proposal novel science in the first place. The result is that once again, the grant proposal wasn’t exactly honest: it didn’t really describe what was actually going to be done.

You can think of these different stories as falling on a sliding scale. On the one end, the scientist may just have the first glimmer of an idea, and their funded research won’t look anything like their application. On the other, the scientist has already done the research, and the funded research again looks nothing like the application. In between there’s a sweet spot, the intended system: late enough that the scientist has a good idea of what they need to do, early enough that they haven’t done it yet.

How big that sweet spot is depends on the pace of the field. If you’re a field with big, complicated experiments, like randomized controlled trials, you can mostly make this work. Your work takes a long time to plan, and requires sticking to that plan, so you can, at least sometimes, do grants “the right way”. The smaller your experiments are though, the more the details can change, and the smaller the window gets. For a field like theoretical physics, if you know exactly what calculation to do, or what proof to write, with no worries or uncertainty…well, you’ve basically done the calculation already. The sweet spot for ethical grant-writing shrinks down to almost a single moment.

In practice, some grant committees understand this. There are grants where you are expected to present preliminary evidence from work you’ve already started, and to discuss the risks your vaguer ideas might face. Grants of this kind recognize that science is a process, and that catching people at that perfect moment is next-to-impossible. They try to assess what the scientist is doing as a whole, not just a single idea.

Scientists ought to be honest about what they’re doing. But grant agencies need to be honest too, about how science in a given field actually works. Hopefully, one enables the other, and we reach a more honest world.

Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated, work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.
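If you want to see that repulsion without digging out a mechanics textbook, here’s a minimal numerical sketch. The numbers and the coupling are made up for illustration; the point is only that the squared normal-mode frequencies are eigenvalues of a symmetric matrix, and coupling pushes those eigenvalues apart.

```python
import numpy as np

w1_sq, w2_sq = 1.0, 1.2  # squared frequencies of the two uncoupled oscillators
g = 0.3                  # off-diagonal coupling between them (made up)

# The squared normal-mode frequencies are the eigenvalues of this symmetric
# matrix (the exact matrix depends on how the oscillators are linked).
matrix = np.array([[w1_sq, g],
                   [g, w2_sq]])

print("uncoupled frequencies:", np.sqrt([w1_sq, w2_sq]))
print("coupled frequencies:  ", np.sqrt(np.linalg.eigvalsh(matrix)))

# The coupled frequencies come out farther apart than the uncoupled ones:
# the lower one is pushed down, the upper one pushed up.
```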

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition doesn’t replace calculation. Our speaker had done the math, he hadn’t just made a physical argument. Instead, physical intuition serves two roles: to inspire, and to help remember. Physical intuition can inspire new solutions, suggesting ideas that you go on to check with calculation. In addition to that, it can help your mind sort out what you already know. Without the physical story, we might not have remembered that the low-energy particles have their energies pushed down. With the story though, we had a similar problem to compare, and it made the whole thing more memorable. Human minds aren’t good at holding a giant pile of facts. What they are good at is holding narratives. “Physical intuition” ties what we know into a narrative, building on past problems to understand new ones.

Finally, physical intuition can be risky. If the problem is too different, the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it had turned out to be different in an important way, the intuition would have backfired, making it harder to find the answer and harder to keep track once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

Physics Acculturation

We all agree physics is awesome, right?

Me, I chose physics as a career, so I’d better like it. And you, right now you’re reading a physics blog for fun, so you probably like physics too.

Ok, so we agree, physics is awesome. But it isn’t always awesome.

Read a blog like this, or the news, and you’ll hear about the more awesome parts of physics: the black holes and big bangs, quantum mysteries and elegant mathematics. As freshman physics majors learn every year, most of physics isn’t like that. It’s careful calculation and repetitive coding, incremental improvements to a piece of a piece of a piece of something that might eventually answer a Big Question. Even if intellectually you can see the line from what you’re doing to the big flashy stuff, emotionally the two won’t feel connected, and you might struggle to feel motivated.

Physics solves this through acculturation. Physicists don’t just work on their own, they’re part of a shared worldwide culture of physicists. They spend time with other physicists, and not just working time but social time: they eat lunch together, drink coffee together, travel to conferences together. Spending that time together gives physics more emotional weight: as humans, we care a bit about Big Questions, but we care a lot more about our community.

This isn’t unique to physics, of course, or even to academics. Programmers who have lunch together, philanthropists who pat each other on the back for their donations, these people are trying to harness the same forces. By building a culture around something, you can get people more motivated to do it.

There’s a risk here, of course, that the culture takes over, and we lose track of the real reasons to do science. It’s easy to care about something because your friends care about it because their friends care about it, looping around until it loses contact with reality. In science we try to keep ourselves grounded, to respect those who puncture our bubbles with a good argument or a clever experiment. But we don’t always succeed.

The pandemic has made acculturation more difficult. As a scientist working from home, that extra bit of social motivation is much harder to get. It’s perhaps even harder for new students, who haven’t had the chance to hang out and make friends with other researchers. People’s behavior, what they research and how and when, has changed, and I suspect changing social ties are a big part of it.

In the long run, I don’t think we can do without the culture of physics. We can’t be lone geniuses motivated only by our curiosity, that’s just not how people work. We have to meld the two, mix the social with the intellectual…and hope that when we do, we keep the engines of discovery moving.

A Physicist New Year

Happy New Year to all!

Physicists celebrate the new year by trying to sneak one last paper in before the year is over. Looking at Facebook last night I saw three different friends preview the papers they just submitted. The site where these papers appear, arXiv, had seventy new papers this morning, just in the category of theoretical high-energy physics. Of those, nine were in my subfield or one closely related to it.

I’d love to tell you all about these papers (some exciting! some long-awaited!), but I’m still tired from last night and haven’t read them yet. So I’ll just close by wishing you all, once again, a happy new year.

A Taste of Normal

I grew up in the US. I’ve roamed over the years, but each year I’ve managed to come back around this time. My folks throw the kind of Thanksgiving you see in movies, a table overflowing with turkey and nine kinds of pie.

This year, obviously, is different. No travel, no big party. Still, I wanted to capture some of the feeling here in my cozy Copenhagen apartment. My wife and I baked mini-pies instead, a little feast just for us two.

In these weird times, it’s good to have the occasional taste of normal, a dose of tradition to feel more at home. That doesn’t just apply to personal life, but to academic life as well.

One tradition among academics is the birthday conference. Often timed around a 60th birthday, birthday conferences are a way to celebrate the achievements of professors who have made major contributions to a field. There are talks by their students and close collaborators, filled with stories of the person being celebrated.

Last week was one such conference, in honor of one of the pioneers of my field, Dirk Kreimer. The conference was Zoom-based, and it was interesting to compare with the other Zoom conferences I’ve seen this year. One thing that impressed me was how they handled the “social side” of the conference. Instead of a Slack space like the other conferences, they used a platform called Gather. Gather gives people avatars on a 2D map, mocked up to look like an old-school RPG. Walk close to a group of people, and it lets you video chat with them. There are chairs and tables for private conversations, whiteboards to write on, and in this case even a birthday card to sign.

I didn’t get a chance to try Gather. My guess is it’s a bit worse than Slack for some kinds of discussion. Start a conversation in a Slack channel and people can tune in later from other time zones, each posting new insights and links to references. It’s a good way to hash out an idea.

But a birthday conference isn’t really about hashing out ideas. It’s about community and familiarity, celebrating people we care about. And for that purpose, Gather seems great. You want that little taste of normalcy, of walking across the room and seeing a familiar face, chatting with the folks you keep seeing year after year.

I’ve mused a bit about what it takes to do science when we can’t meet in person. Part of that is a question of efficiency: what does it take to get research done? But if we focus too much on that, we might forget the role of culture. Scientists are people, we form a community, and part of what we value is comfort and familiarity. Keeping that community alive means not just good research discussions, but traditions as well: ways of referencing what we’ve done before as we carry it forward to new circumstances. We will keep changing, our practices will keep evolving. But if we want those changes to stick, we should tie them to the past too. We should keep giving people those comforting tastes of normal.