
Theoretical Uncertainty and Uncertain Theory

Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number which describes how muons interact with magnetic fields. That might seem like a small technical detail, but physicists have been very excited about this measurement, because it’s a detail the Standard Model seems to get wrong, making it a potential hint of new, undiscovered particles. Quanta Magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes: which calculation was wrong? And why?

What does it mean for a prediction to match an experimental result? The simple, wrong answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason this is wrong is that, in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms: measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1cm or as small as 2.9cm. You just don’t know.

This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3cm, but of 3cm plus or minus 1mm. If the prediction was 2.9cm, then you’re fine: it falls within your measurement uncertainty.
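To make the comparison concrete, here’s a minimal Python sketch of the usual rule of thumb: divide the difference between prediction and measurement by their combined uncertainty, counting the result in “sigmas”. The numbers are the ruler example above, plus a prediction uncertainty I’ve invented for illustration.

```python
from math import sqrt

# A minimal sketch: does a prediction agree with a measurement?
# The prediction's 0.05cm uncertainty is invented for illustration.
measured, meas_err = 3.0, 0.1     # 3cm, plus or minus 1mm
predicted, pred_err = 2.9, 0.05   # a prediction with its own uncertainty

# Rule of thumb: difference over combined uncertainty, in "sigmas".
tension = abs(measured - predicted) / sqrt(meas_err**2 + pred_err**2)
print(f"tension: {tension:.1f} sigma")  # ~0.9 sigma: comfortably consistent
```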

Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3cm plus or minus 1mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
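For the propagation-of-errors step, the standard first-order rule is worth seeing once. A minimal sketch, with invented numbers: each input’s uncertainty contributes to the result in proportion to the corresponding derivative.

```python
from math import sqrt

# First-order propagation of errors for f(x, y) = x * y with independent
# uncertainties: sigma_f**2 = (df/dx * sigma_x)**2 + (df/dy * sigma_y)**2.
# All input values are invented, purely for illustration.
x, sigma_x = 3.0, 0.1
y, sigma_y = 2.0, 0.1

f = x * y
sigma_f = sqrt((y * sigma_x) ** 2 + (x * sigma_y) ** 2)
print(f"f = {f:.2f} +/- {sigma_f:.2f}")  # 6.00 +/- 0.36
```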

There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one in a million times. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.
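That discovery standard is usually phrased as “five sigma”. Assuming Gaussian statistics, the one-sided tail probability of a five-sigma fluctuation is about one in 3.5 million, the ballpark quoted above. A quick sketch:

```python
from math import erfc, sqrt

# One-sided tail probability of an n-sigma fluctuation,
# assuming Gaussian statistics.
def p_value(n_sigma: float) -> float:
    return 0.5 * erfc(n_sigma / sqrt(2))

print(f"3 sigma: 1 in {1 / p_value(3):,.0f}")  # roughly 1 in 740
print(f"5 sigma: 1 in {1 / p_value(5):,.0f}")  # roughly 1 in 3.5 million
```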

The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

The Grant-Writing Moment

When a scientist applies for a grant to fund their research, there’s a way it’s supposed to go. The scientist starts out with a clear idea, a detailed plan for an experiment or calculation they’d like to do, and an expectation of what they could learn from it. Then they get the grant, do their experiment or calculation, and make their discovery. The world smiles upon them.

There’s also a famous way it actually goes. Like the other way, the scientist has a clear idea and detailed plan. Then they do their experiment, or calculation, and see what they get, making their discovery. Finally, they write their grant application, proposing to do the experiment they already did. Getting the grant, they then spend the money on their next idea instead, which they will propose only in the next grant application, and so on.

This is pretty shady behavior. But there’s yet another way things can go, one that flips the previous method on its head. And after considering it, you might find the shady method more understandable.

What happens if a scientist is going to run out of funding, but doesn’t yet have a clear idea? Maybe they don’t know enough yet to have a detailed plan for their experiment or their calculation. Maybe they have an idea, but they’re still foggy about what they can learn from it.

Well, they’re still running out of funding. They still have to write that grant. So they start writing. Along the way, they’ll manage to find some of that clarity: they’ll have to write a detailed plan, they’ll have to describe some expected discovery. If all goes well, they tell a plausible story, and they get that funding.

When they actually go do that research, though, there’s no guarantee it sticks to the plan. In fact, it’s almost guaranteed not to: neither the scientist nor the grant committee typically knows what experiment or calculation needs to be done: that’s what makes the proposal novel science in the first place. The result is that once again, the grant proposal wasn’t exactly honest: it didn’t really describe what was actually going to be done.

You can think of these different stories as falling on a sliding scale. On the one end, the scientist may just have the first glimmer of an idea, and their funded research won’t look anything like their application. On the other, the scientist has already done the research, and the funded research again looks nothing like the application. In between there’s a sweet spot, the intended system: late enough that the scientist has a good idea of what they need to do, early enough that they haven’t done it yet.

How big that sweet spot is depends on the pace of the field. If you’re in a field with big, complicated experiments, like randomized controlled trials, you can mostly make this work. Your work takes a long time to plan, and requires sticking to that plan, so you can, at least sometimes, do grants “the right way”. The smaller your experiments are, though, the more the details can change, and the smaller the window gets. For a field like theoretical physics, if you know exactly what calculation to do, or what proof to write, with no worries or uncertainty…well, you’ve basically done the calculation already. The sweet spot for ethical grant-writing shrinks down to almost a single moment.

In practice, some grant committees understand this. There are grants where you are expected to present preliminary evidence from work you’ve already started, and to discuss the risks your vaguer ideas might face. Grants of this kind recognize that science is a process, and that catching people at that perfect moment is next-to-impossible. They try to assess what the scientist is doing as a whole, not just a single idea.

Scientists ought to be honest about what they’re doing. But grant agencies need to be honest too, about how science in a given field actually works. Hopefully, one enables the other, and we reach a more honest world.

Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated: work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.
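You can see that repulsion in a few lines of numpy. This is just an illustrative sketch, with frequencies and a coupling I made up; nothing here comes from the talk itself.

```python
import numpy as np

# Two oscillators with natural frequencies w1, w2 and a coupling g.
# All numbers are invented, chosen just to show the effect.
w1, w2, g = 1.0, 1.2, 0.3

# The normal-mode frequencies squared are the eigenvalues of this matrix.
K = np.array([[w1**2, g],
              [g, w2**2]])
coupled = np.sqrt(np.linalg.eigvalsh(K))

print("uncoupled:", w1, w2)                # 1.0, 1.2
print("coupled:  ", np.round(coupled, 3))  # ~0.921, ~1.262: pushed apart
```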

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition doesn’t replace calculation. Our speaker had done the math, he hadn’t just made a physical argument. Instead, physical intuition serves two roles: to inspire, and to help remember. Physical intuition can inspire new solutions, suggesting ideas that you go on to check with calculation. In addition to that, it can help your mind sort out what you already know. Without the physical story, we might not have remembered that the low-energy particles have their energies pushed down. With the story though, we had a similar problem to compare, and it made the whole thing more memorable. Human minds aren’t good at holding a giant pile of facts. What they are good at is holding narratives. “Physical intuition” ties what we know into a narrative, building on past problems to understand new ones.

Finally, physical intuition can be risky. If the problem is too different, the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it had turned out to differ in an important way, the intuition would have backfired, making the answer harder to find and harder to keep track of once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

Physics Acculturation

We all agree physics is awesome, right?

Me, I chose physics as a career, so I’d better like it. And you, right now you’re reading a physics blog for fun, so you probably like physics too.

Ok, so we agree, physics is awesome. But it isn’t always awesome.

Read a blog like this, or the news, and you’ll hear about the more awesome parts of physics: the black holes and big bangs, quantum mysteries and elegant mathematics. As freshman physics majors learn every year, most of physics isn’t like that. It’s careful calculation and repetitive coding, incremental improvements to a piece of a piece of a piece of something that might eventually answer a Big Question. Even if intellectually you can see the line from what you’re doing to the big flashy stuff, emotionally the two won’t feel connected, and you might struggle to feel motivated.

Physics solves this through acculturation. Physicists don’t just work on their own, they’re part of a shared worldwide culture of physicists. They spend time with other physicists, and not just working time but social time: they eat lunch together, drink coffee together, travel to conferences together. Spending that time together gives physics more emotional weight: as humans, we care a bit about Big Questions, but we care a lot more about our community.

This isn’t unique to physics, of course, or even to academics. Programmers who have lunch together, philanthropists who pat each other on the back for their donations, these people are trying to harness the same forces. By building a culture around something, you can get people more motivated to do it.

There’s a risk here, of course, that the culture takes over, and we lose track of the real reasons to do science. It’s easy to care about something because your friends care about it because their friends care about it, looping around until it loses contact with reality. In science we try to keep ourselves grounded, to respect those who puncture our bubbles with a good argument or a clever experiment. But we don’t always succeed.

The pandemic has made acculturation more difficult. As a scientist working from home, that extra bit of social motivation is much harder to get. It’s perhaps even harder for new students, who haven’t had the chance to hang out and make friends with other researchers. People’s behavior, what they research and how and when, has changed, and I suspect changing social ties are a big part of it.

In the long run, I don’t think we can do without the culture of physics. We can’t be lone geniuses motivated only by our curiosity, that’s just not how people work. We have to meld the two, mix the social with the intellectual…and hope that when we do, we keep the engines of discovery moving.

A Physicist New Year

Happy New Year to all!

Physicists celebrate the new year by trying to sneak one last paper in before the year is over. Looking at Facebook last night, I saw three different friends preview the papers they had just submitted. The site where these papers appear, arXiv, had seventy new papers this morning in the category of theoretical high-energy physics alone. Of those, nine were in my subfield or one closely related to it.

I’d love to tell you all about these papers (some exciting! some long-awaited!), but I’m still tired from last night and haven’t read them yet. So I’ll just close by wishing you all, once again, a happy new year.

A Taste of Normal

I grew up in the US. I’ve roamed over the years, but each year I’ve managed to come back around this time. My folks throw the kind of Thanksgiving you see in movies, a table overflowing with turkey and nine kinds of pie.

This year, obviously, is different. No travel, no big party. Still, I wanted to capture some of the feeling here in my cozy Copenhagen apartment. My wife and I baked mini-pies instead, a little feast just for us two.

In these weird times, it’s good to have the occasional taste of normal, a dose of tradition to feel more at home. That doesn’t just apply to personal life, but to academic life as well.

One tradition among academics is the birthday conference. Often timed around a 60th birthday, birthday conferences are a way to celebrate the achievements of professors who have made major contributions to a field. There are talks by their students and close collaborators, filled with stories of the person being celebrated.

Last week was one such conference, in honor of one of the pioneers of my field, Dirk Kreimer. The conference was Zoom-based, and it was interesting to compare with the other Zoom conferences I’ve seen this year. One thing that impressed me was how they handled the “social side” of the conference. Instead of a Slack space like the other conferences, they used a platform called Gather. Gather gives people avatars on a 2D map, mocked up to look like an old-school RPG. Walk close to a group of people, and it lets you video chat with them. There are chairs and tables for private conversations, whiteboards to write on, and in this case even a birthday card to sign.

I didn’t get a chance to try Gather. My guess is it’s a bit worse than Slack for some kinds of discussion. Start a conversation in a Slack channel and people can tune in later from other time zones, each posting new insights and links to references. It’s a good way to hash out an idea.

But a birthday conference isn’t really about hashing out ideas. It’s about community and familiarity, celebrating people we care about. And for that purpose, Gather seems great. You want that little taste of normalcy, of walking across the room and seeing a familiar face, chatting with the folks you keep seeing year after year.

I’ve mused a bit about what it takes to do science when we can’t meet in person. Part of that is a question of efficiency: what does it take to get research done? But if we focus too much on that, we might forget the role of culture. Scientists are people, we form a community, and part of what we value is comfort and familiarity. Keeping that community alive means not just good research discussions, but traditions as well: ways of referencing things we’ve done before as we carry them forward to new circumstances. We will keep changing, our practices will keep evolving. But if we want those changes to stick, we should tie them to the past too. We should keep giving people those comforting tastes of normal.

Science and Its Customers

In most jobs, you know who you’re working for.

A chef cooks food, and people eat it. A tailor makes clothes, and people wear them. An artist has an audience, an engineer has end users, a teacher has students. Someone out there benefits directly from what you do. Make them happy, and they’ll let you know. Piss them off, and they’ll stop hiring you.

Science benefits people too…but most of its benefits are long-term. The first person to magnetize a needle couldn’t have imagined worldwide electronic communication, and the scientists who uncovered quantum mechanics couldn’t have foreseen transistors or personal computers. The world benefits just by having more expertise in it, more people who spend their lives understanding difficult things and training others to understand difficult things. But those benefits aren’t easy to see for each individual scientist. As a scientist, you typically don’t know who your work will help, or how much. You might not know for years, or even decades, what impact your work will have. Even then, it will be difficult to tease out your contribution from those of the other scientists of your time.

We can’t ask the customers of the future to pay for the scientists of today. (At least, not straightforwardly.) In practice, scientists are paid by governments and foundations, groups trying on some level to make the future a better place. Instead of feedback from customers we get feedback from each other. If our ideas get other scientists excited, maybe they’ll matter down the road.

This is a risky thing to do, of course. Governments, foundations, and scientists can’t tell the future. They can try to act in the interests of future generations, but they might just act for themselves instead. Trying to plan ahead like this makes us prey to all the cognitive biases that flesh is heir to.

But we don’t really have an alternative. If we want to have a future at all, if we want a happier and more successful world, we need science. And if we want science, we can’t ask its real customers, the future generations, to choose whether to pay for it. We need to work for the smiles on our colleagues’ faces and the checks from government grant agencies. And we need to do it carefully enough that, at the end of the day, we still make a positive difference.

What You Don’t Know, You Can Parametrize

In physics, what you don’t know can absolutely hurt you. If you ignore that planets have their own gravity, or that metals conduct electricity, you’re going to calculate a lot of nonsense. At the same time, as physicists we can’t possibly know everything. Our experiments are never perfect, our math never includes all the details, and even our famous Standard Model is almost certainly not the whole story. Luckily, we have another option: instead of ignoring what we don’t know, we can parametrize it, and estimate its effect.

Estimating the unknown is something we physicists have done since Newton. You might think Newton’s big discovery was the inverse-square law for gravity, but others at the time, like Robert Hooke, had also been thinking along those lines. Newton’s big discovery was that gravity was universal: that you need to know the effect of gravity, not just from the sun, but from all the other planets as well. The trouble was, Newton didn’t know how to calculate the motion of all of the planets at once (in hindsight, we know he couldn’t have). Instead, he estimated, using what he knew to guess how big the effect of what he didn’t know would be. It was the accuracy of those guesses, not just the inverse-square law by itself, that convinced the world that Newton was right.

If you’ve studied electricity and magnetism, you get to the point where you can do simple calculations with a few charges in your sleep. The world doesn’t have just a few charges, though: it has many charges, protons and electrons in every atom of every object. If you had to keep all of them in your calculations you’d never pass freshman physics, but luckily you can once again parametrize what you don’t know. Often you can hide those charges away, summarizing their effects with just a few numbers. Other times, you can treat materials as boundaries, and summarize everything beyond in terms of what happens on the edge. The equations of the theory let you do this, but this isn’t true for every theory: for the Navier-Stokes equation, which we use to describe fluids, it still isn’t known whether you can do this kind of trick.
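Here’s a toy version of “hiding charges behind a few numbers”: the total charge and dipole moment of a random blob of a thousand charges reproduce its far-away potential quite well. Every number below is invented, and constant factors like 1/4πε₀ are dropped.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy blob of many charges near the origin (all numbers invented).
N = 1000
positions = rng.normal(scale=0.1, size=(N, 3))
charges = rng.choice([-1.0, 1.0], size=N)

# Far away, a few numbers summarize the whole blob:
Q = charges.sum()                                # total charge (monopole)
p = (charges[:, None] * positions).sum(axis=0)   # dipole moment

r = np.array([10.0, 0.0, 0.0])                   # a distant observation point
R = np.linalg.norm(r)

# Exact potential (in units with 1/(4*pi*eps0) = 1) vs. the two-term summary.
exact = sum(q / np.linalg.norm(r - x) for q, x in zip(charges, positions))
approx = Q / R + p @ r / R**3

print(f"exact: {exact:.5f}   monopole+dipole: {approx:.5f}")
```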

Parametrizing what we don’t know isn’t just a trick for college physics, it’s key to the cutting edge as well. Right now we have a picture for how all of particle physics works, called the Standard Model, but we know that picture is incomplete. There are a million different theories you could write to go beyond the Standard Model, with a million different implications. Instead of having to use all those theories, physicists can summarize them all with what we call an effective theory: one that keeps track of the effect of all that new physics on the particles we already know. By summarizing those effects with a few parameters, we can see what they would have to be to be compatible with experimental results, ruling out some possibilities and suggesting others.
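In code, that kind of parametrization can be as simple as one number. A toy sketch, where every value is invented: suppose some observable gets a correction c/Λ² from unknown heavy particles. A measurement then brackets the allowed values of c, without ever committing to one of the million theories.

```python
# Toy effective-theory constraint: observable = SM prediction + c / Lambda**2,
# where c summarizes the effect of unknown heavy particles.
# Every number here is invented, purely for illustration.
sm_prediction = 1.000
measured, sigma = 1.002, 0.001
Lambda = 2000.0  # an assumed new-physics scale

# Range of c consistent with the measurement at 2 sigma:
c_low = (measured - 2 * sigma - sm_prediction) * Lambda**2
c_high = (measured + 2 * sigma - sm_prediction) * Lambda**2
print(f"allowed range: {c_low:.0f} < c < {c_high:.0f}")  # 0 < c < 16000
```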

In a world where we never know everything, there’s always something that can hurt us. But if we’re careful and estimate what we don’t know, if we write down numbers and parameters and keep our options open, we can keep from getting burned. By focusing on what we do know, we can still manage to understand the world.

When and How Scientists Reach Out

You’ve probably heard of the myth of the solitary scientist. While Newton might have figured out calculus isolated on his farm, most scientists work better when they communicate. If we reach out to other scientists, we can make progress a lot faster.

Even if you understand that, you might not know what that reaching out actually looks like. I’ve seen far too many crackpots who approach scientific communication like spammers: sending out emails to everyone in a department, commenting in every vaguely related comment section they can find. While commercial spammers hope for a few gullible people among the thousands they contact, that kind of approach doesn’t benefit crackpots. As far as I can tell, they communicate that way because they genuinely don’t know any better.

So in this post, I want to give a road map for how we scientists reach out to other scientists. Keep these steps in mind, and if you ever need to reach out to a scientist you’ll know what to do.

First, decide what you want to know. This may sound obvious, but sometimes people skip this step. We aren’t communicating just to communicate, but because we want to learn something from the other person. Maybe it’s a new method or idea, maybe we just want confirmation we’re on the right track. We don’t reach out just to “show our theory”, but because we hope to learn something from the response.

Then, figure out who might know it. To do this, we first need to decide how specialized our question is. We often have questions about specific papers: a statement we don’t understand, a formula that seems wrong, or a method that isn’t working. For those, we contact an author from that paper. Other times, the question hasn’t been addressed in a paper, but does fall under a specific well-defined topic: a particular type of calculation, for example. For those we seek out a specialist on that specific topic. Finally, sometimes the question is more general, something anyone in our field might in principle know but we happen not to. For that kind of question, we look for someone we trust, someone we have a prior friendship with and feel comfortable asking “dumb questions”. These days, we can supplement that with platforms like PhysicsOverflow that let us post technical questions and invite anyone to respond.

Note that, for all of these, there’s some work to do first. We need to read the relevant papers, bone up on a topic, even check Wikipedia sometimes. We need to put in enough work to at least try to answer our question, so that we know exactly what we need the other person for.

Finally, contact them appropriately. Papers will usually give contact information for one, or all, of the authors. University websites will give university emails. We’d reach out with something like that first, and switch to personal email (or something even more casual, like Skype or social media) only for people we already have a track record of communicating with in that way.

By posing and directing our questions well, we scientists can reach out and get help when we struggle. Science is a team effort; we’re stronger when we work together.

Grants at the Other End

I’m a baby academic. Two years ago I got my first real grant, a Marie Curie Individual Fellowship from the European Union. Applying for it was a complicated process, full of Word templates and mismatched expectations. Two years later the grant is over, and I get another new experience: grant reporting.

Writing a report after a grant is sort of like applying for a grant. Instead of summarizing and justifying what you intend to do, you summarize and justify what you actually did. There are also Word templates. Grant reports are probably easier than grant applications: you don’t have to “hook” your audience or show off. But they are harder in one respect: they highlight the different ways different fields handle uncertainty.

If you do experiments, having a clear plan makes sense. You buy special equipment and hire postdocs and even technicians to do specific jobs. Your experiments may or may not find what you hope for, but at least you can try to do them on schedule, and describe the setbacks when you can’t.

As a theorist, you’re more nimble. Your equipment is just a computer, and your postdocs have their own research. Overall, it’s easy to pick up new projects as new ideas come in. As a result, your plans change more. New papers might inspire you to try new things. They might also discourage you, if you learn the idea you had won’t actually work. The field can move fast, and you want to keep up with it.

Writing my first grant report will be interesting. I’ll need to bridge the gap between expectations and reality, to look back on my progress and talk about why it unfolded the way it did. And of course, I have to do it in Microsoft Word.