# The Grant-Writing Moment

When a scientist applies for a grant to fund their research, there’s a way it’s supposed to go. The scientist starts out with a clear idea, a detailed plan for an experiment or calculation they’d like to do, and an expectation of what they could learn from it. Then they get the grant, do their experiment or calculation, and make their discovery. The world smiles upon them.

There’s also a famous way it actually goes. Like the other way, the scientist has a clear idea and detailed plan. Then they do their experiment or calculation, see what they get, and make their discovery. Finally, they write their grant application, proposing to do the experiment they already did. Getting the grant, they then spend the money on their next idea instead, which they will propose only in the next grant application, and so on.

This is pretty shady behavior. But there’s yet another way things can go, one that flips the previous method on its head. And after considering it, you might find the shady method more understandable.

What happens if a scientist is going to run out of funding, but doesn’t yet have a clear idea? Maybe they don’t know enough yet to have a detailed plan for their experiment or their calculation. Maybe they have an idea, but they’re still foggy about what they can learn from it.

Well, they’re still running out of funding. They still have to write that grant. So they start writing. Along the way, they’ll manage to find some of that clarity: they’ll have to write a detailed plan, they’ll have to describe some expected discovery. If all goes well, they tell a plausible story, and they get that funding.

When they actually go do that research, though, there’s no guarantee it sticks to the plan. In fact, it’s almost guaranteed not to: neither the scientist nor the grant committee typically knows what experiment or calculation needs to be done: that’s what makes the proposal novel science in the first place. The result is that once again, the grant proposal wasn’t exactly honest: it didn’t really describe what was actually going to be done.

You can think of these different stories as falling on a sliding scale. On the one end, the scientist may just have the first glimmer of an idea, and their funded research won’t look anything like their application. On the other, the scientist has already done the research, and the funded research again looks nothing like the application. In between there’s a sweet spot, the intended system: late enough that the scientist has a good idea of what they need to do, early enough that they haven’t done it yet.

How big that sweet spot is depends on the pace of the field. If you’re in a field with big, complicated experiments, like randomized controlled trials, you can mostly make this work. Your work takes a long time to plan, and requires sticking to that plan, so you can, at least sometimes, do grants “the right way”. The smaller your experiments are though, the more the details can change, and the smaller the window gets. For a field like theoretical physics, if you know exactly what calculation to do, or what proof to write, with no worries or uncertainty…well, you’ve basically done the calculation already. The sweet spot for ethical grant-writing shrinks down to almost a single moment.

In practice, some grant committees understand this. There are grants where you are expected to present preliminary evidence from work you’ve already started, and to discuss the risks your vaguer ideas might face. Grants of this kind recognize that science is a process, and that catching people at that perfect moment is next-to-impossible. They try to assess what the scientist is doing as a whole, not just a single idea.

Scientists ought to be honest about what they’re doing. But grant agencies need to be honest too, about how science in a given field actually works. Hopefully, one enables the other, and we reach a more honest world.

# A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical: I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out when the original two donuts were found, one of them involved a move that is a bit risky mathematically, namely, combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip ahead.

Suppose I am solving a problem, and I find a product of two square roots:

$\sqrt{x}\sqrt{x}$

I could try combining them under the same square root sign, like so:

$\sqrt{x^2}$

That works, if $x$ is positive. But now suppose $x=-1$. Plug negative one into the first expression, and you get,

$\sqrt{-1}\sqrt{-1}=i\times i=-1$

while in the second,

$\sqrt{(-1)^2}=\sqrt{1}=1$
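If you want to see the mismatch for yourself, it’s easy to check numerically. This is just a quick illustration (not part of our paper’s calculation), using Python’s principal branch of the complex square root:

```python
import cmath

x = -1

# Take the two square roots first, then multiply them:
roots_first = cmath.sqrt(x) * cmath.sqrt(x)   # i * i = -1

# Combine under one square root sign first, then take the root:
combined_first = cmath.sqrt(x ** 2)           # sqrt(1) = 1

print(roots_first)     # (-1+0j)
print(combined_first)  # (1+0j)
```

Both expressions agree whenever `x` is positive; the disagreement only appears once the square root’s branch cut gets involved.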

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

# Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated, work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.
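That “repulsion” falls right out of the normal-mode frequencies. Here’s a minimal sketch with made-up numbers (mine, not the speaker’s): the squared frequencies of the two normal modes are the eigenvalues of a symmetric 2×2 matrix, and any nonzero coupling pushes them apart:

```python
import math

# Illustrative numbers: squared frequencies of the two uncoupled
# oscillators, plus an off-diagonal coupling g.
w1_sq, w2_sq, g = 1.0, 1.2, 0.3

# Eigenvalues of [[w1_sq, g], [g, w2_sq]]: the normal-mode squared frequencies.
mean = (w1_sq + w2_sq) / 2
half_split = math.sqrt(((w1_sq - w2_sq) / 2) ** 2 + g ** 2)
low_mode, high_mode = mean - half_split, mean + half_split

# The coupled modes sit farther apart than the uncoupled frequencies did:
print(high_mode - low_mode > abs(w1_sq - w2_sq))  # True
```

The splitting is $2\sqrt{((\omega_1^2-\omega_2^2)/2)^2+g^2}$, which always exceeds the uncoupled splitting $|\omega_1^2-\omega_2^2|$ unless $g=0$: that’s the level repulsion.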

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition doesn’t replace calculation. Our speaker had done the math, he hadn’t just made a physical argument. Instead, physical intuition serves two roles: to inspire, and to help remember. Physical intuition can inspire new solutions, suggesting ideas that you go on to check with calculation. In addition to that, it can help your mind sort out what you already know. Without the physical story, we might not have remembered that the low-energy particles have their energies pushed down. With the story though, we had a similar problem to compare, and it made the whole thing more memorable. Human minds aren’t good at holding a giant pile of facts. What they are good at is holding narratives. “Physical intuition” ties what we know into a narrative, building on past problems to understand new ones.

Finally, physical intuition can be risky. If the problem is too different then the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it turned out to be different in an important way then the intuition would have backfired, making it harder to find the answer and harder to keep track once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

# Physics Acculturation

We all agree physics is awesome, right?

Me, I chose physics as a career, so I’d better like it. And you, right now you’re reading a physics blog for fun, so you probably like physics too.

Ok, so we agree, physics is awesome. But it isn’t always awesome.

Read a blog like this, or the news, and you’ll hear about the more awesome parts of physics: the black holes and big bangs, quantum mysteries and elegant mathematics. As freshman physics majors learn every year, most of physics isn’t like that. It’s careful calculation and repetitive coding, incremental improvements to a piece of a piece of a piece of something that might eventually answer a Big Question. Even if intellectually you can see the line from what you’re doing to the big flashy stuff, emotionally the two won’t feel connected, and you might struggle to feel motivated.

Physics solves this through acculturation. Physicists don’t just work on their own, they’re part of a shared worldwide culture of physicists. They spend time with other physicists, and not just working time but social time: they eat lunch together, drink coffee together, travel to conferences together. Spending that time together gives physics more emotional weight: as humans, we care a bit about Big Questions, but we care a lot more about our community.

This isn’t unique to physics, of course, or even to academics. Programmers who have lunch together, philanthropists who pat each other on the back for their donations, these people are trying to harness the same forces. By building a culture around something, you can get people more motivated to do it.

There’s a risk here, of course, that the culture takes over, and we lose track of the real reasons to do science. It’s easy to care about something because your friends care about it because their friends care about it, looping around until it loses contact with reality. In science we try to keep ourselves grounded, to respect those who puncture our bubbles with a good argument or a clever experiment. But we don’t always succeed.

The pandemic has made acculturation more difficult. As a scientist working from home, that extra bit of social motivation is much harder to get. It’s perhaps even harder for new students, who haven’t had the chance to hang out and make friends with other researchers. People’s behavior, what they research and how and when, has changed, and I suspect changing social ties are a big part of it.

In the long run, I don’t think we can do without the culture of physics. We can’t be lone geniuses motivated only by our curiosity, that’s just not how people work. We have to meld the two, mix the social with the intellectual…and hope that when we do, we keep the engines of discovery moving.

# Inevitably Arbitrary

Physics is universal…or at least, it aspires to be. Drop an apple anywhere on Earth, at any point in history, and it will accelerate at roughly the same rate. When we call something a law of physics, we expect it to hold everywhere in the universe. It shouldn’t depend on anything arbitrary.

Sometimes, though, something arbitrary manages to sneak in. Even if the laws of physics are universal, the questions we want to answer are not: they depend on our situation, on what we want to know.

The simplest example is when we have to use units. The mass of an electron is the same here as it is on Alpha Centauri, the same now as it was when the first galaxies formed. But what is that mass? We could write it as $9.1093837015\times10^{-31}$ kilograms, if we wanted to, but kilograms aren’t exactly universal. Their modern definition is at least based on physical constants, but with some pretty arbitrary numbers. It defines the Planck constant as $6.62607015\times10^{-34}$ Joule-seconds. Chase that number back, and you’ll find references to the Earth’s circumference and the time it takes to turn round on its axis. The mass of the electron may be the same on Alpha Centauri, but they’d never write it as $9.1093837015\times10^{-31}$ kilograms.

Units aren’t the only time physics includes something arbitrary. Sometimes, like with units, we make a choice of how we measure or calculate something. We choose coordinates for a plot, a reference frame for relativity, a zero for potential energy, a gauge for gauge theories and regularization and subtraction schemes for quantum field theory. Sometimes, the choice we make is instead what we measure. To do thermodynamics we must choose what we mean by a state, to call two substances water even if their atoms are in different places. Some argue a perspective like this is the best way to think about quantum mechanics. In a different context, I’d argue it’s why we say coupling constants vary with energy.

So what do we do, when something arbitrary sneaks in? We have a few options. I’ll illustrate each with the mass of the electron:

• Make an arbitrary choice, and stick with it: There’s nothing wrong with measuring an electron in kilograms, if you’re consistent about it. You could even use ounces. You just have to make sure that everyone else you compare with is using the same units, or be careful to convert.
• Make a “natural” choice: Why not set the speed of light and Planck’s constant to one? They come up a lot in particle physics, and all they do is convert between length and time, or time and energy. That way you can use the same units for all of them, and use something convenient, like electron-Volts. They even have electron in the name! Of course they also have “Volt” in the name, and Volts are as arbitrary as any other metric unit. A “natural” choice might make your life easier, but you should always remember it’s still arbitrary.
• Make an efficient choice: This isn’t always the same as the “natural” choice. The units you choose have an effect on how difficult your calculation is. Sometimes, the best choice for the mass of an electron is “one electron-mass”, because it lets you calculate something else more easily. This is easier to illustrate with other choices: for example, if you have to pick a reference frame for a collision, picking one in which one of the objects is at rest, or where they move symmetrically, might make your job easier.
• Stick to questions that aren’t arbitrary: No matter what units we use, the electron’s mass will be arbitrary. Its ratios to other masses won’t be though. No matter where we measure, dimensionless ratios like the mass of the muon divided by the mass of the electron, or the mass of the electron divided by the value of the Higgs field, will be the same. If we can make sure to ask only this kind of question, we can avoid arbitrariness. Note that we can think of even a mass in “kilograms” as this kind of question: what’s the ratio of the mass of the electron to “this arbitrary thing we’ve chosen”? In practice though, you want to compare things in the same theory, without the historical baggage of the metric system.
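To make the options above concrete, here’s a quick sketch of the same electron mass under different choices, using standard CODATA/SI constants (nothing specific to this post):

```python
# Standard constants: electron mass in kilograms, plus the speed of
# light and elementary charge (both exact by definition in the SI).
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C

# The same mass in the "natural" particle-physics unit, via E = m c^2:
m_e_MeV = m_e * c ** 2 / e / 1e6
print(round(m_e_MeV, 3))  # 0.511

# A dimensionless ratio, the same no matter which units we started from:
m_mu = 1.883531627e-28   # muon mass, kg
print(round(m_mu / m_e, 2))  # 206.77
```

The first two numbers are different descriptions of the same thing, tied to arbitrary unit choices; the last one is a question nature answers the same way everywhere.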

This problem may seem silly, and if we just cared about units it might be. But at the cutting-edge of physics there are still areas where the arbitrary shows up. Our choices of how to handle it, or how to avoid it, can be crucial to further progress.

# A Taste of Normal

I grew up in the US. I’ve roamed over the years, but each year I’ve managed to come back around this time. My folks throw the kind of Thanksgiving you see in movies, a table overflowing with turkey and nine kinds of pie.

This year, obviously, is different. No travel, no big party. Still, I wanted to capture some of the feeling here in my cozy Copenhagen apartment. My wife and I baked mini-pies instead, a little feast just for us two.

In these weird times, it’s good to have the occasional taste of normal, a dose of tradition to feel more at home. That doesn’t just apply to personal life, but to academic life as well.

One tradition among academics is the birthday conference. Often timed around a 60th birthday, birthday conferences are a way to celebrate the achievements of professors who have made major contributions to a field. There are talks by their students and close collaborators, filled with stories of the person being celebrated.

Last week was one such conference, in honor of one of the pioneers of my field, Dirk Kreimer. The conference was Zoom-based, and it was interesting to compare with the other Zoom conferences I’ve seen this year. One thing that impressed me was how they handled the “social side” of the conference. Instead of a Slack space like the other conferences, they used a platform called Gather. Gather gives people avatars on a 2D map, mocked up to look like an old-school RPG. Walk close to a group of people, and it lets you video chat with them. There are chairs and tables for private conversations, whiteboards to write on, and in this case even a birthday card to sign.

I didn’t get a chance to try Gather. My guess is it’s a bit worse than Slack for some kinds of discussion. Start a conversation in a Slack channel and people can tune in later from other time zones, each posting new insights and links to references. It’s a good way to hash out an idea.

But a birthday conference isn’t really about hashing out ideas. It’s about community and familiarity, celebrating people we care about. And for that purpose, Gather seems great. You want that little taste of normalcy, of walking across the room and seeing a familiar face, chatting with the folks you keep seeing year after year.

I’ve mused a bit about what it takes to do science when we can’t meet in person. Part of that is a question of efficiency: what does it take to get research done? But if we focus too much on that, we might forget the role of culture. Scientists are people, we form a community, and part of what we value is comfort and familiarity. Keeping that community alive means not just good research discussions, but traditions as well, ways of referencing things we’ve done to carry forward to new circumstances. We will keep changing, our practices will keep evolving. But if we want those changes to stick, we should tie them to the past too. We should keep giving people those comforting tastes of normal.

# Science and Its Customers

In most jobs, you know who you’re working for.

A chef cooks food, and people eat it. A tailor makes clothes, and people wear them. An artist has an audience, an engineer has end users, a teacher has students. Someone out there benefits directly from what you do. Make them happy, and they’ll let you know. Piss them off, and they’ll stop hiring you.

Science benefits people too…but most of its benefits are long-term. The first person to magnetize a needle couldn’t have imagined worldwide electronic communication, and the scientists who uncovered quantum mechanics couldn’t have foreseen transistors, or personal computers. The world benefits just by having more expertise in it, more people who spend their lives understanding difficult things, and train others to understand difficult things. But those benefits aren’t easy to see for each individual scientist. As a scientist, you typically don’t know who your work will help, or how much. You might not know for years, or even decades, what impact your work will have. Even then, it will be difficult to tease out your contribution from the other scientists of your time.

We can’t ask the customers of the future to pay for the scientists of today. (At least, not straightforwardly.) In practice, scientists are paid by governments and foundations, groups trying on some level to make the future a better place. Instead of feedback from customers we get feedback from each other. If our ideas get other scientists excited, maybe they’ll matter down the road.

This is a risky thing to do, of course. Governments, foundations, and scientists can’t tell the future. They can try to act in the interests of future generations, but they might just act for themselves instead. Trying to plan ahead like this makes us prey to all the cognitive biases that flesh is heir to.

But we don’t really have an alternative. If we want to have a future at all, if we want a happier and more successful world, we need science. And if we want science, we can’t ask its real customers, the future generations, to choose whether to pay for it. We need to work for the smiles on our colleagues’ faces and the checks from government grant agencies. And we need to do it carefully enough that at the end of the day, we still make a positive difference.

# What You Don’t Know, You Can Parametrize

In physics, what you don’t know can absolutely hurt you. If you ignore that planets have their own gravity, or that metals conduct electricity, you’re going to calculate a lot of nonsense. At the same time, as physicists we can’t possibly know everything. Our experiments are never perfect, our math never includes all the details, and even our famous Standard Model is almost certainly not the whole story. Luckily, we have another option: instead of ignoring what we don’t know, we can parametrize it, and estimate its effect.

Estimating the unknown is something we physicists have done since Newton. You might think Newton’s big discovery was the inverse-square law for gravity, but others at the time, like Robert Hooke, had also been thinking along those lines. Newton’s big discovery was that gravity was universal: that you need to know the effect of gravity, not just from the sun, but from all the other planets as well. The trouble was, Newton didn’t know how to calculate the motion of all of the planets at once (in hindsight, we know he couldn’t have). Instead, he estimated, using what he knew to guess how big the effect of what he didn’t would be. It was the accuracy of those guesses, not just the inverse square law by itself, that convinced the world that Newton was right.

If you’ve studied electricity and magnetism, you get to the point where you can do simple calculations with a few charges in your sleep. The world doesn’t have just a few charges, though: it has many charges, protons and electrons in every atom of every object. If you had to keep all of them in your calculations you’d never pass freshman physics, but luckily you can once again parametrize what you don’t know. Often you can hide those charges away, summarizing their effects with just a few numbers. Other times, you can treat materials as boundaries, and summarize everything beyond in terms of what happens on the edge. The equations of the theory let you do this, but this isn’t true for every theory: for the Navier-Stokes equation, which we use to describe fluids, it still isn’t known whether you can do this kind of trick.

Parametrizing what we don’t know isn’t just a trick for college physics, it’s key to the cutting edge as well. Right now we have a picture for how all of particle physics works, called the Standard Model, but we know that picture is incomplete. There are a million different theories you could write to go beyond the Standard Model, with a million different implications. Instead of having to use all those theories, physicists can summarize them all with what we call an effective theory: one that keeps track of the effect of all that new physics on the particles we already know. By summarizing those effects with a few parameters, we can see what they would have to be to be compatible with experimental results, ruling out some possibilities and suggesting others.

In a world where we never know everything, there’s always something that can hurt us. But if we’re careful and estimate what we don’t know, if we write down numbers and parameters and keep our options open, we can keep from getting burned. By focusing on what we do know, we can still manage to understand the world.

# At “Antidifferentiation and the Calculation of Feynman Amplitudes”

I was at a conference this week, called Antidifferentiation and the Calculation of Feynman Amplitudes. The conference is a hybrid kind of affair: I attended via Zoom, but there were seven or so people actually there in the room (the room in question being at DESY Zeuthen, near Berlin).

The road to this conference was a bit of a roller-coaster. It was originally scheduled for early March. When the organizers told us they were postponing it, the decision seemed at the time a little overcautious…until the world proved me, and all of us, wrong. They rescheduled for October, and as more European countries got their infection rates down it looked like the conference could actually happen. We booked rooms at the DESY guest house, until it turned out they needed the space to keep the DESY staff socially distanced, and we quickly switched to booking at a nearby hotel.

Then Europe’s second wave hit. Cases in Denmark started to rise, so Germany imposed a quarantine on entry from Copenhagen and I switched to remote participation. Most of the rest of the participants did too, even several in Germany. For the few still there in person they have a variety of measures to stop infection, from fixed seats in the conference room to gloves for the coffee machine.

The content has been interesting. It’s an eclectic mix of review talks and talks on recent research, all focused on different ways to integrate (or, as one of the organizers emphasized, antidifferentiate) functions in quantum field theory. I’ve learned about the history of the field, and gotten a better feeling for the bottlenecks in some LHC-relevant calculations.

This week was also the announcement of the Physics Nobel Prize. I’ll do my traditional post on it next week, but for now, congratulations to Penrose, Genzel, and Ghez!

# When and How Scientists Reach Out

You’ve probably heard of the myth of the solitary scientist. While Newton might have figured out calculus isolated on his farm, most scientists work better when they communicate. If we reach out to other scientists, we can make progress a lot faster.

Even if you understand that, you might not know what that reaching out actually looks like. I’ve seen far too many crackpots who approach scientific communication like a spammer: sending out emails to everyone in a department, commenting in every vaguely related comment section they can find. While commercial spammers hope for a few gullible people among the thousands they contact, that kind of thing doesn’t benefit crackpots. As far as I can tell, they communicate that way because they genuinely don’t know any better.

So in this post, I want to give a road map for how we scientists reach out to other scientists. Keep these steps in mind, and if you ever need to reach out to a scientist you’ll know what to do.

First, decide what you want to know. This may sound obvious, but sometimes people skip this step. We aren’t communicating just to communicate, but because we want to learn something from the other person. Maybe it’s a new method or idea, maybe we just want confirmation we’re on the right track. We don’t reach out just to “show our theory”, but because we hope to learn something from the response.

Then, figure out who might know it. To do this, we first need to decide how specialized our question is. We often have questions about specific papers: a statement we don’t understand, a formula that seems wrong, or a method that isn’t working. For those, we contact an author from that paper. Other times, the question hasn’t been addressed in a paper, but does fall under a specific well-defined topic: a particular type of calculation, for example. For those we seek out a specialist on that specific topic. Finally, sometimes the question is more general, something anyone in our field might in principle know but we happen not to. For that kind of question, we look for someone we trust, someone we have a prior friendship with and feel comfortable asking “dumb questions”. These days, we can supplement that with platforms like PhysicsOverflow that let us post technical questions and invite anyone to respond.

Note that, for all of these, there’s some work to do first. We need to read the relevant papers, bone up on a topic, even check Wikipedia sometimes. We need to put in enough work to at least try to answer our question, so that we know exactly what we need the other person for.

Finally, contact them appropriately. Papers will usually give contact information for one, or all, of the authors. University websites will give university emails. We’d reach out with something like that first, and switch to personal email (or something even more casual, like Skype or social media) only for people we already have a track record of communicating with in that way.

By posing and directing our questions well, scientists can reach out and get help when we struggle. Science is a team effort: we’re stronger when we work together.