I remember, a while back, visiting a friend in his office. He had just become a professor, and was still setting things up. I noticed a list on the chalkboard, taking up almost one whole side. Taking a closer look, I realized it was a list of projects. To my young postdoc eyes, the list was amazing: how could one person be working on so many things?
There’s an idiom in English, “too many irons in the fire”. You can imagine a blacksmith forging many things at once, each piece of iron taking focus from the others. Too many, and a piece might break, or otherwise fail.
In theoretical physics, a typical PhD student publishes three papers before graduating. That usually means one project at a time, maybe two. For someone used to one or two irons in the fire, so many at once seems an impossible feat.
Scientists grow over their careers, though, and in more than one way. What seems impossible can eventually be business as usual.
First, as your skill grows, you become more efficient. A lot of scientific work is a kind of debugging: making mistakes, and figuring out how to fix them. The more experience you have, the more you know what kinds of mistakes you might make, and the better you will be at avoiding them. (Never perfect, of course: scientists always have to debug something.)
Second, your collaborations grow. The more people you work with, the more you can share these projects, each person contributing their own piece. With time, you start supervising as well: Masters students, PhD students, postdocs. Each one adds to the number of irons you can manage in your fire. While for bad supervisors this just means having their name on lots of projects, the good supervisors will be genuinely contributing to each one. That’s yet another kind of growth: as you get further along, you get a better idea of what works and what doesn’t, so even in a quick meeting you can solve meaningful problems.
Third, you grow your system. The ideas you explore early on blossom into full-fledged methods, tricks which you can pull out again and again when you need them. The tricks combine, forming new, bigger tricks, and eventually a long list of projects becomes second nature, a natural thing your system is able to do.
As you grow as a scientist, you become more than just one researcher, one debugger at a laptop or pipetter at a lab bench. You become a research program, one that manifests across many people and laptops and labs. As your expertise grows, you become a kind of living exchange of ideas, concepts flowing through you when needed, building your own scientific world.
It’s time for my yearly Halloween post. My regular readers know what to expect: a horror trope and a physics topic, linked by a tortured analogy. And this year, the pun is definitely intended.
Horror movies have a fascination with serial killers. Over the years, they’ve explored every possible concept: from gritty realism to the supernatural, crude weapons to sophisticated traps, motivations straightforward to mysterious, and even killers who are puppets.
One common theme of all fictional serial killers is power. Serial killers are scary because they have almost all the power in a situation, turned to alien and unpredictable goals. The protagonists of a horror film are the underdogs, never knowing whether the killer will pull out some new ability or plan that makes everything they try irrelevant. Even if they get the opportunity to negotiate, the power imbalance means that they can’t count on getting what they need: anything the killer agrees to will be twisted to serve their own ends.
Academics tell their own kind of horror stories. Earlier this month, the historian Brett Deveraux had a blog post about graduate school, describing what students go through to get a PhD. As he admits, parts of his story only apply to the humanities. STEM departments have more money, and pay their students a bit better. It’s not a lot better (I was making around $20,000 a year at Stony Brook), but it’s enough that I’ve never heard of a student taking out a loan to make ends meet. (At most, people took on tutoring jobs for a bit of extra cash.) We don’t need to learn new languages, and our degrees take a bit less time: six or seven years for an experimental physicist, and often five for a theoretical physicist. Finally, the work can be a lot less lonely, especially for those who work in a lab.
Still, there is a core in common, and that core once again is power. Universities have power, of course: and when you’re not a paying customer but an employee with your career on the line, that power can be quite scary. But the person with the most power over a PhD student is their advisor. Deveraux talks compellingly about the difference that power can make: how an advisor who is cruel, or indifferent, or just clueless, can make or break not just your career but your psychological well-being. The lucky students, like Deveraux and me, find supportive mentors who help us survive and move forward. The unlucky students leave with scars, even if those scars aren’t jigsaw-shaped.
Neither Deveraux nor I have experience with PhD programs in Europe, which are quite different in structure from those in the US. But the power imbalance is still there, and still deadly, and so despite the different structure, I’ve seen students here break down, scarred in the same way.
Deveraux frames his post as advice for those who want to go to grad school, and his first piece of advice is “Have you tried wanting something else?” I try to echo that when I advise students. I don’t always succeed: there’s something exciting about a young person interested in the same topics we’re interested in, willing to try to make a life of it. But it is important to know what you’re getting into, and to know there’s a big world out there of other options. If, after all that, you decide to stick through it, just remember: power matters. If you give someone power over you, try to be as sure as you can that it won’t turn into a horror story.
What do TED talks and grant applications have in common?
Put a scientist on a stage, and what happens? Some of us panic and mumble. Others are as smooth as a movie star. Most, though, fall back on a well-practiced mode: “self-promotion voice”.
A scientist doing self-promotion voice is easy to recognize. We focus on ourselves, of course (that’s in the name!), talking about all the great things we’ve done. If we have to mention someone else, we make sure to link it in some way: “my colleague”, “my mentor”, “which inspired me to”. All vulnerability is “canned” in one way or another: “challenges we overcame”, light touches on the most sympathetic of issues. Usually, we aren’t negative towards our colleagues either: apart from the occasional very distant enemy, everyone is working with great scientific virtue. If we talk about our past, we tell the same kinds of stories, mentioning our youthful curiosity and deep buzzwordy motivations. Any jokes or references are carefully pruned, made accessible to the lowest common denominator. This results in a standard vocabulary: see a metaphor, a quote, or a turn of phrase, and you’re bound to see it in talks again and again and again. Things get even more repetitive when you take into account how often we lean on the voice: a given speech or piece will be assembled from elementary pieces, snippets of practiced self-promotion that we pour in like packing peanuts after a minimal edit, filling all available time and word count.
Packing peanuts may not be glamorous, but they get the job done. A scientist who can’t do “the voice” is going to find life a lot harder, their negativity or clumsiness turning away support when they need it most. Except for the greatest of geniuses, we all have to learn a bit of self-promotion to stay employed.
We don’t have to stop there, though. Self-promotion voice works, but it’s boring and stilted, and it all looks basically the same. If we can do something a bit more authentic then we stand out from the crowd.
I’ve been learning this more and more lately. My blog posts have always run the gamut: some are pure formula, but the ones I’m most proud of have a voice all their own. Over the years, I’ve been pushing my applications in that direction. Each grant and job application has a bit of the standard self-promotion voice pruned away, and a bit of another voice (my own voice?) sneaking in. This year, as I send out applications, I’ve been tweaking things. I almost hope the best jobs come late in the year; my applications will be better then!
A couple weeks back someone linked to this blog with a problem. A non-academic, he had done some mathematical work but didn’t feel it was ready to publish. He reached out to a nearby math department and asked what they would charge to help him clean up the work. If the price was reasonable, he’d do it; if not, at least he’d know what it would cost.
Neither happened. He got no response, and got more and more frustrated.
For many of you, that result isn’t a big surprise. My academic readers are probably cringing at the thought of getting an email like that. But the guy’s instinct here isn’t too far off base. Certainly, in many industries that kind of email would get a response with an actual quote. Academia happens to be different, in a way that makes the general rule not really apply.
There’s a community called Effective Altruism that evaluates charities. They have a saying, “Money is the Unit of Caring”. The point of the saying isn’t that people with more money care more, or anything like that. Rather, it’s a reminder that, whatever a charity wants to accomplish, more money makes it easier. A lawyer could work an hour in a soup kitchen, but if they donated the proceeds of an hour’s work the soup kitchen could hire four workers instead. Food banks would rather receive money than food, because the money lets them buy whatever they need in bulk. As the Simpsons meme says, “money can be exchanged for goods and services”.
If you pay a charity, or a business, it helps them achieve what they want to do. If you pay an academic, it gets a bit more complicated.
The problem is that for academics, time matters a lot more than our bank accounts. If we want to settle down with a stable job, we need to spend our time doing things that look good on job applications: writing papers, teaching students, and so on. The rest of the time gets spent resting so we have the energy to do all of that.
(What about tenured professors? They don’t have to fight for their own jobs…but by that point, they’ve gotten to know their students and other young people in their sub-field. They want them to get jobs too!)
Money can certainly help with those goals, but not personal money: grant money. With grant money we can hire students and postdocs to do some of that work for us, or pay our own salary so we’re easier for a university to hire. We can buy equipment for those who need that sort of thing, and get to do more interesting science. Rather than “Money is the Unit of Caring”, for academics, “Grant Money is the Unit of Caring”.
Personal money, in contrast, just matters for our rest time. And unless we have expensive tastes, we usually get paid enough for that.
(The exception is for extremely underpaid academics, like PhD students and adjuncts. For some of them money can make a big difference to their quality of life. I had quite a few friends during my PhD who had side gigs, like tutoring, to live a bit more comfortably.)
This is not to say that it’s impossible to pay academics to do side jobs. People do. But when it works, it’s usually due to one of these reasons:
It’s fun. Side work trades against rest time, but if it helps us rest up then it’s not really a tradeoff. Even if it’s a little more boring than what we’d rather do, if it’s not so bad the money can make up the difference.
It looks good on a CV. This covers most of the things academics are sometimes paid to do, like writing articles for magazines. If we can spin something as useful to our teaching or research, or as good for the greater health of the field (or just for our “personal brand”), then we can justify doing it.
It’s a door out of academia. I’ve seen the occasional academic take time off to work for a company. Usually that’s a matter of seeing what it’s like, and deciding whether it looks like a better life. It’s not really “about the money”, even in those cases.
So what if you need an academic’s help with something? You need to convince them it’s worth their time. Money could do it, but only if they’re living precariously, like some PhD students. Otherwise, you need to show that what you’re asking helps the academic do what they’re trying to do: that it is likely to move the field forward, or that it fulfills some responsibility tied to their personal brand. Without that, you’re not likely to hear back.
In science, every project is different. Sometimes, my collaborators and I have a clear enough goal, and a clear enough way to get there. There are always surprises along the way, of course, but nonetheless we keep a certain amount of structure. That can mean dividing tasks (“you find the basis, I’ll find the constraints”), or it can mean everyone doing the same work in parallel, like a group of students helping each other with homework.
Recently, I’ve experienced a different kind of collaboration. The goals are less clear, and the methods are more…playful.
A big task improves with collaboration: you can divide it up. A delicate task improves with collaboration: you can check each other’s work. An unclear task also improves with collaboration: you can explore more ground.
Picture a bunch of children playing in a sandbox. The children start out sitting by themselves, each digging in the sand. Some are building castles, others dig moats, or search for buried treasure, or dinosaur bones. As the children play, their games link up: the moat protects the castle, the knights leave for treasure, the dinosaur awakens and attacks. The stories feed back on one another, and the game grows.
The project I’m working on now is a bit like that sandbox. Each of us has our own ideas about what we’d like to build, and each experiments with them. We see what works and what doesn’t, which castles hold and which fall over. We keep an eye on what the others are doing, and adjust: if that castle is close to done, maybe a moat would improve the view. Piece by piece, the unclear task becomes clearer. Our individual goals draw us in different directions, but what we discover in the end brings us back together, richer for our distant discoveries.
Working this way requires a lot of communication! In the past, I was mystified when I saw other physicists spend hours talking at a blackboard. I thought that must be a waste of time: surely they’d get more done if they sat at their desks and worked things out, rather than having to talk through every step. Now I realize they were likely part of a different kind of collaboration: not dividing tasks or working in parallel on a clear calculation, but exploring different approaches. In these collaborations, those long chats are a kind of calibration: by explaining what you’re trying to do, you see whether it makes sense to your collaborators. You can drop the parts that don’t make sense and build in some of your collaborators’ ideas. In the end you begin to converge, to something that everyone can endorse. Your sandcastles meet up, your stories become one story. When everything looks good, you’re ready to call over your mom (or in this case, the arXiv) and show it off.
Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.
Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.
A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why, it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell YouTube wouldn’t let me re-open the previous stream so each time I had to post a new link, which probably was frustrating for those following along there.
That said, this was less of a problem than it might have been, because attendance/”viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.
The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in the future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.
In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.
A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would be completely off from experiment and bringing it into line. These are more useful the more that can be varied about the calculation: functions are more useful than numbers, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.
If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.
On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.
There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere” where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.
We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I somewhat cheesily describe as “good for the soul”: a look to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which after a hasty discussion during this year’s conference has now localized to Prague. Let’s hope it’s in person!
I’m busy this week with Amplitudes 2021. Being behind the “organizer’s desk” for one of these conferences is an entirely different experience. There’s a lot to keep track of, keeping the Zoom going smoothly, the website up to date, and the YouTube stream running. Luckily we have good help, a team of students handling a lot of the more finicky details. I think we’ve been putting on a good conference, but there are definitely lessons I’ve learned for the next time I host something.
The content has been interesting too of course, and despite being busy I’ve still gotten to watch the talks. I’ll say more about this after the conference, there have been quite a few interesting developments in the past year.
I calculate things called scattering amplitudes, the building-blocks of predictions in particle physics. I’m part of a community of “amplitudeologists” that try to find better ways to compute these things, to achieve more efficiency and deeper understanding. We meet once a year for our big conference, called Amplitudes. And this year, I’m one of the organizers.
This year also happens to be the 100th anniversary of the founding of the Niels Bohr Institute, so we wanted to do something special. We found a group of artists working on a rendering of Niels Bohr. The original idea was to do one of those celebrity holograms, but after the conference went online we decided to make a few short clips instead. I wrote a Bohr-esque script, and we got help from one of Bohr’s descendants to get the voice just so. Now, you can see the result, as our digital Bohr invites you to the conference.
We’ll be livestreaming the conference on the same YouTube channel, and posting videos of the talks each day. If you’re curious about the latest developments in scattering amplitudes, I encourage you to tune in. And if you’re an amplitudeologist yourself, registration is still open!
The scientific method, as we usually learn it, starts with a hypothesis. The scientist begins with a guess, and asks a question with a clear answer: true, or false? That guess lets them design an experiment, observe the consequences, and improve our knowledge of the world.
But where did the scientist get the hypothesis in the first place? Often, through some form of exploratory research.
Exploratory research is research done, not to answer a precise question, but to find interesting questions to ask. Each field has their own approach to exploration. A psychologist might start with interviews, asking broad questions to find narrower questions for a future survey. An ecologist might film an animal, looking for changes in its behavior. A chemist might measure many properties of a new material, seeing if any stand out. Each approach is like digging for treasure, not sure of exactly what you will find.
Mathematicians and theoretical physicists don’t do experiments, but we still need hypotheses. We need an idea of what we plan to prove, or what kind of theory we want to build: like other scientists, we want to ask a question with a clear, true/false answer. And to find those questions, we still do exploratory research.
What does exploratory research look like, in the theoretical world? Often, it begins with examples and calculations. We can start with a known method, or a guess at a new one, a recipe for doing some specific kind of calculation. Recipe in hand, we proceed to do the same kind of calculation for a few different examples, covering different sorts of situation. Along the way, we notice patterns: maybe the same steps happen over and over, or the result always has some feature.
We can then ask, do those same steps always happen? Does the result really always have that feature? We have our guess, our hypothesis, and our attempt to prove it is much like an experiment. If we find a proof, our hypothesis was true. On the other hand, we might not be able to find a proof. Instead, exploring, we might find a counterexample – one where the steps don’t occur, the feature doesn’t show up. That’s one way to learn that our hypothesis was false.
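To make the loop concrete, here’s a toy illustration (my own, not from any real project): explore a few examples of Euler’s famous prime-generating polynomial, conjecture a pattern, then hunt for a counterexample.

```python
def is_prime(n):
    """Simple trial-division primality check, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

# Step 1: explore. Euler's polynomial n^2 + n + 41 looks prime for small n.
examples = [n * n + n + 41 for n in range(10)]
print(examples, all(is_prime(v) for v in examples))

# Step 2: hypothesize "always prime", then test more broadly,
# searching for the first counterexample.
counterexample = next(n for n in range(1000) if not is_prime(n * n + n + 41))
print(counterexample)  # n = 40, since 40^2 + 40 + 41 = 41^2
```

The pattern survives the first handful of examples, which is what makes it a tempting hypothesis, but the wider search turns up a counterexample at n = 40. Real exploratory research works the same way, just with calculations that take months instead of milliseconds.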
This kind of exploration is essential to discovery. As scientists, we all have to eventually ask clear yes/no questions, to submit our beliefs to clear tests. But we can’t start with those questions. We have to dig around first, to observe the world without a clear plan, to get to a point where we have a good question to ask.
I’ve found that when it comes to reading papers, there are two distinct things I look for.
Sometimes, I read a paper looking for an answer. Typically, this is a “how to” kind of answer: I’m trying to do something, and the paper I’m reading is supposed to explain how. More rarely, I’m directly using a result: the paper proved a theorem or computed a formula, and I just take it as written and use it to calculate something else. Either way, I’m seeking out the paper with a specific goal in mind, which typically means I’m reading it long after it came out.
Other times, I read a paper looking for a question. Specifically, I look for the questions the author couldn’t answer. Sometimes these are things they point out, limitations of their result or opportunities for further study. Sometimes, these are things they don’t notice, holes or patterns in their results that make me wonder “what if?” Either can be the seed of a new line of research, a problem I can solve with a new project. If I read a paper in this way, typically it just came out, and this is the first time I’ve read it. When that isn’t the case, it’s because I start out with another reason to read it: often I’m looking for an answer, only to realize the answer I need isn’t there. The missing answer then becomes my new question.
I’m curious about the balance of these two behaviors in different fields. My guess is that some fields read papers more for their answers, while others read them more for their questions. If you’re working in another field, let me know what you do in the comments!