In science, every project is different. Sometimes, my collaborators and I have a clear enough goal, and a clear enough way to get there. There are always surprises along the way, of course, but nonetheless we keep a certain amount of structure. That can mean dividing tasks (“you find the basis, I’ll find the constraints”), or it can mean everyone doing the same work in parallel, like a group of students helping each other with homework.
Recently, I’ve experienced a different kind of collaboration. The goals are less clear, and the methods are more…playful.
A big task improves with collaboration: you can divide it up. A delicate task improves with collaboration: you can check each other’s work. An unclear task also improves with collaboration: you can explore more ground.
Picture a bunch of children playing in a sandbox. The children start out sitting by themselves, each digging in the sand. Some are building castles, others dig moats, or search for buried treasure, or dinosaur bones. As the children play, their games link up: the moat protects the castle, the knights leave for treasure, the dinosaur awakens and attacks. The stories feed back on one another, and the game grows.
The project I’m working on now is a bit like that sandbox. Each of us has our own ideas about what we’d like to build, and each experiments with them. We see what works and what doesn’t, which castles hold and which fall over. We keep an eye on what the others are doing, and adjust: if that castle is close to done, maybe a moat would improve the view. Piece by piece, the unclear task becomes clearer. Our individual goals draw us in different directions, but what we discover in the end brings us back together, richer for our distant discoveries.
Working this way requires a lot of communication! In the past, I was mystified when I saw other physicists spend hours talking at a blackboard. I thought that must be a waste of time: surely they’d get more done if they sat at their desks and worked things out, rather than having to talk through every step. Now I realize they were likely part of a different kind of collaboration: not dividing tasks or working in parallel on a clear calculation, but exploring different approaches. In these collaborations, those long chats are a kind of calibration: by explaining what you’re trying to do, you see whether it makes sense to your collaborators. You can drop the parts that don’t make sense and build in some of your collaborators’ ideas. In the end you begin to converge on something that everyone can endorse. Your sandcastles meet up, your stories become one story. When everything looks good, you’re ready to call over your mom (or in this case, the arXiv) and show it off.
Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.
Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.
A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why; it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell YouTube wouldn’t let me re-open the previous stream, so each time I had to post a new link, which probably was frustrating for those following along there.
That said, this was less of a problem than it might have been, because attendance/“viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.
The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in the future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.
In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.
A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would be completely off from experiment and bringing it into line. Such calculations are more useful the more they can be varied: a full function is more useful than a single number, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.
If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.
On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.
There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere” where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.
We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I somewhat cheesily describe as “good for the soul”: a look to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which after a hasty discussion during this year’s conference has now localized to Prague. Let’s hope it’s in person!
A couple different things that some of you might like to know about:
Are you an amateur with an idea you think might revolutionize all of physics? If so, absolutely do not contact me about it. Instead, you can talk to these people. Sabine Hossenfelder runs a service that will hook you up with a scientist who will patiently listen to your idea and help you learn what you need to develop it further. They do charge for that service, and they aren’t cheap, so only do this if you can comfortably afford it. If you can’t, then I have some advice in a post here. Try to contact people who are experts in the specific topic you’re working on, ask concrete questions that you expect to yield useful answers, and be prepared to do some background reading.
Are you an undergraduate student planning for a career in theoretical physics? If so, consider the Perimeter Scholars International (PSI) master’s program. Located at the Perimeter Institute in Waterloo, Canada, PSI is an intense one-year boot-camp in theoretical physics, teaching the foundational ideas you’ll need for the rest of your career. It’s something I wish I had been aware of when I was applying for schools at that age. Theoretical physics is a hard field, and a big part of what makes it hard is all the background knowledge one needs to take part in it. Starting work on a PhD with that background knowledge already in place can be a tremendous advantage. There are other programs with similar concepts, but I’ve gotten a really good impression of PSI specifically, so it’s them I would recommend. Note that applications for the new year aren’t open yet: I always plan to advertise them when they open, and I always forget. So consider this an extremely-early warning.
Are you an amplitudeologist? Registration for Amplitudes 2021 is now live! We’re doing an online conference this year, co-hosted by the Niels Bohr Institute and Penn State. We’ll be doing a virtual poster session, so if you want to contribute to that please include a title and abstract when you register. We also plan to stream on YouTube, and will have a fun online surprise closer to the conference date.
I’ve found that when it comes to reading papers, there are two distinct things I look for.
Sometimes, I read a paper looking for an answer. Typically, this is a “how to” kind of answer: I’m trying to do something, and the paper I’m reading is supposed to explain how. More rarely, I’m directly using a result: the paper proved a theorem or computed a formula, and I just take it as written and use it to calculate something else. Either way, I’m seeking out the paper with a specific goal in mind, which typically means I’m reading it long after it came out.
Other times, I read a paper looking for a question. Specifically, I look for the questions the author couldn’t answer. Sometimes these are things they point out, limitations of their result or opportunities for further study. Sometimes, these are things they don’t notice, holes or patterns in their results that make me wonder “what if?” Either can be the seed of a new line of research, a problem I can solve with a new project. If I read a paper in this way, typically it just came out, and this is the first time I’ve read it. When that isn’t the case, it’s because I start out with another reason to read it: often I’m looking for an answer, only to realize the answer I need isn’t there. The missing answer then becomes my new question.
I’m curious about the balance of these two behaviors in different fields. My guess is that some fields read papers more for their answers, while others read them more for their questions. If you’re working in another field, let me know what you do in the comments!
A reader pointed me to Stephen Wolfram’s one-year update of his proposal for a unified theory of physics. I was pretty squeamish about it one year ago, and now I’m even less interested in wading in to the topic. But I thought it would be worth saying something, and rather than say something specific, I realized I could say something general. I thought I’d talk a bit about how we judge good and bad research in theoretical physics.
In science, there are two things we want out of a new result: we want it to be true, and we want it to be surprising. The first condition should be obvious, but the second is also important. There’s no reason to do an experiment or calculation if it will just tell us something we already know. We do science in the hope of learning something new, and that means that the best results are the ones we didn’t expect.
(What about replications? We’ll get there.)
If you’re judging an experiment, you can measure both of these things with statistics. Statistics lets you estimate how likely an experiment’s conclusion is to be true: was there a large enough sample? Strong enough evidence? It also lets you judge how surprising the experiment is, by estimating how likely it would be to happen given what was known beforehand. Did existing theories and earlier experiments make the result seem likely, or unlikely? While you might not have considered replications surprising, from this perspective they can be: if a prior experiment seems unreliable, successfully replicating it can itself be a surprising result.
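As a toy illustration of that statistical judgment (the numbers and the `posterior` helper here are my own inventions, not anything from the original discussion), you can sketch both measures in a few lines of Bayes’ rule: how likely the conclusion is to be true given the data, and how unexpected the data were given what was known beforehand.

```python
# Toy Bayesian sketch (made-up numbers): judging an experiment on two axes,
# "is the conclusion likely true?" and "was the result surprising?"

def posterior(prior, p_data_if_true, p_data_if_false):
    """Probability the conclusion is true, given the observed data (Bayes' rule)."""
    evidence = prior * p_data_if_true + (1 - prior) * p_data_if_false
    return prior * p_data_if_true / evidence

# A claim that existing theory considered unlikely...
prior = 0.1
# ...tested by an experiment whose outcome strongly favors the claim if it holds.
post = posterior(prior, p_data_if_true=0.9, p_data_if_false=0.05)

# "Surprise" as the improbability of seeing this data given prior knowledge:
prob_of_data = prior * 0.9 + (1 - prior) * 0.05
surprise = 1 - prob_of_data

print(round(post, 3))      # how likely the conclusion now is to be true
print(round(surprise, 3))  # how unexpected the result was beforehand
```

On these made-up numbers, the result is both fairly convincing and quite surprising; a successful replication would then raise the prior for the next round, making the same outcome less surprising but better established.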
If instead you’re judging a theoretical result, these measures get more subtle. There aren’t always good statistical tools to test them. Nonetheless, you don’t have to rely on vague intuitions either. You can be fairly precise, both about how true a result is and how surprising it is.
We get our results in theoretical physics through mathematical methods. Sometimes, this is an actual mathematical proof: guaranteed to be true, no statistics needed. Sometimes, it resembles a proof, but falls short: vague definitions and unstated assumptions mar the argument, making it less likely to be true. Sometimes, the result uses an approximation. In those cases we do get to use some statistics, estimating how good the approximation may be. Finally, a result can’t be true if it contradicts something we already know. This could be a logical contradiction in the result itself, but if the result is meant to describe reality (note: not always the case), it might contradict the results of a prior experiment.
What makes a theoretical result surprising? And how precise can we be about that surprise?
Theoretical results can be surprising in the light of earlier theory. Sometimes, this gets made precise by a no-go theorem, a proof that some kind of theoretical result is impossible to obtain. If a result finds a loophole in a no-go theorem, that can be quite surprising. Other times, a result is surprising because it’s something no-one else was able to do. To be precise about that kind of surprise, you need to show that the result is something others wanted to do, but couldn’t. Maybe someone else made a conjecture, and only you were able to prove it. Maybe others did approximate calculations, and now you can do them more precisely. Maybe a question was controversial, with different people arguing for different sides, and you have a more conclusive argument. This is one of the better reasons to include a long list of references in a paper: not to pad your friends’ citation counts, but to show that your accomplishment is surprising: that others might have wanted to achieve it, but had to settle for something lesser.
In general, this means that showing a theoretical result is good (not merely true, but surprising and new) links you up to the rest of the theoretical community. You can put in all the work you like on a theory of everything, and make it as rigorous as possible, but if all you did was reproduce a sub-case of someone else’s theory then you haven’t accomplished all that much. If you put your work in context, compare and contrast to what others have done before, then we can start getting precise about how much we should be surprised, and get an idea of what your result is really worth.
Pedagogy courses have a mixed reputation among physicists, and for once I don’t just mean “mixed” as a euphemism for “bad”. I’ve met people who found them very helpful, and I’ve been told that attending a Scandinavian pedagogy course looks really good on a CV. On the other hand, I’ve heard plenty of horror stories of classes that push a jumble of dogmatic requirements and faddish gimmicks, all based on research that, if anything, has more of a replication crisis going on than psychology does.
With that reputation in mind, I went into the pedagogy course last week hopeful, but skeptical. In part, I wasn’t sure whether pedagogy was the kind of thing that could be taught. Each class is different, and so much of what makes a bad or good teacher seems to be due to experience, which one can’t get much of in a one-week course. I couldn’t imagine what facts a pedagogy course could tell me that would actually improve my teaching, and wouldn’t just be ill-justified dogma.
The answer, it turned out, would be precisely the message of the course. A pedagogy course that drills you in “pedagogy facts” would indeed be annoying. But one of those “pedagogy facts” is that teaching isn’t just drilling students in facts. And because this course practiced what it preached, it ended up much less annoying than I worried it would be.
There were hints of that dogmatic approach in the course materials, but only hints. An early slide had a stark quote calling pure lecturing irresponsible. The teacher immediately and awkwardly distanced himself from it, almost literally saying “well that is a thing someone could say”. Instead, most of the class was made up of example lessons and student discussions. We’d be assembled into groups to discuss something, then watch a lesson intended to show off a particular technique. Only then would we get a brief lecture about the technique, giving a name and some justification, before being thrown into yet more discussion about it.
In the terminology we were taught, this made the course dialogical rather than authoritative, and inductive rather than deductive. We learned by reflecting on examples rather than deriving general truths, and discussed various perspectives rather than learning one canonical one.
Did we learn anything from that, besides the terms?
One criticism of both dialogical and inductive approaches to teaching is that students can only get out what they put in. If you learn by discussing and solving examples by yourself, you’d expect the only things you’ll learn are things you already know.
We weren’t given the evidence to refute this criticism in general, and honestly I wouldn’t have trusted it if we had (see above: replication crisis). But in this context, that criticism does miss something. Yes, pretty much every method I learned in this course was something I could come up with on my own in the right situation. But I wouldn’t be thinking of the methods systematically. I’d notice a good idea for one lesson or another, but miss others because I wouldn’t be thinking of the ideas as part of a larger pattern. With the patterns in mind, with terms to “hook” the methods on to, I can be more aware of when opportunities come up. I don’t have to think of dialogical as better than authoritative, or inductive as better than deductive, in general. All I have to do is keep an eye out for when a dialogical or inductive approach might prove useful. And that’s something I feel genuinely better at after taking this course.
Beyond that core, we got some extremely topical tips about online teaching and way too many readings (I think the teachers overestimated how easy it is to read papers from a different discipline…and a “theory paper” in education is about as far from a “theory paper” in physics as you can get). At times the dialogue aspect felt a little too open: we heard “do what works for you” often enough that it felt like the teachers were apologizing for their own field. But overall, the course worked, and I expect to teach better going forward because of it.
Technically, we don’t know yet. The ALPHA-g experiment would have been the first to check this, making anti-hydrogen by trapping anti-protons and positrons in a long tube and seeing which way it falls. While they got most of their setup working, the LHC complex shut down before they could finish. It starts up again next month, so we should have our answer soon.
That said, for most theorists’ purposes, we absolutely do know: antimatter falls down. Antimatter is one of the cleanest examples of a prediction from pure theory that was confirmed by experiment. When Paul Dirac first tried to write down an equation that described electrons, he found the math forced him to add another particle with the opposite charge. With no such particle in sight, he speculated it could be the proton (this doesn’t work, they need the same mass), before Carl D. Anderson discovered the positron in 1932.
The same math that forced Dirac to add antimatter also tells us which way it falls. There’s a bit more involved, in the form of general relativity, but the recipe is pretty simple: we know how to take an equation like Dirac’s and add gravity to it, and we have enough practice doing it in different situations that we’re pretty sure it’s the right way to go. Pretty sure doesn’t mean 100% sure: talk to the right theorists, and you’ll probably find a proposal or two in which antimatter falls up instead of down. But they tend to be pretty weird proposals, from pretty weird theorists.
Ok, but if those theorists are that “weird”, that outside the mainstream, why does an experiment like ALPHA-g exist? Why does it happen at CERN, one of the flagship facilities for all of mainstream particle physics?
This gets at a misconception I occasionally hear from critics of the physics mainstream. They worry about groupthink among mainstream theorists, the physics community dismissing good ideas just because they’re not trendy (you may think I did that just now, for antigravity antimatter!). They expect this to result in a self-fulfilling prophecy where nobody tests ideas outside the mainstream, so they find no evidence for them, so they keep dismissing them.
The mistake of these critics is in assuming that what gets tested has anything to do with what theorists think is reasonable.
Theorists talk to experimentalists, sure. We motivate them, give them ideas and justification. But ultimately, people do experiments because they can do experiments. I watched a talk about the ALPHA experiment recently, and one thing that struck me was how so many different techniques play into it. They make antiprotons using a proton beam from the accelerator, slow them down with magnetic fields, and cool them with lasers. They trap their antihydrogen in an extremely precise vacuum, and confirm it’s there with particle detectors. The whole setup is a blend of cutting-edge accelerator physics and cutting-edge tricks for manipulating atoms. At its heart, ALPHA-g feels like its primary goal is to stress-test all of those tricks: to push the state of the art in a dozen experimental techniques in order to accomplish something remarkable.
And so even if the mainstream theorists don’t care, ALPHA will keep going. It will keep getting funding, it will keep getting visited by celebrities and inspiring pop fiction. Because enough people recognize that doing something difficult can be its own reward.
In my experience, this motivation applies to theorists too. Plenty of us will dismiss this or that proposal as unlikely or impossible. But give us a concrete calculation, something that lets us use one of our flashy theoretical techniques, and the tune changes. If we’re getting the chance to develop our tools, and get a paper out of it in the process, then sure, we’ll check your wacky claim. Why not?
I suspect critics of the mainstream would have a lot more success with this kind of pitch-based approach. If you can find a theorist who already has the right method, who’s developing and extending it and looking for interesting applications, then make your pitch: tell them how they can answer your question just by doing what they do best. They’ll think of it as a chance to disprove you, and you should let them: that’s the right attitude to take as a scientist anyway. It’ll work a lot better than accusing them of hogging the grant money.
As a scientist, you will always need to be able to communicate your work. Most of the time you can get away with papers and talks aimed at your peers. But the longer you mean to stick around, the more often you will have to justify yourself to others: to departments, to universities, and to grant agencies. A scientist cannot survive on scientific ability alone: to get jobs, to get funding, to survive, you need to be able to promote yourself, at least a little.
Self-promotion isn’t outreach, though. Talking to the public, or to journalists, is a different skill from talking to other academics or writing grants. And it’s entirely possible to go through an entire scientific career without exercising that skill.
That’s a reassuring message for some. I’ve met people for whom science is a refuge from the mess of human interaction, people horrified by the thought of fame or even being mentioned in a newspaper. When I meet these people, they sometimes seem to worry that I’m silently judging them, thinking that they’re ignoring their responsibilities by avoiding outreach. They think this in part because the field seems to be going in that direction. Grants that used to focus just on science have added outreach as a requirement, demanding that each application come with a plan for some outreach project.
I can’t guarantee that more grants won’t add outreach requirements. But I can say at least that I’m on your side here: I don’t think you should have to do outreach if you don’t want to. I don’t think you have to, just yet. And I think if grant agencies are sensible, they’ll find a way to encourage outreach without making it mandatory.
I think that overall, collectively, we have a responsibility to do outreach. Beyond the old arguments about justifying ourselves to taxpayers, we also just ought to be open about what we do. In a world where people are actively curious about us, we ought to encourage and nurture that curiosity. I don’t think this is unique to science, I think it’s something every industry, every hobby, and every community should foster. But in each case, I think that communication should be done by people who want to do it, not forced on every member.
I also think that, potentially, anyone can do outreach. Outreach can take different forms for different people, anything from speaking to high school students to talking to journalists to writing answers for Stack Exchange. I don’t think anyone should feel afraid of outreach because they think they won’t be good enough. Chances are, you know something other people don’t: I guarantee if you want to, you will have something worth saying.
When a scientist applies for a grant to fund their research, there’s a way it’s supposed to go. The scientist starts out with a clear idea, a detailed plan for an experiment or calculation they’d like to do, and an expectation of what they could learn from it. Then they get the grant, do their experiment or calculation, and make their discovery. The world smiles upon them.
There’s also a famous way it actually goes. Like the other way, the scientist has a clear idea and detailed plan. Then they do their experiment, or calculation, and see what they get, making their discovery. Finally, they write their grant application, proposing to do the experiment they already did. Getting the grant, they then spend the money on their next idea instead, which they will propose only in the next grant application, and so on.
This is pretty shady behavior. But there’s yet another way things can go, one that flips the previous method on its head. And after considering it, you might find the shady method more understandable.
What happens if a scientist is going to run out of funding, but doesn’t yet have a clear idea? Maybe they don’t know enough yet to have a detailed plan for their experiment or their calculation. Maybe they have an idea, but they’re still foggy about what they can learn from it.
Well, they’re still running out of funding. They still have to write that grant. So they start writing. Along the way, they’ll manage to find some of that clarity: they’ll have to write a detailed plan, they’ll have to describe some expected discovery. If all goes well, they tell a plausible story, and they get that funding.
When they actually go do that research, though, there’s no guarantee it sticks to the plan. In fact, it’s almost guaranteed not to: neither the scientist nor the grant committee typically knows what experiment or calculation needs to be done: that’s what makes the proposal novel science in the first place. The result is that once again, the grant proposal wasn’t exactly honest: it didn’t really describe what was actually going to be done.
You can think of these different stories as falling on a sliding scale. On the one end, the scientist may just have the first glimmer of an idea, and their funded research won’t look anything like their application. On the other, the scientist has already done the research, and the funded research again looks nothing like the application. In between there’s a sweet spot, the intended system: late enough that the scientist has a good idea of what they need to do, early enough that they haven’t done it yet.
How big that sweet spot is depends on the pace of the field. If you work in a field with big, complicated experiments, like randomized controlled trials, you can mostly make this work. Your work takes a long time to plan, and requires sticking to that plan, so you can, at least sometimes, do grants “the right way”. The smaller your experiments are, though, the more the details can change, and the smaller the window gets. For a field like theoretical physics, if you know exactly what calculation to do, or what proof to write, with no worries or uncertainty…well, you’ve basically done the calculation already. The sweet spot for ethical grant-writing shrinks down to almost a single moment.
In practice, some grant committees understand this. There are grants where you are expected to present preliminary evidence from work you’ve already started, and to discuss the risks your vaguer ideas might face. Grants of this kind recognize that science is a process, and that catching people at that perfect moment is next-to-impossible. They try to assess what the scientist is doing as a whole, not just a single idea.
Scientists ought to be honest about what they’re doing. But grant agencies need to be honest too, about how science in a given field actually works. Hopefully, one enables the other, and we reach a more honest world.
Me, I chose physics as a career, so I’d better like it. And you, right now you’re reading a physics blog for fun, so you probably like physics too.
Ok, so we agree, physics is awesome. But it isn’t always awesome.
Read a blog like this, or the news, and you’ll hear about the more awesome parts of physics: the black holes and big bangs, quantum mysteries and elegant mathematics. As freshman physics majors learn every year, though, most of physics isn’t like that. It’s careful calculation and repetitive coding, incremental improvements to a piece of a piece of a piece of something that might eventually answer a Big Question. Even if, intellectually, you can see the line from what you’re doing to the big flashy stuff, emotionally the two won’t feel connected, and you might struggle to stay motivated.
Physics solves this through acculturation. Physicists don’t just work on their own, they’re part of a shared worldwide culture of physicists. They spend time with other physicists, and not just working time but social time: they eat lunch together, drink coffee together, travel to conferences together. Spending that time together gives physics more emotional weight: as humans, we care a bit about Big Questions, but we care a lot more about our community.
This isn’t unique to physics, of course, or even to academics. Programmers who have lunch together, philanthropists who pat each other on the back for their donations, these people are trying to harness the same forces. By building a culture around something, you can get people more motivated to do it.
There’s a risk here, of course, that the culture takes over, and we lose track of the real reasons to do science. It’s easy to care about something because your friends care about it because their friends care about it, looping around until it loses contact with reality. In science we try to keep ourselves grounded, to respect those who puncture our bubbles with a good argument or a clever experiment. But we don’t always succeed.
The pandemic has made acculturation more difficult. For a scientist working from home, that extra bit of social motivation is much harder to come by. It’s perhaps even harder for new students, who haven’t had the chance to hang out and make friends with other researchers. People’s behavior, what they research and how and when, has changed, and I suspect changing social ties are a big part of it.
In the long run, I don’t think we can do without the culture of physics. We can’t be lone geniuses motivated only by our curiosity; that’s just not how people work. We have to meld the two, mix the social with the intellectual…and hope that when we do, we keep the engines of discovery moving.