
When to Trust the Contrarians

One of my colleagues at the Niels Bohr Institute (NBI) had an unusual experience: one of his papers took a full year to get through peer review. This happens often in math, where reviewers will diligently check proofs for errors, but it’s quite rare in physics: usually the path from writing to publication is much shorter. Then again, the delays shouldn’t have been too surprising for him, given what he was arguing.

My colleague Mohamed Rameez, along with Jacques Colin, Roya Mohayaee, and Subir Sarkar, wants to argue against one of the most famous astronomical discoveries of the last few decades: that the expansion of our universe is accelerating, and thus that an unknown “dark energy” fills the universe. They argue that one of the key pieces of evidence used to prove acceleration is mistaken: a large region of the universe around us is in fact “flowing” in one direction, and that flow tricked astronomers into thinking the universe’s expansion was accelerating. You might remember a paper making a related argument back in 2016. I didn’t like the media reaction to that paper, and my post triggered a response by the authors, one of whom (Sarkar) is an author of this new paper as well.

I’m not an astronomer or an astrophysicist. I’m not qualified to comment on their argument, and I won’t. I’d still like to know whether they’re right, though. And that means figuring out which experts to trust.

Pick anything we know in physics, and you’ll find at least one person who disagrees. I don’t mean a crackpot, though they exist too. I mean an actual expert who is convinced the rest of the field is wrong. A contrarian, if you will.

I used to be very unsympathetic to these people. I was convinced that the big results of a field are rarely wrong, because of how much gets built on top of them. I thought that even if a field was using dodgy methods or sloppy reasoning, its big results are used in so many different situations that if they were wrong, someone would have to notice. I’d argue that if you want to overturn one of these big claims, you have to explain away not just the result itself, but every other success the field has ever had.

I still believe that, somewhat. But there are a lot of contrarians here at the Niels Bohr Institute. And I’ve started to appreciate what drives them.

The thing is, no scientific result is ever as clean as it ought to be. Everything we do is jury-rigged. We’re almost never experts in everything we’re trying to do, so we often don’t know the best method. Instead, we approximate and guess, we find rough shortcuts and don’t check if they make sense. This can take us far sometimes, sure…but it can also backfire spectacularly.

The contrarians I’ve known got their inspiration from one of those backfires. They saw a result, a respected mainstream result, and they found a glaring screw-up. Maybe it was an approximation that didn’t make any sense, or a statistical measure that was totally inappropriate. Whatever it was, it got them to dig deeper, and suddenly they saw screw-ups all over the place. When they pointed out these problems, at best the people they accused didn’t understand. At worst, they got offended. Instead of cooperating, the people they accused told the contrarians they couldn’t possibly know what they were talking about, and ignored them. Eventually, the contrarians concluded the entire sub-field was broken.

Are they right?

Not always. They can’t all be right: for every claim you can find a contrarian, so believing them all would be a contradiction.

But sometimes?

Often, they’re right about the screw-ups. They’re right that there’s a cleaner, more proper way to do that calculation, or a statistical measure better suited to the problem. And often, doing things right raises subtleties, and means that the big, important result everyone believed looks a bit less impressive.

Still, that’s not the same as ruling out the result entirely. And despite all the screw-ups, the main result is still often correct. Often, it’s justified not by the original, screwed-up argument, but by newer evidence from a different direction. Often, the sub-field has grown to a point that the original screwed-up argument doesn’t really matter anymore.

Often, but again, not always.

I still don’t know whether to trust the contrarians. I still lean towards expecting fields to sort themselves out, to thinking that error alone can’t sustain long-term research. But I’m keeping a more open mind now. I’m waiting to see how far the contrarians go.

Knowing When to Hold/Fold ‘Em in Science

The things one learns from Wikipedia. For example, today I learned that the country song “The Gambler” was selected for preservation by the US Library of Congress as being “culturally, historically, or artistically significant.”

You’ve got to know when to hold ’em, know when to fold ’em,

Know when to walk away, know when to run.

Knowing when to “hold ’em” or “fold ’em” is important in life in general, but it’s particularly important in science.

And not just on poker night

As scientists, we’re often trying to do something no-one else has done before. That’s exciting, but it’s risky too: sometimes whatever we’re trying simply doesn’t work. In those situations, it’s important to recognize when we aren’t making progress, and change tactics. The trick is, we can’t give up too early either: science is genuinely hard, and sometimes when we feel stuck we’re actually close to the finish line. Knowing which is which, when to “hold” and when to “fold”, is an essential skill, and a hard one to learn.

Sometimes, we can figure this out mathematically. Computational complexity theory classifies calculations by how difficult they are, including how long they take. If you can estimate how much time a calculation should take, you can decide whether you’ll finish it in a reasonable amount of time. If you just want a rough guess, you can do a simpler version of the calculation, see how long that takes, and then estimate how much longer the full one will take, as in the sketch below. If you figure out you’re doomed, then it’s time to switch to a more efficient algorithm, or to a different question entirely.
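Here’s a minimal sketch of that rough-guess approach in Python. Nothing in it is specific to any real physics calculation: `toy_calculation` and its cubic scaling are hypothetical stand-ins, and the only assumption is that you know (or can guess) how the cost scales with the size of the problem.

```python
import time

def estimate_full_runtime(calculation, small_n, full_n, scaling):
    """Time a small instance of `calculation`, then extrapolate to `full_n`
    using the assumed complexity `scaling` (e.g. lambda n: n**3)."""
    start = time.perf_counter()
    calculation(small_n)
    elapsed = time.perf_counter() - start
    # Scale the measured time by the ratio of the complexity estimates.
    return elapsed * scaling(full_n) / scaling(small_n)

def toy_calculation(n):
    """Hypothetical stand-in for an expensive calculation (roughly O(n^3))."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += (i + j + k) % 7
    return total

if __name__ == "__main__":
    guess = estimate_full_runtime(toy_calculation, small_n=60, full_n=600,
                                  scaling=lambda n: n**3)
    print(f"Rough estimate for the full calculation: {guess:.1f} seconds")
```

If the estimate comes back in weeks rather than hours, that’s your cue to fold: look for a better algorithm, or ask a different question.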

Sometimes, we don’t just have to consider time, but money as well. If you’re doing an experiment, you have to estimate how much the equipment will cost, and how much it will cost to run it. Experimenters get pretty good at estimating these things, but they still screw up sometimes and run over budget. Occasionally this is fine: LIGO didn’t detect anything in its first eight-year run, but they upgraded the machines and tried again, and won a Nobel prize. Other times it’s a disaster, and money keeps being funneled into a project that never works. Telling the difference is crucial, and it’s something we as a community are still not so good at.

Sometimes we just have to follow our instincts. This is dangerous, because we have a bias (the “sunk cost fallacy”) to stick with something if we’ve already spent a lot of time or money on it. To counteract that, it’s good to cultivate a bias in the opposite direction, which you might call “scientific impatience”. Getting frustrated with slow progress may not seem productive, but it keeps you motivated to search for a better way. Experienced scientists get used to how long certain types of project take. Too little progress, and they look for another option. This can fail, killing a project that was going to succeed, but it can also prevent over-investment in a failing idea. Only a mix of instincts keeps the field moving.

In the end, science is a gamble. Like the song, we have to know when to hold ’em and fold ’em, when to walk away, and when to run an idea as far as it will go. Sometimes it works, and sometimes it doesn’t. That’s science.

In Defense of the Streetlight

If you read physics blogs, you’ve probably heard this joke before:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is”.

The drunk’s line of thinking has a name, the streetlight effect, and while it may seem ridiculous it’s a common error, even among experts. When it gets too tough to research something, scientists will often be tempted by an easier problem even if it has little to do with the original question. After all, it’s “where the light is”.

Physicists get accused of this all the time. Dark matter could be completely undetectable on Earth, but physicists still build experiments to search for it. Our universe appears to be curved one way, but string theory makes it much easier to study universes curved the other way, so physicists write a lot of nice proofs about a universe we don’t actually inhabit. In my own field, we spend most of our time studying a very nice theory that we know can’t describe the real world.

I’m not going to defend this behavior in general. There are real cases where scientists trick themselves into thinking they can solve an easy problem when they need to solve a hard one. But there is a crucial difference between scientists and drunkards looking for their keys, one that makes this behavior a lot more reasonable: scientists build technology.

As scientists, we can’t just grope around in the dark for our keys. The spaces we’re searching, from the space of all theories of gravity to actual outer space, are much too vast to search randomly. We need new ideas, new mathematics or new equipment, to do the search properly. If we were the drunkard of the story, we’d need to invent night-vision goggles.

Is the light better here, or is it just me?

Suppose you wanted to design new night-vision goggles, to search for your keys in the park. You could try to build them in the dark, but you wouldn’t be able to see what you were doing: you’d lose pieces, miss screws, and break lenses. Much better to build the goggles under that convenient streetlight.

Of course, if you build and test your prototype goggles under the streetlight, you risk that they aren’t good enough for the dark. You’ll have calibrated them in an unrealistic case. In all likelihood, you’ll have to go back and fix your goggles, tweaking them as you go, and you’ll run into the same problem: you can’t see what you’re doing in the dark.

At that point, though, you have an advantage: you now know how to build night-vision goggles. You’ve practiced building goggles in the light, and now even if the goggles aren’t good enough, you remember how you put them together. You can tweak the process, modify your goggles, and make something good enough to find your keys. You’re good enough at making goggles that you can modify them now, even in the dark.

Sometimes scientists really are like the drunk, searching under the most convenient streetlight. Sometimes, though, scientists are working where the light is for a reason. Instead of wasting their time lost in the dark, they’re building new technology and practicing new methods, getting better and better at searching until, when they’re ready, they can go back and find their keys. Sometimes, the streetlight is worth it.

“X Meets Y” Conferences

Most conferences focus on a specific sub-field. If you call a conference “Strings” or “Amplitudes”, people know what to expect. Likewise if you focus on something more specific, say Elliptic Integrals. But what if your conference is named after two sub-fields?

These conferences, with names like “QCD Meets Gravity” and “Scattering Amplitudes and the Conformal Bootstrap”, try to connect two different sub-fields together. I’ll call them “X Meets Y” conferences.

The most successful “X Meets Y” conferences involve two sub-fields that have been working together for quite some time. At that point, you don’t just have “X” researchers and “Y” researchers, but “X and Y” researchers, people who work on the connection between both topics. These people can glue a conference together, showing the separate “X” and “Y” researchers what “X and Y” research looks like. At a conference like that speakers have a clear idea of what to talk about: the “X” researchers know how to talk to the “Y” researchers, and vice versa, and the organizers can invite speakers who they know can talk to both groups.

If the sub-fields have less history of collaboration, “X Meets Y” conferences become trickier. You need at least a few “X and Y” researchers (or at least aspiring “X and Y” researchers) to guide the way. Even if most of the “X” researchers don’t know how to talk to “Y” researchers, the “X and Y” researchers can give suggestions, telling “X” which topics would be most interesting to “Y” and vice versa. With that kind of guidance, “X Meets Y” conferences can inspire new directions of research, opening one field up to the tools of another.

The biggest risk in an “X Meets Y” conference, that becomes more likely the fewer “X and Y” researchers there are, is that everyone just gives their usual talks. The “X” researchers talk about their “X”, and the “Y” researchers talk about their “Y”, and both groups nod politely and go home with no new ideas whatsoever. Scientists are fundamentally lazy creatures. If we already have a talk written, we’re tempted to use it, even if it doesn’t quite fit the occasion. Counteracting that is a tough job, and one that isn’t always feasible.

“X Meets Y” conferences can be very productive, the beginning of new interdisciplinary ideas. But they’re certainly hard to get right. Overall, they’re one of the trickier parts of the social side of science.

Reader Background Poll Reflections

A few weeks back I posted a poll, asking you guys what sort of physics background you have. The idea was to follow up on a poll I did back in 2015, to see how this blog’s audience has changed.

One thing that immediately leaped out of the data was how many of you are physicists. As of writing this, 66% of readers say they either have a PhD in physics or a related field, or are currently in grad school. This includes 7% specifically from my sub-field, “amplitudeology” (though this number may be higher than usual since we just had our yearly conference, and more amplitudeologists were reminded my blog exists).

I didn’t use the same categories in 2015, so the numbers can’t be compared exactly. In 2015 only 2.5% of readers described themselves as amplitudeologists. Adding the amplitudeologists to the physics PhDs and grad students gives 59%, which goes up to 64.5% if I include the mathematicians (who this year might have put either “PhD in a related field” or “Other Academic”). So overall the percentages are pretty similar, though now it looks like more of my readers are grad students.

Despite the small difference, I am a bit worried: it looks like I’m losing non-physicist readers. I could flatter myself and think that I inspired those non-physicists to go to grad school, but more realistically I should admit that fewer of my posts have been interesting to a non-physics audience. In 2015 I worked at the Perimeter Institute, and helped out with their public lectures. Now I’m at the Niels Bohr Institute, and I get fewer opportunities to hear questions from non-physicists. I get fewer ideas for interesting questions to answer.

I want to keep this blog’s language accessible and its audience general. I appreciate that physicists like this blog and view it as a resource, but I don’t want it to turn into a blog for physicists only. I’d like to encourage the non-physicists in the audience: ask questions! Don’t worry if it sounds naive, or if the question seems easy: if you’re confused, likely others are too.

When to Read Someone Else’s Thesis

There’s a cynical truism we use to reassure grad students. A thesis is a big, daunting project, but it shouldn’t be too stressful: in the end, nobody else is going to read it.

This is mostly true. In many fields your thesis is a mix of papers you’ve already published, stitched together into your overall story. Anyone who’s interested will have read the papers the thesis is based on; they don’t need to read the thesis too.

Like every good truism, though, this one has an exception. On rare occasions, you will actually want to read someone else’s thesis. This usually isn’t because the material is new; rather, it’s because it’s well explained.

When we academics publish, we’re often in a hurry, and there isn’t time to write well. When we publish more slowly, often we have more collaborators, so the paper is a set of compromises written by committee. Either way, we rarely make a concept totally crystal-clear.

A thesis isn’t always crystal-clear either, but it can be. It’s written by just one person, and that person is learning. A grad student who has just learned a topic can be in the best position to teach it: they know exactly what confused them when they started out. Thesis-writing is also a slower process, one that gives more time to hammer at a text until it’s right. Finally, a thesis is written for a committee, and that committee usually contains people from different fields. A thesis needs to be an accessible introduction, in a way that a published paper doesn’t.

There are topics that I never really understood until I looked up the thesis of the grad student who helped discover them. There are tricks that never made it into published papers, which I learned only because they were tucked into the thesis of someone who went on to do great things.

So if you’re finding a subject confusing, if you’ve read all the papers and none of them make any sense, look for the grad students. Sometimes the best explanation of a tricky topic isn’t in the published literature, it’s hidden away in someone’s thesis.

Academic Age

Growing up in the US, you hit a lot of age-based milestones: you can drive at 16, vote at 18, and drink at 21. Once you’re in academia, though, your actual age becomes much less relevant. Instead, academics are judged by their academic age: the time since they got their PhD.

And no, we don’t get academic birthdays

Grants often have restrictions based on academic age. The European Research Council’s Starting Grant, for example, demands an academic age of 2-7 years. If you’re academically “older”, they expect more from you: you must instead apply for a Consolidator Grant, or an Advanced Grant.

More generally, when academics apply for jobs they are often weighed in terms of academic age. Compared to others, how long have you spent as a postdoc since your PhD? How many papers have you published since then, and how well cited were they? The longer you spend without finding a permanent position, the more likely employers are to wonder why, and the reasons they assume are rarely positive.

This creates some weird incentives. If you have a choice, it’s often better to graduate late than to graduate early. Employers don’t check how long you took to get your PhD, but they do pay attention to how many papers you published. If it’s an option, staying in school to finish one more project can actually be good for your career.

Biological age matters, but mostly for biological reasons: for example, if you plan to have children. Raising a family is harder if you have to move every few years, so those who find permanent positions before starting a family have an easier time of it. That said, as academics have to take more and more temporary positions before settling down, fewer people have this advantage.

Beyond that, biological age only matters again at the end of your career, especially if you work somewhere with a mandatory retirement age. Even then, retirement for academics doesn’t mean the same thing as for normal people: retired professors often have emeritus status, meaning that while technically retired they keep a role at the university, maintaining an office and often still doing some teaching or research.