Tag Archives: academia

The Academic Workflow (Or Lack Thereof)

I was chatting with someone in biotech recently, who was frustrated with the current state of coronavirus research. The problem, in her view, was that researchers were approaching the problem in too “academic” a way. Instead of coordinating, trying to narrow down to a few approaches and make sure they get the testing they need, researchers were each focusing on their own approach, answering the questions they thought were interesting or important without fitting their work into a broader plan. She thought that a more top-down, corporate approach would do much better.

I don’t know anything about the current state of coronavirus research, what works and what doesn’t. But the conversation got me thinking about my own field.

Theoretical physics is about as far from “top-down” as you can get. As a graduate student, your “boss” is your advisor, but that “bossiness” can vary from telling you to do specific calculations to just meeting you every so often to discuss ideas. As a postdoc, even that structure evaporates: while you usually have an official “supervisor”, they won’t tell you what to do outside of the most regimented projects. Instead, they suggest, proposing ideas they’d like to collaborate on. As a professor, you don’t have this kind of “supervisor”: while there are people in charge of the department, they won’t tell you what to research. At most, you have informal hierarchies: senior professors influencing junior professors, or the hot-shots influencing the rest.

Even when we get a collaboration going, we don’t tend to have assigned roles. People do what they can, when they can, and if you’re an expert on one part of the work you’ll probably end up doing that part, but that won’t be “the plan” because there almost never is a plan. There’s very rarely a “person in charge”: if there’s a disagreement it gets solved by one person convincing another that they’re right.

This kind of loose structure is freeing, but it can also be frustrating. Even the question of who is on a collaboration can be up in the air, with a sometimes tacit assumption that if you were there for certain conversations you’re there for the paper. It’s possible to push for more structure, but push too hard and people will start ignoring you anyway.

Would we benefit from more structure? That depends on the project. Sometimes, when we have clear goals, a more “corporate” approach can work. Other times, when we’re exploring something genuinely new, any plan is going to fail: we simply don’t know what we’re going to run into, what will matter and what won’t. Maybe there are corporate strategies for that kind of research, ways to manage that workflow. I don’t know them.

Thoughts on Doing Science Remotely

In these times, I’m unusually lucky.

I’m a theoretical physicist. I don’t handle goods, or see customers. Other scientists need labs, or telescopes: I just need a computer and a pad of paper. As a postdoc, I don’t even teach. In the past, commenters have asked me why I don’t just work remotely. Why go to conferences, why even go to the office?

With COVID-19, we’re finding out.

First, the good: my colleagues at the Niels Bohr Institute have been hard at work keeping everyone connected. Our seminars have moved online, where we hold weekly Zoom seminars jointly with Iceland, Uppsala and Nordita. We have a “virtual coffee room”, a Zoom room that’s continuously open with “virtual coffee breaks” at 10 and 3:30 to encourage people to show up. We’re planning virtual colloquia, and even a virtual social night with Jackbox games.

Is it working? Partially.

The seminars are the strongest part. Remote seminars let us bring in speakers from all over the world (time zones permitting). They let one seminar serve the needs of several different institutes. Most of the basic things a seminar needs (slides, blackboards, ability to ask questions, ability to clap) are present on online platforms, particularly Zoom. And our seminar organizers had the bright idea to keep the Zoom room open after the talk, which allows the traditional “after seminar conversation with the speaker” for those who want it.

Still, the setup isn’t as good as it could be. If the audience turns off their cameras and mics, the speaker can feel like they’re giving a talk to an empty room. This isn’t just awkward, it makes the talk worse: speakers improve when they can “feel the room” and see what catches their audience’s interest. If the audience keeps their cameras or mics on instead, it takes a lot of bandwidth, and the speaker still can’t really feel the room. I don’t know if there’s a good solution here, but it’s worth working on.

The “virtual coffee room” is weaker. It was quite popular at first, but as time went on fewer and fewer people (myself included) showed up. In contrast, my wife’s friends at Waterloo do a daily cryptic crossword, and that seems to do quite well. What’s the difference? They have real crosswords, we don’t have real coffee.

I kid, but only a little. Coffee rooms and tea breaks work because of a core activity, a physical requirement that brings people together. We value them for their social role, but that role on its own isn’t enough to get us in the door. We need the excuse: the coffee, the tea, the cookies, the crossword. Without that shared structure, people just don’t show up.

Getting this kind of thing right is more important than it might seem. Social activities help us feel better, they help us feel less isolated. But more than that, they help us do science better.

That’s because science works, at least in part, through serendipity.

You might think of scientific collaboration as something we plan, and it can be sometimes. Sometimes we know exactly what we’re looking for: a precise calculation someone else can do, a question someone else can answer. Sometimes, though, we’re helped by chance. We have random conversations, different people in different situations, coffee breaks and conference dinners, and eventually someone brings up an idea we wouldn’t have thought of on our own.

Other times, chance helps by providing an excuse. I have a few questions rattling around in my head that I’d like to ask some of my field’s big-shots, but that don’t feel worth an email. I’ve been waiting to meet them at a conference instead. The advantage of those casual meetings is that they give an excuse for conversation: we have to talk about something, and it might as well be my dumb question. Without that kind of casual contact, it feels a lot harder to broach low-stakes topics.

None of this is impossible to do remotely. But I think we need new technology (social or digital) to make it work well. Serendipity is easy to find in person, but social networks can imitate it. Log in to Facebook or Tumblr looking for your favorite content, and you face a pile of ongoing conversations. Looking through them, you naturally “run into” whatever your friends are talking about. I could see something similar for academia. Take something like the list of new papers on arXiv, then run a list of ongoing conversations next to it. When we check the arXiv each morning, we could see what our colleagues were talking about, and join in if we saw something interesting. It would be a way to stay connected, one with more incentive and structure than loneliness alone, and it would lead to the kind of accidental meetings that science craves. You could even graft conferences onto that system, talks in the middle with conversation threads on the side.

None of us know how long the pandemic will last, or how long we’ll be asked to work from home. But even afterwards, it’s worth thinking about the kind of infrastructure science needs to work remotely. Some ideas may still be valuable after all this is over.

Life Cycle of an Academic Scientist

So you want to do science for a living. Some scientists work for companies, developing new products. Some work for governments. But if you want to do “pure science”, science just to learn about the world, then you’ll likely work at a university, as part of what we call academia.

The first step towards academia is graduate school. In the US, this means getting a PhD.

(Master’s degrees, at least in the US, have a different purpose. Most are “terminal Master’s”, designed to be your last degree. With a terminal Master’s, you can be a technician in a lab, but you won’t get farther down this path. In the US you don’t need a Master’s before you apply for a PhD program, and having one is usually a waste of time: PhD programs will make you re-take most of the same classes.)

Once you have a PhD, it’s time to get a job! Often, your first job after graduate school is a postdoc. Postdocs are short-term jobs, usually one to three years long. Some people are lucky enough to go to the next stage quickly, others have more postdoc jobs first. These jobs will take you all over the world, everywhere people with your specialty work. Sometimes these jobs involve teaching, but more often you just do scientific research.

In the US system, if everything goes well, eventually you get a tenure-track job. These jobs involve both teaching and research. You get to train PhD students, hire postdocs, and in general start acting like a proper professor. This stage lasts around seven years, while the university evaluates you. If they decide you’re not worth it, then typically you’ll have to leave and apply for another job at another university. If they like you, though, you get tenure.

Tenure is the first time as an academic scientist that you aren’t on a short-term contract. Your job is more permanent than most: you have extra protection from being fired that most people don’t. While you can’t just let everything slide, you have the freedom to make more of your own decisions.

A tenured job can last until retirement, when you become an emeritus professor. Emeritus professors are retired but still do some of the work they did as professors. They’re paid out of their pension instead of a university salary, but they still sometimes teach or do research, and they usually still have an office. The university can hire someone new, and the cycle continues.

This isn’t the only path scientists take. Some work in a national lab instead. These don’t usually involve teaching duties, and the path to a permanent job is a bit different. Some get teaching jobs instead of research professorships. These teaching jobs are usually not permanent; instead, universities are hiring more and more adjunct faculty who have to string together temporary contracts to make a precarious living.

I’ve mostly focused on the US system here. Europe is a bit different: Master’s degrees are a real part of the system, tenure-track doesn’t really exist, and adjunct faculty don’t always either. Some countries, like Germany, have their own quite complicated systems, while others fall somewhere in between.

Guest Post: On the Real Inhomogeneous Universe and the Weirdness of ‘Dark Energy’

A few weeks ago, I mentioned a paper by a colleague of mine, Mohamed Rameez, that generated some discussion. Since I wasn’t up for commenting on the paper’s scientific content, I thought it would be good to give Rameez a chance to explain it in his own words, in a guest post. Here’s what he has to say:

In an earlier post, 4gravitons had contemplated the question of ‘when to trust the contrarians’, in the context of our about-to-be-published paper in which we argue that, accounting for the effects of the bulk flow in the local Universe, there is no evidence for any isotropic cosmic acceleration, which would be required to claim some sort of ‘dark energy’.

In the following I would like to emphasize that this is a reasonable view, and not a contrarian one. To do so I will examine the bulk flow of the local Universe and the historical evolution of what appears to be somewhat dodgy supernova data. I will present a trivial solution (from data) to the claimed ‘Hubble tension’.  I will then discuss inhomogeneous cosmology, and the 2011 Nobel prize in Physics. I will proceed to make predictions that can be falsified with future data. I will conclude with some questions that should be frequently asked.

Disclaimer: The views expressed here are not necessarily shared by my collaborators. 

The bulk flow of the local Universe:

The largest anisotropy in the Cosmic Microwave Background is the dipole, believed to be caused by our motion with respect to the ‘rest frame’ of the CMB with a velocity of ~369 km s^-1. Under this view, all matter in the local Universe appears to be flowing. At least out to ~300 Mpc, this flow continues to be directionally coherent, to within ~40 degrees of the CMB dipole, and the scale at which the average relative motion between matter and radiation converges to zero has so far not been found.
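As a quick sanity check on those numbers (my own back-of-envelope, not part of the original argument), a velocity of ~369 km/s relative to the CMB rest frame implies, to leading order, a dipole amplitude of ΔT ≈ (v/c)·T:

```python
# Back-of-envelope: the CMB dipole amplitude implied by our ~369 km/s motion.
C_KM_S = 299792.458        # speed of light in km/s
T_CMB = 2.725              # mean CMB temperature in kelvin
v = 369.0                  # inferred velocity in km/s

beta = v / C_KM_S                  # v/c, about 1.2e-3
dipole_mK = beta * T_CMB * 1000    # leading-order dipole, ΔT ≈ (v/c) T

print(f"beta = {beta:.2e}, dipole ≈ {dipole_mK:.2f} mK")
```

This gives ≈ 3.35 mK, consistent with the measured dipole amplitude of roughly 3.36 mK.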

This is one of the most widely accepted results in modern cosmology, to the extent that SN1a data come pre-‘corrected’ for it.

Such a flow has covariant consequences under general relativity and this is what we set out to test.

Supernova data, directions in the sky and dodgyness:

Both Riess et al 1998 and Perlmutter et al 1999 used samples of supernovae down to redshifts of 0.01, in which almost all SNe at redshifts below 0.1 were in the direction of the flow.

Subsequently, in Astier et al 2006, Kowalski et al 2008, Amanullah et al 2010 and Suzuki et al 2011, it is reported that a process of outlier rejection was adopted in which data points more than 3σ from the Hubble diagram were discarded. This was done using a highly questionable statistical method that involves adjusting an intrinsic dispersion term σ_int by hand until a χ²/ndof of 1 is obtained for the fit to the assumed ΛCDM model. The number of outliers rejected is, however, far in excess of the 0.3% expected at 3σ. As the sky coverage became less skewed, supernovae with redshift less than ~0.023 were excluded for being outside the Hubble flow.

While the Hubble diagram so far had been inferred from heliocentric redshifts and magnitudes, with the introduction of SDSS supernovae that happened to lie in the direction opposite to the flow, peculiar velocity ‘corrections’ were adopted in the JLA catalogue and supernovae down to extremely low redshifts were reintroduced. While the early claims of a cosmological constant were stated as ‘high redshift supernovae were found to be dimmer (15% in flux) than the low redshift supernovae (compared to what would be expected in a Λ=0 universe)’, it is worth noting that the peculiar velocity corrections change the redshifts and fluxes of low redshift supernovae by up to ~20%.
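To make the criticized procedure concrete, here is a toy sketch (entirely my own construction, with made-up data, not the actual pipelines of the papers cited): inflate an intrinsic-dispersion term σ_int until the reduced χ² of the residuals equals 1, then cut outliers at 3σ of the inflated errors:

```python
import numpy as np

# Toy illustration of the criticized procedure, on fake data:
# tune sigma_int by hand until chi^2/ndof = 1, then reject >3 sigma outliers.
rng = np.random.default_rng(0)
n = 200
residuals = rng.normal(0.0, 0.15, n)   # fake Hubble-diagram residuals (mag)
sigma_meas = np.full(n, 0.10)          # quoted measurement errors

def reduced_chi2(sigma_int):
    return np.sum(residuals**2 / (sigma_meas**2 + sigma_int**2)) / n

# "Adjust by hand": increase sigma_int until chi^2/ndof drops to 1.
sigma_int = 0.0
while reduced_chi2(sigma_int) > 1.0:
    sigma_int += 0.001

total_err = np.sqrt(sigma_meas**2 + sigma_int**2)
outliers = np.abs(residuals) > 3 * total_err
print(f"sigma_int = {sigma_int:.3f}, outliers rejected: {outliers.sum()} of {n}")
```

By construction the fit always ends up looking acceptable (χ²/ndof = 1) whatever the model, and the 3σ cut is taken relative to the inflated errors: this is the circularity being objected to.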

When it was observed that even with this ‘corrected’ sample of 740 SNe, any evidence for isotropic acceleration using a principled Maximum Likelihood Estimator is less than 3σ, it was claimed that by adding 12 additional parameters (to the 10 parameter model) to allow for redshift and sample dependence of the light curve fitting parameters, the evidence was greater than 4σ.

As we discuss in Colin et al. 2019, these corrections also appear to be arbitrary, and betray an ignorance of the fundamentals of both basic statistical analysis and relativity. With the Pantheon compilation, heliocentric observables were no longer public and these peculiar velocity corrections initially extended far beyond the range of any known flow model of the Local Universe. When this bug was eventually fixed, both the heliocentric redshifts and magnitudes of the SDSS SNe that filled in the ‘redshift desert’ between low and high redshift SNe were found to be alarmingly discrepant. The authors have so far not offered any clarification of these discrepancies.

Thus it seems to me that the latest generation of ‘publicly available’ supernova data are not aiding either open science or progress in cosmology.

A trivial solution to the ‘Hubble tension’?

The apparent tension between the Hubble parameter as inferred from the Cosmic Microwave Background and low redshift tracers has been widely discussed, and recent studies suggest that redshift errors as low as 0.0001 can have a significant impact. Redshift discrepancies as big as 0.1 have been reported. The shifts reported between JLA and Pantheon appear to be sufficient to lower the Hubble parameter from ~73 km s^-1 Mpc^-1 to ~68 km s^-1 Mpc^-1.
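To see the scale involved (a rough back-of-envelope of my own, not a calculation from the post): at low redshift the Hubble law gives H0 ≈ cz/d, so at fixed distances a fractional shift in the redshifts maps directly onto a fractional shift in the inferred H0:

```python
# Rough back-of-envelope: how big a redshift shift would move the inferred
# Hubble parameter from ~73 to ~68 km/s/Mpc, using H0 ≈ c*z/d at low z.
H0_high, H0_low = 73.0, 68.0           # km/s/Mpc, the two discrepant values

frac_shift = 1 - H0_low / H0_high      # fractional decrease in z needed, ~6.8%
dz_at_z005 = frac_shift * 0.05         # absolute shift for a z = 0.05 supernova

print(f"fractional redshift shift: {frac_shift:.3f}")
print(f"at z = 0.05 that is dz ≈ {dz_at_z005:.4f}")
```

A shift of a few times 10^-3 per supernova at these redshifts would suffice, well within the ~0.1-level discrepancies quoted above.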

On General Relativity, cosmology, metric expansion and inhomogeneities:

In the maximally symmetric Friedmann-Lemaitre-Robertson-Walker solution to general relativity, there is only one meaningful global notion of distance and it expands at the same rate everywhere. However, the late time Universe has structure on all scales, and one may only hope for statistical (not exact) homogeneity. The Universe is expected to be lumpy. A background FLRW metric is not expected to exist and quantities analogous to the Hubble and deceleration parameters will vary across the sky.  Peculiar velocities may be more precisely thought of as variations in the expansion rate of the Universe. At what rate does a real Universe with structure expand? The problems of defining a meaningful average notion of volume, its dynamical evolution, and connecting it to observations are all conceptually open.

On the 2011 Nobel Prize in Physics:

‘The Fitting Problem in Cosmology’ was written in 1987. In the context of this work and the significant theoretical difficulties involved in inferring fundamental physics from the real Universe, any claims of having measured a cosmological constant from directionally skewed, sparse samples of intrinsically scattered observations should have been taken with a grain of salt. By honouring this claim with a Nobel Prize, the Swedish Academy may have induced runaway prestige bias in favour of some of the least principled analyses in science, strengthening the confirmation bias that seems prevalent in cosmology.

This has resulted in the generation of a large body of misleading literature, while normalizing the practice of ‘massaging’ scientific data. In her recent video about gravitational waves, Sabine Hossenfelder says “We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data”. What about when the data were fitted (in 1998-1999) using a method that had been discredited in 1989, to a toy model that had been cautioned against in 1987, leading to a ‘discovery’ of profound significance to fundamental physics?

A prediction with future cosmological data:

With the advent of high statistics cosmological data in the future, such as from the Large Synoptic Survey Telescope, I predict that the Hubble and deceleration parameters inferred from supernovae in hemispheres towards and away from the CMB dipole will be found to be different in a statistically significant (>5σ) way. Depending upon the criterion for selection and blind analyses of data that can be agreed upon, I would be willing to bet a substantial amount of money on this prediction.

Concluding : on the amusing sociology of ‘Dark Energy’ and manufactured concordance:

Of the two authors of the well-known cosmology textbook ‘The Early Universe’, Edward Kolb writes these interesting papers questioning dark energy while Michael Turner is credited with coining the term ‘Dark Energy’.  Reasonable scientific perspectives have to be presented as ‘Dark Energy without dark energy’. Papers questioning the need to invoke such a mysterious content that makes up ‘68% of the Universe’ are quickly targeted by inane articles by non-experts or perhaps well-meant but still misleading YouTube videos. Much of this is nothing more than a spectacle.

In summary, while the theoretical debate about whether what has been observed as Dark Energy is the effect of inhomogeneities is ongoing, observers appear to have been actively using the most inhomogeneous feature of the local Universe through opaque corrections to data, to continue claiming that this ‘dark energy’ exists.

It is heartening to see that recent works lean toward a breaking of this manufactured concordance and speak of a crisis for cosmology.

Questions that should be frequently asked:

Q. Is there a Hubble frame in the late time Universe?

A. The Hubble frame is a property of the FLRW exact solution, and in the late time Universe in which galaxies and clusters have peculiar motions with respect to each other, an equivalent notion does not exist. While popular inference treats the frame in which the CMB dipole vanishes as the Hubble frame, the scale at which the bulk flow of the local Universe converges to that frame has never been found. We are tilted observers.

Q. I am about to perform blinded analyses on new cosmological data. Should I correct all my redshifts towards the CMB rest frame?

A. No. Correcting all your redshifts towards a frame that has never been found is a good way to end up with ‘dark energy’. It is worth noting that while the CMB dipole has been known since 1994, supernova data have been corrected towards the CMB rest frame only after 2010, for what appear to be independent reasons.

Q. Can I combine new data with existing Supernova data?

A. No. The current generation of publicly available supernova data suffer from the natural biases that are to be expected when data are compiled incrementally through a human mediated process. It would be better to start fresh with a new sample.

Q. Is ‘dark energy’ fundamental or new physics?

A. Given that general relativity is a 100+ year old theory and significant difficulties exist in describing the late time Universe with it, it is unnecessary to invoke new fundamental physics when confronting any apparent acceleration of the real Universe. All signs suggest that what has been ascribed to dark energy is the result of a community that is hell-bent on repeating what Einstein supposedly called his greatest mistake.

Digging deeper:

The inquisitive reader may explore the resources on inhomogeneous cosmology, as well as the works of George Ellis, Thomas Buchert and David Wiltshire.

Academia Has Changed Less Than You’d Think

I recently read a biography of James Franck. Many of you won’t recognize the name, but physicists might remember the Franck-Hertz experiment from a lab class. Franck and Hertz performed a decisive test of Bohr’s model of the atom, ushering in the quantum age and receiving the 1925 Nobel Prize. After fleeing Germany when Hitler took power, Franck worked on the Manhattan Project and co-authored the Franck Report urging the US not to use nuclear bombs on Japan. He settled at the University of Chicago, which named an institute after him.*

You can find all that on his Wikipedia page. The page also mentions his marriage later in life to Hertha Sponer. Her Wikipedia page talks about her work in spectroscopy, about how she was among the first women to receive a PhD in Germany and the first on the physics faculty at Duke University, and that she remained a professor there until 1966, when she was 70.

Neither Wikipedia page talks about two-body problems, or teaching loads, or pensions.

That’s why I was surprised when the biography covered Franck’s later life. Until Franck died, he and Sponer would travel back and forth, he visiting her at Duke and she visiting him in Chicago. According to the biography, this wasn’t exactly by choice: they both would have preferred to live together in the same city. Somehow though, despite his Nobel Prize and her scientific accomplishments, they never could. The biography talks about how the university kept her teaching class after class, so she struggled to find time for research. It talks about what happened as the couple got older, as their health made it harder and harder to travel back and forth, and they realized that without access to their German pensions they would not be able to support themselves in retirement. The biography gives the impression that Sponer taught till 70 not out of dedication but because she had no alternative.

When we think about the heroes of the past, we imagine them battling foes with historic weight: sexism, antisemitism, Nazism. We don’t hear about their more everyday battles, with academic two-body problems and stingy universities. From this, we can get the impression that the dysfunctions of modern academia are new. But while the problems have grown, we aren’t the first academics with underpaid, overworked teaching faculty, nor the first to struggle to live where we want and love who we want. These are struggles academics have faced for a long, long time.

*Full disclosure: Franck was also my great-great-grandfather, hence I may find his story more interesting than most.

When to Trust the Contrarians

One of my colleagues at the NBI had an unusual experience: one of his papers took a full year to get through peer review. This happens often in math, where reviewers will diligently check proofs for errors, but it’s quite rare in physics: usually the path from writing to publication is much shorter. Then again, the delays shouldn’t have been too surprising for him, given what he was arguing.

My colleague Mohamed Rameez, along with Jacques Colin, Roya Mohayaee, and Subir Sarkar, wants to argue against one of the most famous astronomical discoveries of the last few decades: that the expansion of our universe is accelerating, and thus that an unknown “dark energy” fills the universe. They argue that one of the key pieces of evidence used to prove acceleration is mistaken: that a large region of the universe around us is in fact “flowing” in one direction, and that tricked astronomers into thinking its expansion was accelerating. You might remember a paper making a related argument back in 2016. I didn’t like the media reaction to that paper, and my post triggered a response by the authors, one of whom (Sarkar) is on this paper as well.

I’m not an astronomer or an astrophysicist. I’m not qualified to comment on their argument, and I won’t. I’d still like to know whether they’re right, though. And that means figuring out which experts to trust.

Pick anything we know in physics, and you’ll find at least one person who disagrees. I don’t mean a crackpot, though they exist too. I mean an actual expert who is convinced the rest of the field is wrong. A contrarian, if you will.

I used to be very unsympathetic to these people. I was convinced that the big results of a field are rarely wrong, because of how much is built off of them. I thought that even if a field was using dodgy methods or sloppy reasoning, its big results were used in so many different situations that if they were wrong they would have to be noticed. I’d argue that if you wanted to overturn one of these big claims, you had to disprove not just the result itself, but every other success the field had ever made.

I still believe that, somewhat. But there are a lot of contrarians here at the Niels Bohr Institute. And I’ve started to appreciate what drives them.

The thing is, no scientific result is ever as clean as it ought to be. Everything we do is jury-rigged. We’re almost never experts in everything we’re trying to do, so we often don’t know the best method. Instead, we approximate and guess, we find rough shortcuts and don’t check if they make sense. This can take us far sometimes, sure…but it can also backfire spectacularly.

The contrarians I’ve known got their inspiration from one of those backfires. They saw a result, a respected mainstream result, and they found a glaring screw-up. Maybe it was an approximation that didn’t make any sense, or a statistical measure that was totally inappropriate. Whatever it was, it got them to dig deeper, and suddenly they saw screw-ups all over the place. When they pointed out these problems, at best the people they accused didn’t understand. At worst they got offended. Instead of cooperation, the contrarians were told they couldn’t possibly know what they were talking about, and were ignored. Eventually, they concluded the entire sub-field was broken.

Are they right?

Not always. They can’t be: for every claim you can find a contrarian, and believing them all would be a contradiction.

But sometimes?

Often, they’re right about the screw-ups. They’re right that there’s a cleaner, more proper way to do that calculation, a statistical measure more suited to the problem. And often, doing things right raises subtleties, means that the big important result everyone believed looks a bit less impressive.

Still, that’s not the same as ruling out the result entirely. And despite all the screw-ups, the main result is still often correct. Often, it’s justified not by the original, screwed-up argument, but by newer evidence from a different direction. Often, the sub-field has grown to a point that the original screwed-up argument doesn’t really matter anymore.

Often, but again, not always.

I still don’t know whether to trust the contrarians. I still lean towards expecting fields to sort themselves out, to thinking that error alone can’t sustain long-term research. But I’m keeping a more open mind now. I’m waiting to see how far the contrarians go.

Knowing When to Hold/Fold ‘Em in Science

The things one learns from Wikipedia. For example, today I learned that the country song “The Gambler” was selected for preservation by the US Library of Congress as being “culturally, historically, or artistically significant.”

You’ve got to know when to hold ’em, know when to fold ’em,

Know when to walk away, know when to run.

Knowing when to “hold ’em” or “fold ’em” is important in life in general, but it’s particularly important in science.

And not just on poker night

As scientists, we’re often trying to do something no-one else has done before. That’s exciting, but it’s risky too: sometimes whatever we’re trying simply doesn’t work. In those situations, it’s important to recognize when we aren’t making progress, and change tactics. The trick is, we can’t give up too early either: science is genuinely hard, and sometimes when we feel stuck we’re actually close to the finish line. Knowing which is which, when to “hold” and when to “fold”, is an essential skill, and a hard one to learn.

Sometimes, we can figure this out mathematically. Computational complexity theory classifies calculations by how difficult they are, including how long they take. If you can estimate how long a calculation should take, you can decide whether you’ll finish it in a reasonable amount of time. If you just want a rough guess, you can do a simpler version of the calculation, see how long that takes, and then estimate how much longer the full one will take. If you figure out you’re doomed, then it’s time to switch to a more efficient algorithm, or to a different question entirely.
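That strategy can be sketched in a few lines (a hypothetical toy task, not any specific physics calculation): time a small instance of a job whose scaling you know, then extrapolate to the full size before committing to it:

```python
import time

# Time a small instance of a task with known complexity, then extrapolate
# to the full problem size to decide whether the calculation is feasible.
def work(n):
    # Stand-in for a calculation that scales roughly as O(n^3).
    total = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += i ^ j ^ k
    return total

small, full = 60, 600            # test size vs the size we actually need
t0 = time.perf_counter()
work(small)
elapsed = time.perf_counter() - t0

# For O(n^3) scaling, the cost grows by a factor of (full/small)^3.
estimate = elapsed * (full / small) ** 3
print(f"small run: {elapsed:.3f}s, estimated full run: {estimate:.1f}s")
```

If the estimate comes out in hours, you can still wait; if it comes out in centuries, it’s time to fold.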

Sometimes, we don’t just have to consider time, but money as well. If you’re doing an experiment, you have to estimate how much the equipment will cost, and how much it will cost to run it. Experimenters get pretty good at estimating these things, but they still screw up sometimes and run over budget. Occasionally this is fine: LIGO didn’t detect anything in its first eight-year run, but they upgraded the machines and tried again, and won a Nobel prize. Other times it’s a disaster, and money keeps being funneled into a project that never works. Telling the difference is crucial, and it’s something we as a community are still not so good at.

Sometimes we just have to follow our instincts. This is dangerous, because we have a bias (the “sunk cost fallacy”) to stick with something if we’ve already spent a lot of time or money on it. To counteract that, it’s good to cultivate a bias in the opposite direction, which you might call “scientific impatience”. Getting frustrated with slow progress may not seem productive, but it keeps you motivated to search for a better way. Experienced scientists get used to how long certain types of project take. Too little progress, and they look for another option. This can fail, killing a project that was going to succeed, but it can also prevent over-investment in a failing idea. Only a mix of instincts keeps the field moving.

In the end, science is a gamble. Like the song says, we have to know when to hold ’em and when to fold ’em, when to walk away, and when to run an idea as far as it will go. Sometimes it works, and sometimes it doesn’t. That’s science.

In Defense of the Streetlight

If you read physics blogs, you’ve probably heard this joke before:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is”.

The drunk’s line of thinking has a name, the streetlight effect, and while it may seem ridiculous it’s a common error, even among experts. When it gets too tough to research something, scientists will often be tempted by an easier problem even if it has little to do with the original question. After all, it’s “where the light is”.

Physicists get accused of this all the time. Dark matter could be completely undetectable on Earth, but physicists still build experiments to search for it. Our universe appears to be curved one way, but string theory makes it much easier to study universes curved the other way, so physicists write a lot of nice proofs about a universe we don’t actually inhabit. In my own field, we spend most of our time studying a very nice theory that we know can’t describe the real world.

I’m not going to defend this behavior in general. There are real cases where scientists trick themselves into thinking they can solve an easy problem when they need to solve a hard one. But there is a crucial difference between scientists and drunkards looking for their keys, one that makes this behavior a lot more reasonable: scientists build technology.

As scientists, we can’t just grope around in the dark for our keys. The spaces we’re searching, from the space of all theories of gravity to actual outer space, are much too vast to search randomly. We need new ideas, new mathematics or new equipment, to do the search properly. If we were the drunkard of the story, we’d need to invent night-vision goggles.

Is the light better here, or is it just me?

Suppose you wanted to design new night-vision goggles, to search for your keys in the park. You could try to build them in the dark, but you wouldn’t be able to see what you were doing: you’d lose pieces, miss screws, and break lenses. Much better to build the goggles under that convenient streetlight.

Of course, if you build and test your prototype goggles under the streetlight, you risk that they aren’t good enough for the dark. You’ll have calibrated them in an unrealistic case. In all likelihood, you’ll have to go back and fix your goggles, tweaking them as you go, and you’ll run into the same problem: you can’t see what you’re doing in the dark.

At that point, though, you have an advantage: you now know how to build night-vision goggles. You’ve practiced building goggles in the light, so even if your first pair isn’t good enough, you remember how you put them together. You can tweak the process and modify your goggles, even in the dark, until they’re good enough to find your keys.

Sometimes scientists really are like the drunk, searching under the most convenient streetlight. Sometimes, though, scientists are working where the light is for a reason. Instead of wasting their time lost in the dark, they’re building new technology and practicing new methods, getting better and better at searching until, when they’re ready, they can go back and find their keys. Sometimes, the streetlight is worth it.

“X Meets Y” Conferences

Most conferences focus on a specific sub-field. If you call a conference “Strings” or “Amplitudes”, people know what to expect. Likewise if you focus on something more specific, say Elliptic Integrals. But what if your conference is named after two sub-fields?

These conferences, with names like “QCD Meets Gravity” and “Scattering Amplitudes and the Conformal Bootstrap”, try to connect two different sub-fields together. I’ll call them “X Meets Y” conferences.

The most successful “X Meets Y” conferences involve two sub-fields that have been working together for quite some time. At that point, you don’t just have “X” researchers and “Y” researchers, but “X and Y” researchers, people who work on the connection between the two topics. These people can glue a conference together, showing the separate “X” and “Y” researchers what “X and Y” research looks like. At a conference like that, speakers have a clear idea of what to talk about: the “X” researchers know how to talk to the “Y” researchers, and vice versa, and the organizers can invite speakers they know can talk to both groups.

If the sub-fields have less history of collaboration, “X Meets Y” conferences become trickier. You need at least a few “X and Y” researchers (or at least aspiring “X and Y” researchers) to guide the way. Even if most of the “X” researchers don’t know how to talk to “Y” researchers, the “X and Y” researchers can give suggestions, telling “X” which topics would be most interesting to “Y” and vice versa. With that kind of guidance, “X Meets Y” conferences can inspire new directions of research, opening one field up to the tools of another.

The biggest risk in an “X Meets Y” conference, that becomes more likely the fewer “X and Y” researchers there are, is that everyone just gives their usual talks. The “X” researchers talk about their “X”, and the “Y” researchers talk about their “Y”, and both groups nod politely and go home with no new ideas whatsoever. Scientists are fundamentally lazy creatures. If we already have a talk written, we’re tempted to use it, even if it doesn’t quite fit the occasion. Counteracting that is a tough job, and one that isn’t always feasible.

“X Meets Y” conferences can be very productive, the beginning of new interdisciplinary ideas. But they’re certainly hard to get right. Overall, they’re one of the trickier parts of the social side of science.

Reader Background Poll Reflections

A few weeks back I posted a poll, asking you guys what sort of physics background you have. The idea was to follow up on a poll I did back in 2015, to see how this blog’s audience has changed.

One thing that immediately leaped out of the data was how many of you are physicists. As of writing this, 66% of readers say they either have a PhD in physics or a related field, or are currently in grad school. This includes 7% specifically from my sub-field, “amplitudeology” (though this number may be higher than usual, since we just had our yearly conference and more amplitudeologists were reminded that my blog exists).

I didn’t use exactly the same categories in 2015, so the numbers can’t be compared directly. Back then, only 2.5% of readers described themselves as amplitudeologists. Adding them to the physics PhDs and grad students gives 59%, which goes up to 64.5% if I include the mathematicians (who this year might have put either “PhD in a related field” or “Other Academic”). So overall the percentages are pretty similar, though now it looks like more of my readers are grad students.

Despite the small difference, I am a bit worried: it looks like I’m losing non-physicist readers. I could flatter myself and think that I inspired those non-physicists to go to grad school, but more realistically I should admit that fewer of my posts have been interesting to a non-physics audience. In 2015 I worked at the Perimeter Institute, and helped out with their public lectures. Now I’m at the Niels Bohr Institute, and I get fewer opportunities to hear questions from non-physicists. I get fewer ideas for interesting questions to answer.

I want to keep this blog’s language accessible and its audience general. I appreciate that physicists like this blog and view it as a resource, but I don’t want it to turn into a blog for physicists only. I’d like to encourage the non-physicists in the audience: ask questions! Don’t worry if it sounds naive, or if the question seems easy: if you’re confused, likely others are too.