# Kicking Students Out of Their Homes During a Pandemic: A Bad Idea

I avoid talking politics on this blog. There are a few issues, though, where I feel not just able, but duty-bound, to speak out. Those are issues affecting graduate students.

This week, US Immigration and Customs Enforcement (ICE) announced that, if a university switched to online courses as a response to COVID-19, international students would have to return to their home countries or transfer to a school that still teaches in-person.

Suppose you’re a foreign PhD student at a US university. Maybe your school is already planning to have classes online this fall, like Harvard is. Maybe your school is planning to have classes in person, but will change its mind a few weeks in, when so many students and professors are infected that it’s clearly unreasonable to continue. Maybe your school never changes its mind, but your state does, and the school has to lock down anyway.

As a PhD student, you likely don’t live in the dorms. More likely you live in a shared house, or an apartment. You’re an independent adult. Your parents aren’t paying for you to go to school. Your school is itself a full-time job, one that pays (as little as the university thinks it can get away with).

What happens when your school goes online, and you need to leave the country?

You’d have to find some way out of your lease, or keep paying for it. You’d have to find a flight on short notice. You’d have to pack up all your belongings, ship or sell anything you can’t store, or find friends to hold on to it.

You’d have to find somewhere to stay in your “home country”. Some could move in with their parents temporarily; many can’t. And some of those who could in other circumstances shouldn’t if they’re fleeing an outbreak: their parents are likely older, and vulnerable to the virus. So you have to find a hotel, and eventually perhaps a new apartment, far from what was until recently your home.

Reminder: you’re doing all of this on a shoestring budget, because the university pays you peanuts.

Can you transfer instead? In a word, no.

PhD students are specialists. They’re learning very specific things from very specific people. Academics aren’t the sort of omnidisciplinary scientists you see in movies: Bruce Banner or Tony Stark could pick up a new line of research on a whim, but real people can’t. This is why, while international students may be valuable at the undergraduate level, they’re absolutely necessary at the PhD level. When only three people in the world study the thing you want to study, you don’t have the luxury of staying in your birth country. And you can’t just transfer schools when yours goes online.

It feels like the people who made this decision didn’t think about any of this. That they don’t think grad students matter, or forgot they exist altogether. It seems frustratingly common for policy that affects grad students to be made by people who know nothing about grad students, and that baffles me. PhDs are a vital part of the academic system: without them, universities in their current form wouldn’t even exist. Ignoring them is like hospital policy ignoring residents.

I hope that this policy gets reversed, or halted, or schools find some way around it. At the moment, anyone starting school in the US this fall is in a very tricky position. And anyone already there is in a worse one.

# The Citation Motivation Situation

Citations are the bread and butter of academia, or maybe its prison cigarettes. They link us together, somewhere between a map to show us the way and an informal currency. They’re part of how the world grades us, a measure more objective than letters from our peers (though that’s not saying much). It’s clear why we want to be cited, but why do we cite others?

For more reasons than you’d expect.

First, we cite to respect priority. Since the dawn of science, we’ve kept track not only of what we know, but of who figured it out first. If we use an idea in our paper, we cite its origin: the paper that discovered or invented it. We don’t do this for the oldest and most foundational ideas: nobody cites Einstein for relativity. But if the idea is at all unusual, we make sure to give credit where credit is due.

Second, we cite to substantiate our claims. Academic papers don’t stand on their own: they depend on older proofs and prior discoveries. If we make a claim that was demonstrated in older work, we don’t need to prove it again. By citing the older work, we let the reader know where to look. If they doubt our claim, they can look at the older paper and see what went wrong.

Those two are the most obvious uses of citations, but there are more. Another important use is to provide context. Academic work doesn’t stand alone: we choose what we work on in part based on how it relates to other work. As such, it’s important to cite that other work, to help readers understand our motivation. When we’re advancing the state of the art, we need to tell the reader what that state of the art is. When we’re answering a question or solving a problem, we can cite the paper that asked the question or posed the problem. When we’re introducing a new method or idea, we need to clearly say what’s new about it: how it improves on older, similar ideas.

Scientists are social creatures. While we often have a scientific purpose in mind, citations also follow social conventions. These vary from place to place, field to field, and sub-field to sub-field. Mention someone’s research program, and you might be expected to cite every paper in that program. Cite one of a pair of rivals, and you should probably cite the other one too. Some of these conventions are formalized as “citeware”, software licenses that require citations, rather than payments, to use. Others come from unspoken cultural rules. Citations are a way to support each other, something that can slightly improve another’s job prospects at no real cost to your own. It’s not surprising that they ended up part of our culture, well beyond their pure academic use.

# In Defense of Shitty Code

Scientific programming has been in the news lately, with doubts raised about a coronavirus simulation by researchers at Imperial College London. While the doubts appear to have been put to rest, doing so involved digging through some seriously messy code. The whole situation seems to have gotten a lot of people worried. If these people are that bad at coding, why should we trust their science?

I don’t know much about coronavirus simulations: my knowledge there begins and ends with a talk I saw last month. But I know a thing or two about bad scientific code, because I write it. My code is atrocious. And I’ve seen published code that’s worse.

Why do scientists write bad code?

In part, it’s a matter of training. Some scientists have formal coding training, but most don’t. I took two CS courses in college and that was it. Despite that lack of training, we’re expected and encouraged to code. Before I took those courses, I spent a summer working in a particle physics lab, where I was expected to pick up the C++-based interface pretty much on the fly. I don’t think there’s another community out there that has as much reason to code as scientists do, and as little training for it.

Would it be useful for scientists to have more of the tools of a trained coder? Sometimes, yeah. Version control is a big one: I’ve collaborated on papers that used Git and papers that didn’t, and there’s a big difference. There are coding habits that would speed up our work and lead to fewer dead ends, and they’re worth picking up when we have the time.

But there’s a reason we don’t prioritize “proper coding”. It’s because the things we’re trying to do, from a coding perspective, are really easy.

What, code-wise, is a coronavirus simulation? A vector of “people”, really just simple labels, all randomly infecting each other and recovering, with a few parameters describing how likely they are to do so and how long it takes. What do I do, code-wise? Mostly, giant piles of linear algebra.
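To make that concrete, here is a hypothetical minimal sketch of such a simulation. This is not the Imperial College code, just an illustration of the kind of task involved: the function name and parameter values are invented for the example.

```python
import random

def simulate(n_people=1000, p_infect=0.0002, p_recover=0.1, steps=100, seed=0):
    """Minimal agent-based epidemic sketch: each 'person' is just a label,
    'S' (susceptible), 'I' (infected), or 'R' (recovered), randomly
    infecting each other and recovering."""
    rng = random.Random(seed)
    people = ["I"] + ["S"] * (n_people - 1)  # one initial infection
    for _ in range(steps):
        infected = people.count("I")
        for i, state in enumerate(people):
            # chance of catching it from at least one infected person
            if state == "S" and rng.random() < 1 - (1 - p_infect) ** infected:
                people[i] = "I"
            elif state == "I" and rng.random() < p_recover:
                people[i] = "R"
    return people.count("S"), people.count("I"), people.count("R")
```

That really is most of it: a list of labels, a couple of probabilities, and a loop. The hard part is not the code, it’s choosing the parameters and assumptions that go into it.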

These are not some sort of cutting-edge programming tasks. These are things people have been able to do since the dawn of computers. These are things that, when you screw them up, become quite obvious quite quickly.

Compared to that, the everyday tasks of software developers, like making a reliable interface for users, or efficient graphics, are much more difficult. They’re tasks that really require good coding practices, that just can’t function without them.

For us, the important part is not the coding itself, but what we’re doing with it. Whatever bugs are in a coronavirus simulation, they will have much less impact than, for example, the way in which the simulation includes superspreaders. Bugs in my code give me obviously wrong answers; bad scientific assumptions are much harder to root out.

There’s an exception that proves the rule here, and it’s that, when the coding task is actually difficult, scientists step up and write better code. Scientists who want to run efficiently on supercomputers, who are afraid of numerical error or need to simulate on many scales at once, these people learn how to code properly. The code behind the LHC still might be jury-rigged by industry standards, but it’s light-years better than typical scientific code.

I get the furor around the Imperial group’s code. I get that, when a government makes a critical decision, you hope that their every input is as professional as possible. But without getting too political for this blog, let me just say that whatever your politics are, if any of it is based on science, it comes from code like this. Psychology studies, economic modeling, polling…they’re using code, and it’s jury-rigged to hell. Scientists just have more important things to worry about.

# Zoomplitudes 2020

This week, I’m at Zoomplitudes!

My field’s big yearly conference, Amplitudes, was supposed to happen in Michigan this year, but with the coronavirus pandemic it was quickly clear that would be impossible. Luckily, Anastasia Volovich stepped in to Zoomganize the conference from Brown.

The conference is still going, so I’ll say more about the scientific content later. (Except to say there have been a lot of interesting talks!) Here, I’ll just write a bit about the novel experience of going to a conference on Zoom.

Time zones are always tricky in an online conference like this. Our field is spread widely around the world, but not evenly: there are a few areas with quite a lot of amplitudes research. As a result, Zoomganizing from the US east coast seems like it was genuinely the best compromise. It means the talks start a bit early for the west coast US (6am their time), but still end not too late for the Europeans (10:30pm CET). The timing is awkward for our colleagues in China and Taiwan, but they can still join in the morning session (their evening). Overall, I don’t think it was possible to do better there.

Usually, Amplitudes is accompanied by a one-week school for Master’s and PhD students. That wasn’t feasible this year, but to fill the gap Nima Arkani-Hamed gave a livestreamed lecture the Friday before, which apparently clocked in at thirteen hours!

One aspect of the conference that really impressed me was the Slack space. The organizers wanted to replicate the “halls” part of the conference, with small groups chatting around blackboards between the talks. They set up a space on the platform Slack, and let attendees send private messages and make their own channels for specific topics. Soon the space was filled with lively discussion, including a #coffeebreak channel with pictures of everyone’s morning coffee. I think the organizers did a really good job of achieving the kind of “serendipity” I talked about in this post, where accidental meetings spark new ideas. More than that, this is the kind of thing I’d appreciate even in face-to-face conferences. The ability to message anyone at the conference from a shared platform, to have discussions that anyone can stumble on and read later, to post papers and links, all of this seems genuinely quite useful. As one of the organizers for Amplitudes 2021, I may soon get a chance to try this out.

Zoom itself worked reasonably well. A few people had trouble connecting or sharing screens, but overall things worked reliably, and the Zoom chat window is arguably better than people whispering to each other in the back of an in-person conference. One feature of the platform that confused people a bit is that co-hosts can’t raise their hands to ask questions: since speakers had to be made co-hosts to share their screens, they had a harder time asking questions during other speakers’ talks.

A part I was more frustrated by was the scheduling. Fitting everyone who wanted to speak between 6am on the US west coast and 10:30pm in Europe must have been challenging, and the result was a tightly packed conference, with three breaks each no more than 45 minutes. That’s already a bit tight, but it ended up much tighter because most talks went long. The conference’s 30-minute slots regularly took 40 minutes, between speakers running over and questions going late. As a result, the conference’s “lunch break” (roughly dinner break for the Europeans) was often only 15 minutes. I appreciate the desire for lively discussion, especially since the conference is recorded and the question sessions can be a resource for others. But I worry that, as a pitfall of remote conferences, the inconveniences people suffer to attend can become largely invisible. Yes, we can always skip a talk, and watch the recording later. Yes, we can prepare food beforehand. Still, I don’t think a 15-minute lunch break was what the organizers had in mind, and if our community does more remote conferences we should brainstorm ways to avoid this problem next time.

I’m curious how other fields are doing remote conferences right now. Even after the pandemic, I suspect some fields will experiment with this kind of thing. It’s worth sharing and paying attention to what works and what doesn’t.

# The Point of a Model

I’ve been reading more lately, partially for the obvious reasons. Mostly, I’ve been catching up on books everyone else already read.

One such book is Daniel Kahneman’s “Thinking, Fast and Slow”. With all the talk lately about cognitive biases, Kahneman’s account of his research on decision-making was quite familiar ground. The book turned out to be more interesting as a window into the culture of psychology research. While I had a working picture from psychologist friends in grad school, “Thinking, Fast and Slow” covered the other side, the perspective of a successful professor promoting his field.

Most of this wasn’t too surprising, but one passage struck me:

> Several economists and psychologists have proposed models of decision making that are based on the emotions of regret and disappointment. It is fair to say that these models have had less influence than prospect theory, and the reason is instructive. The emotions of regret and disappointment are real, and decision makers surely anticipate these emotions when making their choices. The problem is that regret theories make few striking predictions that would distinguish them from prospect theory, which has the advantage of being simpler. The complexity of prospect theory was more acceptable in the competition with expected utility theory because it did predict observations that expected utility theory could not explain.
>
> Richer and more realistic assumptions do not suffice to make a theory successful. Scientists use theories as a bag of working tools, and they will not take on the burden of a heavier bag unless the new tools are very useful. Prospect theory was accepted by many scholars not because it is “true” but because the concepts that it added to utility theory, notably the reference point and loss aversion, were worth the trouble; they yielded new predictions that turned out to be true. We were lucky.
>
> — *Thinking, Fast and Slow*, page 288

Kahneman is contrasting three theories of decision making here: the old proposal that people try to maximize their expected utility (roughly, the benefit they get in future), his more complicated “prospect theory” that takes into account not only what benefits people get but their attachment to what they already have, and other more complicated models based on regret. His theory ended up more popular, both than the older theory and than the newer regret-based models.

Why did his theory win out? Apparently, not because it was the true one: as he says, people almost certainly do feel regret, and make decisions based on it. No, his theory won because it was more useful. It made new, surprising predictions, while being simpler and easier to use than the regret-based models.
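To get a feel for the extra structure prospect theory adds, here is a sketch of the standard Tversky–Kahneman value function, using the parameter fits they published (α ≈ 0.88, λ ≈ 2.25). This is a simplification: the full theory also reweights probabilities, which is omitted here.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect theory value function: outcomes x are gains or losses
    relative to a reference point, and losses are weighted more
    heavily than equal-sized gains (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Loss aversion in action: a $100 loss looms larger than a $100 gain.
print(prospect_value(100))    # ≈ 57.5
print(prospect_value(-100))   # ≈ -129.5
```

Expected utility theory has no reference point and no λ: the asymmetry between those two outputs is exactly the kind of added concept Kahneman says was “worth the trouble”.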

This, a theory defeating another without being “more true”, might bug you. By itself, it doesn’t bug me. That’s because, as a physicist, I’m used to the idea that models should not just be true, but useful. If we want to test our theories against reality, we have a large number of “levels” of description to choose from. We can “zoom in” to quarks and gluons, or “zoom out” to look at atoms, or molecules, or polymers. We have to decide how much detail to include, and we have real pragmatic reasons for doing so: some details are just too small to measure!

It’s not clear Kahneman’s community was doing this, though. That is, it doesn’t seem like he’s saying that regret and disappointment are just “too small to be measured”. Instead, he’s saying that they don’t seem to predict much differently from prospect theory, and prospect theory is simpler to use.

Ok, we do that in physics too. We like working with simpler theories, when we have a good excuse. We’re just careful about it. When we can, we derive our simpler theories from more complicated ones, carving out complexity and estimating how much of a difference it would have made. Do this carefully, and we can treat black holes as if they were subatomic particles. When we can’t, we have what we call “phenomenological” models, models built up from observation and not from an underlying theory. We never take such models as the last word, though: a phenomenological model is always viewed as temporary, something to bridge a gap while we try to derive it from more basic physics.

Kahneman doesn’t seem to view prospect theory as temporary. It doesn’t sound like anyone is trying to derive it from regret theory, or to make regret theory easier to use, or to prove it always agrees with regret theory. Maybe they are, and Kahneman simply doesn’t think much of their efforts. Either way, it doesn’t sound like a major goal of the field.

That’s the part that bothered me. In physics, we can’t always hope to derive things from a more fundamental theory, some theories are as fundamental as we know. Psychology isn’t like that: any behavior people display has to be caused by what’s going on in their heads. What Kahneman seems to be saying here is that regret theory may well be closer to what’s going on in people’s heads, but he doesn’t care: it isn’t as useful.

And at that point, I have to ask: useful for what?

As a psychologist, isn’t your goal ultimately to answer that question? To find out “what’s going on in people’s heads”? Isn’t every model you build, every theory you propose, dedicated to that question?

And if not, what exactly is it “useful” for?

For technology? It’s true, “Thinking Fast and Slow” describes several groups Kahneman advised, most memorably the IDF. Is the advantage of prospect theory, then, its “usefulness”, that it leads to better advice for the IDF?

I don’t think that’s what Kahneman means, though. When he says “useful”, he doesn’t mean “useful for advice”. He means it’s good for giving researchers ideas, good for getting people talking. He means “useful for designing experiments”. He means “useful for writing papers”.

And this is when things start to sound worryingly familiar. Because if I’m accusing Kahneman’s community of giving up on finding the fundamental truth, just doing whatever they can to write more papers…well, that’s not an uncommon accusation in physics as well. If the people who spend their lives describing cognitive biases are really getting distracted like that, what chance does, say, string theory have?

I don’t know how seriously to take any of this. But it’s lurking there, in the back of my mind, that nasty, vicious, essential question: what are all of our models for?

Bonus quote, for the commenters to have fun with:

> I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.
>
> — *Thinking, Fast and Slow*, page 264

# The Academic Workflow (Or Lack Thereof)

I was chatting with someone in biotech recently, who was frustrated with the current state of coronavirus research. The problem, in her view, was that researchers were approaching the problem in too “academic” a way. Instead of coordinating, trying to narrow down to a few approaches and make sure they get the testing they need, researchers were each focusing on their own approach, answering the questions they thought were interesting or important without fitting their work into a broader plan. She thought that a more top-down, corporate approach would do much better.

I don’t know anything about the current state of coronavirus research, what works and what doesn’t. But the conversation got me thinking about my own field.

Theoretical physics is about as far from “top-down” as you can get. As a graduate student, your “boss” is your advisor, but that “bossiness” can vary from telling you to do specific calculations to just meeting you every so often to discuss ideas. As a postdoc, even that structure evaporates: while you usually have an official “supervisor”, they won’t tell you what to do outside of the most regimented projects. Instead, they suggest, proposing ideas they’d like to collaborate on. As a professor, you don’t have this kind of “supervisor”: while there are people in charge of the department, they won’t tell you what to research. At most, you have informal hierarchies: senior professors influencing junior professors, or the hot-shots influencing the rest.

Even when we get a collaboration going, we don’t tend to have assigned roles. People do what they can, when they can, and if you’re an expert on one part of the work you’ll probably end up doing that part, but that won’t be “the plan” because there almost never is a plan. There’s very rarely a “person in charge”: if there’s a disagreement it gets solved by one person convincing another that they’re right.

This kind of loose structure is freeing, but it can also be frustrating. Even the question of who is on a collaboration can be up in the air, with a sometimes tacit assumption that if you were there for certain conversations you’re there for the paper. It’s possible to push for more structure, but push too hard and people will start ignoring you anyway.

Would we benefit from more structure? That depends on the project. Sometimes, when we have clear goals, a more “corporate” approach can work. Other times, when we’re exploring something genuinely new, any plan is going to fail: we simply don’t know what we’re going to run into, what will matter and what won’t. Maybe there are corporate strategies for that kind of research, ways to manage that workflow. I don’t know them.

# Thoughts on Doing Science Remotely

In these times, I’m unusually lucky.

I’m a theoretical physicist. I don’t handle goods, or see customers. Other scientists need labs, or telescopes: I just need a computer and a pad of paper. As a postdoc, I don’t even teach. In the past, commenters have asked me why I don’t just work remotely. Why go to conferences, why even go to the office?

With COVID-19, we’re finding out.

First, the good: my colleagues at the Niels Bohr Institute have been hard at work keeping everyone connected. Our seminars have moved online, where we hold weekly Zoom seminars jointly with Iceland, Uppsala and Nordita. We have a “virtual coffee room”, a Zoom room that’s continuously open with “virtual coffee breaks” at 10 and 3:30 to encourage people to show up. We’re planning virtual colloquia, and even a virtual social night with Jackbox games.

Is it working? Partially.

The seminars are the strongest part. Remote seminars let us bring in speakers from all over the world (time zones permitting). They let one seminar serve the needs of several different institutes. Most of the basic things a seminar needs (slides, blackboards, ability to ask questions, ability to clap) are present on online platforms, particularly Zoom. And our seminar organizers had the bright idea to keep the Zoom room open after the talk, which allows the traditional “after seminar conversation with the speaker” for those who want it.

Still, the setup isn’t as good as it could be. If the audience turns off their cameras and mics, the speaker can feel like they’re giving a talk to an empty room. This isn’t just awkward, it makes the talk worse: speakers improve when they can “feel the room” and see what catches their audience’s interest. If the audience keeps their cameras or mics on instead, it takes a lot of bandwidth, and the speaker still can’t really feel the room. I don’t know if there’s a good solution here, but it’s worth working on.

The “virtual coffee room” is weaker. It was quite popular at first, but as time went on fewer and fewer people (myself included) showed up. In contrast, my wife’s friends at Waterloo do a daily cryptic crossword, and that seems to do quite well. What’s the difference? They have real crosswords, we don’t have real coffee.

I kid, but only a little. Coffee rooms and tea breaks work because of a core activity, a physical requirement that brings people together. We value them for their social role, but that role on its own isn’t enough to get us in the door. We need the excuse: the coffee, the tea, the cookies, the crossword. Without that shared structure, people just don’t show up.

Getting this kind of thing right is more important than it might seem. Social activities help us feel better, they help us feel less isolated. But more than that, they help us do science better.

That’s because science works, at least in part, through serendipity.

You might think of scientific collaboration as something we plan, and it can be sometimes. Sometimes we know exactly what we’re looking for: a precise calculation someone else can do, a question someone else can answer. Sometimes, though, we’re helped by chance. We have random conversations, different people in different situations, coffee breaks and conference dinners, and eventually someone brings up an idea we wouldn’t have thought of on our own.

Other times, chance helps by providing an excuse. I have a few questions rattling around in my head that I’d like to ask some of my field’s big-shots, but that don’t feel worth an email. I’ve been waiting to meet them at a conference instead. The advantage of those casual meetings is that they give an excuse for conversation: we have to talk about something, and it might as well be my dumb question. Without that kind of casual contact, it feels a lot harder to broach low-stakes topics.

None of this is impossible to do remotely. But I think we need new technology (social or digital) to make it work well. Serendipity is easy to find in person, but social networks can imitate it. Log in to facebook or tumblr looking for your favorite content, and you face a pile of ongoing conversations. Looking through them, you naturally “run into” whatever your friends are talking about. I could see something similar for academia. Take something like the list of new papers on arXiv, then run a list of ongoing conversations next to it. When we check the arXiv each morning, we could see what our colleagues were talking about, and join in if we see something interesting. It would be a way to stay connected that would keep us together more, giving more incentive and structure beyond simple loneliness, and lead to the kind of accidental meetings that science craves. You could even graft conferences on to that system, talks in the middle with conversation threads on the side.

None of us know how long the pandemic will last, or how long we’ll be asked to work from home. But even afterwards, it’s worth thinking about the kind of infrastructure science needs to work remotely. Some ideas may still be valuable after all this is over.

# Life Cycle of an Academic Scientist

So you want to do science for a living. Some scientists work for companies, developing new products. Some work for governments. But if you want to do “pure science”, science just to learn about the world, then you’ll likely work at a university, as part of what we call academia.

The first step towards academia is graduate school. In the US, this means getting a PhD.

(Master’s degrees, at least in the US, have a different purpose. Most are “terminal Master’s”, designed to be your last degree. With a terminal Master’s, you can be a technician in a lab, but you won’t get farther down this path. In the US you don’t need a Master’s before you apply for a PhD program, and having one is usually a waste of time: PhD programs will make you re-take most of the same classes.)

Once you have a PhD, it’s time to get a job! Often, your first job after graduate school is a postdoc. Postdocs are short-term jobs, usually one to three years long. Some people are lucky enough to go to the next stage quickly, others have more postdoc jobs first. These jobs will take you all over the world, everywhere people with your specialty work. Sometimes these jobs involve teaching, but more often you just do scientific research.

In the US system, if everything goes well, eventually you get a tenure-track job. These jobs involve both teaching and research. You get to train PhD students, hire postdocs, and in general start acting like a proper professor. This stage lasts around seven years, while the university evaluates you. If they decide you’re not worth it, you’ll typically have to leave and apply for a job at another university. If they like you, though, you get tenure.

Tenure is the first time as an academic scientist that you aren’t on a short-term contract. Your job is more permanent than most: you have extra protection from being fired that most people don’t. While you can’t just let everything slide, you have the freedom to make more of your own decisions.

A tenured job can last until retirement, when you become an emeritus professor. Emeritus professors are retired but still do some of the work they did as professors. They’re paid out of their pension instead of a university salary, but they still sometimes teach or do research, and they usually still have an office. The university can hire someone new, and the cycle continues.

This isn’t the only path scientists take. Some work in a national lab instead. These jobs don’t usually involve teaching duties, and the path to a permanent position is a bit different. Some get teaching jobs instead of research professorships. These teaching jobs are usually not permanent: universities are hiring more and more adjunct faculty, who have to string together temporary contracts to make a precarious living.

I’ve mostly focused on the US system here. Europe is a bit different: Master’s degrees are a real part of the system, tenure-track doesn’t really exist, and adjunct faculty don’t always either. Some countries, like Germany, have their own quite complicated systems; others fall somewhere in between.

# Guest Post: On the Real Inhomogeneous Universe and the Weirdness of ‘Dark Energy’

A few weeks ago, I mentioned a paper by a colleague of mine, Mohamed Rameez, that generated some discussion. Since I wasn’t up for commenting on the paper’s scientific content, I thought it would be good to give Rameez a chance to explain it in his own words, in a guest post. Here’s what he has to say:

In an earlier post, 4gravitons had contemplated the question of ‘when to trust the contrarians’, in the context of our about-to-be-published paper, in which we argue that once the effects of the bulk flow in the local Universe are accounted for, there is no evidence for any isotropic cosmic acceleration, which would be required to claim some sort of ‘dark energy’.

In the following I would like to emphasize that this is a reasonable view, and not a contrarian one. To do so I will examine the bulk flow of the local Universe and the historical evolution of what appears to be somewhat dodgy supernova data. I will present a trivial solution (from data) to the claimed ‘Hubble tension’. I will then discuss inhomogeneous cosmology, and the 2011 Nobel prize in Physics. I will proceed to make predictions that can be falsified with future data. I will conclude with some questions that should be frequently asked.

Disclaimer: The views expressed here are not necessarily shared by my collaborators.

The bulk flow of the local Universe:

The largest anisotropy in the Cosmic Microwave Background is the dipole, believed to be caused by our motion with respect to the ‘rest frame’ of the CMB with a velocity of ~369 km s^-1. Under this view, all matter in the local Universe appears to be flowing. At least out to ~300 Mpc, this flow continues to be directionally coherent, to within ~40 degrees of the CMB dipole, and the scale at which the average relative motion between matter and radiation converges to zero has so far not been found.

This is one of the most widely accepted results in modern cosmology, to the extent that SN1a data come pre-‘corrected’ for it.
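For concreteness, the ‘correction’ in question converts heliocentric redshifts to the CMB frame using the dipole velocity quoted above. This is a first-order sketch only (exact treatments compose the $(1+z)$ factors), and the function name and example numbers are mine, not from any catalogue:

```python
C_KM_S = 299792.458        # speed of light, km/s
V_DIPOLE = 369.0           # solar motion w.r.t. the CMB, km/s (value quoted above)

def helio_to_cmb(z_hel, cos_theta):
    """First-order conversion of a heliocentric redshift to the CMB frame.

    cos_theta is the cosine of the angle between the object and the
    CMB dipole apex; exact treatments compose (1+z) factors instead.
    """
    return z_hel + (V_DIPOLE / C_KM_S) * cos_theta

# An object right at the dipole apex: its heliocentric redshift is
# blueshifted by our motion, so the CMB-frame value comes out larger.
z_cmb = helio_to_cmb(0.0200, cos_theta=1.0)   # ≈ 0.0212
```

Note that at the lowest redshifts in the supernova samples discussed below, this shift of ~0.0012 is several percent of the redshift itself, which is why the direction of the correction matters so much.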

Such a flow has covariant consequences under general relativity and this is what we set out to test.

Supernova data, directions in the sky and dodgyness:

Both Riess et al 1998 and Perlmutter et al 1999 used samples of supernovae down to redshifts of 0.01, in which almost all SNe at redshifts below 0.1 were in the direction of the flow.

Subsequently, in Astier et al 2006, Kowalski et al 2008, Amanullah et al 2010 and Suzuki et al 2011, it is reported that a process of outlier rejection was adopted in which data points >3$\sigma$ from the Hubble diagram were discarded. This was done using a highly questionable statistical method that involves adjusting an intrinsic dispersion term $\sigma_{\textrm{int}}$ by hand until a $\chi^2/\textrm{ndof}$ of 1 is obtained with respect to the assumed $\Lambda$CDM model. The number of outliers rejected is, however, far in excess of 0.3% – the 3$\sigma$ expectation. As the sky coverage became less skewed, supernovae with redshift less than ~0.023 were excluded for being outside the Hubble flow.

While the Hubble diagram had so far been inferred from heliocentric redshifts and magnitudes, with the introduction of SDSS supernovae that happened to be in the direction opposite to the flow, peculiar velocity ‘corrections’ were adopted in the JLA catalogue and supernovae down to extremely low redshifts were reintroduced. While the early claims of a cosmological constant were stated as ‘high redshift supernovae were found to be dimmer (15% in flux) than the low redshift supernovae (compared to what would be expected in a $\Lambda=0$ universe)’, it is worth noting that the peculiar velocity corrections change the redshifts and fluxes of low redshift supernovae by up to ~20%.
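To make the procedure being criticised concrete, here is a minimal sketch, with entirely made-up data, of tuning an intrinsic dispersion term until $\chi^2/\textrm{ndof}=1$, together with the ~0.3% Gaussian expectation quoted above. The variable names and synthetic numbers are mine, not from any supernova compilation:

```python
import math
import numpy as np

# Gaussian expectation: the fraction of points beyond 3 sigma (the ~0.3% above).
expected_outlier_frac = 1 - math.erf(3 / math.sqrt(2))  # ≈ 0.0027

rng = np.random.default_rng(0)
n = 740                                        # sample size borrowed from the text
meas_err = rng.uniform(0.05, 0.15, n)          # per-SN measurement error (mag)
true_sigma_int = 0.12                          # 'unknown' intrinsic scatter (mag)
residuals = rng.normal(0.0, np.sqrt(meas_err**2 + true_sigma_int**2))

def tune_sigma_int(residuals, meas_err, ndof):
    """Adjust sigma_int by bisection until chi^2/ndof = 1."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        chi2 = np.sum(residuals**2 / (meas_err**2 + mid**2))
        if chi2 / ndof > 1.0:
            lo = mid       # scatter still too large: inflate sigma_int further
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_int = tune_sigma_int(residuals, meas_err, n)
```

The objection in the text is then easy to state: once $\sigma_{\textrm{int}}$ has been inflated so that $\chi^2/\textrm{ndof}=1$ by construction, a Gaussian sample should yield roughly 0.3% of points beyond 3$\sigma$, so rejecting far more than that is inconsistent with the very assumption the cut relies on.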

When it was observed that even with this ‘corrected’ sample of 740 SNe, any evidence for isotropic acceleration using a principled Maximum Likelihood Estimator is less than 3$\sigma$, it was claimed that by adding 12 additional parameters (to the 10-parameter model) to allow for redshift and sample dependence of the light curve fitting parameters, the evidence was greater than 4$\sigma$.

As we discuss in Colin et al. 2019, these corrections also appear to be arbitrary, and betray an ignorance of the fundamentals of both basic statistical analysis and relativity. With the Pantheon compilation, heliocentric observables were no longer public and these peculiar velocity corrections initially extended far beyond the range of any known flow model of the Local Universe. When this bug was eventually fixed, both the heliocentric redshifts and magnitudes of the SDSS SNe that filled in the ‘redshift desert’ between low and high redshift SNe were found to be alarmingly discrepant. The authors have so far not offered any clarification of these discrepancies.

Thus it seems to me that the latest generation of ‘publicly available’ supernova data are not aiding either open science or progress in cosmology.

A trivial solution to the ‘Hubble tension’?

The apparent tension between the Hubble parameter as inferred from the Cosmic Microwave Background and low redshift tracers has been widely discussed, and recent studies suggest that redshift errors as low as 0.0001 can have a significant impact. Redshift discrepancies as big as 0.1 have been reported. The shifts reported between JLA and Pantheon appear to be sufficient to lower the Hubble parameter from ~73 km s^-1 Mpc^-1 to ~68 km s^-1 Mpc^-1.
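As a back-of-the-envelope illustration of the magnitudes involved: at low redshift $H_0 \approx cz/d$, so a shift in catalogued redshifts translates directly into a shift in the inferred Hubble parameter. The distance and the pair of redshifts below are invented for the example, not taken from either catalogue:

```python
# At low redshift, H0 ≈ c z / d, so a fractional shift in redshift
# maps directly onto a fractional shift in the inferred H0.
C_KM_S = 299792.458     # speed of light, km/s

def h0_from(z, d_mpc):
    """Naive low-redshift Hubble constant estimate, H0 = cz/d."""
    return C_KM_S * z / d_mpc

# A hypothetical supernova at an assumed distance of 200 Mpc:
d = 200.0
z_catalog_a = 0.0487    # illustrative redshift in one compilation
z_catalog_b = 0.0454    # same object after a revised 'correction'

h0_a = h0_from(z_catalog_a, d)   # ≈ 73 km/s/Mpc
h0_b = h0_from(z_catalog_b, d)   # ≈ 68 km/s/Mpc
```

A redshift shift of ~0.003 at this distance is all it takes to move the inferred value across the full range of the claimed tension, which is the sense in which the JLA-to-Pantheon shifts could ‘trivially’ account for it.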

On General Relativity, cosmology, metric expansion and inhomogeneities:

In the maximally symmetric Friedmann-Lemaitre-Robertson-Walker solution to general relativity, there is only one meaningful global notion of distance and it expands at the same rate everywhere. However, the late time Universe has structure on all scales, and one may only hope for statistical (not exact) homogeneity. The Universe is expected to be lumpy. A background FLRW metric is not expected to exist and quantities analogous to the Hubble and deceleration parameters will vary across the sky. Peculiar velocities may be more precisely thought of as variations in the expansion rate of the Universe. At what rate does a real Universe with structure expand? The problems of defining a meaningful average notion of volume, its dynamical evolution, and connecting it to observations are all conceptually open.

On the 2011 Nobel Prize in Physics:

‘The Fitting Problem in Cosmology’ was written in 1987. In the context of this work and the significant theoretical difficulties involved in inferring fundamental physics from the real Universe, any claims of having measured a cosmological constant from directionally skewed, sparse samples of intrinsically scattered observations should have been taken with a grain of salt. By honouring this claim with a Nobel Prize, the Swedish Academy may have induced runaway prestige bias in favour of some of the least principled analyses in science, strengthening the confirmation bias that seems prevalent in cosmology.

This has resulted in the generation of a large body of misleading literature, while normalizing the practice of ‘massaging’ scientific data. In her recent video about gravitational waves, Sabine Hossenfelder says “We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data”. What about when the data were fitted (in 1998–1999) using a method that had been discredited in 1989, to a toy model that had been cautioned against in 1987, leading to a ‘discovery’ of profound significance for fundamental physics?

A prediction with future cosmological data:

With the advent of high-statistics cosmological data in the future, such as from the Large Synoptic Survey Telescope, I predict that the Hubble and deceleration parameters inferred from supernovae in hemispheres towards and away from the CMB dipole will be found to differ in a statistically significant (>5$\sigma$) way. Given selection criteria and blind analyses of the data that can be agreed upon in advance, I would be willing to bet a substantial amount of money on this prediction.

Concluding: on the amusing sociology of ‘Dark Energy’ and manufactured concordance:

Of the two authors of the well-known cosmology textbook ‘The Early Universe’, Edward Kolb writes these interesting papers questioning dark energy, while Michael Turner is credited with coining the term ‘Dark Energy’. Reasonable scientific perspectives have to be presented as ‘Dark Energy without dark energy’. Papers questioning the need to invoke such a mysterious content that makes up ‘68% of the Universe’ are quickly targeted by inane articles by non-experts or perhaps well-meant but still misleading YouTube videos. Much of this is nothing more than a spectacle.

In summary, while the theoretical debate about whether what has been observed as Dark Energy is the effect of inhomogeneities is ongoing, observers appear to have been actively using the most inhomogeneous feature of the local Universe through opaque corrections to data, to continue claiming that this ‘dark energy’ exists.

It is heartening to see that recent works lean toward a breaking of this manufactured concordance and speak of a crisis for cosmology.

Questions that should be frequently asked:

Q. Is there a Hubble frame in the late time Universe?

A. The Hubble frame is a property of the FLRW exact solution, and in the late time Universe in which galaxies and clusters have peculiar motions with respect to each other, an equivalent notion does not exist. While popular inference treats the frame in which the CMB dipole vanishes as the Hubble frame, the scale at which the bulk flow of the local Universe converges to that frame has never been found. We are tilted observers.

Q. I am about to perform blinded analyses on new cosmological data. Should I correct all my redshifts towards the CMB rest frame?

A. No. Correcting all your redshifts towards a frame that has never been found is a good way to end up with ‘dark energy’. It is worth noting that while the CMB dipole has been known since 1994, supernova data have been corrected towards the CMB rest frame only since 2010, for what appear to be independent reasons.

Q. Can I combine new data with existing Supernova data?

A. No. The current generation of publicly available supernova data suffer from the natural biases that are to be expected when data are compiled incrementally through a human mediated process. It would be better to start fresh with a new sample.

Q. Is ‘dark energy’ new fundamental physics?

A. Given that general relativity is a 100+ year old theory and significant difficulties exist in describing the late time Universe with it, it is unnecessary to invoke new fundamental physics when confronting any apparent acceleration of the real Universe. All signs suggest that what has been ascribed to dark energy is the result of a community that is hell-bent on repeating what Einstein supposedly called his greatest mistake.

Digging deeper:

The inquisitive reader may explore the resources on inhomogeneous cosmology, as well as the works of George Ellis, Thomas Buchert and David Wiltshire.

# Academia Has Changed Less Than You’d Think

I recently read a biography of James Franck. Many of you won’t recognize the name, but physicists might remember the Franck-Hertz experiment from a lab class. Franck and Hertz performed a decisive test of Bohr’s model of the atom, ushering in the quantum age and receiving the 1925 Nobel Prize. After fleeing Germany when Hitler took power, Franck worked on the Manhattan project and co-authored the Franck Report urging the US not to use nuclear bombs on Japan. He settled at the University of Chicago, which named an institute after him.*

You can find all that on his Wikipedia page. The page also mentions his marriage later in life to Hertha Sponer. Her Wikipedia page talks about her work in spectroscopy, about how she was among the first women to receive a PhD in Germany and the first on the physics faculty at Duke University, and that she remained a professor there until 1966, when she was 70.