Tag Archives: PublicPerception

Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?) and used it as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

The reasoning leading up to this concession, though, contains a few misunderstandings. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs try to use physical principles and toy models to say fundamental things about quantum gravity, trying to think about space and time as being made up of entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to put them there. The amplituhedron is not a theory of everything, and it is not at present a theory of quantum gravity. It’s a research direction that might shed new insight about quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

The Winding Path of a Physics Conversation

In my line of work, I spend a lot of time explaining physics. I write posts here of course, and give the occasional public lecture. I also explain physics when I supervise Master’s students, and in a broader sense whenever I chat with my collaborators or write papers. I’ll explain physics even more when I start teaching. But of all the ways to explain physics, there’s one that has always been my favorite: the one-on-one conversation.

Talking science one-on-one is validating in a uniquely satisfying way. You get instant feedback, questions when you’re unclear and comprehension when you’re close. There’s a kind of puzzle to it, discovering what you need to fill in the gaps in one particular person’s understanding. As a kid, I’d chase this feeling with imaginary conversations: I’d plot out a chat with Democritus or Newton, trying to explain physics or evolution or democracy. It was a game, seeing how I could ground our modern understanding in concepts someone from history already knew.

Way better than Parcheesi

I’ll never get a chance in real life to explain physics to a Democritus or a Newton, to bridge a gap quite that large. But, as I’ve discovered over the years, everyone has bits and pieces they don’t yet understand. Even focused on the most popular topics, like black holes or elementary particles, everyone has gaps in what they’ve managed to pick up. I do too! So any conversation can be its own kind of adventure, discovering what that one person knows, what they don’t, and how to connect the two.

Of course, there’s fun in writing and public speaking too (not to mention, of course, research). Still, I sometimes wonder if there’s a career out there in just the part I like best: just one conversation after another, delving deep into one person’s understanding, making real progress, then moving on to the next. It wouldn’t be efficient by any means, but it sure sounds fun.

Doing Difficult Things Is Its Own Reward

Does antimatter fall up, or down?

Technically, we don’t know yet. The ALPHA-g experiment would have been the first to check this, making anti-hydrogen by trapping anti-protons and positrons in a long tube and seeing which way it falls. While they got most of their setup working, the LHC complex shut down before they could finish. It starts up again next month, so we should have our answer soon.

That said, for most theorists’ purposes, we absolutely do know: antimatter falls down. Antimatter is one of the cleanest examples of a prediction from pure theory that was confirmed by experiment. When Paul Dirac first tried to write down an equation that described electrons, he found the math forced him to add another particle with the opposite charge. With no such particle in sight, he speculated it could be the proton (this doesn’t work, they need the same mass), before Carl D. Anderson discovered the positron in 1932.

The same math that forced Dirac to add antimatter also tells us which way it falls. There’s a bit more involved, in the form of general relativity, but the recipe is pretty simple: we know how to take an equation like Dirac’s and add gravity to it, and we have enough practice doing it in different situations that we’re pretty sure it’s the right way to go. Pretty sure doesn’t mean 100% sure: talk to the right theorists, and you’ll probably find a proposal or two in which antimatter falls up instead of down. But they tend to be pretty weird proposals, from pretty weird theorists.

Ok, but if those theorists are that “weird”, that outside the mainstream, why does an experiment like ALPHA-g exist? Why does it happen at CERN, one of the flagship facilities for all of mainstream particle physics?

This gets at a misconception I occasionally hear from critics of the physics mainstream. They worry about groupthink among mainstream theorists, about the physics community dismissing good ideas just because they’re not trendy (you may think I did that just now, for antigravity antimatter!). They expect this to result in a self-fulfilling prophecy where nobody tests ideas outside the mainstream, so nobody finds evidence for them, so they keep getting dismissed.

The mistake of these critics is in assuming that what gets tested has anything to do with what theorists think is reasonable.

Theorists talk to experimentalists, sure. We motivate them, give them ideas and justification. But ultimately, people do experiments because they can do experiments. I watched a talk about the ALPHA experiment recently, and one thing that struck me was how many different techniques play into it. They make antiprotons using a proton beam from the accelerator, slow them down with magnetic fields, and cool them with lasers. They trap their antihydrogen in an extremely precise vacuum, and confirm it’s there with particle detectors. The whole setup is a blend of cutting-edge accelerator physics and cutting-edge tricks for manipulating atoms. At its heart, ALPHA-g feels like a stress-test of all of those tricks: a way to push the state of the art in a dozen experimental techniques in order to accomplish something remarkable.

And so even if the mainstream theorists don’t care, ALPHA will keep going. It will keep getting funding, it will keep getting visited by celebrities and inspiring pop fiction. Because enough people recognize that doing something difficult can be its own reward.

In my experience, this motivation applies to theorists too. Plenty of us will dismiss this or that proposal as unlikely or impossible. But give us a concrete calculation, something that lets us use one of our flashy theoretical techniques, and the tune changes. If we’re getting the chance to develop our tools, and get a paper out of it in the process, then sure, we’ll check your wacky claim. Why not?

I suspect critics of the mainstream would have a lot more success with this kind of pitch-based approach. If you can find a theorist who already has the right method, who’s developing and extending it and looking for interesting applications, then make your pitch: tell them how they can answer your question just by doing what they do best. They’ll think of it as a chance to disprove you, and you should let them, that’s the right attitude to take as a scientist anyway. It’ll work a lot better than accusing them of hogging the grant money.

Is Outreach for Everyone?

Betteridge’s law applies here: the answer is “no”. It’s a subtle “no”, though.

As a scientist, you will always need to be able to communicate your work. Most of the time you can get away with papers and talks aimed at your peers. But the longer you mean to stick around, the more often you will have to justify yourself to others: to departments, to universities, and to grant agencies. A scientist cannot survive on scientific ability alone: to get jobs, to get funding, to survive, you need to be able to promote yourself, at least a little.

Self-promotion isn’t outreach, though. Talking to the public, or to journalists, is a different skill from talking to other academics or writing grants. And it’s entirely possible to go through an entire scientific career without exercising that skill.

That’s a reassuring message for some. I’ve met people for whom science is a refuge from the mess of human interaction, people horrified by the thought of fame or even being mentioned in a newspaper. When I meet these people, they sometimes seem to worry that I’m silently judging them, thinking that they’re ignoring their responsibilities by avoiding outreach. They think this in part because the field seems to be going in that direction. Grants that used to focus just on science have added outreach as a requirement, demanding that each application come with a plan for some outreach project.

I can’t guarantee that more grants won’t add outreach requirements. But I can say at least that I’m on your side here: I don’t think you should have to do outreach if you don’t want to. I don’t think you have to, just yet. And I think if grant agencies are sensible, they’ll find a way to encourage outreach without making it mandatory.

I think that overall, collectively, we have a responsibility to do outreach. Beyond the old arguments about justifying ourselves to taxpayers, we also just ought to be open about what we do. In a world where people are actively curious about us, we ought to encourage and nurture that curiosity. I don’t think this is unique to science, I think it’s something every industry, every hobby, and every community should foster. But in each case, I think that communication should be done by people who want to do it, not forced on every member.

I also think that, potentially, anyone can do outreach. Outreach can take different forms for different people, anything from speaking to high school students to talking to journalists to writing answers for Stack Exchange. I don’t think anyone should feel afraid of outreach because they think they won’t be good enough. Chances are, you know something other people don’t: I guarantee if you want to, you will have something worth saying.

“Inreach”

This is, first and foremost, an outreach blog. I try to make my writing as accessible as possible, so that anyone from high school students to my grandparents can learn something. My goal is to get the general public to know a bit more about physics, and about the people who do it, both to better understand the world and to view us in a better light.

However, as I am occasionally reminded, my readers aren’t exactly the general public. I’ve done polls, and over 60% of you either have a PhD in physics, or are on your way to one. The rest include people with what one might call an unusually strong interest in physics: engineers with a fondness for the (2,0) theory, or retired lawyers who like to debate dark matter.

With that in mind, am I really doing outreach? Or am I doing some sort of “inreach” instead?

First, it’s important to remember that just because someone is a physicist doesn’t mean they’re an expert in everything. This is especially relevant when I talk about my own sub-field, but it matters for other topics too: experts in one part of physics can still find something to learn, and it’s still worth getting on their good side. Still, if that was my main audience, I’d probably want to strike a different tone, more like the colloquium talks we give for our fellow physicists.

Second, I like to think that outreach “trickles down”. I write for a general audience, and get read by “physics fans”, but they will go on to talk about physics to anyone who will listen: to parents who want to understand what they do, to people they’re trying to impress at parties, to friends they share articles with. If I write good metaphors and clear analogies, they will get passed on to those friends and parents, and the “inreach” will become outreach. I know that’s why I read other physicists’ outreach blogs: I’m looking for new tricks to make ideas clearer.

Third, active readers are not all readers. The people who answer a poll are more likely to be regulars, people who come back to the blog again and again, and those people are pretty obviously interested in physics. (Interested doesn’t mean expert, of course…but in practice, far more non-experts read blogs on, say, military history, than on physics.) But I suspect most of my readers aren’t regulars. My most popular post, “The Way You Think Everything Is Connected Isn’t the Way Everything Is Connected”, gets a trickle of new views every day. WordPress lets me see some of the search terms people use to find it, and there are people who literally google “is everything connected?” These aren’t physics PhDs looking for content, these are members of the general public who hear something strange and confusing and want to check it out. Being that check, the source someone googles to clear things up, that’s an honor. Knowing I’m serving that role, I know I’m not doing “just inreach”: I’m reaching out too.

What Tells Your Story

I watched Hamilton on Disney+ recently. With GIFs and songs from the show all over social media for the last few years, there weren’t many surprises. One thing that nonetheless struck me was the focus on historical evidence. The musical Hamilton is based on Ron Chernow’s biography of Alexander Hamilton, and it preserves a surprising amount of the historian’s care for how we know what we know, hidden within the show’s other themes. From the refrain of “who tells your story”, to the importance of Eliza burning her letters with Hamilton (not just the emotional gesture but the “gap in the narrative” it created for historians), to the song “The Room Where It Happens” (which looked from GIFsets like it was about Burr’s desire for power, but is mostly about how much of history is hidden in conversations we can only partly reconstruct), the show keeps the puzzle of reasoning from incomplete evidence front-and-center.

Any time we try to reason about the past, we are faced with these kinds of questions. They apply not just to history, but to the so-called historical sciences as well, sciences that study the past. Instead of asking “who” told the story, such scientists must keep in mind “what” is telling the story. For example, paleontologists reason from fossils, and are thus limited by what does and doesn’t get preserved. As a result, after a century of studying dinosaurs, it only became clear in the last twenty years that they had feathers.

Astronomy, too, is a historical science. Whenever astronomers look out at distant stars, they are looking at the past. And just like historians and paleontologists, they are limited by what evidence happened to be preserved, and what part of that evidence they can access.

These limitations lead to mysteries, and often to controversies. Before LIGO, astronomers had an idea of what the typical mass of a black hole was. After LIGO, a new slate of black holes has been observed, with much higher masses. It’s still unclear why.

Try to reason about the whole universe, and you end up asking similar questions. When we see the movement of “standard candle” stars, is that because the universe’s expansion is accelerating, or are the stars moving as a group?

Push far enough back and the evidence doesn’t just lead to controversy, but to hard limits on what we can know. No matter how good our telescopes are, we won’t see light older than the cosmic microwave background: before that background was emitted the universe was filled with plasma, which would have absorbed any earlier light, erasing anything we could learn from it. Gravitational waves may one day let us probe earlier, and make discoveries as surprising as feathered dinosaurs. But there is yet a stronger limit to how far back we can go, beyond which any evidence has been so diluted that it is indistinguishable from random noise. We can never quite see into “the room where it happened”.

It’s gratifying to see questions of historical evidence in a Broadway musical, in the same way it was gratifying to hear fractals mentioned in a Disney movie. It’s important to think about who, and what, is telling the stories we learn. Spreading that lesson helps all of us reason better.

Science and Its Customers

In most jobs, you know who you’re working for.

A chef cooks food, and people eat it. A tailor makes clothes, and people wear them. An artist has an audience, an engineer has end users, a teacher has students. Someone out there benefits directly from what you do. Make them happy, and they’ll let you know. Piss them off, and they’ll stop hiring you.

Science benefits people too…but most of its benefits are long-term. The first person to magnetize a needle couldn’t have imagined worldwide electronic communication, and the scientists who uncovered quantum mechanics couldn’t have foreseen transistors, or personal computers. The world benefits just by having more expertise in it, more people who spend their lives understanding difficult things, and train others to understand difficult things. But those benefits aren’t easy to see for each individual scientist. As a scientist, you typically don’t know who your work will help, or how much. You might not know for years, or even decades, what impact your work will have. Even then, it will be difficult to tease out your contribution from the other scientists of your time.

We can’t ask the customers of the future to pay for the scientists of today. (At least, not straightforwardly.) In practice, scientists are paid by governments and foundations, groups trying on some level to make the future a better place. Instead of feedback from customers we get feedback from each other. If our ideas get other scientists excited, maybe they’ll matter down the road.

This is a risky thing to do, of course. Governments, foundations, and scientists can’t tell the future. They can try to act in the interests of future generations, but they might just act for themselves instead. Trying to plan ahead like this makes us prey to all the cognitive biases that flesh is heir to.

But we don’t really have an alternative. If we want to have a future at all, if we want a happier and more successful world, we need science. And if we want science, we can’t ask its real customers, the future generations, to choose whether to pay for it. We need to work for the smiles on our colleagues’ faces and the checks from government grant agencies. And we need to do it carefully enough that, at the end of the day, we still make a positive difference.

Truth Doesn’t Have to Break the (Word) Budget

Imagine you saw this headline:

Scientists Say They’ve Found the Missing 40 Percent of the Universe’s Matter

It probably sounds like they’re talking about dark matter, right? And if scientists found dark matter, that could be a huge discovery: figuring out what dark matter is made of is one of the biggest outstanding mysteries in physics. Still, maybe that 40% number makes you a bit suspicious…

Now, read this headline instead:

Astronomers Have Finally Found Most of The Universe’s Missing Visible Matter

Visible matter! Ah, what a difference a single word makes!

These are two articles, the first from this year and the second from 2017, talking about the same thing. Leave out dark matter and dark energy, and the rest of the universe is made of ordinary protons, neutrons, and electrons. We sometimes call that “visible matter”, but that doesn’t mean it’s easy to spot. Much of it lingers in threads of gas and dust between galaxies, making it difficult to detect. These two articles are about astronomers who managed to detect this matter in different ways. But while the articles cover the same sort of matter, one headline is a lot more misleading.

Now, I know science writing is hard work. You can’t avoid misleading your readers, if only a little, because you can never include every detail. Introduce too many new words and you’ll use up your “vocabulary budget” and lose your audience. I also know that headlines get tweaked by editors at the last minute to maximize “clicks”, and that news that doesn’t get enough “clicks” dies out, replaced by news that does.

But that second headline? It’s shorter than the first. They were able to fit that crucial word “visible” in, without breaking the budget. And while I don’t have the data, I doubt the first headline was that much more viral. They could have afforded to get this right, if they wanted to.

Read each article further, and you see the same pattern. The 2020 article does mention visible matter in the first sentence at least, so they don’t screw that one up completely. But another important detail never gets mentioned.

See, you might be wondering, if one of these articles is from 2017 and the other is from 2020, how are they talking about the same thing? If astronomers found this matter already in 2017, how did they find it again in 2020?

There’s a key detail that the 2017 article mentions and the 2020 article leaves out. Here’s a quote from the 2017 article, emphasis mine:

We now have our first solid piece of evidence that this matter has been hiding in the delicate threads of cosmic webbing bridging neighbouring galaxies, right where the models predicted.

This “missing” matter was expected to exist, was predicted by models to exist. It just hadn’t been observed yet. In 2017, astronomers detected some of this matter indirectly, through its effect on the Cosmic Microwave Background. In 2020, they found it more directly, through X-rays shot out from the gases themselves.

Once again, the difference is just a short phrase. By saying “right where the models predicted”, the 2017 article clears up an important point, that this matter wasn’t a surprise. And all it took was five words.

These little words and phrases make a big difference. If you’re writing about science, you will always face misunderstandings. But if you’re careful and clever, you can clear up the most obvious ones. With just a few well-chosen words, you can have a much better piece.

Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

I’ll start with an example, neutrino oscillation.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
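To make the mixing described above a bit more concrete, here is a minimal sketch of the standard two-flavor oscillation probability formula. This is a deliberate simplification of the three-flavor story in the post, and the mixing angle and mass-squared difference below are illustrative stand-ins, not the measured Standard Model parameters:

```python
import numpy as np

# Toy two-flavor neutrino oscillation. The numbers are hypothetical
# placeholders, chosen for illustration only.
theta = 0.6        # mixing angle in radians (illustrative)
delta_m2 = 7.5e-5  # mass-squared difference in eV^2 (illustrative)

def survival_probability(L_km, E_GeV):
    """Probability that a neutrino produced as an electron-neutrino is
    still detected as one after traveling L kilometers with energy E GeV.
    Two-flavor formula: P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return 1 - np.sin(2 * theta)**2 * np.sin(1.27 * delta_m2 * L_km / E_GeV)**2

# Before the neutrino travels anywhere, the mix hasn't changed yet:
print(survival_probability(0, 1))  # 1.0
```

The oscillation comes entirely from the mismatch between the “flavor” states that interact and the mass states that travel: set the mixing angle to zero and the survival probability is 1 everywhere.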

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos and let it interact with an electron, or a muon, or a tau, then suddenly it behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.
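The claim that one vector space “can look like either” description can itself be sketched in a few lines. This toy example truncates to two states and uses a hypothetical mixing angle, just to show that physical quantities don’t care which basis you pick:

```python
import numpy as np

# One vector, two descriptions. A unitary (here, rotation) matrix U
# relates the "flavor" basis to the "mass" basis. Physical quantities,
# like total probability, are the same either way.
theta = 0.6  # hypothetical mixing angle
U = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

state_flavor = np.array([1.0, 0.0])  # "electron-neutrino" in the flavor basis
state_mass = U @ state_flavor        # the same state, written in the mass basis

# The norm (total probability) is identical in either description:
print(np.dot(state_flavor, state_flavor))  # 1.0
print(np.dot(state_mass, state_mass))      # 1.0, up to float rounding
```

Neither column of numbers is “the” state; the state is the abstract vector, and each basis is just one way of writing it down.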

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3D animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.

Kicking Students Out of Their Homes During a Pandemic: A Bad Idea

I avoid talking politics on this blog. There are a few issues, though, where I feel not just able, but duty-bound, to speak out. Those are issues affecting graduate students.

This week, US Immigration and Customs Enforcement (ICE) announced that, if a university switched to online courses as a response to COVID-19, international students would have to return to their home countries or transfer to a school that still teaches in-person.

This is already pretty unreasonable for many undergrads. But think about PhD students.

Suppose you’re a foreign PhD student at a US university. Maybe your school is already planning to have classes online this fall, like Harvard is. Maybe your school is planning to have classes in person, but will change its mind a few weeks in, when so many students and professors are infected that it’s clearly unreasonable to continue. Maybe your school never changes its mind, but your state does, and the school has to lock down anyway.

As a PhD student, you likely don’t live in the dorms. More likely you live in a shared house, or an apartment. You’re an independent adult. Your parents aren’t paying for you to go to school. Your school is itself a full-time job, one that pays (as little as the university thinks it can get away with).

What happens when your school goes online? If you need to leave the country?

You’d have to find some way out of your lease, or keep paying for it. You’d have to find a flight on short notice. You’d have to pack up all your belongings, ship or sell anything you can’t store, or find friends to hold on to it.

You’d have to find somewhere to stay in your “home country”. Some could move in with their parents temporarily, many can’t. Some of those who could in other circumstances, shouldn’t if they’re fleeing from an outbreak: their parents are likely older, and vulnerable to the virus. So you have to find a hotel, eventually perhaps a new apartment, far from what was until recently your home.

Reminder: you’re doing all of this on a shoestring budget, because the university pays you peanuts.

Can you transfer instead? In a word, no.

PhD students are specialists. They’re learning very specific things from very specific people. Academics aren’t the sort of omnidisciplinary scientists you see in movies. Bruce Banner or Tony Stark could pick up a new line of research on a whim, real people can’t. This is why, while international students may be good at the undergraduate level, they’re absolutely necessary for PhDs. When only three people in the world study the thing you want to study, you don’t have the luxury of staying in your birth country. And you can’t just transfer schools when yours goes online.

It feels like the people who made this decision didn’t think about any of this. That they don’t think grad students matter, or forgot they exist altogether. It seems frustratingly common for policy that affects grad students to be made by people who know nothing about grad students, and that baffles me. PhDs are a vital part of the academic career, without them universities in their current form wouldn’t even exist. Ignoring them is like if hospital policy ignored residencies.

I hope that this policy gets reversed, or halted, or schools find some way around it. At the moment, anyone starting school in the US this fall is in a very tricky position. And anyone already there is in a worse one.

As usual, I’m going to ask that the comments don’t get too directly political. As a partial measure to tone things down, I’d like to ask you to please avoid mentioning any specific politicians, political parties, or political ideologies. Feel free to talk instead about your own experiences: how this policy is likely to affect you, or your loved ones. Please also feel free to talk more technically on the policy/legal side. I’d like to know what universities can do to work around this, and whether there are plausible paths to change or halt the policy. Please be civil, and be kind to your fellow commenters.