
Requests for an Ethnography of Cheating

What is AI doing to higher education? And what, if anything, should be done about it?

Chad Orzel at Counting Atoms had a post on this recently, tying the question to a broader point. There is a fundamental tension in universities between actual teaching and learning on the one hand, and credentials on the other. A student who just wants the piece of paper at the end has no reason not to cheat if they can get away with it, so the easier it becomes to get away with cheating (say, by using AI), the less meaningful the credential gets. Meanwhile, professors who want students to actually learn something are reduced to trying to “trick” these goal-oriented students into accidentally doing something that makes them fall in love with a subject, all while being required to police the credential side of things.

Social science, as Orzel admits and emphasizes, is hard. Any broad-strokes picture like this breaks down into details, and while Orzel talks through some of those details, he and I are of course not social scientists.

Because of that, I’m not going to propose my own “theory” here. Instead, think of this post as a request.

I want to read an ethnography of cheating. Like other ethnographies, it should involve someone spending time in the culture in question (here, cheating students), talking to the people involved, and getting a feeling for what they believe and value. Ideally, it would be augmented with an attempt at quantitative data, like surveys, that estimate how representative the picture is.

I suspect that cheating students aren’t just trying to get a credential. Part of why is that I remember teaching pre-meds. In the US, students don’t directly study medicine as a Bachelor’s degree. Instead, they study other subjects as pre-medical students (“pre-meds”), and then apply to Medical School, which grants a degree on the same level as a PhD. As part of their application, they submit scores from a standardized test called the MCAT, which checks that they have the basic level of math and science that medical schools expect.

A pre-med in a physics class, then, has good reason to want to learn: the better they know their physics, the better they will do on the MCAT. If cheating were mostly about just trying to get a credential, pre-meds wouldn’t cheat.

I’m pretty sure they do cheat, though. I didn’t catch any cheaters back when I taught, but there were a lot of students who tried to push the rules, pre-meds and not.

Instead, I think there are a few other motivations involved. And in an ethnography of cheating, I’d love to see some attempt to estimate how prevalent they are:

  1. Temptation: Maybe students know that they shouldn’t cheat, in the same way they know they should go to the gym. They want to understand the material and learn in the same way people who exercise have physical goals. But the mind, and flesh, are weak. You have a rough week, you feel like you can’t handle the work right now. So you compensate. Some of the motivation here is still due to credentials: a student who shrugs and accepts that their breakup will result in failing a course is a student who might have to pay for an extra year of ultra-expensive US university education to get that credential. But I suspect there is a more fundamental motivation here, related to ego and easy self-deception. If you do the assignment, even if you cheat for part of it, you get to feel like you did it, while if you just turn in a blank page you have to accept the failure.
  2. Skepticism: Education isn’t worth much if it doesn’t actually work. Students may be skeptical that the things that professors are asking them to do actually help them learn what they want to learn, or that the things the professors want them to learn are actually the course’s most valuable content. A student who uses ChatGPT to write an essay might believe that they will never have to write something without ChatGPT in life, so why not use it now? Sometimes professors simply aren’t explicit about what an exercise is actually meant to teach (there have been a huge number of blog posts explaining that writing is meant to teach you to think, not to write), and sometimes professors are genuinely pretty bad at teaching, since there is little done to retain the good ones in most places. A student in this situation still has to be optimistic about some aspect of the education, at some time. But they may be disillusioned, or just interested in something very different.
  3. Internalized Expectations: Do employers actually care if you get a bad grade? Does it matter? By the time a student is in college, they’ve been spending half their waking hours in a school environment for over a decade. Maybe the need to get good grades is so thoroughly drilled in that the actual incentives don’t matter. If you think of yourself as the kind of person who doesn’t fail courses, and you start failing, what do you do?
  4. External Non-Credential Expectations: Don’t worry about the employers, worry about the parents. Some college students have the kind of parents who keep checking in on how they’re doing, who want to see evidence of progress the same way they did when their kids were younger. Any feedback, no matter how much it’s intended to teach, not to judge, might get twisted into a judgement. Better to avoid that judgement, right?
  5. Credentials, but for the Government, not Employers: Of course, for some students, failing really does wreck their life. If you’re on the kind of student visa that requires you to keep your grades above a certain level, you’ve got a much stronger incentive to cheat, imposed for much less reason.

If you’re aware of a good ethnography of cheating, let me know! And if you’re a social scientist, consider studying this!

To Measure Something or to Test It

Black holes have been in the news a couple times recently.

On one end, there was the observation of an extremely large black hole in the early universe, when no black holes of that kind were expected to exist. My understanding is that this is very much a “big if true” kind of claim, something that could have dramatic implications, but that may just be a misunderstanding. At the moment, I’m not going to try to work out which it is.

In between, you have a piece by me in Quanta Magazine a couple weeks ago, about tests of whether black holes deviate from general relativity. They don’t, by the way, according to the tests so far.

And on the other end, you have the coverage last week of a “confirmation” (or even “proof”) of the black hole area law.

The black hole area law states that the total area of the event horizons of all black holes will always increase. It’s also known as the second law of black hole thermodynamics, paralleling the second law of thermodynamics that entropy always increases. Hawking proved this as a theorem in 1971, assuming that general relativity holds true.

(That leaves out quantum effects, which indeed can make black holes shrink, as Hawking himself famously later argued.)

The black hole area law is supposed to hold even when two black holes collide and merge. While the combination may lose energy (leading to gravitational waves that carry energy to us), it will still, in the end, have a greater area than the sum of the areas of the black holes that combined to make it.
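To put that in symbols (this is the standard textbook statement, in units where G = c = 1, not anything specific to the new paper): the horizon area of a Kerr black hole depends only on its mass M and its dimensionless spin χ, and the area law for a merger is just an inequality between the final area and the two initial ones.

```latex
A(M,\chi) = 8\pi M^2\left(1 + \sqrt{1-\chi^2}\right),
\qquad
A(M_f,\chi_f) \;\geq\; A(M_1,\chi_1) + A(M_2,\chi_2)
```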

Ok, so that’s the area law. What’s this paper that’s supposed to “finally prove” it?

The LIGO, Virgo, and KAGRA collaborations recently published a paper based on gravitational waves from one particularly clear collision of black holes, which they measured back in January. They compared their measurements to predictions from general relativity and checked two things: whether the measurements agreed with predictions based on the Kerr metric (how space-time around a rotating black hole is supposed to behave), and whether they obeyed the area law.

The first check isn’t so different in purpose from the work I wrote about in Quanta Magazine, just using different methods. In both studies, physicists are looking for deviations from the laws of general relativity, triggered by the highly curved environments around black holes. These deviations could show up in one way or another in any black hole collision, so while you would ideally look for them by scanning over many collisions (as the paper I reported on did), you could do a meaningful test even with just one collision. That kind of check may not be very stringent (if general relativity is wrong, it’s likely wrong by only a very small amount), but it’s still an opportunity, diligently sought, to be proven wrong.

The second check is the one that got the headlines. It also got first billing in the paper title, and a decent amount of verbiage in the paper itself. And if you think about it for more than five minutes, it doesn’t make a ton of sense as presented.
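Concretely, the second check amounts to computing three horizon areas from measured masses and spins and comparing them. Here’s a minimal sketch in Python using the Kerr area formula above; the masses and spins are illustrative numbers I made up, not the values the collaborations actually measured.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def kerr_horizon_area(mass_solar, spin):
    """Event-horizon area (in m^2) of a Kerr black hole with the given
    mass (in solar masses) and dimensionless spin chi, 0 <= chi < 1."""
    m = mass_solar * M_SUN * G / c**2          # mass in geometric units (metres)
    return 8 * math.pi * m**2 * (1 + math.sqrt(1 - spin**2))

# Illustrative numbers only, NOT the measured values: two black holes of
# roughly 30 solar masses merging into a spinning remnant that is lighter
# than their sum, because energy was radiated away as gravitational waves.
A1 = kerr_horizon_area(33.0, 0.1)
A2 = kerr_horizon_area(30.0, 0.1)
A_final = kerr_horizon_area(60.0, 0.68)

print(f"initial total area: {A1 + A2:.3e} m^2")
print(f"final area:         {A_final:.3e} m^2")
print("area increased?", A_final >= A1 + A2)
```

(In units where G = c = 1 the constants drop out entirely; I’ve kept SI units just so the numbers are concrete.) With that picture in mind, here’s the problem with billing this check as a test: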

Suppose the black hole area law is wrong, and black holes sometimes lose area when they collide. Even if that happened, you wouldn’t expect it to happen every time. It’s not like anyone is pondering a reverse black hole area law, where black holes only shrink!

Because of that, I think it’s better to say that LIGO measured the black hole area law for this collision, while they tested whether black holes obey the Kerr metric. In one case, they’re just observing what happened in this one situation. In the other, they can try to draw implications for other collisions.

That doesn’t mean their work wasn’t impressive, but it was impressive for reasons that don’t seem to be getting emphasized. It’s impressive because, prior to this paper, they had not managed to measure the areas of colliding black holes well enough to confirm that they obeyed the area law! The previous collisions looked like they obeyed the law, but once you factored in the experimental error, they couldn’t say so with confidence. The current measurement is better, and can. So the new measurement is interesting not because it confirms a fundamental law of the universe or anything like that…it’s interesting because previous measurements were so bad that they couldn’t even confirm this kind of fundamental law!

That, incidentally, feels like a “missing mood” in pop science. Some things are impressive not because of their amazing scale or awesome implications, but because they are unexpectedly, unintuitively, really really hard to do. These measurements shouldn’t be thought of, or billed, as tests of nature’s fundamental laws. Instead they’re interesting because they highlight what we’re capable of, and what we still need to accomplish.

What You’re Actually Scared of in Impostor Syndrome

Academics tend to face a lot of impostor syndrome. Something about a job with no clear criteria for success, where you could always in principle do better and you mostly only see the cleaned-up, idealized version of others’ work, is a recipe for driving people utterly insane with fear.

The way most of us talk about that fear, it can seem like a cognitive bias, like a failure of epistemology. “Competent people think they’re less competent than they are,” the less-discussed half of the Dunning-Kruger effect.

(I’ve talked about it that way before. And, in an impostor-syndrome-inducing turn of events, I got quoted in a news piece in Nature about it.)

There’s something missing in that perspective, though. It doesn’t really get across how impostor syndrome feels. There’s something very raw about it, something that feels much more personal and urgent than an ordinary biased self-assessment.

To get at the core of it, let me ask a question: what happens to impostors?

The simple answer, the part everyone will admit to, is that they stop getting grants, or stop getting jobs. Someone figures out they can’t do what they claim, and stops choosing them to receive limited resources. Pretty much anyone with impostor syndrome will say that they fear this: the moment that they reach too far, and the world decides they aren’t worth the money after all.

In practice, it’s not even clear that that happens. You might have people in your field who are actually thought of as impostors, on some level. People who get snarked about behind their back, people where everyone rolls their eyes when they ask a question at a conference and the question just never ends. People who are thought of as shiny storytellers without substance, who spin a tale for journalists but aren’t accomplishing anything of note. Those people…aren’t facing consequences at all, really! They keep getting the grants, they keep finding the jobs, and the ranks of people leaving for industry are instead mostly filled with those you respect.

Instead, I think what we fear when we feel impostor syndrome isn’t the obvious consequence, or even the real consequence, but something more primal. Primatologists and psychologists talk about our social brain, and the role of ostracism. They talk about baboons who piss off the alpha and get beat up and cast out of the group, how a social animal on their own risks starvation and becomes easy prey for bigger predators.

I think when we wake up in a cold sweat remembering how we had no idea what that talk was about, and were too afraid to ask, it’s a fear on that level that’s echoing around in our heads. That the grinding jags of adrenaline, the run-away-and-hide feeling of never being good enough, the desperate unsteadiness of trying to sound competent when you’re sure that you’re not and will get discovered at any moment…that’s not based on any realistic fears about what would happen if you got caught. That’s your monkey-brain, telling you a story drilled down deep by evolution.

Does that help? I’m not sure. If you manage to tell your inner monkey that it won’t get eaten by a lion if its friends stop liking it, let me know!

The Rocks in the Ground Era of Fundamental Physics

It’s no secret that the early twentieth century was a great time to make progress in fundamental physics. On one level, it was an era when huge swaths of our understanding of the world were being rewritten, with relativity and quantum mechanics just being explored. It was a time when a bright student could guide the emergence of whole new branches of scholarship, and recently discovered physical laws could influence world events on a massive scale.

Put that way, it sounds like it was a time of low-hanging fruit, the early days of a field when great strides can be made before the easy problems are all solved and only the hard ones are left. And that’s part of it, certainly: the fields that sprang from that era have gotten more complex and challenging over time, requiring more specialized knowledge to make any kind of progress. But there is also a physical reason why physicists had such an enormous impact back then.

The early twentieth century was the last time that you could dig up a rock out of the ground, do some chemistry, and end up with a discovery about the fundamental laws of physics.

When scientists like Curie and Becquerel were working with uranium, they didn’t yet understand the nature of atoms. The distinctions between elements were described in qualitative terms, and were only just beginning to be physically understood. That meant that a weird object in nature, a “weird rock”, could do quite a lot of interesting things.

And once you find a rock that does something physically unexpected, you can scale up. From the chemistry experiments of a single scientist’s lab, countries can build industrial processes to multiply the effect. Nuclear power and the bomb were such radical changes because they represented the end effect of understanding the nature of atoms, and atoms are something people could build factories to manipulate.

Scientists went on to push that understanding further. They wanted to know what the smallest pieces of matter were composed of, to learn the laws behind the most fundamental laws they knew. And with relativity and quantum mechanics, they could begin to do so systematically.

US particle physics has a nice bit of branding. They talk about three frontiers: the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier.

Some things we can’t yet test in physics are gated by energy. If we haven’t discovered a particle, it may be because it’s unstable, decaying quickly into lighter particles, so we can’t observe it in everyday life. If these particles interact appreciably with particles of everyday matter like protons and electrons, then we can try to make them in particle colliders. These end up creating pretty much everything up to a certain mass, due to a combination of the tendency in quantum mechanics for everything that can happen to happen, and relativity’s E=mc^2. In the mid-20th century these particle colliders were serious pieces of machinery, but still small enough to industrialize: now, there are so-called medical accelerators in many hospitals based on their designs. But current particle accelerators are a different beast, massive facilities built by international collaborations. This is the Energy Frontier.
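For a sense of the scale, here’s a quick back-of-the-envelope E=mc^2 calculation, with round numbers of my own rather than anything official: the rest energy you have to concentrate into a single collision just to make one particle of a given mass.

```python
# Rough E = m c^2 arithmetic: the minimum energy needed to produce a single
# particle of a given mass. Round, back-of-the-envelope numbers only.

GEV_TO_JOULE = 1.602e-10    # 1 GeV expressed in joules

def rest_energy_joules(mass_gev):
    """Rest energy of a particle whose mass is quoted in GeV/c^2."""
    return mass_gev * GEV_TO_JOULE

higgs_mass_gev = 125.0      # Higgs boson mass, roughly 125 GeV/c^2
print(f"Higgs rest energy: {rest_energy_joules(higgs_mass_gev):.1e} J")
# ~2e-8 joules: trivial in everyday terms, but it has to be delivered to a
# single proton-proton collision, which is why the beams, magnets, and
# detectors that do it are enormous.
```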

Some things in physics are gated by how rare they are. Some particles interact only very faintly with other particles, so to detect them, physicists have to scan a huge chunk of matter, a giant tank of argon or a cubic kilometer of Antarctic ice, looking for deviations from the norm. Over time, these experiments have gotten bigger, looking for more and more subtle effects. A few weird ones still fit on tabletops, but only because they have the tools to measure incredibly small variations. Most are gigantic. This is the Intensity Frontier.
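To see why “gigantic” is the norm here, a toy rate estimate helps. Every number below is an illustrative placeholder rather than any real experiment’s spec; the point is just that the expected count is roughly flux times cross-section times number of target particles times time.

```python
# Toy intensity-frontier estimate: events ≈ flux × cross-section × targets × time.
# All inputs are illustrative placeholders, not the parameters of a real experiment.

NUCLEON_MASS_G = 1.66e-24       # grams per nucleon
SECONDS_PER_YEAR = 3.156e7

def expected_events(flux_per_cm2_s, cross_section_cm2, target_mass_tons, years):
    """Expected number of interactions in a detector of the given target mass."""
    n_targets = target_mass_tons * 1e6 / NUCLEON_MASS_G   # nucleons in the target
    return flux_per_cm2_s * cross_section_cm2 * n_targets * years * SECONDS_PER_YEAR

# A faint flux and a tiny cross-section: even with a thousand tons of target
# material running for a full year, you only expect a couple of events.
print(expected_events(flux_per_cm2_s=0.01,
                      cross_section_cm2=1e-38,
                      target_mass_tons=1000,
                      years=1.0))
```

Shrink the target, and even those couple of events disappear; that’s the whole logic of building bigger.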

Finally, the Cosmic Frontier looks for the unknown behind both kinds of gates, using the wider universe to look at events with extremely high energy or size.

Pushing these frontiers has meant cleaning up our understanding of the fundamental laws of physics everywhere short of them. Whatever is still hiding either requires huge amounts of energy to produce, or shows up only as an extremely rare, subtle effect.

That means that you shouldn’t expect another nuclear bomb out of fundamental physics. Physics experiments are already working on vast scales, to the extent that a secret government project would have to be smaller than publicly known experiments, in physical size, energy use, and budget. And you shouldn’t expect another nuclear power plant, either: we’ve long since passed the kinds of discoveries a clever industrial process could take advantage of at scale.

Instead, new fundamental physics will only be directly useful once we’re the kind of civilization that operates on a much greater scale than we do today. That means larger than the solar system: there wouldn’t be much advantage, at this point, in putting a particle physics experiment on the edge of the Sun. It means the kind of civilization that tosses galaxies around.

It means that right now, you won’t see militaries or companies pushing the frontiers of fundamental physics, unlike the way they might have wanted to at the dawn of the twentieth century. By the time fundamental physics is useful in that way, all of these actors will likely be radically different: companies, governments, and in all likelihood human beings themselves. Instead, supporting fundamental physics right now is an act of philanthropy, maintaining a practice because it maintains good habits of thought and produces powerful ideas, the same reasons organizations support mathematics or poetry. That’s not nothing, and fundamental physics is still often affordable as philanthropy goes. But it’s not changing the world, not the way physicists did in the early twentieth century.