Monthly Archives: July 2022

Shape the Science to the Statistics, Not the Statistics to the Science

In theatre, and more generally in writing, the advice is always to “show, don’t tell”. You could just tell your audience that Long John Silver is a ruthless pirate, but it works a lot better to show him marching a prisoner off the plank. Rather than just informing with words, you want to make things as concrete as possible, with actions.

There is a similar rule in pedagogy. Pedagogy courses teach you to be explicit about your goals, planning a course by writing down Intended Learning Outcomes. (They never seem amused when I ask about the Unintended Learning Outcomes.) At first, you’d want to write down outcomes like “students will understand calculus” or “students will know what a sine is”. These, however, are hard to judge, and thus hard to plan around. Instead, the advice is to write outcomes that correspond to actions you want the students to take, things you want them to be capable of doing: “students can perform integration by parts”, “students can decide correctly whether to use a sine or cosine”. Again and again, the best way to get the students to know something is to get them to do something.

Jay Daigle recently finished a series of blog posts on how scientists use statistics to test hypotheses. I recommend it: it’s a great introduction to the concepts scientists use to reason about data, as well as a discussion of how they often misuse those concepts and what they can do better. I have a bit of a different perspective on one of the “takeaways” of the series, though, and I wanted to highlight that here.

The center of Daigle’s point is a tool, widely used in science, called Neyman-Pearson Hypothesis Testing. Neyman-Pearson is a tool for making decisions: you compute a number scientists call a p-value and compare it to a threshold for significance, conventionally 0.05. If you follow the procedure, only acting when you find a p-value below 0.05, then you control your rate of false positives: of the cases where the action really doesn’t work, you will wrongly conclude it works only 5% of the time.
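
To make that concrete, here’s a minimal simulation sketch (my own illustration, not from Daigle’s posts): it repeatedly “tests” a drug that truly does nothing, using a two-sample t-test and the 0.05 threshold, and counts how often the procedure says to act. The t-test, sample sizes, and number of trials are all assumptions chosen just for this example.

```python
# Minimal sketch: how often a 0.05 threshold tells you to act
# when the treatment truly has no effect. The t-test, sample sizes,
# and trial count are illustrative assumptions, not anything from
# Daigle's posts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 10_000      # number of hypothetical studies of useless drugs
n_per_group = 50       # patients per arm in each study
alpha = 0.05           # significance threshold

false_positives = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(0.0, 1.0, n_per_group)  # no true effect
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < alpha:  # the decision rule: act only below the threshold
        false_positives += 1

print(f"Acted on {false_positives / n_trials:.1%} of useless drugs")
```

The reported rate should hover near 5%, which is exactly the guarantee the procedure makes: nothing more, nothing less.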

A core problem, from Daigle’s perspective, is that scientists use Neyman-Pearson for the wrong purpose. Neyman-Pearson is a tool for making decisions, not a test that tells you whether or not a specific claim is true. It tells you “on average, if I only approve drugs with a p-value below 0.05, then of the drugs that don’t actually work, I’ll wrongly approve just 5%”. That’s great if you can estimate how bad it is to deny a drug that should be approved, how bad it is to approve a drug that should be denied, and calculate out on average how often you can afford to be wrong. It doesn’t tell you anything about the specific drug, though. It doesn’t tell you “every drug with a p-value below 0.05 works”. It certainly doesn’t tell you “a drug with a p-value of 0.051 almost works” or “a drug with a p-value of 0.001 definitely works”. It just doesn’t give you that information.
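
To see what that cost-accounting might look like, here’s a hypothetical back-of-the-envelope comparison (every number below is invented for illustration, not taken from Daigle): if you can put a cost on approving a useless drug and on rejecting a working one, and you can estimate how often each mistake happens at a given threshold, you can compare thresholds by their average cost per candidate drug.

```python
# Hypothetical expected-cost comparison of two decision thresholds.
# All numbers (costs, base rate, detection power) are invented purely
# for illustration.
cost_false_positive = 10.0  # harm of approving a drug that doesn't work
cost_false_negative = 3.0   # harm of rejecting a drug that does work
p_drug_works = 0.2          # fraction of candidate drugs that truly work

# Assumed chance of detecting a working drug at each threshold.
power_at_threshold = {0.05: 0.80, 0.01: 0.65}

for alpha, power in power_at_threshold.items():
    expected_cost = (
        (1 - p_drug_works) * alpha * cost_false_positive      # approve a dud
        + p_drug_works * (1 - power) * cost_false_negative    # reject a success
    )
    print(f"threshold {alpha}: average cost per candidate {expected_cost:.2f}")
```

Which threshold wins depends entirely on the costs and base rates you plug in, which is the sense in which Neyman-Pearson is a decision-making tool rather than a truth-detector.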

In later posts, Daigle suggests better tools, which he argues map better to what scientists want to do, as well as general ways scientists can do better. Section 4 in particular focuses on the idea that one thing scientists need to do is ask better questions. He uses a specific example from cognitive psychology, a study that tests whether describing someone’s face makes you worse at recognizing it later. That’s a clear scientific question, one that can be tested statistically. That doesn’t mean it’s a good question, though. Daigle points out that questions like this have a problem: it isn’t clear what the result actually tells us.

Here’s another example of the same problem. In grad school, I knew a lot of social psychologists. One was researching a phenomenon called extended contact. Extended contact is meant to be a foil to another phenomenon called direct contact, both having to do with our views of other groups. In direct contact, making a friend from another group makes you view that whole group better. In extended contact, making a friend who has a friend from another group makes you view the other group better.

The social psychologist was looking into a concrete-sounding question: which of these phenomena, direct or extended contact, is stronger?

At first, that seems like it has the same problem as Daigle’s example. Suppose one of these effects is larger: what does that mean? Why do we care?

Well, one answer is that these aren’t just phenomena: they’re interventions. If you know one phenomenon is stronger than another, you can use that to persuade people to be more accepting of other groups. The psychologist’s advisor even had a procedure to make people feel like they made a new friend. Armed with that, it’s definitely useful to know whether extended contact or direct contact is better: whichever one is stronger is the one you want to use!

You do need some “theory” behind this, of course. You need to believe that, if a phenomenon is stronger in your psychology lab, it will be stronger wherever you try to apply it in the real world. It probably won’t be stronger every single time, so you need some notion of how much stronger it needs to be. That in turn means you need to estimate costs: what it costs if you pick the weaker one instead, how much money you’re wasting or harm you’re doing.

You’ll notice this is sounding a lot like the requirements I described earlier, for Neyman-Pearson. That’s no accident: as you try to make your science more and more clearly defined, it will get closer and closer to a procedure for making a decision, and that’s exactly what Neyman-Pearson is good for.

So in the end I’m quite a bit more supportive of Neyman-Pearson than Daigle is. That doesn’t mean it isn’t being used wrong: most scientists are using it wrong. Instead of calculating a p-value each time they make a decision, they do it once at the end of a paper, misinterpreting it as evidence that one thing or another is “true”. But I think that what these scientists need to do is not change their statistics, but change their science. If they focused their science on making concrete decisions, they would actually be justified in using Neyman-Pearson…and their science would get a lot better in the process.

Covering the Angles

One way to think of science is as a collection of interesting little problems. Some scientists are driven by questions like “how does this weird cell work?” or “how accurately can I predict the chance these particles collide?” If the puzzles are fun enough and the questions are interesting enough, then that can be enough motivation on its own.

Another perspective sees science as the pursuit of a few big problems. Physicists want to write down the laws of nature, to know where the universe came from, to reconcile gravity and quantum mechanics. Biologists want to understand how life works and manipulate it, and psychologists want the same for the human mind. For some scientists, these big questions are at the heart of why they do science. Someone in my field once joked he can’t get up in the morning without telling himself “spacetime is doomed”.

Even if you care about the big questions, though, you can’t neglect the small ones. That’s because modern science is collaborative. A big change, like a new particle or a whole new theory of physics, requires confirmation. It’s not enough for one person to propose it. The ideas that last in science last because they crop up in many different places, with many different methods. They last because we check all the angles, compulsively, looking for any direction that might be screwed up.

In those checks, any and all science can be useful. We need the big conceptual leaps from people like Einstein and the careful and systematic measurements of Brahe. We need people who look for the wackiest ideas, not just because they might be true, but to rule them out when they’re false, to make us all the more confident we’re on the right path. We need people pushing tried-and-true theories to the next leap of precision, to show that nothing is hiding in the gaps and make it clearer when something is. We need many people pushing many different paths: all are necessary, and any one might be crucial.

Often, one of these paths gets the lion’s share of the glory: the press, the Nobel, the mention in the history books. But the other paths still matter: we wouldn’t be confident in the science if they didn’t exist. Most working scientists will be on those other paths, as a matter of course. But we still need them to get science done.

The Folks With the Best Pictures

Sometimes I envy astronomers. Particle physicists can write books full of words and pages of colorful graphs and charts, and the public won’t retain any of it. Astronomers can mesmerize the world with a single picture.

NASA just released the first images from its James Webb Space Telescope. They’re impressive, and not merely visually: in twelve hours, they probe deeper than the Hubble Space Telescope managed in weeks on the same patch of sky, while also gathering data that can show what kinds of molecules are present in the galaxies.

(If you’re curious how the James Webb images compare to Hubble ones, here’s a nice site comparing them.)

Images like this enter the popular imagination. The Hubble telescope’s deep field has appeared on essentially every artistic product one could imagine. As of writing this, searching for “Hubble” on Etsy gives almost 5,000 results. “JWST”, the acronym for the James Webb Space Telescope, already gives over 1,000, including several on the front page featuring the just-released images. Despite the Large Hadron Collider having operated for over a decade, searching “LHC” also gives only around 1,000 results…and a few on the front page are actually pictures of the JWST!

As particle physicists, it would be great to have that kind of impact…but I don’t think we should stress too much about it. Ultimately, astronomers will always have this core advantage. Space is amazing, visually stunning and mind-bogglingly vast. It has always had a special place in human cultures, and I’m happy for astronomers to inherit that place.

The Conference Dilemma: Freshness vs. Breadth

Back in 2017, I noticed something that should have struck me as a little odd. My sub-field has a big yearly conference, called Amplitudes, that brings in everyone who works on our kind of research. Amplitudes 2017 was fun, but not “fresh”: most people talked about work they had already published. A smaller conference I went to that year, called QCD Meets Gravity, was much “fresher”: a lot of discussion of work in progress and work “hot off the presses”.

At the time, I chalked the difference up to timing: it was a few months later, and people happened to have projects that matured around then. But I realized recently there’s another reason, one that explains why you would expect bigger conferences to have less fresh content.

See, I’ve recently been on the other “side of the curtain”: I was an organizer for Amplitudes last year. And I noticed one big obstacle to having fresh content: the timeframe.

The bigger a conference is, the longer in advance you need to invite speakers. It’s a bigger task to organize everyone, to make sure travel and hotels and raw availability works, that everyone has time to prepare their talks and you have a nice full (but not too full) schedule. So when we started asking people, we didn’t know what the “freshest” work was going to be. We had recommendations from our scientific committee (a group of experts in the subfield whose job is to suggest speakers), but in practice the goal is more one of breadth than freshness: we needed to make sure that everybody in our community was represented.

A smaller conference can get around this. It can be organized a bit later, so the organizers have more information about new developments. It covers a smaller area, so the organizers have more information about new hot topics and unpublished results. And it typically invites most of the sub-community anyway, so you’re guaranteed to cover the hot new stuff just by raw completeness.

This doesn’t mean small conferences are “just better” or anything like that. Breadth is genuinely useful: a big conference covering a whole subfield is great for bringing a community together, getting everyone on a shared page and expanding their horizons. There’s a real tradeoff between those goals and getting a conference with the latest progress. It’s not a fixed tradeoff: we can improve on both goals at once (I think at Amplitudes we as organizers could have done better at highlighting unpublished work), but we still have to choose what to emphasize.

Einstein-Years

Scott Aaronson recently published an interesting exchange on his blog Shtetl Optimized, between him and cognitive psychologist Steven Pinker. The conversation was about AI: Aaronson is optimistic (though not insanely so), Pinker pessimistic (again, not insanely so). While fun reading, the whole thing would normally be a bit too off-topic for this blog, except that Aaronson’s argument ended up invoking something I do know a bit about: how we make progress in theoretical physics.

Aaronson was trying to respond to an argument of Pinker’s, that super-intelligence is too vague and broad to be something we could expect an AI to have. Aaronson asks us to imagine an AI that is nothing more or less than a simulation of Einstein’s brain. Such a thing isn’t possible today, and might not even be efficient, but it has the advantage of being something concrete we can all imagine. Aaronson then suggests imagining that AI sped up a thousandfold, so that in one year it covers a thousand years of Einstein’s thought. Such an AI couldn’t solve every problem, of course. But in theoretical physics, surely such an AI could be safely described as super-intelligent: an amazing power that would change the shape of physics as we know it.

I’m not as sure of this as Aaronson is. We don’t have a machine that generates a thousand Einstein-years to test, but we do have one piece of evidence: the 76 Einstein-years the man actually lived.

Einstein is rightly famous as a genius in theoretical physics. His annus mirabilis resulted in five papers that revolutionized the field, and the next decade saw his theory of general relativity transform our understanding of space and time. Later, he explored what general relativity was capable of and framed challenges that deepened our understanding of quantum mechanics.

After that, though…not so much. For Einstein-decades, he tried to work towards a new unified theory of physics, and as far as I’m aware made no useful progress at all. I’ve never seen someone cite work from that period of Einstein’s life.

Aaronson mentions simulating Einstein “at his peak”, and it would be tempting to assume that the unified theory came “after his peak”, when age had weakened his mind. But while that kind of thing can sometimes be an issue for older scientists, I think it’s overstated. I don’t think careers peak early because of “youthful brains”, and with the exception of genuine dementia I don’t think older physicists are that much worse-off cognitively than younger ones. The reason so many prominent older physicists go down unproductive rabbit-holes isn’t because they’re old. It’s because genius isn’t universal.

Einstein made the progress he did because he was the right person to make that progress. He had the right background, the right temperament, and the right interests to pick up others’ mathematics and take it seriously as physics. As he aged, he built on what he found, and that background in turn enabled him to do more great things. But eventually, the path he walked down simply wasn’t useful anymore. His story ended with him driven to a theory that simply wasn’t going to work, because, given his experience up to that point, that was the work that interested him most.

I think genius in physics is in general like that. It can feel very broad because a good genius picks up new tricks along the way, and grows their capabilities. But throughout, you can see the links: the tools mastered at one age that turn out to be just right for a new pattern. For the greatest geniuses in my field, you can see the “signatures” in their work, hints at why they were just the right genius for one problem or another. Give one a thousand years, and I suspect the well would eventually run dry: the state of knowledge would no longer be suitable for even their breadth.

…of course, none of that really matters for Aaronson’s point.

A century of Einstein-years wouldn’t have found the Standard Model or String Theory, but a century of physicist-years absolutely did. If instead of a simulation of Einstein, your AI was a simulation of a population of scientists, generating new geniuses as the years go by, then the argument works again. Sure, such an AI would be much more expensive, much more difficult to build, but the first one might have been as well. The point of the argument is simply to show such a thing is possible.

The core of Aaronson’s point rests on two key traits of technology. Technology is replicable: once we know how to build something, we can build more of it. Technology is scalable: if we know how to build something, we can try to build a bigger one with more resources. Evolution can tap into both of these, but not reliably: just because it’s possible to build a mind a thousand times better at some task doesn’t mean evolution will.

That is why the possibility of AI leads to the possibility of super-intelligence. If we can make a computer that can do something, we can make it do that something faster. That something doesn’t have to be “general”: you can have programs that excel at one task or another. For each such task, with more resources you can scale things up: so anything a machine can do now, a later machine can probably do better. Your starting point doesn’t necessarily even have to be efficient, or a good algorithm: bad algorithms will take longer to scale, but could eventually get there too.

The only question at that point is “how fast?” I don’t have the impression that’s settled. The achievements that got Pinker and Aaronson talking, GPT-3 and DALL-E and so forth, impressed people by their speed, by how soon they got to capabilities we didn’t expect them to have. That doesn’t mean that something we might really call super-intelligence is close: that has to do with the details, with what your target is and how fast you can actually scale. And it certainly doesn’t mean that another approach might not be faster! (As a total outsider, I can’t help but wonder if current ML is in some sense trying to fit a cubic with straight lines.)

It does mean, though, that super-intelligence isn’t inconceivable, or incoherent. It’s just the recognition that technology is a master of brute force, and brute force eventually triumphs. If you want to think about what happens in that “eventually”, that’s a very important thing to keep in mind.