How Expert Is That Expert?

The blog Astral Codex Ten had an interesting post a while back, about when to trust experts. Rather than thinking of some experts as “trustworthy” and some as “untrustworthy”, the post suggests an approach of “bounded distrust”. Even if an expert is biased or a news source sometimes lies, there are certain things you can still expect them to tell the truth about. If you are familiar enough with their work, you can get an idea of which kinds of claims you can trust and which you can’t, in a consistent and reliable way. Knowing how to do this is a skill, one you can learn to get better at.

In my corner of science, I can’t think of anyone who outright lies. Nonetheless, some claims deserve more trust than others. Sometimes experts have solid backing for what they say, direct experience that’s hard to contradict. Other times they’re speaking mostly from general impressions, and bias can easily creep in. Luckily, it’s not so hard to tell the difference. In this post, I’ll try to teach you how.

As an example, I’ll use something I saw at a conference last week. A speaker gave a talk describing the current state of cosmology: the new tools we have to map the early universe, and the challenges in using them to their full potential. After the talk, I remember her answering three questions. In each case, she seemed to know what she was talking about, but for different reasons. If she were contradicted by a different expert, I’d use these reasons to figure out which one to trust.

First, sometimes an expert gives an informed opinion, but just an informed opinion. As scientists, we are expected to know a fairly broad range of the background behind our work, and to be able to say something informed about it. We attend overview talks, hear our colleagues’ takes, and form informed opinions about topics we don’t otherwise work on. This speaker fielded a question about quantum gravity, and her answer made it clear that the topic falls into this category for her. Her answer didn’t go into much detail, mentioning a few terms but no specific scientific results, and linked back in the end to a different question closer to her expertise. That’s generally how we speak on this kind of topic: vaguely enough to show what we know without overstepping.

The second question drew on a different kind of knowledge, which I might call journal club knowledge. Many scientists have what are called “journal clubs”. We meet on a regular basis, read recent papers, and talk about them. The papers go beyond what we work on day-to-day, but not by much, because the goal is to keep an eye open for future research topics. We read papers in nearby areas, watching for elements that could be useful: answers to questions we have, or questions we know how to answer. “Journal club knowledge” covers a fair amount of detail: these aren’t topics we are working on right now, but if we spent more time on them they could be. Here, the speaker answered a question about the Hubble tension, a discrepancy between two different ways of measuring the expansion of the universe. Her answer focused on particular results: someone did X, there was a paper showing Y, this telescope is planning to measure Z. That kind of answer is a good way to tell that someone is speaking from “journal club knowledge”. It’s clearly an area she could get involved in if she wanted to, one where she knows the important questions and which papers to read, with some of her own work close enough to the question to give her a real advantage. But it was also clear that she hadn’t developed a full argument for one “side” or the other, and as such there are others I’d trust a bit more on that aspect of the question.

Finally, experts are the most trustworthy when we speak about our own work. In this speaker’s case, the questions about machine learning were where her expertise clearly shone through. Her answers there were detailed in a different way than her answers about the Hubble tension: not just papers, but personal experience. They were full of phrases like “I tried that, but it doesn’t work…” or “when we do this, we prefer to do it this way”. They also had the most technical terms of any of her answers, terms that clearly drew distinctions relevant to those who work in the field. In general, when an expert talks about what they do in their own work, and uses a lot of appropriate technical terms, you have especially good reason to trust them.

These cues can help a lot when evaluating experts. An expert who makes a generic claim, like “no evidence for X”, might not know as much as an expert who cites specific papers, and in turn they might not know as much as an expert who describes what they do in their own research. The cues aren’t perfect: one risk is that someone may be an expert on their own work, but that work may be irrelevant to the question you’re asking. But they do help: rather than distrusting everyone, you can move towards “bounded distrust”, knowing which claims you can trust and which are riskier.

7 thoughts on “How Expert Is That Expert?”

  1. Nibud Kakoy (@NibudKakoy)

    Hi, regarding the topics experts work on themselves: that is quite frequently where you should distrust them the most. Opinions can be quite biased there, for scientists (and not only scientists) tend to have strong opinions on their own topics. I guess you know ample examples here; I know such examples myself, e.g. when I look in a mirror.

    1. 4gravitons Post author

      People can be very biased about the value of what they work on, or its implications. Concrete questions, though: how they do it, what’s the best way to do it, why you can’t just do something else… those are much harder to twist.

  2. Karl Young

    It seems that a problem with this approach might be that the closer an expert gets to their own work in a particular argument, the harder it is for a non-expert to assess that argument; i.e. you just have to trust that the expert knows what they’re doing. That’s probably OK at a conference where there are other experts in that area whose reaction one can gauge, but it might be less reliable in other situations.

    1. 4gravitons Post author

      Part of the point I’m making is that you can tell whether an expert knows what they’re doing, at least in part, by how they make their argument. Without knowing the meanings of the citations, you can tell if they’re citing people or not. Without knowing the technical terms, you can tell if they’re talking about their own work experience or not. None of these hints are perfect, of course, and even if a subject is really something someone works on every day, they might still be missing something important. But in my experience they help more than you’d expect.

  3. vampyricon

    How can we distinguish between jargon by an expert and obscurantism by a charlatan? Not necessarily talking about the sciences here; I’m mostly thinking of continental philosophy.

    1. 4gravitons Post author

      Imperfectly! The best advice I can give here is that real jargon has a kind of “structure” that pure obscurantism doesn’t. Also, a field can itself be obscurantist without an individual practitioner being a charlatan: I’d trust a Deleuze scholar to know things about Deleuze texts, for example, even if I don’t necessarily trust Deleuze not to be an elaborate parody.

    2. Kirk

      An expert will point you to actual textbooks and sources if you ask. A charlatan will usually tell you their own book is the only one you can trust, or point to heavily cherry-picked papers that, when you search for opinions on them, turn out to be widely dismissed by actual scientists, or, even worse, to YouTube videos and blogs. 😛 Example: antivaxxers and their heavily discredited papers.

