Monthly Archives: March 2022

Answering Questions: Virtue or Compulsion?

I was talking to a colleague about this blog. I mentioned worries I’ve had about email conversations with readers: worries about whether I’m communicating well, and whether readers are really understanding me. For my colleague, though, something else stood out:

“You sure are generous with your time.”

Am I?

I’d never really thought about it that way before. It’s not like I drop everything to respond to a comment or a message. I leave myself a reminder and get to it when I have time. To the extent that I have a time budget, I don’t spend it freely: I prioritize work before chatting with my readers, as nice as you folks are.

At the same time, though, I think my colleague was getting at a real difference. It’s true that I don’t answer questions right away. But I do answer them eventually. I can’t imagine being asked a question and just never answering it.

There are exceptions, of course. If you’re obviously just trolling, insulting me, messing with me, or asking the same question over and over, then yes, I’ll skip your question. And if I don’t understand what you’re asking, there’s only so much effort I’m going to put into deciphering it. Even in those cases, though, I feel a certain amount of regret. I have to take a deep breath and tell myself that no, I really can skip this one.

On the one hand, this feels like a moral obligation, a kind of intellectual virtue. If knowledge, truth, and information are good regardless of anything else, then answering questions is just straightforwardly good. People ought to know more, asking questions is how you learn, and that can’t work unless we’re willing to teach. Even if there’s something you need to keep secret, you can at least say something, if only to explain why you can’t answer. Just leaving a question hanging feels like something bad people do.

On the other hand, I think this might just be a compulsion, a weird quirk of my personality. It may even be more bad than good: an urge that makes me “waste my time”, or makes me too preoccupied with what others say, drafting responses in my head until I find release by writing them down. I think others are much more comfortable letting a question lie and moving on. It feels a bit like the urge to have the last word in a conversation, just more specific: if someone asks me to have the last word, I feel like I have to oblige!

I know this has to have its limits. More famous bloggers get so many questions that they can’t possibly respond to all of them. I’ve seen how people like Neil Gaiman describe handling questions on Tumblr: opening a giant pile of unread messages, picking a few near the top, and going back to their day. I can barely stand leaving unread messages in my email. I don’t know how I’d deal with that if I got that famous. But I’d probably figure something out.

Am I too generous with you guys? Should people always answer questions? And does the fact that I ended this post with questions mean I’ll get more comments?

Of Snowmass and SAGEX

arXiv-watchers might have noticed an avalanche of papers with the word “Snowmass” in the title. (I contributed to one of them.)

Snowmass is a place, an area in Colorado known for its skiing. It’s also an event in that place, the Snowmass Community Planning Exercise for the American Physical Society’s Division of Particles and Fields. In plain terms, it’s what happens when particle physicists from across the US get together in a ski resort to plan their future.

Usually someone like me wouldn’t be involved in that. (And not because it’s a ski resort.) In the past, these meetings focused on plans for new colliders and detectors. They drew contributions from experimentalists, and from the few theorists whose work is closely tied to those machines, but not from the more “formal” theorists beyond.

This Snowmass is different. It’s different because of the coronavirus pandemic, which changed it from a big meeting in a resort to a spread-out series of meetings and online activities. It’s also different because they invited theorists to contribute, and not just those interested in particle colliders. The theorists involved study everything from black holes and quantum gravity to supersymmetry and the mathematics of quantum field theory. Groups focused on each topic submit “white papers” summarizing the state of their area. These white papers in turn get organized and summarized by subfield, and those summaries feed into the planning exercise. No one I’ve talked to is entirely clear on how this works: how much the white papers will actually be taken into account, or by whom. But it seems like a good chance to influence US funding agencies, like the Department of Energy, and see if we can get them to prioritize our type of research.

Europe has something similar to Snowmass, called the European Strategy for Particle Physics. It also has smaller-scale groups, with their own purposes, goals, and funding sources. One such group is called SAGEX: Scattering Amplitudes: from Geometry to EXperiment. SAGEX is an Innovative Training Network, an organization funded by the EU to train young researchers, in this case in scattering amplitudes. Its fifteen students are finishing their PhDs and are ready to take the field by storm. Along the way, they spent a little time in industry internships (mostly with the makers of Maple and Mathematica), and quite a bit of time working on outreach.

They have now summed up that outreach work in an online exhibition. I’ve had fun exploring it over the last couple of days. They’ve got a lot of good content there, from basic explanations of relativity and quantum mechanics, to detailed games involving Feynman diagrams and associahedra, to a section that uses solitons as a gentle introduction to integrability. If you’re in the target audience, you should check it out!

How Expert Is That Expert?

The blog Astral Codex Ten had an interesting post a while back about when to trust experts. Rather than sorting experts into “trustworthy” and “untrustworthy”, the post suggests an approach of “bounded distrust”. Even if an expert is biased, or a news source sometimes lies, there are certain things you can still expect them to tell the truth about. If you are familiar enough with their work, you can get a consistent, reliable sense of which kinds of claims you can trust and which you can’t. Knowing how to do this is a skill, one you can learn to get better at.

In my corner of science, I can’t think of anyone who outright lies. Nonetheless, some claims merit more trust than others. Sometimes experts have solid backing for what they say: direct experience that’s hard to contradict. Other times they’re speaking mostly from general impressions, where bias can easily creep in. Luckily, it’s not so hard to tell the difference. In this post, I’ll try to teach you how.

For an example, I’ll use something I saw at a conference last week. A speaker gave a talk describing the current state of cosmology: the new tools we have to map the early universe, and the challenges in using them to their full potential. After the talk, I remember her answering three questions. In each case she seemed to know what she was talking about, but for different reasons. If she were contradicted by a different expert, I’d use those reasons to figure out which one to trust.

First, sometimes an expert gives an informed opinion, but just an informed opinion. As scientists, we are expected to know a fairly broad range of the background behind our work, and to be able to say something informed about it. We see overview talks, hear our colleagues’ takes, and form informed opinions about topics we otherwise don’t work on. This speaker fielded a question about quantum gravity, and her answer made it clear that the topic falls into this category for her. Her answer didn’t go into much detail, mentioning a few terms but no specific scientific results, and linked back in the end to a different question closer to her expertise. That’s generally how we speak on this kind of topic: vaguely enough to show what we know without overstepping.

The second question drew on a different kind of knowledge, which I might call “journal club knowledge”. Many scientists have “journal clubs”: we meet on a regular basis, read recent papers, and talk about them. The papers go beyond what we work on day to day, but not by much, because the goal is to keep an eye out for future research topics. We read papers in nearby areas, watching for elements that could be useful: answers to questions we have, or questions we know how to answer. This “journal club knowledge” covers a fair amount of detail: these aren’t topics we are working on right now, but if we spent more time on them, they could be. Here, the speaker answered a question about the Hubble tension, a discrepancy between two different ways of measuring the expansion of the universe. Her answer focused on particular results: someone did X, there was a paper showing Y, this telescope is planning to measure Z. That kind of answer is a good sign that someone is speaking from journal club knowledge. It’s clearly an area she could get involved in if she wanted to, one where she knows the important questions and which papers to read, with some of her own work close enough to the question to give her an advantage. But it was also clear that she hadn’t developed a full argument for one “side” or the other, so there are others I’d trust a bit more on that aspect of the question.

Finally, experts are most trustworthy when speaking about their own work. In this speaker’s case, the questions about machine learning were where her expertise clearly shone through. Her answers there were detailed in a different way than her answers about the Hubble tension: not just papers, but personal experience. They were full of phrases like “I tried that, but it doesn’t work…” or “when we do this, we prefer to do it this way”. They also had the most technical terms of any of her answers, terms that clearly drew distinctions relevant to those who work in the field. In general, when experts talk about their own work, and use a lot of appropriate technical terms, you have especially good reason to trust them.

These cues can help a lot when evaluating experts. An expert who makes a generic claim, like “there is no evidence for X”, might not know as much as an expert who cites specific papers, who in turn might not know as much as an expert who describes what they do in their own research. The cues aren’t perfect: someone may be an expert on their own work, but that work may be irrelevant to the question you’re asking. Still, rather than leaving you distrusting everyone, they move you toward “bounded distrust”: knowing which claims you can trust and which are riskier.

At the Bohr Centennial

One hundred years ago, Niels Bohr received his Nobel prize. One hundred and one years ago, the Niels Bohr Institute opened its doors (it would have been one hundred and two, but pandemics are inconvenient things).

This year, also partly delayed by a pandemic, the Niels Bohr Institute is celebrating, using the fanciest hall the university has.

We’ve had a three-day conference, packed with Nobel prizewinners, people who don’t feel out of place among Nobel prizewinners, and, for one morning’s ceremony, the crown prince of Denmark. There were last-minute cancellations but also last-minute additions, including a moving speech by two Ukrainian PhD students. I don’t talk politics on this blog, so I won’t say much more about it (and you shouldn’t in the comments either; there are better venues), but I will say that was the only time I’ve seen a standing ovation at a scientific conference.

The other talks ranged from reminiscences (Glashow struggled to get to the stage, but his talk was witty, even quoting Peter Woit, apparently to try to rile David Gross in the front row, next to the Ukrainian PhD students, who must have found it very awkward) to classic colloquium-style talks (really interesting, crisply described puzzles from astrochemistry to biophysics) to a few more “conference-ey” talks (’t Hooft, unfortunately). It’s been fun, but also exhausting, and so that’s all I’m writing this week.