Tag Archives: PublicPerception

Gateway Hobbies

When biologists tell stories of their childhoods, they’re full of trails of ants and fireflies in jars. Lots of writers start young, telling stories on the playground and making skits with their friends. And the mere existence of “chemistry sets” tells you exactly how many chemists get started. Many fields have these “gateway hobbies”, like gateway drugs for careers, ways that children and teenagers get hooked and gain experience.

Physics is a little different, though. While kids can play with magnets and electricity, there aren’t a whole lot of other “physics hobbies”, especially for esoteric corners like particle physics. Instead, the “gateway hobbies” of physics are more varied, drawing from many different fields.

First, of course, even if a child can’t “do physics”, they can always read about it. Kids will memorize the names of quarks, read about black holes, or watch documentaries about string theory. I’m not counting this as a “physics hobby” because it isn’t one, really: physics isn’t a collection of isolated facts, but of equations: frameworks you can use to make predictions. Reading about the Big Bang is a good way to get motivated and excited, and it’s a great thing to do…but it doesn’t prepare you for the “science part” of the science.

A few efforts at physics popularization get a bit more hands-on. Many come in the form of video games. You can get experience with relativity through Velocity Raptor, quantum mechanics through Quantum Chess, or orbital mechanics through Kerbal Space Program. All of these get just another bit closer to “doing physics” rather than merely reading about it.

One can always gain experience in other fields, and that can be surprisingly relevant. Playing around with a chemistry set gives first-hand experience of the kinds of things that motivated quantum mechanics, and some things that still motivate condensed matter research. Circuits are physics, more directly, even if they’re also engineering: and for some physicists, designing electronic sensors is a huge part of what they do.

Astronomy has a special place, both in the history of physics and the pantheon of hobbies. There’s a huge amateur astronomy community, one that both makes real discoveries and reaches out to kids of all ages. Many physicists got their start looking at the heavens, using it like Newton’s contemporaries as a first glimpse into the mechanisms of nature.

More and more research in physics involves at least some programming, and programming is another activity kids have access to in spades, from Logo to robotics competitions. Learning how to program isn’t just an important skill: it’s also a way for young people to experience a world bound by clear laws and logic, another motivation to study physics.

Of course, if you’re interested in rules and logic, why not go all the way? Plenty of physicists grew up doing math competitions. I have fond memories of Oregon’s Pentagames, and the more “serious” activities go all the way up to the famously challenging Putnam Competition.

Finally, there are physics competitions too, at least in the form of the International Physics Olympiad, where high school students compete in physics prowess.

Not every physicist did these sorts of things, of course: some got hooked later. Others did more than one. A friend of mine who’s always been “Mr. Science” got almost the whole package, with a youth spent exploring the wild west of the early internet, working at a planetarium, and discovering just how easy it is to get legal access to dangerous and radioactive chemicals. There are many paths into physics, so even if kids can’t “do physics” the same way they “do chemistry”, there’s still plenty to do!

Keeping It Colloquial

In the corners of academia where I hang out, a colloquium is a special kind of talk. Most talks we give are part of weekly seminars for specific groups. For example, the theoretical particle physicists here have a seminar. Each week we invite a speaker, who gives a talk on their recent work. Since they expect an audience of theoretical particle physicists, they can go into more detail.

A colloquium isn’t like that. Colloquia are talks for the whole department: theorists and experimentalists, particle physicists and biophysicists. They’re more prestigious, for big famous professors (or sometimes, for professors interviewing for jobs…). The different audience, and different context, means that the talk plays by different rules.

Recently, I saw a conference full of “colloquium-style” talks, trying to play by these rules. Some succeeded, some didn’t…and I think I now have a better idea of how those rules work.

First, in a colloquium, you’re not just speaking for yourself. You’re an ambassador for your field. For some of the audience, this might be the first time they’ve heard a talk by someone who does your kind of research. You want to give them a good impression, not just about you, but about the whole topic. So while you definitely want to mention your own work, you want to tell a full story, one that gives more than a glimpse of what others are doing as well.

Second, you want to connect to something the audience already knows. With an audience of physicists, you can assume a certain baseline, but not much more than that. You need to make the beginning accessible and start with something familiar. For the conference I mentioned, a talk that did this well was the talk on exoplanets, which started with the familiar planets of the solar system, classifying them in order to show what you might expect exoplanets to look like. In contrast, ’t Hooft’s talk did this poorly. His work is exploiting a loophole in a quantum-mechanical argument called Bell’s theorem, which most physicists have heard of. Instead of mentioning Bell’s theorem, he referred vaguely to “criteria from philosophers”, and only mentioned that near the end of the talk, instead starting with properties of quantum mechanics his audience was much less familiar with.

Moving on, then, you want to present a mystery. So far, everything in the talk has made sense, and your audience feels like they understand. Now, you show them something that doesn’t fit, something their familiar model can’t accommodate. This activates your audience’s scientist instincts: they’re curious now, they want to know the answer. A good example from the conference was a talk on chemistry in space. The speaker emphasized that we can see evidence of complex molecules in space, but that space dust is so absurdly dilute that it seems impossible such molecules could form: two atoms could go a billion years without meeting each other.

You can’t just leave your audience mystified, though. You next have to solve the mystery. Ideally, your solution will be something smart, but simple: something your audience can intuitively understand. This has two benefits. First, it makes you look smart: you described a mysterious problem, and then showed how to solve it! Second, it makes the audience feel smart: they felt the problem was hard, but now they understand how to solve it too. The audience will have good feelings about you as a result, and good feelings about the topic: in some sense, you’ve tied a piece of their self-esteem to knowing the solution to your problem. This was well-done by the speaker discussing space chemistry, who explained that the solution was chemistry on surfaces: if two atoms are on the surface of a dust grain or meteorite, they’re much more likely to react. It was also well-done by a speaker discussing models of diseases like diabetes: he explained the challenge of controlling processes with cells, when cells replicate exponentially, and showed one way they could be controlled, when the immune system kills off any cells that replicate much faster than their neighbors. (He also played the guitar to immune-system-themed songs…also a good colloquium strategy for those who can pull it off!)

Finally, a picture is worth a thousand words, as long as it’s a clear one. For an audience that won’t follow most of your equations, it’s crucial to show them something visual: graphics, puns, pictures of equipment or graphs. Crucially, though, your graphics should be something the audience can understand. If you put up a graph with a lot of incomprehensible detail (parameters you haven’t explained, or a setup your audience doesn’t get), then your audience gets stuck. Much like an unfamiliar word, a mysterious graph will have members of the audience scratching their heads, trying to figure out what it means. They’ll be so busy trying, they’ll miss what you say next, and you’ll lose them! So yes, put in graphs, put in pictures: but make sure that the ones you use, you have time to explain.

How Expert Is That Expert?

The blog Astral Codex Ten had an interesting post a while back, about when to trust experts. Rather than thinking of some experts as “trustworthy” and some as “untrustworthy”, the post suggests an approach of “bounded distrust”. Even if an expert is biased or a news source sometimes lies, there are certain things you can still expect them to tell the truth about. If you are familiar enough with their work, you can get an idea of which kinds of claims you can trust and which you can’t, in a consistent and reliable way. Knowing how to do this is a skill, one you can learn to get better at.

In my corner of science, I can’t think of anyone who outright lies. Nonetheless, some claims are worth more trust than others. Sometimes experts have solid backing for what they say, direct experience that’s hard to contradict. Other times they’re speaking mostly from general impressions, and bias could easily creep in. Luckily, it’s not so hard to tell the difference. In this post, I’ll try to teach you how.

For an example, I’ll use something I saw at a conference last week. A speaker gave a talk describing the current state of cosmology: the new tools we have to map the early universe, and the challenges in using them to their full potential. After the talk, I remember her answering three questions. In each case, she seemed to know what she was talking about, but for different reasons. If she was contradicted by a different expert, I’d use these reasons to figure out which one to trust.

First, sometimes an expert gives what is an informed opinion, but just an informed opinion. As scientists, we are expected to know a fairly broad range of background behind our work, and be able to say something informed about it. We see overview talks and hear our colleagues’ takes, and get informed opinions about topics we otherwise don’t work on. This speaker fielded a question about quantum gravity, and her answer made it clear that the topic falls into this category for her. Her answer didn’t go into much detail, mentioning a few terms but no specific scientific results, and linked back in the end to a different question closer to her expertise. That’s generally how we speak on this kind of topic: vaguely enough to show what we know without overstepping.

The second question came from a different kind of knowledge, which I might call journal club knowledge. Many scientists have what are called “journal clubs”. We meet on a regular basis, read recent papers, and talk about them. The papers go beyond what we work on day-to-day, but not by that much, because the goal is to keep an eye open for future research topics. We read papers in close-by areas, watching for elements that could be useful, answers to questions we have or questions we know how to answer. The kind of “journal club knowledge” we have covers a fair amount of detail: these aren’t topics we are working on right now, but if we spent more time on it they could be. Here, the speaker answered a question about the Hubble tension, a discrepancy between two different ways of measuring the expansion of the universe. The way she answered focused on particular results: someone did X, there was a paper showing Y, this telescope is planning to measure Z. That kind of answer is a good way to tell that someone is answering from “journal club knowledge”. It’s clearly an area she could get involved in if she wanted to, one where she knows the important questions and which papers to read, with some of her own work close enough to the question to give an important advantage. But it was also clear that she hadn’t developed a full argument on one “side” or the other, and as such there are others I’d trust a bit more on that aspect of the question.

Finally, experts are the most trustworthy when we speak about our own work. In this speaker’s case, the questions about machine learning were where her expertise clearly shone through. Her answers there were detailed in a different way than her answers about the Hubble tension: not just papers, but personal experience. They were full of phrases like “I tried that, but it doesn’t work…” or “when we do this, we prefer to do it this way”. They also had the most technical terms of any of her answers, terms that clearly drew distinctions relevant to those who work in the field. In general, when an expert talks about what they do in their own work, and uses a lot of appropriate technical terms, you have especially good reason to trust them.

These cues can help a lot when evaluating experts. An expert who makes a generic claim, like “no evidence for X”, might not know as much as an expert who cites specific papers, and in turn they might not know as much as an expert who describes what they do in their own research. The cues aren’t perfect: one risk is that someone may be an expert on their own work, but that work may be irrelevant to the question you’re asking. But they help: rather than distrusting everyone, they help you towards “bounded distrust”, knowing which claims you can trust and which are riskier.

Duality and Emergence: When Is Spacetime Not Spacetime?

Spacetime is doomed! At least, so say some physicists. They don’t mean this as a warning, like some comic-book universe-destroying disaster, but rather as a research plan. These physicists believe that what we think of as space and time aren’t the full story, but that they emerge from something more fundamental, so that an ultimate theory of nature might not use space or time at all. Other, grumpier physicists are skeptical. Joined by a few philosophers, they think the “spacetime is doomed” crowd are over-excited and exaggerating the implications of their discoveries. At the heart of the argument is the distinction between two related concepts: duality and emergence.

In physics, sometimes we find that two theories are actually dual: despite seeming different, the patterns of observations they predict are the same. Some of the more popular examples are what we call holographic theories. In these situations, a theory of quantum gravity in some space-time is dual to a theory without gravity describing the edges of that space-time, sort of like how a hologram is a 2D image that looks 3D when you move it. For any question you can ask about the gravitational “bulk” space, there is a matching question on the “boundary”. No matter what you observe, neither description will fail.

If theories with gravity can be described by theories without gravity, does that mean gravity doesn’t really exist? If you’re asking that question, you’re asking whether gravity is emergent. An emergent theory is one that isn’t really fundamental, but instead a result of the interaction of more fundamental parts. For example, hydrodynamics, the theory of fluids like water, emerges from more fundamental theories that describe the motion of atoms and molecules.

(For the experts: I, like most physicists, am talking about “weak emergence” here, not “strong emergence”.)

The “spacetime is doomed” crowd think that not just gravity, but space-time itself is emergent. They expect that distances and times aren’t really fundamental, but a result of relationships that will turn out to be more fundamental, like entanglement between different parts of quantum fields. As evidence, they like to bring up dualities where the dual theories have different concepts of gravity, number of dimensions, or space-time. Using those theories, they argue that space and time might “break down”, and not be really fundamental.

(I’ve made arguments like that in the past too.)

The skeptics, though, bring up an important point. If two theories are really dual, then no observation can distinguish them: they make exactly the same predictions. In that case, say the skeptics, what right do you have to call one theory more fundamental than the other? You can say that gravity emerges from a boundary theory without gravity, but you could just as easily say that the boundary theory emerges from the gravity theory. The whole point of duality is that no theory is “more true” than the other: one might be more or less convenient, but both describe the same world. If you want to really argue for emergence, then your “more fundamental” theory needs to do something extra: to predict something that your emergent theory doesn’t predict.

Sometimes this is a fair objection. There are members of the “spacetime is doomed” crowd who are genuinely reckless about this, who’ll tell a journalist about emergence when they really mean duality. But many of these people are more careful, and have thought more deeply about the question. They tend to have some mix of these two perspectives:

First, if two descriptions give the same results, then do the descriptions matter? As physicists, we have a history of treating theories as the same if they make the same predictions. Space-time itself is a result of this policy: in the theory of relativity, two people might disagree on which one of two events happened first or second, but they will agree on the overall distance in space-time between the two. From this perspective, a duality between a bulk theory and a boundary theory isn’t evidence that the bulk theory emerges from the boundary, but it is evidence that both the bulk and boundary theories should be replaced by an “overall theory”, one that treats bulk and boundary as irrelevant descriptions of the same physical reality. This perspective is similar to an old philosophical theory called positivism: that statements are meaningless if they cannot be derived from something measurable. That theory wasn’t very useful for philosophers, which is probably part of why some philosophers are skeptics of “space-time is doomed”. The perspective has been quite useful to physicists, though, so we’re likely to stick with it.

Second, some will say that it’s true that a dual theory is not an emergent theory…but it can be the first step to discover one. In this perspective, dualities are suggestive evidence that a deeper theory is waiting in the wings. The idea would be that one would first discover a duality, then discover situations that break that duality: examples on one side that don’t correspond to anything sensible on the other. Maybe some patterns of quantum entanglement are dual to a picture of space-time, but some are not. (Closer to my sub-field, maybe there’s an object like the amplituhedron that doesn’t respect locality or unitarity.) If you’re lucky, maybe there are situations, or even experiments, that go from one to the other: where the space-time description works until a certain point, then stops working, and only the dual description survives. Some of the models of emergent space-time people study are genuinely of this type, where a dimension emerges in a theory that previously didn’t have one. (For those of you having a hard time imagining this, read my old post about “bubbles of nothing”, then think of one happening in reverse.)

It’s premature to say space-time is doomed, at least as a definite statement. But it is looking like, one way or another, space-time won’t be the right picture for fundamental physics. Maybe that’s because it’s equivalent to another description, a redundant embellishment on an essential theoretical core. Maybe instead it breaks down, and a more fundamental theory could describe more situations. We don’t know yet. But physicists are trying to figure it out.

The Unpublishable Dirty Tricks of Theoretical Physics

As the saying goes, it is better not to see laws or sausages being made. You’d rather see the clean package on the outside than the mess behind the scenes.

The same is true of science. A good paper tells a nice, clean story: a logical argument from beginning to end, with no extra baggage to slow it down. That story isn’t a lie: for any decent paper in theoretical physics, the conclusions will follow from the premises. Most of the time, though, it isn’t how the physicist actually did it.

The way we actually make discoveries is messy. It involves looking for inspiration in all the wrong places: pieces of old computer code and old problems, trying to reproduce this or that calculation with this or that method. In the end, once we find something interesting enough, we can reconstruct a clearer, cleaner, story, something actually fit to publish. We hide the original mess partly for career reasons (easier to get hired if you tell a clean, heroic story), partly to be understood (a paper that embraced the mess of discovery would be a mess to read), and partly just due to that deep human instinct to not let others see us that way.

The trouble is, some of that “mess” is useful, even essential. And because it’s never published or put into textbooks, the only way to learn it is word of mouth.

A lot of these messy tricks involve numerics. Many theoretical physics papers derive things analytically, writing out equations in symbols. It’s easy to make a mistake in that kind of calculation, either writing something wrong on paper or as a bug in computer code. To correct mistakes, many things are checked numerically: we plug in numbers to make sure everything still works. Sometimes this means using an approximation, trying to make sure two things cancel to some large enough number of decimal places. Sometimes instead it’s exact: we plug in prime numbers, and can much more easily see if two things are equal, or if something is rational or contains a square root. Sometimes numerics aren’t just used to check something, but to find a solution: exploring many options in an easier numerical calculation, finding one that works, and doing it again analytically.
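As a toy illustration of those checks (my own example, not one from an actual paper), here is the prime-number trick and the decimal-cancellation trick applied to the partial-fraction identity 1/(x(x+1)) = 1/x - 1/(x+1):

```python
# Toy sketch of numerically checking an analytic identity.
# We test 1/(x*(x+1)) == 1/x - 1/(x+1) two ways.

from fractions import Fraction

def lhs(x):
    return 1 / (x * (x + 1))

def rhs(x):
    return 1 / x - 1 / (x + 1)

# Exact check: plug in primes as exact rationals. If the two sides
# differed as functions, a mismatch would show up immediately,
# with no floating-point ambiguity.
for p in (2, 3, 5, 7, 11, 101):
    x = Fraction(p)
    assert lhs(x) == rhs(x)

# Approximate check: the difference should cancel to many decimal places.
for x in (0.7, 1.3, 42.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```

The exact version is the more powerful one: rational arithmetic can distinguish a genuine equality from an accidental near-cancellation, and even detect whether a stray square root has crept in.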

“Ansätze” are also common: our fancy word for an educated guess. These we sometimes admit, when they’re at the core of a new scientific idea. But the more minor examples go unmentioned. If a paper shows a nice clean formula and proves it’s correct, but doesn’t explain how the authors got it…probably, they used an ansatz. This trick can go hand-in-hand with numerics as well: make a guess, check it matches the right numbers, then try to see why it’s true.
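To make that workflow concrete, here is a deliberately simple made-up example (mine, not from any paper): guess that the sum of the first n odd numbers is some quadratic in n, fix the quadratic from three small cases, then check it on fresh values before bothering to prove it:

```python
# Toy version of the ansatz trick: guess a functional form,
# fit its free parameters to data, check it elsewhere.

from fractions import Fraction

def s(n):
    """The quantity we want a formula for: 1 + 3 + ... + (2n - 1)."""
    return sum(2 * k - 1 for k in range(1, n + 1))

def quadratic_ansatz(pts, x):
    """Evaluate the unique quadratic through three points (xi, yi) at x,
    exactly, via Lagrange interpolation."""
    total = Fraction(0)
    for xi, yi in pts:
        term = Fraction(yi)
        for xj, _ in pts:
            if xj != xi:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Fix the ansatz's three free parameters from three small cases...
pts = [(n, s(n)) for n in (1, 2, 3)]

# ...then check it on values it wasn't fitted to. Once the guess keeps
# working, it's worth trying to prove (here the answer is n**2).
for n in (4, 10, 57):
    assert quadratic_ansatz(pts, n) == s(n)
```

Real ansätze are of course far more elaborate, but the loop is the same: guess, fit, check on independent data, and only then look for the proof.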

The messy tricks can also involve the code itself. In my field we often use “computer algebra” systems, programs to do our calculations for us. These systems are programming languages in their own right, and we need to write computer code for them. That code gets passed around informally, but almost never standardized. Mathematical concepts that come up again and again can be implemented very differently by different people, some much more efficiently than others.

I don’t think it’s unreasonable that we leave “the mess” out of our papers. They would certainly be hard to understand otherwise! But it’s a shame we don’t publish our dirty tricks somewhere, even in special “dirty tricks” papers. Students often start out assuming everything is done the clean way, and start doubting themselves when they notice it’s much too slow to make progress. Learning the tricks is a big part of learning to be a physicist. We should find a better way to teach them.

Don’t Trust the Experiments, Trust the Science

I was chatting with an astronomer recently, and this quote by Arthur Eddington came up:

“Never trust an experimental result until it has been confirmed by theory.”

Arthur Eddington

At first, this sounds like just typical theorist arrogance, thinking we’re better than all those experimentalists. It’s not that, though, or at least not just that. Instead, it’s commenting on a trend that shows up again and again in science, but rarely makes the history books. Again and again an experiment or observation comes through with something fantastical, something that seems like it breaks the laws of physics or throws our best models into disarray. And after a few months, when everyone has checked, it turns out there was a mistake, and the experiment agrees with existing theories after all.

You might remember a recent example, when a lab claimed to have measured neutrinos moving faster than the speed of light, only for it to turn out to be due to a loose cable. Experiments like this aren’t just a result of modern hype: as Eddington’s quote shows, they were also common in his day. In general, Eddington’s advice is good: when an experiment contradicts theory, theory tends to win in the end.

This may sound unscientific: surely we should care only about what we actually observe? If we defer to theory, aren’t we putting dogma ahead of the evidence of our senses? Isn’t that the opposite of good science?

To understand what’s going on here, we can use an old philosophical argument: David Hume’s argument against miracles. David Hume wanted to understand how we use evidence to reason about the world. He argued that, for miracles in particular, we can never have good evidence. In Hume’s definition, a miracle was something that broke the established laws of science. Hume argued that, if you believe you observed a miracle, there are two possibilities: either the laws of science really were broken, or you made a mistake. The thing is, laws of science don’t just come from a textbook: they come from observations as well, many, many observations in many different conditions over a long period of time. Some of those observations establish the laws in the first place; others come from the communities that successfully apply them again and again over the years. If your miracle was real, then it would throw into doubt many, if not all, of those observations. So the question you have to ask is: is it more likely those observations were wrong? Or that you made a mistake? Put another way, your evidence is only good enough for a miracle if it would be a bigger miracle if you were wrong.

Hume’s argument always struck me as a little bit too strict: if you rule out miracles like this, you also rule out new theories of science! A more modern approach would use numbers and statistics, weighing the past evidence for a theory against the precision of the new result. Most of the time you’d reach the same conclusion, but sometimes an experiment can be good enough to overthrow a theory.
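To see what that “weighing” looks like in numbers, here’s a minimal sketch with made-up figures (mine, not Hume’s or any real analysis): treat the past evidence for the theory as a prior, the new experiment as a likelihood, and apply Bayes’ theorem:

```python
# Hedged sketch of weighing past evidence against a new anomalous result.
# All the numbers below are invented for illustration.

def posterior_theory_wrong(prior_wrong, p_result_if_wrong, p_result_if_right):
    """Bayes' theorem: P(theory wrong | anomalous result)."""
    num = prior_wrong * p_result_if_wrong
    den = num + (1 - prior_wrong) * p_result_if_right
    return num / den

# Suppose decades of observations leave only a 0.1% prior chance the
# theory is wrong; the anomaly would appear 95% of the time if it were
# wrong, but 1% of the time (as an experimental error) if it were right.
p = posterior_theory_wrong(prior_wrong=0.001,
                           p_result_if_wrong=0.95,
                           p_result_if_right=0.01)

# Even after a striking result, the established theory is still heavily
# favored: the posterior stays well under 10%.
print(f"P(theory wrong | anomaly) = {p:.3f}")
```

With these numbers the anomaly barely dents confidence in the theory, which is Eddington’s point; but make the experiment clean enough (push `p_result_if_right` low enough) and the same arithmetic does let the theory be overthrown.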

Still, theory should always sit in the background, a kind of safety net for when your experiments screw up. It does mean that when you don’t have that safety net you need to be extra-careful. Physics is an interesting case of this: while we have “the laws of physics”, we don’t have any established theory that tells us what kinds of particles should exist. That puts physics in an unusual position, and it’s probably part of why we have such strict standards of statistical proof. If you’re going to be operating without the safety net of theory, you need that kind of proof.

This post was also inspired by some biological examples. The examples are politically controversial, so since this is a no-politics blog I won’t discuss them in detail. (I’ll also moderate out any comments that do.) All I’ll say is that I wonder if in that case the right heuristic is this kind of thing: not to “trust scientists” or “trust experts” or even “trust statisticians”, but just to trust the basic, cartoon-level biological theory.

Facts About Math Are Facts About Us

Each year, the Niels Bohr International Academy has a series of public talks. Part of Copenhagen’s Folkeuniversitet (“people’s university”), they attract a mix of older people who want to keep up with modern developments and young students looking for inspiration. I gave a talk a few days ago, as part of this year’s program. The last time I participated, back in 2017, I covered a topic that comes up a lot on this blog: “The Quest for Quantum Gravity”. This year, I was asked to cover something more unusual: “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”.

Some of you might notice that title is already taken: it’s a famous lecture by the physicist Wigner, from 1959. Wigner posed an interesting question: why is advanced mathematics so useful in physics? Time and time again, mathematicians develop an idea purely for its own sake, only for physicists to find it absolutely indispensable to describe some part of the physical world. Should we be surprised that this keeps working? Suspicious?

I talked a bit about this: some of the answers people have suggested over the years, and my own opinion. But like most public talks, the premise was mostly a vehicle for cool examples: physicists through history bringing in new math, and surprising mathematical facts like the ones I talked about a few weeks back at Culture Night. Because of that, I was actually a bit unprepared to dive into the philosophical side of the topic (despite it being in principle a very philosophical topic!) When one of the audience members brought up mathematical Platonism, I floundered a bit, not wanting to say something that was too philosophically naive.

Well, if there’s anywhere I can be naive, it’s my own blog. I even have a label for Amateur Philosophy posts. So let’s do one.

Mathematical Platonism is the idea that mathematical truths “exist”: that they’re somewhere “out there” being discovered. On the other side, one might believe that mathematics is not discovered, but invented. For some reason, a lot of people with the latter opinion seem to think this has something to do with describing nature (for example, an essay a few years back by Lee Smolin defines mathematics as “the study of systems of evoked relationships inspired by observations of nature”).

I’m not a mathematical Platonist. I don’t even like to talk about which things do or don’t “exist”. But I also think that describing mathematics in terms of nature is missing the point. Mathematicians aren’t physicists. While there may have been a time when geometers argued over lines in the sand, these days mathematicians’ inspiration isn’t usually the natural world, at least not in the normal sense.

Instead, I think you can’t describe mathematics without describing mathematicians. A mathematical fact is, deep down, something a mathematician can say without other mathematicians shouting them down. It’s an allowed move in what my hazy secondhand memory of Wittgenstein wants to call a “language game”: something that gets its truth from a context of people interpreting and reacting to it, in the same way a move in chess matters only when everyone is playing by its rules.

This makes mathematics sound very subjective, and we’re used to the opposite: the idea that a mathematical fact is as objective as they come. The important thing to remember is that even with this kind of description, mathematics still ends up vastly less subjective than any other field. We care about subjectivity between different people: if a fact is “true” for Brits and “false” for Germans, then it’s a pretty limited fact. Mathematics is special because the “rules of its game” aren’t rules of one group or another. They’re rules that are in some sense our birthright. Any human who can read and write, or even just act and perceive, can act as a Turing Machine, a universal computer. With enough patience and paper, anything that you can prove to one person you can prove to another: you just have to give them the rules and let them follow them. It doesn’t matter how smart you are, or what you care about most: if something is mathematically true for others, it is mathematically true for you.

Some would argue that this is evidence for mathematical Platonism, that if something is a universal truth it should “exist”. Even if it does, though, I don't think it's useful to think of it that way. Once you believe that mathematical truth is “out there”, you want to characterize it, to say something about it besides that it's “out there”. You'll be tempted to have an opinion on the Axiom of Choice, or the Continuum Hypothesis. And the whole point is that those aren't sensible things to have opinions on: having an opinion about them means denying the mathematical proofs that they are, in the “standard” axioms, undecidable. Whatever is “out there” has to include everything you can prove in every axiom system, whatever non-standard ones you can cook up, because mathematicians will happily work on any of them. The whole point of mathematics, the thing that makes it as close to objective as anything can be, is that openness: the idea that as long as an argument is good enough, as long as it can convince anyone prepared to wade through the pages, then it is mathematics. Nothing, so long as it can convince in the long run, is excluded.

If we take this definition seriously, there are some awkward consequences. You could imagine a future in which every mind, everyone you might be able to do mathematics with, is crushed under some tyrant, forced to agree to something false. A real philosopher would dig into this corner case, try to salvage the definition or throw it out. I'm not a real philosopher, though. So all I can say is that while I don't think that tyrant gets to define mathematics, I also don't see a good alternative to my argument. Our only access to mathematics, and to truth in general, is through the people who pursue it. I don't think we can define one without the other.

Outreach Talk on Math’s Role in Physics

Tonight is “Culture Night” in Copenhagen, the night when the city throws open its doors and lets the public in. Museums and hospitals, government buildings and even the Freemasons, all have public events. The Niels Bohr Institute does too, of course: an evening of physics exhibits and demos, capped off with a public lecture by Denmark's favorite bow-tie-wearing, weirder-than-usual string theorist, Holger Bech Nielsen. In between, there are a number of short talks by various folks at the institute, including yours truly.

In my talk, I'm going to try to motivate the audience to care about math. Math is dry, of course, and difficult for some, but we physicists need it to do our jobs. If you want to be precise about a claim in physics, you need math simply to state what you mean clearly enough.

Since you guys likely don’t overlap with my audience tonight, it should be safe to give a little preview. I’ll be using a few examples, but this one is the most complicated:

I'll be telling a story I stole from chapter seven of the web serial Almost Nowhere. (That link is to the first chapter, by the way, in case you want to read the series without spoilers. It's very strange, very unique, and at least in my view quite worth reading.) You follow a warrior carrying a spear along two different paths around a globe. The warrior tries to always point the spear in the same direction, but finds that the two paths leave it pointing in different directions when they meet. The story illustrates that a concept as simple as “what direction you are pointing” isn't actually so simple: if you want to think about directions in curved space (like the surface of the Earth, but also like curved space-time in general relativity), then you need more sophisticated mathematics (a notion called parallel transport) to make sense of it.
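If you're curious, the spear story is easy to sketch numerically. Here's a minimal Python illustration (my own toy version, not anything from the talk or the serial): approximate parallel transport on a sphere by stepping along a path and repeatedly projecting the carried vector back into the local tangent plane. Carrying a “north-pointing” vector around one octant of the globe brings it back rotated by 90 degrees, the solid angle the loop encloses.

```python
import numpy as np

def transport(v, path):
    """Approximate parallel transport of a tangent vector v along a
    discretized path of unit position vectors on the sphere."""
    for p in path:
        v = v - np.dot(v, p) * p       # drop the component normal to the sphere
        v = v / np.linalg.norm(v)      # restore unit length
    return v

# A closed loop around one octant: along the equator, up a meridian
# to the north pole, then down another meridian back to the start.
n = 2000
t = np.linspace(0, np.pi / 2, n)
equator = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)  # (1,0,0) -> (0,1,0)
up      = np.stack([0 * t, np.cos(t), np.sin(t)], axis=1)  # (0,1,0) -> pole
down    = np.stack([np.sin(t), 0 * t, np.cos(t)], axis=1)  # pole -> (1,0,0)
loop = np.concatenate([equator, up, down])

v0 = np.array([0.0, 0.0, 1.0])   # the "spear", pointing due north at the start
v1 = transport(v0, loop)

angle = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
print(v1, np.degrees(angle))     # the spear comes back rotated by ~90 degrees
```

The repeated-projection trick converges to true parallel transport as the step size shrinks, which is why "keep pointing the same way" can be simulated without any explicit differential geometry.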

It’s kind of an advanced concept for a public talk. But seeing it show up in Almost Nowhere inspired me to try to get it across. I’ll let you know how it goes!

By the way, if you are interested in learning the kinds of mathematics you need for theoretical physics, and you happen to be a Bachelor’s student planning to pursue a PhD, then consider the Perimeter Scholars International Master’s Program! It’s a one-year intensive at the Perimeter Institute in Waterloo, Ontario, in Canada. In a year it gives you a crash course in theoretical physics, giving you tools that will set you ahead of other beginning PhD students. I’ve witnessed it in action, and it’s really remarkable how much the students learn in a year, and what they go on to do with it. Their early registration deadline is on November 15, just a month away, so if you’re interested you may want to start thinking about it.

Breaking Out of “Self-Promotion Voice”

What do TED talks and grant applications have in common?

Put a scientist on a stage, and what happens? Some of us panic and mumble. Others are as smooth as a movie star. Most, though, fall back on a well-practiced mode: “self-promotion voice”.

A scientist doing self-promotion voice is easy to recognize. We focus on ourselves, of course (it's in the name!), talking about all the great things we've done. If we have to mention someone else, we make sure to link it back in some way: “my colleague”, “my mentor”, “which inspired me to”. All vulnerability is “canned” in one way or another: “challenges we overcame”, light touches on the most sympathetic of issues. Usually, we aren't negative towards our colleagues either: apart from the occasional very distant enemy, everyone is working with great scientific virtue. If we talk about our past, we tell the same kinds of stories, mentioning our youthful curiosity and deep, buzzwordy motivations. Any jokes or references are carefully pruned, made accessible to the lowest common denominator. This results in a standard vocabulary: see a metaphor, a quote, or a turn of phrase once, and you're bound to see it in talks again and again and again. Things get even more repetitive when you take into account how often we lean on the voice: a given talk or written piece will be assembled from stock snippets of practiced self-promotion that we pour in like packing peanuts after a minimal edit, filling all available time and word count.

“My passion for teaching manifests…”

Packing peanuts may not be glamorous, but they get the job done. A scientist who can’t do “the voice” is going to find life a lot harder, their negativity or clumsiness turning away support when they need it most. Except for the greatest of geniuses, we all have to learn a bit of self-promotion to stay employed.

We don't have to stop there, though. Self-promotion voice works, but it's boring and stilted, and it all looks basically the same. If we can do something a bit more authentic, we stand out from the crowd.

I've been learning this more and more lately. My blog posts have always run the gamut: some are pure formula, but the ones I'm most proud of have a voice all their own. Over the years, I've been pushing my applications in that direction. Each grant and job application has a bit of the standard self-promotion voice pruned away, and a bit of another voice (my own voice?) sneaking in. This year, as I send out applications, I've been tweaking things. I almost hope the best jobs come late in the year; my applications will be better by then!

Why Can’t I Pay Academics to Do Things for Me?

A couple of weeks back, someone linked to this blog with a problem. A non-academic, he had done some mathematical work but didn't feel it was ready to publish. He reached out to a nearby math department and asked what they would charge to help him clean up the work. If the price was reasonable he'd pay it; if not, at least he'd know what it would cost.

Neither happened. He got no response, and got more and more frustrated.

For many of you, that result isn't a big surprise. My academic readers are probably cringing at the thought of getting an email like that. But the guy's instinct here isn't that far off base. Certainly, in many industries that kind of email would get a response with an actual quote. Academia just happens to be different, in a way that makes the general rule not apply.

There's a community, the Effective Altruists, that evaluates charities. They have a saying: “Money is the Unit of Caring”. The point of the saying isn't that people with more money care more, or anything like that. Rather, it's a reminder that, whatever a charity wants to accomplish, more money makes it easier. A lawyer could work an hour in a soup kitchen, but if they donated the proceeds of an hour's work instead, the soup kitchen could hire four workers. Food banks would rather receive money than food, because the money lets them buy whatever they need in bulk. As the Simpsons meme says, “money can be exchanged for goods and services”.

If you pay a charity, or a business, it helps them achieve what they want to do. If you pay an academic, it gets a bit more complicated.

The problem is that for academics, time matters a lot more than our bank accounts. If we want to settle down with a stable job, we need to spend our time doing things that look good on job applications: writing papers, teaching students, and so on. The rest of the time gets spent resting so we have the energy to do all of that.

(What about tenured professors? They don’t have to fight for their own jobs…but by that point, they’ve gotten to know their students and other young people in their sub-field. They want them to get jobs too!)

Money can certainly help with those goals, but not personal money: grant money. With grant money we can hire students and postdocs to do some of that work for us, or pay our own salary so we’re easier for a university to hire. We can buy equipment for those who need that sort of thing, and get to do more interesting science. Rather than “Money is the Unit of Caring”, for academics, “Grant Money is the Unit of Caring”.

Personal money, in contrast, just matters for our rest time. And unless we have expensive tastes, we usually get paid enough for that.

(The exception is for extremely underpaid academics, like PhD students and adjuncts. For some of them money can make a big difference to their quality of life. I had quite a few friends during my PhD who had side gigs, like tutoring, to live a bit more comfortably.)

This is not to say that it’s impossible to pay academics to do side jobs. People do. But when it works, it’s usually due to one of these reasons:

  1. It's fun. Side work trades against rest time, but if it helps us rest up then it's not really a tradeoff. Even if it's a little more boring than what we'd rather do, if it's not so bad, the money can make up the difference.
  2. It looks good on a CV. This covers most of the things academics are sometimes paid to do, like writing articles for magazines. If we can spin something as useful to our teaching or research, or as good for the greater health of the field (or just for our “personal brand”), then we can justify doing it.
  3. It’s a door out of academia. I’ve seen the occasional academic take time off to work for a company. Usually that’s a matter of seeing what it’s like, and deciding whether it looks like a better life. It’s not really “about the money”, even in those cases.

So what if you need an academic’s help with something? You need to convince them it’s worth their time. Money could do it, but only if they’re living precariously, like some PhD students. Otherwise, you need to show that what you’re asking helps the academic do what they’re trying to do: that it is likely to move the field forward, or that it fulfills some responsibility tied to their personal brand. Without that, you’re not likely to hear back.