Tag Archives: theoretical physics

The Metaphysics of Card Games

I tend to be skeptical of attempts to apply metaphysics to physics. In particular, I get leery when someone tries to describe physics in terms of which fundamental things exist, and which things are made up of other things.

Now, I’m not the sort of physicist who thinks metaphysics is useless in general. I’ve seen some impressive uses of supervenience, for example.

But I think that, in physics, talk of “things” is almost always premature. As physicists, we describe the world mathematically. It’s the most precise way of describing the universe that we have access to. The trouble is, slightly different mathematics can imply the existence of vastly different “things”.

To give a slightly unusual example, let’s talk about card games.


To defeat metaphysics, we must best it at a children’s card game!

Magic: The Gathering is a collectible card game in which players play powerful spellcasters who fight by casting spells and summoning creatures. Those spells and creatures are represented by cards.

If you wanted to find which “things” exist in Magic: The Gathering, you’d probably start with the cards. And indeed, cards are pretty good candidates for fundamental “things”. As a player, you have a hand of cards, a discard pile (“graveyard”) and a deck (“library”), and all of these are indeed filled with cards.

However, not every “thing” in the game is a card. That’s because the game is in some sense limited: it needs to represent a broad set of concepts while still using physical, purchasable cards.

Suppose you have a card that represents a general. Every turn, the general recruits a soldier. You could represent the soldiers with actual cards, but they’d have to come from somewhere, and over many turns you might quickly run out.

Instead, Magic represents these soldiers with “tokens”. A token is not a card: you can’t shuffle a token into your deck or return it to your hand; if you try, it just ceases to exist. But otherwise, the tokens behave just like other creatures: they’re both the same type of “thing”, something Magic calls a “permanent”. Permanents live in an area between players called the “battlefield”.

And it gets even more complicated! Some creatures have special abilities. When those abilities are activated, they’re treated like spells in many ways: you can cast spells in response, and even counter them with the right cards. However, they’re not spells, because they’re not cards: like tokens, you can’t shuffle them into your deck. Instead, both they and spells that have just been cast live in another area, the “stack”.

So while Magic might look like it just has one type of “thing”, cards, in fact it has three: cards, permanents, and objects on the stack.

We can contrast this with another card game, Hearthstone.


Hearthstone is much like Magic. You are a spellcaster, you cast spells, you summon creatures, and those spells and creatures are represented by cards.

The difference is, Hearthstone is purely electronic. You can’t go out and buy the cards in a store; they’re simulated in the online game. And this means that Hearthstone’s metaphysics can be a whole lot simpler.

In Hearthstone, if you have a general who recruits a soldier every turn, the soldiers can be cards just like the general. You can return them to your hand, or shuffle them into your deck, just like a normal card. Your computer can keep track of them, and make sure they go away properly at the end of the game.

This means that Hearthstone doesn’t need a concept of “permanents”: everything on its “battlefield” is just a card, which can have some strange consequences. If you return a creature to your hand, and you have room, it will just go there. But if your hand is full, and the creature has nowhere to go, it will “die”, in exactly the same way it would have died in the game if another creature killed it. From the game’s perspective, the creature was always a card, and the card “died”, so the creature died.

These small differences in implementation, in the “mathematics” of the game, change the metaphysics completely. Magic has three types of “things”, Hearthstone has only one.
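The two object models can be sketched in code (a toy model of my own, not how either game is actually implemented):

```python
# Toy sketch of the two metaphysics; names and rules are simplified.

# Magic: cards, permanents, and stack objects are distinct kinds of "thing".
class Card:
    """Lives in your hand, library, or graveyard."""
    def __init__(self, name):
        self.name = name

class Permanent:
    """Lives on the battlefield; a token is a permanent with no card behind it."""
    def __init__(self, name, card=None):
        self.name = name
        self.card = card  # None for a token

    def return_to_hand(self, hand):
        if self.card is None:
            return  # a token just ceases to exist
        hand.append(self.card)

# (Spells and activated abilities on the stack would be a third class.)

# Hearthstone: everything is simply a card.
class HSCard:
    def __init__(self, name):
        self.name = name

def hs_return_to_hand(card, hand, max_hand=10):
    """If the hand is full, the card has nowhere to go and 'dies'."""
    if len(hand) < max_hand:
        hand.append(card)
```

The different behavior of tokens falls out of Magic needing a separate `Permanent` type, while Hearthstone’s one-type model is exactly what makes a full hand “kill” a returning creature.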

And card games are a special case, because in some sense they’re built to make metaphysics easy. Cards are intuitive, everyday objects, and both Magic and Hearthstone are built off of our intuitions about them, which is why I can talk about “things” in either game.

Physics doesn’t have to be built that way. Physics is meant to capture our observations, and help us make predictions. It doesn’t have to sort itself neatly into “things”. Even if it does, I hope I’ve convinced you that small changes in physics could lead to large changes in which “things” exist. Unless you’re convinced that you understand the physics of something completely, you might want to skip the metaphysics. A minor mathematical detail could sweep it all away.

arXiv, Our Printing Press


Johannes Gutenberg, inventor of the printing press, and possibly the only photogenic thing on the Mainz campus

I’ve had a few occasions to dig into older papers recently, and I’ve noticed a trend: old papers are hard to read!

Ok, that might not be surprising. The older a paper is, the greater the chance it will use obsolete notation, or assume a context that has long passed by. Older papers have different assumptions about what matters, or what rigor requires, and their readers cared about different things. All this is to be expected: a slow, gradual approach to a modern style and understanding.

I’ve been noticing, though, that this slow, gradual approach doesn’t always hold. Specifically, it seems to speed up quite dramatically at one point: the introduction of arXiv, the website where we store all our papers.

Part of this could just be a coincidence. As it happens, the founding papers in my subfield, those that started Amplitudes with a capital “A”, were right around the time that arXiv first got going. It could be that all I’m noticing is the difference between Amplitudes and “pre-Amplitudes”, with the Amplitudes subfield sharing notation more than they did before they had a shared identity.

But I suspect that something else is going on. With arXiv, we don’t just share papers (that was done, piecemeal, before arXiv). We also share LaTeX.

LaTeX is a document formatting language, like a programming language for papers. It’s used pretty much universally in physics and math, and increasingly in other fields. As it turns out, when we post a paper to arXiv, we don’t just send a pdf: we include the raw LaTeX code as well.

Before arXiv, if you wanted to include an equation from another paper, you’d format it yourself. You’d probably do it a little differently from the other paper, in accord with your own conventions, and just to make it easier on yourself. Over time, more and more differences would crop up, making older papers harder and harder to read.

With arXiv, you can still do all that. But you can also just copy.

Since arXiv makes the LaTeX code behind a paper public, it’s easy to lift the occasional equation. Even if you’re not lifting it directly, you can see how the authors coded it. Even if you don’t plan on copying, the default gets flipped around: instead of trying to match the previous paper’s equation and accidentally getting it wrong, every difference is intentional.
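For instance (a made-up example, with a made-up macro): two papers can code the same equation very differently, and copying the source makes the convention travel with the equation:

```latex
% Paper A, writing spinor angle brackets out by hand:
A = \frac{\langle 1\,2 \rangle^4}{\langle 1\,2 \rangle \langle 2\,3 \rangle
    \langle 3\,4 \rangle \langle 4\,1 \rangle}

% Paper B, with a custom macro for the same bracket:
\newcommand{\spa}[2]{\langle #1\,#2 \rangle}
A = \frac{\spa{1}{2}^4}{\spa{1}{2}\spa{2}{3}\spa{3}{4}\spa{4}{1}}
```

Copy Paper B’s equation without its `\newcommand` and your document won’t even compile, so the convention comes along whether you intended it or not.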

This reminds me, in a small-scale way, of the effect of the printing press on anatomy books.

Before the printing press, books on anatomy tended to be full of descriptions, but not illustrations. Illustrations weren’t reliable: there was no guarantee the monk who copied them would do so correctly, so nobody bothered. This made it hard to tell when an anatomist (fine, it was always Galen) was wrong: he could just be using an odd description. It was only after the printing press that books could actually have illustrations that were reliable across copies of a book. Suddenly, it was possible to point out that a fellow anatomist had left something out: it would be missing from the illustration!

In a similar way, arXiv seems to have led to increasingly standard notation. We still aren’t totally consistent…but we do seem a lot more consistent than older papers, and I think arXiv is the reason why.

Thought Experiments, Minus the Thought

My second-favorite Newton fact is that, despite inventing calculus, he refused to use it for his most famous work of physics, the Principia. Instead, he used geometrical proofs, tweaked to smuggle in calculus without admitting it.

Essentially, these proofs were thought experiments. Newton would start with a standard geometry argument, one that would have been acceptable to mathematicians centuries earlier. Then, he’d imagine taking it further, pushing a line or angle to some infinite point. He’d argue that, if the proof worked for every finite choice, then it should work in the infinite limit as well.

These thought experiments let Newton argue on the basis of something that looked more rigorous than calculus. However, they also held science back. At the time, only a few people in the world could understand what Newton was doing. It was only later, when Newton’s laws were reformulated in calculus terms, that a wider group of researchers could start doing serious physics.

What changed? If Newton could describe his physics with geometrical thought experiments, why couldn’t everyone else?

The trouble with thought experiments is that they require careful setup, setup that has to be thought through for each new thought experiment. Calculus took Newton’s geometrical thought experiments, and took out the need for thought: the setup was automatically a part of calculus, and each new researcher could build on their predecessors without having to set everything up again.
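Newton’s limit arguments, minus the thought, look like this (my own toy example, not one of Newton’s):

```python
# Slope of a secant line through x and x + h, for f(x) = x**2.
def f(x):
    return x**2

def secant_slope(x, h):
    return (f(x + h) - f(x)) / h

# For every finite h this is ordinary geometry; as h shrinks, the
# slopes approach the derivative 2x "in the infinite limit".
slopes = [secant_slope(3.0, h) for h in (1.0, 0.1, 0.001)]
# each slope is closer to 2 * 3 = 6 than the last
```

Calculus packages the limiting step once and for all: each new user writes down a derivative instead of re-arguing the geometry of shrinking secants.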

This sort of thing happens a lot in science. An example from my field is the scattering matrix, or S-matrix.

The S-matrix, deep down, is a thought experiment. Take some particles, and put them infinitely far away from each other, off in the infinite past. Then, let them approach, close enough to collide. If they do, new particles can form, and these new particles will travel out again, infinitely far away in the infinite future. The S-matrix, then, is a metaphorical matrix that tells you, for each possible set of incoming particles, what the probability is to get each possible set of outgoing particles.

In a real collider, the particles don’t come from infinitely far away, and they don’t travel infinitely far before they’re stopped. But the distances are long enough, compared to the sizes relevant for particle physics, that the S-matrix is the right idea for the job.
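In a toy version (my own sketch, with made-up numbers), the S-matrix is just a unitary matrix: the entry in row f, column i is the amplitude for incoming state i to become outgoing state f, and its absolute square is the probability.

```python
import math

# Toy S-matrix for two possible outgoing states, parametrized as a
# rotation so that it is automatically unitary (a sketch, not physics).
theta = 0.3
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def prob(i, j):
    """Probability that incoming state j scatters into outgoing state i."""
    return abs(S[i][j]) ** 2

# Unitarity guarantees the outgoing probabilities sum to one:
total = prob(0, 0) + prob(1, 0)
```

Unitarity is the code-level statement that "something must come out": the probabilities over all possible outgoing states always sum to one.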

Like calculus, the S-matrix is a thought experiment minus the thought. When we want to calculate the probability of particles scattering, we don’t need to set up the whole thought experiment all over again. Instead, we can start by calculating, and over time we’ve gotten very good at it.

In general, sub-fields in physics can be divided into those that have found their S-matrices, their thought experiments minus thought, and those that have not. When a topic has to rely on thought experiments, progress is much slower: people argue over the details of each setup, and it’s difficult to build something that can last. It’s only when a field turns the corner, removing the thought from its thought experiments, that people can start making real collaborative progress.

Amplitudes 2016

I’m at Amplitudes this week, in Stockholm.


The land of twilight at 11pm

Last year, I wrote a post giving a tour of the field. If I had to write it again this year most of the categories would be the same, but the achievements listed would advance in loops and legs, more complicated theories and more insight.

The ambitwistor string now goes to two loops, while my collaborators and I have pushed the polylogarithm program to five loops (dedicated post on that soon!). A decent number of techniques can now be applied to QCD, including a differential-equation-based method that was used to find a four-loop, three-particle amplitude. Others tied together different approaches, found novel structures in string theory, or linked amplitudes techniques to physics from other disciplines. The talks have been going up on YouTube pretty quickly, thanks to diligent work by Nordita’s tech guy, so if you’re at all interested, check it out!

Most of String Theory Is Not String Pheno

Last week, Sabine Hossenfelder wrote a post entitled “Why not string theory?” In it, she argued that string theory has a much more dominant position in physics than it ought to: that it’s crowding out alternative theories like Loop Quantum Gravity and hogging much more funding than it actually merits.

If you follow the string wars at all, you’ve heard these sorts of arguments before. There’s not really anything new here.

That said, there were a few sentences in Hossenfelder’s post that got my attention, and inspired me to write this post.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has shown to be useful to push ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

(Bolding mine)

Here, Hossenfelder explicitly leaves out string theorists who work on “lesser understood aspects of quantum field theories” from her critique. They’re not the big, dominant program she’s worried about.

What Hossenfelder doesn’t seem to realize is that right now, it is precisely the “aspects of quantum field theories” crowd that is big and dominant. The communities of string theorists working on something else, and especially those making bold pronouncements about the nature of the real world, are much, much smaller.

Let’s define some terms:

Phenomenology (or pheno for short) is the part of theoretical physics that attempts to make predictions that can be tested in experiments. String pheno, then, covers attempts to use string theory to make predictions. In practice, though, it’s broader than that: while some people do attempt to predict the results of experiments, more work on figuring out how models constructed by other phenomenologists can make sense in string theory. This still attempts to test string theory in some sense: if a phenomenologist’s model turns out to be true but it can’t be replicated in string theory then string theory would be falsified. That said, it’s more indirect. In parallel to string phenomenology, there is also the related field of string cosmology, which has a similar relationship with cosmology.

If other string theorists aren’t trying to make predictions, what exactly are they doing? Well, a large number of them are studying quantum field theories. Quantum field theories are currently our most powerful theories of nature, but there are many aspects of them that we don’t yet understand. For a large proportion of string theorists, string theory is useful because it provides a new way to understand these theories in terms of different configurations of string theory, which often uncovers novel and unexpected properties. This is still physics, not mathematics: the goal, in the end, is to understand theories that govern the real world. But it doesn’t involve the same sort of direct statements about the world as string phenomenology or string cosmology: crucially, it doesn’t depend on whether string theory is true.

Last week, I said that before replying to Hossenfelder’s post I’d have to gather some numbers. I was hoping to find some statistics on how many people work on each of these fields, or on their funding. Unfortunately, nobody seems to collect statistics broken down by sub-field like this.

As a proxy, though, we can look at conferences. Strings is the premier conference in string theory. If something has high status in the string community, it will probably get a talk at Strings. So to investigate, I took a look at the talks given last year, at Strings 2015, and broke them down by sub-field.

[Pie chart: Strings 2015 talks, broken down by sub-field]

Here I’ve left out the historical overview talks, since they don’t say much about current research.

“QFT” is for talks about lesser understood aspects of quantum field theories. Amplitudes, my own sub-field, should be part of this: I’ve separated it out to show what a typical sub-field of the QFT block might look like.

“Formal Strings” refers to research into the fundamentals of how to do calculations in string theory: in principle, both the QFT folks and the string pheno folks find it useful.

“Holography” is a sub-topic of string theory in which string theory in some space is equivalent to a quantum field theory on the boundary of that space. Some people study this because they want to learn about quantum field theory from string theory, others because they want to learn about quantum gravity from quantum field theory. Since the field can’t be cleanly divided into quantum gravity and quantum field theory research, I’ve given it its own category.

While all string theory research is in principle about quantum gravity, the “Quantum Gravity” section refers to people focused on the sorts of topics that interest non-string quantum gravity theorists, like black hole entropy.

Finally, we have String Cosmology and String Phenomenology, which I’ve already defined.

Don’t take the exact numbers here too seriously: not every talk fit cleanly into a category, so there were some judgement calls on my part. Nonetheless, this should give you a decent idea of the makeup of the string theory community.

The biggest wedge in the diagram by far, taking up a majority of the talks, is QFT. Throw in Amplitudes (part of QFT) and Formal Strings (useful to both), and you’ve got two thirds of the conference. Even if you believe Hossenfelder’s tale of the failures of string theory, then, that only matters to a third of this diagram. And once you take into account that many of the Holography and Quantum Gravity people are interested in aspects of QFT as well, you’re looking at an even smaller group. Really, Hossenfelder’s criticism is aimed at two small slices of the chart: String Pheno and String Cosmo.

Of course, string phenomenologists also have their own conference. It’s called String Pheno, and last year it had 130 participants. In contrast, Loops 2015, the conference for string theory’s most famous “rival”, had…190 participants. The fields are really pretty comparable.

Now, I have a lot more sympathy for the string phenomenologists and string cosmologists than I do for loop quantum gravity. If other string theorists felt the same way, then maybe that would cause the sort of sociological effect that Hossenfelder is worried about.

But in practice, I don’t think this happens. I’ve met string theorists who didn’t even know that people still did string phenomenology. The two communities are almost entirely disjoint: string phenomenologists and string cosmologists interact much more with other phenomenologists and cosmologists than they do with other string theorists.

You want to talk about sociology? Sociologically, people choose careers and fund research because they expect something to happen soon. People don’t want to be left high and dry by a dearth of experiments, don’t feel comfortable working on something that may only be vindicated long after they’re dead. Most people choose the safe option, the one that, even if it’s still aimed at a distant goal, is also producing interesting results now (aspects of quantum field theories, for example).

The people that don’t? Tend to form small, tight-knit, passionate communities. They carve out a few havens of like-minded people, and they think big thoughts while the world around them seems to only care about their careers.

If you’re a loop quantum gravity theorist, or a quantum gravity phenomenologist like Hossenfelder, and you see some of your struggles in that paragraph, please realize that string phenomenology is like that too.

I feel like Hossenfelder imagines a world in which string theory is struck from its high place, and alternative theories of quantum gravity are of comparable size and power. But from where I’m sitting, it doesn’t look like it would work out that way. Instead, you’d have alternatives grow to the same size as similarly risky parts of string theory, like string phenomenology. And surprise, surprise: they’re already that size.

In certain corners of the internet, people like to argue about “punching up” and “punching down”. Hossenfelder seems to think she’s “punching up”, giving the big dominant group a taste of its own medicine. But by leaving out string theorists who study QFTs, she’s really “punching down”, or at least sideways, and calling out a sub-group that doesn’t have much more power than her own.

Quick Post

I’m traveling this week, so I don’t have time for a long post. I am rather annoyed with Sabine Hossenfelder’s recent post about string theory, but I don’t have time to write much about it now.

(Broadly speaking, she dismisses string theory’s success in investigating quantum field theories as irrelevant to string theory’s dominance, but as far as I’ve seen the only part of string theory that has any “institutional dominance” at all is the “investigating quantum field theories” part, while string theorists who spend their time making statements about the real world are roughly as “marginalized” as non-string quantum gravity theorists. But I ought to gather some numbers before I really commit to arguing this.)

Particles Aren’t Vibrations (at Least, Not the Ones You Think)

You’ve probably heard this story before, likely from Brian Greene.

In string theory, the fundamental particles of nature are actually short lengths of string. These strings can vibrate, and like a string on a violin, that vibration is arranged into harmonics. The more energy in the string, the more complex the vibration. In string theory, each of these vibrations corresponds to a different particle, explaining how the zoo of particles we observe can come out of a single type of fundamental string.


Particles. Probably.

It’s a nice story. It’s even partly true. But it gives a completely wrong idea of where the particles we’re used to come from.

Making a string vibrate takes energy, and that energy is determined by the tension of the string. It’s a lot harder to wiggle a thick rubber band than a thin one, if you’re holding both tightly.

String theory’s strings are under a lot of tension, so it takes a lot of energy to make them vibrate. From our perspective, that energy looks like mass, so the more complicated harmonics on a string correspond to extremely massive particles, close to the Planck mass!
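The heavy harmonics follow the standard textbook scaling (my summary, quoted from memory rather than from the original post): the tension sets the spacing of the squared masses of the tower of vibrations,

```latex
T = \frac{1}{2\pi\alpha'}, \qquad
M_n^2 \sim \frac{n}{\alpha'}, \quad n = 0, 1, 2, \ldots
```

where $\alpha'$ is the Regge slope. For string tensions near the Planck scale, each step up the tower adds roughly a Planck mass.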

Those aren’t the particles you’re used to. They’re not electrons, they’re not dark matter. They’re particles we haven’t observed, and may never observe. They’re not how string theory explains the fundamental particles of nature.

So how does string theory go from one fundamental type of string to all of the particles in the universe, if not through these vibrations? As it turns out, there are several different ways it can happen, tricks that allow the lightest and simplest vibrations to give us all the particles we’ve observed.* I’ll describe a few.

The first and most important trick here is supersymmetry. Supersymmetry relates different types of particles to each other. In string theory, it means that along with vibrations that go higher and higher, there are also low-energy vibrations that behave like different sorts of particles. In a sense, string theory sticks a quantum field theory inside another quantum field theory, in a way that would make Xzibit proud.

Even with supersymmetry, string theory doesn’t give rise to all of the right sorts of particles. You need something else, like compactifications or branes.

The strings of string theory live in ten dimensions: it’s the only place they’re mathematically consistent. Since our world looks four-dimensional, something has to happen to the other six dimensions. They have to be curled up, in a process called compactification. There are lots and lots (and lots) of ways to do this compactification, and different ways of curling up the extra dimensions give different places for strings to move. These new options make the strings look different in our four-dimensional world: a string curled around a donut hole looks very different from one that moves freely. Each new way the string can move or vibrate can give rise to a new particle.

Another option to introduce diversity in particles is to use branes. Branes (short for membranes) are surfaces that strings can end on. If two strings end on the same brane, those ends can meet up and interact. If they end on different branes though, then they can’t. By cleverly arranging branes, then, you can have different sets of strings that interact with each other in different ways, reproducing the different interactions of the particles we’re familiar with.

In string theory, the particles we’re used to aren’t just higher harmonics, or vibrations with more and more energy. They come from supersymmetry, from compactifications and from branes. The higher harmonics are still important: there are theorems that you can’t fix quantum gravity with a finite number of extra particles, so the infinite tower of vibrations allows string theory to exploit a key loophole. They just don’t happen to be how string theory gets the particles of the Standard Model. The idea that every particle is just a higher vibration is a common misconception, and I hope I’ve given you a better idea of how string theory actually works.

 

*But aren’t these lightest vibrations still close to the Planck mass? Nope! See the discussion with TE in the comments for details.

Those Wacky 60’s Physicists

The 60’s were a weird time in academia. Psychologists were busy experimenting with LSD, seeing if they could convince people to electrocute each other, and otherwise doing the sorts of shenanigans that ended up saddling them with Institutional Review Boards so that nowadays they can’t even hand out surveys without a ten page form attesting that it won’t have adverse effects on pregnant women.

We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.

The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. Without Feynman diagrams’ reliance on space and time, people like  Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.

If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.

Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.

Paradoxically then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.

Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.

The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.


And as LeVar Burton would say, you don’t have to take my word for it.

A Collider’s Eye View

When it detected the Higgs, what did the LHC see, exactly?


What do you see with your detector-eyes, CMS?

The first problem is that the Higgs, like most particles produced in particle colliders, is unstable. In a very short amount of time the Higgs transforms into two or more lighter particles. Often, these particles will decay in turn, possibly many more times.  So when the LHC sees a Higgs boson, it doesn’t really “see the Higgs”.

The second problem is that you can’t “see” the lighter particles either. They’re much too small for that. Instead, the LHC has to measure their properties.

Does the particle have a charge? Then its path will curve in a magnetic field, and it will send electrical signals in silicon. So the LHC can “see” charge.

Can the particle be stopped, absorbed by some material? Getting absorbed releases energy, lighting up a detector. So the LHC can “see” energy, and what it takes for a particle to be absorbed.
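The curvature measurement can be made quantitative with the standard tracking rule of thumb relating field, track radius, and momentum (a sketch; the 3.8 T value is the CMS solenoid field):

```python
# Rule of thumb for charged-particle tracking:
# p [GeV/c] ~ 0.3 * q [units of e] * B [tesla] * r [meters]
def momentum_gev(charge_e, b_tesla, radius_m):
    """Transverse momentum inferred from a track's radius of curvature."""
    return 0.3 * charge_e * b_tesla * radius_m

# A singly charged particle curving with a 2 m radius in a 3.8 T field:
p = momentum_gev(1, 3.8, 2.0)  # about 2.3 GeV/c
```

Straighter tracks mean higher momentum, which is why very energetic particles are the hardest to measure this way.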


Diagram of a collider’s “eye”

And that’s…pretty much it. When the LHC “sees” the Higgs, what it sees is a set of tracks in a magnetic field, indicating charge, and energy in its detectors, caused by absorption at different points. Everything else has to be inferred: what exactly the particles were, where they decayed, and from what. Some of it can be figured out in real-time, some is only understood later once we can add up everything and do statistics.

On the face of it, this sounds about as impossible as astrophysics. Like astrophysics, it works in part because what the colliders see is not the whole story. The strong force has to be consistent both with our observations of hadrons and with nuclear physics. Neutrinos aren’t just mysterious missing energy that we can’t track; they’re an important part of cosmology. And so on.

So in the sense of that massive, interconnected web of ideas, the LHC sees the Higgs. It sees patterns of charges and energies, binned into histograms and analyzed with statistics and cross-checked, implicitly or explicitly, against all of the rest of physics at every scale we know. All of that, together, is the collider’s eye view of the universe.

Source Your Common Sense

When I wrote that post on crackpots, one of my inspirations was a particularly annoying Twitter conversation. The guy I was talking to had convinced himself that general relativity was a mistake. He was especially pissed off by the fact that, in GR, energy is not always conserved. Screw Einstein, energy conservation is just common sense! Right?

Think a little bit about why you believe in energy conservation. Is it because you run into a lot of energy in your day-to-day life, and it’s always been conserved? Did you grow up around something that was obviously energy? Or maybe someone had to explain it to you?


Maybe you learned about it…from a physics teacher?

A lot of the time, things that seem obvious only got that way because you were taught them. “Energy” isn’t an intuitive concept, however much it’s misused that way. It’s something defined by physicists because it fills a particular role, as a consequence of symmetries in nature. When you learn about energy conservation in school, that’s because it’s one of the simpler ways to explain a much bigger concept, so you shouldn’t be surprised if there are some inaccuracies. If you know where your “common sense” comes from, you can anticipate when and how it might go awry.
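To make “defined because it fills a role” concrete: for a simple system you can write the conserved quantity down explicitly and check that it stays constant along the motion (a sketch with made-up units):

```python
import math

# Harmonic oscillator: time-translation symmetry of the equations of
# motion guarantees a conserved quantity, E = p**2/(2m) + k*x**2/2.
m, k = 1.0, 4.0
omega = math.sqrt(k / m)

def state(t, x0=1.0):
    """Position and momentum along the exact trajectory."""
    x = x0 * math.cos(omega * t)
    p = -m * x0 * omega * math.sin(omega * t)
    return x, p

def energy(x, p):
    return p**2 / (2 * m) + k * x**2 / 2

# The same number at every time: "energy" is whatever the symmetry
# forces to stay constant, not an everyday intuition.
energies = [energy(*state(t)) for t in (0.0, 0.7, 1.9)]
```

Change the system (or, as in general relativity, remove the time-translation symmetry) and there is simply no such quantity to conserve.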

Similarly, if, like one of the commenters on my crackpot post, you’re uncomfortable with countable and uncountable infinities, remember that infinity isn’t “common sense” either. It’s something you learned about in a math class, from a math teacher. And just like energy conservation, it’s a simplification of a more precise concept, with epsilons and deltas and all that jazz.
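For the record, the “epsilons and deltas” behind school-level talk of infinity look like this (the standard real-analysis definition of a limit, not something from the post):

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 :
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

No actual infinity appears anywhere: the informal picture is a simplification of a statement about arbitrarily small finite numbers.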

It’s not possible to teach all the nuances of every topic, so naturally most people will hear a partial story. What’s important is to recognize that you heard a partial story, and not enshrine it as “common sense” when the real story comes knocking.

Don’t physicists use common sense, though? What about “physical intuition”?

Physical intuition has a lot of mystique behind it, and is often described as what separates us from the mathematicians. As such, different people mean different things by it…but under no circumstances should it be confused with pure “common sense”. Physical intuition uses analogy and experience. It involves seeing a system and anticipating the sorts of things you can do with it, like playing a game and assuming there’ll be a save button. In order for these sorts of analogies to work, they generally aren’t built around everyday objects or experiences. Instead, they use physical systems that are “similar” to the one under scrutiny in important ways, while being better understood in others. Crucially, physical intuition involves working in context. It’s not just uncritical acceptance of what one would naively expect.

So when your common sense is tingling, see if you can provide a source. Is that source relevant, experience with a similar situation? Or is it in fact a half-remembered class from high school?