Category Archives: Amateur Philosophy

The Black Box Theory of Everything

What is science? What makes a theory scientific?

There’s a picture we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.

In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words you aren’t doing science, you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).

Seems reasonable, right? Let’s try a thought experiment.

In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.

Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s-era physicists.

Suppose you wrote a computer program that combined the best of QCD in the modern world: BlackHat and more from the amplitudes side, the best jet algorithms, lattice QCD code, and everything else: a program that could reproduce any calculation in QCD that anyone can do today. Further, suppose you don’t care about silly things like making your code readable. Since I began the list above with BlackHat, we’ll call the combined box of different codes BlackBox.

Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.

Behold, your theory

Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as…well, as well as QCD does today.

(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment, we can fudge things a bit.)

Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!

But is this “quark theory” scientific?

“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.

And in practice? I doubt that many scientists would care.

“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.

Why am I thinking about this? For two reasons:

First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.

Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).

If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.

What if we do the experiment first, though?

If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.

I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.

My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.

The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?

That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified, you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.
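To make the absurdity concrete, here’s a minimal sketch of what the Black Box Theory amounts to: a lookup table with one free parameter per experiment. The experiment names are invented for illustration; the point is that the “theory” can never be falsified, but also never predicts anything it hasn’t already been fed.

```python
class BlackBoxTheory:
    """Toy 'theory of everything' with one free parameter per experiment.

    Purely illustrative: experiment names and values are made up.
    """

    def __init__(self):
        self.parameters = {}  # one free parameter per experiment

    def fit(self, experiment, result):
        # "Fix" the parameter for this experiment from data.
        # No result can ever contradict the theory.
        self.parameters[experiment] = result

    def predict(self, experiment):
        # The theory only "predicts" experiments already used as input.
        if experiment not in self.parameters:
            raise ValueError(f"No prediction: must run {experiment!r} first")
        return self.parameters[experiment]


theory = BlackBoxTheory()
theory.fit("Higgs mass (GeV)", 125.1)
print(theory.predict("Higgs mass (GeV)"))  # matches experiment, by construction
```

Asking it about any experiment you haven’t yet done just raises an error: there is no content beyond the data itself.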

What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)

It’s not just that it can’t make predictions. You could make a Black Box All But One Theory instead, one that predicts a single experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, one that predicts everything we can predict now and just leaves out everything we’re still confused by.

The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, it’s a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.


Changing the Question

I’ve recently been reading Why Does the World Exist?, a book by the journalist Jim Holt. In it he interviews a range of physicists and philosophers, asking each the question in the title. As the book goes on, he concludes that physicists can’t possibly give him the answer he’s looking for: even if physicists explain the entire universe from simple physical laws, they still would need to explain why those laws exist. A bit disappointed, he turns back to the philosophers.

Something about Holt’s account rubs me the wrong way. Yes, it’s true that physics can’t answer this kind of philosophical problem, at least not in a logically rigorous way. But I think we do have a chance of answering the question nonetheless…by eclipsing it with a better question.

How would that work? Let’s consider a historical example.

Does the Earth go around the Sun, or does the Sun go around the Earth? We learn in school that this is a solved question: Copernicus was right, the Earth goes around the Sun.

The details are a bit more subtle, though. The Sun and the Earth both attract each other: while it is a good approximation to treat the Sun as fixed, in reality both move in elliptical orbits around their common center of mass, which sits close to, but not exactly at, the center of the Sun. Furthermore, this is all dependent on your choice of reference frame: if you wish you can choose coordinates in which the Earth stays still while the Sun moves.
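To put numbers on that “close to, but not exactly”: a back-of-the-envelope sketch in Python, using rough textbook values and ignoring the other planets, locates the Sun–Earth center of mass.

```python
# Rough two-body estimate of the Sun-Earth barycenter.
# Approximate values; treats the orbit as circular and ignores other planets.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
AU = 1.496e11         # m, mean Sun-Earth distance
SUN_RADIUS = 6.957e8  # m

# Distance from the Sun's center to the two-body center of mass:
offset = AU * M_EARTH / (M_SUN + M_EARTH)

print(f"Barycenter offset from Sun's center: {offset / 1000:.0f} km")  # ~450 km
print(f"Deep inside the Sun? {offset < SUN_RADIUS}")
```

The offset is a few hundred kilometers, a tiny fraction of the Sun’s roughly 700,000 km radius, which is why “the Earth goes around the Sun” is such a good approximation.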

So what stops a modern-day Tycho Brahe from arguing that the Sun and the stars and everything else orbit around the Earth?

The reason we aren’t still debating the Copernican versus the Tychonic system isn’t that we proved Copernicus right. Instead, we replaced the old question with a better one. We don’t actually care which object is the center of the universe. What we care about is whether we can make predictions, and what mathematical laws we need to do so. Newton’s law of universal gravitation lets us calculate the motion of the solar system. It’s easier to teach it by talking about the Earth going around the Sun, so we talk about it that way. The “philosophical” question, about the “center of the universe”, has been explained away by the more interesting practical question.

My suspicion is that other philosophical questions will be solved in this way. Maybe physicists can’t solve the ultimate philosophical question, of why the laws of physics are one way and not another. But if we can predict unexpected laws and match observations of the early universe, then we’re most of the way to making the question irrelevant. Similarly, perhaps neuroscientists will never truly solve the mystery of consciousness, at least the way philosophers frame it today. Nevertheless, if they can describe brains well enough to understand why we act like we’re conscious, if they have something in their explanation that looks sufficiently “consciousness-like”, then it won’t matter whether they meet the philosophical requirements: people simply won’t care. The question will have been eaten by a more interesting question.

This can happen in physics by itself, without reference to philosophy. Indeed, it may happen again soon. In the New Yorker this week, Natalie Wolchover has an article in which she talks to Nima Arkani-Hamed about the search for better principles to describe the universe. In it, Nima talks about looking for a deep mathematical question that the laws of physics answer. Peter Woit has expressed confusion that Nima can both believe this and pursue various complicated, far-fetched, and at times frankly ugly ideas for new physics.

I think the way to reconcile these two perspectives is to understand that Nima takes naturalness seriously. The naturalness argument states that physics as we currently see it is “unnatural”: in particular, that we can’t get it cleanly from the kinds of physical theories we understand. If you accept the argument as stated, then you get driven down a rabbit hole of increasingly strange solutions: versions of supersymmetry that cleverly hide from all experiments, hundreds of copies of the Standard Model, or even a multiverse.

Taking naturalness seriously doesn’t just mean accepting the argument as stated though. It can also mean believing the argument is wrong, but wrong in an interesting way.

One interesting way naturalness could be wrong would be if our reductionist picture of the world, where the ultimate laws live on the smallest scales, breaks down. I’ve heard vague hints from physicists over the years that this might be the case, usually based on the way that gravity seems to mix small and large scales. (Wolchover’s article also hints at this.) In that case, you’d want to find not just a new physical theory, but a new question to ask, something that could eclipse the old question with something more interesting and powerful.

Nima’s search for better questions seems to drive most of his research now. But I don’t think he’s 100% certain that the old questions are wrong, so you can still occasionally see him talking about multiverses and the like.

Ultimately, we can’t predict when a new question will take over. It’s a mix of the social and the empirical, of new predictions and observations but also of which ideas are compelling and beautiful enough to get people to dismiss the old question as irrelevant. It feels like we’re due for another change…but we might not be, and even if we are it might be a long time coming.

Different Fields, Different Worlds

My grandfather is a molecular biologist. When we meet, we swap stories: the state of my field and his, different methods and focuses but often a surprising amount of common ground.

Recently he forwarded me an article by Raymond Goldstein, a biological physicist, arguing that biologists ought to be more comfortable with physical reasoning. The article is interesting in its own right, contrasting how physicists and biologists think about the relationship between models, predictions, and experiments. But what struck me most about the article wasn’t the content, but the context.

Goldstein’s article focuses on a question that seemed to me oddly myopic: should physical models be in the Results section, or the Discussion section?

As someone who has never written a paper with either a Results section or a Discussion section, I wondered why anyone would care. In my field, paper formats are fairly flexible. We usually have an Introduction and a Conclusion, yes, but in between we use however many sections we need to explain what we need to. In contrast, biology papers seem to have a very fixed structure: after the Introduction, there’s a Results section, a Discussion section, and a Materials and Methods section at the end.

At first blush, this seemed incredibly bizarre. Why describe your results before the methods you used to get them? How do you talk about your results without discussing them, but still take a full section to do it? And why do reviewers care how you divide things up in the first place?

It made a bit more sense once I thought about how biology differs from theoretical physics. In theoretical physics, the “methods” are most of the result: unsolved problems are usually unsolved because existing methods don’t solve them, and we need to develop new methods to make progress. Our “methods”, in turn, are often the part of the paper experts are most eager to read. In biology, in contrast, the methods are much more standardized. While papers will occasionally introduce new methods, there are so many unexplored biological phenomena that most of the time researchers don’t need to invent a new method: just asking a question no-one else has asked can be enough for a discovery. In that environment, the “results” matter a lot more: they’re the part that takes the most scrutiny, that needs to stand up on its own.

I can even understand the need for a fixed structure. Biology is a much bigger field than theoretical physics. My field is small enough that we all pretty much know each other. If a paper is hard to read, we’ll probably get a chance to ask the author what they meant. Biology, in contrast, is huge. An important result could come from anywhere, and anyone. Having a standardized format makes it a lot easier to scan through an unfamiliar paper and find what you need, especially when there might be hundreds of relevant papers.

The problem with a standardized system, as always, is the existence of exceptions. A more “physics-like” biology paper is more readable with “physics-like” conventions, even if the rest of the field needs to stay “biology-like”. Because of that, I have a lot of sympathy for Goldstein’s argument, but I can’t help but feel that he should be asking for more. If creating new mathematical models and refining them with observation is at the heart of what Goldstein is doing, then maybe he shouldn’t have to use Results/Discussion/Methods in the first place. Maybe he should be allowed to write biology papers that look more like physics papers.

Adversarial Collaborations for Physics

Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, the people with whom every question at a talk becomes a screaming match, until you just stop going to the same conferences altogether.

Now, imagine writing a paper with those people.

Adversarial collaborations, subject of a recent contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.

This won’t work for the most vicious debates, when one (or both) sides isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.

One advantage of adversarial collaborations is in preventing accusations of bias. The debate between dark matter and MOND-like proposals is filled with these kinds of accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. Adversarial collaboration cuts these accusations off at the source: whatever comes out of one, both sides would make sure the other didn’t bias it.

Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of moving the goalposts. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, I get the impression that many debates about interpretations of quantum mechanics are bogged down by one side claiming they’ve closed off a loophole with a new experiment, only for the other to claim it wasn’t the loophole they were actually using, something that could be avoided if both sides were involved in the experiment from the beginning.

It’s possible, even likely, that no-one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!

One, Two, Infinity

Physicists and mathematicians count one, two, infinity.

We start with the simplest case, as a proof of principle. We take a stripped down toy model or simple calculation and show that our idea works. We count “one”, and we publish.

Next, we let things get a bit more complicated. In the next toy model, or the next calculation, new interactions can arise. We figure out how to deal with those new interactions, our count goes from “one” to “two”, and once again we publish.

By this point, hopefully, we understand the pattern. We know what happens in the simplest case, and we know what happens when the different pieces start to interact. If all goes well, that’s enough: we can extrapolate our knowledge to understand not just case “three”, but any case: any model, any calculation. We publish the general case, the general method. We’ve counted one, two, infinity.


Once we’ve counted “infinity”, we don’t have to do any more cases. And so “infinity” becomes the new “zero”, and the next type of calculation you don’t know how to do becomes “one”. It’s like going from addition to multiplication, from multiplication to exponentiation, from exponentials up into the wilds of up-arrow notation. Each time, once you understand the general rules you can jump ahead to an entirely new world with new capabilities…and repeat the same process again, on a new scale. You don’t need to count one, two, three, four, on and on and on.
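Up-arrow notation itself makes a nice example of the pattern: each new arrow just iterates the operation one level down, jumping to a new scale. Here’s a small Python sketch of Knuth’s notation, kept to tiny inputs since the values explode almost immediately.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation: a with n arrows applied to b.

    One arrow is exponentiation; each additional arrow iterates
    the operation below it, the "new scale" described above.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # n arrows applied b times = (n-1) arrows applied to the (b-1) case
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))


print(up_arrow(2, 1, 3))  # 2^3 = 8
print(up_arrow(2, 2, 3))  # 2^^3 = 2^(2^2) = 16
print(up_arrow(2, 3, 3))  # 2^^^3 = 2^^4 = 65536
```

Already at four arrows the numbers outgrow anything a computer can store, which is exactly the point: understanding the rule lets you reason about cases you could never compute one by one.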

Of course, research doesn’t always work out this way. My last few papers counted three, four, five, with six on the way. (One and two were already known.) Unlike the ideal cases that go one, two, infinity, here “two” doesn’t give all the pieces you need to keep going. You need to go a few numbers more to get novel insights. That said, we are thinking about “infinity” now, so look forward to a future post that says something about that.

A lot of frustration in physics comes from situations when “infinity” remains stubbornly out of reach. When people complain about all the models for supersymmetry, or inflation, in some sense they’re complaining about fields that haven’t taken that “infinity” step. One or two models of inflation are nice, but by the time the count reaches ten you start hoping that someone will describe all possible models of inflation in one paper, and see if they can make any predictions from that.

(In particle physics, there’s an extent to which people can actually do this. There are methods to describe all possible modifications of the Standard Model in terms of what sort of effects they can have on observations of known particles. There’s a group at NBI who work on this sort of thing.)

The gold standard, though, is one, two, infinity. Our ability to step back, stop working case-by-case, and move on to the next level is not just a cute trick: it’s a foundation for exponential progress. If we can count one, two, infinity, then there’s nowhere we can’t reach.

The Multiverse Can Only Kill Physics by Becoming Physics

I’m not a fan of the multiverse. I think it’s over-hyped, way beyond its current scientific support.

But I don’t think it’s going to kill physics.

By “the multiverse” I’m referring to a group of related ideas. There’s the idea that we live in a vast, varied universe, with different physical laws in different regions. Relatedly, there’s the idea that the properties of our region aren’t typical of the universe as a whole, just typical of places where life can exist. It may be that in most of the universe the cosmological constant is enormous, but if life can only exist in places where it is tiny then a tiny cosmological constant is what we’ll see. That sort of logic is called anthropic reasoning. If it seems strange, think about a smaller scale: there are many planets in the universe, but only a small number of them can support life. Still, we shouldn’t be surprised that we live on a planet that can support life: if it couldn’t, we wouldn’t live here!

If we really do live in a multiverse, though, some of what we think of as laws of physics are just due to random chance. Maybe the quarks have the masses they do not for some important reason, but just because they happened to end up that way in our patch of the universe.

This seems to have depressing implications. If the laws of physics are random, or just consequences of where life can exist, then what’s left to discover? Why do experiments at all?

Well, why not ask the geoscientists?

[Image: map of tectonic plate boundaries. Caption: “These guys”]

We might live in one universe among many, but we definitely live on one planet among many. And somehow, this realization hasn’t killed geoscience.

That’s because knowing we live on a random planet doesn’t actually tell us very much.

Now, I’m not saying you can’t do anthropic reasoning about the Earth. For example, it looks like an active system of plate tectonics is a necessary ingredient for life. Even if plate tectonics is rare, we shouldn’t be surprised to live on a planet that has it.

Ok, so imagine it’s 1900, before Wegener proposed continental drift. Scientists believe there are many planets in the universe, that we live in a “multiplanet”. Could you predict plate tectonics?

Even knowing that we live on one of the few planets that can support life, you don’t know how it supports life. Even living in a “multiplanet”, geoscience isn’t dead. The specifics of our Earth are still going to teach you something important about how planets work.

Physical laws work the same way. I’ve said that the masses of the quarks could be random, but it’s not quite that simple. The underlying reasons why the masses of the quarks are what they are could be random: the specifics of how six extra dimensions happened to curl up in our region of the universe, for example. But there’s important physics in between: the physics of how those random curlings of space give rise to quark masses. There’s a mechanism there, and we can’t just pick one out of a hat or work backwards to it anthropically. We have to actually go out and discover the answer.

Similarly, we don’t know automatically which phenomena are “random”, which are “anthropic”, and which are required by some deep physical principle. Even in a multiverse, we can’t assume that everything comes down to chance, we only know that some things will, much as the geoscientists don’t know what’s unique to Earth and what’s true of every planet without actually going out and checking.

You can even find a notion of “naturalness” here, if you squint. In physics, we find phenomena like the mass of the Higgs “unnatural”: they’re “fine-tuned” in a way that cries out for an explanation. Normally, we think of this in terms of a hypothetical “theory of everything”: the more “fine-tuned” something appears, the harder it would be to explain it in a final theory. In a multiverse, it looks like we’d have to give up on this, because even the most unlikely-looking circumstance would happen somewhere, especially if it’s needed for life.

Once again, though, imagine you’re a geoscientist. Someone suggests a ridiculously fine-tuned explanation for something: perhaps volcanoes only work if they have exactly the right amount of moisture. Even though we live on one planet in a vast universe, you’re still going to look for simpler explanations before you move on to more complicated ones. It’s human nature, and by and large it’s the only way we’re capable of doing science. As physicists, we’ve papered this over with technical definitions of naturalness, but at the end of the day even in a multiverse we’ll still start with less fine-tuned-looking explanations and only accept the fine-tuned ones when the evidence forces us to. It’s just what people do.

The only way for anthropic reasoning to get around this, to really make physics pointless once and for all, is if it actually starts making predictions. If anthropic reasoning in physics can be made much stronger than anthropic reasoning in geoscience (which, as mentioned, didn’t predict tectonic plates until a century after their discovery) then maybe we can imagine getting to a point where it tells us what particles we should expect to discover, and what masses they should have.

At that point, though, anthropic reasoning won’t have made physics pointless: it will have become physics.

If anthropic reasoning is really good enough to make reliable, falsifiable predictions, then we should be ecstatic! I don’t think we’re anywhere near that point, though some people are earnestly trying to get there. But if it really works out, then we’d have a powerful new method to make predictions about the universe.

 

Ok, so with all of this said, there is one other worry.

Karl Popper criticized Marxism and Freudianism for being unfalsifiable. In both disciplines, there was a tendency to tell what were essentially “just-so stories”. They could “explain” any phenomenon by setting it in their framework and explaining how it came to be “just so”. These explanations didn’t make new predictions, and different people often ended up coming up with different explanations with no way to distinguish between them. They were stories, not scientific hypotheses. In more recent times, the same criticism has been made of evolutionary psychology. In each case the field is accused of being able to justify anything and everything in terms of its overly ambiguous principles, whether dialectical materialism, the unconscious mind, or the ancestral environment.

[Image: cover of Kipling’s Just So Stories (1902). Caption: “Or an elephant’s ‘satiable curtiosity”]

You’re probably worried that this could happen to physics. With anthropic reasoning and the multiverse, what’s to stop physicists from just proposing some “anthropic” just-so-story for any evidence we happen to find, no matter what it is? Surely anything could be “required for life” given a vague enough argument.

You’re also probably a bit annoyed that I saved this objection for last. I know that for many people, this is precisely what you mean when you say the multiverse will “kill physics”.

I’ve saved this for last for a reason though. It’s because I want to point out something important: this outcome, that our field degenerates into just-so-stories, isn’t required by the physics of the multiverse. Rather, it’s a matter of sociology.

If we hold anthropic reasoning to the same standards as the rest of physics, then there’s no problem: if an anthropic explanation doesn’t make falsifiable predictions then we ignore it. The problem comes if we start loosening our criteria, start letting people publish just-so-stories instead of real science.

This is a real risk! I don’t want to diminish that. It’s harder than it looks for a productive academic field to fall into bullshit, but just-so-stories are a proven way to get there.

What I want to emphasize is that we’re all together in this. We all want to make sure that physics remains scientific. We all need to be vigilant, to prevent a culture of just-so-stories from growing. Regardless of whether the multiverse is the right picture, and regardless of how many annoying TV specials they make about it in the meantime, that’s the key: keeping physics itself honest. If we can manage that, nothing we discover can kill our field.

Boltzmann Brains, Evil Demons, and Why It’s Occasionally a Good Idea to Listen to Philosophers

There’s been a bit of a buzz recently about a paper Sean Carroll posted to the arXiv, “Why Boltzmann Brains Are Bad”. The argument in the paper isn’t new, it’s something Carroll has been arguing for a long time, and the arXiv post was just because he had been invited to contribute a piece to a book on Current Controversies in Philosophy of Science.

(By the way: in our field, invited papers and conference proceedings are almost always reviews of old work, not new results. If you see something on arXiv and want to know whether it’s actually new work, the “Comments:” section will almost always mention this.)

While the argument isn’t new, it is getting new attention. And since I don’t think I’ve said much about my objections to it, now seems like a good time to do so.

Carroll’s argument is based on theoretical beings called Boltzmann brains. The idea is that if you wait a very very long time in a sufficiently random (“high-entropy”) universe, the matter in that universe will arrange itself in pretty much every imaginable way, if only for a moment. In particular, it will eventually form a brain, or enough of a brain to have a conscious experience. Wait long enough, and you can find a momentary brain having any experience you want, with any (fake) memories you want. Long enough, and you can find a brain having the same experience you are having right now.
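The “wait long enough” logic is just the statement that independent random fluctuations eventually hit any fixed configuration. A toy Python illustration (nothing here is real thermodynamics; the 8-bit “brain” is an arbitrary stand-in for a vastly less likely configuration):

```python
import random

# Toy model: a "fluctuation" is a random 8-bit string, and we ask how
# likely it is that one specific target pattern shows up at least once.
# P(at least one match in N tries) = 1 - (1 - p)^N, which tends to 1.
random.seed(0)
target = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrarily chosen "configuration"
p = 0.5 ** len(target)             # chance a single fluctuation matches

for n_tries in (10, 1000, 100000):
    prob = 1 - (1 - p) ** n_tries
    print(f"{n_tries:>6} fluctuations: P(seen at least once) = {prob:.4f}")

# Direct check by simulation: keep fluctuating until the target appears.
tries = 1
while [random.randint(0, 1) for _ in target] != target:
    tries += 1
print(f"Target appeared after {tries} random tries")
```

For a real brain the single-try probability is unimaginably smaller, but the structure of the argument is the same: given enough time, the probability of at least one occurrence creeps toward certainty.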

So, Carroll asks, how do you know you aren’t a Boltzmann brain? If the universe exists for long enough, most of the beings having your current experiences would be Boltzmann brains, not real humans. But if you really are a Boltzmann brain, then you can’t know anything about the universe at all: everything you think of as your memories is just a random fluctuation, with no connection to the real world.

Carroll calls this sort of situation “cognitively unstable”. If you reason scientifically that the universe must be full of Boltzmann brains, then you can’t rule out that you could be a Boltzmann brain, and thus you shouldn’t accept your original reasoning.

The only way out, according to Carroll, is if we live in a universe that will never contain Boltzmann brains, for example one that won’t exist in its current form long enough to create them. So from a general concern about cognitive instability, Carroll argues for specific physics. And if that seems odd…well, it is.

For the purpose of this post, I’m going to take for granted the physics case: that a sufficiently old and random universe would indeed produce Boltzmann brains. That’s far from uncontroversial, and if you’re interested in that side of the argument (and have plenty of patience for tangents and Czech poop jokes) Lubos Motl posted about it recently.

Instead, I’d like to focus on the philosophical side of the argument.

Let’s start with intro philosophy, and talk about Descartes.

Descartes wanted to start philosophy from scratch by questioning everything he thought he knew. In one of his arguments, he asks the reader to imagine an evil demon.


Probably Graz’zt. It’s usually Graz’zt.

Descartes imagines this evil demon exercising all its power to deceive. Perhaps it could confound your senses with illusions, or modify your memories. If such a demon existed, there would be no way to know if anything you believed or reasoned about the world was correct. So, Descartes asked, how do you know you’re not being deceived by an evil demon right now?

Amusingly, like Carroll, Descartes went on to use this uncertainty to argue for specific proposals in physics: in Descartes’ case, everything from the existence of a benevolent god to the idea that gravity was caused by a vortex of fluid around the sun.

Descartes wasn’t the last to propose this kind of uncertainty, and over the years philosophers have posed more sophisticated challenges to the idea that it makes sense to reason from the past about the future at all.

Carroll is certainly aware of all of this. But I suspect he doesn’t quite appreciate the current opinion philosophers have on these sorts of puzzles.

The impression I’ve gotten from philosophers is that they don’t take this kind of “cognitive instability” very seriously anymore. There are specialists who still work on it, and it’s still of historical interest. But the majority of philosophers have moved on.

How did they move on? How have they dismissed these kinds of arguments?

That varies. Philosophers don’t tend to have the kind of consensus that physicists usually do.

Some reject them on pragmatic grounds: science works, even if we can’t “justify” it. Some use a similar argument to Carroll’s, but take it one step back, arguing that we shouldn’t worry that we could be deceived by an evil demon or be a Boltzmann brain because those worries are themselves cognitively unstable. Some bite the bullet, accepting that reasoning can’t be justified, then just ignore the problem and go on with their lives.

The common trait of all of these rejections, though? They don’t rely on physics.

Philosophers don’t argue “evil demons are impossible, therefore we can be sure we’re not deceived by evil demons”. They don’t argue “dreams are never completely realistic, so we can’t just be dreaming right now”.

And they certainly don’t try to argue the reverse: that consistency means there can never be evil demons, or never be realistic dreams.

I was on the debate team in high school. One popular tactic was called the “non-unique”. If your opponent argued that your plan had some negative consequences, you could argue that those consequences would happen regardless of whether you got to enact your plan or not: that the consequences were non-unique.

At this point, philosophers understand that cognitive instability and doubt are “non-unique”. No matter the physics, no matter how the world looks, it’s still possible to argue that reasoning isn’t justified, that even the logic we used to doubt the world in the first place could be flawed.

Carroll’s claim seems non-unique to me. Yes, in a universe that exists for a long time you could be a Boltzmann brain. But even if you don’t live in such a universe, you could still be a brain in a jar, or a simulation. You could still be deceived by an “evil demon”.

And so regardless, you need the philosophers. Regardless, you need some argument that reasoning works, that you can ignore doubt. And once you’re happy with that argument, you don’t have to worry about Boltzmann brains.