Category Archives: Amateur Philosophy

Facts About Our Capabilities Are Facts About the World

A paper leaked from Google last week claimed that their researchers had achieved “quantum supremacy”, the milestone at which a quantum computer performs a calculation faster than any existing classical computer. Scott Aaronson has a great explainer about this. The upshot is that Google’s computer is much too small to crack our encryption (only 53 qubits, the quantum analogue of classical bits), but it still appears to be a genuine quantum computer doing a genuine quantum computation that is genuinely not feasible otherwise.
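To get a rough sense of why even 53 qubits is awkward for classical computers, here is a back-of-the-envelope sketch in Python (my own, not a calculation from the paper or from Scott’s post): simulating a quantum computer naively means storing one complex amplitude for every possible state of the qubits, and that number doubles with every qubit you add.

```python
# Naive classical simulation of n qubits stores 2**n complex amplitudes.
# At 16 bytes per amplitude this gives a rough upper bound on the memory a
# brute-force simulation would need; real simulation methods are cleverer,
# so treat this as an illustration, not a hard limit.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 53):
    print(f"{n} qubits: {2 ** n:.2e} amplitudes, "
          f"~{state_vector_bytes(n) / 1e15:.3g} petabytes")
```

At 53 qubits the naive approach already needs on the order of a hundred petabytes of memory, which is why “only 53 qubits” can still be out of reach for a straightforward classical simulation.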

How impressed should we be about this?

On one hand, the practical benefits of a 53-qubit computer are pretty minimal. Scott discusses some applications: you can generate random numbers in a way that lets others verify that they are truly random, the kind of thing that’s occasionally handy in cryptography. Still, by itself this won’t change the world, and compared to the quantum computing hype I can understand if people find this underwhelming.

On the other hand, as Scott says, this falsifies the Extended Church-Turing Thesis! And that sounds pretty impressive, right?

Ok, I’m actually just re-phrasing what I said before. The Extended Church-Turing Thesis proposes that a classical computer (more specifically, a probabilistic Turing machine) can efficiently simulate any reasonable computation. Falsifying it means finding something that a classical computer cannot compute efficiently but another sort of computer (say, a quantum computer) can. If the calculation Google did truly can’t be done efficiently on a classical computer (this is not proven, though experts seem to expect it to be true) then yes, that’s what Google claims to have done.

So we get back to the real question: should we be impressed by quantum supremacy?

Well, should we have been impressed by the Higgs?

The detection of the Higgs boson in 2012 hasn’t led to any new Higgs-based technology. No-one expected it to. It did teach us something about the world: that the Higgs boson exists, and that it has a particular mass. I think most people accept that that’s important: that it’s worth knowing how the world works on a fundamental level.

Google may have detected the first-known violation of the Extended Church-Turing Thesis. This could eventually lead to some revolutionary technology. For now, though, it hasn’t. Instead, it teaches us something about the world.

It may not seem like it, at first. Unlike the Higgs boson, “Extended Church-Turing is false” isn’t a law of physics. Instead, it’s a fact about our capabilities. It’s a statement about the kinds of computers we can and cannot build, about the kinds of algorithms we can and cannot implement, the calculations we can and cannot do.

Facts about our capabilities are still facts about the world. They’re still worth knowing, for the same reasons any other fact about the world is worth knowing. They still give us a clearer picture of how the world works, which tells us in turn what we can and cannot do. According to the leaked paper, Google has taught us a new fact about the world, a deep fact about our capabilities. If that’s true we should be impressed, even without new technology.

The Changing Meaning of “Explain”

This is another “explanations are weird” post.

I’ve been reading a biography of James Clerk Maxwell, who formulated the theory of electromagnetism. Nowadays, we think about the theory in terms of fields: we think there is an “electromagnetic field”, filling space and time. At the time, though, this was a very unusual way to think, and not even Maxwell was comfortable with it. He felt that he had to present a “physical model” to justify the theory: a picture of tiny gears and ball bearings, somehow occupying the same space as ordinary matter.

Bang! Bang! Maxwell’s silver bearings…

Maxwell didn’t think space was literally filled with ball bearings. He did, however, believe he needed a picture that was sufficiently “physical”, that wasn’t just “mathematics”. Later, when he wrote down a theory that looked more like modern field theory, he still thought of it as provisional: a way to use Lagrange’s mathematics to ignore the unknown “real physical mechanism” and just describe what was observed. To Maxwell, field theory was a description, but not an explanation.

This attitude surprised me. I would have thought physicists in Maxwell’s day could have accepted fields. After all, they had accepted Newton.

In his time, there was quite a bit of controversy about whether Newton’s theory of gravity was “physical”. While rival models described planets driven around by whirlpools, Newton simply described the mathematics of the force, an “action at a distance”. Newton famously declared hypotheses non fingo, “I feign no hypotheses”, insisting that he wasn’t saying anything about why gravity worked, merely how it worked. Over time, as the whirlpool models continued to fail, people gradually accepted that gravity could be explained as action at a distance.

You’d think this would have prepared them to accept fields as well. Instead, by Maxwell’s day the options for a “physical explanation” had simply been enlarged by one: now, instead of just explaining something with mechanical parts, you could also explain it with action at a distance. Indeed, many physicists tried to explain electricity and magnetism with some sort of gravity-like action at a distance. They failed, though. You really do need fields.

The author of the biography is an engineer, not a physicist, so I find his perspective unusual at times. After discussing Maxwell’s discomfort with fields, the author says that today physicists are different: instead of insisting on a physical explanation, they accept that there are some things they just cannot know.

At first, I wanted to object: we do have physical explanations, we explain things with fields! We have electromagnetic fields and electron fields, gluon fields and Higgs fields, even a gravitational field for the shape of space-time. These fields aren’t papering over some hidden mechanism, they are the mechanism!

Are they, though?

Fields aren’t quite like the whirlpools and ball bearings of historical physicists. Sometimes fields that look different are secretly the same: the two “different explanations” will give the same result for any measurement you could ever perform. In my area of physics, we try to avoid this by focusing on the measurements instead, building as much as we can out of observable quantities instead of fields. In effect we’re going back yet another layer, another dose of hypotheses non fingo.

Physicists still ask for “physical explanations”, and still worry that some picture might be “just mathematics”. But what that means has changed, and continues to change. I don’t think we have a common standard right now, at least nothing as specific as “mechanical parts or action at a distance, and nothing else”. Somehow, we still care about whether we’ve given an explanation, or just a description, even though we can’t define what an explanation is.

Experimental Theoretical Physics

I was talking with some other physicists about my “Black Box Theory” thought experiment, where theorists have to compete with an impenetrable block of computer code. Even if the theorists come up with a “better” theory, that theory won’t predict anything that the code couldn’t already. If “predicting something new” is an essential part of science, then the theorists can no longer do science at all.

One of my colleagues made an interesting point: in the thought experiment, the theorists can’t predict new behaviors of reality. But they can predict new behaviors of the code.

Even when we have the right theory to describe the world, we can’t always calculate its consequences. Often we’re stuck in the same position as the theorists in the thought experiment, trying to understand the output of a theory that might as well be a black box. Increasingly, we are employing a kind of “experimental theoretical physics”. We try to predict the result of new calculations, just as experimentalists try to predict the result of new experiments.

This experimental approach seems to be a genuine cultural difference between physics and mathematics. There is such a thing as experimental mathematics, to be clear. And while mathematicians prefer proof, they’re not averse to working from a good conjecture. But when mathematicians calculate and conjecture, they still try to set a firm foundation. They’re precise about what they mean, and careful about what they imply.

“Experimental theoretical physics”, on the other hand, is much more like experimental physics itself. Physicists look for plausible patterns in the “data”, seeing if they make sense in some “physical” way. The conjectures aren’t always sharply posed, and the leaps of reasoning are often more reckless than those of experimental mathematicians. We try to use intuition gleaned from a history of experiments on, and calculations about, the physical world.

There’s a real danger here, because mathematical formulas don’t behave like nature does. When we look at nature, we expect it to behave statistically: if we look at a large number of examples, we get more and more confident that they represent the behavior of the whole. That kind of reasoning is sometimes risky even in nature, but it’s far more dangerous in mathematics, because it’s often not clear what a good “sample” even is. Proving something is true “most of the time” is vastly different from proving it is true all of the time, especially when you’re looking at an infinity of possible examples. We can’t meet our favorite “five sigma” level of statistical confidence, or even know if we’re close.
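A standard cautionary example from elementary number theory (my addition, not anything specific to physics) shows how far a “statistical” check can lead you astray: Euler’s polynomial n^2 + n + 41 produces primes for every n from 0 to 39, and then fails.

```python
# Euler's polynomial n^2 + n + 41 is prime for n = 0..39: forty straight
# "confirmations". It fails at n = 40, where 40^2 + 40 + 41 = 1681 = 41^2.
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

values = [n * n + n + 41 for n in range(41)]
print(all(is_prime(v) for v in values[:40]))  # True: the pattern looks solid
print(is_prime(values[40]))                   # False: the pattern is not a theorem
```

Forty data points with no exceptions would be a spectacular experimental result; as a proof, it is worth nothing.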

At the same time, experimental theoretical physics has real power. Experience may be a bad guide to mathematics, but it’s a better guide to the mathematics that specifically shows up in physics. And in practice, our recklessness can accomplish great things, uncovering behaviors mathematicians would never have found by themselves.

The key is to always keep in mind that the two fields are different. “Experimental theoretical physics” isn’t mathematics, and it isn’t pretending to be, any more than experimental physics is pretending to be theoretical physics. We’re gathering data and advancing tentative explanations, but we’re fully aware that they may not hold up when examined with full rigor. We want to inspire, to raise questions and get people to think about the principles that govern the messy physical theories we use to describe our world. Experimental physics, theoretical physics, and mathematics are all part of a shared ecosystem, and each has its role to play.

The Black Box Theory of Everything

What is science? What makes a theory scientific?

There’s a picture we learn in high school. It’s not the whole story, certainly: philosophers of science have much more sophisticated notions. But for practicing scientists, it’s a picture that often sits in the back of our minds, informing what we do. Because of that, it’s worth examining in detail.

In the high school picture, scientific theories make predictions. Importantly, postdictions don’t count: if you “predict” something that already happened, it’s too easy to cheat and adjust your prediction. Also, your predictions must be different from those of other theories. If all you can do is explain the same results with different words, you aren’t doing science, you’re doing “something else” (“metaphysics”, “religion”, “mathematics”…whatever the person you’re talking to wants to make fun of, but definitely not science).

Seems reasonable, right? Let’s try a thought experiment.

In the late 1950’s, the physics of protons and neutrons was still quite mysterious. They seemed to be part of a bewildering zoo of particles that no-one could properly explain. In the 60’s and 70’s the field started converging on the right explanation, from Gell-Mann’s eightfold way to the parton model to the full theory of quantum chromodynamics (QCD for short). Today we understand the theory well enough to package things into computer code: amplitudes programs like BlackHat for collisions of individual quarks, jet algorithms that describe how those quarks become signals in colliders, lattice QCD implemented on supercomputers for pretty much everything else.

Now imagine that you had a time machine, prodigious programming skills, and a grudge against 60’s-era physicists.

Suppose you wrote a computer program that combined the best of modern QCD: BlackHat and more from the amplitudes side, the best jet algorithms, the best lattice QCD code, a program that could reproduce any calculation in QCD that anyone can do today. Further, suppose you don’t care about silly things like making your code readable. Since I began the list above with BlackHat, we’ll call the combined box of different codes BlackBox.

Now suppose you went back in time, and told the bewildered scientists of the 50’s that nuclear physics was governed by a very complicated set of laws: the ones implemented in BlackBox.

Behold, your theory

Your “BlackBox theory” passes the high school test. Not only would it match all previous observations, it could make predictions for any experiment the scientists of the 50’s could devise. Up until the present day, your theory would match observations as well as…well, as well as QCD does today.

(Let’s ignore for the moment that they didn’t have computers that could run this code in the 50’s. This is a thought experiment, we can fudge things a bit.)

Now suppose that one of those enterprising 60’s scientists, Gell-Mann or Feynman or the like, noticed a pattern. Maybe they got it from an experiment scattering electrons off of protons, maybe they saw it in BlackBox’s code. They notice that different parts of “BlackBox theory” run on related rules. Based on those rules, they suggest a deeper reality: protons are made of quarks!

But is this “quark theory” scientific?

“Quark theory” doesn’t make any new predictions. Anything you could predict with quarks, you could predict with BlackBox. According to the high school picture of science, for these 60’s scientists quarks wouldn’t be scientific: they would be “something else”, metaphysics or religion or mathematics.

And in practice? I doubt that many scientists would care.

“Quark theory” makes the same predictions as BlackBox theory, but I think most of us understand that it’s a better theory. It actually explains what’s going on. It takes different parts of BlackBox and unifies them into a simpler whole. And even without new predictions, that would be enough for the scientists in our thought experiment to accept it as science.

Why am I thinking about this? For two reasons:

First, I want to think about what happens when we get to a final theory, a “Theory of Everything”. It’s probably ridiculously arrogant to think we’re anywhere close to that yet, but nonetheless the question is on physicists’ minds more than it has been for most of history.

Right now, the Standard Model has many free parameters, numbers we can’t predict and must fix based on experiments. Suppose there are two options for a final theory: one that has a free parameter, and one that doesn’t. Once that one free parameter is fixed, both theories will match every test you could ever devise (they’re theories of everything, after all).

If we come up with both theories before testing that final parameter, then all is well. The theory with no free parameters will predict the result of that final experiment, the other theory won’t, so the theory without the extra parameter wins the high school test.

What if we do the experiment first, though?

If we do, then we’re in a strange situation. Our “prediction” of the one free parameter is now a “postdiction”. We’ve matched numbers, sure, but by the high school picture we aren’t doing science. Our theory, the same theory that was scientific if history went the other way, is now relegated to metaphysics/religion/mathematics.

I don’t know about you, but I’m uncomfortable with the idea that what is or is not science depends on historical chance. I don’t like the idea that we could be stuck with a theory that doesn’t explain everything, simply because our experimentalists were able to work a bit faster.

My second reason focuses on the here and now. You might think we have nothing like BlackBox on offer, no time travelers taunting us with poorly commented code. But we’ve always had the option of our own Black Box theory: experiment itself.

The Standard Model fixes some of its parameters from experimental results. You do a few experiments, and you can predict the results of all the others. But why stop there? Why not fix all of our parameters with experiments? Why not fix everything with experiments?

That’s the Black Box Theory of Everything. Each individual experiment you could possibly do gets its own parameter, describing the result of that experiment. You do the experiment, fix that parameter, then move on to the next experiment. Your theory will never be falsified, you will never be proven wrong. Sure, you never predict anything either, but that’s just an extreme case of what we have now, where the Standard Model can’t predict the mass of the Higgs.
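If it helps to see the caricature spelled out, here is a toy sketch of what such a “theory” amounts to (purely my own illustration, with made-up names): a lookup table that only ever returns results it has already been handed.

```python
# Toy caricature of the Black Box Theory of Everything: one free parameter
# per experiment, fixed only after that experiment has been run.
# (Purely illustrative; the class and the example experiment are made up.)
class BlackBoxTheory:
    def __init__(self):
        self.parameters = {}  # experiment description -> recorded outcome

    def fix_parameter(self, experiment, outcome):
        self.parameters[experiment] = outcome

    def predict(self, experiment):
        # Only "predicts" experiments that have already been done,
        # so it can never be falsified, and never tells you anything new.
        if experiment not in self.parameters:
            raise ValueError("No prediction yet: run the experiment first.")
        return self.parameters[experiment]

theory = BlackBoxTheory()
theory.fix_parameter("LHC Higgs mass measurement", "about 125 GeV")
print(theory.predict("LHC Higgs mass measurement"))  # a postdiction, not a prediction
```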

What’s wrong with the Black Box Theory? (I trust we can all agree that it’s wrong.)

It’s not just that it can’t make predictions. You could make it a Black Box All But One Theory instead, one that predicts one experiment and takes every other experiment as input. You could even make a Black Box Except the Standard Model Theory, one that predicts everything we can predict now and just leaves out everything we’re still confused by.

The Black Box Theory is wrong because the high school picture of what counts as science is wrong. The high school picture is a useful guide, it’s a good rule of thumb, but it’s not the ultimate definition of science. And especially now, when we’re starting to ask questions about final theories and ultimate parameters, we can’t cling to the high school picture. We have to be willing to actually think, to listen to the philosophers and consider our own motivations, to figure out what, in the end, we actually mean by science.


Changing the Question

I’ve recently been reading Why Does the World Exist?, a book by the journalist Jim Holt. In it he interviews a range of physicists and philosophers, asking each the question in the title. As the book goes on, he concludes that physicists can’t possibly give him the answer he’s looking for: even if physicists explain the entire universe from simple physical laws, they still would need to explain why those laws exist. A bit disappointed, he turns back to the philosophers.

Something about Holt’s account rubs me the wrong way. Yes, it’s true that physics can’t answer this kind of philosophical problem, at least not in a logically rigorous way. But I think we do have a chance of answering the question nonetheless…by eclipsing it with a better question.

How would that work? Let’s consider a historical example.

Does the Earth go around the Sun, or does the Sun go around the Earth? We learn in school that this is a solved question: Copernicus was right, the Earth goes around the Sun.

The details are a bit more subtle, though. The Sun and the Earth both attract each other: while it is a good approximation to treat the Sun as fixed, in reality both bodies move in elliptical orbits around their common center of mass, which sits at a shared focus close to, but not exactly at, the center of the Sun. Furthermore, this is all dependent on your choice of reference frame: if you wish, you can choose coordinates in which the Earth stays still while the Sun moves.
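To put a number on “close to, but not exactly at”: here is a quick two-body estimate (my own, using standard approximate values) of where the Sun-Earth center of mass actually sits.

```python
# Where is the Sun-Earth barycenter? A two-body estimate with standard values.
# (Earth alone; Jupiter shifts the full solar-system barycenter far more.)
M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg
A = 1.496e11        # m, mean Sun-Earth distance
R_SUN = 6.957e8     # m, solar radius

offset = A * M_EARTH / (M_SUN + M_EARTH)
print(f"Offset from the Sun's center: about {offset / 1e3:.0f} km")
print(f"That is {offset / R_SUN:.2%} of the solar radius: well inside the Sun")
```

The shared focus sits only a few hundred kilometers from the Sun’s center, deep inside the Sun itself, which is part of why treating the Sun as fixed works so well.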

So what stops a modern-day Tycho Brahe from arguing that the Sun and the stars and everything else orbit around the Earth?

The reason we aren’t still debating the Copernican versus the Tychonic system isn’t that we proved Copernicus right. Instead, we replaced the old question with a better one. We don’t actually care which object is the center of the universe. What we care about is whether we can make predictions, and what mathematical laws we need to do so. Newton’s law of universal gravitation lets us calculate the motion of the solar system. It’s easier to teach it by talking about the Earth going around the Sun, so we talk about it that way. The “philosophical” question, about the “center of the universe”, has been explained away by the more interesting practical question.

My suspicion is that other philosophical questions will be solved in this way. Maybe physicists can’t solve the ultimate philosophical question, of why the laws of physics are one way and not another. But if we can predict unexpected laws and match observations of the early universe, then we’re most of the way to making the question irrelevant. Similarly, perhaps neuroscientists will never truly solve the mystery of consciousness, at least the way philosophers frame it today. Nevertheless, if they can describe brains well enough to understand why we act like we’re conscious, if they have something in their explanation that looks sufficiently “consciousness-like”, then it won’t matter whether they meet the philosophical requirements; people simply won’t care. The question will have been eaten by a more interesting question.

This can happen in physics by itself, without reference to philosophy. Indeed, it may happen again soon. In the New Yorker this week, Natalie Wolchover has an article in which she talks to Nima Arkani-Hamed about the search for better principles to describe the universe. In it, Nima talks about looking for a deep mathematical question that the laws of physics answer. Peter Woit has expressed confusion that Nima can both believe this and pursue various complicated, far-fetched, and at times frankly ugly ideas for new physics.

I think the way to reconcile these two perspectives is to recognize that Nima takes naturalness seriously. The naturalness argument states that physics as we currently see it is “unnatural”: in particular, that we can’t get it cleanly from the kinds of physical theories we understand. If you accept the argument as stated, then you get driven down a rabbit hole of increasingly strange solutions: versions of supersymmetry that cleverly hide from all experiments, hundreds of copies of the Standard Model, or even a multiverse.

Taking naturalness seriously doesn’t just mean accepting the argument as stated though. It can also mean believing the argument is wrong, but wrong in an interesting way.

One interesting way naturalness could be wrong would be if our reductionist picture of the world, where the ultimate laws live on the smallest scales, breaks down. I’ve heard vague hints from physicists over the years that this might be the case, usually based on the way that gravity seems to mix small and large scales. (Wolchover’s article also hints at this.) In that case, you’d want to find not just a new physical theory, but a new question to ask, something that could eclipse the old question with something more interesting and powerful.

Nima’s search for better questions seems to drive most of his research now. But I don’t think he’s 100% certain that the old questions are wrong, so you can still occasionally see him talking about multiverses and the like.

Ultimately, we can’t predict when a new question will take over. It’s a mix of the social and the empirical, of new predictions and observations but also of which ideas are compelling and beautiful enough to get people to dismiss the old question as irrelevant. It feels like we’re due for another change…but we might not be, and even if we are it might be a long time coming.

Different Fields, Different Worlds

My grandfather is a molecular biologist. When we meet, we swap stories: the state of my field and his, different methods and focuses but often a surprising amount of common ground.

Recently he forwarded me an article by Raymond Goldstein, a biological physicist, arguing that biologists ought to be more comfortable with physical reasoning. The article is interesting in its own right, contrasting how physicists and biologists think about the relationship between models, predictions, and experiments. But what struck me most about the article wasn’t the content, but the context.

Goldstein’s article focuses on a question that seemed to me oddly myopic: should physical models be in the Results section, or the Discussion section?

As someone who has never written a paper with either a Results section or a Discussion section, I wondered why anyone would care. In my field, paper formats are fairly flexible. We usually have an Introduction and a Conclusion, yes, but in between we use however many sections we need to explain what we need to. In contrast, biology papers seem to have a very fixed structure: after the Introduction, there’s a Results section, a Discussion section, and a Materials and Methods section at the end.

At first blush, this seemed incredibly bizarre. Why describe your results before the methods you used to get them? How do you talk about your results without discussing them, but still take a full section to do it? And why do reviewers care how you divide things up in the first place?

It made a bit more sense once I thought about how biology differs from theoretical physics. In theoretical physics, the “methods” are most of the result: unsolved problems are usually unsolved because existing methods don’t solve them, and we need to develop new methods to make progress. Our “methods”, in turn, are often the part of the paper experts are most eager to read. In biology, in contrast, the methods are much more standardized. While papers will occasionally introduce new methods, there are so many unexplored biological phenomena that most of the time researchers don’t need to invent a new method: just asking a question no-one else has asked can be enough for a discovery. In that environment, the “results” matter a lot more: they’re the part that takes the most scrutiny, that needs to stand up on its own.

I can even understand the need for a fixed structure. Biology is a much bigger field than theoretical physics. My field is small enough that we all pretty much know each other. If a paper is hard to read, we’ll probably get a chance to ask the author what they meant. Biology, in contrast, is huge. An important result could come from anywhere, and anyone. Having a standardized format makes it a lot easier to scan through an unfamiliar paper and find what you need, especially when there might be hundreds of relevant papers.

The problem with a standardized system, as always, is the existence of exceptions. A more “physics-like” biology paper is more readable with “physics-like” conventions, even if the rest of the field needs to stay “biology-like”. Because of that, I have a lot of sympathy for Goldstein’s argument, but I can’t help but feel that he should be asking for more. If creating new mathematical models and refining them with observation is at the heart of what Goldstein is doing, then maybe he shouldn’t have to use Results/Discussion/Methods in the first place. Maybe he should be allowed to write biology papers that look more like physics papers.

Adversarial Collaborations for Physics

Sometimes physics debates get ugly. For the scientists reading this, imagine your worst opponents. Think of the people who always misinterpret your work while using shoddy arguments to prop up their own, the people whose every question at a talk becomes a screaming match, until you just stop going to the same conferences altogether.

Now, imagine writing a paper with those people.

Adversarial collaborations, the subject of a recent contest on the blog Slate Star Codex, are a proposed method for resolving scientific debates. Two scientists on opposite sides of an argument commit to writing a paper together, describing the overall state of knowledge on the topic. For the paper to get published, both sides have to sign off on it: they both have to agree that everything in the paper is true. This prevents either side from cheating, or from coming back later with made-up objections: if a point in the paper is wrong, one side or the other is bound to catch it.

This won’t work for the most vicious debates, when one (or both) sides isn’t interested in common ground. But for some ongoing debates in physics, I think this approach could actually help.

One advantage of adversarial collaborations is in preventing accusations of bias. The debate between dark matter and MOND-like proposals is filled with these kinds of accusations: claims that one group or another is ignoring important data, being dishonest about the parameters they need to fit, or applying standards of proof they would never require of their own pet theory. Adversarial collaboration prevents these kinds of accusations: whatever comes out of an adversarial collaboration, both sides would make sure the other side didn’t bias it.

Another advantage of adversarial collaborations is that they make it much harder for one side to move the goalposts, or to accuse the other side of moving the goalposts. From the sidelines, one thing that frustrates me watching string theorists debate whether the theory can describe de Sitter space is that they rarely articulate what it would take to decisively show that a particular model gives rise to de Sitter. Any conclusion of an adversarial collaboration between de Sitter skeptics and optimists would at least guarantee that both parties agreed on the criteria. Similarly, I get the impression that many debates about interpretations of quantum mechanics are bogged down by one side claiming they’ve closed off a loophole with a new experiment, only for the other to claim it wasn’t the loophole they were actually using, something that could be avoided if both sides were involved in the experiment from the beginning.

It’s possible, even likely, that no-one will try adversarial collaboration for these debates. Even if they did, it’s quite possible the collaborations wouldn’t be able to agree on anything! Still, I have to hope that someone takes the plunge and tries writing a paper with their enemies. At minimum, it’ll be an interesting read!