On my “Who Am I?” page, I open with my background, calling myself a string theorist, then clarify: “in practice I’m more of a Particle Theorist, describing the world not in terms of short lengths of string but rather with particles that each occupy a single point in space”.
When I wrote that I didn’t think it would confuse people. Now that I’m older and wiser, I know people can be confused in a variety of ways. And since I recently saw someone confused about this particular phrase (yes I’m vagueblogging, but I suspect you’re reading this and know who you are 😉 ), I figured I’d explain it.
If you’ve learned a few things about quantum mechanics, maybe you have this slogan in mind:
“What we used to think of as particles are really waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”
With that in mind, my talk of “particles that each occupy a single point” doesn’t make sense. Doesn’t the slogan mean that particles don’t exist?
Here’s the thing: that’s the wrong slogan. The right slogan is just a bit different:
“What we used to think of as particles are ALSO waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”
The principle you were remembering is often called “wave-particle duality”. That doesn’t mean “particles don’t exist”. It means “waves and particles are the same thing”.
This matters, because just as wave-like properties are important, particle-like properties are important. And while it’s true that you can never know exactly where you will measure a particle, it’s also true that it’s useful, and even necessary, to think of it as occupying a single point.
That’s because particles can only affect each other when they’re at the same point. Physicists call this the principle of locality: the idea that there is no real “action at a distance”, that everything happens because of something traveling from point A to point B. Wave-particle duality doesn’t change that, it just makes the specific point uncertain. It means you have to add up over every specific point where the particles could have interacted, but each term in your sum still has to involve a specific point: quantum mechanics doesn’t let particles affect each other non-locally.
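Schematically (my own illustration of the standard setup, not a formula from any particular theory), that “sum over points” looks like this: the amplitude integrates over every spacetime point x where the interaction could have happened, but within each term all the fields sit at that same single point:

```latex
\mathcal{A} \;\sim\; g \int d^{4}x \;\, \phi_{1}(x)\,\phi_{2}(x)\,\phi_{3}(x)
```

Each term in the integral is perfectly local; only which x actually occurred is uncertain.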
Strings, in turn, are a little bit different. Strings have length, particles don’t. Particles interact at a point, strings can interact anywhere along the string. Strings introduce a teeny bit of non-locality.
When you compare particles and waves, you’re thinking pre-quantum mechanics, two classical things neither of which is the full picture. When you compare particles and strings, both are quantum, both are also waves. But in a meaningful sense one occupies a single point, and the other doesn’t.
Once a wunderkind student of Feynman, Wolfram is now best known for his software, Mathematica, a tool used by everyone from scientists to lazy college students. Almost all of my work is coded in Mathematica, and while it has some flaws (can someone please speed up the linear solver? Maple’s is so much better!) it still tends to be the best tool for the job.
Wolfram is also known for being a very strange person. There’s his tendency to name, or rename, things after himself. (There’s a type of Mathematica file that used to be called “.m”. Now by default they’re “.wl”, “Wolfram Language” files.) There’s his live-streamed meetings. And then there’s his physics.
In 2002, Wolfram wrote a book, “A New Kind of Science”, arguing that computational systems called cellular automata were going to revolutionize science. A few days ago, he released an update: a sprawling website for “The Wolfram Physics Project”. In it, he claims to have found a potential “theory of everything”, unifying general relativity and quantum physics in a cellular automata-like form.
If that gets your crackpot klaxons blaring, yeah, me too. But Wolfram was once a very promising physicist. And he has collaborators this time, who are currently promising physicists. So I should probably give him a fair reading.
So I compromised. I didn’t read his 448-page technical introduction. I read his 90-ish page blog post. The post is written for a non-technical audience, so I know it isn’t 100% accurate. But by seeing how someone chooses to promote their work, I can at least get an idea of what they value.
I started out optimistic, or at least trying to be. Wolfram starts with simple mathematical rules, and sees what kinds of structures they create. That’s not an unheard-of strategy in theoretical physics, including in my own field. And the specific structures he’s looking at look weirdly familiar, a bit like a generalization of cluster algebras.
Reading along, though, I got more and more uneasy. That unease peaked when I saw him describe how his structures give rise to mass.
Wolfram had already argued that his structures obey special relativity. (For a critique of this claim, see this twitter thread.) He found a way to define energy and momentum in his system, as “fluxes of causal edges”. He picks out a particular “flux of causal edges”, one that corresponds to “just going forward in time”, and defines it as mass. Then he “derives” E=mc², saying,
Sometimes in the standard formalism of physics, this relation by now seems more like a definition than something to derive. But in our model, it’s not just a definition, and in fact we can successfully derive it.
In “the standard formalism of physics”, E=mc² means “mass is the energy of an object at rest”. In Wolfram’s terms, it means “mass is the energy of an object just going forward in time”. If the “standard formalism of physics” “just defines” E=mc², so does Wolfram.
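For reference, the textbook route to that statement goes through the relativistic energy-momentum relation: set the momentum to zero (an object “just going forward in time”), and the rest energy is what’s left:

```latex
E^{2} = (pc)^{2} + (mc^{2})^{2} \;\xrightarrow{\;p\,=\,0\;}\; E = mc^{2}
```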
I haven’t read his technical summary. Maybe this isn’t really how his “derivation” works, maybe it’s just how he decided to summarize it. But it’s a pretty misleading summary, one that gives the reader entirely the wrong idea about some rather basic physics. It worries me, because both as a physicist and a blogger, he really should know better. I’m left wondering whether he meant to mislead, or whether instead he’s misleading himself.
That feeling kept recurring as I kept reading. There was nothing else as extreme as that passage, but a lot of pieces that felt like they were making a big deal about the wrong things, and ignoring what a physicist would find the most important questions.
Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.
I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.
The first misunderstanding: None of that post was quantum.
If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.
To be 100% clear: I am not saying that.
Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
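In symbols (the standard schematic, nothing specific to this post): a maximally “particle-like” configuration of a field φ is a spike at a single point, while a maximally “wave-like” one is a plane wave with definite wavelength and frequency:

```latex
\phi(x) \propto \delta^{3}(\mathbf{x} - \mathbf{x}_{0}) \qquad \text{vs.} \qquad \phi(x) \propto e^{\,i(\mathbf{k}\cdot\mathbf{x} - \omega t)}
```

Both are configurations of the same underlying field; the duality is about which configuration you probe.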
The second misunderstanding: This isn’t about on-shell vs. off-shell.
To again be clear: I’m not arguing with Nima here.
Nima (and other people in our field) will sometimes talk about on-shell vs. off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.
Science is by definition empirical. We discover how the world works not by sitting and thinking, but by going out and observing the world. But sometimes, all the observing we can do can’t possibly answer a question. In those situations, we might need “non-empirical science”.
The blog Slate Star Codex had a series of posts on this topic recently. He hangs out with a crowd that supports the many-worlds interpretation of quantum mechanics: the idea that quantum events are not truly random, but instead that all outcomes happen, the universe metaphorically splitting into different possible worlds. These metaphorical universes can’t be observed, so no empirical test can tell the difference between this and other interpretations of quantum mechanics: if we could ever know the difference, it would have to be for “non-empirical” reasons.
What reasons are those? Slate Star Codex teases out a few possible intuitions. He points out that we reject theories that have “unnecessary” ideas. He imagines a world where chemists believe that mixing an acid and a base also causes a distant star to go supernova, and a creationist world where paleontologists believe fossils are placed by the devil. In both cases, there might be no observable difference between their theories and ours, but because their theories have “extra pieces” (the distant star, the devil), we reject them for non-empirical reasons. Slate Star Codex asks if this supports many-worlds: without the extra assumption that quantum events randomly choose one outcome, isn’t quantum mechanics simpler?
Ultimately, we trust science because it allows us to do things. If we understand the world, we can interact with it: we can build technology, design new experiments, and propose new theories. With this in mind, we can judge scientific theories by how well they help us do these things. A good scientific theory is one that gives us more power to interact with the world. It can do this by making correct predictions, but it can also do this by explaining things, making it easier for us to reason about them. Beyond empiricism, we can judge science by how well it teaches us.
This gives us an objection to the “supernova theory” of Slate Star Codex’s imagined chemists: it’s much more confusing to teach. To teach chemistry in that world you also have to teach the entire life cycle of stars, a subject that students won’t use in any other part of the course. The creationists’ “devil theory” of paleontology has the same problem: if their theory really makes the right predictions they’d have to teach students everything our paleontologists do: every era of geologic history, every theory of dinosaur evolution, plus an extra course in devil psychology. They end up with a mix that only makes it harder to understand the subject.
Many-worlds may seem simpler than other interpretations of quantum mechanics, but that doesn’t make it more useful, or easier to teach. You still need to teach students how to predict the results of experiments, and those results will still be random. If you teach them many-worlds, you need to bring in advanced topics much earlier on, like self-locating uncertainty and decoherence. You need a quite extensive set of ideas, many of which won’t be used again, to justify rules another interpretation could have introduced much more simply. This would be fine if those ideas made additional predictions, but they don’t: like every interpretation of quantum mechanics, you end up doing the same experiments and building the same technology in the end.
I’m not saying I know many-worlds is false, or that I know another interpretation is true. All I’m saying is that, when physicists criticize many-worlds, they’re not just blindly insisting on empiricism. They’re rejecting many-worlds, in part, because all it does is make their work harder. And that, more than elegance or simplicity, is how we judge theories.
On one hand, the practical benefits of a 53-qubit computer are pretty minimal. Scott discusses some applications: you can generate random numbers, distributed in a way that will let others verify that they are truly random, the kind of thing it’s occasionally handy to do in cryptography. Still, by itself this won’t change the world, and compared to the quantum computing hype I can understand if people find this underwhelming.
Ok, I’m actually just re-phrasing what I said before. The Extended Church-Turing Thesis proposes that a classical computer (more specifically, a probabilistic Turing machine) can efficiently simulate any reasonable computation. Falsifying it means finding something that a classical computer cannot compute efficiently but another sort of computer (say, a quantum computer) can. If the calculation Google did truly can’t be done efficiently on a classical computer (this is not proven, though experts seem to expect it to be true) then yes, that’s what Google claims to have done.
So we get back to the real question: should we be impressed by quantum supremacy?
Well, should we have been impressed by the Higgs?
The detection of the Higgs boson in 2012 hasn’t led to any new Higgs-based technology. No-one expected it to. It did teach us something about the world: that the Higgs boson exists, and that it has a particular mass. I think most people accept that that’s important: that it’s worth knowing how the world works on a fundamental level.
Google may have detected the first known violation of the Extended Church-Turing Thesis. This could eventually lead to some revolutionary technology. For now, though, it hasn’t. Instead, it teaches us something about the world.
It may not seem like it, at first. Unlike the Higgs boson, “Extended Church-Turing is false” isn’t a law of physics. Instead, it’s a fact about our capabilities. It’s a statement about the kinds of computers we can and cannot build, about the kinds of algorithms we can and cannot implement, the calculations we can and cannot do.
Facts about our capabilities are still facts about the world. They’re still worth knowing, for the same reasons that facts about the world are still worth knowing. They still give us a clearer picture of how the world works, which tells us in turn what we can and cannot do. According to the leaked paper, Google has taught us a new fact about the world, a deep fact about our capabilities. If that’s true we should be impressed, even without new technology.
Avengers: Endgame has been out for a while, so I don’t have to worry about spoilers right? Right?
Anyway, time travel. The spoiler is time travel. They bring back everyone who was eliminated in the previous movie, using time travel.
They also attempt to justify the time travel, using Ant Man-flavored quantum mechanics. This works about as plausibly as you’d expect for a superhero whose shrinking powers not only let him talk to ants, but also go to a “place” called “The Quantum Realm”. Along the way, they manage to throw in splintered references to a half-dozen almost-relevant scientific concepts. It’s the kind of thing that makes some physicists squirm.
And I enjoyed it.
Movies tend to treat time travel in one of two ways. The most reckless, and most common, lets characters rewrite history as they go, like Marty McFly almost erasing himself from existence in Back to the Future. This never makes much sense, and the characters in Avengers: Endgame make fun of it, listing a series of movies that do time travel this way (inexplicably including A Wrinkle in Time, which has no time travel at all).
In the other common model, time travel has to happen in self-consistent loops: you can’t change the past, but you can go back and be part of it. This is the model used, for example, in Harry Potter, where Potter is saved by a mysterious spell only to travel back in time and cast it himself. This at least makes logical sense; whether it’s possible physically is an open question.
Avengers: Endgame uses the model of self-consistent loops, but with a twist: if you don’t manage to make your loop self-consistent you instead spawn a parallel universe, doomed to suffer the consequences of your mistakes. This is a rarer setup, but not a unique one, though the only other example I can think of at the moment is Homestuck.
Is there any physics justification for the Avengers: Endgame model? Maybe not. But you can at least guess what they were thinking.
The key clue is a quote from Tony Stark, rattling off a stream of movie-grade scientific gibberish:
“Quantum fluctuation messes with the Planck scale, which then triggers the Deutsch Proposition. Can we agree on that?”
From this quote, one can guess not only what scientific results inspired the writers of Avengers: Endgame, but possibly also which Wikipedia entry. David Deutsch is a physicist, and an advocate for the many-worlds interpretation of quantum mechanics. In 1991 he wrote a paper discussing what happens to quantum mechanics in the environment of a wormhole. In it he pointed out that you can make a self-consistent time travel loop, not just in classical physics, but out of a quantum superposition. This offers a weird solution to the classic grandfather paradox of time travel: instead of causing a paradox, you can form a superposition. As Scott Aaronson explains here, “you’re born with probability 1/2, therefore you kill your grandfather with probability 1/2, therefore you’re born with probability 1/2, and so on—everything is consistent.” If you believe in the many-worlds interpretation of quantum mechanics, a time traveler in this picture is traveling between two different branches of the wave-function of the universe: you start out in the branch where you were born, kill your grandfather, and end up in the branch where you weren’t born. This isn’t exactly how Avengers: Endgame handles time travel, but it’s close enough that it seems like a likely explanation.
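Aaronson’s probability-1/2 resolution can be sketched in a few lines of code (my own toy illustration, not Deutsch’s actual construction, which uses density matrices rather than classical probabilities):

```python
# Toy model of Deutsch's consistency condition for the grandfather
# paradox. The time loop maps the probability p that the traveler is
# born to the probability of being born "next time around": if you're
# born, you kill your grandfather, so the loop sends p to 1 - p.
def loop(p):
    return 1.0 - p

# A consistent loop is a fixed point: p == loop(p). Relaxing toward it
# by averaging each pass with the previous one converges immediately.
p = 1.0  # start by assuming the traveler is definitely born
for _ in range(50):
    p = 0.5 * (p + loop(p))

print(p)  # → 0.5: born with probability 1/2, consistent with itself
```

The only self-consistent answer is the 50/50 mix, which is exactly the superposition in Aaronson’s account.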
David Deutsch’s argument uses a wormhole, but how do the Avengers make a wormhole in the first place? There we have less information, just vague references to quantum fluctuations at the Planck scale, the scale at which quantum gravity becomes important. There are a few things they could have had in mind, but one of them might have been physicists Leonard Susskind and Juan Maldacena’s conjecture that quantum entanglement is related to wormholes, a conjecture known as ER=EPR.
Long-time readers of the blog might remember I got annoyed a while back, when Caltech promoted ER=EPR using a different Disney franchise. The key difference here is that Avengers: Endgame isn’t pretending to be educational. Unlike Caltech’s ER=EPR piece, or even the movie Interstellar, Avengers: Endgame isn’t really about physics. It’s a superhero story, one that pairs the occasional scientific term with a character goofily bouncing around from childhood to old age while another character exclaims “you’re supposed to send him through time, not time through him!” The audience isn’t there to learn science, so they won’t come away with any incorrect assumptions.
A movie like Avengers: Endgame doesn’t teach science, or even advertise it. It does celebrate it, though.
That’s why, despite the silly half-correct science, I enjoyed Avengers: Endgame. It’s also why I don’t think it’s inappropriate, as some people do, to classify movies like Star Wars as science fiction. Star Wars and Avengers aren’t really about exploring the consequences of science or technology, they aren’t science fiction in that sense. But they do build off science’s role in the wider culture. They take our world and look at the advances on the horizon, robots and space travel and quantum speculations, and they let their optimism inform their storytelling. That’s not going to be scientifically accurate, and it doesn’t need to be, any more than the comic Abstruse Goose really believes Witten is from Mars. It’s about noticing we live in a scientific world, and having fun with it.
As a kid, I wanted to know everything. Eventually, I realized this was a little unrealistic. Doomed to know some things and not others, I picked physics as a kind of triage. Other fields I could learn as an outsider: not well enough to compete with the experts, but enough to at least appreciate what they were doing. After watching a few string theory documentaries, I realized this wasn’t the case for physics: if I was going to ever understand what those string theorists were up to, I would have to go to grad school in string theory.
Over time, this goal lost focus. I’ve become a very specialized creature, an “amplitudeologist”. I didn’t have time or energy for my old questions. In an irony that will surprise no-one, a career as a physicist doesn’t leave much time for curiosity about physics.
One of the great things about this blog is how you guys remind me of those old questions, bringing me out of my overspecialized comfort zone. In that spirit, in this post I’m going to list a few things in physics that I really want to understand better. The idea is to make a public commitment: within a year, I want to understand one of these topics at least well enough to write a decent blog post on it.
Wilsonian Quantum Field Theory:
When you first learn quantum field theory as a physicist, you learn how unsightly infinite results get covered up via an ad-hoc-looking process called renormalization. Eventually you learn a more modern perspective, that these infinite results show up because we’re ignorant of the complete theory at high energies. You learn that you can think of theories at a particular scale, and characterize them by what happens when you “zoom” in and out, in an approach codified by the physicist Kenneth Wilson.
While I understand the basics of Wilson’s approach, the courses I took in grad school skipped the deeper implications. This includes the idea of theories that are defined at all energies, “flowing” from an otherwise scale-invariant theory perturbed with extra pieces. Other physicists are much more comfortable thinking in these terms, and the topic is important for quite a few deep questions, including what it means to properly define a theory and where laws of nature “live”. If I’m going to have an informed opinion on any of those topics, I’ll need to go back and learn the Wilsonian approach properly.
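As a toy picture of that “flow” (my own numerical sketch, with made-up numbers, not tied to any particular theory), one can integrate a one-loop-style beta function and watch a coupling change as the energy scale is lowered:

```python
# Integrate a toy flow equation dg/d(ln mu) = -b * g**3 downward in
# energy. With b > 0 this mimics asymptotic freedom: the coupling is
# weak at high energies and grows as we "zoom out" to lower ones.
b = 0.1
g = 0.5              # coupling at the starting, high-energy scale
steps = 1000
dl = -10.0 / steps   # negative steps in ln(mu): flowing to lower energy

for _ in range(steps):
    g += -b * g**3 * dl  # forward-Euler step of the flow equation

# Exactly, 1/g**2 decreases by 2*b*10 = 2 over the flow, from 4 to 2,
# so g should end near 1/sqrt(2) ≈ 0.707.
print(round(g, 3))
```

The Wilsonian point is that the theory at each scale is characterized by couplings like g, and “zooming” just moves you along curves like this one.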
Wormholes and Energy Conditions:

If you’re a fan of science fiction, you probably know that wormholes are the most realistic option for faster-than-light travel, something that is at least allowed by the equations of general relativity. “Most realistic” isn’t the same as “realistic”, though. Opening a wormhole and keeping it stable requires some kind of “exotic matter”, and that matter needs to violate a set of restrictions, called “energy conditions”, that normal matter obeys. Some of these energy conditions are just conjectures, some we even know how to violate, while others are proven to hold for certain types of theories. Some energy conditions don’t rule out wormholes, but instead restrict their usefulness: you can have non-traversable wormholes (basically, two inescapable black holes that happen to meet in the middle), or traversable wormholes where the distance through the wormhole is always longer than the distance outside.
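To make one of these conditions concrete (the standard textbook form, not something from a specific wormhole theorem): the null energy condition demands that the stress-energy tensor be non-negative along every light-like direction,

```latex
T_{\mu\nu}\,k^{\mu}k^{\nu} \;\geq\; 0 \quad \text{for every null vector } k^{\mu},
```

and the “exotic matter” needed to hold a wormhole open has to violate exactly this kind of inequality somewhere.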
I’ve seen a few talks on this topic, but I’m still confused about the big picture: which conditions have been proven, what assumptions were needed, and what do they all imply? I haven’t found a publicly-accessible account that covers everything. I owe it to myself as a kid, not to mention everyone who’s a kid now, to get a satisfactory answer.
Quantum Foundations:

Quantum Foundations is a field that many physicists think is a waste of time. It deals with the questions that troubled Einstein and Bohr, questions about what quantum mechanics really means, or why the rules of quantum mechanics are the way they are. These tend to be quite philosophical questions, where it’s hard to tell if people are making progress or just arguing in circles.
I’m more optimistic about philosophy than most physicists, at least when it’s pursued with enough analytic rigor. I’d like to at least understand the leading arguments for different interpretations, what the constraints on interpretations are and the main loopholes. That way, if I end up concluding the field is a waste of time at least I’d be making an informed decision.
George Gamow was one of the “quantum kids” who got their start at the Niels Bohr Institute in the 30’s. He’s probably best known for the Alpher, Bethe, Gamow paper, which managed to combine one of the best sources of evidence we have for the Big Bang with a gratuitous Greek alphabet pun. He was the group jester in a lot of ways: the historians here have archives full of his cartoons and in-jokes.
Naturally, he also did science popularization.
I recently read two of Gamow’s science popularization books, “Mr Tompkins” and “Thirty Years That Shook Physics”. Reading them was a trip back in time, to when people thought about physics in surprisingly different ways.
“Mr. Tompkins” started as a series of articles in Discovery, a popular science magazine. They were published as a book in 1940, with a sequel in 1945 and an update in 1965. Apparently they were quite popular among a certain generation: the edition I’m reading has a foreword by Roger Penrose.
(As an aside: Gamow mentions that the editor of Discovery was C. P. Snow…that C. P. Snow?)
Mr Tompkins himself is a bank clerk who decides on a whim to go to a lecture on relativity. Unable to keep up, he falls asleep, and dreams of a world in which the speed of light is much slower than it is in our world. Bicyclists visibly redshift, and travelers lead much longer lives than those who stay at home. As the book goes on he meets the same professor again and again (eventually marrying his daughter) and sits through frequent lectures on physics, inevitably falling asleep and experiencing it first-hand: jungles where Planck’s constant is so large that tigers appear as probability clouds, micro-universes that expand and collapse in minutes, and electron societies kept strictly monogamous by “Father Paulini”.
The structure definitely feels dated, and not just because these days people don’t often go to physics lectures for fun. Gamow actually includes the full text of the lectures that send Mr Tompkins to sleep, and while they’re not quite boring enough to send the reader to sleep they are written on a higher level than the rest of the text, with more technical terms assumed. In the later additions to the book the “lecture” aspect grows: the last two chapters involve a dream of Dirac explaining antiparticles to a dolphin in basically the same way he would explain them to a human, and a discussion of mesons in a Japanese restaurant where the only fantastical element is a trio of geishas acting out pion exchange.
Some aspects of the physics will also feel strange to a modern audience. Gamow presents quantum mechanics in a way that I don’t think I’ve seen in a modern text: while modern treatments start with uncertainty and think of quantization as a consequence, Gamow starts with the idea that there is a minimum unit of action, and derives uncertainty from that. Some of the rest is simply limited by timing: quarks weren’t fully understood even by the 1965 printing, in 1945 they weren’t even a gleam in a theorist’s eye. Thus Tompkins’ professor says that protons and neutrons are really two states of the same particle and goes on to claim that “in my opinion, it is quite safe to bet your last dollar that the elementary particles of modern physics [electrons, protons/neutrons, and neutrinos] will live up to their name.” Neutrinos also have an amusing status: they hadn’t been detected when the earlier chapters were written, and they come across rather like some people write about dark matter today, as a silly theorist hypothesis that is all-too-conveniently impossible to observe.
“Thirty Years That Shook Physics”, published in 1966, is a more usual sort of popular science book, describing the history of the quantum revolution. While mostly focused on the scientific concepts, Gamow does spend some time on anecdotes about the people involved. If you’ve read much about the time period, you’ll probably recognize many of the anecdotes (for example, the “Pauli Effect”, whereby a theorist can break experimental equipment just by walking into the room, or Dirac’s “discovery” of purling); even the ones specific to Gamow have by now been spread far and wide.
Like Mr Tompkins, the level in this book is not particularly uniform. Gamow will spend a paragraph carefully defining an average, and then drop the word “electroscope” as if everyone should know what it is. The historical perspective taught me a few things I perhaps should have already known, but found surprising anyway. (The plum-pudding model was an actual mathematical model, and people calculated its consequences! Muons were originally thought to be mesons!)
Both books are filled with Gamow’s whimsical illustrations, something he was very much known for. Apparently he liked to imitate other art styles as well, which is visible in the portraits of physicists at the front of each chapter.
1966 was late enough that this book doesn’t have the complacency of the earlier chapters in Mr Tompkins: Gamow knew that there were more particles than just electrons, nucleons, and neutrinos. It was still early enough, though, that the new particles were not fully understood. It’s interesting seeing how Gamow reacts to this: his expectation was that physics was on the cusp of another massive change, a new theory built on new fundamental principles. He speculates that there might be a minimum length scale (although oddly enough he didn’t expect it to be related to gravity).
It’s only natural that someone who lived through the dawn of quantum mechanics should expect a similar revolution to follow. Instead, the revolution of the late 60’s and early 70’s was in our understanding: not new laws of nature so much as new comprehension of just how much quantum field theory can actually do. I wonder if the generation who lived through that later revolution left it with the reverse expectation: that the next crisis should be solved in a similar way, that the world is quantum field theory (or close cousins, like string theory) all the way down and our goal should be to understand the capabilities of these theories as well as possible.
The final section of the book is well worth waiting for. In 1932, Gamow directed Bohr’s students in staging a play, the “Blegdamsvej Faust”. A parody of Faust, it features Bohr as god, Pauli as Mephistopheles, and Ehrenfest as the “erring Faust” (Gamow’s pun, not mine) that he tempts to sin with the promise of the neutrino, Gretchen. The piece, translated to English by Gamow’s wife Barbara, is filled with in-jokes on topics as obscure as Bohr’s habitual mistakes when speaking German. It’s gloriously weird and well worth a read. If you’ve ever seen someone do a revival performance, let me know!
I’m lazy this Newtonmas, so instead of writing a post of my own I’m going to recommend a few other people who do excellent work.
Quantum Frontiers is a shared blog updated by researchers connected to Caltech’s Institute for Quantum Information and Matter. While the whole blog is good, I’m going to be more specific and recommend the posts by Nicole Yunger Halpern. Nicole is really a great writer, and her posts are full of vivid imagery and fun analogies. If she’s not as well-known, it’s only because she lacks the attention-grabbing habit of getting into stupid arguments with other bloggers. Definitely worth a follow.
Recommending Slate Star Codex feels a bit strange, because it seems like everyone I’ve met who would enjoy the blog already reads it. It’s not a physics blog by any stretch, so it’s also an unusual recommendation to give here. Slate Star Codex writes about a wide variety of topics, and while the author isn’t an expert in most of them he does a lot more research than you or I would. If you’re interested in up-to-date meta-analyses on psychology, social science, and policy, pored over by someone with scrupulous intellectual honesty and an inexplicably large amount of time to indulge it, then Slate Star Codex is the blog for you.
I mentioned Piled Higher and Deeper a few weeks back, when I reviewed the author’s popular science book We Have No Idea. Piled Higher and Deeper is a webcomic about life in grad school. Humor is all about exaggeration, and it’s true that Piled Higher and Deeper exaggerates just how miserable and dysfunctional grad school can be…but not by as much as you’d think. I recommend that anyone considering grad school read Piled Higher and Deeper, and take it seriously. Grad school can really be like that, and if you don’t think you can deal with spending five or six years in the world of that comic you should take that into account.
Maybe you’ve heard the buzzword, and you imagine science fiction become reality: teleporting people across the galaxy, or ansibles communicating faster than light. Maybe you’ve heard a bit more, and know that quantum teleportation can’t transfer information faster than light, that it hasn’t been used on something even as complicated as a molecule…and you’re still confused, because if so, why call it teleportation in the first place?
There’s a simple way to clear up this confusion. You just have to realize that classical teleportation is easy.
What do I mean by “classical teleportation”?
Let’s start with the simplest teleporter you could imagine. It scans you on one end, then vaporizes you, and sends your information to a teleportation pad on the other end. The other end uses that information to build a copy of your body from some appropriate raw materials, and there you are!
(If the machine doesn’t vaporize you, then you end up with an army of resurrected Derek Parfits.)
Doing this with a person is, of course, absurdly difficult, and well beyond the reach of current technology.
And no, nothing about the Star Trek version changes that.
Do it with a document, though, and you’ve essentially invented the fax machine.
Yes, faxes don’t copy a piece of paper atom by atom, but they don’t need to: they just send what’s written on it. This sort of “classical teleportation” is commonplace. Trade Pokémon, and your Pikachu gets “classically teleported” from one device to another. Send an email, and your laptop teleports it to someone else. The ability to “classically teleport”, to take the “important information” about something and copy it somewhere else, is essential for computers to function.
Note that under this definition, “classical teleportation” is not faster than light. You still need to send a signal between a “scanner” and a “printer”, and that’s only as fast as your signal normally is. Note also that the “printer” needs some “ink”: you still need the right materials to build or record whatever is being teleported over.
So suppose you’re building a quantum computer, one that uses the unique properties of quantum mechanics. Naturally, you want to be able to take a quantum state and copy it somewhere else. You need “quantum teleportation”. And the first thing you realize is that it’s harder than it looks.
The problem comes when you try to “scan” your quantum state. You might have heard quantum states described as “inherently uncertain” or “inherently indeterminate”. For this post, a better way to think about them is “inherently unknown”. For any quantum state, there is something you can’t know about its behavior. You can’t know which slit the next electron will go through, you can’t know whether Schrödinger’s cat is alive or dead. If you did, the state wouldn’t be quantum: no matter how clever your setup, there is no way to discover which slit the electron will go through without destroying the quantum diffraction pattern.
This means that if you try to just “classically teleport” a quantum state, you lose the very properties you care about. To “scan” your state, you have to figure out everything important about it. The only way to do that, for an arbitrary state on your teleportation pad, is to observe its behavior. If you do that, though, you’ll end up knowing too much: a state whose behavior you know is not a quantum state, and it won’t do what you want it to on the other end. You’ve tried to “clone” it, and there’s a theorem, the no-cloning theorem, proving you can’t.
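For the curious, the proof of that theorem is short enough to sketch. This is the standard linearity argument from textbooks, not something from the post itself. Suppose a single machine $U$ could copy any state fed into it, so $U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle$ for every $|\psi\rangle$:

```latex
\begin{align}
  U\big(|0\rangle \otimes |0\rangle\big) &= |0\rangle \otimes |0\rangle,
  \qquad
  U\big(|1\rangle \otimes |0\rangle\big) = |1\rangle \otimes |1\rangle.
  \\
  \intertext{Linearity of $U$ then fixes its action on a superposition:}
  U\Big(\tfrac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes |0\rangle\Big)
  &= \tfrac{|0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle}{\sqrt{2}},
  \\
  \intertext{but cloning that same superposition would instead require}
  U\Big(\tfrac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes |0\rangle\Big)
  &= \tfrac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes \tfrac{|0\rangle + |1\rangle}{\sqrt{2}}.
\end{align}
```

The two right-hand sides differ, so no such $U$ can exist: linearity and cloning are incompatible.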
(Note that this description should make sense even if you believe in a “hidden variable” interpretation of quantum mechanics. Those hidden variables have to be “non-local”: they aren’t close enough for your “scanner” to measure them.)
Since you can’t “classically teleport” your quantum state, you have to do something more subtle. That’s where “quantum teleportation” comes in. Quantum teleportation uses “entanglement”, long-distance correlations between quantum states. With a pair of entangled particles, you can sneak around the “scanning” step: by manipulating the state you want to send together with the entangled particle on your end, you compute instructions that let someone use the other entangled particle to rebuild the “teleported” state.
Those instructions still have to be transferred normally, so once again quantum teleportation isn’t faster than light. You still need the right kind of quantum state at your target: your “printer” still needs ink. What you get, though, is a way to transport the “inherently unknown” behavior of a quantum state without scanning it and destroying the “mystery”. Quantum teleportation isn’t easier than classical teleportation, it’s harder. What’s exciting is that it’s possible at all.
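If you want to see the nuts and bolts, the standard textbook protocol is small enough to simulate on a classical computer (simulating it classically is fine; it’s only *scanning an unknown state* that’s forbidden). Here’s a sketch in Python with numpy; the function names `teleport`, `measure`, and `apply` are my own for this illustration, not any library’s API:

```python
import numpy as np

# Single-qubit basis states and gates (standard textbook definitions)
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply(op, state, qubit):
    """Apply a single-qubit operator to one qubit of a 3-qubit state vector."""
    ops = [I2, I2, I2]
    ops[qubit] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2]) @ state

def measure(state, qubit, rng):
    """Measure one qubit in the computational basis; return (outcome, collapsed state)."""
    projectors = [np.outer(zero, zero), np.outer(one, one)]
    probs = np.array([np.real(state.conj() @ apply(P, state, qubit))
                      for P in projectors]).clip(0)
    m = rng.choice(2, p=probs / probs.sum())
    collapsed = apply(projectors[m], state, qubit)
    return m, collapsed / np.linalg.norm(collapsed)

def teleport(psi, seed=0):
    """Teleport the single-qubit state psi using one shared Bell pair.

    Qubit order: [Alice's input, Alice's half of the pair, Bob's half].
    Returns Bob's final qubit and the two classical bits Alice must send.
    """
    rng = np.random.default_rng(seed)
    bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
    state = np.kron(psi, bell)

    # Alice's Bell measurement: CNOT (qubit 0 controls qubit 1), then H on qubit 0
    P0, P1 = np.outer(zero, zero), np.outer(one, one)
    cnot01 = np.kron(P0, np.kron(I2, I2)) + np.kron(P1, np.kron(X, I2))
    state = apply(H, cnot01 @ state, 0)
    m0, state = measure(state, 0, rng)
    m1, state = measure(state, 1, rng)

    # The two bits (m0, m1) travel classically -- no faster-than-light signal.
    if m1:
        state = apply(X, state, 2)
    if m0:
        state = apply(Z, state, 2)

    # After the measurements the state factorizes; read off Bob's qubit.
    bob = state.reshape(2, 2, 2)[m0, m1, :]
    return bob, (m0, m1)
```

Note that Alice’s state is destroyed by her measurement (no cloning), Bob’s qubit matches the input exactly once he applies the corrections, and nothing works until the two classical bits arrive.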
On an unrelated topic, KKLT have fired back at their critics with an impressive salvo of papers. (See also this one from the same day.) I don’t have the time or expertise to write a good post about this at the moment, so I’m hoping someone else does!