
Traveling This Week

I’m traveling this week, so this will just be a short post. This isn’t a scientific trip exactly: I’m in Poland, at an event connected to the 550th anniversary of the birth of Copernicus.


Part of this event involved visiting the Copernicus Science Center, the local children’s science museum. The place was sold out completely. For any tired science communicators, I recommend going to a sold-out science museum: the sheer enthusiasm you’ll find there is balm for the most jaded soul.

Whatever Happened to the Nonsense Merchants?

I was recently reminded that Michio Kaku exists.

In the past, Michio Kaku made important contributions to string theory, but he’s best known for what could charitably be called science popularization. He’s an excited promoter of physics and technology, but that excitement often strays into inaccuracy. Pretty much every time I’ve heard him mentioned, it’s for some wildly overenthusiastic statement about physics: not merely simplified for a general audience, but flat-out wrong, conflating a bunch of different developments in a way that makes no actual sense.

Michio Kaku isn’t unique in this. There’s a whole industry in making nonsense statements about science, overenthusiastic books and videos hinting at science fiction or mysticism. Deepak Chopra is a famous figure from deeper on this spectrum, known for peddling loosely quantum-flavored spirituality.

There was a time I was worried about this kind of thing. Super-popular misinformation is the bogeyman of the science popularizer, the worry that for every nice, careful explanation we give, someone else will give a hundred explanations that are way more exciting and total baloney. Somehow, though, I hear less and less from these people over time, and thus worry less and less about them.

Should I be worried more? I’m not sure.

Are these people less popular than they used to be? Is that why I’m hearing less about them? Possibly, but I’d guess not. Michio Kaku has eight hundred thousand Twitter followers. Deepak Chopra has three million. On the other hand, the usually-careful Brian Greene has a million followers, and Neil deGrasse Tyson, about whom the worst I’ve heard is that he can be superficial, has fourteen million.

(But then in practice, I’m more likely to reflect on content with even smaller audiences.)

If misinformation is this popular, shouldn’t I be doing more to combat it?

Popular misinformation also attracts plenty of critics. For every big-time nonsense merchant, there are dozens of people breaking down and debunking every false statement they make, every piece of hype they release. Often, these critics end up saying the same kinds of things over and over again.

If I can be useful, I don’t think it will be by saying the same thing over and over again. I come up with new metaphors, new descriptions, new explanations. I clarify things others haven’t clarified, I clear up misinformation others haven’t addressed. That feels more useful to me, especially in a world where others are already countering the big problems. I write, and writing lasts, and can be used again and again when needed. I don’t need to keep up with the Kakus and Chopras of the world to do that.

(Which doesn’t imply I’ll never address anything one of those people says…but if I do, it will be because I have something new to say back!)

Why Are Universities So International?

Worldwide, only about one in thirty people live in a different country from where they were born. Wander onto a university campus, though, and you may get a different impression. The bigger the university and the stronger its research, the more international its employees become. You’ll see international PhD students, international professors, and especially international temporary researchers like postdocs.

I’ve met quite a few people who are surprised by this. I hear the same question again and again, from curious Danes at outreach events to a tired border guard in the pre-clearance area of the Toronto airport: why are you, an American, working here?

It’s not, on the face of it, an unreasonable question. Moving internationally is hard and expensive. You may have to take your possessions across the ocean, learn new languages and customs, and navigate an unfamiliar bureaucracy. You begin as a temporary resident, not a citizen, with all the risks and uncertainty that involves. Given a choice, most people choose to stay close to home. Countries sometimes back up this choice with additional incentives. There are laws in many places that demand that, given a choice, companies hire a local instead of a foreigner. In some places these laws apply to universities as well. With all that weight, why do so many researchers move abroad?

Two different forces stir the pot, making universities international: specialization, and diversification.

We researchers may find it easier to live close to the people we grew up with, but we work better near people who share our research interests. Science, and scholarship more generally, are often collaborative: we need to discuss with and learn from others to make progress. That’s still very hard to do remotely: it requires serendipity, chance encounters in the corridor and chats at the lunch table. As researchers in general have become more specialized, we’ve gotten to the point where not just any university will do: the people who do our kind of work are few enough that we often have to go to other countries to find them.

Specialization alone would tend to lead to extreme clustering, with researchers in each area gathering in only a few places. Universities push back against this, though. Universities want to maximize the chance that one of their researchers makes a major breakthrough, so they don’t want to hire someone whose work just duplicates that of someone they already have. They want to encourage interdisciplinary collaboration, to try to get people in different areas to talk to each other. Finally, they want to offer a wide range of possible courses, to give the students (many of whom are still local) a chance to succeed at many different things. As a result, universities try to diversify their faculty, hiring people from areas that, while not too far for meaningful collaboration, are distinct from what their current employees are doing.

The result is a constant international churn. We search for jobs in a particular sweet spot: with people close enough to spur good discussion, but far enough to not overspecialize. That search takes us all over the world, and all but guarantees we won’t find a job where we were trained, let alone where we were born. It makes universities quite international places, with a core of local people augmented by opportune choices from around the world. It makes us, and the way we lead our lives, quite unusual on a global scale. But it keeps the science fresh, and the ideas moving.

AI Is the Wrong Sci-Fi Metaphor

Over the last year, some people felt like they were living in a science fiction novel. Last November, the research laboratory OpenAI released ChatGPT, a program that can answer questions on a wide variety of topics. Last month, they announced GPT-4, a more powerful version of ChatGPT’s underlying program. Already in February, Microsoft used GPT-4 to add a chatbot feature to its search engine Bing, which journalists quickly managed to use to spin tales of murder and mayhem.

For those who have been following these developments, things don’t feel quite so sudden. Already in 2019, AI Dungeon showed off how an early version of GPT could be used to mimic an old-school text-adventure game, and a tumblr blogger built a bot that imitates his posts as a fun side project. Still, the newer programs have shown some impressive capabilities.

Are we close to “real AI”, to artificial minds like the positronic brains in Isaac Asimov’s I, Robot? I can’t say, in part because I’m not sure what “real AI” really means. But if you want to understand where things like ChatGPT come from, how they work and why they can do what they do, then all the talk of AI won’t be helpful. Instead, you need to think of an entirely different set of Asimov novels: the Foundation series.

While Asimov’s more famous I, Robot focused on the science of artificial minds, the Foundation series is based on a different fictional science, the science of psychohistory. In the stories, psychohistory is a kind of futuristic social science. In the real world, historians and sociologists can find general principles of how people act, but don’t yet have the kind of predictive theories physicists or chemists do. Foundation imagines a future where powerful statistical methods have allowed psychohistorians to precisely predict human behavior: not yet that of individual people, but at least the average behavior of civilizations. They can not only guess when an empire is soon to fall, but calculate how long it will be before another empire rises, something few responsible social scientists would pretend to do today.

GPT and similar programs aren’t built to predict the course of history, but they do predict something: given part of a text, they try to predict the rest. They’re called Large Language Models, or LLMs for short. They’re “models” in the sense of mathematical models, formulas that let us use data to make predictions about the world, and the part of the world they model is our use of language.

Normally, a mathematical model is designed based on how we think the real world works. A mathematical model of a pandemic, for example, might use a list of people, each one labeled as infected or not. It could include an unknown number, called a parameter, for the chance that one person infects another. That parameter would then be filled in, or fixed, based on observations of the pandemic in the real world.
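To make that concrete, here’s a minimal sketch in Python (a toy of my own, not drawn from any real epidemiology package, with numbers invented purely for illustration): the structure of the model comes from an assumption about how the world works, and the data only pins down the one unknown number.

```python
# A toy version of the pandemic example: assume each risky contact passes on
# infection with some probability p, then "fix" p from observed data.
# All numbers here are invented for illustration.
contacts = 10_000         # hypothetical observed risky contacts
new_infections = 2_300    # hypothetical infections among those contacts

p = new_infections / contacts   # the fitted parameter
print(f"estimated infection probability per contact: {p:.2f}")

# With p fixed, the model can make predictions about new situations:
print(f"expected infections from 5,000 further contacts: {p * 5_000:.0f}")
```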

LLMs (as well as most of the rest of what people call “AI” these days) are a bit different. Their models aren’t based on what we expect about the real world. Instead, they’re in some sense “generic”, models that could in principle describe just about anything. In order to make this work, they have a lot more parameters, tons and tons of flexible numbers that can get fixed in different ways based on data.

(If that part makes you a bit uncomfortable, it bothers me too, though I’ve mostly made my peace with it.)
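For contrast, here’s the “generic” approach in miniature (a hypothetical toy of my own, many orders of magnitude simpler than a real LLM): the parameters are nothing but counts of which character follows which, fixed entirely by a pile of text, and prediction means sampling a plausible continuation.

```python
# A miniature "language model": generic parameters (one count per pair of
# characters), fixed purely from data, then used to continue a prompt.
import random
from collections import defaultdict

corpus = "the cat sat on the mat. the dog sat on the log. "

# "Training": fix the parameters (here, simple counts) from the data.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def predict_next(char):
    """Sample a plausible next character, weighted by how often it followed char."""
    options = counts[char]
    chars, weights = zip(*options.items())
    return random.choices(chars, weights=weights)[0]

# "Prediction": continue a prompt with one plausible (not unique!) completion.
text = "the "
for _ in range(20):
    text += predict_next(text[-1])
print(text)
```

Nothing in the model’s structure knows anything about English; everything it “knows” lives in the fitted numbers. Real LLMs replace these simple counts with billions of parameters and far longer contexts, but the predict-the-next-piece-of-text logic is the same in spirit.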

The surprising thing is that this works, and works remarkably well. Just as psychohistory from the Foundation novels can predict events with much more detail than today’s historians and sociologists, LLMs can predict what a text will look like much more precisely than today’s literature professors. That isn’t necessarily because LLMs are “intelligent”, or because they’re “copying” things people have written. It’s because they’re mathematical models, built by statistically analyzing a giant pile of texts.

Just as Asimov’s psychohistory can’t predict the behavior of individual people, LLMs can’t predict the behavior of individual texts. If you start writing something, you shouldn’t expect an LLM to predict exactly how you would finish. Instead, LLMs predict what, on average, the rest of the text would look like. They give a plausible answer, one of many, for what might come next.

They can’t do that perfectly, but doing it imperfectly is enough to do quite a lot. It’s why they can be used to make chatbots, by predicting how someone might plausibly respond in a conversation. It’s why they can write fiction, or ads, or college essays, by predicting a plausible response to a book jacket or ad copy or essay prompt.

LLMs like GPT were invented by computer scientists, not social scientists or literature professors. Because of that, they get described as part of progress towards artificial intelligence, not as progress in social science. But if you want to understand what ChatGPT is right now, and how it works, then that perspective won’t be helpful. You need to put down your copy of I, Robot and pick up Foundation. You’ll still be impressed, but you’ll have a clearer idea of what could come next.

The Temptation of Spinoffs

Read an argument for a big scientific project, and you’ll inevitably hear mention of spinoffs. Whether it’s NASA bringing up velcro or CERN and the World Wide Web, scientists love to bring up times when a project led to some unrelated technology that improved people’s lives.

Just as inevitably as they show up, though, these arguments face criticism. Advocates of the projects argue that promoting spinoffs misses the point, training the public to think about science in terms of unrelated near-term gadgets rather than the actual point of the experiments. They think promoters should focus on the scientific end-goals, justifying them either in terms of benefit to humanity or as a broader, “it makes the country worth defending” human goal. It’s a perspective that shows up in education too, where even when students ask “when will I ever use this in real life?” it’s not clear that’s really what they mean.

On the other side, opponents of the projects will point out that the spinoffs aren’t good enough to justify the science. Some, like velcro, weren’t actually spinoffs to begin with. Others seem like tiny benefits compared to the vast cost of the scientific projects, or like things that would have been much easier to get with funding that was actually dedicated to achieving the spinoff.

With all these downsides, why do people keep bringing spinoffs up? Are they just a cynical attempt to confuse people?

I think there’s something less cynical going on here. Things make a bit more sense when you listen to what scientists say, not to the public, but to colleagues in other disciplines.

Scientists speaking to fellow scientists still mention spinoffs, but they mention scientific spinoffs. The speaker in a talk I saw recently pointed out that the LHC doesn’t just help with particle physics: by exploring collisions of high-energy atomic nuclei, it provides essential information for astrophysicists trying to understand neutron stars and for cosmologists studying the early universe. When these experiments probe situations we can’t model well, they improve the approximations we use to describe those situations in other contexts. Knowing more helps us know more: knowledge builds on knowledge, and the more we know about the world the more we can do, often in surprising and unplanned ways.

I think that when scientists promote spinoffs to the public, they’re trying to convey this same logic. Like promoting an improved understanding of stars to astrophysicists, they’re modeling the public as “consumer goods scientists” and trying to pick out applications they’d find interesting.

Knowing more does help us know more, that much is true. And eventually that knowledge can translate to improving people’s lives. But in a public debate, people aren’t looking for these kinds of principles, let alone a scientific “I’ll scratch your back if you’ll scratch mine”. They’re looking for something like a cost-benefit analysis, “why are we doing this when we could do that?”

(This is not to say that most public debates involve especially good cost-benefit analysis. Just that it is, in the end, what people are trying to do.)

Simply listing spinoffs doesn’t really get at this. The spinoffs tend to be either small enough that they don’t really make the case (velcro, even if NASA had invented it, could probably have been found more cheaply without a space program), or big but extremely unpredictable (it’s not like we’re going to invent another World Wide Web).

Focusing on the actual end-products of the science should do a bit better. That can include “scientific spinoffs”, if not the “consumer goods spinoffs”. Those collisions of heavy nuclei change our understanding of how we model complex systems. That has applications in many areas of science, from how we model stars to materials to populations, and those applications in turn could radically improve people’s lives.

Or, well, they could not. Basic science is very hard to do cost-benefit analyses with. It’s the fabled explore/exploit dilemma, whether to keep trying to learn more or focus on building on what you have. If you don’t know what’s out there, if you don’t know what you don’t know, then you can’t really solve that dilemma.

So I get the temptation of reaching for spinoffs, of pointing to something concrete in everyday life and saying “science did that!” Science does radically improve people’s lives, but it doesn’t always do it especially quickly. You want to teach people that knowledge leads to knowledge, and you try to communicate that the way you would to other scientists, by saying how your knowledge and theirs intersect. But if you want to justify science to the public, you want something with at least the flavor of a cost-benefit analysis. And you’ll get more mileage out of that by thinking about where the science itself can go than by focusing on the consumer goods it accidentally spins off along the way.

Why Dark Matter Feels Like Cheating (And Why It Isn’t)

I’ve never met someone who believed the Earth was flat. I’ve met a few who believed it was six thousand years old, but not many. Occasionally, I run into crackpots who rail against relativity or quantum mechanics, or more recent discoveries like quarks or the Higgs. But for one conclusion of modern physics, the doubters are common. For this one idea, the average person may not insist that the physicists are wrong, but they’ll usually roll their eyes a little bit, ask the occasional “really?”

That idea is dark matter.

For the average person, dark matter doesn’t sound like normal, responsible science. It sounds like cheating. Scientists try to explain the universe, using stars and planets and gravity, and eventually they notice the equations don’t work, so they just introduce some new matter nobody can detect. It’s as if a budget didn’t add up, so the accountant just introduced some “dark expenses” to hide the problem.

Part of what’s going on here is that fundamental physics, unlike other fields, doesn’t have to reduce to something else. An accountant has to explain the world in terms of transfers of money, a chemist in terms of atoms and molecules. A physicist has to explain the world in terms of math, with no more restrictions than that. Whatever the “base level” of another field is, physics can, and must, go deeper.

But that doesn’t explain everything. Physics may have to explain things in terms of math, but we shouldn’t just invent new math whenever we feel like it. Surely, we should prefer explanations in terms of things we know to explanations in terms of things we don’t know. The question then becomes, what justifies the preference? And when do we get to break it?

Imagine you’re camping in your backyard. You’ve brought a pack of jumbo marshmallows. You wake up to find a hole torn in the bag, a few marshmallows strewn on a trail into the bushes, the rest gone. You’re tempted to imagine a new species of ant, with enormous jaws capable of ripping open plastic and hauling the marshmallows away. Then you remember your brother likes marshmallows, and it’s probably his fault.

Now imagine instead you’re camping in the Amazon rainforest. Suddenly, the ant explanation makes sense. You may not have a particular species of ants in mind, but you know the rainforest is full of new species no-one has yet discovered. And you’re pretty sure your brother couldn’t have flown to your campsite in the middle of the night and stolen your marshmallows.

We do have a preference against introducing new types of “stuff”, like new species of ants or new particles. We have that preference because these new types of stuff are unlikely, based on our current knowledge. We don’t expect new species of ants in our backyards, because we think we have a pretty good idea of what kinds of ants exist, and we think a marshmallow-stealing brother is more likely. That preference gets dropped, however, based on the strength of the evidence. If it’s very unlikely our brother stole the marshmallows, and if we’re somewhere our knowledge of ants is weak, then the marshmallow-stealing ants are more likely.
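To put that reasoning in more explicit terms, here’s a toy Bayesian version of the marshmallow story (a sketch of my own; every number in it is invented purely for illustration):

```python
# Toy Bayesian version of the marshmallow story. All numbers are made up;
# the point is only that strong enough evidence can overcome a strong
# prior against "new stuff".
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds of 'unknown ants' versus 'brother' after seeing the evidence."""
    return prior_odds * likelihood_ratio

# Backyard: a new ant species is very unlikely a priori, the brother is close at hand.
print(posterior_odds(prior_odds=1e-6, likelihood_ratio=10))    # ~1e-5: blame the brother

# Amazon: unknown ant species are common, and the brother is an ocean away.
print(posterior_odds(prior_odds=1e-2, likelihood_ratio=1e6))   # ~1e4: blame the ants
```

The preference against new stuff doesn’t vanish; it just gets outweighed when the familiar explanation becomes unlikely enough.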

Dark matter is a massive leap. It’s not a massive leap because we can’t see it, but simply because it involves new particles, particles not in our Standard Model of particle physics. (Or, for the MOND-ish fans, new fields not present in Einstein’s theory of general relativity.) It’s hard to justify physics beyond the Standard Model, and our standards for justifying it are in general very high: we need very precise experiments to conclude that the Standard Model is well and truly broken.

For dark matter, we keep those standards. The evidence for some kind of dark matter, for something that can’t be explained by just the Standard Model and Einstein’s gravity, is at this point very strong. Far from a vague force that appears everywhere, we can map dark matter’s location, and systematically describe its effects on everything from the motion of galaxies, to clusters of galaxies, to the early history of the universe. We’ve checked whether there’s something we’ve left out, whether black holes or unseen planets might account for it, and they can’t. It’s still possible we’ve missed something, just like it’s possible your brother flew to the Amazon to steal your marshmallows, but it’s less likely than the alternatives.

Also, much like ants in the rainforest, we don’t know every type of particle. We know there are things we’re missing: new types of neutrinos, or new particles to explain quantum gravity. These don’t have to have anything to do with dark matter, they might be totally unrelated. But they do show that we should expect, sometimes, to run into particles we don’t already know about. We shouldn’t expect that we already know all the particles.

If physicists did what the cartoons suggest, it really would be cheating. If we proposed dark matter because our equations didn’t match up, and stopped checking, we’d be no better than an accountant adding “dark money” to a budget. But we didn’t do that. When we argue that dark matter exists, it’s because we’ve actually tried to put together the evidence, because we’ve weighed it against the preference to stick with the Standard Model and found the evidence tips the scales. The instinct to call it cheating is a good instinct, one you should cultivate. But here, it’s an instinct physicists have already taken into account.

LHC Black Holes for the Terminally Un-Reassured

Could the LHC have killed us all?

No, no it could not.

But…

I’ve had this conversation a few times over the years. Usually, the people I’m talking to are worried about black holes. They’ve heard that the Large Hadron Collider speeds up particles to amazingly high energies before colliding them together. They worry that these colliding particles could form a black hole, which would fall into the center of the Earth and busily gobble up the whole planet.

This pretty clearly hasn’t happened. But also, physicists were pretty confident that it couldn’t happen. That isn’t to say they thought it was impossible to make a black hole with the LHC. Some physicists actually hoped to make a black hole: it would have been evidence for extra dimensions, curled-up dimensions much larger than the tiny ones required by string theory. They figured out the kind of evidence they’d see if the LHC did indeed create a black hole, and we haven’t seen that evidence. But even before running the machine, they were confident that such a black hole wouldn’t gobble up the planet. Why?

The best argument is also the most unsatisfying. The LHC speeds up particles to high energies, but not unprecedentedly high energies. High-energy particles called cosmic rays enter the atmosphere every day, some at energies comparable to those of the LHC, and some far higher. The LHC just puts high-energy particles in front of a bunch of sophisticated equipment so we can measure everything about them. If the LHC could destroy the world, cosmic rays would already have done so.

That’s a very solid argument, but it doesn’t really explain why we’re safe. Also, it may not hold for future colliders: we could eventually build a machine that reaches energies cosmic rays only rarely match. So I should give another argument.

The next argument is Hawking radiation. In his most famous accomplishment, Stephen Hawking argued that, because of quantum mechanics, black holes are not truly black. Instead, they constantly give off radiation, every type of particle mixed together, and shrink as they do so. The radiation is faintest for large black holes, but gets more and more intense the smaller the black hole is, until the smallest black holes explode into a shower of particles and disappear. This argument means that a black hole small enough for the LHC to produce would radiate away to nothing in almost an instant: not long enough to leave the machine, let alone fall to the center of the Earth.
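To put a number on “almost an instant”, here’s a rough estimate (a sketch of my own, using the textbook lifetime formula for an evaporating black hole that emits only photons, and the 14 TeV of collision energy worked out below; emission of other particle species would only make it faster):

```python
# Rough lifetime of an LHC-scale black hole under Hawking evaporation,
# using the standard photons-only formula t = 5120*pi*G^2*M^3 / (hbar*c^4).
import math

G = 6.67e-11       # Newton's constant, m^3 / (kg s^2)
c = 3.0e8          # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

M = 14e12 * 1.78e-36   # 14 TeV of collision energy as a mass, ~2.5e-23 kg
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
print(f"evaporation time: {t_evap:.1g} seconds")   # ~1e-84 s
```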

This is a good argument, but maybe you aren’t as sure as I am about Hawking radiation. As it turns out, we’ve never measured Hawking radiation; it’s just a theoretical expectation. Remember that the radiation gets fainter the larger the black hole is: for a black hole in space with the mass of a star, the radiation is so faint it would be almost impossible to detect even right next to the black hole. From here, with our telescopes, we have no chance of seeing it.

So suppose tiny black holes didn’t radiate, and suppose the LHC could indeed produce them. Wouldn’t that have been dangerous?

Here, we can do a calculation. I want you to appreciate how tiny these black holes would be.

From science fiction and cartoons, you might think of a black hole as a kind of vacuum cleaner, sucking up everything nearby. That’s not how black holes work, though. The “sucking” black holes do is just gravity, no stronger than the gravity of any other object with the same mass at the same distance. The only difference comes when you get close to the event horizon, an invisible sphere surrounding the black hole. Cross that line, and the gravity is strong enough that you will never escape.

We know how to calculate the position of the event horizon of a black hole. It’s the Schwarzschild radius, and we can write it in terms of Newton’s constant G, the mass of the black hole M, and the speed of light c, as follows:

\frac{2GM}{c^2}

The Large Hadron Collider’s two beams each have an energy around seven tera-electron-volts, or TeV, so there are 14 TeV of energy in total in each collision. Imagine all of that energy being converted into mass, and that mass forming a black hole. That isn’t how it would actually happen: some of the energy would create other particles, and some would give the black hole a “kick”, some momentum in one direction or another. But we’re going to imagine a “worst-case” scenario, so let’s assume all the energy goes to form the black hole. Electron-volts are a weird physicist unit, but if we divide them by the speed of light squared (as we should if we’re using E=mc^2 to create a mass), then Wikipedia tells us that each electron-volt will give us 1.78\times 10^{-36} kilograms. “Tera” is the SI prefix for 10^{12}. Thus our tiny black hole starts with a mass of

14\times 10^{12}\times 1.78\times 10^{-36} = 2.49\times 10^{-23} \textrm{kg}

Plugging in Newton’s constant (6.67\times 10^{-11} meters cubed per kilogram per second squared) and the speed of light (3\times 10^8 meters per second), we get a radius of,

\frac{2\times 6.67\times 10^{-11}\times 14\times 10^{12}\times 1.78\times 10^{-36}}{\left(3\times 10^8\right)^2} = 3.7\times 10^{-50} \textrm{m}

That, by the way, is amazingly tiny. The size of an atom is about 10^{-10} meters. If every atom was a tiny person, and each of that person’s atoms was itself a person, and so on for five levels down, then the atoms of the smallest person would be the same size as this event horizon.
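If you’d like to follow along, here’s the arithmetic so far as a short script (a sketch using the same rounded constants as the text, so the outputs match the numbers above):

```python
# Reproducing the numbers above with the same rounded constants as the text.
G = 6.67e-11          # Newton's constant, m^3 / (kg s^2)
c = 3.0e8             # speed of light, m/s
eV_to_kg = 1.78e-36   # kilograms per electron-volt (after dividing by c^2)

E_collision_eV = 14e12            # 14 TeV per collision, in electron-volts
M0 = E_collision_eV * eV_to_kg    # worst case: all the energy becomes the black hole
r_s = 2 * G * M0 / c**2           # Schwarzschild radius

print(f"mass: {M0:.3g} kg")                  # ~2.49e-23 kg
print(f"event horizon radius: {r_s:.3g} m")  # ~3.7e-50 m
```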

Now, we let this little tiny black hole fall. Let’s imagine it falls directly towards the center of the Earth. The only force affecting it would be gravity (if it had an electrical charge, it would quickly attract a few electrons and become neutral). That means you can think of it as if it were falling through a tiny hole, with no friction, gobbling up anything unfortunate enough to fall within its event horizon.

For our first estimate, we’ll treat the black hole as if it stays the same size through its journey. Imagine the black hole travels through the entire Earth, absorbing a cylinder of matter. Using the Earth’s average density of 5515 kilograms per cubic meter, and the Earth’s equatorial radius of 6378 kilometers, our cylinder adds a mass of,

\pi \times \left(3.7\times 10^{-50}\right)^2 \times 2 \times 6378\times 10^3\times 5515 = 3\times 10^{-88} \textrm{kg}

That’s absurdly tiny. That’s much, much, much tinier than the mass we started out with. Absorbing an entire cylinder through the Earth makes barely any difference.

You might object, though, that the black hole is gaining mass as it goes. So really we ought to use a differential equation. If the black hole travels a distance r, absorbing mass as it goes at average Earth density \rho, then we find,

\frac{dM}{dr}=\pi\rho\left(\frac{2GM(r)}{c^2}\right)^2

Solving this, we get

M(r)=\frac{M_0}{1- M_0 \pi\rho\left(\frac{2G}{c^2}\right)^2 r }

Where M_0 is the mass we start out with.
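If you don’t want to take the integration on faith, you can check it symbolically (a sketch using sympy, with k standing in for the constant \pi\rho\left(\frac{2G}{c^2}\right)^2):

```python
# Symbolic check of the solution to dM/dr = k * M^2 with M(0) = M_0,
# where k stands in for pi * rho * (2G/c^2)^2.
import sympy as sp

r = sp.symbols('r', positive=True)
M0, k = sp.symbols('M_0 k', positive=True)
M = sp.Function('M')

solution = sp.dsolve(sp.Eq(M(r).diff(r), k * M(r)**2), M(r), ics={M(0): M0})
print(sp.simplify(solution.rhs))   # equivalent to M_0/(1 - k*M_0*r), as above
```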

Plug in the distance through the Earth for r, and the mass gained is…still about 3\times 10^{-88} \textrm{kg}! It barely differs from the simpler estimate, which makes sense: the correction is a very, very small one.

But you might still object. A black hole falling through the Earth wouldn’t just go straight through. It would pass through, then fall back in. In fact, it would oscillate, from one side to the other, like a pendulum. This is actually a common problem to give physics students: drop an object through a hole in the Earth, neglect air resistance, and what does it do? It turns out that the period of this oscillation is independent of the object’s mass: the full round trip takes roughly 84.5 minutes, so each one-way trip through the Earth takes about 42 minutes.

So let’s ask a question: how long would it take for a black hole, oscillating like this, to double its mass?

We want to solve,

2=\frac{1}{1- M_0 \pi\rho\left(\frac{2G}{c^2}\right)^2 r }

so we need the black hole to travel a total distance of

r=\frac{1}{2M_0 \pi\rho\left(\frac{2G}{c^2}\right)^2} = 5.3\times 10^{71} \textrm{m}

That’s a huge distance! The Earth’s diameter, remember, is only about 1.28\times 10^{7} meters, and the black hole covers one diameter per one-way trip of roughly 42 minutes. So traveling that far would take

\frac{5.3\times 10^{71}}{1.28\times 10^{7}}\times 42/60/24/365 = 3\times 10^{60} \textrm{y}

Three times ten to the sixty years. Our universe is only about ten to the ten years old. In another five times ten to the nine years, the Sun will enter its red giant phase, and swallow the Earth. There simply isn’t enough time for this tiny tiny black hole to gobble up the world, before everything is already gobbled up by something else. Even in the most pessimistic way to walk through the calculation, it’s just not dangerous.
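Here’s the rest of the estimate as a script, so you can rerun it with numbers of your own (again a sketch with rounded constants; the Earth’s mass and mean radius are pulled in to get the oscillation period):

```python
# The rest of the estimate: mass swallowed per trip through the Earth, distance
# needed to double the black hole's mass, and how long covering it would take.
import math

G, c = 6.67e-11, 3.0e8
M0 = 14e12 * 1.78e-36                 # starting mass, ~2.49e-23 kg
r_s = 2 * G * M0 / c**2               # event horizon radius, ~3.7e-50 m

rho = 5515.0                          # Earth's average density, kg/m^3
R_eq = 6378e3                         # Earth's equatorial radius, m
R_mean = 6.371e6                      # Earth's mean radius, m
M_earth = 5.97e24                     # Earth's mass, kg

# Mass swallowed crossing the Earth once (a thin cylinder of radius r_s):
swallowed = math.pi * r_s**2 * (2 * R_eq) * rho
print(f"mass gained per crossing: {swallowed:.1g} kg")     # ~3e-88 kg

# Distance needed to double the mass, from the solved differential equation:
k = math.pi * rho * (2 * G / c**2)**2
r_double = 1 / (2 * M0 * k)
print(f"distance to double the mass: {r_double:.2g} m")    # ~5.3e71 m

# Oscillation period through the Earth (the classic gravity-train result):
T = 2 * math.pi * math.sqrt(R_mean**3 / (G * M_earth))     # ~84.5 minutes
crossings = r_double / (2 * R_mean)   # one diameter per half-period
years = crossings * (T / 2) / (3600 * 24 * 365)
print(f"time to double the mass: {years:.1g} years")       # ~3e60 years
```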

I hope that, if you were worried about black holes at the LHC, you’re not worried any more. But more than that, I hope you’ve learned three lessons. First, that even the highest-energy particle physics involves tiny energies compared to day-to-day experience. Second, that gravitational effects are tiny in the context of particle physics. And third, that with Wikipedia access, you too can answer questions like this. If you’re worried, you can make an estimate, and check!

What Might Lie Beyond, and Why

As the new year approaches, people think about the future. Me, I’m thinking about the future of fundamental physics, about what might lie beyond the Standard Model. Physicists search for many different things, with many different motivations. Some are clear missing pieces, places where the Standard Model fails and we know we’ll need to modify it. Others are based on experience, with no guarantees but an expectation that, whatever we find, it will be surprising. Finally, some are cool possibilities, ideas that would explain something or fill in a missing piece but aren’t strictly necessary.

The Almost-Sure Things

Science isn’t math, so nothing here is really a sure thing. We might yet discover a flaw in important principles like quantum mechanics and special relativity, and it might be that an experimental result we trust turns out to be flawed. But if we choose to trust those principles, and our best experiments, then these are places we know the Standard Model is incomplete:

  • Neutrino Masses: The original Standard Model’s neutrinos were massless. Eventually, physicists discovered this was wrong: neutrinos oscillate, switching between different types in a way they only could if they had different masses. This result is familiar enough that some think of it as already part of the Standard Model, not really beyond. But the masses of neutrinos involve unsolved mysteries: we don’t know what those masses are, and what’s more, there are different ways neutrinos could get mass, and we don’t yet know which is present in nature. Depending on how they get it, neutrino masses may also imply the existence of an undiscovered “sterile” neutrino, a particle that doesn’t interact with the strong, weak, or electromagnetic forces.
  • Dark Matter Phenomena (and possibly Dark Energy Phenomena): Astronomers first suggested dark matter when they observed galaxies moving at speeds inconsistent with the mass of their stars. Now, they have observed evidence for it in a wide variety of situations, evidence which seems decisively incompatible with ordinary gravity and ordinary matter. Some solve this by introducing dark matter, others by modifying gravity, but this is more of a technical difference than it sounds: in order to modify gravity, one must introduce new quantum fields, much the same way one does when introducing dark matter. The only debate is how “matter-like” those fields need to be, but either approach goes beyond the Standard Model.
  • Quantum Gravity: It isn’t as hard to unite quantum mechanics and gravity as you might think. Physicists have known for decades how to write down a naive theory of quantum gravity, one that follows the same steps one might use to derive the quantum theory of electricity and magnetism. The problem is, this theory is incomplete. It works at low energies, but as the energy increases it loses the ability to make predictions, eventually giving nonsensical answers like probabilities greater than one. We have candidate solutions to this problem, like string theory, but we might not know for a long time which solution is right.
  • Landau Poles: Here’s a more obscure one. In particle physics we can zoom in and out in our theories, using similar theories at different scales. What changes are the coupling constants, numbers that determine the strength of the different forces. You can think of this in a loosely reductionist way, with the theories at smaller scales determining the constants for theories at larger scales. This gives workable theories most of the time, but it fails for at least one part of the Standard Model. In electricity and magnetism, the coupling constant increases as you zoom in. Eventually, it becomes infinite, and what’s more, does so at a finite energy scale. It’s still not clear how we should think about this, but luckily we won’t have to very soon: this energy scale is vastly vastly higher than even the scale of quantum gravity. (There’s a rough sketch of this runaway growth just after this list.)
  • Some Surprises Guarantee Others: The Standard Model is special in a way that gravity isn’t. Even if you dial up the energy, a Standard Model calculation will always “make sense”: you never get probabilities greater than one. This isn’t true for potential deviations from the Standard Model. If the Higgs boson turns out to interact differently than we expect, it wouldn’t just be a violation of the Standard Model on its own: it would guarantee mathematically that, at some higher energy, we’d have to find something new. That was precisely the kind of argument the LHC used to find the Higgs boson: without the Higgs, something new was guaranteed to happen within the energy range of the LHC to prevent impossible probability numbers.
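Here’s that sketch: a toy estimate of my own (not from the post) of the one-loop running of the electromagnetic coupling with only the electron contributing. The scale where the inverse coupling runs to zero is the Landau pole, and it lands absurdly far above the Planck scale; including the rest of the Standard Model’s charged particles moves it around, but it stays far beyond the scale of quantum gravity.

```python
# Toy estimate of the QED Landau pole from one-loop running with only the
# electron: 1/alpha(mu) = 1/alpha(m_e) - (2/(3*pi)) * ln(mu/m_e).
# The coupling blows up where the right-hand side reaches zero.
import math

alpha_0 = 1 / 137.036     # fine-structure constant at low energies
m_e = 0.511e-3            # electron mass, in GeV
m_planck = 1.22e19        # Planck scale, in GeV, for comparison

mu_landau = m_e * math.exp(3 * math.pi / (2 * alpha_0))
print(f"Landau pole: about 10^{math.log10(mu_landau):.0f} GeV")
print(f"that is about 10^{math.log10(mu_landau / m_planck):.0f} times the Planck scale")
```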

The Argument from (Theoretical) Experience

Everything in this middle category rests on a particular sort of argument. It’s short of a guarantee, but stronger than a dream or a hunch. While the previous category was based on calculations in theories we already know how to write down, this category relies on our guesses about theories we don’t yet know how to write.

Suppose we had a deeper theory, one that could use fewer parameters to explain the many parameters of the Standard Model. For example, it might explain the Higgs mass, letting us predict it rather than just measuring it like we do now. We don’t have a theory like that yet, but what we do have are many toy model theories, theories that don’t describe the real world but do, in this case, have fewer parameters. We can observe how these theories work, and what kinds of discoveries scientists living in worlds described by them would make. By looking at this process, we can get a rough idea of what to expect, which things in our own world would be “explained” in other ways in these theories.

  • The Hierarchy Problem: This is also called the naturalness problem. Suppose we had a theory that explained the mass of the Higgs, one where it wasn’t just a free parameter. We don’t have such a theory for the real Higgs, but we do have many toy models with similar behavior, ones with a boson with its mass determined by something else. In these models, though, the mass of the boson is always close to the energy scale of other new particles, particles which have a role in determining its mass, or at least in postponing that determination. This was the core reason why people expected the LHC to find something besides the Higgs. Without such new particles, the large hierarchy between the mass of the Higgs and the mass of new particles becomes a mystery, one where it gets harder and harder to find a toy model with similar behavior that still predicts something like the Higgs mass.
  • The Strong CP Problem: The weak nuclear force does what must seem like a very weird thing, by violating parity symmetry: the laws that govern it are not the same when you flip the world in a mirror. This is also true when you flip all the charges as well, a combination called CP (charge plus parity). But while it may seem strange that the weak force violates this symmetry, physicists find it stranger that the strong force seems to obey it. Much like in the hierarchy problem, it is very hard to construct a toy model that both predicts a strong force that maintains CP (or almost maintains it) and doesn’t have new particles. The new particle in question, called the axion, is something some people also think may explain dark matter.
  • Matter-Antimatter Asymmetry: We don’t know the theory of quantum gravity. Even if we did, the candidate theories we have struggle to describe conditions close to the Big Bang. But while we can’t prove it, many physicists expect the quantum gravity conditions near the Big Bang to produce roughly equal amounts of matter and antimatter. Instead, matter dominates: we live in a world made almost entirely of matter, with no evidence of large antimatter areas even far out in space. This lingering mystery could be explained if some new physics was biased towards matter instead of antimatter.
  • Various Problems in Cosmology: Many open questions in cosmology fall in this category. The small value of the cosmological constant is mysterious for the same reasons the small value of the Higgs mass is, but at a much larger and harder to fix scale. The early universe surprises many cosmologists by its flatness and uniformity, which has led them to propose new physics. This surprise is not because such flatness and uniformity is mathematically impossible, but because it is not the behavior they would expect out of a theory of quantum gravity.

The Cool Possibilities

Some ideas for physics beyond the standard model aren’t required, either from experience or cold hard mathematics. Instead, they’re cool, and would be convenient. These ideas would explain things that look strange, or make for a simpler deeper theory, but they aren’t the only way to do so.

  • Grand Unified Theories: Not the same as a “theory of everything”, Grand Unified Theories unite the three “particle physics forces”: the strong nuclear force, the weak nuclear force, and electromagnetism. Under such a theory, the different parameters that determine the strengths of those forces could be predicted from one shared parameter, with the forces only seeming different at low energies. These theories often unite the different matter particles too, but they also introduce new particles and new forces. These forces would, among other things, make protons unstable, and so giant experiments have been constructed to try to detect a proton decaying into other particles. So far none has been seen.
  • Low-Energy Supersymmetry: String theory requires supersymmetry, a relationship where matter and force particles share many properties. That supersymmetry has to be “broken”, which means that while the matter and force particles have the same charges, they can have wildly different masses, so that the partner particles are all still undiscovered. Those masses may be extremely high, all the way up at the scale of quantum gravity, but they could also be low enough to test at the LHC. Physicists hoped to detect such particles, as they could have been a good solution to the hierarchy problem. Now that the LHC hasn’t found these supersymmetric particles, it is much harder to solve the problem this way, though some people are still working on it.
  • Large Extra Dimensions: String theory also involves extra dimensions, beyond our usual three space and one time. Those dimensions are by default very small, but some proposals have them substantially bigger, big enough that we could have seen evidence for them at the LHC. These proposals could explain why gravity is so much weaker than the other forces. Much like the previous members of this category though, no evidence for this has yet been found.

I think these categories are helpful, but experts may quibble about some of my choices. I also haven’t mentioned every possible thing that could be found beyond the Standard Model. If you’ve heard of something and want to know which category I’d put it in, let me know in the comments!

Simulated Wormhole Analogies

Last week, I talked about how Google’s recent quantum simulation of a toy model wormhole was covered in the press. What I didn’t say much about was my own opinion of the result. Was the experiment important? Was it worth doing? Did it deserve the hype?

Here on this blog, I don’t like to get into those kinds of arguments. When I talk about public understanding of science, I share the same concerns as the journalists: we all want to prevent misunderstandings, and to spread a clearer picture. I can argue that some choices hurt the public understanding and some help it, and be reasonably confident that I’m saying something meaningful, something that would resonate with their stated values.

For the bigger questions, what goals science should have and what we should praise, I have much less of a foundation. We don’t all have a clear shared standard for which science is most important. There isn’t some premise I can posit, a fundamental principle I can use to ground a logical argument.

That doesn’t mean I don’t have an opinion, though. It doesn’t even mean I can’t persuade others of it. But it means the persuasion has to be a bit more loose. For example, I can use analogies.

So let’s say I’m looking at a result like this simulated wormhole. Researchers took advanced technology (Google’s quantum computer) and used it to model a simple system. They didn’t learn anything especially new about that system (since in this case, a normal computer can simulate it better). I get the impression they didn’t learn all that much about the advanced technology: the methods used, at this point, are pretty well-known, at least to Google. I also get the impression that it wasn’t absurdly expensive: I’ve seen other people do things of a similar scale with Google’s machine, and didn’t get the impression they had to pay through the nose for the privilege. Finally, the simple system simulated happens to be “cool”: it’s a toy model studied by quantum gravity researchers, a simple version of that sci-fi standard, the traversable wormhole.

What results are like that?

Occasionally, scientists build tiny things. If the tiny things are cute enough, or cool enough, they tend to get media attention. The most recent example I can remember was a tiny snowman, three microns tall. These tiny things tend to use very advanced technology, and it’s hard to imagine the scientists learn much from making them, but it’s also hard to imagine they cost all that much to make. They’re amusing, and they absolutely get press coverage, spreading wildly over the web. I don’t think they tend to get published in Nature unless they are a bit more advanced, but I wouldn’t be too surprised to hear of a case that did: scientific journals can be suckers for cute stories too. But they don’t tend to get discussed in glowing terms linking them to historical breakthroughs.

That seems like a pretty close analogy. Taken seriously, it would suggest the wormhole simulation was probably worth doing, probably worth a press release and some media coverage, likely not worth publication in Nature, and definitely not worth being heralded as a major breakthrough.

Ok, but proponents of the experiment might argue I’m leaving something out here. This experiment isn’t just a cute simulation. It’s supposed to be a proof of principle, an early version of an experiment that will be an actually useful simulation.

As an analogy for that…did you know LIGO started taking data in 2002?

Most people first heard of the Laser Interferometer Gravitational-Wave Observatory in 2016, when they reported their first detection of gravitational waves. But that was actually “advanced LIGO”. The original LIGO ran from 2002 to 2010, and didn’t detect anything. It just wasn’t sensitive enough. Instead, it was a prototype, an early version designed to test the basic concept.

Similarly, while this wormhole simulation didn’t teach us anything new, future ones might. If the quantum simulation were made larger, it might be possible to simulate more complicated toy models, ones that are too complicated to simulate on a normal computer. These aren’t feasible now, but may be feasible with somewhat bigger quantum computers: still much smaller than the computers that would be needed to break encryption, or even to do simulations that are useful for chemists and materials scientists. Proponents argue that some of these quantum toy models might teach them something interesting about the mathematics of quantum gravity.

Here, though, a number of things weaken the analogy.

LIGO’s first run taught them important things about the noise they would have to deal with, things that they used to build the advanced version. The wormhole simulation didn’t show anything novel about how to use a quantum computer: the type of thing they were doing was well-understood, even if it hadn’t been used to do that yet.

Detecting gravitational waves opened up a new type of astronomy, letting us observe things we could never have observed before. For these toy models, it isn’t obvious to me that the benefit is so unique. Future versions may be difficult to classically simulate, but it wouldn’t surprise me if theorists figured out how to understand them in other ways, or gained the same insight from other toy models and moved on to new questions. They’ll have a while to figure it out, because quantum computers aren’t getting bigger all that fast. I’m very much not an expert in this type of research, so maybe I’m wrong about this…but just comparing to similar research programs, I would be surprised if the quantum simulations end up crucial here.

Finally, even if the analogy held, I don’t think it proves very much. In particular, as far as I can tell, the original LIGO didn’t get much press. At the time, I remember meeting some members of the collaboration, and they clearly didn’t have the fame the project has now. Looking through Google News and the archives of the New York Times, I can’t find all that much about the experiment: a few articles discussing its progress and prospects, but no grand unveiling, no big press releases.

So ultimately, I think viewing the simulation as a proof of principle makes it, if anything, less worth the hype. A prototype like that is only really valuable when it’s testing new methods, and only in so far as the thing it’s a prototype for will be revolutionary. Recently, a prototype fusion device got a lot of press for getting more energy out of a plasma than they put into it (though still much less than it takes to run the machine). People already complained about that being overhyped, and the simulated wormhole is nowhere near that level of importance.

If anything, I think the wormhole-simulators would be on a firmer footing if they thought of their work like the tiny snowmen. It’s cute, a fun side benefit of advanced technology, and as such something worth chatting about and celebrating a bit. But it’s not the start of a new era.

Simulated Wormholes for My Real Friends, Real Wormholes for My Simulated Friends

Maybe you’ve recently seen a headline like this:

“Physicists Create a Holographic Wormhole Using a Quantum Computer”

Actually, I’m more worried that you saw that headline before it was edited, when it looked like this:

“Physicists Create a Wormhole Using a Quantum Computer”

If you’ve seen either headline, and haven’t read anything else about it, then please at least read this:

Physicists have not created an actual wormhole. They have simulated a wormhole on a quantum computer.

If you’re willing to read more, then read the rest of this post. There’s a more subtle story going on here, both about physics and about how we communicate it. And for the experts, hold on, because when I say the wormhole was a simulation I’m not making the same argument everyone else is.

[And for the mega-experts, there’s an edit later in the post where I soften that claim a bit.]

The headlines at the top of this post come from an article in Quanta Magazine. Quanta is a web-based magazine covering many fields of science. They’re read by the general public, but they aim for a higher standard than many science journalists, with stricter fact-checking and a goal of covering more challenging and obscure topics. Scientists in turn have tended to be quite happy with them: often, they cover things we feel are important but that the ordinary media isn’t able to cover. (I even wrote something for them recently.)

Last week, Quanta published an article about an experiment with Google’s Sycamore quantum computer. By arranging the quantum bits (qubits) in a particular way, they were able to observe behaviors one would expect out of a wormhole, a kind of tunnel linking different points in space and time. They published it with the second headline above, claiming that physicists had created a wormhole with a quantum computer and explaining how, using a theoretical picture called holography.

This pissed off a lot of physicists. After push-back, Quanta’s Twitter account published a clarifying statement, and they added the word “Holographic” to the title.

Why were physicists pissed off?

It wasn’t because the Quanta article was wrong, per se. As far as I’m aware, all the technical claims they made are correct. Instead, it was about two things. One was the title, and the implication that physicists “really made a wormhole”. The other was the tone, the excited “breaking news” framing complete with a video comparing the experiment with the discovery of the Higgs boson. I’ll discuss each in turn:

The Title

Did physicists really create a wormhole, or did they simulate one? And why would that be at all confusing?

The story rests on a concept from the study of quantum gravity, called holography. Holography is the idea that in quantum gravity, certain gravitational systems like black holes are fully determined by what happens on a “boundary” of the system, like the event horizon of a black hole. It’s supposed to be a hologram in analogy to 3d images encoded in 2d surfaces, rather than like the hard-light constructions of science fiction.

The best-studied version of holography is something called AdS/CFT duality. AdS/CFT duality is a relationship between two different theories. One of them is a CFT, or “conformal field theory”, a type of particle physics theory with no gravity and no mass. (The first example of the duality used my favorite toy theory, N=4 super Yang-Mills.) The other one is a version of string theory in an AdS, or anti-de Sitter space, a version of space-time curved so that objects shrink as they move outward, approaching a boundary. (In the first example, this space-time had five dimensions curled up in a sphere and the rest in the anti-de Sitter shape.)

These two theories are conjectured to be “dual”. That means that, for anything that happens in one theory, you can give an alternate description using the other theory. We say the two theories “capture the same physics”, even though they appear very different: they have different numbers of dimensions of space, and only one has gravity in it.

Many physicists would claim that if two theories are dual, then they are both “equally real”. Even if one description is more familiar to us, both descriptions are equally valid. Many philosophers are skeptical, but honestly I think the physicists are right about this one. Philosophers try to figure out which things are real or not real, to make a list of real things and explain everything else as made up of those in some way. I think that whole project is misguided, that it’s clarifying how we happen to talk rather than the nature of reality. In my mind, dualities are some of the clearest evidence that this project doesn’t make any sense: two descriptions can look very different, but in a quite meaningful sense be totally indistinguishable.

That’s the sense in which Quanta and Google and the string theorists they’re collaborating with claim that physicists have created a wormhole. They haven’t created a wormhole in our own space-time, one that, were it bigger and more stable, we could travel through. It isn’t progress towards some future where we actually travel the galaxy with wormholes. Rather, they created some quantum system, and that system’s dual description is a wormhole. That’s a crucial point to remember: even if they created a wormhole, it isn’t a wormhole for you.

If that were the end of the story, this post would still be full of warnings, but the title would be a bit different. It was going to be “Dual Wormholes for My Real Friends, Real Wormholes for My Dual Friends”. But there’s a list of caveats. Most of them arguably don’t matter, but the last was what got me to change the word “dual” to “simulated”.

  1. The real world is not described by N=4 super Yang-Mills theory. N=4 super Yang-Mills theory was never intended to describe the real world. And while the real world may well be described by string theory, those strings are not curled up around a five-dimensional sphere with the remaining dimensions in anti-de Sitter space. We can’t create either theory in a lab either.
  2. The Standard Model probably has a quantum gravity dual too, see this cute post by Matt Strassler. But they still wouldn’t have been able to use that to make a holographic wormhole in a lab.
  3. Instead, they used a version of AdS/CFT with fewer dimensions. It relates a weird form of gravity in one space and one time dimension (called JT gravity), to a weird quantum mechanics theory called SYK, with an infinite number of quantum particles or qubits. This duality is a bit more conjectural than the original one, but still reasonably well-established.
  4. Quantum computers don’t have an infinite number of qubits, so they had to use a version with a finite number: seven, to be specific. They trimmed the model down so that it would still show the wormhole-dual behavior they wanted. At this point, you might say that they’re definitely just simulating the SYK theory, using a small number of qubits to simulate the infinite number. But I think they could argue that this system, too, has a quantum gravity dual. The dual would have to be even weirder than JT gravity, and even more conjectural, but the signs of wormhole-like behavior they observed (mostly through simulations on an ordinary computer, which is still better at this kind of thing than a quantum computer) could be seen as evidence that this limited theory has its own gravity partner, with its own “real dual” wormhole.
  5. But those seven qubits don’t just have the interactions they were programmed to have, the ones captured by the dual description. They are physical objects in the real world, so they feel all of the forces of the real world. That includes, though very weakly, the force of gravity.
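
For the curious, the SYK model from caveats 3 and 4 has a concrete definition. In its standard form it consists of N Majorana fermions \chi_i coupled in groups of four, with couplings J_{ijkl} drawn at random (this is the textbook version, not the specific trimmed-down Hamiltonian used in the experiment):

\[ H_{\text{SYK}} \;=\; \sum_{i<j<k<l} J_{ijkl}\,\chi_i\chi_j\chi_k\chi_l\,, \qquad \langle J_{ijkl}^{\,2}\rangle \sim \frac{J^2}{N^3} \]

The JT gravity dual emerges in the limit of a large number of fermions and strong coupling; the experiment instead kept a handful of qubits and a trimmed-down set of couplings, as described in caveat 4.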

And that last caveat is where I think things break, and where you have to call the experiment a simulation. You can argue, if you really want to, that the seven-qubit SYK theory has its own gravity dual, with its own wormhole. There are people who expect duality to be broad enough to include things like that.

But you can’t argue that the seven-qubit SYK theory, plus gravity, has its own gravity dual. Theories that already have gravity are not supposed to have gravity duals. If you pushed hard enough on any of the string theorists on that team, I’m pretty sure they’d admit that.

That is what decisively makes the experiment a simulation. It approximately behaves like a system with a dual wormhole, because you can approximately ignore gravity. But if you’re making some kind of philosophical claim, that you “really made a wormhole”, then “approximately” doesn’t cut it: if you don’t exactly have a system with a dual, then you don’t “really” have a dual wormhole; you’ve just simulated one.

Edit: mitchellporter in the comments points out something I didn’t know: there are in fact proposals for gravity theories with gravity duals. They are in some sense even more conjectural than the chain of caveats above, but at minimum my claim above (that any of the string theorists on the team would agree the system’s gravity means it can’t have a dual) is probably false.

I think at this point, I’d soften my objection to the following:

Describing the qubits in the experiment as a limited version of the SYK theory is, one way or another, an approximation. It approximates them as having no interactions beyond the ones that were programmed, it approximates them as unaffected by gravity, and because it’s a quantum mechanical description it even approximates the speed of light as infinite. Those approximations don’t guarantee that the system doesn’t have a gravity dual. But in order for that to work, our reality, overall, would have to have a gravity dual. There would have to be a dual gravity interpretation of everything, not just the inside of Google’s quantum computer, and it would have to be exact, not just an approximation. Then the approximate SYK would be dual to an approximate wormhole, but that approximate wormhole would be an approximation of some “real” wormhole in the dual space-time.

That’s not impossible, as far as I can tell. But it piles conjecture upon conjecture upon conjecture, to the point that I don’t think anyone has explicitly committed to the whole tower of claims. If you want to believe that this experiment literally created a wormhole, then you can, but keep in mind that it comes with the largest asterisk known to mankind.

End edit.

If it weren’t for that caveat, then I would be happy to say that the physicists really created a wormhole. It would annoy some philosophers, but that’s a bonus.

But even if that were true, I wouldn’t say that in the title of the article.

The Title, Again

These days, people get news in two main ways.

Sometimes, people read full news articles. Reading that Quanta article is a good way to understand the background of the experiment, what was done and why people care about it. As I mentioned earlier, I don’t think anything said there was wrong, and they cover essentially all of the caveats you’d care about (except for that last one 😉 ).

Sometimes, though, people just see headlines. They get forwarded on social media, glimpsed as they’re passed between friends. If you’re popular enough, many more people will see your headline than will actually read the article. For many people, their whole understanding of certain scientific fields is formed by these glancing impressions.

Because of that, if you’re popular and news-y enough, you have to be especially careful about what you put in your headlines, above all when they imply a cool science fiction story. People will almost inevitably see them out of context, and that will shape their view of where science is headed. In this case, the headline may have given many people the impression that we’re actually making progress towards travel via wormholes.

Some of my readers might think this is ridiculous, that no-one would believe something like that. But as a kid, I did. I remember reading popular articles about wormholes, describing how you’d need energy moving in a circle, and other articles about optical physicists finding ways to bend light and make it stand still. Putting two and two together, I assumed these ideas would one day merge, allowing us to travel to distant galaxies faster than light.

If I had seen Quanta’s headline at that age, I would have taken it as confirmation. I would have believed we were well on the way to making wormholes, step by step. Even the New York Times headline, “the Smallest, Crummiest Wormhole You Can Imagine”, wouldn’t have fazed me.

(I’m not sure even the extra word “holographic” would have. People don’t know what “holographic” means in this context, and while some of them would assume it meant “fake”, others would think about the many works of science fiction, like Star Trek, where holograms can interact physically with human beings.)

Quanta has a high-brow audience, many of whom wouldn’t make this mistake. Nevertheless, I think Quanta is popular enough, and respectable enough, that they should have done better here.

At minimum, they could have used the word “simulated”. Even if they went on to argue in the article that the wormhole is real, and not just a simulation, the word in the title would have done no real harm. It would be a lie, but a beneficial “lie to children”, the basic stock-in-trade of science communication. I think they could have defended it on those grounds to the string theorists they interviewed.

The Tone

Honestly, I don’t think people would have been nearly so pissed off were it not for the tone of the article. There are a lot of physics bloggers who view themselves as serious-minded people, opposed to hype and publicity stunts. They view the research program aimed at simulating quantum gravity on a quantum computer as just an attempt to link a dying and un-rigorous research topic to an over-hyped and over-funded one, pompous storytelling aimed at promoting the careers of people who are already extremely successful.

These people tend to view Quanta favorably, because it covers serious-minded topics in a thorough way. And so many of them likely felt betrayed, seeing this Quanta article as a massive failure of that serious-mindedness, a case of falling for, or even endorsing, the hypiest of hype.

To those people, I’d like to politely suggest you get over yourselves.

Quanta’s goal is to cover things accurately, to represent all the facts in a way people can understand. But “how exciting something is” is not a fact.

Excitement is subjective. Just because you also find most of the things Quanta finds exciting to be exciting does not mean that Quanta will find the things you find unexciting to be unexciting. Quanta is not on “your side” in some war against your personal notion of unexciting science, and you should never have expected it to be.

In fact, Quanta tends to find things exciting, in general. They were more excited than I was about the amplituhedron, and I’m an amplitudeologist. Part of what makes them consistently excited about the serious-minded things you appreciate them for is that they listen to scientists and get excited about the things they’re excited about. That is going to include, inevitably, things those scientists are excited about for what you think are dumb groupthinky hype reasons.

I think the way Quanta titled the piece was unfortunate, and probably did real damage. I think the philosophical claim behind the title is wrong, though for subtle and weird enough reasons that I don’t really fault anybody for ignoring them. But I don’t think the tone they took was a failure of journalistic integrity or research or anything like that. It was a matter of taste. It’s not my taste, it’s probably not yours, but we shouldn’t have expected Quanta to share our tastes in absolutely everything. That’s just not how taste works.