Category Archives: Science Communication

Journalists Are Terrible at Quasiparticles

No, they haven’t, and no, that’s not what they found, and no, that doesn’t make sense.

Quantum field theory is how we understand particle physics. Each fundamental particle comes from a quantum field, a law of nature in its own right extending across space and time. That’s why it’s so momentous when we detect a fundamental particle, like the Higgs, for the first time, why it’s not just like discovering a new species of plant.

That’s not the only thing quantum field theory is used for, though. Quantum field theory is also enormously important in condensed matter and solid state physics, the study of properties of materials.

When studying materials, you generally don’t want to start with fundamental particles. Instead, you usually want to think about overall properties, ways the whole material can move and change overall. If you want to understand the quantum properties of these changes, you end up describing them the same way particle physicists talk about fundamental fields: you use quantum field theory.

In particle physics, particles come from vibrations in fields. In condensed matter, your fields are general properties of the material, but they can also vibrate, and these vibrations give rise to quasiparticles.

Probably the simplest examples of quasiparticles are the “holes” in semiconductors. Semiconductors are materials used to make transistors. They can be “doped” with extra slots for electrons. Electrons in the semiconductor will move around from slot to slot. When an electron moves, though, you can just as easily think about it as a “hole”, an empty slot, that “moved” backwards. As it turns out, thinking about electrons and holes independently makes understanding semiconductors a lot easier, and the same applies to other types of quasiparticles in other materials.
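The electron-hole bookkeeping above is easy to see in a toy model. Here’s a minimal sketch (the slot layout and hop rules are invented purely for illustration): a row of slots with one vacancy, where electrons hopping one way make the hole “move” the other way.

```python
# Toy model of electrons and holes in a 1D strip of a doped semiconductor.
# Each slot is True if an electron occupies it, False if it's empty (a "hole").
# Illustrative sketch only; the slot layout and hop rules are invented.

def hole_position(slots):
    """Index of the single empty slot."""
    return slots.index(False)

def hop_electron(slots, src, dst):
    """Move an electron from an occupied slot into an adjacent empty one."""
    assert slots[src] and not slots[dst]
    slots[src], slots[dst] = False, True

slots = [True, True, False, True, True]   # one hole at index 2
print("hole at", hole_position(slots))    # hole at 2

# Electrons hop rightward into the empty slot, one at a time...
hop_electron(slots, 1, 2)
hop_electron(slots, 0, 1)

# ...and the hole has "moved" two slots leftward, opposite the electrons.
print("hole at", hole_position(slots))    # hole at 0
```

Tracking one hole is much simpler than tracking every electron, which is the whole point of the quasiparticle picture.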

Unfortunately, the article I linked above is pretty impressively terrible, and communicates precisely none of that.

The problem starts in the headline:

Scientists have finally discovered massless particles, and they could revolutionise electronics

Scientists have finally discovered massless particles, eh? So we haven’t seen any massless particles before? You can’t think of even one?

After 85 years of searching, researchers have confirmed the existence of a massless particle called the Weyl fermion for the first time ever. With the unique ability to behave as both matter and anti-matter inside a crystal, this strange particle can create electrons that have no mass.

Ah, so it’s a massless fermion, I see. Well indeed, there are no known fundamental massless fermions, not since we discovered neutrinos have mass anyway. The statement that these things “create electrons” of any sort is utter nonsense, however, let alone that they create electrons that themselves have no mass.

Electrons are the backbone of today’s electronics, and while they carry charge pretty well, they also have the tendency to bounce into each other and scatter, losing energy and producing heat. But back in 1929, a German physicist called Hermann Weyl theorised that a massless fermion must exist, that could carry charge far more efficiently than regular electrons.

Ok, no. Just no.

The problem here is that this particular journalist doesn’t understand the difference between pure theory and phenomenology. Weyl didn’t theorize that a massless fermion “must exist”, nor did he say anything about their ability to carry charge. Weyl described, mathematically, how a massless fermion could behave. Weyl fermions aren’t some proposed new fundamental particle, like the Higgs boson: they’re a general type of particle. For a while, people thought that neutrinos were Weyl fermions, before it was discovered that they had mass. What we’re seeing here isn’t some ultimate experimental vindication of Weyl, it’s just an old mathematical structure that’s been duplicated in a new material.

What’s particularly cool about the discovery is that the researchers found the Weyl fermion in a synthetic crystal in the lab, unlike most other particle discoveries, such as the famous Higgs boson, which are only observed in the aftermath of particle collisions. This means that the research is easily reproducible, and scientists will be able to immediately begin figuring out how to use the Weyl fermion in electronics.

Arrgh!

Fundamental particles from particle physics, like the Higgs boson, and quasiparticles, like this particular Weyl fermion, are completely different things! Comparing them like this, as if this is some new efficient trick that could have been used to discover the Higgs, just needlessly confuses people.

Weyl fermions are what’s known as quasiparticles, which means they can only exist in a solid such as a crystal, and not as standalone particles. But further research will help scientists work out just how useful they could be. “The physics of the Weyl fermion are so strange, there could be many things that arise from this particle that we’re just not capable of imagining now,” said Hasan.

In the very last paragraph, the author finally mentions quasiparticles. There’s no mention of the fact that they’re more like waves in the material than like fundamental particles, though. This description makes it sound like they’re just particles that happen to chill inside crystals, like they’re agoraphobic or something.

What the scientists involved here actually discovered is probably quite interesting. They’ve discovered a new sort of ripple in the material they studied. The ripple can carry charge, and because it can behave like a massless particle it can carry charge much faster than electrons can. (To get a basic idea as to how this works, think about waves in the ocean. You can have a wave that goes much faster than the ocean’s current. As the wave travels, no actual water molecules travel from one side to the other. Instead, it is the motion that travels, the energy pushing the wave up and down being transferred along.)
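The ocean analogy can be made concrete with a little arithmetic. In this illustrative sketch (the wave numbers are made up), the crest of a traveling wave moves steadily forward while the “water” at any fixed point only oscillates in place:

```python
import math

# A traveling wave y(x, t) = A*sin(k*x - w*t): the pattern moves at speed w/k,
# but the medium at any fixed x only bobs up and down.  Numbers are illustrative.
A, k, w = 1.0, 2.0, 6.0      # amplitude, wavenumber, angular frequency

def displacement(x, t):
    return A * math.sin(k * x - w * t)

# A crest sits where k*x - w*t = pi/2, so it travels at the phase speed w/k.
def crest_position(t):
    return (math.pi / 2 + w * t) / k

print(crest_position(0.0), crest_position(1.0))  # crest advances by w/k = 3.0

# Meanwhile the "water" at x = 0 never goes anywhere; it just oscillates:
samples = [displacement(0.0, t / 10) for t in range(100)]
print(min(samples), max(samples))  # stays within [-A, A]
```

The crest travels, the medium doesn’t: that’s the sense in which a quasiparticle “moves” through a crystal.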

There’s no reason to compare this to particle physics, to make it sound like another Higgs boson. This sort of thing dilutes the excitement of actual particle discoveries, perpetuating the misconception of particles as just more species to find and catalog. Furthermore, it’s just completely unnecessary: condensed matter is a very exciting field, one that the majority of physicists work on. It doesn’t need to ride on the coat-tails of particle physics rhetoric in order to capture people’s attention. I’ve seen journalists do this kind of thing before, comparing new quasiparticles and composite particles with fundamental particles like the Higgs, and every time I cringe. Don’t you have any respect for the subject you’re writing about?

No-One Can Tell You What They Don’t Understand

On Wednesday, Amanda Peet gave a Public Lecture at Perimeter on string theory and black holes, while I and other Perimeter-folk manned the online chat. If you missed it, it’s recorded online here.

We get a lot of questions in the online chat. Some are quite insightful, some are basic, and some…well, some are kind of strange. Like the person who asked us how holography could be compatible with irrational numbers.

In physics, holography is the idea that you can encode the physics of a wider space using only information on its boundary. If you remember the 90’s or read Buzzfeed a lot, you might remember holograms: weird rainbow-colored images that looked 3d when you turned your head.

On a computer screen, they instead just look awkward.

Holograms in physics are a lot like that, but rather than a 2d image looking like a 3d object, they can be other combinations of dimensions as well. The most famous, AdS/CFT, relates a ten-dimensional space full of strings to a four-dimensional space on its boundary, where the four-dimensional space contains everybody’s favorite theory, N=4 super Yang-Mills.

So from this explanation, it’s probably not obvious what holography has to do with irrational numbers. That’s because there is no connection: holography has nothing to do with irrational numbers.

Naturally, we were all a bit confused, so one of us asked this person what they meant. They responded by asking if we knew what holograms and irrational numbers were. After all, the problem should be obvious then, right?

In this sort of situation, it’s tempting to assume you’re being trolled. In reality, though, the problem was one of the most common in science communication: people can’t tell you what they don’t understand, because they don’t understand it.

When a teacher asks “any questions?”, they’re assuming students will know what they’re missing. But a deep enough misunderstanding doesn’t show itself that way. Misunderstand things enough, and you won’t know you’re missing anything. That’s why it takes real insight to communicate science: you have to anticipate ways that people might misunderstand you.

In this situation, I thought about what associations people have with holograms. While some might remember the rainbow holograms of old, there are other famous holograms that might catch people’s attention.

Please state the nature of the medical emergency.

In science fiction, holograms are 3d projections, ways that computers can create objects out of thin air. The connection to a 2d image isn’t immediately apparent, but the idea that holograms are digital images is central.

Digital images are the key, here. A computer has to express everything in a finite number of bits. It can’t express an irrational number, a number with a decimal expansion that goes on to infinity, at least not without tricks. So if you think that holography is about reality being digital, rather than lower-dimensional, then the question makes perfect sense: how could a digital reality contain irrational numbers?
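You can see the finite-bits limitation directly in any programming language. This snippet shows that the number a computer stores for √2 is actually an exact fraction: a rational stand-in for the irrational number:

```python
import math
from fractions import Fraction

# A 64-bit float can only hold a rational approximation of an irrational number.
# Fraction(float) recovers the exact rational value the computer actually stored.
approx = math.sqrt(2)
exact = Fraction(approx)

print(exact)               # a huge integer ratio, not the "real" square root of 2
print(exact * exact == 2)  # False: the stored number squared is not exactly 2
```

The stored value is a perfectly ordinary ratio of two integers; the irrational number itself never makes it into memory.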

This is the sort of thing we have to keep in mind when communicating science. It’s easy to misunderstand, to take some aspect of what someone said and read it through a different lens. We have to think about how others will read our words, we have to be willing to poke and prod until we root out the source of the confusion. Because nobody is just going to tell us what they don’t get.

Outreach as the End Product of Science

Sabine Hossenfelder recently wrote a blog post about physics outreach. In it, she identifies two goals: inspiration, and education.

Inspiration outreach is all about making science seem cool. It’s the IFLScience side of things, stoking the science fandom and getting people excited.

Education outreach, by contrast, is about making sure people’s beliefs are accurate. It teaches the audience something about the world around them, giving them a better understanding of how the world works.

In both cases, though, Sabine finds it hard to convince other scientists that outreach is valuable. Maybe inspiration helps increase grant funding, maybe education makes people vote better on scientific issues like climate change…but there isn’t a lot of research that shows that outreach really accomplishes either.

Sabine has a number of good suggestions in her post for how to make outreach more effective, but I’d like to take a step back and suggest that maybe we as a community are thinking about outreach in the wrong way. And in order to do that, I’m going to do a little outreach myself, and talk about black holes.

The black hole of physics outreach.

Black holes are collapsed stars, crushed in on themselves by their own gravity so much that once you get close enough (past the event horizon) not even light can escape. This means that if you sent an astronaut past the event horizon, there would be no way for them to communicate with you: any way they might try to get information to you would travel, at most, at the speed of light.
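For a sense of scale, the radius of the event horizon (the Schwarzschild radius) follows from a one-line formula, r_s = 2GM/c². A quick back-of-the-envelope sketch for a star with the mass of our Sun:

```python
# Schwarzschild radius r_s = 2*G*M/c^2: compress a mass inside this radius
# and not even light can escape from its surface.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

r_s = 2 * G * M_sun / c**2
print(r_s)         # roughly 3 km: the Sun squeezed that small becomes a black hole
```

Squeezing a whole star into a few kilometers is what it takes for gravity to win so completely.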

Einstein’s equations keep working fine past the event horizon, but despite that there are some people who view any prediction of what happens inside to be outside the scope of science. If there’s no way to report back, then how could we ever test our predictions? And if we can’t test our predictions, aren’t we missing the cornerstone of science itself?

In a rather entertaining textbook, physicists Edwin F. Taylor and John Archibald Wheeler suggest a way around this: instead of sending just one astronaut, send multiple! Send a whole community! That way, while we might not be able to test our predictions about the inside of the event horizon, the scientific community that falls in certainly can. For them, those predictions aren’t just meaningless speculation, but testable science.

If something seems unsatisfying about this, congratulations: you now understand the purpose of outreach.

As long as scientific advances never get beyond a small community, we’re like Taylor and Wheeler’s astronauts inside the black hole. We can test our predictions among each other, verify them to our heart’s content…but if they never reach the wider mass of humanity, then what have we really accomplished? Have we really created knowledge, when only a few people will ever know it?

In my Who Am I? post, I express the hope that one day the science I blog about will be as well known as electrons and protons. That might sound farfetched, but I really do think it’s possible. In one hundred years, electrons and protons went from esoteric discoveries of a few specialists to something children learn about in grade school. If science is going to live up to its purpose, if we’re going to escape the black hole of our discipline, then in another hundred years quantum field theory needs to do the same. And by doing outreach work, each of us is taking steps in that direction.

What’s the Matter with Dark Matter, Matt?

It’s very rare that I disagree with Matt Strassler. That said, I can’t help but think that, when he criticizes the press for focusing their LHC stories on dark matter, he’s missing an important element.

From his perspective, when the media says that the goal of the new run of the LHC is to detect dark matter, they’re just being lazy. People have heard of dark matter. They might have read that it makes up 23% of the universe, more than regular matter at 4%. So when an LHC physicist wants to explain what they’re working on to a journalist, the easiest way is to talk about dark matter. And when the journalist wants to explain the LHC to the public, they do the same thing.

This explanation makes sense, but it’s a little glib. What Matt Strassler is missing is that, from the public’s perspective, dark matter really is a central part of the LHC’s justification.

Now, I’m not saying that the LHC’s main goal is to detect dark matter! Directly detecting dark matter is pretty low on the LHC’s list of priorities. Even if it detects a new particle with the right properties to be dark matter, it still wouldn’t be able to confirm that it really is dark matter without help from another experiment that actually observes some consequence of the new particle among the stars. I agree with Matt when he writes that the LHC’s priorities for the next run are

  1. studying the newly discovered Higgs particle in great detail, checking its properties very carefully against the predictions of the “Standard Model” (the equations that describe the known apparently-elementary particles and forces)  to see whether our current understanding of the Higgs field is complete and correct, and

  2. trying to find particles or other phenomena that might resolve the naturalness puzzle of the Standard Model, a puzzle which makes many particle physicists suspicious that we are missing an important part of the story, and

  3. seeking either dark matter particles or particles that may be shown someday to be “associated” with dark matter.

Here’s the thing, though:

From the public’s perspective, why do we need to study the properties of the Higgs? Because we think it might be different than the Standard Model predicts.

Why do we think it might be different than the Standard Model predicts? More generally, why do we expect the world to be different from the Standard Model at all? Well there are a few reasons, but they generally boil down to two things: the naturalness puzzle, and the fact that the Standard Model doesn’t have anything that could account for dark matter.

Naturalness is a powerful motivation, but it’s hard to sell to the general public. Does the universe appear fine-tuned? Then maybe it just is fine-tuned! Maybe someone fine-tuned it!

These arguments miss the real problem with fine-tuning, but they’re hard to correct in a short article. Getting the public worried about naturalness is tough, tough enough that I don’t think we can demand it of the average journalist, or accuse them of being lazy if they fail to do it.

That leaves dark matter. And for all that naturalness is philosophically murky, dark matter is remarkably clear. We don’t know what 96% of the universe is made of! That’s huge, and not just in a “gee-whiz-cool” way. It shows, directly and intuitively, that physics still has something it needs to solve, that we still have particles to find. Unless you are a fan of (increasingly dubious) modifications to gravity like MOND, dark matter is the strongest possible justification for machines like the LHC.

The LHC won’t confirm dark matter on its own. It might not directly detect it, that’s still quite up-in-the-air. And even if it finds deviations from the Standard Model, it’s not likely they’ll be directly caused by dark matter, at least not in a simple way.

But the reason that the press is describing the LHC’s mission in terms of dark matter isn’t just laziness. It’s because, from the public’s perspective, dark matter is the only vaguely plausible reason to spend billions of dollars searching for new particles, especially when we’ve already found the Higgs. We’re lucky it’s such a good reason.

What Counts as a Fundamental Force?

I’m giving a presentation next Wednesday for Learning Unlimited, an organization that presents educational talks to seniors in Woodstock, Ontario. The talk introduces the fundamental forces and talks about Yang and Mills before moving on to introduce my work.

While practicing the talk today, someone from Perimeter’s outreach department pointed out a rather surprising missing element: I never mention gravity!

Most people know that there are four fundamental forces of nature. There’s Electromagnetism, there’s Gravity, there’s the Weak Nuclear Force, and there’s the Strong Nuclear Force.

Listed here by their most significant uses.

What ties these things together, though? What makes them all “fundamental forces”?

Mathematically, gravity is the odd one out here. Electromagnetism, the Weak Force, and the Strong Force all share a common description: they’re Yang-Mills forces. Gravity isn’t. While you can sort of think of it as a Yang-Mills force “squared”, it’s quite a bit more complicated than the Yang-Mills forces.

You might be objecting that the common trait of the fundamental forces is obvious: they’re forces! And indeed, you can write down a force law for gravity, and a force law for E&M, and umm…

[Mumble Mumble]

Ok, it’s not quite as bad as xkcd would have us believe. You can actually write down a force law for the weak force, if you really want to, and it’s at least sort of possible to talk about the force exerted by the strong interaction.

All that said, though, why are we thinking about this in terms of forces? Forces are a concept from classical mechanics. For a beginning physics student, they come up again and again, in free-body diagram after free-body diagram. But by the time a student learns quantum mechanics, and quantum field theory, they’ve already learned other ways of framing things where forces aren’t mentioned at all. So while forces are kind of familiar to people starting out, they don’t really match onto anything that most quantum field theorists work with, and it’s a bit weird to classify things that only really appear in quantum field theory (the Weak Nuclear Force, the Strong Nuclear Force) based on whether or not they’re forces.

Isn’t there some connection, though? After all, gravity, electromagnetism, the strong force, and the weak force may be different mathematically, but at least they all involve bosons.

Well, yes. And so does the Higgs.

The Higgs is usually left out of listings of the fundamental forces, because it’s not really a “force”. It doesn’t have a direction; instead, it acts equally at every point in space. But if you include spin 2 gravity and spin 1 Yang-Mills forces, why not also include the spin 0 Higgs?

Well, if you’re doing that, why not include fermions as well? People often think of fermions as “matter” and bosons as “energy”, but in fact both have energy, and neither is made of it. Electrons and quarks are just as fundamental as photons and gluons and gravitons, just as central a part of how the universe works.

I’m still trying to decide whether my presentation about Yang-Mills forces should also include gravity. On the one hand, it would make everything more familiar. On the other…pretty much this entire post.

Pics or It Didn’t Happen

I got a tumblr recently.

One thing I’ve noticed is that tumblr is a very visual medium. While some people can get away with massive text-dumps, they’re usually part of specialized communities. The content that’s most popular with a wide audience is, almost always, images. And that’s especially true for science-related content.

This isn’t limited to tumblr either. Most of my most successful posts have images. Most successful science posts in general involve images. Think of the most interesting science you’ve seen on the internet: chances are, it was something visual that made it memorable.

The problem is, I’m a theoretical physicist. I can’t show you pictures of nebulae in colorized glory, or images showing the behavior of individual atoms. I work with words, equations, and, when I’m lucky, diagrams.

Diagrams tend to work best, when they’re an option. I have no doubt that part of the Amplituhedron’s popularity with the press owes to Andy Gilmore’s beautiful illustration, as printed in Quanta Magazine’s piece:

Gotta get me an artist.

The problem is, the nicer one of these illustrations is, the less it actually means. For most people, the above is just a pretty picture. Sometimes it’s possible to do something more accurate, like a 3d model of one of string theory’s six-dimensional Calabi-Yau manifolds:

What, you expected a six-dimensional intrusion into our world *not* to look like Yog-Sothoth?

A lot of the time, though, we don’t even have a diagram!

In those sorts of situations, it’s tempting to show an equation. After all, equations are the real deal, the stuff we theorists are actually manipulating.

Unless you’ve got an especially obvious equation, though, there are really only two things the general public will get out of it. Either the equation is surprisingly simple,

Isn’t it cute?

Or it’s unreasonably complicated,

Why yes, this is one equation that covers seventeen pages. You’re lucky I didn’t post the eight-hundred page one.

This is great for first impressions, but it’s not very repeatable. Show people one giant equation, and they’ll be impressed. Show them two, and they won’t have any idea what the difference is supposed to be.

If you’re not showing diagrams or equations, what else can you show?

The final option is, essentially, to draw a cartoon. Forget about showing what’s “really going on”, physically or mathematically. That’s what the article is for. For an image, just pick something cute and memorable that references the topic.

When I did an article for Ars Technica back in 2013, I didn’t have any diagrams to show, or any interesting equations. Their artist, undeterred, came up with a cute picture of sushi with an N=4 on it.

That sort of thing really helps! It doesn’t tell you anything technical, it doesn’t explain what’s going on…but it does mean that every time I think of the article, that image pops into my head. And in a world where nothing lasts without a picture to document it, that’s a job well done.

Why You Should Be Skeptical about Faster-than-Light Neutrinos

While I do love science, I don’t always love IFL Science. They can be good at drumming up enthusiasm, but they can also be ridiculously gullible. Case in point: last week, IFL Science ran a piece on a recent paper purporting to give evidence for faster-than-light particles.

Faster than light! Sounds cool, right? Here’s why you should be skeptical:

If a science article looks dubious, you should check out the source. In this case, IFL Science links to an article on the preprint server arXiv.

arXiv is a freely accessible website where physicists and mathematicians post their articles. The site has multiple categories, corresponding to different fields. It’s got categories for essentially any type of physics you’d care to include, with the option to cross-list if you think people from multiple areas might find your work interesting.

So which category is this paper in? Particle physics? Astrophysics?

General Physics, actually.

General Physics is arXiv’s catch-all category. Some of it really is general, and can’t be put into any more specific place. But most of it, including this, falls into another category: things arXiv’s moderators think are fishy.

arXiv isn’t a journal. If you follow some basic criteria, it won’t reject your articles. Instead, dubious articles are put into General Physics, to signify that they don’t seem to belong with the other scholarship in the established categories. General Physics is a grab-bag of weird ideas and crackpot theories, a mix of fringe physicists and overenthusiastic amateurs. There probably are legitimate papers in there too…but for every one of them, you can be sure that some experienced researcher found it suspicious enough to send into exile.

Even if you don’t trust the moderators of arXiv, there are other reasons to be wary of faster-than-light particles.

According to Einstein’s theory of relativity, massless particles travel at the speed of light, while massive particles always travel slower. To travel faster than the speed of light, you need to have a very unusual situation: a particle whose mass is an imaginary number.
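You can check this with the formula itself. Relativistic energy is E = mc²/√(1 − v²/c²); past the speed of light the square root turns imaginary, so keeping the energy real forces the mass to be imaginary. A quick sketch, in units where c = 1:

```python
import cmath

# Relativistic factor gamma = 1/sqrt(1 - v^2/c^2), with c = 1 for simplicity.
# Energy is E = gamma * m * c^2, so if gamma is imaginary, a real energy
# requires an imaginary mass m: the defining property of a tachyon.
def gamma(v):
    return 1 / cmath.sqrt(1 - v**2)

print(gamma(0.6))   # ordinary particle, slower than light: a real number, 1.25
print(gamma(2.0))   # faster than light: purely imaginary
```

Below the speed of light the factor is an ordinary real number; above it, the only way to make sense of the formula is an imaginary mass.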

Particles like that are called tachyons, and they’re a staple of science fiction. While there was a time when they were a serious subject of physics speculation, nowadays the general view is that tachyons are a sign we’re making bad assumptions.

Assuming that someone is a republic serial villain is a good example.

Why is that? It has to do with the nature of mass.

In quantum field theory, what we observe as particles arise as ripples in quantum fields, extending across space and time. The harder it is to make the field ripple, the higher the particle’s mass.

A tachyon has imaginary mass. This means that it isn’t hard to make the field ripple at all. In fact, exactly the opposite happens: it’s easier to ripple than to stay still! Any ripple, no matter how small, will keep growing until it’s not just a ripple, but a new default state for the field. Only when it becomes hard to change again will the changes stop. If it’s hard to change, though, then the particle has a normal, non-imaginary mass, and is no longer a tachyon!

Thus, the modern understanding is that if a theory has tachyons in it, it’s because we’re assuming that one of the quantum fields has the wrong default state. Switching to the correct default gets rid of the tachyons.
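A toy potential makes this concrete. For a field φ with V(φ) = −½m²φ² + ¼λφ⁴ (the numbers below are illustrative), the curvature V″(φ) plays the role of the mass-squared: negative at the unstable default φ = 0, positive at the new default the field rolls to:

```python
# A "tachyonic" field has negative mass-squared: the potential curves *down*
# at the default state phi = 0, so any ripple grows.  Toy potential with
# illustrative parameters m^2 = 1, lam = 1:
#   V(phi) = -1/2 * m2 * phi**2 + 1/4 * lam * phi**4
m2, lam = 1.0, 1.0

def curvature(phi):
    """V''(phi): the effective mass-squared for ripples around phi."""
    return -m2 + 3 * lam * phi**2

print(curvature(0.0))          # -1.0: negative, so phi = 0 is unstable
phi_new = (m2 / lam) ** 0.5    # the true minimum of V
print(curvature(phi_new))      # 2.0: positive, an ordinary non-tachyonic mass
```

Around the new default, ripples cost energy again, so the “tachyon” has disappeared into an ordinary massive particle.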

There are deeper problems with the idea proposed in this paper. Normally, the only types of fields that can have tachyons are scalars, fields that can be defined by a single number at each point, sort of like a temperature. The particles this article is describing aren’t scalars, though, they’re fermions, the type of particle that includes everyday matter like electrons. Those sorts of particles can’t be tachyons at all without breaking some fairly important laws of physics. (For a technical explanation of why this is, Lubos Motl’s reply to the post here is pretty good.)

Of course, this paper’s author knows all this. He’s well aware that he’s suggesting bending some fairly fundamental laws, and he seems to think there’s room for it. But that, really, is the issue here: there’s room for it. The paper isn’t, as IFL Science seems to believe, six pieces of evidence for faster-than-light particles. It’s six measurements that, if you twist them around and squint and pick exactly the right model, have room for faster-than-light particles. And that’s…probably not worth an article.

Misleading Headlines and Tacky Physics, Oh My!

It’s been making the rounds on the blogosphere (despite having come out three months ago). It’s probably showed up on your Facebook feed. It’s the news that (apparently) one of the biggest discoveries of recent years may have been premature. It’s….

The Huffington Post writing a misleading headline to drum up clicks!

The article linked above is titled “Scientists Raise Doubts About Higgs Boson Discovery, Say It Could Be Another Particle”. And while that is indeed technically all true, it’s more than a little misleading.

When the various teams at the Large Hadron Collider announced their discovery of the Higgs, they didn’t say it was exactly the Higgs predicted by the Standard Model. In fact, it probably shouldn’t be: most of the options for extending the Standard Model, like supersymmetry, predict a Higgs boson with slightly different properties. Until the Higgs is measured more precisely, these slightly different versions won’t be ruled out.

Of course, “not ruled out” is not exactly newsworthy, which is the main problem with this article. The Huffington Post quotes a paper that argues, not that there is new evidence for an alternative to the Higgs, but simply that one particular alternative that the authors like hasn’t been ruled out yet.

Also, it’s probably the tackiest alternative out there.

The theory in question is called Technicolor, and if you’re imagining a certain coat then you may have an idea of how tacky we’re talking.

Any Higgs will do…

To describe technicolor, let’s take a brief aside and talk about the colors of quarks.

Rather than having one type of charge going from plus to minus like Electromagnetism, the Strong Nuclear Force has three types of charge, called red, green, and blue. Quarks are charged under the strong force, and can be red, green, or blue, while the antimatter partners of quarks have the equivalent of negative charges: anti-red, anti-green, and anti-blue. The strong force binds quarks together into protons and neutrons. It is also charged under itself, which means that not only does it bind quarks together, it also binds itself together, so that it only acts at very, very short range.

In combination, these two facts have one rather surprising consequence. A proton contains three quarks, but a proton’s mass is over a hundred times the total mass of three quarks. The same is true of neutrons.

The reason why is that most of the mass isn’t coming from the quarks, it’s coming from the strength of the strong force. Mass, contrary to what you might think, isn’t fundamental “stuff”. It’s just a handy way of talking about energy that isn’t due to something we can easily see. Particles have energy because they move, but they also have energy due to internal interactions, as well as interactions with other fields like the Higgs field. While a lone quark’s mass is due to its interaction with the Higgs field, the quarks inside a proton are also interacting with each other, gaining enormous amounts of energy from the strong force trapped within. That energy, largely invisible from an outside view, contributes most of what we see as the mass of the proton.
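The numbers here are striking enough to be worth working out. Using approximate quark masses (a proton is two up quarks and a down quark):

```python
# Rough bookkeeping of where the proton's mass comes from.  Quark masses are
# approximate values in MeV; a proton contains two up quarks and one down quark.
m_up, m_down = 2.2, 4.7        # MeV, approximate
m_proton = 938.3               # MeV

quark_total = 2 * m_up + m_down
print(quark_total)             # about 9 MeV: the part owed to the Higgs field
print(m_proton / quark_total)  # roughly 100x: the rest is strong-force energy
```

Only about one percent of the proton’s mass comes from the Higgs-given quark masses; the other ninety-nine percent is strong-force binding energy.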

Technicolor asks the following: what if it’s not just protons and neutrons? What if the mass of everything, quarks and electrons and the W and Z bosons, were due not truly to the Higgs, but to another force, like the strong force but even stronger? The Higgs we think we saw at the LHC would not be fundamental, but merely a composite, made up of two “techni-quarks” with “technicolor” charges. [Edited to remove confusion with Preon Theory]

It’s…an idea. But it’s never been a very popular one.

Part of the problem is that the simpler versions of technicolor have been ruled out, so theorists are having to invoke increasingly baroque models to try to make it work. But that, to some extent, is also true of supersymmetry.

A bigger problem is that technicolor is just kind of…tacky.

Technicolor doesn’t say anything deep about the way the universe works. It doesn’t propose new [types of] symmetries, and it doesn’t say anything about what happens at the very highest energies. It’s not really tied in to any of the other lines of speculation in physics, and it doesn’t spark much discussion among researchers. It doesn’t require an end, a fundamental lowest level with truly fundamental particles: you could potentially keep adding new levels of technicolor, new things made up of other things made up of other things, ad infinitum.

And the fleas that bite ’em, presumably.

[Note: to clarify, technicolor theories don’t actually keep going like this, their extra particles don’t require another layer of technicolor to gain their masses. That would be an actual problem with the concept itself, not a reason it’s tacky. It’s tacky because, in a world where most physicists feel like we’ve really gotten down to the fundamental particles, adding new composite objects seems baroque and unnecessary, like adding epicycles. Fleas upon fleas as it were.]

In a word, it’s not sexy.

Does that mean it’s wrong? No, of course not. As the paper linked by Huffington Post points out, technicolor hasn’t been ruled out yet.

Does that mean I think people shouldn’t study it? Again, no. If you really find technicolor meaningful and interesting, go for it! Maybe you’ll give it the kick it needs to prove itself!

But good grief, until you manage that, please don’t spread your tacky, un-sexy theory all over Facebook. A theory like technicolor should get press when it’s got a good reason, and “we haven’t been ruled out yet” is never, ever, a good reason.

 

[Edit: Esben on Facebook is more well-informed about technicolor than I am, and pointed out some issues with this post. Some of them are due to me conflating technicolor with another old and tacky theory, while some were places where my description was misleading. Corrections in bold.]

Why I Can’t Explain Ghosts: Or, a Review of a Popular Physics Piece

Since today is Halloween, I really wanted to write a post talking about the spookiest particles in physics, ghosts.

And their superpartners, ghost riders.

The problem is, in order to explain ghosts I’d have to explain something called gauge symmetry. And gauge symmetry is quite possibly the hardest topic in modern physics to explain to a general audience.

Deep down, gauge symmetry is the idea that irrelevant extra parts of how we represent things in physics should stay irrelevant. While that sounds obvious, it’s far from obvious how you can go from that to predicting new particles like the Higgs boson.
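For readers who have seen a little electromagnetism, the textbook example of this idea (my addition here, not part of the original post) is the vector potential:

```latex
% The vector potential can be shifted by the gradient of any function \alpha(x):
A_\mu \;\to\; A_\mu + \partial_\mu \alpha
% but the measurable field strength (the electric and magnetic fields) is
% unchanged, because partial derivatives commute and the \alpha terms cancel:
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \;\to\; F_{\mu\nu}
```

The shift by α is exactly the sort of “irrelevant extra part” I mean: demanding that it stay irrelevant turns out to dictate the form of the whole theory.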

Explaining this is tough! Tough enough that I haven’t thought of a good way to do it yet.

Which is why I was fairly stoked when a fellow postdoc pointed out a recent popular physics article by Juan Maldacena, explaining gauge symmetry.

Juan Maldacena is a Big Deal. He’s the guy who figured out the AdS/CFT correspondence, showing that string theory (in a particular negatively curved space called AdS) and everybody’s favorite N=4 super Yang-Mills theory are secretly the same, a discovery which led to a Big Blue Dot on Paperscape. So naturally, I was excited to see what he had to say.

Big Blue Dot pictured here.


The core analogy he makes is with currencies in different countries. Just like gauge symmetry, currencies aren’t measuring anything “real”: they’re arbitrary conventions put in place because we don’t have a good way of just buying things based on pure “value”. However, also like gauge symmetry, they can have real-life consequences, as differences in currency exchange rates can lead to currency speculation, letting some people make money and others lose it. In Maldacena’s analogy the Higgs field works like a precious metal, making differences in exchange rates manifest as different prices of precious metals in different countries.

It’s a solid analogy, and one that is quite close to the real mathematics of the problem (as the paper’s Appendix goes into detail to show). However, I have some reservations, both about the paper as a whole and about the core analogy.

In general, Maldacena doesn’t do a very good job of writing something publicly accessible. There’s a lot of stilted, academic language, and a lot of use of “we” to do things other than lead the reader through a thought experiment. There’s also a sprinkling of terms that I don’t think the average person will understand; for example, I doubt the average college student knows flux as anything other than a zany card game.

Regarding the analogy itself, I think Maldacena has fallen into the common physicist trap of making an analogy that explains things really well…if you already know the math.

This is a problem I see pretty frequently. I keep picking on this article, and I apologize for doing so, but it’s got a great example of this when it describes supersymmetry as involving “a whole new class of number that can be thought of as the square roots of zero”. That’s a really great analogy…if you’re a student learning about the math behind supersymmetry. If you’re not, it doesn’t tell you anything about what supersymmetry does, or how it works, or why anyone might study it. It relates something unfamiliar to something unfamiliar.
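(For the curious: the “new class of number” in that quote refers to Grassmann numbers, which anticommute. This gloss is mine, not the linked article’s.)

```latex
% Grassmann numbers anticommute with each other:
\theta_1 \theta_2 = -\theta_2 \theta_1
% Setting \theta_1 = \theta_2 = \theta gives \theta\theta = -\theta\theta, so
\theta^2 = 0
% which is why they can be described as "square roots of zero".
```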

I’m worried that Maldacena is doing that in this paper. His setup is mathematically rigorous, but doesn’t say much about the why of things: why do physicists use something like this economic model to understand these forces? How does this lead to what we observe around us in the real world? What’s actually going on, physically? What do particles have to do with dimensionless constants? (If you’re curious about that last one, I like to think I have a good explanation here.)

It’s not that Maldacena ignores these questions, he definitely puts effort into answering them. The problem is that his analogy itself doesn’t really address them. They’re the trickiest part, the part that people need help picturing and framing, the part that would benefit the most from a good analogy. Instead, the core imagery of the piece is wasted on details that don’t really do much for a non-expert.

Maybe I’m wrong about this, and I welcome comments from non-physicists. Do you feel like Maldacena’s account gives you a satisfying idea of what gauge symmetry is?

The Hardest Audience Knows Just Enough to Be Dangerous

You’d think that it would be hard to explain physics to people who know absolutely nothing about physics.

And you might be right, if there were anyone these days who knew absolutely nothing about physics. If someone didn’t know what atoms were, or didn’t know what a physicist was, then yes, it would take quite a while to explain anything more than the basics. But most people know what atoms are, and know what physicists are, and at least have a basic idea that there are things called protons and neutrons and electrons.

And that’s often enough. Starting with a basis like that, I can talk people through the Large Hadron Collider, I can get them to picture Feynman Diagrams, I can explain, roughly, what it is I do.

On the other end, it’s not all that hard to explain what I do to people in my sub-field. Working on the same type of physics is like sharing a language: we have all sorts of terms to make explaining easier. While it’s still possible to trip up and explain too much or too little (a recent talk I gave left out the one part that one member of the audience needed…because everyone else would have gotten nothing out of it), you’re protected by a buffer of mutual understanding.

The hardest talks aren’t for the public, and they aren’t for fellow amplitudes-researchers. They’re for a general physics audience.

If you’re talking to physicists, you can’t start with protons and neutrons. Do that, and your audience is going to get annoyed with you rather quickly. You can’t rely on the common understanding everyone has of physics. In addition to making your audience feel like they’re being talked down to, you won’t manage to say anything substantial. You need to start at a higher level so that when you do describe what you do, it’s in enough detail that your audience feels like they really understand it.

At the same time, you can’t start with the jargon of your sub-field. If you want to really explain something (and not just have fifteen minutes of background before everyone tunes out) you need to build off of a common understanding.

The tricky part is, that “common understanding” is more elusive than you might think. For example, pretty much every physicist has some familiarity with Quantum Field Theory…but that can mean anything from “uses it every day” to “saw it a couple times back in grad school”. Too much background, and half your audience is bored. Too little, and half your audience is lost. You have to strike the proper balance, trying to show everyone enough to feel satisfied.

There are tricks to make this easier. I’ve noticed that some of the best speakers begin with a clever and unique take on something everyone understands. That way, people in very different fields will still have something they recognize, while people in the same field will still be seeing something new. Of course, the tricky part is coming up with a new example in the first place!

In general, I need to get better at estimating where my audience is. Talking to you guys is fun, but I ought to also practice a “physics voice” for discussions with physicists (as well as grants and applications), and an “amplitudes voice” for fellow specialists. The key to communication, as always, is knowing your audience.