Monthly Archives: April 2014

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but judging by science fiction there must be plenty of different sorts out there (these two are also relevant). Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects: you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late ’50s and early ’60s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early ’60s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose, you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why, gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word “super” in front of something almost always works. If that fails, it’s time to go for more specific conventions: to find the partner of an existing particle, if the new particle is a boson, just add “s-” (for “super”… or apparently “scalar”) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add “-ino” to the end, getting something like a gluino if you start with a gluon. If you’ve heard of neutrinos, you may know that neutrino means “little neutral one”. You might perfectly rationally expect that gluino means “little gluon”, if you had any belief that physicists name things logically. We don’t. A gluino is called a gluino because it’s a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what “neutrino” actually means.

Pictured: the superpartner of Nidoran?
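
Just for fun, here’s the convention from above spelled out as a few lines of Python. This is a rough sketch of my own: the particle lists and the crude “drop a trailing -on” stemming rule are approximations chosen so the familiar names come out right, not anything official.

```python
# A tongue-in-cheek sketch of the superpartner naming convention described above.

# These existing particles are fermions, so their new partners are bosons: add "s-".
FERMIONS = ["electron", "tau", "neutrino", "quark"]
# These existing particles are bosons, so their new partners are fermions: add "-ino".
BOSONS = ["gluon", "photon", "higgs"]

def superpartner(name: str, partner_is_boson: bool) -> str:
    if partner_is_boson:
        return "s" + name                      # electron -> selectron, tau -> stau
    # Crude stemming: drop a trailing "on" before tacking on "-ino".
    stem = name[:-2] if name.endswith("on") else name
    return stem + "ino"                        # gluon -> gluino, higgs -> higgsino

for p in FERMIONS:
    print(p, "->", superpartner(p, partner_is_boson=True))
for p in BOSONS:
    print(p, "->", superpartner(p, partner_is_boson=False))
```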

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probabilities using brackets of two quantum states, \langle a | b\rangle. What if you need to separate the two states, \langle a| and |b\rangle? Then you’ve got a “bra” and a “ket”!
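
If you want to see a “bra” meet a “ket” in practice, here’s a minimal sketch in Python. The two-component state vectors are toy numbers of my own, not anything from the post.

```python
import numpy as np

# Two normalized quantum states as complex vectors -- toy numbers of my own.
a = np.array([1.0, 0.0], dtype=complex)
b = np.array([1.0, 1.0j], dtype=complex) / np.sqrt(2)

# The "bra" <a| is the conjugate transpose of the "ket" |a>, and the bracket
# <a|b> is their inner product; np.vdot conjugates its first argument for us.
amplitude = np.vdot(a, b)

# The squared magnitude of the bracket gives the probability of finding
# one state when the system is prepared in the other.
probability = abs(amplitude) ** 2
print(amplitude, probability)   # roughly (0.707+0j) and 0.5
```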

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for “Three quarks for Muster Mark!” While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent, and b) this isn’t exactly the most important or memorable line in the book. So Gell-Mann wasn’t so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The final, worst category, though, doesn’t even have good stories going for it. These are the names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.

A Question of Audience

I’ve been thinking a bit about science communication recently.

One of the most important parts of communicating science (or indeed, communicating anything) is knowing your audience. Much of the time, if a piece is flawed, it’s flawed because the author didn’t have a clear idea of who they were talking to.

A persistent worry among people who communicate science to the public is that we’re really just talking to ourselves. If all the people praising you for your clear language are scientists, then maybe it’s time to take a step back and think about whether you’re actually being understood by anyone else.

This blog’s goal has always been to communicate science to the general public, and most of my posts are written with as little background assumed as possible. That said, I sometimes wonder whether that’s actually the audience I’m reaching.
WordPress has a handy feature that lets me track which links people click to get to my blog, which gives me a rough way to gauge my audience.

When a new post goes up, I get around ten to twenty clicks from Facebook. Those are people I know, which for the most part these days means physicists. I get a couple of clicks from Twitter, where my followers are a mix of young scientists, science journalists, and amateurs interested in science. On WordPress, my followers are also a mix of specialists and enthusiasts. Most interesting, to me at least, are the visitors who get to my blog via Google searches. Naturally, they come in regardless of whether I have a new post or not, adding an extra twenty-five or so views every day. Judging by the sites (google.fr, google.ca), these people come from all over the world, and judging by their queries they run from physics PhD students to people with no physics knowledge whatsoever.

Overall then, I think I’m doing a pretty good job getting the word out. As my site’s Google rankings improve, more non-physicists will read what I have to say. It’s a diverse audience, but I think I’m up to the challenge.

Numerics, or, Why can’t you just tell the computer to do it?

When most people think of math, they think of the math they did in school: repeated arithmetic until your brain goes numb, followed by basic algebra and trig. You weren’t allowed to use calculators on most tests for the simple reason that almost everything you did could be done by a calculator in a fraction of the time.

Real math isn’t like that. Mathematicians deal in proofs and abstract concepts, definitions and constructions and functions, often with not a single actual number in sight. That much, at least, shouldn’t be surprising.

What might be surprising is that even tasks which seem like things a computer could do easily can take a fair bit of human ingenuity.

In physics, I do a lot of integrals. For those of you unfamiliar with calculus, integrals can be thought of as the area between a curve and the x-axis.

Areas seem like the sort of thing it would be easy for a computer to find. Chop the space into little rectangles, add up all the rectangles under the curve, and if your rectangles are small enough you should get the right answer. Broadly, this is the method of numerical integration. Since computers can do billions of calculations per second, you can chop things up into billions of rectangles and get as close as you’d like, right?

Heck, ten is a lot. Can we just do ten?
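
If you’re curious what that rectangle recipe looks like in practice, here’s a minimal sketch in Python (using NumPy; the curve x² and the rectangle counts are just my own choices for illustration):

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Chop [a, b] into n equal-width rectangles and add up their areas,
    using the value of f at each rectangle's midpoint as its height."""
    width = (b - a) / n
    midpoints = a + (np.arange(n) + 0.5) * width
    return np.sum(f(midpoints)) * width

# A well-behaved curve: the area under x^2 between 0 and 1 is exactly 1/3.
f = lambda x: x**2
for n in (10, 100, 10_000):
    print(n, midpoint_rule(f, 0.0, 1.0, n))
# Ten rectangles already give about 0.3325; ten thousand match 1/3 to
# better than seven decimal places.
```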

For some curves, this works fine. For others, though…

Ten might not be enough for this one.

See how the left side of that plot goes off the chart? That curve goes to infinity. No matter how many rectangles you put on that side, you still won’t have any that are infinitely tall, so you’ll still miss that part of the curve.

Surprisingly enough, the area under this curve isn’t infinite. Do the integral correctly, and you get a result of 2. Set a computer to calculate this integral via the sort of naïve numerical integration discussed above though, and you’ll never find that answer. You need smarter methods: smart humans doing the math, or smart humans programming the computer.
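
To make that concrete, take 1/√x between 0 and 1 as a stand-in for the curve in the plot: it blows up at the left edge, yet its area is exactly 2. The naive rectangles creep toward the answer painfully slowly, while a little human cleverness (here, the substitution x = u², which removes the singularity entirely) gets there with ten rectangles:

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    # Same rectangle recipe as before, repeated so this snippet runs on its own.
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(x)) * (b - a) / n

# 1/sqrt(x) shoots off to infinity at x = 0, yet its area on [0, 1] is exactly 2.
f = lambda x: 1.0 / np.sqrt(x)
for n in (10, 1_000, 1_000_000):
    print(n, midpoint_rule(f, 0.0, 1.0, n))
# The error only shrinks like 1/sqrt(n): even a million rectangles
# still come in around 1.9994.

# The smarter route: substitute x = u**2, so dx = 2u du and the integrand
# becomes (1/u) * 2u = 2.  The singularity is gone, and ten rectangles
# land on the exact answer.
g = lambda u: 2.0 * np.ones_like(u)
print(midpoint_rule(g, 0.0, 1.0, 10))   # 2.0
```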

Another way this can come up is if you’re adding up two parts of something that go to infinity in opposite directions. Try to integrate each part by itself and you’ll be stuck.

Plot of the first piece on its own.

Plot of the second piece on its own.

But add them together, and you get something quite a bit more tractable.

Yeah, definitely a ten-rectangle job.
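
The post’s plots don’t say what those two pieces actually are, so here’s a toy pair of my own with the same behavior: each one blows up at x = 0, in opposite directions, but their sum is perfectly tame.

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    # Same rectangle recipe as before, repeated so this snippet runs on its own.
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(x)) * (b - a) / n

# Toy stand-ins for the two plotted pieces: each diverges at x = 0,
# one toward +infinity and one toward -infinity.
f1 = lambda x: 1.0 / x + 1.0
f2 = lambda x: -1.0 / x + x

# Integrated separately, the rectangle sums never settle down; they just
# keep drifting apart as you add more rectangles.
for n in (10, 1_000, 100_000):
    print(n, midpoint_rule(f1, 0.0, 1.0, n), midpoint_rule(f2, 0.0, 1.0, n))

# Add the pieces first and the infinities cancel: f1 + f2 = 1 + x,
# whose area on [0, 1] is exactly 1.5.  Ten rectangles do the job.
total = lambda x: f1(x) + f2(x)
print(midpoint_rule(total, 0.0, 1.0, 10))   # 1.5, up to roundoff
```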

Numerical integration, and computers in general, are very important tools in a scientist’s arsenal. But in order to use them, we have to be smart and know what we’re doing. Knowing how to use our tools right can take almost as much expertise and care as working without them.

So no, I can’t just tell the computer to do it.