Author Archives: 4gravitons

Made of Energy, or Made of Nonsense?

I made a few small modifications to the blog settings this week. Comments now support Markdown, reply-chains in the comments can go longer, and there are a few more sharing buttons on the posts. I’m gearing up to do a more major revamp of the blog in July, when the name changes over from 4 gravitons and a grad student to just 4 gravitons.

io9 did an article recently on scientific ideas that scientists wish the public would stop misusing. They’ve got a lot of good ones (Proof, Quantum, Organic), but they somehow managed to miss one of the big ones: Energy. Matt Strassler has a nice, precise article on this particular misconception, but nonetheless I think it’s high time I wrote my own.

There’s a whole host of misconceptions regarding energy. Some of them are simple misuses of language, like zero-calorie energy drinks:

Zero Purpose

Energy can be measured in several different units. You can use Joules, or electron-Volts, or ergs…or calories. Calories are a measure of energy, so zero calories quite literally means zero energy.
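To make the unit point concrete, here’s a minimal sketch of those conversions. The constants are standard definitions I’m assuming, not something from the post (and note that food labels actually use kilocalories, “Calories” with a capital C):

```python
# Energy unit conversions. Constants are standard definitions (assumed):
# 1 thermochemical calorie = 4.184 J, 1 electron-Volt ~ 1.602e-19 J.
CAL_TO_JOULES = 4.184
EV_TO_JOULES = 1.602176634e-19

def calories_to_joules(cal):
    """Convert (small) calories to Joules."""
    return cal * CAL_TO_JOULES

def calories_to_ev(cal):
    """Convert calories to electron-Volts."""
    return calories_to_joules(cal) / EV_TO_JOULES

# A "zero-calorie" drink, taken literally:
print(calories_to_joules(0))  # 0.0 -- zero calories really is zero energy
```

Same quantity, different rulers: whichever unit you pick, zero stays zero.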

Now, that’s not to say the makers of zero calorie energy drinks are lying. They’re just using a different meaning of energy from the scientific one. Their drinks give you vim and vigor, the get-up-and-go required to make money playing computer games. For most of the public, that “get-up-and-go” is called energy, even if scientifically it’s not.

That’s not really a misconception, more of an amusing use of language. This next one though really makes my blood boil.

Raise your hand if you’ve seen a Sci-Fi movie or TV show where some creature is described as being made of “pure energy”. Whether they’re peaceful, ultra-advanced ascended beings, or genocidal maniacs from another dimension, the concept of creatures made of “pure energy” shows up again and again and again.

You can’t fight the Drej, they’re pure bullshit!

Even if you aren’t the type to take Sci-Fi technobabble seriously, you’ve probably heard that matter and antimatter annihilate to form energy, or that photons are made out of energy. These sound more reasonable, but they rest on the same fundamental misconception:

Nothing is “made out of energy”.

Rather,

Energy is a property that things have.

Energy isn’t a substance, it isn’t a fluid, it isn’t some kind of nebulous stuff you can make into an indestructible alien body. Things have energy, but nothing is energy.

What about light, then? And what happens when antimatter collides with matter?

Light, just like anything else, has energy. The difference between light and most other things is that light has no mass.

In everyday life, we like to think of mass as some sort of basic “stuff”. If things are “made out of mass” or “made out of matter”, and something like light doesn’t have mass, then it must be made out of some other “stuff”, right?

The thing is, mass isn’t really “stuff” any more than energy is. Just like energy, mass is a property that things have. In fact, as I’ve talked about some before, mass is really just a type of energy. Specifically, mass is the energy something has when left alone and at rest. That’s the meaning of Einstein’s famous equation, E = mc²: it tells you how to take a known mass and calculate the rest energy that it implies.

Lots of hype for a unit conversion formula, huh?

In the case of light, all of its energy can be thought of in terms of its (light-speed) motion, so it has no mass. That might tempt you to think of it as being “made of energy”, but really, you and light are not so different.

You are made of atoms, and atoms are made of protons, neutrons, and electrons. Let’s consider a proton. A proton’s mass, expressed in the esoteric units physicists favor, is 938 Mega-electron-Volts. That’s how much energy a proton has alone and at rest. A proton is made of three quarks, so you’d think that they would contribute most of its mass. In reality, though, the quarks in protons have masses of only a few Mega-electron-Volts. Most of a proton’s mass doesn’t come from the mass of the quarks.

Quarks interact with each other via the strong nuclear force, the strongest fundamental force in existence. That interaction has a lot of energy, and when viewed from a distance that energy contributes almost all of the proton’s mass. So if light is “made of energy”, so are you.
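The arithmetic behind that claim is easy to check. The quark masses below are rough reference values, assumptions on my part rather than numbers from the post:

```python
# How little of a proton's 938 MeV comes from its quarks' masses.
# Quark masses are rough reference values (assumed); proton = up + up + down.
PROTON_MASS_MEV = 938.3
UP_MASS_MEV = 2.2    # approximate
DOWN_MASS_MEV = 4.7  # approximate

quark_mass = 2 * UP_MASS_MEV + DOWN_MASS_MEV
fraction = quark_mass / PROTON_MASS_MEV
print(f"quark masses account for {fraction:.1%} of the proton's mass")
```

Roughly one percent; the other ninety-nine percent is the energy of the strong-force interaction, viewed from a distance as mass.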

So why do people say that matter and anti-matter annihilate to make energy?

A matter particle and its anti-matter partner are opposite in a lot of ways. In particular, they have opposite charges: not just electric charge, but other types of charge too.

Charge must be conserved, so if a particle collides with its anti-particle, the result has a total charge of zero: the opposite charges of the two cancel each other out. Light has zero charge, so it’s one of the most common results of a matter-antimatter collision. When people say that matter and antimatter produce “pure energy”, they really just mean that they produce light.
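The bookkeeping can be sketched in a few lines; the particle list and the two-photon final state are illustrative choices of mine:

```python
# Charge bookkeeping in matter-antimatter annihilation.
# Charges in units of the proton charge; the particle list is illustrative.
charges = {"electron": -1, "positron": +1, "photon": 0}

def total_charge(particles):
    """Total charge of a list of particles."""
    return sum(charges[p] for p in particles)

before = ["electron", "positron"]
after = ["photon", "photon"]  # e+ e- -> two photons

# Conservation: the books balance, and they balance at zero.
assert total_charge(before) == total_charge(after) == 0
print("charge conserved: both sides sum to zero")
```

Because both sides must sum to the same total, and the opposite charges cancel, the zero-charge photon is a natural thing to come out the other end.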

So next time someone says something is “made of energy”, be wary. Chances are, they aren’t talking about something fully scientific.

Does Science have Fads?

97% of climate scientists agree that global warming exists, and is most probably human-caused. On a more controversial note, string theorists vastly outnumber adherents of other approaches to quantum gravity, such as Loop Quantum Gravity.

As many who disagree with climate change or string theory would argue, the majority is not always right. Science should be concerned with truth, not merely with popularity. After all, what if scientists are merely taking part in a fad? What makes climate change any more objectively true than pet rocks?

Apparently this is Wikipedia’s best example of a fad.

People are susceptible to fads, after all. A style of music becomes popular, and everyone’s listening to the same sounds. A style of clothing, and everyone’s wearing the same thing. So if an idea in science became popular, everyone might…write the same papers?

That right there is the problem. Scientists only succeed by creating meaningfully original work. If we don’t discover something new, we can’t publish, and as the old saying goes, it’s publish or perish out there. Even if social pressure gets us working on something, if we’re going to get any actual work done there has to be enough substance there for us to do something different, something no one has done before.

This doesn’t mean scientists can’t be influenced by popularity, but it means that that influence is limited by the requirements of doing meaningful, original work. In the case of climate change, climate scientists investigate the topic with so many different approaches and look at so many different areas of impact (for example, did you know rising CO2 levels make the ocean acidic?) that the whole field simply wouldn’t function if climate change wasn’t real: there’d be a contradiction, and most of the myriad projects involving it simply wouldn’t work. As I’ve talked about before, science is an interlocking system, and it’s hard to doubt one part without being forced to doubt everything else.

What about string theory? Here, the situation is a little different. There aren’t experiments testing string theory, so whether or not string theory describes the real world won’t have much effect on whether people can write string theory papers.

The existence of so many string theory papers does say something, though. The up-side of not involving experiments is that you can’t go and test something slightly different and write a paper about it. In order to be original, you really need to calculate something that nobody expected you to calculate, or notice a trend nobody expected to exist. The fact that there are so many more string theorists than loop quantum gravity theorists is in part because there are so many more interesting string theory projects than interesting loop quantum gravity projects.

In string theory, projects tend to be interesting because they unveil some new aspect of quantum field theory, the class of theories that explain the behavior of subatomic particles. Given how hard quantum field theory is, any insight is valuable, and in my experience these sorts of insights are what most string theorists are after. So while string theory’s popularity says little about whether it describes the real world, it says a lot about its ability to say interesting things about quantum field theory. And since quantum field theories do describe the real world, string theory’s continued popularity is also evidence that it continues to be useful.

Climate change and string theory aren’t fads, not exactly. They’re popular, not simply because they’re popular, but because they make important and valuable contributions to science. And as long as science continues to reward original work, that’s not about to change.

The Many (Body) Problems of the Academic Lifestyle

I’m visiting Perimeter this week, searching for apartments in the area. This got me thinking about how often one has to move in academia. You move for college, you move for grad school, you move for each postdoc job, and again when you start as a professor. Even then, you may not get to stay where you are if you don’t manage to get tenure, and it may be healthier to resign yourself to moving every seven years rather than assuming you’re going to settle down.

Most of life isn’t built around the idea that people move across the country (or the world!) every 2-7 years, so naturally this causes a few problems for those on the academic path. Below are some well-known, and not-so-well-known, problems facing academics due to their frequent relocations:

The two-body problem:

Suppose you’re married, or in a committed relationship. Better hope your spouse has a flexible job, because in a few years you’re going to be moving to another city. This is even harder if your spouse is also an academic, as that requires two rare academic jobs to pop up in the same place. And woe betide you if you’re out of synch, and have to move at different times. Many couples end up having to resort to some sort of long-distance arrangement, which further complicates matters.

The N-body problem:

Like the two-body problem, but for polyamorous academics. Leads to poly-chains up and down the East Coast.

The 2+N-body problem:

Alternatively, add a time dimension to your two-body problem via the addition of children. Now your kids are busily being shuffled between incommensurate school systems. But you’re an academic, you can teach them anything they’re missing, right?

The warm body problem:

Of course, all this assumes you’re in a relationship. If you’re single, you instead have the problem of never really having a social circle beyond your department, having to tenuously rebuild your social life every few years. What sorts of clubs will the more socially awkward of you enter, just to have some form of human companionship?

The large body of water problem:

We live in an age where everything is connected, but that doesn’t make distance cheap. An ocean between you and your collaborators means you’ll rarely be awake at the same time. And good luck crossing that ocean again, not every job will be eager to pay relocation expenses.

The obnoxious governing body problem:

Of course, the various nations involved won’t make all this travel easy. Many countries have prestigious fellowships only granted on the condition that the winner returns to their home country for a set length of time. Since there’s no guarantee that anyone in your home country does anything similar to what you do, this sort of requirement can have people doing whatever research they can find, however tangentially related, or trying to avoid the incipient bureaucratic nightmare any way they can.

 

Experimentalist Says What?

I’m a theoretical physicist. That means I work with pencil and paper, or with my laptop, or at most with a computer cluster. I don’t have a lab, and even if I did I wouldn’t have any equipment to store there.

By contrast, most physicists (and most scientists in general) are experimentalists, the people who actually do experiments, actually work in labs, and actually use piles and piles of expensive equipment. Naturally, these two groups have very different ways of doing things, spawned by different requirements for their jobs. This leads to very different ways of talking. We theorists sometimes get confused by the quaint turns of phrase used by experimentalists, so I’ve put together this handy translation guide:

 

Lab: Kind of like an office, but has a bunch of big machines in it for some reason. Also, in some of them they don’t even drink coffee, some nonsense about toxic contaminants. I don’t know how they get any work done with all those test tubes all over the place.

PI: Not Private Investigator, but close! The Principal Investigator is the big cheese among the experimentalists, the one who owns all the big machines. All of the others must bow before him or her; even fellow professors must grovel if they want to use the PI’s expensive equipment. Naturally, this makes experimentalists very hierarchical, a sharp contrast to theorists, who are obviously totally egalitarian.

Poster: Let me tell you a secret about experimentalists: there are a lot of them. Way more than there are theorists. So many, that if they all go to a conference it’s impossible for them all to give talks! That’s where posters come in: some of the experimentalists all stand in a room in front of rectangles of cardboard covered in charts, while the others walk around and ask questions. Traditionally, these posters are printed an hour before the conference, obviously for maximum freshness and not at all because of procrastination.

Group: Like our Institutes, but (because there are a lot of experimentalists) there isn’t just one per university and (because of the shared lab) they actually have something to talk about. This leads to regular group meetings, because when you’re using expensive equipment you actually need to show you’re doing something worthwhile with it.

IRB: For the medical and psychological folks, the Institutional Review Board is there to tell you that, no, you can’t infect monkeys with flesh-eating bacteria just to see what happens. They’re also the people who ask you whether a grammatical change in your online survey will pose risks to pregnant women, which is clearly exactly as important. Theorists don’t have these, because numbers are an oppressed underclass with no rights to speak of. EHS (Environmental Health and Safety) fills a similar role for those who only oppress yeast and their own grad students.

Annual Meeting: Experimentalists tend to be part of big organizations like the American Physical Society. And that’s all well and good, occupies a space on the CV and so forth. What’s somewhat more baffling is their tendency to trust those organizations to run conferences. Generally these are massive affairs, with people from all sorts of sub-fields participating. This only works because experimentalists have the mysterious ability to walk into each other’s talks and actually understand what’s going on, even if the subject matter is very different from what they’re used to. Experts suggest this has something to do with actually studying real things in the real world, but this is a hypothesis at best.

Insert Muscle Joke Here

I’m graduating this week, so I probably shouldn’t spend too much time writing this post. I ought to mention, though, that there has been some doubt about the recent discovery by the BICEP2 telescope of evidence for gravitational waves in the cosmic microwave background caused by the early inflation of the universe. Résonaances got to the story first and Of Particular Significance has some good coverage that should be understandable to a wide audience.

In brief, the worry is that the signal detected by BICEP2 might not be caused by inflation, but instead by interstellar dust. While the BICEP2 team used several models of dust to show that it should be negligible, the controversy centers around one of these models in particular, one taken from another, similar experiment called PLANCK.

The problem is, BICEP2 didn’t get PLANCK’s information on dust directly. Instead, it appears they took the data from a slide in a talk by the PLANCK team. This process, known as “data scraping”, involves taking published copies of the slides and reading information off of the charts presented. If BICEP2 misinterpreted the slide, they might have miscalculated the contribution by interstellar dust.

If you’re like me, the whole idea of data scraping seems completely ludicrous. The idea of professional scientists sneaking information off of a presentation, rather than simply asking the other team for data like reasonable human beings, feels almost cartoonishly wrong-headed.

It’s a bit more understandable, though, when you think about the culture behind these big experiments. The PLANCK and BICEP2 teams are colleagues, but they are also competitors. There is an enormous amount of glory in finding evidence for something like cosmic inflation first, and an equally enormous amount of shame in screwing up and announcing something that turns out to be wrong. As such, these experiments are quite protective of their data. Not only might someone with early access to the data preempt them on an important discovery, they might rush to publish a conclusion that is wrong. That’s why most of these big experiments spend a large amount of time checking and re-checking the data, communicating amongst themselves and settling on an interpretation before they feel comfortable releasing it to the wider community. It’s why BICEP2 couldn’t just ask PLANCK for their data.

From BICEP2’s perspective, they can expect that plots presented at a talk by PLANCK should be accurate, digital plots. Unlike Fox News, scientists have an obligation to present their data in a way that isn’t misleading. And while relying on such a dubious source seems like a bad idea, by all accounts that’s not what the BICEP2 team did. PLANCK’s data was just one dust model used by the team, kept in part because it agreed well with other, non-“data-scraped” models.

It’s a shame that these experiments are so large and prestigious that they need to guard their data in such a potentially destructive way. My sub-field is generally much nicer about this sort of thing: the stakes are lower, and the groups are smaller and have less media attention, so we’re able to share data when we need to. In fact, my most recent paper got a significant boost from some data shared by folks at the Perimeter Institute.

Only time will tell whether the BICEP2 result wins out, or whether it was a fluke caused by caustic data-sharing practices. A number of other experiments are coming online within the next year, and one of them may confirm or deny what BICEP2 has shown.

How do I get where you are?

I’ve mentioned before that this blog will be undergoing a redesign this summer, transitioning from 4 gravitons and a grad student to just 4gravitons.wordpress.com. One part of that redesign will be the introduction of new categories to help people search for content, as well as new guides like the ones for N=4 super Yang-Mills and the (2,0) theory for some of those categories. Of those, one planned category/guide will discuss careers in physics, with an eye towards explaining some of the often-unstated assumptions behind the process.

I’ve already posted on being a graduate research assistant and on what a postdoc is. I haven’t said much yet about the process leading up to becoming a graduate student. In this post, I’m going to give an overview of a career in theoretical physics, with a focus on what happens before you find an advisor. This is going to be inherently biased, based as it will be on my experiences. In particular, each country’s education system is different, so much of this will only be relevant for students in the US.

Let’s start at the beginning.

A very good place to start.

If you want to become a theoretical physicist, you’d better start by taking physics and math courses in high school. Unfortunately, this is where socioeconomic status has a big effect. Some schools have Advanced Placement or International Baccalaureate courses that let you get a head-start on college; many don’t. Some schools don’t even have physics courses at all anymore. My only advice here is to get what you can, when you can. If you can take a physics course, do it. If you can take calculus, do it. If you can take classes that will give you university credit, take them.

After high school, you go to college for a Bachelor’s degree in physics. Getting into college these days is some sort of ridiculous popularity contest, and I don’t pretend to be able to give advice on that. What I can say is that once you’re in college, coursework is important, but research is more important. Graduate schools will look at how well you did in your courses and how advanced those courses were, but they will pay special attention to who you get recommendations from, and whether you did research with them. Whether or not your college has anyone who you can research with, you should consider doing summer research somewhere interesting. With programs like the NSF’s Research Experience for Undergraduates (or REU) you can apply to get hooked up with interesting projects and mentors. In addition to looking good on an application to grad school, doing research helps boost your self-confidence: knowing that you can do something real really helps you start feeling like a scientist. Research also teaches you specialized skills much faster than coursework can: I’ve learned much more about programming from having to use it on projects than from any actual programming course.

That said, coursework is also useful. You want courses that will familiarize you with the basic tools of your field: physics courses on classical mechanics, quantum mechanics, and electromagnetism, and math courses on linear algebra and differential equations. You want to take a math course on group theory, but only if it’s taught by a physicist, as mathematicians focus on different aspects. More than any of that, though, you want to try to take at least a few graduate-level courses while you’re still in college.

That’s important, because grad school in theoretical physics is kind of a mess. You’ll be there for around five years in total (I was at the low end with four; some people take six or seven). However, you take most if not all of your courses in the first two years. In general, during that time you are paid as a Teaching Assistant. The school pays your tuition and a livable (if barely) wage, and in return you lead lab sections or grade papers. Teaching experience can be a positive thing, but you don’t want to keep doing it for too long, because the point of grad school isn’t teaching or courses, it’s research. Your goal is to find an advisor who is willing to pay you out of one of their (usually government) grants, so that you can transition from Teaching Assistant to Research Assistant. This is hard to do while you’re still taking courses: you won’t have time, and worse, you won’t know everything you need. Theoretical physics requires a lot of background, and much of it gets taught in grad school. Here at Stony Brook, you’d be taking graduate-level quantum mechanics, quantum field theory, and string theory. Until recently, each one of those was a one-year course, and the most logical way to take them was one after the other. Add that up, and that’s three years…kind of a problem when you want to start research after two. That’s why getting ahead in courses, however and whenever you can, is so important: not so much for the courses themselves, but so you can get past them and do research.

Research is what you do for the rest of your time in grad school. It’s what you do after you graduate, when you become a postdoc. It (and teaching) are what you do as a professor, what you are judged on when they decide whether or not you get tenure. Working through research is going to teach you more than any other experience you will have, so get as much of it as you can. And good luck!

Look what I made!

In a few weeks, I’ll be giving a talk for Stony Brook’s Graduate Awards Colloquium, to an audience of social science grad students and their parents.

One of the most useful tools when talking to people in other fields is a shared image. You want something from your field that they’ve seen, that they’re used to, that they’ll recognize. Building off of that kind of thing can be a great way to communicate.

If there’s one particle physics image that lots and lots of people have seen, it’s the Standard Model. Generally, it’s organized into charts like this:

Standard_Model_of_Elementary_Particles

I thought that if people saw a chart like that, but for N=4 super Yang-Mills, it might make the theory seem a bit more familiar. N=4 super Yang-Mills has a particle much like the Standard Model’s gluon with spin 1, paired with four gluinos, particles that are sort of but not really like quarks with spin 1/2, and six scalars, particles whose closest analogue in the Standard Model is the Higgs with spin 0.

In N=4 super Yang-Mills, none of these particles have any mass, since if supersymmetry isn’t “broken” all particles have the same mass. So where mass is written in the Standard Model table, I can just put zero. The table I linked also gives the electric charge of each particle. That doesn’t really mean anything for N=4 super Yang-Mills. It isn’t a theory that tries to describe the real world, so there’s no direct equivalent to a real-world force like electromagnetism. Since everything in the theory has to have the same charge, again due to supersymmetry, I can just list all of their “electric charges” as zero.

Putting it all together, I get the diagram below. The theory has eleven particles in total, so it won’t fit into a nice neat square. Still, this should be more familiar than most of the ways I could present things.
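As a rough sketch, here is that particle content tallied up. The spins and multiplicities are the ones described above; the dictionary layout is my own:

```python
# The particle content of N=4 super Yang-Mills, tallied up.
# Spins and counts follow the post; the data layout is illustrative.
multiplet = {
    "gluon":  {"spin": 1,   "count": 1},  # like the Standard Model's gluon
    "gluino": {"spin": 0.5, "count": 4},  # sort of (but not really) like quarks
    "scalar": {"spin": 0,   "count": 6},  # closest analogue: the Higgs
}

total = sum(p["count"] for p in multiplet.values())
print(total)  # 11 -- why the chart won't fit in a neat square

# Unbroken supersymmetry: every particle gets the same mass and charge, zero.
masses = {name: 0 for name in multiplet}
```

Eleven entries, all with mass zero and charge zero, which is why the resulting chart is mostly a single repeated cell.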

N4SYMParticleContent

The PhD Defense

Last Wednesday I completed the final stage of my PhD, the Defense. I booted up a projector and, in a room filled with esteemed physicists, eager grad students, and a three foot sub, I summarized the last two years of my work. A few questions later, people were shaking my hand and calling me “Doctor von Hippel”.

Now that I’m transitioning out of the grad student world, my blog will be transitioning too. I’ll be starting work as a Postdoctoral Fellow in the Fall at the Perimeter Institute for Theoretical Physics. Some time in between, probably in July, this blog will undergo a redesign, hopefully becoming easier to navigate. I’ll also be dropping the “and a grad student” from the title, switching to a new URL, 4gravitons.wordpress.com. Don’t worry, traffic from the old address will be forwarded, so infrequent readers won’t lose track. That said, if anyone with more experience has some advice about making the transition more seamless I’d love to hear it.

There are a lot of stereotypes about the PhD Defense, and mine broke almost all of them. My advisor hadn’t been directly involved in my work, my committee chair was one of the nicest, mellowest professors I’ve ever known, my experimentalist asked me a theoretical physics question, and my external member was Nima friggin’ Arkani-Hamed.

That said, I’ve also seen several other PhD Defenses, and I have to say that the stereotypes are usually right on the money. And since I’m on a bit of a list-based comedy kick recently, let me introduce you to the four members of your PhD committee:

First, of course, is your advisor. If you two collaborate closely, you may find yourself presenting material that your advisor had a hand in. Naturally, the other committee members will ask questions about this material, and naturally you will answer them. Naturally, those answers will not be how your advisor would have explained it, so naturally your advisor will start explaining it themselves. (After all, it’s their work that’s being questioned!) Manage things well and the whole defense will be an argument between your advisor and the other committee members, and you won’t have to say anything at all!

Second is your committee chair. This is someone from your field, chosen for their general eminence and chair-ish-ness. They’ve done a lot of these before, and in their mind they’ve developed a special bond with the students, a bond forged by questions. See, if you have a typical committee chair, they will ask you the toughest, most nitpicky, most downright irrelevant lines of questions possible. The chair’s goal isn’t to keep things moving, it’s to make sure that you took their class and remember everything from it, no matter how much time that takes away from discussing your actual dissertation.

Third you must face your experimentalist. According to the ancient ideals of academia (ideals somehow unbreakably important for grad students and largely irrelevant for top-level university administrators), a dissertation must be judged not only by the yes-men of your own sub-field, but also by someone from the rest of your department. For a theoretical physicist, that means bringing in an experimental physicist. You may try to make things accessible to this person, but eventually you have to actually start talking about your work. This is healthy, as it will allow them much-needed sleep. Once they awake, they will bless you with a question that represents the most tenuous link they can draw between their own work and yours, generally asking after the mass of some subatomic particle. Once you have demonstrated your ignorance in some embarrassing fashion the experimentalist may return to sleep.

Finally, the defense brings in a special individual, the external member. Not only must you prove your worth to an experimentalist, but to someone from outside of your department altogether! For the lucky, this means someone who does similar work at a nearby university. For the terminally rural, this instead means finding the closest department and bringing in someone who will at least recognize some of the words in your talk. For us, this generally means a mathematician. Like the experimentalist, they will favor you with bewildered looks or snores, depending on temperament. Unlike the experimentalist, they are under no illusion that anything they do is relevant to anything you do, so their questions will be mercifully brief.

Grilled by these four, you then leave the room, allowing them to talk about the weather or their kids or something before they ask you back in to tell you that, of course, you’ve got your PhD. Because after all that, anything else would just be rude.

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of sorts of particles out there, at least if you judge by science fiction (these two are also relevant). Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects, you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word "super" in front of something almost always works. If that fails, it's time to go for more specific conventions: to find the partner of an existing particle, if the new particle is a boson, just add "s-" (for "super"…or "scalar", apparently) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add "-ino" to the end, getting something like a gluino if you start with a gluon. If you've heard of neutrinos, you may know that neutrino means "little neutral one". You might perfectly rationally expect that gluino means "little gluon", if you had any belief that physicists name things logically. We don't. A gluino is called a gluino because it's a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what "neutrino" actually means.

Pictured: the superpartner of Nidoran?

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probability amplitudes using brackets of two quantum states, \langle a | b\rangle. What if you need to separate the two states, \langle a| and |b\rangle? Then you've got a "bra" and a "ket"!
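For those who haven't run into Dirac's notation before, here's a minimal sketch of how the pieces fit together (the state labels a and b are just placeholders):

```latex
% Dirac notation: a "bra" and a "ket" join into a "bra(c)ket"
\langle a |                % bra: the dual (row-vector) version of state a
| b \rangle                % ket: the state b itself (a column vector)
\langle a | b \rangle      % bracket: their inner product, a probability amplitude
|\langle a | b \rangle|^2  % the actual probability of measuring a given state b
```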

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for "Three quarks for Muster Mark!" While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent and b) this isn't exactly the most important or memorable line in the book. So Gell-Mann wasn't so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The final, worst category, though, doesn't even have good stories going for it. These are the names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.