Still Traveling

I’m still traveling this week, so this will be a short post.

Last year, when I went to Amplitudes I left Europe right after. This felt like a bit of a waste: an expensive, transcontinental flight, and I was only there for a week?

So this year, I resolved to visit a few more places. I was at the Niels Bohr Institute in Copenhagen earlier this week.

[Image: live LHC collisions represented as lights shining on the face of the building, rather spoiled by the lack of any actual darkness to see them by.]

Now, I’m at Mainz, visiting Johannes Henn.

Oddly enough, I’ve got family connections to both places. My great-grandfather spent some time at the Niels Bohr Institute on his way out of Europe, and I have a relative who works at Mainz. So while the primary purpose of this trip was research, I’ve gotten to learn a little family history in the process.

Amplitudes 2016

I’m at Amplitudes this week, in Stockholm.

[Image: the land of twilight at 11pm.]

Last year, I wrote a post giving a tour of the field. If I had to write it again this year, most of the categories would be the same, but the achievements listed would have advanced: more loops and legs, more complicated theories, and more insight.

The ambitwistor string now goes to two loops, while my collaborators and I have pushed the polylogarithm program to five loops (dedicated post on that soon!). A decent number of techniques can now be applied to QCD, including a differential equation-based method that was used to find a four-loop, three-particle amplitude. Others tied together different approaches, found novel structures in string theory, or linked amplitudes techniques to physics from other disciplines. The talks have been going up on YouTube pretty quickly, thanks to diligent work by Nordita’s tech guy, so if you’re at all interested, check them out!

The (but I’m Not a) Crackpot Style Guide

Ok, ok, I believe you. You’re not a crackpot. You’re just an outsider, one with a brilliant new idea that would overturn the accepted paradigms of physics, if only someone would just listen.

Here’s the problem: you’re not alone. There are plenty of actual crackpots. We get contacted by them fairly regularly. And most of the time, they’re frustrating and unpleasant to deal with.

If you want physicists to listen to you, you need to show us you’re not one of those people. Otherwise, most of us won’t bother.

I can’t give you a foolproof way to do that. But I can give some suggestions that will hopefully make the process a little less frustrating for everyone involved.

Don’t spam:

Nobody likes spam. Nobody reads spam. If you send a mass email to every physicist whose email address you can find, none of them will read it. If you repeatedly post the same thing in a comment thread, nobody will read it. If you want people to listen to you, you have to show that you care about what they have to say, and in order to do that you have to tailor your message. This leads into the next point,

Ask the right people:

Before you start reaching out, you should try to get an idea of who to talk to. Physics is quite specialized, so if you’re taking your ideas seriously you should try to contact people with a relevant specialization.

Now, I know what you’re thinking: your ideas are unique, no-one in physics is working on anything similar.

Here, it’s important to distinguish the problem you’re trying to solve from how you’re trying to solve it. Chances are, no-one else is working on your specific idea…but plenty of people are interested in the same problems.

Think quantum mechanics is built on shoddy assumptions? There are people who spend their lives trying to modify quantum mechanics. Have a beef with general relativity? There’s a whole sub-field of people who modify gravity.

These people are a valuable resource for you, because they know what doesn’t work. They’ve been trying to change the system, and they know just how hard it is to change, and just what evidence you need to be consistent with.

Contacting someone whose work just uses quantum mechanics or relativity won’t work. If you’re making elementary mistakes, we can put you on the right track…but if you think you’re making elementary mistakes, you should start out by asking help from a forum or the like, not contacting a professional. If you think you’ve really got a viable replacement to an established idea, you need to contact people who work on overturning established ideas, since they’re most aware of the complicated webs of implications involved. Relatedly,

Take ownership of your work:

I don’t know how many times someone has “corrected” something in the comments, and several posts later admitted that the “correction” comes from their own theory. If you’re arguing from your own work, own it! If you don’t, people will assume you’re trying to argue from an established theory, and are just confused about how that theory works. This is a special case of a broader principle,

Epistemic humility:

I’m not saying you need to be humble in general, but if you want to talk productively you need to be epistemically humble. That means being clear about why you know what you know. Did you get it from a mathematical proof? A philosophical argument? Reading pop science pieces? Something you remember from high school? Being clear about your sources makes it easier for people to figure out where you’re coming from, and avoids putting your foot in your mouth if it turns out your source is incomplete.

Context is crucial:

If you’re commenting on a blog like this one, pay attention to context. Your comment needs to be relevant enough that people won’t parse it as spam.

If all a post does is mention something like string theory, crowing about how your theory is a better explanation for quantum gravity isn’t relevant. Ditto for if all it does is mention a scientific concept that you think is mistaken.

What if the post is promoting something that you’ve found to be incorrect, though? What if someone is wrong on the internet?

In that case, it’s important to keep in mind the above principles. A popularization piece will usually try to present the establishment view, and merits a different response than a scientific piece arguing something new. In both cases, own your own ideas and be specific about how you know what you know. Be clear on whether you’re talking about something that’s controversial, or something that’s broadly agreed on.

You can get an idea of what works and what doesn’t by looking at comments on this blog. When I post about dark matter, or cosmic inflation, there are people who object, and the best ones are straightforward about why. Rather than opening with “you’re wrong”, they point out which ideas are controversial. They’re specific about whose ideas they’re referencing, and are clear about what is pedagogy and what is science.

Those comments tend to get much better responses than the ones that begin with cryptic condemnations, follow with links, and make absolute statements without backing them up.

On the internet, it’s easy for misunderstandings to devolve into arguments. Want to avoid that? Be direct, be clear, be relevant.

Book Review: The Invention of Science

I don’t get a lot of time to read for pleasure these days. When I do, it’s usually fiction. But I’ve always had a weakness for stories from the dawn of science, and David Wootton’s The Invention of Science: A New History of the Scientific Revolution certainly fit the bill.

[Image: cover of The Invention of Science.]

Wootton’s book is a rambling tour of the early history of science, from Brahe’s nova in 1572 to Newton’s Optics in 1704. Tying everything together is one clear, central argument: that the scientific revolution involved, not just a new understanding of the world, but the creation of new conceptual tools. In other words, the invention of science itself.

Wootton argues this, for the most part, by tracing changes in language. Several chapters have a common structure: Wootton identifies a word, like evidence or hypothesis, that has an important role in how we talk about science. He then tracks that word back to its antecedents, showing how early scientists borrowed and coined the words they needed to describe the new type of reasoning they had pioneered.

Some of the most compelling examples come early on. Wootton points out that the word “discover” only became common in European languages after Columbus’s discovery of the new world: first in Portuguese, then later in the rest of Europe. Before then, the closest term meant something more like “find out”, and was ambiguous: it could refer to finding something that was already known to others. Thus, early writers had to use wordy circumlocutions like “found out that which was not known before” to refer to genuine discovery.

The book covers the emergence of new social conventions in a similar way. For example, I was surprised to learn that the first recorded priority disputes were in the sixteenth century. Before then, discoveries weren’t even typically named for their discoverers: “the Pythagorean theorem”, oddly enough, is a name that wasn’t used until after the scientific revolution was underway. Beginning with explorers arguing over the discovery of the new world and anatomists negotiating priority for identifying the bones of the ear or the “discovery” of the clitoris, the competitive element of science began to come into its own.

Along the way, Wootton highlights episodes both familiar and obscure. You’ll find Bruno and Torricelli, yes, but also disputes over whether the seas are higher than the land or whether a weapon could cure wounds it caused via the power of magnetism. For anyone as fascinated by the emergence of science as I am, it’s a joyous wealth of detail.

If I had one complaint, it would be that for a lay reader far too much of Wootton’s book is taken up by disputes with other historians. His particular foes are relativists, though he spares some paragraphs to attack realists too. Overall, his dismissals of his opponents are so pat, and his descriptions of their views so self-evidently silly, that I can’t help but suspect that he’s not presenting them fairly. Even if he is, the discussion is rather inside baseball for a non-historian like me.

I read part of Newton’s Principia in college, and I was hoping for a more thorough discussion of Newton’s role. While he does show up, Wootton seems to view Newton as a bit of an enigma: someone who insisted on using the old language of geometric proofs while clearly mastering the new science of evidence and experiment. In this book, Newton is very much a capstone, not a focus.

Overall, The Invention of Science is a great way to learn about the twists and turns of the scientific revolution. If you set aside the inter-historian squabbling (or if you like that sort of thing), you’ll find a book brimful of anecdotes from the dawn of modern thought, and a compelling argument that what we do as scientists is neither an accident of culture nor obvious common sense, but a hard-won invention whose rewards we are still reaping today.

Most of String Theory Is Not String Pheno

Last week, Sabine Hossenfelder wrote a post entitled “Why not string theory?” In it, she argued that string theory has a much more dominant position in physics than it ought to: that it’s crowding out alternative theories like Loop Quantum Gravity and hogging much more funding than it actually merits.

If you follow the string wars at all, you’ve heard these sorts of arguments before. There’s not really anything new here.

That said, there were a few sentences in Hossenfelder’s post that got my attention, and inspired me to write this post.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has shown to be useful to push ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

(Bolding mine)

Here, Hossenfelder explicitly leaves out string theorists who work on “lesser understood aspects of quantum field theories” from her critique. They’re not the big, dominant program she’s worried about.

What Hossenfelder doesn’t seem to realize is that right now, it is precisely the “aspects of quantum field theories” crowd that is big and dominant. The communities of string theorists working on something else, and especially those making bold pronouncements about the nature of the real world, are much, much smaller.

Let’s define some terms:

Phenomenology (or pheno for short) is the part of theoretical physics that attempts to make predictions that can be tested in experiments. String pheno, then, covers attempts to use string theory to make predictions. In practice, though, it’s broader than that: while some people do attempt to predict the results of experiments, more work on figuring out how models constructed by other phenomenologists can make sense in string theory. This still attempts to test string theory in some sense: if a phenomenologist’s model turns out to be true but it can’t be replicated in string theory then string theory would be falsified. That said, it’s more indirect. In parallel to string phenomenology, there is also the related field of string cosmology, which has a similar relationship with cosmology.

If other string theorists aren’t trying to make predictions, what exactly are they doing? Well, a large number of them are studying quantum field theories. Quantum field theories are currently our most powerful theories of nature, but there are many aspects of them that we don’t yet understand. For a large proportion of string theorists, string theory is useful because it provides a new way to understand these theories in terms of different configurations of string theory, which often uncovers novel and unexpected properties. This is still physics, not mathematics: the goal, in the end, is to understand theories that govern the real world. But it doesn’t involve the same sort of direct statements about the world as string phenomenology or string cosmology: crucially, it doesn’t depend on whether string theory is true.

Last week, I said that before replying to Hossenfelder’s post I’d have to gather some numbers. I was hoping to find some statistics on how many people work on each of these fields, or on their funding. Unfortunately, nobody seems to collect statistics broken down by sub-field like this.

As a proxy, though, we can look at conferences. Strings is the premier conference in string theory. If something has high status in the string community, it will probably get a talk at Strings. So to investigate, I took a look at the talks given last year, at Strings 2015, and broke them down by sub-field.

[Image: pie chart breaking down the Strings 2015 talks by sub-field.]

Here I’ve left out the historical overview talks, since they don’t say much about current research.

“QFT” is for talks about lesser understood aspects of quantum field theories. Amplitudes, my own sub-field, should be part of this: I’ve separated it out to show what a typical sub-field of the QFT block might look like.

“Formal Strings” refers to research into the fundamentals of how to do calculations in string theory: in principle, both the QFT folks and the string pheno folks find it useful.

“Holography” is a sub-topic of string theory in which string theory in some space is equivalent to a quantum field theory on the boundary of that space. Some people study this because they want to learn about quantum field theory from string theory, others because they want to learn about quantum gravity from quantum field theory. Since the field can’t be cleanly divided into quantum gravity and quantum field theory research, I’ve given it its own category.

While all string theory research is in principle about quantum gravity, the “Quantum Gravity” section refers to people focused on the sorts of topics that interest non-string quantum gravity theorists, like black hole entropy.

Finally, we have String Cosmology and String Phenomenology, which I’ve already defined.

Don’t take the exact numbers here too seriously: not every talk fit cleanly into a category, so there were some judgement calls on my part. Nonetheless, this should give you a decent idea of the makeup of the string theory community.

The biggest wedge in the diagram by far, taking up a majority of the talks, is QFT. Throw in Amplitudes (part of QFT) and Formal Strings (useful to both), and you’ve got two thirds of the conference. Even if you believe Hossenfelder’s tale of the failures of string theory, then, that only matters to a third of this diagram. And once you take into account that many of the Holography and Quantum Gravity people are interested in aspects of QFT as well, you’re looking at an even smaller group. Really, Hossenfelder’s criticism is aimed at two small slices on the chart: String Pheno and String Cosmo.

Of course, string phenomenologists also have their own conference. It’s called String Pheno, and last year it had 130 participants. In contrast, Loops 2015, the conference for string theory’s most famous “rival”, had…190 participants. The fields are really pretty comparable.

Now, I have a lot more sympathy for the string phenomenologists and string cosmologists than I do for loop quantum gravity. If other string theorists felt the same way, then maybe that would cause the sort of sociological effect that Hossenfelder is worried about.

But in practice, I don’t think this happens. I’ve met string theorists who didn’t even know that people still did string phenomenology. The two communities are almost entirely disjoint: string phenomenologists and string cosmologists interact much more with other phenomenologists and cosmologists than they do with other string theorists.

You want to talk about sociology? Sociologically, people choose careers and fund research because they expect something to happen soon. People don’t want to be left high and dry by a dearth of experiments, and don’t feel comfortable working on something that may only be vindicated long after they’re dead. Most people choose the safe option, the one that, even if it’s still aimed at a distant goal, is also producing interesting results now (aspects of quantum field theories, for example).

The people that don’t? Tend to form small, tight-knit, passionate communities. They carve out a few havens of like-minded people, and they think big thoughts while the world around them seems to only care about their careers.

If you’re a loop quantum gravity theorist, or a quantum gravity phenomenologist like Hossenfelder, and you see some of your struggles in that paragraph, please realize that string phenomenology is like that too.

I feel like Hossenfelder imagines a world in which string theory is struck from its high place, and alternative theories of quantum gravity are of comparable size and power. But from where I’m sitting, it doesn’t look like it would work out that way. Instead, you’d have alternatives grow to the same size as similarly risky parts of string theory, like string phenomenology. And surprise, surprise: they’re already that size.

In certain corners of the internet, people like to argue about “punching up” and “punching down”. Hossenfelder seems to think she’s “punching up”, giving the big dominant group a taste of its own medicine. But by leaving out string theorists who study QFTs, she’s really “punching down”, or at least sideways, and calling out a sub-group that doesn’t have much more power than her own.

Quick Post

I’m traveling this week, so I don’t have time for a long post. I am rather annoyed with Sabine Hossenfelder’s recent post about string theory, but I don’t have time to write much about it now.

(Broadly speaking, she dismisses string theory’s success in investigating quantum field theories as irrelevant to string theory’s dominance, but as far as I’ve seen the only part of string theory that has any “institutional dominance” at all is the “investigating quantum field theories” part, while string theorists who spend their time making statements about the real world are roughly as “marginalized” as non-string quantum gravity theorists. But I ought to gather some numbers before I really commit to arguing this.)

Fun with Misunderstandings

Perimeter had its last Public Lecture of the season this week, with Mario Livio giving some highlights from his book Brilliant Blunders. The lecture should be accessible online, either here or on Perimeter’s YouTube page.

These lectures tend to attract a crowd of curious science-fans. To give them something to do while they’re waiting, a few local researchers walk around with T-shirts that say “Ask me, I’m a scientist!” Sometimes we get questions about the upcoming lecture, but more often people just ask us what they’re curious about.

Long-time readers will know that I find this one of the most fun parts of the job. In particular, there’s a unique challenge in figuring out just why someone asked a question. Often, there’s a hidden misunderstanding they haven’t recognized.

The fun thing about these misunderstandings is that they usually make sense, provided you’re working from the person in question’s sources. They heard a bit of this and a bit of that, and they come to the most reasonable conclusion they can given what’s available. For those of us who have heard a more complete story, this often leads to misunderstandings we would never have thought of, but that in retrospect are completely understandable.

One of the simpler ones I ran into was someone who was confused by people claiming that we were running out of water. How could there be a water shortage, he asked, if the Earth is basically a closed system? Where could the water go?

The answer is that when people are talking about a water shortage, they’re not talking about water itself running out. Rather, they’re talking about a lack of safe drinking water. Maybe the water is polluted, or stuck in the ocean without expensive desalinization. This seems like the sort of thing that would be extremely obvious, but if you just hear people complaining that water is running out without the right context then you might just not end up hearing that part of the story.

A more involved question had to do with time dilation in general relativity. The guy had heard that atomic clocks run faster if you’re higher up, and that this was because time itself runs faster in lower gravity.

Given that, he asked, what happens if someone travels to an area of low gravity and then comes back? If more time has passed for them, then they’d be in the future, so wouldn’t they be at the “wrong time” compared to other people? Would they even be able to interact with them?

This guy’s misunderstanding came from hearing what happens, but not why. While he got that time passes faster in lower gravity, he was still thinking of time as universal: there is some past, and some future, and if time passes faster for one person and slower for another that just means that one person is “skipping ahead” into the other person’s future.

What he was missing was the explanation that time dilation comes from space and time bending. Rather than “skipping ahead”, a person for whom time passes faster just experiences more time getting to the same place, because they’re traveling on a curved path through space-time.

As usual, this is easier to visualize in space than in time. I ended up drawing a picture like this:

[Image: two circular paths around the same center, one at a larger radius than the other.]

Imagine person A and person B live on a circle. If person B stays the same distance from the center while person A goes out further, they can both travel the same angle around the circle and end up in the same place, but A will have traveled further, even ignoring the trips up and down.
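
The arithmetic behind the picture fits in a few lines of Python. The radii and angle here are made-up numbers, just to illustrate the point that arc length is radius times angle, so for the same angle a larger radius means a longer path:

```python
import math

# Hypothetical, illustrative numbers: B stays at radius 1, A moves out to 1.5.
theta = math.pi / 2          # both travel a quarter turn around the center
r_B = 1.0
r_A = 1.5

path_B = r_B * theta         # arc length = radius * angle
path_A = r_A * theta

# A and B end at the same angle, but A has traveled further,
# even ignoring the trips out and back.
print(path_A > path_B)       # True
```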

What’s completely intuitive in space ends up quite a bit harder to visualize in time. But if you at least know what you’re trying to think about, that there’s bending involved, then it’s easier to avoid this guy’s kind of misunderstanding. Run into the wrong account, though, and even if it’s perfectly correct (this guy had heard some of Hawking’s popularization work on the subject), you can come away with the wrong impression if it doesn’t emphasize the right aspects.

Misunderstandings are interesting because they reveal how people learn. They’re windows into different thought processes, into what happens when you only have partial evidence. And because of that, they’re one of the most fascinating parts of science popularization.

Mass Is Just Energy You Haven’t Met Yet

How can colliding two protons give rise to more massive particles? Why do vibrations of a string have mass? And how does the Higgs work anyway?

There is one central misunderstanding that makes each of these topics confusing. It’s something I’ve brought up before, but it really deserves its own post. It’s people not realizing that mass is just energy you haven’t met yet.

It’s quite intuitive to think of mass as some sort of “stuff” that things can be made out of. In our everyday experience, that’s how it works: combine this mass of flour and this mass of sugar, and get this mass of cake. Historically, it was the dominant view in physics for quite some time. However, once you get to particle physics it starts to break down.

It’s probably most obvious for protons. A proton has a mass of 938 MeV/c², or 1.6×10⁻²⁷ kg in less physicist-specific units. Each proton is made of three quarks: two up quarks and a down quark. Naively, you’d think each quark would have to be around 300 MeV/c². They’re not, though: up and down quarks both have masses less than 10 MeV/c². Those three quarks account for less than a fiftieth of a proton’s mass.
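
The arithmetic is easy to check. Here's a quick Python sketch using rough, approximate quark masses (treat the exact values as illustrative, not authoritative):

```python
# Rough values in MeV/c^2 (approximate; treat as illustrative)
proton_mass = 938.3
m_up = 2.2
m_down = 4.7

quark_total = 2 * m_up + m_down     # two up quarks and a down quark
fraction = quark_total / proton_mass

print(fraction)                     # about 0.0097: less than a fiftieth
print(fraction < 1 / 50)            # True
```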

The “extra” mass is because a proton is not just three quarks. It’s three quarks interacting. The forces between those quarks, the strong nuclear force that binds them together, involves a heck of a lot of energy. And from a distance, that energy ends up looking like mass.

This isn’t unique to protons. In some sense, it’s just what mass is.

The quarks themselves get their mass from the Higgs field. Far enough away, this looks like the quarks having a mass. However, zoom in and it’s energy again, the energy of interaction between quarks and the Higgs. In string theory, mass comes from the energy of vibrating strings. And so on. Every time we run into something that looks like a fundamental mass, it ends up being just another energy of interaction.

If mass is just energy, what about gravity?

When you’re taught about gravity, the story is all about mass. Mass attracts mass. Mass bends space-time. What gets left out, until you actually learn the details of General Relativity, is that energy gravitates too.

Normally you don’t notice this, because mass contributes so much more to energy than anything else. That’s what E=mc² is really about: it’s a unit conversion formula. It tells you that if you want to know how much energy a given mass “really is”, you multiply it by the speed of light squared. And that’s a large enough number that most of the time, when you notice energy gravitating, it’s because that energy looks like a big chunk of mass. (It’s also why physicists like silly units like MeV/c² for mass: we can just multiply by c² and get an energy!)
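
As a small sketch of that unit conversion, here's a Python function (the constants are standard CODATA values) that turns a mass quoted in MeV/c² into kilograms, using E=mc² purely as a conversion factor:

```python
# Standard constants
c = 2.99792458e8            # speed of light, m/s
eV = 1.602176634e-19        # joules per electron-volt

def mev_per_c2_to_kg(mass_mev):
    """Convert a mass in MeV/c^2 to kilograms.

    The number in MeV is an energy; E = m c^2 says dividing by c^2
    gives the equivalent mass.
    """
    energy_joules = mass_mev * 1e6 * eV   # MeV -> joules
    return energy_joules / c**2

print(mev_per_c2_to_kg(938.3))            # ~1.67e-27 kg: the proton mass
```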

It’s really tempting to think of mass as a substance, as always conserved, as fundamental. But in physics we often have to toss aside our everyday intuitions, and this is no exception. Mass really is just energy. It’s just energy that we’ve “zoomed out” enough not to notice.

What Does It Mean to Know the Answer?

My sub-field isn’t big on philosophical debates. We don’t tend to get hung up on how to measure an infinite universe, or on how to interpret quantum mechanics. Instead, we develop new calculation techniques, which tends to nicely sidestep all of that.

If there’s anything we do get philosophical about, though, any question with a little bit of ambiguity, it’s this: What counts as an analytic result?

“Analytic” here is in contrast to “numerical”. If all we need is a number and we don’t care if it’s slightly off, we can use numerical methods. We have a computer use some estimation trick, repeating steps over and over again until we have approximately the right answer.

“Analytic”, then, refers to everything else. When you want an analytic result, you want something exact. Most of the time, you don’t just want a single number: you want a function, one that can give you numbers for whichever situation you’re interested in.

It might sound like there’s no ambiguity there. If it’s a function, with sines and cosines and the like, then it’s clearly analytic. If you can only get numbers out through some approximation, it’s numerical. But as the following example shows, things can get a bit more complicated.

Suppose you’re trying to calculate something, and you find the answer is some messy integral. Still, you’ve simplified the integral enough that you can do numerical integration and get some approximate numbers out. What’s more, you can express the integral as an infinite series, so that any finite number of terms will get close to the correct result. Maybe you even know a few special cases, situations where you plug specific numbers in and you do get an exact answer.

It might sound like you only know the answer numerically. As it turns out, though, this is roughly how your computer handles sines and cosines.

When your computer tries to calculate a sine or a cosine, it doesn’t have access to the exact solution all of the time. It does have some special cases, but the rest of the time it’s using an infinite series, or some other numerical trick. Type a random sine into your calculator and it will be just as approximate as if you had done a numerical integration.
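
To see this concretely, here's a minimal Python sketch of the kind of series trick involved, compared against the built-in sine. This is a toy version for illustration, not what any real calculator actually runs:

```python
import math

def sin_series(x, terms=10):
    """Approximate sin(x) by its Taylor series: x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Each term is the previous one times -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

# Truncating the series gives an approximation, just like numerical integration.
print(abs(sin_series(1.0) - math.sin(1.0)) < 1e-12)  # True
```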

So what’s the real difference?

Rather than how we get numbers out, think about what else we know. We know how to take derivatives of sines, and how to integrate them. We know how to take limits, and series expansions. And we know their relations to other functions, including how to express them in terms of other things.

If you can do that with your integral, then you’ve probably got an analytic result. If you can’t, then you don’t.

What if you have only some of the requirements, but not the others? What if you can take derivatives, but don’t know all of the identities between your functions? What if you can do series expansions, but only in some limits? What if you can do all the above, but can’t get numbers out without a supercomputer?

That’s where the ambiguity sets in.

In the end, whether or not we have the full analytic answer is a matter of degree. The closer we can get to functions that mathematicians have studied and understood, the better grasp we have of our answer and the more “analytic” it is. In practice, we end up with a very pragmatic approach to knowledge: whether we know the answer depends entirely on what we can do with it.

Particles Aren’t Vibrations (at Least, Not the Ones You Think)

You’ve probably heard this story before, likely from Brian Greene.

In string theory, the fundamental particles of nature are actually short lengths of string. These strings can vibrate, and like a string on a violin, that vibration is arranged into harmonics. The more energy in the string, the more complex the vibration. In string theory, each of these vibrations corresponds to a different particle, explaining how the zoo of particles we observe can come out of a single type of fundamental string.

[Image: a diagram of string harmonics. Particles. Probably.]

It’s a nice story. It’s even partly true. But it gives a completely wrong idea of where the particles we’re used to come from.

Making a string vibrate takes energy, and that energy is determined by the tension of the string. It’s a lot harder to wiggle a thick rubber band than a thin one, if you’re holding both tightly.

String theory’s strings are under a lot of tension, so it takes a lot of energy to make them vibrate. From our perspective, that energy looks like mass, so the more complicated harmonics on a string correspond to extremely massive particles, close to the Planck mass!
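
For a sense of just how heavy that is, here's a quick Python estimate of the Planck mass from standard constants (the comparison to the proton is just for scale):

```python
import math

# Standard constants
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2

# The Planck mass: the rough scale of a string's higher harmonics
m_planck = math.sqrt(hbar * c / G)    # ~2.18e-8 kg

proton_mass = 1.67e-27                # kg, for comparison
print(m_planck / proton_mass)         # ~1.3e19: vastly heavier than any known particle
```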

Those aren’t the particles you’re used to. They’re not electrons, they’re not dark matter. They’re particles we haven’t observed, and may never observe. They’re not how string theory explains the fundamental particles of nature.

So how does string theory go from one fundamental type of string to all of the particles in the universe, if not through these vibrations? As it turns out, there are several different ways it can happen, tricks that allow the lightest and simplest vibrations to give us all the particles we’ve observed.* I’ll describe a few.

The first and most important trick here is supersymmetry. Supersymmetry relates different types of particles to each other. In string theory, it means that along with vibrations that go higher and higher, there are also low-energy vibrations that behave like different sorts of particles. In a sense, string theory sticks a quantum field theory inside another quantum field theory, in a way that would make Xzibit proud.

Even with supersymmetry, string theory doesn’t give rise to all of the right sorts of particles. You need something else, like compactifications or branes.

The strings of string theory live in ten dimensions: it’s the only place they’re mathematically consistent. Since our world looks four-dimensional, something has to happen to the other six dimensions. They have to be curled up, in a process called compactification. There are lots and lots (and lots) of ways to do this compactification, and different ways of curling up the extra dimensions give different places for strings to move. These new options make the strings look different in our four-dimensional world: a string curled around a donut hole looks very different from one that moves freely. Each new way the string can move or vibrate can give rise to a new particle.

Another option to introduce diversity in particles is to use branes. Branes (short for membranes) are surfaces that strings can end on. If two strings end on the same brane, those ends can meet up and interact. If they end on different branes though, then they can’t. By cleverly arranging branes, then, you can have different sets of strings that interact with each other in different ways, reproducing the different interactions of the particles we’re familiar with.

In string theory, the particles we’re used to aren’t just higher harmonics, or vibrations with more and more energy. They come from supersymmetry, from compactifications and from branes. The higher harmonics are still important: there are theorems showing that you can’t fix quantum gravity with a finite number of extra particles, so the infinite tower of vibrations lets string theory exploit a key loophole. They just don’t happen to be how string theory gets the particles of the Standard Model. The idea that every particle is just a higher vibration is a common misconception, and I hope I’ve given you a better idea of how string theory actually works.

 

*But aren’t these lightest vibrations still close to the Planck mass? Nope! See the discussion with TE in the comments for details.