Tag Archives: science

Science Never Forgets

I’ll just be doing a short post this week; I’ve been busy at a workshop on Flux Tubes here at Perimeter.

If you’ve ever heard someone tell the history of string theory, you’ve probably heard that it was first proposed not as a quantum theory of gravity, but as a way to describe the strong nuclear force. Colliders of the time had discovered particles, called mesons, that seemed to have a key role in the strong nuclear force that held protons and neutrons together. These mesons had an unusual property: the faster they spun, the higher their mass, following a very simple and regular pattern known as a Regge trajectory. Researchers found that they could predict this kind of behavior if, rather than particles, these mesons were short lengths of “string”, and with this discovery they invented string theory.
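For the curious, a Regge trajectory is just a straight line relating a meson’s spin $J$ to the square of its mass $M$, with a slope $\alpha'$ (the “Regge slope”) and an intercept $\alpha_0$ fit to the data; this is the standard textbook form:

$$J = \alpha_0 + \alpha' M^2$$

A spinning relativistic string reproduces this linear relation automatically, which is exactly why strings looked like such a promising model.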

As it turned out, these early researchers were wrong. Mesons are not lengths of string; rather, they are pairs of quarks. The discovery of quarks explained how the strong force acted on protons and neutrons, each made of three quarks, and it also explained why mesons acted a bit like strings: in each meson, the two quarks are linked by a flux tube, a roughly cylindrical region filled with the gluons that carry the strong nuclear force. So rather than strings, mesons turned out to be more like bolas.

Leonin sold separately.

If you’ve heard this story before, you probably think it’s ancient history. We know about quarks and gluons now, and string theory has moved on to bigger and better things. You might be surprised to hear that at this week’s workshop, several presenters have been talking about modeling flux tubes between quarks in terms of string theory!

The thing is, science never forgets a good idea. String theory was superseded by quarks in describing the strong force, but it was only proposed in the first place because it matched the data fairly well. Now, with string theory-inspired techniques, people are calculating the first corrections to the string-like behavior of these flux tubes, comparing them with simulations of quarks and gluons, and finding surprisingly good agreement!

Science isn’t a linear story, where the past falls away to the shiny new theories of the future. It’s a marketplace. Some ideas are traded more widely, some less…but if a product works, even only sometimes, chances are someone out there will have a reason to buy it.

Who Plagiarizes an Acknowledgements Section?

I’ve got plagiarists on the brain.

Maybe it was running into this interesting discussion about a plagiarized application for the National Science Foundation’s prestigious Graduate Research Fellowship Program. Maybe it’s due to the talk Paul Ginsparg, founder of arXiv, gave this week about, among other things, detecting plagiarism.

Using arXiv’s repository of every paper someone in physics thought was worth posting, Ginsparg has been using statistical techniques to sift out cases of plagiarism. Probably the funniest cases involved people copying a chunk of their thesis acknowledgements section, as excerpted here. Compare:

“I cannot describe how indebted I am to my wonderful girlfriend, Amanda, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”

“I cannot describe how indebted I am to my wonderful wife, Renata, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”
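To get a feel for how this kind of detection can work, here’s a minimal sketch in Python of the general idea: compare two passages by how many of their five-word phrases they share. To be clear, this is only a toy illustration, not Ginsparg’s actual method, which is far more sophisticated.

```python
# Toy near-duplicate detector: flag passages that share many n-word phrases.
# Only an illustration of the general idea, not Ginsparg's actual method.

def ngrams(text, n=5):
    """Return the set of n-word phrases ("shingles") in a text."""
    words = [w.strip('.,!?;:"\'') for w in text.lower().split()]
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity of two texts' shingle sets: 0 (disjoint) to 1 (identical)."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

girlfriend_version = ("I cannot describe how indebted I am to my wonderful "
                      "girlfriend, Amanda, whose love and encouragement will "
                      "always motivate me to achieve all that I can.")
wife_version = ("I cannot describe how indebted I am to my wonderful "
                "wife, Renata, whose love and encouragement will "
                "always motivate me to achieve all that I can.")

# A score this high stands out sharply against genuinely independent texts.
print(f"similarity: {similarity(girlfriend_version, wife_version):.2f}")
```

Two acknowledgements written independently would share almost no five-word phrases; these two share most of theirs, which is exactly the kind of statistical signal that makes copied text easy to spot in a large corpus.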

Why would someone do this? Copying the scientific part of a thesis makes sense, in a twisted way: science is hard! But why would someone copy the fluff at the end, the easy part that’s supposed to be a genuine expression of your emotions?

The thing is, the acknowledgements section of a thesis isn’t exactly genuine. It’s very formal: a required section of the thesis, with tacit expectations about what’s appropriate to include and what isn’t. It’s also the sort of thing you only write once in your life: while published papers also have acknowledgements sections, they’re typically much shorter, and have different conventions.

If you ever were forced to write thank-you notes as a kid, you know where I’m going with this.

It’s not that you don’t feel grateful, you do! But when you feel grateful, you express it by saying “thank you” and moving on. Writing a note about it isn’t very intuitive; it’s not a way you’re used to expressing gratitude, so the whole experience feels like you’re just following a template.

Literally in some cases.

That sort of situation, where it doesn’t matter how strongly you feel something, only whether you express it in the right way, is a breeding ground for plagiarism. Aunt Mildred isn’t going to care what you write in your thank-you note, and Amanda/Renata isn’t going to be moved by your acknowledgements section. In that kind of situation, it’s easy to decide that it’s better to grab whatever appropriate text you can find than to teach yourself a new style of writing.

In general, plagiarism happens when there’s a disconnect between incentives and the goals they’re meant to serve. In a world where very few beginning graduate students actually have a solid research plan, the NSF’s fellowship application feels like a demand for creative lying, not an honest way to judge scientific potential. In countries eager for highly-cited faculty but short on preexisting experts able to judge scientific merit, it becomes easier to get tenure by faking a series of papers than by doing the actual work.

If we want to get rid of plagiarism, we need to make sure our incentives match our intent. We need a system in which people succeed when they do real work, in which fellowships go to those who honestly have talent, and in which we care about whether someone is grateful, not how they express it. If we can’t do that, then there will always be people trying to sneak through the cracks.

The Cycle of Exploration

Science is often described as a journey of exploration. You might imagine scientists carefully planning an expedition, gathering their equipment, then venturing out into the wilds of Nature, traveling as far as they can before returning with tales of the wonders they discovered.

Is it capybaras? Please let it be capybaras.

This misses an important part of the story, though. In science, exploration isn’t just about discovering the true nature of Nature, as important as that is. It’s also about laying the groundwork for future exploration.

Picture our explorers, traveling out into the wilderness with no idea what’s in store. With only a rough idea of the challenges they might face, they must pack for every possibility: warm clothing for mountains, sunscreen for the desert, canoes to cross rivers, cameras in case they encounter capybaras. Since they can only carry so much, they can only travel so far before they run out of supplies.

Once they return, though, the explorers can assess what they did and didn’t need. Maybe they found a jungle, full of capybaras. The next time they travel they’ll make sure to bring canoes and cameras, but they can skip the warm coats. That frees up room for supplies that are actually useful, and in the end it lets them travel farther.

Science is a lot like this. The more we know, the better questions we can ask, and the further we can explore. It’s true not just for experiments, but for theoretical work as well. Here’s a slide from a talk I’m preparing, about how this works in my sub-field of Amplitudeology.

Unfortunately not a capybara.

In theoretical physics, you often start out doing a calculation using the most general methods you have available. Once you’ve done it, you understand a bit more about your results: in particular, you can start figuring out which parts of the general method are actually unnecessary. By paring things down, you can figure out a new method, one that’s more efficient and allows for more complicated calculations. Doing those calculations then reveals new patterns, letting you propose even newer methods and do even more complicated calculations.

It’s the circle of exploration, and it really does move us all, motivating everything we do. With each discovery, we can go further, learn more, than the last attempt, keeping science churning long into the future.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

Does Science have Fads?

97% of climate scientists agree that global warming exists, and is most probably human-caused. On a more controversial note, string theorists vastly outnumber adherents of other approaches to quantum gravity, such as Loop Quantum Gravity.

As many who disagree with climate change or string theory would argue, the majority is not always right. Science should be concerned with truth, not merely with popularity. After all, what if scientists are merely taking part in a fad? What makes climate change any more objectively true than pet rocks?

Apparently this is Wikipedia’s best example of a fad.

People are susceptible to fads, after all. A style of music becomes popular, and everyone’s listening to the same sounds. A style of clothing, and everyone’s wearing the same thing. So if an idea in science became popular, everyone might…write the same papers?

That right there is the problem. Scientists only succeed by creating meaningfully original work. If we don’t discover something new, we can’t publish, and as the old saying goes, it’s publish or perish out there. Even if social pressure gets us all working on something, there has to be enough substance there for each of us to do something different, something no one has done before, or no actual work gets done.

This doesn’t mean scientists can’t be influenced by popularity, but it means that that influence is limited by the requirements of doing meaningful, original work. In the case of climate change, climate scientists investigate the topic with so many different approaches, and look at so many different areas of impact (for example, did you know rising CO2 levels make the ocean more acidic?), that the whole field simply wouldn’t function if climate change weren’t real: there’d be a contradiction, and most of the myriad projects involving it simply wouldn’t work. As I’ve talked about before, science is an interlocking system, and it’s hard to doubt one part without being forced to doubt everything else.
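The chemistry behind that aside is simple enough to fit on one line: dissolved CO2 reacts with water to form carbonic acid, which then sheds hydrogen ions and lowers the ocean’s pH. In standard notation:

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$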

What about string theory? Here, the situation is a little different. There aren’t experiments testing string theory, so whether or not string theory describes the real world won’t have much effect on whether people can write string theory papers.

The existence of so many string theory papers does say something, though. The up-side of not involving experiments is that you can’t go and test something slightly different and write a paper about it. In order to be original, you really need to calculate something that nobody expected you to calculate, or notice a trend nobody expected to exist. The fact that there are so many more string theorists than loop quantum gravity theorists is in part because there are so many more interesting string theory projects than interesting loop quantum gravity projects.

In string theory, projects tend to be interesting because they unveil some new aspect of quantum field theory, the class of theories that explain the behavior of subatomic particles. Given how hard quantum field theory is, any insight is valuable, and in my experience these sorts of insights are what most string theorists are after. So while string theory’s popularity says little about whether it describes the real world, it says a lot about its ability to say interesting things about quantum field theory. And since quantum field theories do describe the real world, string theory’s continued popularity is also evidence that it continues to be useful.

Climate change and string theory aren’t fads, not exactly. They’re popular, not simply because they’re popular, but because they make important and valuable contributions to science. And as long as science continues to reward original work, that’s not about to change.

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but judging by science fiction there must be a whole lot of different sorts of particles out there. Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects; you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

Hype versus Miscommunication, or the Language of Importance

A fellow amplitudes-person was complaining to me recently about the hype surrounding the debate regarding whether black holes have “firewalls”. New York Times coverage seems somewhat excessive for what is, in the end, a fairly technical debate, and its enthusiasm was (rightly?) mocked in several places.

There’s an attitude I often run into among other physicists. The idea is that when hype like this happens, it’s because senior physicists are, at worst, cynically manipulating the press to further their positions or, at best, so naïve that they really see what they’re working on as so important that it deserves hype-y coverage. Occasionally, the blame will instead be put on the journalists, with largely the same ascribed motivations: cynical need for more page views, or naïve acceptance of whatever story they’re handed.

In my opinion, what’s going on there is a bit deeper, and not so easily traceable to any particular person.

In the articles on the (2, 0) theory I put up in the last few weeks, I made some disparaging comments about the tone of this Scientific American blog post. After exchanging a few tweets with the author, I think I have a better idea of what went down.

The problem here is that when you ask a scientist about something they’re excited about, they’re going to tell you why they’re excited about it. That’s what happened here when Nima Arkani-Hamed was interviewed for the above article: he was asked about the (2, 0) theory, and he seems to have tried to convey his enthusiasm with a metaphor that explained how the situation felt to him.

The reason this went wrong and led to a title as off-base and hype-sounding as “the Ultimate Ultimate Theory of Physics” was that we (scientists and science journalists) are taught to express enthusiasm in the language of importance.

There has been an enormous resurgence in science communication in recent years, but it has come with a very us-vs.-them mentality. The prevailing attitude is that the public will only pay attention to a scientific development if they are told that it is important. As such, both scientists and journalists try to make whatever they’re trying to communicate sound central, either to daily life or to our understanding of the universe. When both sides of the conversation are operating under this attitude, it creates an echo chamber where a concept’s importance is blown up many times greater than it really deserves, without either side doing anything other than communicating science in the only way they know.

We all have to step back and realize that most of the time, science isn’t interesting because of its absolute “importance”. Rather, a puzzle is often interesting simply because it is a puzzle. That’s what’s going on with the (2, 0) theory, or with firewalls: they’re hard to figure out, and that’s why we care.

Being honest about this is not going to lose us public backing, or funding. It’s not just scientists who value interesting things because they are challenging. People choose the path of their lives not based on some absolute relevance to the universe at large, but because things make sense in context. You don’t fall in love because the target of your affections is the most perfect person in the universe; you fall in love because they’re someone who can constantly surprise you.

Scientists are in love with what they do. We need to make sure that that, and not some abstract sense of importance, is what we’re communicating. If we do that, if we calm down and make a bit more effort to be understood, maybe we can win back some of the trust that we’ve lost by appearing to promote Ultimate Ultimate Theories of Everything.