Tag Archives: DoingScience

The Cycle of Exploration

Science is often described as a journey of exploration. You might imagine scientists carefully planning an expedition, gathering their equipment, then venturing out into the wilds of Nature, traveling as far as they can before returning with tales of the wonders they discovered.

Is it capybaras? Please let it be capybaras.

This misses an important part of the story, though. In science, exploration isn’t just about discovering the true nature of Nature, as important as that is. It’s also about laying the groundwork for future exploration.

Picture our explorers, traveling out into the wilderness with no idea what’s in store. With only a rough idea of the challenges they might face, they must pack for every possibility: warm clothing for mountains, sunscreen for the desert, canoes to ford rivers, cameras in case they encounter capybaras. Since they can only carry so much, they can only travel so far before they run out of supplies.

Once they return, though, the explorers can assess what they did and didn’t need. Maybe they found a jungle, full of capybaras. The next time they travel they’ll make sure to bring canoes and cameras, but they can skip the warm coats. This frees up room for more supplies that are actually useful, and in the end it lets them travel farther.

Science is a lot like this. The more we know, the better questions we can ask, and the further we can explore. It’s true not just for experiments, but for theoretical work as well. Here’s a slide from a talk I’m preparing, about how this works in my sub-field of Amplitudeology.

Unfortunately not a capybara.

In theoretical physics, you often start out doing a calculation using the most general methods you have available. Once you’ve done it, you understand a bit more about your results: in particular, you can start figuring out which parts of the general method are actually unnecessary. By paring things down, you can figure out a new method, one that’s more efficient and allows for more complicated calculations. Doing those calculations then reveals new patterns, letting you propose even newer methods and do even more complicated calculations.

It’s the circle of exploration, and it really does move us all, motivating everything we do. With each discovery, we can go further, learn more, than the last attempt, keeping science churning long into the future.

The Real Problem with Fine-Tuning

You’ve probably heard it said that the universe is fine-tuned.

The Standard Model, our current best understanding of the rules that govern particle physics, is full of lots of fiddly adjustable parameters. The masses of fundamental particles and the strengths of the fundamental forces aren’t the sort of thing we can predict from first principles: we need to go out, do experiments, and find out what they are. And you’ve probably heard it argued that, if these fiddly parameters were even a little different from what they are, life as we know it could not exist.

That’s fine-tuning…or at least, that’s what many people mean when they talk about fine-tuning. It’s not exactly what physicists mean though. The thing is, almost nobody who studies particle physics thinks the parameters of the Standard Model are the full story. In fact, any theory with adjustable parameters probably isn’t the full story.

It all goes back to a point I made a while back: nature abhors a constant. The whole purpose of physics is to explain the natural world, and we have a long history of taking things that look arbitrary and linking them together, showing that reality has fewer parameters than we had thought. This is something physics is very good at. (To indulge in a little extremely amateurish philosophy, it seems to me that this is simply an inherent part of how we understand the world: if we encounter a parameter, we will eventually come up with an explanation for it.)

Moreover, at this point we have a rough idea of what this sort of explanation should look like. We have experience playing with theories that don’t have any adjustable parameters, or that only have a few: M theory is an example, but there are also more traditional quantum field theories that fill this role with no mention of string theory. From our exploration of these theories, we know that they can serve as the kind of explanation we need: in a world governed by one of these theories, people unaware of the full theory would observe what would look at first glance like a world with many fiddly adjustable parameters, parameters that would eventually turn out to be consequences of the broader theory.

So for a physicist, fine-tuning is not about those fiddly parameters themselves. Rather, it’s about the theory that predicts them. Because we have experience playing with these sorts of theories, we know roughly the sorts of worlds they create. What we know is that, while sometimes they give rise to worlds that appear fine-tuned, they tend to only do so in particular ways. Setups that give rise to fine-tuning have consequences: supersymmetry, for example, can give rise to an apparently fine-tuned universe but has to have “partner” particles that show up in powerful enough colliders. In general, a theory that gives rise to apparent fine-tuning will have some detectable consequences.

That’s where physicists start to get worried. So far, we haven’t seen any of these detectable consequences, and it’s getting to the point where we could have, had they been the sort many people expected.

Physicists are worried about fine-tuning, but not because it makes the universe “unlikely”. They’re worried because the more finely-tuned our universe appears, the harder it is to find an explanation for it in terms of the sorts of theories we’re used to working with, and the less likely it becomes that someone will discover a good explanation any time soon. We’re quite confident that there should be some explanation: hundreds of years of scientific progress strongly suggest as much. But the nature of that explanation is becoming increasingly opaque.

My Travels, and Someone Else’s

I arrived in São Paulo, Brazil a few days ago. I’m going to be here for two months as part of a partnership between Perimeter and the International Centre for Theoretical Physics – South American Institute for Fundamental Research. More specifically, I’m here as part of a program on Integrability, a set of tricks that can, in limited cases, let physicists bypass the messy approximations we often have to use.

I’m still getting my metaphorical feet under me here, so I haven’t had time to think of a proper blog post. However, if you’re interested in hearing about the travels of physicists in general, a friend of mine from Stony Brook is going to the South Pole to work on the IceCube neutrino detection experiment, and has been writing a blog about it.

Merry Newtonmas!

Yesterday, people around the globe celebrated the birth of someone whose new perspective and radical ideas changed history, perhaps more than anyone else’s.

I’m referring, of course, to Isaac Newton.

Ho ho ho!

Born on December 25, 1642, Newton is justly famed as one of history’s greatest scientists. By relating gravity on Earth to the force that holds the planets in orbit, Newton arguably created physics as we know it.

However, like many prominent scientists, Newton was great not so much for what he discovered as for how he discovered it. Others had already had similar ideas about gravity. Robert Hooke in particular had written to Newton mentioning a law much like the one Newton eventually wrote down, leading Hooke to accuse Newton of plagiarism.

Newton’s great accomplishment was not merely proposing his law of gravitation, but justifying it, in a way that no-one had ever done before. When others (Hooke for example) had proposed similar laws, they were looking for a law that perfectly described the motion of the planets. Kepler had already proposed ellipse-shaped orbits, but it was clear by Newton and Hooke’s time that such orbits did not fully describe the motion of the planets. Hooke and others hoped that if some sufficiently skilled mathematician started with the correct laws, they could predict the planets’ motions with complete accuracy.

The genius of Newton was in attacking this problem from a different direction. In particular, Newton showed that his law of gravitation does produce Kepler’s (not-quite-right) ellipses…provided that there is only one planet.

With multiple planets, things become much more complicated. Even just two planets orbiting a single star, a version of the famous three-body problem, is so difficult that no general, exact solution can be written down.

Sensibly, Newton didn’t try to write down an exact solution. Instead, he figured out an approximation: since the Sun is much bigger than the planets, he could simplify the problem and arrive at a partial solution. While he couldn’t perfectly predict the motions of the planets, he knew more than that they were just “approximately” ellipses: he had a prediction for how different from ellipses they should be.
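Newton’s trick, exploiting the smallness of the planet-to-Sun mass ratio, is the ancestor of modern perturbation methods. As a rough illustration (my own toy sketch, not Newton’s actual method: all masses, positions, and units here are made up), the simulation below integrates the same orbit with and without a second, Jupiter-like body held fixed, and finds that the deviation from the unperturbed orbit is about as small as the mass ratio:

```python
# Toy illustration of why planet-planet perturbations are small.
# Units are arbitrary (G = 1, Sun mass = 1); this is a sketch, not
# a realistic solar-system simulation.

def acceleration(pos, sources):
    """Newtonian gravitational acceleration from (mass, position) sources."""
    ax = ay = 0.0
    for m, (ox, oy) in sources:
        dx, dy = ox - pos[0], oy - pos[1]
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += m * dx / r3
        ay += m * dy / r3
    return ax, ay

def integrate(planet_pos, planet_vel, sources, dt=0.001, steps=6000):
    """Semi-implicit Euler integration of one planet's path; the
    sources (Sun, and optionally a perturber) are held fixed."""
    x, y = planet_pos
    vx, vy = planet_vel
    path = []
    for _ in range(steps):
        ax, ay = acceleration((x, y), sources)
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

sun = (1.0, (0.0, 0.0))        # mass 1 at the origin
jupiter = (0.001, (5.0, 0.0))  # a perturber 0.1% the Sun's mass

# Same starting conditions, with and without the second planet:
unperturbed = integrate((1.0, 0.0), (0.0, 1.0), [sun])
perturbed = integrate((1.0, 0.0), (0.0, 1.0), [sun, jupiter])

# The orbit's deviation from the unperturbed ellipse stays small,
# roughly the size of the mass ratio, as perturbation theory predicts.
deviation = max(abs(px - ux) + abs(py - uy)
                for (px, py), (ux, uy) in zip(perturbed, unperturbed))
print(deviation)
```

Real celestial mechanics refines this idea systematically, expanding in the mass ratio and correcting the ellipse order by order instead of brute-force integrating.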

That step was Newton’s great contribution. The insight that science can not only provide exact answers to simpler problems, but also estimate how far those answers might be off, was something no-one else had really thought about before. It led to error analysis in experiments, and to perturbation methods in theory. More generally, it led to the idea that scientists have to be responsible, not just for getting things “almost right”, but for quantifying how far off their results might still be.

So this holiday season, let’s give thanks to the man whose ideas created science as we know it. Merry Newtonmas everyone!

Where do you get all those mathematical toys?

I’m at a conference at Caltech this week, so it’s going to be a shorter post than usual.

The conference is on something called the Positive Grassmannian, a precursor to Nima Arkani-Hamed’s much-hyped Amplituhedron. Both are variants of a central idea: take complicated calculations in physics and express them in terms of clean, well-defined mathematical objects.

Because of this, this conference is attended not just by physicists, but by mathematicians as well, and it’s been interesting watching how the two groups interact.

From a physics perspective, mathematicians are great because they give us so many useful tools! Many significant advances in my field happened because a physicist talked to a mathematician and learned that a problem that had stymied the physics world had already been solved in the math community.

This tends to lead to certain expectations among physicists. If a mathematician gives a talk at a physics conference, we expect them to present something we can use. Our ideal math talk is like when Q presents the gadgets at the beginning of a Bond movie: a ton of new toys with just enough explanation for us to use them to save the day in the second act.

Pictured: Mathematicians, through Physicist eyes

You may see the beginning of a problem here, once you realize that physicists are the James Bond in this analogy.

Physicists like to see themselves as the protagonists of their own stories. That’s true of every field, though, to some degree or another. And it’s certainly true of mathematicians.

Mathematicians don’t go to physics conferences just to be someone’s supporting cast. They do it because physics problems are interesting to them: by hearing what physicists are working on they hope to get inspiration for new mathematical structures, concepts jury-rigged together by physicists that represent corners that mathematics hasn’t yet explored. Their goal is to take home an idea that they can turn into something productive, gaining glory among their fellow mathematicians. And if that sounds familiar…

Pictured: Physicists, through Mathematician eyes

While it’s amusing to watch the different expectations go head-to-head, the best collaborations between physicists and mathematicians are those where both sides respect that the other is the protagonist of their own story. Allow for give-and-take, paying attention not just to what you find interesting but to what the other person does, without assuming a tired old movie script, and it’s possible to make great progress.

Of course, that’s true of life in general as well.

Research or Conference? Can’t it be both?

“If you’re there for two months, for sure you’ll be doing research.”

I wanted to be snarky. I wanted to point out that, as a theoretical physicist, I do research wherever I go. I wanted to say that I even did research on the drive over. (This may not have been true; I think I mostly thought about Magic the Gathering cards.)

More than any of those, though, I wanted to get my travel visa. So instead I said,

“That’s fair.”

“Mmhmm, that’s fair.” Looking down at the invitation letter, she triumphantly pointed to the name of the inviting institution: “South American Institute for Fundamental Research.”

A bit of background: I’m going to Brazil this winter. Partly, this is because winter in Canada is not especially desirable, but it’s also because São Paulo’s International Centre for Theoretical Physics is running a Program on Integrability, the arcane set of techniques that seeks to bypass the perturbative approximations we often use in particle physics and find full, exact results.

What do I mean by a Program? It’s not the sort of scientific program I’ve talked about before, though the ideas are related. When an institute holds a Program, they’re declaring a theme. For a certain length of time (generally from a few months to a whole semester), there will be a large number of talks at the institute focused on some particular scientific theme. The institute invites people from all over the world who work on that theme. Those people are there to give and attend talks, but they’re also there to share ideas with each other, to network and collaborate and do research.

This is where things get tricky. See, Brazil has multiple types of visas. A Tourist Visa can be used, among other things, for attending a scientific conference. On the other hand, someone coming to Brazil to do research uses Visa 1.

A Program is essentially a long conference…but it’s also an opportunity to do research. So are most short conferences, though! In theoretical physics we have workshops, short conferences explicitly focused on collaboration and research, but even if a conference isn’t a workshop you can bet that we’ll be doing some research there, for sure. We don’t need labs, and some of us don’t even need computers: research can happen whenever inspiration strikes. The distinction between conferences and research, from our perspective, is an arbitrary one.

In physics, we like to cut through this sort of ambiguity by looking at what’s really important. I wanted to figure out what about research makes the Brazilian government use a different visa for it, whether it was about motivating people to enter the country for specific reasons or tracking certain sorts of activities. I wanted to understand that, because it would let me figure out whether my own research fell under those reasons, and thus figure out objectively which type of visa I ought to have.

I wanted to ask about all of this…but more than any of that, I wanted to get my travel visa. So I applied for the visa they told me to, and left.

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they would fall below the level we could detect even in principle. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: while red LEDs have been around since the 60’s and 70’s, blue LEDs were only developed in the 90’s, and only with both can highly efficient, LED-based white light sources be made. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.

Shiny, though

It took a conversation with another Perimeter postdoc to show me one way I can comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized, I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand everything with quantum properties, not just fundamental particles but materials as well. This batch of tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training (and having been) physics grad students. The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970’s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably these programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

Love It or Hate It, Don’t Fear the Multiverse

“In an infinite universe, anything is possible.”

A nice maxim for science fiction, perhaps. But it probably doesn’t sound like productive science.

A growing number of high profile scientists and science popularizers have come out in favor of the idea that there may exist a “multiverse” of multiple universes, and that this might explain some of the unusual properties of our universe. If there are multiple universes, each with different physical laws, then we must exist in one of the universes with laws capable of supporting us, no matter how rare or unlikely such a universe is. This sort of argument is called anthropic reasoning.

(If you’re picky about definitions and don’t like the idea of more than one universe, think instead of a large universe with many different regions, each one separated from the others. There are some decent physics-based reasons to suppose we live in such a universe.)

Not to mention continuity reasons.

Why is anyone in favor of this idea? It all goes back to the Higgs.

The Higgs field interacts with other particles, giving them mass. What most people don’t mention is that the effect, in some sense, goes both ways. Because the Higgs interacts with other particles, the mass of the Higgs is also altered. This alteration is large, much larger than the observed mass of the Higgs. (In fact, in a sense it’s infinite!)

In order for the Higgs to have the mass we observe, then, something has to cancel out these large corrections. That cancellation can either be a coincidence, or there can be a reason for it.
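Schematically, and with only order-of-magnitude numbers (this is an illustration, not a precise Standard Model calculation; here $g$ stands for a generic coupling and $\Lambda$ for the energy scale where new physics enters), the problem looks like this:

```latex
% The physical Higgs mass-squared is a bare value plus quantum
% corrections, and the corrections grow with the cutoff scale \Lambda:
m_H^2 = m_{H,0}^2 + \delta m^2,
\qquad \delta m^2 \sim \frac{g^2}{16\pi^2}\,\Lambda^2 .
% If \Lambda is near the Planck scale (~10^{19} GeV), \delta m^2
% exceeds the observed m_H^2 \approx (125~\mathrm{GeV})^2 by roughly
% thirty orders of magnitude, so the two terms on the right-hand side
% must cancel to about that same precision.
```

It is this delicate cancellation, not the bare numbers themselves, that physicists mean by the fine-tuning of the Higgs.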

The trouble is, we’re running out of good reasons. One of the best was supersymmetry, the idea that each particle has a partner with tightly related properties. But if supersymmetry was going to save the day, we probably would have detected some of those partners at the Large Hadron Collider by now. More generally, it can be argued that almost all possible “good reasons” require some new particle to be found at the LHC.

If there are no good reasons, then we’re stuck with a coincidence. (This is often referred to as the Naturalness Problem in particle physics.) And it’s this uncomfortable coincidence that has driven prominent physicists to the arms of the multiverse.

There’s a substantial backlash, though. Many people view the multiverse as a cop-out. Some believe it to be even more toxic than that: if there’s a near-infinite number of possible universes then in principle any unusual feature of our universe could be explained by anthropic reasoning, which sounds like it could lead to the end of physics as we know it.

You can disdain the multiverse as a cop-out, but, as I’ll argue here, you shouldn’t fear it. Those who think the multiverse will destroy physics are fundamentally misunderstanding the way physics research works.

The key thing to keep in mind is that almost nobody out there prefers the multiverse. When a prominent physicist supports the multiverse, that doesn’t mean they’re putting aside productive work on other solutions to the problem. In general, it means they don’t have other solutions to the problem. Supporting the multiverse doesn’t rob them of ideas: those are ideas they wouldn’t have had to begin with.

And indeed, many of these people are quite supportive of alternatives to the multiverse. I’ve seen Nima Arkani-Hamed talk about the multiverse, and he generally lists a number of other approaches (some quite esoteric!) that he has worked (and failed to make progress) on, and encourages the audience to look into them.

Physics isn’t a zero-sum game, nor is it ruled by a few prominent people. If a young person has a good idea about how to explain something without the multiverse, they’re going to have all the support and recognition that such an idea deserves.

What the multiverse adds is another track, another potentially worthwhile line of research. Surprising as it may seem, the multiverse doesn’t automatically answer every question. It might not even answer the question of the mass of the Higgs! All that the existence of a multiverse tells us is that we should exist somewhere where intelligent life could exist…but if intelligent life is more likely to exist in a universe very different from ours, then we’re back to square one. There’s a lot of research involved in figuring out just what the multiverse implies, research by people who wouldn’t have been working on this sort of problem if the idea of the multiverse hadn’t been proposed.

That’s the key take-away message here. The multiverse may be wrong, but just considering it isn’t going to destroy physics. Rather, it’s opened up new avenues of research, widening the community of those trying to solve the Naturalness Problem. It may well be a cop-out for individuals, but science as a whole doesn’t have cop-outs: there’s always room for someone with a good idea to sweep away the cobwebs and move things forward.