Tag Archives: physics

Physical Truths, Lost to the Ages

For all you tumblr-ers out there (tumblr-ists? tumblr-dwellers?), 4 gravitons is now on tumblr. It’s mostly going to be links to my blog posts, with the occasional re-blog of someone else’s work if something catches my eye.

Nima Arkani-Hamed gave a public lecture at Perimeter yesterday, which I encourage you to watch if you have time, once it’s up on the Perimeter site. He also gave a technical talk earlier in the day, where he finished up by making the following (intentionally) provocative statement:

There is no direct evidence of what happened during the Big Bang that could have survived till today.

He clarified that he doesn’t just mean “evidence we can currently detect”. Rather, there’s a limit on what we can know, even with the most precise equipment possible. The details of what happened at the Big Bang (the sorts of precise details that would tell you, for example, whether it is best described by String Theory or some other picture) would get diluted as the universe expands, until today they would be so subtle and so rare that they fall below the level we could even in principle detect. We simply don’t have enough information available, no matter how good our technology gets, to detect them in a statistically significant way.

If this talk had happened last week, I could have used this in my spooky Halloween post. This is exactly the sort of thing that keeps physicists up at night: the idea that, fundamentally, there may be things we can never truly know about the universe, truths lost to the ages.

It’s not quite as dire as it sounds, though. To explain why, let me mention another great physics piece, Tom Stoppard’s Arcadia.

Despite appearances, this is in fact a great work of physics popularization.

Arcadia is a play about entropy. The play depicts two time periods, the early 19th century and the present day. In the present day a pair of scholars, Hannah and Bernard, argue about the events of the 19th century, when the house was occupied by a mathematically precocious girl named Thomasina and her tutor Septimus. Thomasina makes early discoveries about fractals and (to some extent) chaos theory, while Septimus gradually falls in love with her. In the present, the two scholars gradually get closer to the truth, going from a false theory that one of the guests at the house was killed by Lord Byron, to speculation that Septimus was the one to discover fractals, to finally getting a reasonably accurate idea of how the events of the story unfolded. Still, they never know everything, and the play emphasizes that certain details (documents burned in a fire, the true feelings of some of the people) will be forever lost to the ages.

The key point here is that, even with incomplete information, even without the ability to fully test their hypotheses and get all the details, the scholars can still make progress. They can propose accounts of what happened, accounts that have implications they can test, that might be proven wrong or right by future discoveries. Their accounts will also have implications they can’t test: lost letters, feelings never written down. But the better their account, the more it will explain, and the longer it will agree with anything new they manage to turn up.

That’s the way out of the problem Nima posed. We can’t know the truth of what happened at the Big Bang directly. But if we have a theory of physics that describes everything we can test, it’s likely to also make a prediction for what happened in the Big Bang. In science, most of the time you don’t have direct evidence. Rather, you have a successful theory, one that has succeeded under scrutiny many times in many contexts, enough that you trust it even when it goes out of the area you’re comfortable testing. That’s why physicists can make statements about what it’s like on the inside of a black hole, and it’s why it’s still good science to think about the Big Bang even if we can’t gather direct evidence about the details of how it took place.

All that said, Nima is well aware of this, and the problem still makes him uncomfortable. It makes me uncomfortable too. Saying that something is completely outside of our ability to measure, especially something as fundamental and important as the Big Bang, is not something we physicists can generally be content with. Time will tell whether there’s a way around the problem.

The Hardest Audience Knows Just Enough to Be Dangerous

You’d think that it would be hard to explain physics to people who know absolutely nothing about physics.

And you might be right, if there were anyone these days who knew absolutely nothing about physics. If someone didn’t know what atoms were, or didn’t know what a physicist was, then yes, it would take quite a while to explain anything more than the basics. But most people know what atoms are, and know what physicists are, and at least have a basic idea that there are things called protons and neutrons and electrons.

And that’s often enough. Starting with a basis like that, I can talk people through the Large Hadron Collider, I can get them to picture Feynman Diagrams, I can explain, roughly, what it is I do.

On the other end, it’s not all that hard to explain what I do to people in my sub-field. Working on the same type of physics is like sharing a language: we have all sorts of terms that make explaining easier. While it’s still possible to trip up and explain too much or too little (a recent talk I gave left out the one part that one member of the audience needed…because everyone else would have gotten nothing out of it), you’re protected by a buffer of mutual understanding.

The hardest talks aren’t for the public, and they aren’t for fellow amplitudes-researchers. They’re for a general physics audience.

If you’re talking to physicists, you can’t start with protons and neutrons. Do that, and your audience is going to get annoyed with you rather quickly. You can’t rely on the common understanding everyone has of physics. In addition to making your audience feel like they’re being talked down to, you won’t manage to say anything substantial. You need to start at a higher level so that when you do describe what you do, it’s in enough detail that your audience feels like they really understand it.

At the same time, you can’t start with the jargon of your sub-field. If you want to really explain something (and not just have fifteen minutes of background before everyone tunes out) you need to build off of a common understanding.

The tricky part is, that “common understanding” is more elusive than you might think. For example, pretty much every physicist has some familiarity with Quantum Field Theory…but that can mean anything from “uses it every day” to “saw it a couple times back in grad school”. Too much background, and half your audience is bored. Too little, and half your audience is lost. You have to strike the proper balance, trying to show everyone enough to feel satisfied.

There are tricks to make this easier. I’ve noticed that some of the best speakers begin with a clever and unique take on something everyone understands. That way, people in very different fields will still have something they recognize, while people in the same field will still be seeing something new. Of course, the tricky part is coming up with a new example in the first place!

In general, I need to get better at estimating where my audience is. Talking to you guys is fun, but I ought to also practice a “physics voice” for discussions with physicists (as well as grants and applications), and an “amplitudes voice” for fellow specialists. The key to communication, as always, is knowing your audience.

A Nobel for Blue LEDs, or, How Does That Count as Physics?

When I first heard about this year’s Nobel Prize in Physics, I didn’t feel the need to post on it. The prize went to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, whose discoveries enabled blue LEDs. It’s a more impressive accomplishment than it might seem: while red LEDs have been around since the 1960s and ’70s, blue LEDs were only developed in the 1990s, and only with both can highly efficient, LED-based white light sources be made. Still, I didn’t consider posting on it because it’s pretty much entirely outside my field.

[Image: a blue LED]

Shiny, though

It took a conversation with another Perimeter Institute postdoc to point out one way I can comment on the Nobel, and it started when we tried to figure out what type of physicists Akasaki, Amano, and Nakamura are. After tossing around terms like “device physicist” and “condensed matter”, someone wondered whether the development of blue LEDs wasn’t really a matter of engineering.

At that point I realized, I’ve talked about something like this before.

Physicists work on lots of different things, and many of them don’t seem to have much to do with physics. They study geometry and topology, biological molecules and the nature of evolution, income inequality and, yes, engineering.

On the surface, these don’t have much to do with physics. A friend of mine used to quip that condensed matter physicists seem to just “pick whatever they want to research”.

There is something that ties all of these topics together, though. They’re all things that physicists are good at.

Physics grad school gives you a wide variety of tools with which to understand the world. Thermodynamics gives you a way to understand large, complicated systems with statistics, while quantum field theory lets you understand everything with quantum properties, not just fundamental particles but materials as well. This batch of tools can be applied to “traditional” topics, but they’re equally applicable if you’re researching something else entirely, as long as it obeys the right kinds of rules.

In the end, the best definition of physics is the most useful one. Physicists should be people who can benefit from being part of physics organizations, from reading physics journals, and especially from training (and having been) physics grad students. The whole reason we have scientific disciplines in the first place is to make it easier for people with common interests to work together. That’s why Akasaki, Amano, and Nakamura aren’t “just” engineers, and why I and my fellow string theorists aren’t “just” mathematicians. We use our knowledge of physics to do our jobs, and that, more than anything else, makes us physicists.


Edit: It has been pointed out to me that there’s a bit more to this story than the main accounts have let on. Apparently another researcher named Herbert Paul Maruska was quite close to getting a blue LED up and running back in the early 1970s, getting far enough to have a working prototype. There’s a whole fascinating story about the quest for a blue LED, related here. Maruska seems to be on friendly terms with Akasaki, Amano, and Nakamura, and doesn’t begrudge them their recognition.

Feeling Perturbed?

You might think of physics as the science of certainties and exact statements: action and reaction, F=ma, and all that. However, most calculations in physics aren’t exact, they’re approximations. This is especially true today, but it’s been true almost since the dawn of physics. In particular, approximations are performed via a method known as perturbation theory.

Perturbation theory is a trick used to solve problems that, for one reason or another, are too difficult to solve all in one go. It works by solving a simpler problem, then perturbing that solution, adjusting it closer to the target.

To give an analogy: let’s say you want to find the area of a circle, but you only know how to draw straight lines. You could start by drawing a square: it’s easy to find the area, and you get close to the area of the circle. But you’re still a long ways away from the total you’re aiming for. So you add more straight lines, getting an octagon. Now it’s harder to find the area, but you’re closer to the full circle. You can keep adding lines, each step getting closer and closer.
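The polygon picture can be made concrete in a few lines of code. A regular n-sided polygon inscribed in a circle of radius 1 has area (n/2)·sin(2π/n), which creeps up toward the circle’s area of π as n grows. (This is just my own illustrative sketch of the analogy, not anything from the physics itself.)

```python
import math

def inscribed_polygon_area(n):
    """Area of a regular n-gon inscribed in a unit circle:
    n identical triangles, each with area (1/2)*sin(2*pi/n)."""
    return (n / 2) * math.sin(2 * math.pi / n)

# Doubling the number of sides, like the square -> octagon -> ... picture:
for n in [4, 8, 16, 32, 64]:
    print(f"{n:3d} sides: area = {inscribed_polygon_area(n):.6f}"
          f"  (circle: {math.pi:.6f})")
```

Each doubling of the number of sides roughly quarters the remaining error, so the approximation closes in on the circle quickly.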


This, broadly speaking, is what’s going on when particle physicists talk about loops. The calculation with no loops (or “tree-level” result) is the easier problem to solve, omitting quantum effects. Each loop then is the next stage, more complicated but closer to the real total.

There are, as usual, holes in this analogy. One is that it leaves out an important aspect of perturbation theory, namely that it involves perturbing with a parameter. When that parameter is small, perturbation theory works, but as it gets larger the approximation gets worse and worse. In the case of particle physics, the parameter is the strength of the forces involved, with weaker forces (like the weak nuclear force, or electromagnetism) having better approximations than stronger forces (like the strong nuclear force). If you squint, this can still fit the analogy: different shapes might be harder to approximate than the circle, taking more sets of lines to get acceptably close.
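To see how the size of the parameter matters, here’s a toy example of my own (nothing to do with any particular quantum field theory): approximating 1/(1−g) by the first few terms of its expansion 1 + g + g² + …. For small g a handful of terms is excellent; as g approaches 1, the same number of terms does much worse.

```python
def truncated_series(g, terms):
    """Partial sum 1 + g + g^2 + ... + g^(terms-1),
    a stand-in for a perturbative expansion in a coupling g."""
    return sum(g**k for k in range(terms))

def exact(g):
    """The 'full answer' the series is approximating."""
    return 1 / (1 - g)

for g in [0.1, 0.5, 0.9]:
    approx = truncated_series(g, 6)
    print(f"g = {g}: 6-term series = {approx:.5f}, exact = {exact(g):.5f}")
```

At g = 0.1 the six-term sum is accurate to about one part in a million; at g = 0.9 it misses the true answer of 10 by more than half.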

Where the analogy fails completely, though, is when you start approaching infinity. Keep adding more lines, and you should be getting closer and closer to the circle each time. In quantum field theory, though, this frequently is not the case. As I’ve mentioned before, while lower loops keep getting closer to the true (and experimentally verified) results, going all the way out to infinite loops results not in the full circle, but in an infinite result instead. There’s an understanding of why this happens, but it does mean that perturbation theory can’t be thought of in the most intuitive way.

Almost every calculation in particle physics uses perturbation theory, which means almost always we are just approximating the real result, trying to draw a circle using straight lines. There are only a few theories where we can bypass this process and look at the full circle. These are known as integrable theories. N=4 super Yang-Mills may be among them, one of many reasons why studying it offers hope for a deeper understanding of particle physics.

The Many (Body) Problems of the Academic Lifestyle

I’m visiting Perimeter this week, searching for apartments in the area. This got me thinking about how often one has to move in academia. You move for college, you move for grad school, you move for each postdoc job, and again when you start as a professor. Even then, you may not get to stay where you are if you don’t manage to get tenure, and it may be healthier to resign yourself to moving every seven years rather than assuming you’re going to settle down.

Most of life isn’t built around the idea that people move across the country (or the world!) every 2-7 years, so naturally this causes a few problems for those on the academic path. Below are some well-known, and not-so-well-known, problems facing academics due to their frequent relocations:

The two-body problem:

Suppose you’re married, or in a committed relationship. Better hope your spouse has a flexible job, because in a few years you’re going to be moving to another city. This is even harder if your spouse is also an academic, as that requires two rare academic jobs to pop up in the same place. And woe betide you if you’re out of synch, and have to move at different times. Many couples end up having to resort to some sort of long-distance arrangement, which further complicates matters.

The N-body problem:

Like the two-body problem, but for polyamorous academics. Leads to poly-chains up and down the East Coast.

The 2+N-body problem:

Alternatively, add a time dimension to your two-body problem via the addition of children. Now your kids are busily being shuffled between incommensurate school systems. But you’re an academic, you can teach them anything they’re missing, right?

The warm body problem:

Of course, all this assumes you’re in a relationship. If you’re single, you instead have the problem of never really having a social circle beyond your department, having to tenuously rebuild your social life every few years. What sorts of clubs will the more socially awkward of you enter, just to have some form of human companionship?

The large body of water problem:

We live in an age where everything is connected, but that doesn’t make distance cheap. An ocean between you and your collaborators means you’ll rarely be awake at the same time. And good luck crossing that ocean again, not every job will be eager to pay relocation expenses.

The obnoxious governing body problem:

Of course, the various nations involved won’t make all this travel easy. Many countries have prestigious fellowships only granted on the condition that the winner returns to their home country for a set length of time. Since there’s no guarantee that anyone in your home country does anything similar to what you do, this sort of requirement can have people doing whatever research they can find, however tangentially related, or trying to avoid the incipient bureaucratic nightmare any way they can.


Amplitudes on Paperscape

Paperscape is a very cool tool developed by Damien George and Rob Knegjens. It analyzes papers from arXiv, the paper repository where almost all physics and math papers live these days. By putting papers that cite each other closer together and pushing papers that don’t cite each other further apart, Paperscape creates a map of all the papers on arXiv, arranged into “continents” based on the links between them. Papers with more citations are shown larger, newer papers are shown brighter, and subject categories are indicated by color-coding.
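Paperscape’s actual algorithm isn’t spelled out here, but the “pull cited papers together, push the rest apart” idea is essentially a force-directed graph layout. Here’s a toy sketch of that idea (my own code with made-up parameters, not Paperscape’s): edges act like springs, and every pair of nodes repels.

```python
import math

def force_layout(nodes, edges, iters=300, step=0.02, rep=0.5):
    """Toy force-directed layout: edges are springs with rest length 1;
    every pair of nodes repels with strength rep / distance^2."""
    # Deterministic start: nodes spaced evenly on a circle of radius 2.
    pos = {n: (2 * math.cos(2 * math.pi * i / len(nodes)),
               2 * math.sin(2 * math.pi * i / len(nodes)))
           for i, n in enumerate(nodes)}
    edge_set = {frozenset(e) for e in edges}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[b][0] - pos[a][0]
                dy = pos[b][1] - pos[a][1]
                d = math.hypot(dx, dy) or 1e-9
                f = -rep / d**2                 # repulsion between all pairs
                if frozenset((a, b)) in edge_set:
                    f += d - 1.0                # spring pull along citations
                fx, fy = f * dx / d, f * dy / d
                force[a][0] += fx; force[a][1] += fy
                force[b][0] -= fx; force[b][1] -= fy
        for n in nodes:
            pos[n] = (pos[n][0] + step * force[n][0],
                      pos[n][1] + step * force[n][1])
    return pos

# "A" cites "B" and "C"; "D" cites nothing, so it drifts away.
pos = force_layout(["A", "B", "C", "D"], [("A", "B"), ("A", "C")])
```

After a few hundred iterations, papers linked by citations sit close together while unlinked ones are pushed apart, which is exactly how the “continents” form.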

Here’s a zoomed-out view:

[Image: zoomed-out Paperscape map]

Already you can see several distinct continents, corresponding to different arXiv categories like high energy theory and astrophysics.

If you want to find amplitudes on this map, just zoom in between the purple continent (high energy theory, much of which is string theory) and the green one (high energy lattice, nuclear experiment, high energy experiment, and high energy phenomenology; broadly speaking, these are all particle physics).

[Image: Paperscape map around the amplitudes region]

When you zoom in, Paperscape shows words that commonly appear in a given region of papers. Zoomed in this far, you can see amplitudes!

Amplitudeologists like me live on an island between particle physics and string theory. We’re connected on both sides by bridges of citations and shared terms, linking us on one side to people who study quarks and gluons, and on the other to people who study strings and geometry. Think of us like Manhattan, an island between two shores, densely networked into its surroundings.

[Image: zoomed-in Paperscape map]

Zoom in further, and you can see common keywords for individual papers. Exploring around here shows not only which topics are being discussed, but which subfields the papers come from. You can see by the color-coding that many papers in amplitudes are published as hep-th, or high energy theory, but there’s also a fair number of papers from hep-ph (phenomenology) and from nuclear physics.

There are a lot of interesting things you can do with Paperscape. You can search for individuals, or look at individual papers, seeing who they cite and who cites them. Try it out!

What’s up with arXiv?

First of all, I wanted to take a moment to say that this is the one-year anniversary of this blog. I’ve been posting every week, (almost always) on Friday, since I first was motivated to start blogging back in November 2012. It’s been a fun ride, through ups and downs, Ars Technica and Amplituhedra, and I hope it’s been fun for you, the reader, as well!

I’ve been giving links to arXiv since my very first post, but I haven’t gone into detail about what arXiv is. Since arXiv is a unique phenomenon, it could use a fuller description.


The word arXiv is pronounced much like the ordinary word archive: just think of the capital X as the Greek letter chi.

Much as the name would suggest, arXiv is an archive, specifically a preprint archive. A preprint is, in a sense, a paper before it becomes a paper; more precisely, it is a scientific paper that has not yet been published in a journal. In the past, such preprints were kept by individual universities, or passed between interested individuals. Now arXiv, for an increasing range of fields (first physics and mathematics, now also computer science, quantitative biology, quantitative finance, and statistics), puts all of the preprints in one central, freely accessible place.

Different fields have different conventions when it comes to using arXiv. As a theoretical physicist, I can only really speak to how we use the system.

When theoretical physicists write a paper, it is often not immediately clear which journal we should submit it to. Different journals have different standards, and a paper that will gather more interest can be published in a more prestigious journal. In order to gauge how much interest a paper will raise, most theoretical physicists will put their papers up on arXiv as preprints first, letting them sit there for a few months to drum up attention and get feedback before formally submitting the paper to a journal.

The arXiv isn’t just for preprints, though. Once a paper is published in a journal, a copy of the paper remains on arXiv. Often, the copy on arXiv will be updated when the paper is updated, changed to the journal’s preferred format and labeled with the correct journal reference. So arXiv, ultimately, contains almost all of the papers published in theoretical physics in the last decade or two, all free to read.

But it’s not just papers! The digital format of arXiv makes it much easier to post other files alongside a paper, so that many people upload not just their results, but the computer code they used to generate them, or their raw data in long files. You can also post papers too long or unwieldy to publish in a journal, making arXiv an excellent dropping-off point for information in whatever format you think is best.

We stand at the edge of a new age of freely accessible science. As more and more disciplines start to use arXiv and similar services, we’ll have more flexibility to get more information to more people, while still keeping the advantage of peer review for publication in actual journals. It’s going to be very interesting to see where things go from here.

New Guide, Taking Suggestions

Hello readers!

Some of you have probably read the guide to N=4 super Yang-Mills theory linked at the top of my blog’s home page. A few of you even discovered this blog via the guide.

I’m thinking about doing another series of posts, like those, explaining a different theory. I’d like to get an idea of which theory you guys are most interested in seeing described. Whichever I choose, it will be largely along the same lines as the N=4 posts, so focused less on technical details and more on presenting something that a layman can understand.

Here are some of the options I’m considering:

  • N=8 Supergravity: Very broadly speaking, this is gravity’s equivalent of N=4 super Yang-Mills. It’s connected to N=4 in a variety of interesting ways, and it’s something I’d like to work more with at some point.
  • The (2,0) Theory: This was the motivation behind my first paper. It’s harder to explain than some of the other theories because it doesn’t have an easy analogy with the particles of the real world. It’s also even harder to work with, to the point that saying something rigorous about it is often worthy of a paper on its own.
  • String Theory/M Theory: This is a big topic, and there are many sites out there already that cover aspects of it. What I might try to do is look for an angle of approach that others haven’t covered, and try to explain some slightly more technical aspects of the situation in a popularly approachable way.

I could also give a more detailed description of some method from amplitudeology, like generalized unitarity or symbology.

Finally, I could always just keep posting like I have been doing. But this seems like a good time to add to my site’s utility. So what do you think? What should I talk about?

Dammit Jim, I’m a Physicist not a Graphic Designer!

Over the last week I’ve been working with a few co-authors to get a paper ready for publication. For my part, this has mostly meant making plots of our data. (Yes, theorists have data! It’s the result of calculations, not observations, but it’s still data!)

As it turns out, making the actual plots is only the first and easiest step. We have a huge number of data points, which means the plots ended up being very large files. To fix this I had to convert the plots from vector graphics, which store every data point, into pixel-based images, a process called rasterizing. I also needed to make sure that the labels of the plots matched the fonts in the paper, and that the images in the paper were of the right file type to be included, which in turn meant understanding the sort of information retained by each type of image file. I had to learn which image formats include transparency and which don’t, which store fonts as text and which as images, and which fonts were included in each program I used. By the end, I learned more about graphic design than I ever intended to.

In a company, this sort of job would be given to a graphic designer on-staff, or a hired expert. In academia, however, we don’t have the resources for that sort of thing, so we have to become experts in the nitty-gritty details of how to get our work in publishable form.

As it turns out, this is part of a wider pattern in academia. Any given project doesn’t have a large staff of specialists or a budget for outside firms, so everyone involved has to become competent at tasks that a business would parcel out to experts. This is why a large part of work in physics isn’t really physics per se; rather, we theorists often spend much of our time programming, while experimentalists often have to build and repair their experimental apparatus. The end result is that much of what we do is jury-rigged together, with an amateur understanding of most of the side disciplines involved. Things work, but they aren’t as efficient or as slick as they could be if assembled by a real expert. On the other hand, it makes things much cheaper, and it’s a big contributor to the uncanny ability of physicists to know about other peoples’ disciplines.

Hawking vs. Witten: A Primer

Have you seen the episode of Star Trek where Data plays poker with Stephen Hawking? How about the times he appeared on Futurama or The Simpsons? Or the absurd number of times he has come up in one way or another on The Big Bang Theory?

Stephen Hawking is probably the most recognizable theoretical physicist to laymen. Wheelchair-bound and speaking through a voice synthesizer, Hawking presents a very distinct image, while his work on black holes and the Big Bang, along with his popular treatments of science in books like A Brief History of Time, has made him synonymous in the public’s mind with genius.

He is not, however, the most recognizable theoretical physicist when talking to physicists. If Sheldon from The Big Bang Theory were a real string theorist he wouldn’t be obsessed with Hawking. He might, however, be obsessed with Edward Witten.

Edward Witten is tall and has an awkwardly high voice (for a sample, listen to the clip here). He’s also smart, smart enough to dabble in basically every subfield of theoretical physics and manage to make important contributions while doing so. He has a knack for digging up ideas from old papers and dredging out the solution to current questions of interest.

And far more than Hawking, he represents a clear target for parody, at least when that parody is crafted by physicists and mathematicians. Abstruse Goose has a nice take on his role in theoretical physics, while his collaboration with another physicist named Seiberg on what came to be known as Seiberg-Witten theory gave rise to the cyber-Witten pun.

If you would look into the mouth of physics-parody madness, let this link be your guide…

So why hasn’t this guy appeared on Futurama? (After all, his dog does!)

Witten is famous among theorists, but he hasn’t done as much as Hawking to endear himself to the general public. He hasn’t written popular science books, and he almost never gives public talks. So when a well-researched show like The Big Bang Theory wants to mention a famous physicist, they go to Hawking, not to Witten, because people know about him. And unless Witten starts interfacing more with the public (or blog posts like this become more common), that’s not about to change.