
Who Plagiarizes an Acknowledgements Section?

I’ve got plagiarists on the brain.

Maybe it was running into this interesting discussion about a plagiarized application for the National Science Foundation’s prestigious Graduate Research Fellowship Program. Maybe it’s due to the talk Paul Ginsparg, founder of arXiv, gave this week about, among other things, detecting plagiarism.

Using arXiv’s repository of every paper someone in physics thought was worth posting, Ginsparg has been using statistical techniques to sift out cases of plagiarism. Probably the funniest cases involved people copying a chunk of their thesis acknowledgements section, as excerpted here. Compare:

“I cannot describe how indebted I am to my wonderful girlfriend, Amanda, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”

“I cannot describe how indebted I am to my wonderful wife, Renata, whose love and encouragement will always motivate me to achieve all that I can. I could not have written this thesis without her support; in particular, my peculiar working hours and erratic behaviour towards the end could not have been easy to deal with!”
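Ginsparg's actual techniques are more sophisticated, but the core idea behind this kind of detection, comparing the overlapping word n-grams ("shingles") two documents share, fits in a few lines of Python. This is a toy sketch, not his method; the texts below are the excerpts above, and the similarity measure (Jaccard overlap) is an illustrative choice:

```python
import re

def shingles(text, n=5):
    """Set of lowercase word n-grams ("shingles") in a text."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b):
    """Fraction of shingles the two texts share: |A & B| / |A | B|."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

thesis_1 = ("I cannot describe how indebted I am to my wonderful girlfriend, "
            "Amanda, whose love and encouragement will always motivate me to "
            "achieve all that I can.")
thesis_2 = ("I cannot describe how indebted I am to my wonderful wife, "
            "Renata, whose love and encouragement will always motivate me to "
            "achieve all that I can.")

# Near-identical passages share most of their shingles;
# independent texts share essentially none.
print(jaccard_similarity(thesis_1, thesis_2))
```

Two passages that differ in only a couple of words still share well over half their shingles, while unrelated texts score near zero, which is why even a crude statistical sweep over a large repository turns up cases like the one above.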

Why would someone do this? Copying the scientific part of a thesis makes sense, in a twisted way: science is hard! But why would someone copy the fluff at the end, the easy part that’s supposed to be a genuine take on your emotions?

The thing is, the acknowledgements section of a thesis isn’t exactly genuine. It’s very formal: a required section of the thesis, with tacit expectations about what’s appropriate to include and what isn’t. It’s also the sort of thing you only write once in your life: while published papers also have acknowledgements sections, they’re typically much shorter, and have different conventions.

If you ever were forced to write thank-you notes as a kid, you know where I’m going with this.

It’s not that you don’t feel grateful, you do! But when you feel grateful, you express it by saying “thank you” and moving on. Writing a note about it isn’t very intuitive; it’s not a way you’re used to expressing gratitude, so the whole experience feels like you’re just following a template.

Literally in some cases.

That sort of situation, where it doesn’t matter how strongly you feel something, only whether you express it in the right way, is a breeding ground for plagiarism. Aunt Mildred isn’t going to care what you write in your thank-you note, and Amanda/Renata isn’t going to be moved by your acknowledgements section. It’s so easy to decide, in that kind of situation, that it’s better to just grab whatever appropriate text you can than to teach yourself a new style of writing.

In general, plagiarism happens when there’s a disconnect between incentives and the goals those incentives are meant to serve. In a world where very few beginning graduate students actually have a solid research plan, the NSF’s fellowship application feels like a demand for creative lying, not an honest way to judge scientific potential. In countries eager for highly-cited faculty but low on preexisting experts able to judge scientific merit, tenure becomes easier to get by faking a series of papers than by doing the actual work.

If we want to get rid of plagiarism, we need to make sure our incentives match our intent. We need a system in which people succeed when they do real work, get fellowships when they honestly have talent, and where we care about whether someone was grateful, not how they express it. If we can’t do that, then there will always be people trying to sneak through the cracks.

The Hardest Audience Knows Just Enough to Be Dangerous

You’d think that it would be hard to explain physics to people who know absolutely nothing about physics.

And you might be right, if there were anyone these days who knew absolutely nothing about physics. If someone didn’t know what atoms were, or didn’t know what a physicist was, then yes, it would take quite a while to explain anything more than the basics. But most people know what atoms are, and know what physicists are, and at least have a basic idea that there are things called protons and neutrons and electrons.

And that’s often enough. Starting with a basis like that, I can talk people through the Large Hadron Collider, I can get them to picture Feynman Diagrams, I can explain, roughly, what it is I do.

On the other end, it’s not all that hard to explain what I do to people in my sub-field. Working on the same type of physics is like sharing a language: we have all sorts of terms that make explaining easier. While it’s still possible to trip up and explain too much or too little (a recent talk I gave left out the one part that one member of the audience needed…because everyone else would have gotten nothing out of it), you’re protected by a buffer of mutual understanding.

The hardest talks aren’t for the public, and they aren’t for fellow amplitudes-researchers. They’re for a general physics audience.

If you’re talking to physicists, you can’t start with protons and neutrons. Do that, and your audience is going to get annoyed with you rather quickly: you can’t lean on the lay-level understanding everyone has of physics. Not only will your audience feel like they’re being talked down to, you won’t manage to say anything substantial. You need to start at a higher level, so that when you do describe what you do, it’s in enough detail that your audience feels like they really understand it.

At the same time, you can’t start with the jargon of your sub-field. If you want to really explain something (and not just have fifteen minutes of background before everyone tunes out) you need to build off of a common understanding.

The tricky part is, that “common understanding” is more elusive than you might think. For example, pretty much every physicist has some familiarity with Quantum Field Theory…but that can mean anything from “uses it every day” to “saw it a couple times back in grad school”. Too much background, and half your audience is bored. Too little, and half your audience is lost. You have to strike the proper balance, trying to show everyone enough to feel satisfied.

There are tricks to make this easier. I’ve noticed that some of the best speakers begin with a clever and unique take on something everyone understands. That way, people in very different fields will still have something they recognize, while people in the same field will still be seeing something new. Of course, the tricky part is coming up with a new example in the first place!

In general, I need to get better at estimating where my audience is. Talking to you guys is fun, but I ought to also practice a “physics voice” for discussions with physicists (as well as grants and applications), and an “amplitudes voice” for fellow specialists. The key to communication, as always, is knowing your audience.

Science is Debugging

What do I do, when I get to work in the morning?

I debug programs.

I debug programs literally, in that most of the calculations I do are far too complicated to do by hand. I write programs to do my calculations, and invariably those programs have bugs. So, I debug.

I debug programs in a broader sense, too.

In science, a research program is a broad approach, taken by a number of people, used to make progress on some set of important scientific questions. Someone suggests a way forward (“Let’s try using an ansatz of transcendental functions!” “Let’s try to represent our calculations with a geometrical object!”) and they and others apply the new method to as many problems as they can. Eventually the program loses steam, and a new program is proposed.

The thing about these programs is, they’re pretty much never fully fleshed out at the beginning. There’s a general idea, and a good one, but it usually requires refinement. If you just follow the same steps as the first person in the program you’re bound to fail. Instead, you have to tweak the program, broadening it and adapting it to the problem you’re trying to solve.

It’s a heck of a lot like debugging a computer program, really. You start out with a hastily written script, and you try applying it as-is, hoping that it works. Often it doesn’t, and you have to go back, step by step, and figure out what’s going wrong.

So when I debug computer programs at work, I’m doing it with a broader goal. I’m running a scientific program, looking for bugs in that. If and when I find them, I can write new computer programs to figure out what’s going wrong. Then I have to debug those computer programs…

I’ll just leave this here.

(Interstellar) Dust In The Wind…

The news has hit the blogosphere: the team behind the Planck satellite has released new dust measurements, and they seem to be a nail in the coffin of BICEP2’s observation of primordial gravitational waves.

Some background for those who haven’t been following the story:

BICEP2, a telescope in Antarctica, is set up to observe the Cosmic Microwave Background, light left over from the very early universe. Back in March, they announced that they had seen characteristic ripples in that light, ripples that they believed were caused by gravitational waves in the early universe. By comparing the size of these gravitational waves to their (quantum-small) size when they were created, they could make statements about the exponential expansion of the early universe (called inflation). This amounted to better (and more specific) evidence about inflation than anyone else had ever found, so naturally people were very excited about it.

However, doubt was rather quickly cast on these exciting results. Like all experimental science, BICEP2 needed to estimate the chance that their observations could be caused by something more mundane. In particular, interstellar dust can cause similar “ripples” to those they observed. They argued that dust would have contributed a much smaller effect, so their “ripples” must be the real deal…but to make this argument, they needed an estimate of how much dust they should have seen. They had several estimates, but one in particular was based on data “scraped” off of a slide from a talk by the Planck collaboration.

Unfortunately, it seems that the BICEP2 team misinterpreted this “scraped” data. Now, Planck have released the actual data, and it seems like dust could account for BICEP2’s entire signal.

I say “could” because more information is needed before we know for sure. The BICEP2 and Planck teams are working together now, trying to tease out whether BICEP2’s observations are entirely dust, or whether there might still be something left.

I know I’m not the only person who wishes that this sort of collaboration could have happened before BICEP2 announced their discovery to the world. If Planck had freely shared their early data with BICEP2, they would have had accurate dust estimates to begin with, and they wouldn’t have announced all of this prematurely.

Of course, expecting groups to freely share data when Nobel prizes and billion-dollar experiments are on the line is pretty absurdly naive. I just wish we lived in a world where none of this was at issue, where careers didn’t ride on “who got there first”.

I’ve got no idea how to bring about such a world, of course. Any suggestions?

Perimeter!

I’m moving in at Perimeter this week, so I don’t have time to write a long post. For those who aren’t familiar with it, the Perimeter Institute for Theoretical Physics is an independent research institute, not affiliated with any university. Instead, it’s funded by a combination of government and private sources (for why private sources might fund theoretical physics, read my discussion here). Because it’s not a university, it has the budget to do things like hire people to make the transition process easier, so everything has been really nice and well-organized.

The postdoc offices are really nice, with a view of the nearby park, shown below.

On the Perimeter…of Waterloo Park

Stop! Impostor!

Ever felt like you don’t belong? Like you don’t deserve to be where you are, that you’re just faking competence you don’t really have?

If not, it may surprise you to learn that this is a very common feeling among successful young academics. It’s called impostor syndrome, and it happens to some very talented people.

It’s surprisingly easy to rationalize success as luck, to assume praise comes from people who don’t know the full story. In science, we’re surrounded by people who seem to come up with brilliant insights on a regular basis. We see others’ successes far more often than we see their failures, and often we forget that science is at its heart a process of throwing ideas against a wall until something sticks. Hyper-aware of our own failures, when we present ourselves as successful we can feel like we’re putting on a paper-thin disguise, constantly at risk that someone will see through it.

As paper-thin disguises go, I prefer the classics.

In my experience, theoretical physics is especially heavy on impostor syndrome, for a number of reasons.

First, there’s the fact that beginning grad students really don’t know all they need to. Theoretical physics requires a lot of specialized knowledge, and most grad students just have the bare bones basics of a physics undergrad degree. On the strength of those basics, you’re somehow supposed to convince a potential advisor, an established, successful scientist, that you’re worth paying attention to.

Throw in the fact that many people have a little more than the basics, whether from undergrad research projects or grad-level courses taken early, and you have a group where everyone is trying to seem more advanced than they are. There’s a very real element of fake it till you make it, of going to talks and picking up just enough of the lingo to bluff your way through a conversation.

And the thing is, even after you make it, you’ll probably still feel like you’re faking it.

As I’ve mentioned before, there’s an enormous amount of jury-rigging that goes into physics research. There are a huge number of side-disciplines that show up at one point or another, from numerical methods to programming to graphic design. We can’t hire a professional to handle these things, we have to learn them ourselves. As such, we become minor dabblers in a whole mess of different fields. Work on something enough and others will start looking to you for help. It won’t feel like you’re an expert, though, because you know in the back of your mind that the real experts know so much more.

In the end, the best approach I’ve found is simply to keep saying yes. Keep using what you know, going to talks and trying new things. The more you “pretend” to know what you’re doing, the more experience you’ll get, until you really do know what you’re doing. There’s always going to be more to learn, but chances are if you’re feeling impostor syndrome you’ve already learned a lot. Take others’ opinions of you at face value, and see just how far you can go.

The Many (Body) Problems of the Academic Lifestyle

I’m visiting Perimeter this week, searching for apartments in the area. This got me thinking about how often one has to move in academia. You move for college, you move for grad school, you move for each postdoc job, and again when you start as a professor. Even then, you may not get to stay where you are if you don’t manage to get tenure, and it may be healthier to resign yourself to moving every seven years rather than assuming you’re going to settle down.

Most of life isn’t built around the idea that people move across the country (or the world!) every 2-7 years, so naturally this causes a few problems for those on the academic path. Below are some well-known, and not-so-well-known, problems facing academics due to their frequent relocations:

The two-body problem:

Suppose you’re married, or in a committed relationship. Better hope your spouse has a flexible job, because in a few years you’re going to be moving to another city. This is even harder if your spouse is also an academic, as that requires two rare academic jobs to pop up in the same place. And woe betide you if you’re out of synch, and have to move at different times. Many couples end up having to resort to some sort of long-distance arrangement, which further complicates matters.

The N-body problem:

Like the two-body problem, but for polyamorous academics. Leads to poly-chains up and down the East Coast.

The 2+N-body problem:

Alternatively, add a time dimension to your two-body problem via the addition of children. Now your kids are busily being shuffled between incommensurate school systems. But you’re an academic, you can teach them anything they’re missing, right?

The warm body problem:

Of course, all this assumes you’re in a relationship. If you’re single, you instead have the problem of never really having a social circle beyond your department, having to tenuously rebuild your social life every few years. What sorts of clubs will the more socially awkward of you enter, just to have some form of human companionship?

The large body of water problem:

We live in an age where everything is connected, but that doesn’t make distance cheap. An ocean between you and your collaborators means you’ll rarely be awake at the same time. And good luck crossing that ocean again, not every job will be eager to pay relocation expenses.

The obnoxious governing body problem:

Of course, the various nations involved won’t make all this travel easy. Many countries have prestigious fellowships only granted on the condition that the winner returns to their home country for a set length of time. Since there’s no guarantee that anyone in your home country does anything similar to what you do, this sort of requirement can have people doing whatever research they can find, however tangentially related, or trying to avoid the incipient bureaucratic nightmare any way they can.


Experimentalist Says What?

I’m a theoretical physicist. That means I work with pencil and paper, or with my laptop, or at most with a computer cluster. I don’t have a lab, and even if I did I wouldn’t have any equipment to store there.

By contrast, most physicists (and most scientists in general) are experimentalists, the people who actually do experiments, actually work in labs, and actually use piles and piles of expensive equipment. Naturally, these two groups have very different ways of doing things, spawned by different requirements for their jobs. This leads to very different ways of talking. We theorists sometimes get confused by the quaint turns of phrase used by experimentalists, so I’ve put together this handy translation guide:


Lab: Kind of like an office, but has a bunch of big machines in it for some reason. Also, in some of them they don’t even drink coffee, some nonsense about toxic contaminants. I don’t know how they get any work done with all those test tubes all over the place.

PI: Not Private Investigator, but close! The Principal Investigator is the big cheese among the experimentalists, the one who owns all the big machines. All of the others must bow before him or her, even fellow professors must grovel if they want to use the PI’s expensive equipment. Naturally, this makes experimentalists very hierarchical, a sharp contrast to theorists who are obviously totally egalitarian.

Poster: Let me tell you a secret about experimentalists: there are a lot of them. Way more than there are theorists. So many, that if they all go to a conference it’s impossible for them all to give talks! That’s where posters come in: some of the experimentalists all stand in a room in front of rectangles of cardboard covered in charts, while the others walk around and ask questions. Traditionally, these posters are printed an hour before the conference, obviously for maximum freshness and not at all because of procrastination.

Group: Like our Institutes, but (because there are a lot of experimentalists) there isn’t just one per university and (because of the shared lab) they actually have something to talk about. This leads to regular group meetings, because when you’re using expensive equipment you actually need to show you’re doing something worthwhile with it.

IRB: For the medical and psychological folks, the Institutional Review Board is there to tell you that, no, you can’t infect monkeys with flesh-eating bacteria just to see what happens. They’re also the people who ask you whether a grammatical change in your online survey will pose risks to pregnant women, which is clearly exactly as important. Theorists don’t have these, because numbers are an oppressed underclass with no rights to speak of. EHS (Environmental Health and Safety) fills a similar role for those who only oppress yeast and their own grad students.

Annual Meeting: Experimentalists tend to be part of big organizations like the American Physical Society. And that’s all well and good, occupies a space on the CV and so forth. What’s somewhat more baffling is their tendency to trust those organizations to run conferences. Generally these are massive affairs, with people from all sorts of sub-fields participating. This only works because experimentalists have the mysterious ability to walk into each other’s talks and actually understand what’s going on, even if the subject matter is very different from what they’re used to. Experts suggest this has something to do with actually studying real things in the real world, but this is a hypothesis at best.

Insert Muscle Joke Here

I’m graduating this week, so I probably shouldn’t spend too much time writing this post. I ought to mention, though, that there has been some doubt about the recent discovery by the BICEP2 telescope of evidence for gravitational waves in the cosmic microwave background caused by the early inflation of the universe. Résonaances got to the story first and Of Particular Significance has some good coverage that should be understandable to a wide audience.

In brief, the worry is that the signal detected by BICEP2 might not be caused by inflation, but instead by interstellar dust. While the BICEP2 team used several models of dust to show that it should be negligible, the controversy centers around one of these models in particular, one taken from another, similar experiment called Planck.

The problem is, BICEP2 didn’t get Planck’s information on dust directly. Instead, it appears they took the data from a slide in a talk by the Planck team. This process, known as “data scraping”, involves taking published copies of the slides and reading information off of the charts presented. If BICEP2 misinterpreted the slide, they might have miscalculated the contribution from interstellar dust.
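For the curious: “scraping” a chart usually means measuring the pixel positions of the plotted points in the published image, then mapping them back to data coordinates using two labeled tick marks per axis. Here’s a minimal sketch of that linear mapping; the function name and the numbers are made up for illustration, not taken from either experiment’s plots:

```python
def pixel_to_data(px, tick1_px, tick1_val, tick2_px, tick2_val):
    """Map a pixel coordinate on a (linear) chart axis to a data value,
    given the pixel positions and values of two labeled tick marks."""
    scale = (tick2_val - tick1_val) / (tick2_px - tick1_px)
    return tick1_val + (px - tick1_px) * scale

# Hypothetical example: ticks read off at pixel 100 -> 0.0 and
# pixel 300 -> 20.0, so a point at pixel 150 corresponds to 5.0.
print(pixel_to_data(150, 100, 0.0, 300, 20.0))  # → 5.0
```

For a logarithmic axis you’d apply the same mapping to the logarithms of the tick values. Note that any error in reading the reference ticks, or in guessing what the plotted quantity even is, propagates straight into the recovered numbers, which is part of why scraped data is so risky.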

If you’re like me, the whole idea of data scraping seems completely ludicrous. The idea of professional scientists sneaking information off of a presentation, rather than simply asking the other team for data like reasonable human beings, feels almost cartoonishly wrong-headed.

It’s a bit more understandable, though, when you think about the culture behind these big experiments. The Planck and BICEP2 teams are colleagues, but they are also competitors. There is an enormous amount of glory in finding evidence for something like cosmic inflation first, and an equally enormous amount of shame in screwing up and announcing something that turns out to be wrong. As such, these experiments are quite protective of their data. Not only might someone with early access to the data preempt them on an important discovery, they might rush to publish a conclusion that is wrong. That’s why most of these big experiments spend a large amount of time checking and re-checking the data, communicating amongst themselves and settling on an interpretation before they feel comfortable releasing it to the wider community. It’s why BICEP2 couldn’t just ask Planck for their data.

From BICEP2’s perspective, plots presented in a Planck talk could reasonably be expected to be accurate, digitally generated plots. Unlike Fox News, scientists have an obligation to present their data in a way that isn’t misleading. And while relying on such a dubious source seems like a bad idea, by all accounts that’s not all the BICEP2 team did: Planck’s data was just one of several dust models used by the team, kept in part because it agreed well with other, non-“data-scraped” models.

It’s a shame that these experiments are so large and prestigious that they need to guard their data in such a potentially destructive way. My sub-field is generally much nicer about this sort of thing: the stakes are lower, and the groups are smaller and have less media attention, so we’re able to share data when we need to. In fact, my most recent paper got a significant boost from some data shared by folks at the Perimeter Institute.

Only time will tell whether the BICEP2 result holds up, or whether it was a fluke enabled by overly guarded data-sharing practices. A number of other experiments are coming online within the next year, and one of them may confirm or refute what BICEP2 has seen.

How do I get where you are?

I’ve mentioned before that this blog will be undergoing a redesign this summer, transitioning from 4gravitonsandagradstudent.wordpress.com to just 4gravitons.wordpress.com. One part of that redesign will be the introduction of new categories to help people search for content, as well as new guides like the ones for N=4 super Yang-Mills and the (2,0) theory for some of those categories. Of those, one planned category/guide will discuss careers in physics, with an eye towards explaining some of the often-unstated assumptions behind the process.

I’ve already posted on being a graduate research assistant and on what a postdoc is. I haven’t said much yet about the process leading up to becoming a graduate student. In this post, I’m going to give an overview of a career in theoretical physics, with a focus on what happens before you find an advisor. This is going to be inherently biased, based as it will be on my experiences. In particular, each country’s education system is different, so much of this will only be relevant for students in the US.

Let’s start at the beginning.

A very good place to start.

If you want to become a theoretical physicist, you’d better start by taking physics and math courses in high school. Unfortunately, this is where socioeconomic status has a big effect. Some schools have Advanced Placement or International Baccalaureate courses that let you get a head-start on college, many don’t. Some schools don’t even have physics courses at all anymore. My only advice here is to get what you can, when you can. If you can take a physics course, do it. If you can take calculus, do it. If you can take classes that will give you university credit, take them.

After high school, you go to college for a Bachelor’s degree in physics. Getting into college these days is some sort of ridiculous popularity contest, and I don’t pretend to be able to give advice on that. What I can say is that once you’re in college, coursework is important, but research is more important. Graduate schools will look at how well you did in your courses and how advanced those courses were, but they will pay special attention to who you get recommendations from, and whether you did research with them. Whether or not your college has anyone who you can research with, you should consider doing summer research somewhere interesting. With programs like the NSF’s Research Experience for Undergraduates (or REU) you can apply to get hooked up with interesting projects and mentors. In addition to looking good on an application to grad school, doing research helps boost your self-confidence: knowing that you can do something real really helps you start feeling like a scientist. Research also teaches you specialized skills much faster than coursework can: I’ve learned much more about programming from having to use it on projects than from any actual programming course.

That said, coursework is also useful. You want courses that will familiarize you with the basic tools of your field: physics courses on classical mechanics, quantum mechanics, and electromagnetism, and math courses on linear algebra and differential equations. You want to take a math course on group theory, but only if it’s taught by a physicist, as mathematicians focus on different aspects. More than any of that, though, you want to try to take at least a few graduate-level courses while you’re still in college.

That’s important, because grad school in theoretical physics is kind of a mess. You’ll be there for around five years in total (I was in at the low end with four, some people take six or seven). However, you take most if not all of your courses in the first two years. In general, during that time you are paid as a Teaching Assistant. The school pays your tuition and a livable (if barely) wage, and in return you lead lab sections or grade papers. Teaching experience can be a positive thing, but you don’t want to keep doing it for too long, because the point of grad school isn’t teaching or courses, it’s research. Your goal is to find an advisor who is willing to pay you out of one of their (usually government) grants, so that you can transition from Teaching Assistant to Research Assistant. This is hard to do while you’re still taking courses: you won’t have time, and worse, you won’t know everything you need. Theoretical physics requires a lot of background, and much of it gets taught in grad school. Here at Stony Brook, you’d be taking graduate-level quantum mechanics, quantum field theory, and string theory. Until recently, each one of those was a one-year course, and the most logical way to take them was one after the other. Add that up, and that’s three years…kind of a problem when you want to start research after two. That’s why getting ahead in courses, however and whenever you can, is so important: not so much for the courses themselves, but so you can get past them and do research.

Research is what you do for the rest of your time in grad school. It’s what you do after you graduate, when you become a postdoc. It’s what you do (along with teaching) as a professor, and what you are judged on when they decide whether or not you get tenure. Working through research is going to teach you more than any other experience you will have, so get as much of it as you can. And good luck!