The Many (Body) Problems of the Academic Lifestyle

I’m visiting Perimeter this week, searching for apartments in the area. This got me thinking about how often one has to move in academia. You move for college, you move for grad school, you move for each postdoc job, and again when you start as a professor. Even then, you may not get to stay where you are if you don’t manage to get tenure, and it may be healthier to resign yourself to moving every seven years rather than assuming you’re going to settle down.

Most of life isn’t built around the idea that people move across the country (or the world!) every 2-7 years, so naturally this causes a few problems for those on the academic path. Below are some well-known, and not-so-well-known, problems facing academics due to their frequent relocations:

The two-body problem:

Suppose you’re married, or in a committed relationship. Better hope your spouse has a flexible job, because in a few years you’re going to be moving to another city. This is even harder if your spouse is also an academic, as that requires two rare academic jobs to pop up in the same place. And woe betide you if you’re out of synch, and have to move at different times. Many couples end up having to resort to some sort of long-distance arrangement, which further complicates matters.

The N-body problem:

Like the two-body problem, but for polyamorous academics. Leads to poly-chains up and down the East Coast.

The 2+N-body problem:

Alternatively, add a time dimension to your two-body problem via the addition of children. Now your kids are busily being shuffled between incommensurate school systems. But you’re an academic, you can teach them anything they’re missing, right?

The warm body problem:

Of course, all this assumes you’re in a relationship. If you’re single, you instead have the problem of never really having a social circle beyond your department, having to tenuously rebuild your social life every few years. What sorts of clubs will the more socially awkward of you enter, just to have some form of human companionship?

The large body of water problem:

We live in an age where everything is connected, but that doesn’t make distance cheap. An ocean between you and your collaborators means you’ll rarely be awake at the same time. And good luck crossing that ocean again, not every job will be eager to pay relocation expenses.

The obnoxious governing body problem:

Of course, the various nations involved won’t make all this travel easy. Many countries have prestigious fellowships only granted on the condition that the winner returns to their home country for a set length of time. Since there’s no guarantee that anyone in your home country does anything similar to what you do, this sort of requirement can have people doing whatever research they can find, however tangentially related, or trying to avoid the incipient bureaucratic nightmare any way they can.

 

Experimentalist Says What?

I’m a theoretical physicist. That means I work with pencil and paper, or with my laptop, or at most with a computer cluster. I don’t have a lab, and even if I did I wouldn’t have any equipment to store there.

By contrast, most physicists (and most scientists in general) are experimentalists, the people who actually do experiments, actually work in labs, and actually use piles and piles of expensive equipment. Naturally, these two groups have very different ways of doing things, spawned by different requirements for their jobs. This leads to very different ways of talking. We theorists sometimes get confused by the quaint turns of phrase used by experimentalists, so I’ve put together this handy translation guide:

 

Lab: Kind of like an office, but has a bunch of big machines in it for some reason. Also, in some of them they don’t even drink coffee, some nonsense about toxic contaminants. I don’t know how they get any work done with all those test tubes all over the place.

PI: Not Private Investigator, but close! The Principal Investigator is the big cheese among the experimentalists, the one who owns all the big machines. All of the others must bow before him or her; even fellow professors must grovel if they want to use the PI’s expensive equipment. Naturally, this makes experimentalists very hierarchical, a sharp contrast to theorists, who are obviously totally egalitarian.

Poster: Let me tell you a secret about experimentalists: there are a lot of them. Way more than there are theorists. So many, that if they all go to a conference it’s impossible for them all to give talks! That’s where posters come in: some of the experimentalists all stand in a room in front of rectangles of cardboard covered in charts, while the others walk around and ask questions. Traditionally, these posters are printed an hour before the conference, obviously for maximum freshness and not at all because of procrastination.

Group: Like our Institutes, but (because there are a lot of experimentalists) there isn’t just one per university and (because of the shared lab) they actually have something to talk about. This leads to regular group meetings, because when you’re using expensive equipment you actually need to show you’re doing something worthwhile with it.

IRB: For the medical and psychological folks, the Institutional Review Board is there to tell you that, no, you can’t infect monkeys with flesh-eating bacteria just to see what happens. They’re also the people who ask you whether a grammatical change in your online survey will pose risks to pregnant women, which is clearly exactly as important. Theorists don’t have these, because numbers are an oppressed underclass with no rights to speak of. EHS (Environmental Health and Safety) fills a similar role for those who only oppress yeast and their own grad students.

Annual Meeting: Experimentalists tend to be part of big organizations like the American Physical Society. And that’s all well and good, occupies a space on the CV and so forth. What’s somewhat more baffling is their tendency to trust those organizations to run conferences. Generally these are massive affairs, with people from all sorts of sub-fields participating. This only works because experimentalists have the mysterious ability to walk into each other’s talks and actually understand what’s going on, even if the subject matter is very different from what they’re used to. Experts suggest this has something to do with actually studying real things in the real world, but this is a hypothesis at best.

Insert Muscle Joke Here

I’m graduating this week, so I probably shouldn’t spend too much time writing this post. I ought to mention, though, that some doubt has been cast on the BICEP2 telescope’s recent discovery of evidence, in the cosmic microwave background, for gravitational waves caused by the early inflation of the universe. Résonaances got to the story first, and Of Particular Significance has some good coverage that should be understandable to a wide audience.

In brief, the worry is that the signal detected by BICEP2 might not be caused by inflation, but instead by interstellar dust. While the BICEP2 team used several models of dust to show that it should be negligible, the controversy centers around one of these models in particular, one taken from another, similar experiment called PLANCK.

The problem is, BICEP2 didn’t get PLANCK’s information on dust directly. Instead, it appears they took the data from a slide in a talk by the PLANCK team. This process, known as “data scraping”, involves taking published copies of the slides and reading information off of the charts presented. If BICEP2 misinterpreted the slide, they might have miscalculated the contribution by interstellar dust.
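
If the phrase sounds abstract, here is roughly what reading numbers off a published chart amounts to in practice. This is just an illustrative sketch with made-up pixel positions, not anything from an actual PLANCK slide: you note the pixel positions of two labeled axis ticks in the image, then use them to convert the pixel position of any data point back into data coordinates.

```python
# Illustrative only: turning pixel positions measured off a chart image
# back into data values, assuming linear axes. All numbers here are made up.

def pixel_to_data(pixel, pixel_ref, data_ref):
    """Linearly map a pixel coordinate to a data coordinate,
    using two reference points (e.g. two labeled axis ticks)."""
    (p0, p1), (d0, d1) = pixel_ref, data_ref
    return d0 + (pixel - p0) * (d1 - d0) / (p1 - p0)

# Suppose the x-axis ticks labeled 100 and 1000 sit at pixels 80 and 560,
# and the y-axis ticks labeled 0.0 and 1.0 sit at pixels 400 and 40.
x_of = lambda px: pixel_to_data(px, (80, 560), (100, 1000))
y_of = lambda py: pixel_to_data(py, (400, 40), (0.0, 1.0))

# A data point eyeballed at pixel (320, 220) then corresponds to:
print(x_of(320), y_of(220))  # 550.0 0.5
```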

If you’re like me, the whole idea of data scraping seems completely ludicrous. The idea of professional scientists sneaking information off of a presentation, rather than simply asking the other team for data like reasonable human beings, feels almost cartoonishly wrong-headed.

It’s a bit more understandable, though, when you think about the culture behind these big experiments. The PLANCK and BICEP2 teams are colleagues, but they are also competitors. There is an enormous amount of glory in finding evidence for something like cosmic inflation first, and an equally enormous amount of shame in screwing up and announcing something that turns out to be wrong. As such, these experiments are quite protective of their data. Not only might someone with early access to the data preempt them on an important discovery, they might rush to publish a conclusion that is wrong. That’s why most of these big experiments spend a large amount of time checking and re-checking the data, communicating amongst themselves and settling on an interpretation before they feel comfortable releasing it to the wider community. It’s why BICEP2 couldn’t just ask PLANCK for their data.

From BICEP2’s perspective, it was reasonable to expect that plots presented in a PLANCK talk would be accurate, digitally produced plots of real data. Unlike Fox News, scientists have an obligation to present their data in a way that isn’t misleading. And while relying on such a dubious source might seem like a bad idea, by all accounts that’s not all the BICEP2 team did: PLANCK’s data was just one of several dust models they used, kept in part because it agreed well with the other, non-“data-scraped” models.

It’s a shame that these experiments are so large and prestigious that they need to guard their data in such a potentially destructive way. My sub-field is generally much nicer about this sort of thing: the stakes are lower, and the groups are smaller and have less media attention, so we’re able to share data when we need to. In fact, my most recent paper got a significant boost from some data shared by folks at the Perimeter Institute.

Only time will tell whether the BICEP2 result wins out, or whether it was a fluke caused by caustic data-sharing practices. A number of other experiments are coming online within the next year, and one of them may confirm or refute what BICEP2 has shown.

How do I get where you are?

I’ve mentioned before that this blog will be undergoing a redesign this summer, transitioning from its current address to just 4gravitons.wordpress.com. One part of that redesign will be the introduction of new categories to help people search for content, along with new guides for some of those categories, like the existing ones for N=4 super Yang-Mills and the (2,0) theory. One planned category/guide will discuss careers in physics, with an eye towards explaining some of the often-unstated assumptions behind the process.

I’ve already posted on being a graduate research assistant and on what a postdoc is. I haven’t said much yet about the process leading up to becoming a graduate student. In this post, I’m going to give an overview of a career in theoretical physics, with a focus on what happens before you find an advisor. This is going to be inherently biased, based as it will be on my experiences. In particular, each country’s education system is different, so much of this will only be relevant for students in the US.

Let’s start at the beginning.

A very good place to start.

If you want to become a theoretical physicist, you’d better start by taking physics and math courses in high school. Unfortunately, this is where socioeconomic status has a big effect. Some schools have Advanced Placement or International Baccalaureate courses that let you get a head-start on college, many don’t. Some schools don’t even have physics courses at all anymore. My only advice here is to get what you can, when you can. If you can take a physics course, do it. If you can take calculus, do it. If you can take classes that will give you university credit, take them.

After high school, you go to college for a Bachelor’s degree in physics. Getting into college these days is some sort of ridiculous popularity contest, and I don’t pretend to be able to give advice on that. What I can say is that once you’re in college, coursework is important, but research is more important. Graduate schools will look at how well you did in your courses and how advanced those courses were, but they will pay special attention to who you get recommendations from, and whether you did research with them. Whether or not your college has anyone who you can research with, you should consider doing summer research somewhere interesting. With programs like the NSF’s Research Experience for Undergraduates (or REU) you can apply to get hooked up with interesting projects and mentors. In addition to looking good on an application to grad school, doing research helps boost your self-confidence: knowing that you can do something real really helps you start feeling like a scientist. Research also teaches you specialized skills much faster than coursework can: I’ve learned much more about programming from having to use it on projects than from any actual programming course.

That said, coursework is also useful. You want courses that will familiarize you with the basic tools of your field: physics courses on classical mechanics, quantum mechanics, and electromagnetism, and math courses on linear algebra and differential equations. You want to take a math course on group theory, but only if it’s taught by a physicist, as mathematicians focus on different aspects. More than any of that, though, you want to try to take at least a few graduate-level courses while you’re still in college.

That’s important, because grad school in theoretical physics is kind of a mess. You’ll be there for around five years in total (I was at the low end with four; some people take six or seven). However, you take most if not all of your courses in the first two years. In general, during that time you are paid as a Teaching Assistant. The school pays your tuition and a livable (if barely) wage, and in return you lead lab sections or grade papers. Teaching experience can be a positive thing, but you don’t want to keep doing it for too long, because the point of grad school isn’t teaching or courses, it’s research. Your goal is to find an advisor who is willing to pay you out of one of their (usually government) grants, so that you can transition from Teaching Assistant to Research Assistant. This is hard to do while you’re still taking courses: you won’t have time, and worse, you won’t know everything you need. Theoretical physics requires a lot of background, and much of it gets taught in grad school. Here at Stony Brook, you’d be taking graduate-level quantum mechanics, quantum field theory, and string theory. Until recently, each one of those was a one-year course, and the most logical way to take them was one after the other. Add that up, and that’s three years…kind of a problem when you want to start research after two. That’s why getting ahead in courses, however and whenever you can, is so important: not so much for the courses themselves, but so you can get past them and do research.

Research is what you do for the rest of your time in grad school. It’s what you do after you graduate, when you become a postdoc. It (along with teaching) is what you do as a professor, and what you are judged on when they decide whether or not you get tenure. Working through research is going to teach you more than any other experience you will have, so get as much of it as you can. And good luck!

Look what I made!

In a few weeks, I’ll be giving a talk for Stony Brook’s Graduate Awards Colloquium, to an audience of social science grad students and their parents.

One of the most useful tools when talking to people in other fields is a shared image. You want something from your field that they’ve seen, that they’re used to, that they’ll recognize. Building off of that kind of thing can be a great way to communicate.

If there’s one particle physics image that lots and lots of people have seen, it’s the Standard Model. Generally, it’s organized into charts like this:

[Image: Standard Model of Elementary Particles chart]

I thought that if people saw a chart like that, but for N=4 super Yang-Mills, it might make the theory seem a bit more familiar. N=4 super Yang-Mills has a spin 1 particle much like the Standard Model’s gluon, paired with four gluinos, spin 1/2 particles that are sort of (but not really) like quarks, and six scalars, spin 0 particles whose closest analogue in the Standard Model is the Higgs.

In N=4 super Yang-Mills, none of these particles have any mass, since if supersymmetry isn’t “broken” all particles have the same mass. So where mass is written in the Standard Model table, I can just put zero. The table I linked also gives the electric charge of each particle. That doesn’t really mean anything for N=4 super Yang-Mills. It isn’t a theory that tries to describe the real world, so there’s no direct equivalent to a real-world force like electromagnetism. Since everything in the theory has to have the same charge, again due to supersymmetry, I can just list all of their “electric charges” as zero.

Putting it all together, I get the diagram below. The theory has eleven particles in total, so it won’t fit into a nice neat square. Still, this should be more familiar than most of the ways I could present things.

[Image: particle content chart for N=4 super Yang-Mills]
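
If you prefer lists to charts, here is the same content written out, a quick sketch covering just the facts mentioned above (the numbered labels are my own, nothing official):

```python
# The particle content of N=4 super Yang-Mills, as described above:
# one gluon, four gluinos, and six scalars, all massless (unbroken
# supersymmetry keeps every mass equal) and all with zero charge.

particles = (
    [("gluon", 1)]
    + [(f"gluino {i}", 1 / 2) for i in range(1, 5)]
    + [(f"scalar {i}", 0) for i in range(1, 7)]
)

assert len(particles) == 11  # eleven particles in total

for name, spin in particles:
    print(f"{name:<9}  spin {spin:<4}  mass 0  charge 0")
```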

The PhD Defense

Last Wednesday I completed the final stage of my PhD, the Defense. I booted up a projector and, in a room filled with esteemed physicists, eager grad students, and a three-foot sub, I summarized the last two years of my work. A few questions later, people were shaking my hand and calling me “Doctor von Hippel”.

Now that I’m transitioning out of the grad student world, my blog will be transitioning too. I’ll be starting work as a Postdoctoral Fellow in the Fall at the Perimeter Institute for Theoretical Physics. Some time in between, probably in July, this blog will undergo a redesign, hopefully becoming easier to navigate. I’ll also be dropping the “and a grad student” from the title, switching to a new URL, 4gravitons.wordpress.com. Don’t worry, traffic from the old address will be forwarded, so infrequent readers won’t lose track. That said, if anyone with more experience has some advice about making the transition more seamless I’d love to hear it.

There are a lot of stereotypes about the PhD Defense, and mine broke almost all of them. My advisor hadn’t been directly involved in my work, my committee chair was one of the nicest, mellowest professors I’ve ever known, my experimentalist asked me a theoretical physics question, and my external member was Nima friggin’ Arkani-Hamed.

That said, I’ve also seen several other PhD Defenses, and I have to say that the stereotypes are usually right on the money. And since I’m on a bit of a list-based comedy kick recently, let me introduce you to the four members of your PhD committee:

First, of course, is your advisor. If you two collaborate closely, you may find yourself presenting material that your advisor had a hand in. Naturally, the other committee members will ask questions about this material, and naturally you will answer them. Naturally, those answers will not be how your advisor would have explained it, so naturally your advisor will start explaining it themselves. (After all, it’s their work that’s being questioned!) Manage things well and the whole defense will be an argument between your advisor and the other committee members, and you won’t have to say anything at all!

Second is your committee chair. This is someone from your field, chosen for their general eminence and chair-ish-ness. They’ve done a lot of these before, and in their mind they’ve developed a special bond with the students, a bond forged by questions. See, if you have a typical committee chair, they will ask you the toughest, most nitpicky, most downright irrelevant lines of questions possible. The chair’s goal isn’t to keep things moving, it’s to make sure that you took their class and remember everything from it, no matter how much time that takes away from discussing your actual dissertation.

Third you must face your experimentalist. According to the ancient ideals of academia (ideals somehow unbreakably important for grad students and largely irrelevant for top-level university administrators), a dissertation must be judged not only by the yes-men of your own sub-field, but also by someone from the rest of your department. For a theoretical physicist, that means bringing in an experimental physicist. You may try to make things accessible to this person, but eventually you have to actually start talking about your work. This is healthy, as it will allow them much-needed sleep. Once they awake, they will bless you with a question that represents the most tenuous link they can draw between their own work and yours, generally asking after the mass of some subatomic particle. Once you have demonstrated your ignorance in some embarrassing fashion the experimentalist may return to sleep.

Finally, the defense brings in a special individual, the external member. Not only must you prove your worth to an experimentalist, but to someone from outside of your department altogether! For the lucky, this means someone who does similar work at a nearby university. For the terminally rural, this instead means finding the closest department and bringing in someone who will at least recognize some of the words in your talk. For us, this generally means a mathematician. Like the experimentalist, they will favor you with bewildered looks or snores, depending on temperament. Unlike the experimentalist, they are under no illusion that anything they do is relevant to anything you do, so their questions will be mercifully brief.

Grilled by these four, you then leave the room, allowing them to talk about the weather or their kids or something before they ask you back in to tell you that, of course, you’ve got your PhD. Because after all that, anything else would just be rude.

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of different sorts of particles out there, at least if you judge by science fiction. Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects, you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs Boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word “super” in front of something almost always works. If that fails, it’s time to go for more specific conventions: to find the partner of an existing particle, if the new particle is a boson, just add “s-” (for “super”, or apparently “scalar”) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add “-ino” to the end, getting something like a gluino if you start with a gluon. If you’ve heard of neutrinos, you may know that neutrino means “little neutral one”. You might perfectly rationally expect that gluino means “little gluon”, if you had any belief that physicists name things logically. We don’t. A gluino is called a gluino because it’s a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what “neutrino” actually means.

Pictured: the superpartner of Nidoran?
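
Silly as it is, the convention is mechanical enough to write down. Here’s a toy sketch of the rule as just described; the function is my own invention, not anything physicists actually use:

```python
# Toy version of the superpartner naming rule described above: bosonic
# partners get an "s-" prefix, fermionic partners get an "-ino" suffix
# (dropping a trailing "on" first, so "gluon" becomes "gluino").

def superpartner_name(particle, partner_is_boson):
    if partner_is_boson:
        return "s" + particle
    stem = particle[:-2] if particle.endswith("on") else particle
    return stem + "ino"

print(superpartner_name("electron", partner_is_boson=True))   # selectron
print(superpartner_name("tau", partner_is_boson=True))        # stau
print(superpartner_name("gluon", partner_is_boson=False))     # gluino
```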

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probabilities using brackets of two quantum states, ⟨a|b⟩. What if you need to separate the two states, ⟨a| and |b⟩? Then you’ve got a “bra” and a “ket”!

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for “Three quarks for Muster Mark!” While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent and b) this isn’t exactly the most important or memorable line in the book. So Gell-Mann wasn’t so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The final, worst category, though, doesn’t even have good stories going for it. These are the names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.

A Question of Audience

I’ve been thinking a bit about science communication recently.

One of the most important parts of communicating science (or indeed, communicating anything) is knowing your audience. Much of the time, if a piece is flawed, it’s flawed because the author didn’t have a clear idea of who they were talking to.

A persistent worry among people who communicate science to the public is that we’re really just talking to ourselves. If all the people praising you for your clear language are scientists, then maybe it’s time to take a step back and think about whether you’re actually being understood by anyone else.

This blog’s goal has always been to communicate science to the general public, and most of my posts are written with as little background assumed as possible. That said, I sometimes wonder whether that’s actually the audience I’m reaching.

WordPress has a handy feature that lets me track which links people click on to get to my blog, which gives me a rough way to gauge my audience.

When a new post goes up, I get around ten to twenty clicks from Facebook. Those are people I know, which for the most part these days means physicists. I get a couple clicks from Twitter, where my followers are a mix of young scientists, science journalists, and amateurs interested in science. On WordPress, my followers are also a mix of specialists and enthusiasts. Most interesting, to me at least, are the visitors who get to my blog via Google searches. Naturally, they come in regardless of whether I have a new post or not, adding an extra twenty-five or so views every day. Judging by the sites (google.fr, google.ca), these people come from all over the world, and judging by their queries they run from physics PhD students to people with no physics knowledge whatsoever.

Overall then, I think I’m doing a pretty good job getting the word out. As my site’s Google rankings improve, more non-physicists will read what I have to say. It’s a diverse audience, but I think I’m up to the challenge.

Numerics, or, Why can’t you just tell the computer to do it?

When most people think of math, they think of the math they did in school: repeated arithmetic until your brain goes numb, followed by basic algebra and trig. You weren’t allowed to use calculators on most tests for the simple reason that almost everything you did could be done by a calculator in a fraction of the time.

Real math isn’t like that. Mathematicians handle proofs and abstract concepts, definitions and constructions and functions and generally not a single actual number in sight. That much, at least, shouldn’t be surprising.

What might be surprising is that even tasks which seem very much like things computers could do easily take a fair bit of human ingenuity.

In physics, I do a lot of integrals. For those of you unfamiliar with calculus, integrals can be thought of as the area between a curve and the x-axis.

Areas seem like the sort of thing it would be easy for a computer to find. Chop the space into little rectangles, add up all the rectangles under the curve, and if your rectangles are small enough you should get the right answer. Broadly, this is the method of numerical integration. Since computers can do billions of calculations per second, you can chop things up into billions of rectangles and get as close as you’d like, right?

Heck, ten is a lot. Can we just do ten?
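
For the curious, here’s what that naive recipe looks like in code, a minimal sketch rather than anything a real numerical library would do:

```python
# Naive numerical integration: chop [a, b] into n rectangles, evaluate the
# curve at the middle of each one, and add up the rectangle areas.

def rectangle_integral(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# A well-behaved curve: the area under x**2 from 0 to 1 is exactly 1/3.
print(rectangle_integral(lambda x: x**2, 0, 1, 10))         # ~0.3325
print(rectangle_integral(lambda x: x**2, 0, 1, 1_000_000))  # ~0.3333333
```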

For some curves, this works fine. For others, though…

Ten might not be enough for this one.

See how the left side of that plot goes off the chart? That curve goes to infinity. No matter how many rectangles you put on that side, you still won’t have any that are infinitely tall, so you’ll still miss that part of the curve.

Surprisingly enough, the area under this curve isn’t infinite. Do the integral correctly, and you get a result of 2. Set a computer to calculate this integral via the sort of naïve numerical integration discussed above though, and you’ll never find that answer. You need smarter methods: smart humans doing the math, or smart humans programming the computer.
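
To see the problem concretely, take 1/√x on the interval from 0 to 1 as a stand-in for the curve in the plot (the plot isn’t labeled, but this curve also has an area of exactly 2). The rectangles converge agonizingly slowly, while a little human cleverness, here a change of variables, makes the problem trivial:

```python
import math

# Stand-in for the curve above: f(x) = 1/sqrt(x) on (0, 1]. Its true area
# is 2, but the curve blows up at x = 0, and no finite stack of rectangles
# ever captures that spike.

def rectangle_integral(f, a, b, n):   # same naive recipe as before
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

f = lambda x: 1 / math.sqrt(x)
for n in (10, 1_000, 100_000):
    print(n, rectangle_integral(f, 0, 1, n))   # ~1.81, ~1.98, ~1.998: creeping toward 2

# The smart-human fix: substitute x = u**2, so dx = 2u du and the integrand
# 1/sqrt(x) * 2u becomes the constant 2 on [0, 1]. Now ten rectangles suffice.
print(rectangle_integral(lambda u: 2.0, 0, 1, 10))   # exactly 2.0
```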

Another way this can come up is if you’re adding up two parts of something that go to infinity in opposite directions. Try to integrate each part by itself and you’ll be stuck.

[Plot: the first piece, blowing up to infinity in one direction]

[Plot: the second piece, blowing up to infinity in the other direction]

But add them together, and you get something quite a bit more tractable.

Yeah, definitely a ten-rectangle job.
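
Since the plots aren’t labeled, here’s a made-up pair with the same behavior: each piece blows up at x = 0, but their sum is perfectly tame.

```python
import math

# A made-up pair of curves (not the ones plotted above): each piece goes to
# infinity at x = 0, in opposite directions, but their sum simplifies to the
# harmless 1 / (1 + x).

f1 = lambda x: 1 / x                 # -> +infinity as x -> 0
f2 = lambda x: -1 / (x * (1 + x))    # -> -infinity as x -> 0
combined = lambda x: f1(x) + f2(x)   # algebraically equal to 1 / (1 + x)

def rectangle_integral(f, a, b, n):  # same naive recipe as before
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# Either piece on its own never settles down: its area really is infinite.
for n in (100, 10_000, 1_000_000):
    print(n, rectangle_integral(f1, 0, 1, n))   # keeps growing with n

# Added together first, it really is a ten-rectangle job:
print(rectangle_integral(combined, 0, 1, 10))   # ~0.693, i.e. ln(2)
print(math.log(2))
```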

Numerical integration, and computers in general, are a very important tool in a scientist’s arsenal. But in order to use them, we have to be smart, and know what we’re doing. Knowing how to use our tools right can take almost as much expertise and care as working without tools.

So no, I can’t just tell the computer to do it.