Note: The version below is altered from the original, which was near-gibberish in a few spots. Why? Because I mistakenly posted a pre-edit version that contained the raw 'transcription' from voice-recognition software I've been trying out. (I suppose it could have been a lot worse.)
Here it is, more or less as I meant it to appear:
Kevin Dunbar is a researcher who studies how scientists study things -- how they fail and succeed. In the early 1990s, he began an unprecedented research project: observing four biochemistry labs at Stanford University. Philosophers have long theorized about how science happens, but Dunbar wanted to get beyond theory. He wasn't satisfied with abstract models of the scientific method -- that seven-step process we teach schoolkids before the science fair -- or the dogmatic faith scientists place in logic and objectivity. Dunbar knew that scientists often don't think the way the textbooks say they are supposed to. He suspected that all those philosophers of science -- from Aristotle to Karl Popper -- had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, "Philosophy of science is about as useful to scientists as ornithology is to birds.") So Dunbar decided to launch an "in vivo" investigation, attempting to learn from the messiness of real experiments.
He ended up spending the next year staring at postdocs and test tubes: The researchers were his flock, and he was the ornithologist. Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. He spent four years analyzing the data. "I'm not sure I appreciated what I was getting myself into," Dunbar says. "I asked for complete access, and I got it. But there was just so much to keep track of."
Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) "The scientists had these elaborate theories about what was supposed to happen," Dunbar says. "But the results kept contradicting their theories. It wasn't uncommon for someone to spend a month on a project and then just discard all their data because the data didn't make sense." Perhaps they hoped to see a specific protein but it wasn't there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.
This Wired story from Jonah Lehrer examines something that too often goes unexamined: The practice of science is often quite messy. This puts it on par with many other serious endeavors: You plan your work, then try to work your plan. But no matter how sound your plan, unexpected events will often force you off course -- and sometimes to different destinations altogether.
I think this is why writers sometimes get upset when they hear non-writers say something like, "Oh yes, I've been meaning to write a book someday." As if writing a book requires just a bit of time and a couple of ideas. Paul Theroux, I think it was, describes in one of his books losing patience with a doctor he met at a party who told him, "Oh yes, I've been meaning to write a novel one of these days when I have the time." If I remember the passage correctly -- I read it a couple of decades ago -- Theroux replied, "I've been meaning to do a couple of lobectomies one of these days when I get the time."
Do check out the Wired piece. Along with Jonah's deft touch, you get a nice framing anecdote about interstellar noise and an introduction to Dunbar, who runs the -- gotta love this lab name -- Laboratory for Complex Cognition and Scientific Reasoning.
Comments
David, it appears you wrote this in a hurry. However, this doesn't seem to make any sense at all: " Make the best then most what you want . . ."
What did you really mean? It would be helpful to know.
Posted by: P. Jennings | December 21, 2009 6:00 PM
Yep. I spend a lot of time with my trainees helping them to appreciate that it is much more important to their long-term success to know when to *stop* pursuing a particular path than it is to know when to start.
Posted by: Comrade PhysioProf | December 21, 2009 9:15 PM
Dunbar's comparison of the two teams' lab meetings also bolsters the argument for interdisciplinary research. The team with the more diverse mix of specialties solved the problem much more quickly. That's interesting in itself.
Posted by: Dan Ferber | December 21, 2009 9:59 PM
There are two different avenues that most scientists pursue at one time or another. Most commonly, scientists start with a question that fits within a theoretical framework. They then posit alternate hypotheses for the phenomena they are studying that make explicit, falsifiable predictions, which are tested. The other avenue starts with an observation that provokes a question, which is then put into a theoretical framework. The problem with the first approach is that the theory might be incorrect and constrain or mislead one’s thinking. The problem with the other is that observations are nothing without a theoretical framework.
As for inconsistent results that don’t correspond to the various predictions of the alternate hypotheses, this is where science gets fun. Rejected predictions falsify the hypotheses, and in such a situation the scientist learns something. It might also mean that the underlying theory is incorrect and it is time to start over. This can be hair-raising, but also potentially very productive.
Posted by: Maurie Beck | December 30, 2009 9:29 PM