Andrew Gelman has been wondering how much time he should spend criticizing crappy research, and so have I. He wrote his post after a discussion with Jeff Leek of Simply Statistics about replication and criticism. Harsh criticism of preliminary studies could discourage new research, which would definitely be a bad thing, and Leek notes that shaming people whose research gets retracted is probably not going to help science be self-correcting.
In an earlier post on the topic, Gelman says that he is not against the publication of these early, possibly incorrect results. But if a bad study, especially about health, gets a lot of publicity, it could be harmful to people who read about it and take it seriously. In the later post, Gelman writes,
“A key decision point is what to do when we encounter bad research that gets publicity. Should we hype it up (the “Psychological Science” strategy), slam it (which is often what I do), ignore it (Jeff’s suggestion), or do further research to contextualize it (as Dan Kahan sometimes does)?”
More broadly, I’ve been wondering how much time we should spend criticizing bad science and math journalism, or bad behavior in general. If misinformation is reported far and wide, it might be important to do some debunking, but in many cases, responding to bad work just gives it more attention. (For example, I don’t think we should give the Westboro Baptist Church more media coverage.)
Recently, there’s been a little kerfuffle over the fact that physicist Lawrence Krauss, among others, appears in a geocentric documentary made by a Holocaust denier. Krauss says that he doesn’t know how he ended up in the documentary and wants us to just ignore it. “Many people have suggested I litigate,” he writes. “But this approach seems to me to be completely wrong because it would elevate the profile of something that shouldn’t even rise to the level of popular discussion.” And in this case, I agree. The articles criticizing the documentary have given it tons of free publicity. Without them, it would have disappeared into the ether.
Right now I’m particularly fed up with news stories about how ignorant people are (as Cathy O’Neil writes, these are the stories about how many people think the sun goes around the earth), and I want to debunk them. After seeing a couple of articles with a specific claim I found a little hard to believe, I got hold of the poll data. And surprise, surprise: the articles make people out to be more ignorant than the poll suggests (and the poll itself looks less like a representative sample of any society than like a self-selected group of people who responded to an online poll that was open for only a few hours). Misleading information about this poll has been published in a few places, but it hasn’t gone viral. If I wrote about it, more people would see the original misleading claims and might remember them instead of my correction. I would get some satisfaction from criticizing the other articles, but I don’t think it would help anything. Besides, I like to make people happy with math, and writing about something true and interesting is probably a better way to do that than taking something else down.
I really liked the end of Gelman’s post:
“A few months ago after I published an article criticizing some low-quality published research, I received the following email:
‘There are two kinds of people in science: bumblers and pointers. Bumblers are the people who get up every morning and make mistakes, trying to find truth but mainly tripping over their own feet, occasionally getting it right but typically getting it wrong. Pointers are the people who stand on the sidelines, point at them, and say “You bumbled, you bumbled.” These are our only choices in life.’
The sad thing is, this email came from a psychology professor! Pretty sad to think that he thought those were our two choices in life. I hope he doesn’t teach this to his students. I like to do both, indeed at the same time: When I do research (“bumble”), I aim criticism at myself, poking holes in everything I do (“pointing”). And when I criticize (“pointing”), I do so in the spirit of trying to find truth (“bumbling”).”
I’m going to take the easy way out and agree that we need balance. Personally, I don’t think I’m suited for doing a lot of pointing. Occasionally I write about something I think is bad and why, but mostly I’m going to keep writing about stuff I like and hope that my good stuff distracts from other crappy stuff. (As John D. Cook writes, quality over quantity. I found that post via another post of Gelman’s.) But there are other people who do a great job at criticizing the crappy stuff. This StatsChat post from Thomas Lumley about whether Generation Y spends a lot of money on fancy food cracked me up.
Of course, one reason I don’t do as much pointing is that I write more about math and less about statistics and how it’s used in other sciences. I think there’s more need and opportunity for pointing in those fields. When done well, I think pointing out bad statistical practice and the bad journalism it sometimes spawns might help journalists and readers approach scientific studies with the appropriate amount of skepticism and ask the right questions about them. A girl can dream.
Another good source for calling out bad use of statistics is the “Forsooth” section of Chance News: http://test.causeweb.org/wiki/chance/index.php/Main_Page
I agree that we need balance. There may be a place for public shaming, especially if bad data analysis is getting too much credulous media attention, but it’s not the kindest approach and could backfire, as Krauss suggests. I have had private communications with reporters who seemed to be misinterpreting data; I don’t know whether it does any good. (My dad was the master of quiet complaints to the maître d’. “They can’t improve if no one tells them what they’re doing wrong.”)
But since you’ve illustrated your post with a 3-D pie chart, let me ask this: how should one respond, if at all, to an NSF officer’s use of a 3-D pie chart in a presentation at a mathematics conference? Wouldn’t the officer want to know that at least some in the room are reflexively cringing?
The Gelman piece has four simple key words for me: “bad ideas have consequences”!
I really don’t understand the notion of “ignoring” (not calling out) bad research… whether it be sloppy methodology, misused statistics, weak conclusions, or outright fraud, bad research (which taxpayers usually fund) ought to be critiqued, and the quicker the better (also, researchers should have thicker skins! 😉).
There’s tremendous pressure in academia simply to do some sort of research, but far less pressure to actually do good research or to replicate research (epidemiology, biomedicine, and psychology are among the many areas needing greater scrutiny). Left unchecked, bad research has a way of growing and crowding out good research.
The immediacy and volume of response to published studies that the Web now makes possible (instead of waiting six months or more for a letter to the editor to appear in some journal) are a huge positive, even with some negatives that come with such rapid feedback.
When people say they love science, they really mean they love GOOD science… “preliminary” science that adequately states its flaws and inadequacies, and calls for more research to be done, is fine, but bad science (that overstates its case) shouldn’t be tolerated… or ignored.