Should we Trust Numbers that Much?

Taken from http://tinyurl.com/cv2dcxk

Nowadays, if one reads an article in a newspaper or a magazine, it is very likely there will be some mention of statistics: the percentage of Americans who favor a candidate, of students who claim to be satisfied with their investment in a college degree, or of many other things one may be surprised to learn have been the subject of a published study. For example, take a look at this paragraph from “Is Mitt Romney Damaged Goods after a Brutal Primary Season.”

Romney’s negatives reached a record high last month with 50 percent of all voters, and 52 percent of registered voters, in a Washington Post/ABC News poll that reflected a dim view of him as a prospective president. A Gallup “Swing States” survey shows President Obama leading Romney 51 to 42 percent, and among women 52 to 35 percent—a huge gender gap. (The Daily Beast)

 

What about this one, quoted in Josh Fishman’s article in the Chronicle of Higher Education:

More than 60 percent of college undergraduates, and more than 40 percent of graduate students, admit to cheating in some way on their written work, according to a national survey by Clemson University’s International Center for Academic Integrity.

Here is another, rather long, example from Psychology Today, which may scare or impress someone with modest math training:

Half of the respondents simply estimated the probability that Jack (Dick) was an engineer. This part of the experiment was set up to replicate Kahneman & Tversky’s (1973) original finding of base rate neglect. The replication was only partially successful. Jack was judged to be more likely to be an engineer when the base rate probability of being an engineer was high (M = 77%) than when it was low (M = 56%), t(58) = 2.25, p = .03. However, Dick was not judged to be significantly more likely to be an engineer given a high (M = 53%) instead of a low (M = 44%) base rate, t(58) = 1.64. Note that the statistical demonstration of complete base rate neglect in this paradigm requires a failure to reject a null hypothesis. Our data failed to replicate this failure in the case of Jack. Some reviewers of this field of research have concluded that base rate neglect is rarely as complete as it was in Kahneman & Tversky’s original demonstration (Koehler, 1996).

Given only partial base rate neglect, the novel idea of reducing this neglect with an anchoring manipulation had only limited room to show itself. To test this idea, we asked the other half of the participants to first respond to the question “Would you say that the probability that Jack (Dick) is an engineer is greater than 70% (lower than 30%)?” Virtually all responses were “no.” That was as it should be for most conditions because no rational or heuristic argument could be made for estimates lying beyond the base rates. The exception to this rule is Jack, who, looking like an engineer, should (and did) receive an estimate greater than 70% when the base rate was 70%. In this exceptional case, anchoring had no effect. Jack was judged to be an engineer with an average chance of 77 and 78%, respectively with and without the anchor. Conversely, when participants first rejected the anchor of 30% as implausible they judged Jack to be less likely to be an engineer (M = 56%) than when they did not respond to the anchor first (M = 66%), t(58) = 4.75. This is good support for the anchoring-can-reduce-base-rate-neglect hypothesis. For judgments of Jack, the effect size of the difference more than doubled from the replication to the anchoring condition, d = .58 to 1.25. Anchoring also improved judgments of Dick. Without anchors, there was base rate neglect, as indicated by a nonsignificant difference between the two estimates. With anchors, the difference between (M = 54%) and (M = 39%) was significant, t(58) = 3.52, p = .001. The effect size doubled from d = .43 to .91.
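For readers wondering what is behind notation like t(58) = 2.25, p = .03, here is a minimal sketch of how such a two-sample t statistic is computed. The numbers below are made up for illustration; they are not the study’s data, and the cutoff in the comment is the standard two-tailed .05 critical value for the toy example’s degrees of freedom, not the study’s.

```python
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled two-sample t statistic (equal-variance form),
    the kind summarized as t(df) in reports like the one above."""
    na, nb = len(a), len(b)
    df = na + nb - 2  # degrees of freedom
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / df
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, df

# Hypothetical probability estimates (%) from two small groups.
high_base_rate = [80, 75, 70, 85, 78, 74]
low_base_rate = [60, 55, 58, 65, 50, 62]
t, df = two_sample_t(high_base_rate, low_base_rate)
print(f"t({df}) = {t:.2f}")  # reject at .05 if |t| > 2.23 for df = 10
```

A reported p-value is then the probability, under the null hypothesis of equal group means, of a t statistic at least this extreme; the larger |t| is relative to the critical value, the smaller p becomes.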
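The “base rate neglect” the passage keeps invoking has a precise normative benchmark: Bayes’ theorem says the base rate should shift the judged probability, even for the same description. A small sketch, with an assumed likelihood ratio of 4 (i.e., the description is four times as likely for an engineer as for a lawyer; this number is illustrative, not from the study):

```python
def posterior_engineer(base_rate, likelihood_ratio):
    """P(engineer | description) by Bayes' theorem in odds form,
    given the base rate of engineers and the likelihood ratio
    P(description | engineer) / P(description | lawyer)."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Same description, two base rates: a rational judge moves with the base rate.
lr = 4.0  # assumed diagnosticity of a "Jack-like" description
for base_rate in (0.30, 0.70):
    p = posterior_engineer(base_rate, lr)
    print(f"base rate {base_rate:.0%}: P(engineer | description) = {p:.0%}")
```

Complete base rate neglect would mean giving the same answer in both conditions; the experiment above asks how far actual judgments fall between that and the Bayesian answer.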

One may then ask: what explains this trend? The assumption behind the overuse of numbers seems to be that they are ‘objective’; they transcend any bias on the authors’ part, so whatever is asserted with them must be true. Another implication may be that no one wants to take responsibility for an error in their research when everyone can blame it on the numbers.

This is a delicate, if not dangerous, situation: many of these arguments are addressed to an audience not necessarily trained in the mathematics needed to evaluate the claims made from these numbers; unfortunately, the people who could evaluate some of these claims are a minority. This leads one to believe that major decisions may be made on false assumptions, whether intentional or not, or on plain misinformation.

In this context, should the use of mathematics be more closely monitored? Should mathematicians be more visible wherever quantitative methods are used, to guarantee their validity and justly assess their limitations? Would that be considered an invasion by mathematicians? Better yet, could this apparent misuse of mathematics motivate us to teach students mathematics in such a way that they become better judges of these numbers, rather than teaching them rudimentary rules they hardly understand, let alone can use later for themselves?

 

This entry was posted in General, Math in Pop Culture, Mathematics in Society, Teaching.