Have you ever been a participant in a news event? If so, you may have been taken aback by the differences between what you experienced and what was reported. The reporter may have been trying to be accurate, but perhaps due to the pressure of deadlines didn’t have time to check the facts. Or the report may have demonstrated an obvious bias.
Firsthand knowledge of inaccurate reporting tends to make one a bit leery of taking any report at face value. Yet, if we are going to know about most newsworthy events and developments, we are dependent on others to tell us. As we assimilate news reports, therefore, it is important that we consider the various ways inaccuracies, distortions, or even false ideas can be conveyed—whether intentionally or unintentionally.
To that end, we review three books that address various aspects of the problem. While some readers might challenge some of the authors’ arguments, we will not debate the issues here. Instead, we will simply focus on a few of the authors’ main concepts, not as the final word on the subject but as points worth considering as we are presented with the day’s news.
Stumbling Over Statistics
Joel Best (Damned Lies and Statistics) points out numerous potential problems associated with the production of statistical data. His purpose is to help consumers of news to be better prepared to evaluate such information.
Best explains, “During the nineteenth century . . . statistics—numeric statements about social life—became an authoritative way to describe social problems. There was growing respect for science, and statistics offered a way to bring the authority of science to debates about social policy.” However, as he goes on to state, from the beginning statistics “have had two purposes, one public, the other often hidden. Their public purpose is to give an accurate, true description of society. But people also use statistics to support particular views about social problems. Numbers are created and repeated because they supply ammunition for political struggles. . . . It is naive simply to accept numbers as accurate, without examining who is using them and why.”
Unfortunately, “the public tends to . . . treat statistics as facts,” whereas the reality is often something else. Best notes that “facts always can be questioned, but they hold up under questioning. . . . Although we sometimes treat social statistics as straightforward, hard facts, we ought to ask how those numbers are created.” He then offers some tips to help readers determine the validity of social statistics.
For a new or previously unrecognized problem, says Best, there is most likely no available data. In such cases, a statistic may be just a guess with no objective facts to back it up. In addition, social problems such as crimes have a “dark figure”—the number of cases that are not reported. Trying to determine the dark figure may also devolve into guesswork. Unless clearly labeled as a guess, such a figure may be deceptive, and even if it is so labeled, its status can easily be transmuted from guess to fact as it is circulated, accepted and then repeated by others.
In considering the value of any statistic, the reader also needs to understand the definition on which it is based. The broader the definition, the more incidents will fall within its parameters. Best notes that “activists often couple big statistics based on broad definitions with compelling examples of the most serious cases. For example, claims about child abuse might feature the case of a murdered child as a typical example, yet offer a statistical estimate that includes millions of less serious instances of abuse and neglect.”
A popular source of statistics is surveys, which are often used to assess the public’s opinion or experience. But in order to understand the results of a survey, the reader needs to be aware of how the questions were worded, the order in which they were asked, and how the responses were interpreted. As Best comments, however, “the media report statistics (‘Research shows that . . .’) without explaining how the study measured the social problem.” He cites the example of one survey that concluded that a quarter of all female college students had been raped when, in fact, “nearly three quarters of the respondents identified as rape victims indicated [elsewhere in the same survey] they did not consider the incident a rape.”
To make matters more complicated, most statistics are based on sampling because it is often impractical or impossible to count every occurrence of some condition. There are two potential flaws in sampling: the size of the sample and the degree to which it is representative of the population as a whole. As noted by the author, “selecting a representative sample is a key challenge in social science. . . . Few samples are random.”
Further, the author brings out two common problems related to sampling. One is the use of extreme examples as if they were typical of the problem as a whole. He cites the use of a murdered runaway as not being representative of the larger problem of teenagers who run away from home. The flip side of this error is attributing a danger to the entire population that only affects a small pocket of individuals: “In general, social problems are patterned; people do not run away—or commit crimes, become homeless, or become infected with HIV—at random. But people promoting social problems often find it advantageous to gloss over these patterns, to imply that everyone shares the same risks and therefore we all have the same, substantial stake in solving the social problem.”
Best identifies mutant statistics, distorted versions of the original figures, as yet another problem. Numbers, even if they are accurate, can be misunderstood and misinterpreted.
“The worst—that is, the most inaccurate—social statistic ever,” according to Best, was in a graduate student’s dissertation prospectus. Quoting a 1995 issue of a social science journal, the student wrote: “Every year since 1950, the number of American children gunned down has doubled.” A quick calculation uncovers the logical impossibility of this assertion, since if only one child was gunned down in 1950 and that number doubled every year from 1950 to 1995, more than 35 trillion children would have been gunned down in 1995.
The basis of the flawed statistic was a 1994 statement by the Children’s Defense Fund, which said that the number of American children killed each year by guns had doubled since 1950. Obviously the graduate student’s quote reflected the mutation of the original statement from a simple doubling over a 44-year period to an impossible doubling each year.
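The arithmetic behind both versions of the claim is easy to check. The sketch below (an illustration, not from Best’s book) contrasts the mutant statistic of annual doubling with the original statement of a single doubling over the 44-year period:

```python
# Mutant claim: the number of children gunned down doubles every year.
# Even starting from a single case in 1950, the 45 doublings through 1995 give:
mutant_total_1995 = 1 * 2 ** (1995 - 1950)
print(f"{mutant_total_1995:,}")  # 35,184,372,088,832 -- over 35 trillion

# Original claim: the annual count doubled once between 1950 and 1994.
# The implied average annual growth rate is modest by comparison:
growth_factor = 2 ** (1 / (1994 - 1950))
print(f"{(growth_factor - 1) * 100:.2f}% per year")  # about 1.59% per year
```

The comparison makes the mutation vivid: the original statement implies growth of under two percent a year, while the garbled version demands a number larger than every human who has ever lived.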
Another aspect of this problem arises when bad statistics are used to generate other inaccurate figures.
Best recommends a critical approach to evaluating statistics—not negative or hostile but thoughtful and analytical. This is not an easy path, because it’s not just a matter of developing a checklist of statistical errors as a guide. “It is probably impossible to produce a complete list of statistical flaws—no matter how long the list, there will be other possible problems that could affect statistics. The goal is not to memorize a list, but to develop a thoughtful approach . . . to ask questions about numbers.”
Behind the Headlines
Getting a clear picture of scientific research and developments from various news accounts is far more complicated than just understanding the potential for misuse or abuse of statistics. As David Murray, Joel Schwartz and S. Robert Lichter (It Ain’t Necessarily So) point out, it is inevitable that those who present the news affect what is reported.
“The news clearly has a relationship to the truth,” they point out, “but it is never simply equivalent to it. . . . That which is scientifically true is often complex and hedged with qualifications. Consequently, scientific research may not make for satisfying ‘news’: it may not attract public attention. . . . [The] pathway from institution to headline entails a regular process of evaluation and decision making that is very often opaque.” This leads them to ask, “How accurate is the transmission of information, and how much may we trust in the conclusions?”
One of the first factors to come into play is the decision about what to report and what not to. Amanda Bennett, Atlanta bureau chief for the Wall Street Journal, argues that only stories conforming to a governing “template” tend to appear—the template being “what editors and other people who are not on the ground have decided is The Story.” This approach, the authors maintain, tends to eliminate findings that contradict whatever the accepted template is at a given time. This can result in a one-sided view of topics. Using a wide range of sources can mitigate the problem.
Echoing Best’s point about surveys, Murray, Schwartz and Lichter caution readers about the use of polls. They stress the need to evaluate the results of polls, noting that “the answers that pollsters receive (and newspapers report) greatly depend on precisely what the pollsters ask and how they ask it. . . . Newspapers are interested in telling us the answers, the findings of polls and the conclusions . . . that interpret those findings. But the answers are sometimes determined (and always influenced) by the questions—the exact wording of the questions posed by the interviewers, the order in which the questions are asked, and in some cases even the way in which they are asked (in person or via the telephone). For this reason, the answers are seldom very meaningful unless you also know about the questions that elicited them.”
Reliance on polls produced by organizations to promote their views is also cautioned against, “since the questions may well have been rigged to reach the organization’s desired conclusion.” Going even further, these authors note, “Ideally, you also need to know what the respondents were primed to think about before the question was asked.”
The Science of Journalism
The competitive pressure to produce stories that grab and hold an audience plays into the problem of getting a clear picture on scientific issues. According to Murray, Schwartz and Lichter, “the interest in drama means that the qualifications, caveats, and uncertainties that are the bread and butter of scientific research can instead be treated as the roughage of journalistic accounts.”
A particularly troubling example of this tendency comes from a researcher in the field of global warming. According to the authors, climatologist Stephen H. Schneider, a believer in the dangers of global warming, notes: “On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth and nothing but—which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broad-based support, to capture the public’s imagination. That, of course, means getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.”
The controversy over global warming is cited as an example of the problems inherent in the media’s presentation of information based on scientific research: “The most striking fact is that these cautious, reserved statements concerning the limits of climate models simply never made the news. The public policy stakes in climate science are enormous. Caught as we are between industry pressure and advocates’ alarms, the gap between scientific uncertainty and media conviction is increasingly noteworthy. Yet one suspects that journalists who are aware of the caveats and demurrals in the scientific literature are shy about publishing them, for fear that they, too, will be seen as interested parties advancing an anti-environmental agenda, or worse, as gullible naifs taken in by industry. There may be a subtle form of self-censorship among science journalists.”
Murray, Schwartz and Lichter recognize the important role journalism plays in informing the public. At the same time, they are compelled to express a deep concern about the effect journalists and their organizations can have over what is reported: “The simple answer to the question of what goes wrong in the reporting of research is ‘altogether too much.’ Error, neglect, ideology, interested motivation, malfeasance, human frailty, and the complex demands of a competitive environment have all been invoked and allotted their proportion of blame. . . . We have also seen the impact of press release journalism, in which the media rely too heavily on sketchy ‘executive summaries’ of complex findings, often generated by policy advocates. . . . Additionally, we have witnessed the imposition on the facts, albeit most often unconsciously, of a variety of narrative templates or scenarios. These frameworks operate as filters and as prisms: they preselect relevant facts and distort the natural shape of facts as they pass through the journalistic lens, while also arranging the elements into satisfying storylines that fit preconceptions or past narratives.”
A View From Inside
Bernard Goldberg (Bias) was a CBS news correspondent for 28 years. He says he tried for years, to no avail, to point out to his bosses that their coverage of the news had a distinct liberal bias—an unusual view for someone who “had never voted for a Republican president in [his] entire life.” Eventually goaded by an acquaintance about a report with a particularly egregious bias, he wrote an op-ed piece for the Wall Street Journal in which he stated, “The old argument that the networks and other ‘media elites’ have a liberal bias is so blatantly true that it is hardly worth discussing anymore. No, we don’t sit around in dark corners and plan strategies on how we’re going to slant the news. We don’t have to. It comes naturally to most reporters.”
Goldberg asserts: “Too many news people, especially the ones at worldwide headquarters in New York, where all the big decisions are made, basically talk to other people just like themselves.” He invites his readers to “think back to that famous observation by the New Yorker’s otherwise brilliant film critic Pauline Kael, who in 1972 couldn’t figure out how Richard Nixon had won the presidency. ‘I can’t believe it!’ she said. ‘I don’t know a single person who voted for him!’” Yet Nixon triumphed in 49 states, while his opponent managed only one. Goldberg insists that this is a typical example of how “hopelessly out of touch with everyday Americans” most journalists are.
To illustrate his concern that liberal media bias is harmful to society, Goldberg includes a chapter about “the most important story you never saw on TV.” He holds that television news doesn’t “report the really big story—arguably one of the biggest stories of our time—that [the] absence of mothers from American homes is without any historical precedent, and that millions upon millions of American children have been left . . . ‘to fend for themselves’—with dire consequences.”
The author goes on to quote sociologist Andrew Hacker: “Among married women with preschool children under the age of six, fully seven in ten now have paid employment.” Hacker adds that this represents “a new approach to motherhood,” one in which “most [women] are disinclined to make caring for their children their primary occupation.”
Goldberg quotes social scientist Mary Eberstadt as he explains this huge social shift: “Partly, it’s because divorce has become so commonplace in America that ‘a sizeable majority of Americans have tacitly, but nonetheless decidedly, placed the whole phenomenon [of kids being without their mothers at crucial times of the day] beyond public judgment.’” Goldberg adds that publishing such stories “is not going to be popular” and it “is not a good way to get votes, Nielsen or otherwise.”
In addition, he sees television news as being unwilling to cover this topic because “media elites will not take on feminists. Feminists are the pressure group that media elites (and their wives and friends) are most aligned with. Feminists tend to see any discussion that raises troubling questions about latchkey kids or younger children in day care . . . as an out-and-out attack on women and the freedoms they’ve won since the 1970s.”
As we read a newspaper while eating breakfast, or listen to radio news while commuting to work, or take in a TV news program in the evening, we need to be aware of the process that brings us that news. We need to be aware of the numerous choices that shaped the story: the selection of the topic, the wording of questions in a survey, the interpretation of the results. We need to reflect carefully on what is said or written, not naively accepting that all that is conveyed is accurate or without omissions or distortions—whether witting or unwitting.
Because we can’t be everywhere or know the essence of every study or research project, we are dependent on others to provide us with information. But we owe it to ourselves to carefully consider and evaluate what we read, what we hear and what we come to believe.