Here we present different kinds of evidence and ways of using the scientific method in social research.

1. Lead and Intelligence:

Health alerts about lead in drinking water and its threat to children have appeared with rising urgency. In 1991, the U.S. Centers for Disease Control (CDC) reduced the intervention level for lead exposure to 10 micrograms per deciliter of blood, the third such reduction since 1970.

This action resulted from research connecting blood levels of as little as 10 µg/dl to intelligence deficits in children. Based in part on the same research, the Environmental Protection Agency (EPA) in 1991 adopted new rules for lead levels in community water systems.

Dr. Herbert Needleman, one of the leading scholars in this field, has received much of the credit for raising concern about lead in the environment (“Is there lead in your water?”, 1993). For example, he found lower IQ scores in children with higher levels of lead as measured from their baby teeth.

In the same year that the CDC and EPA were basing policy in part on Needleman’s work, a third federal agency, the National Institutes of Health (NIH), received complaints about his work. By April 1992, Needleman faced an open hearing on charges that he had engaged in scientific misconduct in the 1979 study.

Dr. Clare Ernhart and Dr. Sandra Scarr had raised doubts about his conduct and his report of that earlier research. They testified against him as part of the proceedings of the NIH Office of Scientific Integrity. This episode teaches some important lessons about social research.

2. The Needleman Case:

The story begins in 1975 when Needleman’s team began collecting baby teeth from 3329 first- and second-grade children and then measuring the lead content of these teeth.

While trying to identify children with high and low lead levels, the team collected intelligence measures from 270 of the subjects most likely to be high or low in lead content.

However, the researchers excluded some of those tested and, in the paper published in 1979, compared just 58 children with high lead levels to 100 children with low levels. Needleman went on to conduct other studies that pointed to lead’s adverse effects on human intelligence.
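
To make the design concrete, here is a minimal sketch, in Python, of the kind of high-lead versus low-lead group comparison described above. The scores are simulated and the group means are assumptions chosen only for illustration; they are not Needleman’s data.

```python
# Hypothetical sketch of a high-lead vs. low-lead group comparison.
# The scores are simulated for illustration only; they are NOT Needleman's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated IQ scores: 58 "high-lead" and 100 "low-lead" children,
# with a small assumed group difference built in for the example.
high_lead_iq = rng.normal(loc=102, scale=15, size=58)
low_lead_iq = rng.normal(loc=107, scale=15, size=100)

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(high_lead_iq, low_lead_iq, equal_var=False)

print(f"mean IQ (high lead): {high_lead_iq.mean():.1f}")
print(f"mean IQ (low lead):  {low_lead_iq.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A test of this kind, of course, says nothing about how the 158 analyzed children were chosen from the 270 tested, and that selection is exactly what later critics questioned.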

Recognized as an expert and concerned about protecting children against the dangers of lead, he had a major impact on public policy.

In 1990, the Department of Justice asked Needleman to assist in a suit brought under the Superfund Act. Superfund bills the cost of cleaning up toxic waste to those who caused the pollution, and it often has to wage legal battles to extract these payments.

In this case, the Justice Department wanted to force the cleanup of lead tailings from a mine in Midvale, Utah. The defense hired Ernhart and Scarr as witnesses. Knowing that Needleman’s testimony for the government would rely in part on his 1979 study, Ernhart and Scarr sought access to his original data.

To prepare for the trial, they spent two days in his lab checking his work. Before the trial could begin, the litigants settled the case with $63 million obtained for cleaning up the mine site.

Ernhart and Scarr’s brief view of Needleman’s data raised questions about his 1979 report, and they wrote a complaint to the NIH Office of Scientific Integrity (OSI, since renamed the Office of Research Integrity, or ORI, and moved to the Public Health Service).

Of their several concerns, one had to do with the way Needleman chose only some of the tested children for analysis. They suspected that he picked just the subjects whose pattern of lead levels and IQ scores fit his belief.

In October 1991, the OSI instructed the University of Pittsburgh, Needleman’s home institution, to explore the charges.

The resulting hearings took on the bitterness of a legal trial, complete with published rebuttals and charges about selfish motives.

Needleman (1992) likened the hearing to witch trials. He cast Ernhart and Scarr as paid defenders of a lead industry that wanted to protect its profits by casting doubt on his work.

For their part, his critics denied serving the lead industry and told of the human and professional costs of serving as honest whistle-blowers.

This Pittsburgh inquiry resulted in a final report in May 1992 (Needleman Hearing Board, 1992). This report absolved Needleman of scientific misconduct, finding no evidence that he intentionally biased his data or methods.

However, the hearing board did find that “Needleman deliberately misrepresented his procedures” in the 1979 study. The report said that the misrepresentations “may have been done to make the procedures appear more rigorous than they were, perhaps to ensure publication.” Even so, the hearing board judged that this behavior did not fit definitions of misconduct that focus on faking data and plagiarism.

But others wondered why such misreporting did not fall within another rule that forbids serious deviations from commonly accepted research practices.

3. The Moral of the Story:

Researchers often disagree about results, but they seldom take such differences before hearing boards. More often, the scientists argue with each other in published articles and let other researchers decide for themselves.

Sometimes, a scholar will share the challenged data with critics for additional analysis, perhaps even working with them to produce a joint finding.

In Needleman’s case, the scientists had a history of distrust based on their conflict as expert witnesses in civil trials about lead exposure and toxic waste cleanup. Because the 1979 study had become a weapon in these disputes, the researchers chose not to work together to resolve their differences.

Instead, one side turned to the research integrity office of the government, which in turn handed the problem to a university. Charged with fighting research fraud, these offices had little experience with a case that bordered on a disagreement over methods.

The procedures of this case pleased neither side. Needleman sued the federal government and the University of Pittsburgh, charging that they had denied him due process. Scarr and Ernhart hoped additional information would lead to a more severe judgment on later review.

Whatever the final outcome of this dispute, we can draw some important conclusions from it.

I. Social researchers can address very important matters. In this case the stakes involved the mental health of the nation’s children, the economic well-being of a major industry, crucial federal policies on the environment, lawsuits for monetary damages, and the reputations of prominent scholars.

II. This case shows how science works through the adversarial process. Researchers should doubt their own findings and those of other scholars. As consumers of research, we should not believe everything we read. Instead, we should assume a doubtful posture in the face of research claims.

We call this posture skepticism. This term does not mean unyielding disbelief but rather the habit of checking the evidence. Skepticism requires us to distinguish poor research, unworthy of our belief, from good research, which deserves at least provisional acceptance.

The dispute about Needleman’s findings, although unusual in its form, represents a normal and accepted approach to getting at the truth. This episode highlights the importance of research methods as the focus for scientific debate and as the content of this text.

III. This dispute forces us to view our research practice as an ethical duty. Scientific integrity consists of a kind of utter honesty, a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid, not only what you think is right about it…. You must do the best you can, if you know anything at all wrong, or possibly wrong, to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it.

This view of integrity challenges us to help our worst critics attack our most cherished conclusions. We will need detachment from our theories if we are to value the credibility of our results more than victory in our disputes.

IV. Does lead affect IQ? Improved analyses of Needleman’s original data gave evidence in support of his lead-IQ link that was even stronger than that reported in his 1979 article (Taylor, 1992, citing the Needleman Hearing Board’s Final Report, 1992).

However, these results come from only one small sample, and other research findings have given mixed results. The current EPA and CDC positions agree with Needleman’s conclusion, but they could change should new data appear.

4. Assertion, Authority, and Evidence:

Social research produces claims about causation, for example, that A causes B. However, some causal claims appear without evidence. Anyone can assert a causal relation, but we need not accept it without support. If a causal claim has no evidence, why should anyone believe it or prefer it to a rival view that has support?

Sometimes claims draw their support not from evidence but rather from the authority, expertise, or rank of the source. If the authority refers to evidence, we expect to see the data in order to make our own judgment. We often hear assertions that some new treatment can cure a terrible disorder such as schizophrenia, cancer, or heroin addiction.

Perhaps a few patients testify to the success of the new cure. Recruiting desperate, paying clients with the promise of a miracle drug may motivate such claims. However, neither the fame nor the academic degree of the source will substitute for evidence.

Some authorities base their assertions entirely on faith with no claims to scientific foundation. Clashes between claims based on faith and those based on evidence have made for some dramatic moments. One of the most famous came to a head in Galileo’s heresy trial.

The Copernican model of the solar system held that the earth moved around the sun rather than the sun around the earth.

In 1616 a church court condemned this view as being contrary to the Bible. In 1632 Galileo published his Dialogue on the Two Principal World Systems, which seemed to favor the Copernican view.

The Inquisition summoned him to Rome for trial in 1633, forced him to recant, and prohibited his book. He remained under house arrest for the last eight years of his life (Hummel, 1986).

Contrary to the popular view, this trial did not derive from a simple conflict of science versus religion. The matter involved complex personal jealousies and power struggles. Redondi (1983/1987) even suggests that Galileo’s trial stemmed from theological disputes other than his support of Copernicanism.

Although we may never know the full story of the trial, Galileo gave an eloquent defense of science: “I do not feel obliged to believe that the same God who has endowed us with sense, reason, and intellect has intended us to forgo their use”.

The centuries have vindicated Galileo. In 1757 the Church took books teaching the mobility of the earth off the Index of Prohibited Books. In 1979, Pope John Paul II called for a reexamination of the Galileo case. Thirteen years later, the Church found him not guilty (Montalbano, 1992). The Vatican has also published its secret archives on the Galileo case and admitted that the judges were wrong (Poupard, 1983).

One irony of this episode is that Galileo had many friends in the Church (including the Pope). They advised him not to claim proof for his theory in order to avoid confronting the Church. As it turned out, Galileo should not have claimed that his theory was proved since he had made some mistakes (for example, in his theory of tides).

This episode shows that assertions based on good evidence prevail over those based on authority and, in their turn, yield to better ones based on better evidence. In the long run, the more truthful and useful explanation should emerge from this competition between rival ideas.

5. Philosophy of Science:

Our skepticism about social research goes beyond rare cases of data fraud or common disputes about methods. Philosophers of knowledge have long wondered how and even whether we can know about our world.

The phrase “know about our world” implies that certain “facts” exist that we can learn. Science pursues these facts by empirical methods, that is, methods based on experience of the world. But philosophers disagree about how far we can trust our observations.

In the social sciences, empiricism sometimes goes by the name positivism. Positivism rejects speculation and instead emphasizes positive facts. In this regard, social science shares a unity of method with the natural sciences.

That is, we can test theories by seeing how well they fit the facts that we observe. Although no consensus has formed around an alternate view, traditional positivism has many critics.

What we usually mean by the notion of observation is that we feel sensations within us that we attribute to external causes. When I say “I see a tree,” I really mean that I have an inner visual sensation consistent with what I have learned is called a tree.

But how can you or I be sure that a tree really exists? Perhaps I am hallucinating and my inner sensations come not from a tree at all but rather some malfunction of my nervous system. We “know” the world only indirectly:

“We do not actually see physical objects, any more than we hear electromagnetic waves when we listen to the wireless”

In short, the positive data with which we had hoped to anchor our theories seem like constructions. Our scientific facts resemble collective judgments subject to disagreement and revision.

To speak of facts suggests that we can say what does or does not exist in the world. The branch of philosophy called ontology deals with this problem of the ultimate nature of things.

Do external things really exist out there to serve as sources of our sensations? Belief that there are such real sources is called realism. We cannot demonstrate realism. We can never prove the reality of an external source using only our suspect perceptions.

Most scientists and laypeople act and talk most of the time as though they believed in realism. Nevertheless, some philosophers have argued for another view called fictionalism or instrumentalism. This latter view regards the supposed external sources of our perceptions as fictions dependent on our observing instruments.

Supposing that real facts exist, we still have the problem of showing how we know them. The term epistemology applies to this concern with the relation between knower and known. Claiming that you know something implies that you can defend the methods by which you got your knowledge. The ever-present rival to your claim is that you have misperceived.

6. Selective Perceptions:

Much evidence suggests that our observations are selective and subject to error. According to Thomas Kuhn (1970), normal science consists in solving puzzles within a framework of widely accepted beliefs, values, assumptions, and techniques. Scientists working on a problem share certain basic assumptions and research tools that shape their observation of reality. Kuhn called this shared framework a paradigm and considered it a lens through which we see the world.

Whole generations of researchers may engage in normal science within a paradigm before enough conflicting data force a paradigm shift.

Such paradigm shifts or revolutions occur when existing theories can no longer adjust to handle discrepant findings. Paradigm shifts resemble gestalt perceptual shifts. Kuhn illustrates this by a psychology experiment in which subjects viewed cards from a deck.

This deck had some peculiar cards, such as black hearts and red spades, but the subjects were not told about them in advance. Most subjects needed repeated viewings before noticing these odd cards. Seemingly, the subjects looked at black hearts and “saw” red hearts because they believed that only red hearts existed.

When they grasped the idea that black hearts could exist, it was as though someone threw a switch in their minds. Suddenly they could “see” the cards as they existed rather than as imagined. We need to reflect on the framework in which we think and do research. Would we notice the black hearts and red spades if they appeared in our data?

Another major critique of scientific observation came from Karl Marx who challenged its neutrality and completeness. For Marx, sensation implied an active noticing based on motivation for some action (Russell, 1945). We only perceive a few out of the universe of possible stimuli.

We select for attention those that affect our interests and disregard those that do not. Marx thus locates science in the context of politics and economics, driven by the self-interest of the researchers who themselves belong to economic classes.

7. Faith of Science:

We face other problems beyond perceiving the world accurately. Positivism holds that the mission of science is to discover the timeless laws governing the world. This notion implies what Bertrand Russell (1948) called the “faith of science”.

By this phrase he meant that we assume that regularities exist in the connection of events and that these regularities or “laws” have continuity over time and space. We cannot prove this covering law, but we must believe it if we expect to find stable regularities with our science.

The great success of the physical sciences in the past two centuries lends credence to this faith. For example, our lunar astronauts confirmed that physical relationships discovered on earth hold on the moon as well.

However, the overthrow of Newtonian physics by Einstein early in the twentieth century shook confidence in our capacity to discover timeless physical laws (Stove, 1982). Social scientists have long doubted their chances of matching the success of the natural sciences.

In the social domain, some scientists reject the existence of objective laws knowable by observation. Rather, these critics hold, our understanding of the world is a social construction dependent on the “historically situated interchanges among people”.

8. Fallibilism:

Suppose physical or social events do follow laws independent of the socially constructed perception of them. Philosophers of science warn us that such causal connections will resist discovery. One problem has to do with induction, finding an idea among observed events that might explain other, not yet observed events. Hume, writing in the 1700s, made a strong case against such an inductive leap (Stove, 1982).

Repeated instances of an observation, no matter how many, cannot guarantee its future repetition. However, most people would say that such repetition does increase the chances of its occurring again. Nevertheless, we must remind ourselves that we run the risk of making inductive mistakes, that is, we are fallible in this regard. Fallibilism refers to the posture of suspecting our own inductions.

In sum, the tools of our knowing, both the procedures of measurement and the induction of lawful patterns, come from human experience and risk human error. We can assert a causal connection. But we do so only under warrant of (that is, limited by and no more valid than) our methods for perceiving such relations.

This limited and cautious approach to research provides a continuing topic of debate about the philosophical foundations of social science (Gholson & Barker, 1985; Manicas & Secord, 1983).

9. The Strategy of Research:

Theory as Testable Explanation:

Social research tries to explain human events. What causes people to abuse their children, to become depressed, to remain homeless, to fail to learn to read and write, or to commit crimes? Besides our natural curiosity about how things work, we have a strong practical motive to explain, predict, and shape certain human conditions.

Social research includes a great many activities, each falling in one of three main clusters: tentative explaining, observing, and testing rival views against data. We need all three to do social research. If all we did was imagine different explanations, we would never have a basis for choosing among them. On the other hand, proposing tentative explanations helps make sense out of diverse observations and guides us in making still better observations. Such tentative explanations constitute theory.

We can usually think of two or more different theories to explain many events. Collecting data helps us decide which theory best fits reality. In order to help us understand causation, our data must come into contact with theory. For example, we may observe and describe the incidence of death by cholera or suicide. But merely counting and sorting deaths, what we call descriptive research, does not explain them.

However, observing with a theory in mind becomes causal research by joining a cause to an effect. For example, John Snow suspected that fouled water caused cholera. In the period from 1848 to 1854, he linked the different rates of cholera deaths to the different companies supplying London houses with water (Lilienfeld, 1976, pp. 24-25).

In the same way, Emile Durkheim linked changes over time in the rate of suicide with changing economic conditions (Durkheim, 1897/1951). These men could have looked at an enormous number of social and physical factors as possible causes of death. Their theories helped them to narrow their focus to water supply and economic conditions.
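
As a minimal sketch of how such rate comparisons work, the snippet below computes deaths per 10,000 households for two water suppliers. The company names and all figures are hypothetical stand-ins, not Snow’s actual counts.

```python
# Sketch of Snow-style reasoning: compare cholera death rates across water
# suppliers. Company names and all figures are hypothetical.
households = {"Company A": 40000, "Company B": 26000}
cholera_deaths = {"Company A": 1200, "Company B": 100}

for company in households:
    rate = 10000 * cholera_deaths[company] / households[company]
    print(f"{company}: {rate:.0f} cholera deaths per 10,000 households")
```

A large gap between the two rates would be the kind of covariation that points toward, though does not by itself prove, a causal role for the water supply.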

In the last step of the research cycle we compare our causal idea with our observations. Does our theory fit? Does another theory fit better? Science consists of seeing whether data confirm or disconfirm our explanations. Popper (1987) argued that we should not simply look for confirmations. Rather, he said, any “genuine test of a theory is an attempt to falsify it, or refute it. Testability is falsifiability”. As an example of pseudoscience, he offered astrology, “with its stupendous mass of empirical evidence based on observation-on horoscopes and on biographies”, but without the quality of refutability.

Rules of Evidence:

In order to judge our theory’s fit, we rely on standard decision rules. Our research reports make public both theories and data, so that anyone can look over our shoulder and second-guess us using these same guidelines. Researchers usually demand that we meet three criteria before claiming a causal link: (1) covariation; (2) cause prior to effect; and (3) absence of a plausible rival hypothesis or explanation.

The first criterion seems simple enough. If A causes B, they should move together or co-vary. If polluted water causes cholera, we expect to find more cholera cases in houses supplied with bad water and fewer cases in ones with pure water.

If rapidly changing economic conditions cause suicide, we should count more suicides in changing economic times and fewer in stable ones. Knowing that two things do not co-vary, on the other hand, casts doubt on the theory that they have a causal link. However, association alone does not tell us the type of causal link between A and B.
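
As a minimal sketch of checking the covariation criterion, the snippet below computes a correlation coefficient on paired observations. The variable names and values are invented for illustration.

```python
# Sketch: checking covariation (criterion 1) with a correlation coefficient.
# The paired values below are hypothetical.
import numpy as np

lead_level = np.array([3, 5, 8, 12, 15, 18, 22, 25])       # hypothetical µg/dl
iq_score = np.array([112, 108, 109, 104, 101, 99, 95, 94])  # hypothetical scores

r = np.corrcoef(lead_level, iq_score)[0, 1]
print(f"Pearson r = {r:.2f}")
# A sizable negative r would satisfy the covariation criterion,
# but by itself it says nothing about whether or how A causes B.
```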

The philosopher Hume warned us of our habit of mind that tends to see causation in the association of events. When two events coincide again and again, we come to expect one when we notice the other. We often wrongly treat this “prediction” as “causation.”

However, we must separate these two notions in our minds. Russell (1948) illustrates this problem with the story of “Geulincx’s two clocks.” These perfect timepieces always move together such that when one points to the hour, the other chimes.

They co-vary and allow us to make good predictions from the hands of one to the chimes of the other. But we would not make a causal claim. No one supposes that one clock causes the other to chime. In fact, a prior event causes both, namely the work of the clock maker. Thus we need more criteria beyond simple association to judge causation.
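
A short simulation makes the point of the two clocks. Both series below are driven by the same prior cause, so they correlate almost perfectly even though neither causes the other; all quantities are invented for illustration.

```python
# Sketch of Geulincx's two clocks: a common prior cause (the clockmaker's
# shared mechanism) yields two highly correlated series, yet neither
# series causes the other. All values are simulated.
import numpy as np

rng = np.random.default_rng(1)
true_time = np.arange(100.0)                          # the common cause
clock_hands = true_time + rng.normal(0, 0.1, 100)     # clock 1: the hands
clock_chimes = true_time + rng.normal(0, 0.1, 100)    # clock 2: the chimes

r = np.corrcoef(clock_hands, clock_chimes)[0, 1]
print(f"correlation between the two clocks: r = {r:.3f}")  # very near 1.0
```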

The second requirement deals only in part with this problem of telling covariation from causation. A cause should precede its effect. Economic change cannot cause suicide if the upturn in suicide rates comes before the change in the economy. Knowing the sequence of events can help us rule out one causal direction.

But knowing that two events are correlated and that one comes before the other still does not settle the question. Recall Geulincx’s two clocks, and suppose that one clock is set one second ahead of the other so that its chimes always sound before the other’s. Would we argue that the former clock causes the latter’s chimes just because it sounds first? Of course, we would not.

The third rule for causation also deals with the problem of Geulincx’s two clocks. It says that we must be able to rule out any rival explanation as not plausible. By plausible we mean reasonable or believable. This test of causation can prove hard to pass.

A rival explanation that seems unlikely to one researcher may later appear quite likely to others. Anything that can cause two events to appear linked serves as a plausible rival explanation.

Much of what social researchers do helps guard against such rival explanations. We grade social research largely on its success in ruling out rival explanations. Someone may think of a new and plausible rival years after a study is published. Thus, the social researcher must design studies in ways that minimize, as much as possible, present and future competing explanations. To the extent that a researcher shows covariation and temporal precedence and casts doubt on opposing rationales, we will accept his or her causal claim.
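
One common way of casting doubt on a rival explanation is to hold the suspected third variable constant and see whether the original association survives. The sketch below uses simulated data and assumed variable names: a raw correlation produced entirely by a common cause shrinks toward zero once that common cause is controlled. It illustrates the logic, not the procedure of any particular study.

```python
# Sketch: probing a rival explanation by controlling a third variable.
# The data are simulated so that "income" drives both the supposed cause
# and the outcome; adjusting for it should erase the raw association.
import numpy as np

rng = np.random.default_rng(2)
n = 500
income = rng.normal(0, 1, n)                    # the rival (third) variable
exposure = 0.8 * income + rng.normal(0, 1, n)   # supposed cause
outcome = 0.8 * income + rng.normal(0, 1, n)    # effect of interest

raw_r = np.corrcoef(exposure, outcome)[0, 1]

def residuals(y, x):
    # Remove the linear effect of x from y.
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: correlate the residuals after regressing each on income.
partial_r = np.corrcoef(residuals(exposure, income),
                        residuals(outcome, income))[0, 1]

print(f"raw correlation:     {raw_r:.2f}")      # sizable
print(f"partial correlation: {partial_r:.2f}")  # near zero
```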

The threat of competing inferences shapes almost every aspect of data collection and research design. Whether as a consumer or producer of social research, you must learn to judge research on the basis of how well it limits and rejects rival interpretations.

This text covers the major types of research threats. One threat arises when we collect measures. We cannot claim that A causes B if our measures fail to reflect both A and B. Another threat has to do with the fact that much of social research comes from samples.

We must take care not to claim that a finding holds true for a whole population when it occurs only in a small group drawn from that population. A third problem concerns the many different ways we can design our studies.

Designs differ in their control of third variables that might cause A and B to appear linked. Finally, we must guard against the temptation to generalize findings to people, places, or times not actually represented in our study. You must consider not just one of these threats in reading research, but rather remain alert to all of them.
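
The sampling threat in particular lends itself to a short simulation, sketched below with invented numbers: several small samples drawn from the same population yield noticeably different estimates, which is why a single small-sample finding deserves caution before it is generalized.

```python
# Sketch of the sampling threat: the same population gives noticeably
# different estimates from one small sample to the next. All numbers
# are simulated for illustration.
import numpy as np

rng = np.random.default_rng(3)
population = rng.normal(loc=100, scale=15, size=100_000)  # IQ-like scores

sample_means = [rng.choice(population, size=50, replace=False).mean()
                for _ in range(5)]

print("population mean:", round(float(population.mean()), 1))
print("means of five samples of 50:", [round(float(m), 1) for m in sample_means])
```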

Because of these threats, social research does not always reach conclusions agreed upon by all. Rather than providing laws of social behavior, it gives evidence for and against preliminary, would-be laws. This evidence requires interpretation.

Almost weekly, we hear of results that, if believed, would change our behavior (for example, that lead causes intelligence loss in children) or raise fears in some of us (for example, that left-handers have a shorter life expectancy, Coren & Halpern, 1991). In the same announcements we may also hear that the conclusions could change pending further research, leaving us to decide how much faith to place in the claims.

Constantly weighing the conflicting findings of scientists can prove frustrating. Why is it that researchers cannot decide which scientists have the right answers and settle such debates once and for all? This conflict between opposing researchers becomes most urgent in court cases that rely on the expert testimony of scientists.

When such experts give conflicting views, the courts must seek ways to choose the more credible scientist. State and federal courts have sometimes relied on the 1923 Frye rule, which allows “experts into court only if their testimony was founded on theories, methods, and procedures ‘generally accepted’ as valid among other scientists in the same field”.

However, this principle of ignoring “junk science” has come under fire by those plaintiffs whose cases depend on the challenged experts. The Supreme Court took up this question in the case of Daubert v. Merrell Dow Pharmaceuticals, which involved claims that the drug Bendectin caused birth defects. Lower courts, following the Frye rule, said that the plaintiff’s experts could not give their views because their evidence was not accepted as reliable by most scientists.

The Supreme Court, in its decision of June 28, 1993, reversed the lower courts and relaxed this rule. The courts can still screen out unreliable “experts.” However, judges must now do so not on the basis of the witnesses’ acceptance by other scientists but rather on the quality of their methods. Justice Harry Blackmun wrote that “Proposed testimony must be supported by appropriate validation-i.e. ‘good grounds’…”

This decision comforts those scientists who distrust a rule that imposes certainty or publicly grades researchers. By freezing the research process at some fixed “truth” or anointing good and bad researchers, we might hinder future Galileos who point to new ways of seeing things.

Later research may displace the currently most favored theory, and it will do so more quickly in a climate that tolerates conflicting ideas. Scientists draw the line at fraud and have set up ethical guidelines against making up or falsely reporting data. However, they worry about science courts that punish researchers for using improper research methods. Instead, researchers compete in the marketplace of ideas, hoping to earn research support, publications, and promotions by convincing their peers of the excellence of their methods. In this spirit, the Supreme Court’s decision in Daubert v. Merrell Dow trusts judges and juries to sift good science from bad. This text aims to give you the power to judge for yourself the quality of research that will affect your life.