Besides the concern for protecting human subjects, several other important ethical dilemmas exist, including ones that arise in socially sensitive research (Sieber & Stanley, 1988). Conflicts involving other figures in the research enterprise have led to additional guidelines. In some cases, these ethics standards elicit little controversy. For other conflicts, guidelines will continue to trigger public and professional debate.

Ethical conflicts in research derive from the variety of players and their divergent self-interests and values. The major players include the researcher, the subject, the funder, and society as the consumer of research. Other parties also figure in the process of science, including professional associations, scholarly journals, research institutions such as universities, and government in its regulatory role.

The distinctions among these parties are not always clear-cut, since one may play two or more parts: for example, the government acts as funder, as consumer of research results, as regulator, and as investigator (for instance, in the collection of census data).

In addition, each of these players may have different priorities. Since these priorities sometimes collide, ethical problems can arise between any two parties in the research process. Competing interests may well try to achieve their aims at the expense of the other parties involved.

I. Researcher versus Researcher

Plagiarism:

One ethical conflict raises little controversy. Virtually all professional groups prohibit plagiarism, and copyright laws provide remedies for stealing another author’s published writing. However, the explosive increase in scientific journals makes it difficult to catch plagiarized work, especially when it is published in obscure journals.

Most cases of research misconduct involve plagiarism (Lafollette, 1992). Just one plagiarist can publish dozens of articles before being caught (for example, the Alsabti case described in Broad & Wade, 1982).

When he or she is caught, the plagiarist can suffer severe penalties. Jerri Husch, a sociologist, discovered in 1988 that large sections of her 1984 Ph.D. dissertation on Muzak in the workplace had appeared uncredited in a book by Stephen Barnes.

Dr. Husch initiated a complaint, and in 1989 a committee of the American Sociological Association concluded that Barnes had plagiarized the material.

When no satisfaction was obtained from Dr. Barnes or his publisher, the organization raised the matter with Dr. Barnes’s employer, Eastern New Mexico University, where he worked as Dean of Fine Arts. By the end of 1989, Dr. Barnes had left his job, apparently in part due to this pressure (Buff, 1989).

Some plagiarism disputes are less clear-cut than using another’s published work without permission.

The professor who puts his or her name on a student’s work shows how research can be “stolen” before the publication stage. In an era of large-scale team research, disputes can easily arise about authorship and the sharing of credit for creative contributions. Such disputes, unless prevented by agreements in the early stages of collaboration, can and do produce long-lasting hostility.

Peer Review of Research:

In contrast to the rules against plagiarism, the ethical codes have achieved less clarity on another potential conflict between researchers.

In employment and promotion decisions, in reviews of papers submitted for publication, and in grant requests for research funds, researchers usually receive judgments by their peers on the merits of their research.

These decisions can govern which research reports will receive wide publication or none at all and which proposals will receive funding or never be carried out.

Such peer review procedures require fairness and objectivity. Suspicion that the peer review mechanism is working with bias could produce interpersonal conflict between competing scholars.

Possible abuses in peer review also raise concern about distortions in the substantive direction of future research, as decisions favor certain types of research proposals and research careers over others.

The peer review process tries to protect itself by using multiple reviewers. But whereas lawbreakers usually have the protection of 12-member juries, professional researchers are often judged by as few as two to five peers.

Research on the peer review process in awarding grants has cast serious doubt on its reliability. One research team had 150 National Science Foundation proposals reevaluated.

They concluded that “the fate of a particular grant application is roughly half determined by the characteristics of the proposal and the principal investigator, and about half by apparently random elements which might be characterized as the ‘luck of the reviewer draw.’”
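The “half proposal, half luck” finding can be illustrated with a toy simulation (the model and all numbers here are illustrative assumptions, not the study’s data): if each reviewer’s rating mixes a proposal’s true merit with an equally large dose of reviewer-specific noise, two independent reviewers’ ratings correlate only about .5.

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 5000                                              # hypothetical proposals
merit = [random.gauss(0, 1) for _ in range(n)]        # true quality of each
reviewer_a = [m + random.gauss(0, 1) for m in merit]  # merit + reviewer noise
reviewer_b = [m + random.gauss(0, 1) for m in merit]  # an independent reviewer
r = pearson(reviewer_a, reviewer_b)                   # roughly .5
```

With equal merit and noise variances, the expected inter-reviewer correlation is .5, which is one way to read “half determined by the proposal and half by the luck of the reviewer draw.”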

Research on peer review of manuscripts submitted for publication also suggests low levels of inter-reviewer agreement (Cochiti, 1980) and only modest relationships between peer judgments of quality and later citation (Gottfredson, 1978). However, “blind” review techniques used by many of the best journals offer some protection against sex, ethnic, or personal bias in manuscript review.

II. Society versus Researcher

Fraud:

With great embarrassment, scientists have had to admit that their colleagues sometimes engage in fraud. The problem rarely involves financial fraud, looting the research grant to buy expensive cars and vacations, although perhaps that exists as well.

The main problem is falsifying data. The motive to make up or trim data may derive from the desire to gain career success through publications. But the impact of such fraud goes beyond the careers of individual researchers and can reach all the way to socially significant public policy.

Cases of documented fraud, although rare historically, appear to be on the rise (Broad & Wade, 1982; Lafollette, 1992; Miller & Hersen, 1992).

One such case involved Stephen Breuning, a young psychologist whose research was funded by the National Institute of Mental Health (NIMH). Breuning published studies between 1980 and 1984 on the effects of psychoactive drugs on the mentally retarded. His findings influenced treatment practices in some states, including Connecticut.

However, “only a few of the experimental subjects described in [some of the investigator’s] publications and progress reports were ever studied.”

Some of the questionable reports were coauthored by respected researchers who, despite little or no involvement in the research, lent their names to the reports or whose names were added without their permission.

The NIMH investigating panel recommended that Breuning be barred from additional NIMH funding for 10 years and that the case be handed over to the Department of Justice, which subsequently indicted him, the first such indictment in federal court. His institution had to reimburse NIMH more than $163,000 for misspent grants (Bales, 1988).

In a subsequent plea bargain, Breuning pleaded guilty, becoming subject to a maximum penalty of 10 years in prison and $20,000 in fines. His actual sentence consisted of 60 days in a halfway house, 250 hours of community service, and 5 years of probation.

We may never know the extent of harm done to retarded patients whose treatment was guided by these fraudulent findings. Tellingly, many of the citations of this work were self-citations by Breuning or his coauthors. Non-self-citations fell after 1986, when the scandal broke, and reflected little apparent impact of his claims.

The Burt Case: Fraud or Politics?

The most famous case of alleged fraud in the social sciences involves Sir Cyril Burt. He died a greatly esteemed scholar in 1971, once called the “dean of the world’s psychologists.”

Burt co-founded the British Journal of Statistical Psychology in 1947 and controlled or helped control it until 1963.

He studied the source of intelligence (genetic inheritance versus rearing and environment) using identical twins. Identical twins usually score the same on intelligence tests. Because most twins grow up in the same environment as well as sharing the same genes, we cannot tell which source causes their similarity.

However, identical twins reared apart have the same genes but grow up in different environments. Scholars who think environment causes intelligence would expect low correlations between the intelligence scores of twins reared apart.

Those who believe that genes cause intelligence expect high correlations regardless of environmental differences in rearing. Burt claimed to have found more identical twins reared apart than anyone else in the intelligence area: 15 such sets in 1943, 21 by 1955, twice that number in 1958, and 53 in his last report of 1966.
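The logic of the reared-apart design can be sketched as a short simulation (the heritability value, sample size, and variable names here are illustrative assumptions, not Burt’s data): each twin’s score is a genetic component shared by the pair plus an environmental component drawn separately for each twin, and the twin-pair correlation then recovers the genetic share of the variance.

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
h2 = 0.75   # assumed share of score variance due to genes (illustrative)
n = 5000    # simulated twin pairs
twin_a, twin_b = [], []
for _ in range(n):
    g = random.gauss(0, h2 ** 0.5)          # genes: shared by both twins
    e1 = random.gauss(0, (1 - h2) ** 0.5)   # environment: separate rearing
    e2 = random.gauss(0, (1 - h2) ** 0.5)
    twin_a.append(g + e1)
    twin_b.append(g + e2)
r = pearson(twin_a, twin_b)  # close to h2: high under a genetic model
```

Under a purely environmental model the shared component for twins reared apart would be near zero, and the correlation would be near zero as well; that is why the reared-apart correlation is treated as the decisive evidence.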

Burt reported high correlations supporting the genetic heritability of intelligence. These findings served as the basis for Burt’s defense of British educational policy, which then selected children into higher and lower tracks based on early tests.

They also figured in the volatile debate about the relation of race and intelligence. Burt’s work roused great animosity from both other psychologists and people in political circles. However, his great stature protected him from charges of fraud until after his death.

The first doubts about his work arose from the incredible consistency of his reports, which repeatedly gave the same correlation for intelligence of twins reared apart. Another question concerned publications in Burt’s journal on his data by two authors named Howard and Conway.

A reporter named Oliver Gillie of the London Times tried to contact them in 1976 to get their views of Burt’s data. Unable to locate them, he charged that Burt had written the papers himself and assigned authorship to nonexistent people.

Other scholars who disliked Burt’s genetic views on intelligence added their weight to the charges that Burt had falsified his data. When Hearnshaw (1979), a respected biographer, concluded that Burt had committed fraud, the case seemed closed on the greatest scandal in the history of social science.

In the late 1980s, two independent scholars, with no stake in the genetics versus environment debate, reviewed the Burt case (Fletcher, 1991; Joynson, 1989). They converged on a view that partially exonerates Burt and charges his accusers with character defamation.

Burt clearly engaged in misconduct by writing and publishing the papers attributed to Howard and Conway. However, these individuals existed and served as unsalaried social workers who helped Burt with his data collection in the 1920s and 1930s.

The lapse in time explains the difficulty Gillie had in finding them. Burt’s action in assigning them the authorships may have been his misguided way of giving them credit for their help. As for the identical correlations, it now appears that Burt was simply carrying over findings reported in one article to later articles and not claiming new computations with the same results.

The best check on Burt’s claims comes from other independent research. At least two separate studies of identical twins reared apart have reported intelligence correlations of .77, exactly the same as Burt (Jensen, 1992). Apparently, both the media and psychologists in leadership positions wrongly accused Burt in order to impugn his views on the heritability of intelligence. In so doing, they not only smeared a leading scholar after he could no longer defend himself, but they cast doubt on the methods he had helped pioneer, such as the measurement of mental abilities.

Waste:

We can think of fraud as a special case of waste in which research funds have disappeared without return of valid results. Another kind of waste can occur even when no fraud takes place.

An instance of this conflict between funder and researcher appears in the Hutchinson versus Proxmire case (Kiesler & Lowman, 1980). Senator Proxmire waged a campaign to save taxpayers’ money from being wasted on trivial research.

He publicized extreme cases of such alleged waste with his Golden Fleece Award. Ronald Hutchinson received this embarrassing award in 1975 for his work on aggression in monkeys. Proxmire took credit for stopping Hutchinson’s federal funding, and Hutchinson sued Proxmire.

The Supreme Court ruled in favor of Hutchinson’s claim that he was not a public figure subject to Proxmire’s public ridicule and remanded the suit back to a lower court. Proxmire then settled out of court in 1980 for a public apology and $10,000.

Although Dr. Hutchinson won the battle, Senator Proxmire may have won the war. The chilling effect of public ridicule and the power of lawmakers to restrict public research funds surely have an impact on what researchers can explore with public support. No professional association’s ethical guidelines define what “significant” social research is.

Presumably, individual professionals have freedom to pursue their curiosity. Indeed, an honored intellectual tradition supports pure or basic research for its own sake regardless of its potential uses. On the other hand, the taxpayers’ representatives have an obligation to spend scarce public funds wisely.

The allocation of public funds to research must, therefore, take into account the welfare of the entire society and not just the curiosity of researchers.

But how should the conflict between the researcher’s interests and the social interest be resolved? Researchers often claim that it is shortsighted to limit support to applied research.

Since applied research depends on theory and pure research, it will wither in the future if the funders stifle the curiosity of the pure researchers now. On the other hand, we must seek the best possible uses of our scarce research funds and thus pay special attention to our funding priorities.

But if politicians guard us from trivial research, who guards against their awarding research grants in their own or other special interests? Awarding government research funds by peer review (juries of fellow scientists) gives no guarantee that funds will spread evenly across congressional districts.

Increasingly, lawmakers are pork-barreling research grants to provide jobs in home states and districts (Clifford, 1987). In one instance, a Massachusetts politician took credit for getting a $7.7 million research center for his home district.

This occurred over the advice of a technical review panel, which favored another bidder, whose proposal would have cost the taxpayers $3.2 million less (Coroes, 1984). Professional research associations lobby the government to increase the funds put into their fields of interest.

Universities and private companies also lobby Congress to give funds outside of the usual peer review process for special interest projects. The value of such “earmarked” projects by Congress increased 70-fold between 1980 and 1992 (Agnew, 1993).

III. Public Interest versus Private Interest

Science operates within and reflects society. Science, wittingly or not, often carries out the values of the society, or at least of that part of society from which scientists come (for perspectives in the sociology of science, see Barber & Hirsch, 1962). That scientists see reality through the blinders of society’s values causes concern for those who want science to understand society in order to improve it. Collecting data within the political and economic status quo will prove irrelevant, it is argued, for the goal of seeing beyond the status quo to a different future (Sarason, 1981).

Private Interests:

In a complex society, no single homogeneous value set exists from which everyone approaches political and social problems. Rather, we find many competing private interests, and the government may reflect those private interests that were on the winning side of the last election or coup.

Thus, research funded in one administration may be eliminated by the next administration. Just as the government can support research with its funds outside of peer review, it can cut off funding that has earned peer review approval.

This can happen when the research topic touches a political nerve. For example, in September 1992 the National Institutes of Health withdrew previously approved funds for a University of Maryland conference on ethical issues related to genetic studies of crime.

Opponents of the conference feared that this research would try to link violence to race. Proponents viewed the withdrawal of funds as bureaucratic cowardice in the face of pressure for political correctness (Touchette, 1992). The fear raised by this conference shows the distrust many hold that science might be used to harm an ethnic or racial group.

A more clear-cut type of private interest is the corporation that can sponsor research to serve its own ends. For example, a tobacco firm may fund research on public attitudes toward smoking prohibition (for example, to prevent laws that outlaw smoking in restaurants).

Naturally, the funder has a vested interest in finding techniques that would increase profits by blocking the adoption of such laws. What effect does this natural self-interest have on the researcher’s effort? To what extent should such privately employed researchers concern themselves with the “public interest”?

In recognition of this potential conflict, professional associations have provided guidelines regarding research sponsorship. For example, the Code of Ethics of the American Sociological Association calls for identification of all sources of research support or special relationships with sponsors (American Sociological Association, 1989). Similarly, the anthropologists’ Statement of Ethics warns that in “working for governmental agencies or private businesses, anthropologists should be especially careful not to promise or imply acceptance of conditions contrary to professional ethics or competing commitments.”

In recent years, universities have begun to add force to such requirements. Professors often must report their links with private industry and their funding from non-public sources. Such reporting permits an assessment of possible conflict of interest, and local boards can provide peer review and guidance in questionable cases.

Public Interest:

Although conflicts involving special interests may seem clear enough, identifying the public interest appears much more difficult. What if the public interest of a nation supported social research that was, arguably, unethical? A notorious example of this kind of research appeared in Project Camelot (Horowitz, 1973), a study that gauged the causes of revolutions in the Third World.

Supported by the U.S. military, it sought techniques for avoiding or coping with revolutions. Surveys and other methods were to be used in various developing countries after Camelot’s conception in 1963.

The uproar over Camelot ignited when a Chilean sociology professor challenged the military implications of the project. The Chilean press and the Chilean senate viewed Camelot as espionage.

This criticism resulted in U.S. congressional hearings and the termination of Camelot by the Defense Department in August 1965.

With respect to such research projects as Camelot, Beals (1969) asks “whether social science should be the handmaiden of government or strive for freedom and autonomy” (p. 16). Since the “true” public interest may be in the eye of the beholder, a resolution of this matter will not be simple.

Given the increasing importance of science in public policy and social welfare, scientists bear an increasing responsibility to consider the consequences of their work.

Just how to anticipate the long-run consequences, and how to weigh the potentially different effects on different segments of society (for example, poor versus rich within a society, or First World versus Third World nations), has not been worked out in any laws or professional codes.

Perhaps for that reason, the movement for “public interest science” will continue as an educational and consciousness-raising force until a consensus can be formulated.

IV. Protecting Research Integrity

In response to the problem of protecting human subjects, the government created preventive procedures such as IRBs for screening research proposals. However, no similar system exists for preventing other kinds of research misconduct, such as plagiarism or fraud. At best, individual whistle-blowers can raise charges when they spot such abuses.

Legal Remedies:

Of course, an author can seek legal redress in cases of plagiarism upon finding that someone has published his or her words without proper credit. Such actions can proceed under the copyright protection laws if the author or publisher has copyrighted the work. A more recent law permits legal action in cases of fraud that wastes public funds.

Under the False Claims Amendment Act of 1986, any individual can bring suit, on behalf of the United States, in a federal district court to recover funds fraudulently paid to contractors or grantees. The Department of Justice may or may not choose to join such suits. But the whistle-blower who brings such a suit may receive a reward up to 30 percent of triple the damages suffered by the government as a result of the researcher’s fraud (Charrow & Saks, 1992). Thus, if you believe that a researcher is making up data and misspending a government grant, you can take him or her to court and earn a reward for your trouble if you prevail.
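The reward arithmetic can be sketched as follows (the function name is mine, and using the roughly $163,000 reimbursed in the Breuning case as the damages figure is purely illustrative; actual awards are set by the court and may fall well below the 30 percent ceiling):

```python
def max_whistleblower_reward(damages, share=0.30):
    """Ceiling on the whistle-blower's reward under the 1986 Act:
    a share (up to 30 percent) of treble damages."""
    return 3 * damages * share

# Illustrative figure only: the grant money reimbursed in the Breuning case.
reward = max_whistleblower_reward(163_000)  # 3 * 163,000 * 0.30 = 146,700.0
```

So even a moderate-sized grant fraud can, in principle, yield a six-figure reward, which is the Act’s incentive for private whistle-blowing.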

Institutional Hearings:

Ideally, researchers will avoid misconduct if they know the rules of proper research. Government research agencies, under congressional pressure, have tried to spell out and enforce rules for research integrity. For example, the Public Health Service now has an Office of Research Integrity (ORI), which proposes guidelines for misconduct (for example, prohibiting plagiarism or making up data).

However, when charges of misconduct arise, we must have a means for judging them. The government now expects each research institution to establish its own office of scientific integrity to disseminate the rules of proper research and to judge infractions reported locally.

For example, the research integrity office of the University of Pittsburgh handled the Needleman case discussed in the beginning of this chapter. Thus, if you suspect that a researcher on your campus has committed misconduct, you can report it to your local office of research integrity.

There you should find established procedures for investigating and judging your complaint. These procedures should help not only the whistle-blower; they should also provide some protection for the accused against loss of reputation from false complaints.

Because these research integrity offices have appeared on many college campuses only in recent years, some are still working out their procedures. We can expect that some cases (for example, Needleman’s) will continue on from these local jurisdictions into the courts or to the federal Office of Research Integrity before final resolution.

Ultimately, if the federal Office of Research Integrity decides that a researcher has engaged in misconduct, it could withhold future funding as punishment.