Editorial

Scand J Work Environ Health 1995;21(4):241-243

https://doi.org/10.5271/sjweh.32 | Issue date: Aug 1995

Ethics in research

by Hernberg S

Much has been written on ethics in research, but the issue is so important that its significance cannot be overstated. Protecting the participants of surveys and trials from researchers' misconduct was historically the first aim of ethical codes for research, and many documents now provide guidelines for ethical standards in research on humans. Their history goes back to the Nuremberg Trials, which resulted in 1947 in the ten standards of the Nuremberg Code, which in turn gave rise to the Declarations of Helsinki (1, 2). Despite the extensive literature on ethics in research, there are still many issues for which the interpretation of what is ethical and what is not remains ambiguous. Even worse, economic and career interests may in some instances overrule ethical considerations. In this issue Tomatis (3) gives four examples of preventive research about which serious doubts can be raised as to ethical standards. Although the focus of the studies is not strictly in the realm of occupational health, the matters addressed are general and worth considering also for those active in occupational health research. This viewpoint is becoming even more pertinent because of the current trend to carry out intervention research for the study and prevention of nonspecific work-related diseases with a multiple etiology.

Another matter that has caused much concern recently is fraud and other misconduct in science. These problems have been aired both in the medical literature and in daily newspapers. Although the incidents reported are few compared with the enormous amount of honest research published every year, the way they are treated in the news media has cast serious doubt on the morals of scientists in general. And we must remember that even one case of misconduct is one too many. It is therefore the duty of every research institution and every single researcher to fight vigorously against anything that may fuel mistrust of the integrity of research. This means not only applying a high standard of ethics to one's own behavior but also being alert to possible misconduct by others.

Misconduct can be taken to include any violation of ethical rules, but in this context I will dwell on only a few aspects, all having to do with dishonesty or other types of misconduct towards the scientific community, colleagues, funding agencies, or customers, not with the violation of the rights of research subjects.

In a review published in 1992, Hamilton (4) reported that a survey of members of the American Association for the Advancement of Science revealed that 27% of the respondents knew of data fabrication, data falsification, or plagiarism. Greenberg & Martell (5) followed up this report with a more detailed survey among members of the Society for Risk Analysis, inquiring about unethical behavior in the workplace. The survey comprised a list of 24 misbehaviors and was sent to a random sample of 440 members, 142 (32%) of whom returned the questionnaire. Most of the respondents (79%) considered their disciplines to be epidemiology, exposure assessment, industrial hygiene, and toxicology. More than 30% of the respondents had observed deliberate overstating of positive and understating of negative results, release of results before peer review, failure to share credit on a publication, deliberate failure to acknowledge data limitations, and design of research to favor a specific result. Other common types of misconduct were refusal to share data in order to avoid competition, avoidance of work on subjects that threaten support, suppression of findings to avoid negative results, borrowing from another's proposal, and plagiarism. Items such as deliberate failure to control data quality, changing definitions after the fact, deliberately delaying peer review of another's paper or proposal, fabrication of data, and destruction of data contradicting the thesis were less frequent (6-14%), but they did occur. Greenberg & Martell pointed out that, while some forms of behavior, such as plagiarism and the fabrication and falsification of data, are clearly unethical, others, such as the release of results before peer review, are considered unethical by some researchers but not by others. This variation shows how relative our concepts of ethics can be in the broad gray area between the definitely "good" and the definitely "bad."

I would like to add a few more items to Greenberg & Martell's list. First of all, poor science is always unethical. It means the misuse of the subjects' time, of the research grant, of the institution's resources, and of the input of colleagues and technicians. Misleading the readers of a research report, whether deliberately or unintentionally, is not ethical either. Therefore, striving for perfection in research is not only a scientific but also an ethical goal. An institutionalized quality assurance system for research helps in this process; indeed, it is warranted for ethical reasons alone. Such a system includes peer reviews, ethical reviews, audits during the course of the project, the obligation of seniors to supervise and educate juniors, training of all staff involved, proper archiving of data, and speedy publication in journals that are easily available to colleagues (6). There are, of course, many more aspects to quality assurance in research, but they do not belong in this context.

Another type of unethical conduct is violating the scientific, and sometimes even the personal, integrity of colleagues in the context of hearings and court proceedings. There have been several such instances in recent years, one of the worst probably being the court trial in Australia in which veterans exposed to "Agent Orange" tried to claim benefits for what they considered exposure-related injuries. In that process the manufacturer had hired well-known epidemiologists who, together with lawyers, challenged in a remarkably unethical way the integrity of a researcher with a high reputation. It is not only such behavior itself that is unethical; the acceptance of substantial monetary compensation for it is also open to criticism. Even worse is what recently happened in Finland. A faculty professor of medicine stated under oath, in a court trial in which a tobacco company was accused of having caused laryngeal cancer in a smoker, that a causal connection between smoking and respiratory cancer had not been scientifically proved. He received a very substantial fee for this testimony but is now himself on trial for having sworn a false oath; the outcome of that trial is not yet clear. Such behavior represents flagrant misconduct; it could, in fact, be labeled scientific prostitution.

However, there are a number of less flagrant types of misconduct (apart from those surveyed by Greenberg & Martell), many of them of a borderline nature. In my view one of them is the (nowadays less and less common) habit of institute or department directors of insisting on having their name included in all articles from their unit. Conversely, a junior may also insist on listing the boss as one of the authors, hoping that the "heavy" name will help the manuscript gain acceptance by journals. Another is using other staff members' (usually juniors') work for presenting one's "own" papers at international meetings. This practice was earlier typical of directors of research institutes in the former socialist countries. For example, the then director of the DDR national institute of occupational medicine submitted 28 abstracts to the Congress of the International Commission on Occupational Health in Sydney in 1987. One was accepted. A third example is the use of research grants for purposes other than those agreed on in the contract. For example, a researcher who has been hired for a certain purpose on external grants may be ordered by superiors to take part in lecturing or other activities, or his or her input may be used for research that has nothing to do with the grant. Some well-known "research entrepreneurs" are known for allocating all their research grants to a "pool" of external funds, and no one can really judge which monies are used for which research. The overall outcome may be impressive, but one may ask whether this conduct really corresponds to what was agreed upon in the signed research contract. "Passive" misconduct could include assigning junior researchers tasks that are too difficult and leaving them without guidance, selecting "high-risk" topics for postgraduate students' doctoral theses, neglecting the duty of surveillance and guidance of project staff, and failing to adhere to the agreements, schedules, and other stipulations included in the study protocol. The list could be made much longer, but I prefer to leave something to the discussion that I hope will emerge from these thoughts.

Greenberg & Martell (5) do not believe that science's "self-correcting mechanism" can discover and prevent fraudulent research and other unethical forms of behavior. They point out that the consequences of unethical science can affect many sectors: bad policy decisions can be made, other researchers can waste time and money following false leads, and government officials can force regulatory straitjackets on science because of unethical research. They propose numerous remedies for misconduct, such as a code of ethics, more attention from journal editors, advisory boards composed of prominent scientists, and greater awareness of the problem, leading researchers themselves to embrace ethical viewpoints more fully. However, it may be easier to control grave forms of misconduct, such as fraud and plagiarism, than the many milder forms that are often of a borderline nature. The latter can probably be fought only by creating an open and honest climate within the research institution itself, not by any outside inspection. Misconduct must be considered so unacceptable that anyone who succumbs to it loses the respect of colleagues. Seminars and staff meetings devoted to this topic will certainly foster such a climate. The director of the institution and other seniors have a serious responsibility in this respect, both to provide good examples of ethical conduct and thinking through their own actions and to initiate meetings and discussions among the staff. Internal supervision is also important, of course, preferably in the form of counseling rather than punishment, even if the latter should not be ruled out in severe cases. Ethical committees, usually preoccupied with reviewing study plans, should extend their surveillance to the conduct of research as well, and directors of research institutions should increase their alertness. I agree with Greenberg & Martell (5) that scientific journals, too, should be more alert to ethical matters, and I do hope that both this editorial and Tomatis' review will result in the expression of more opinions in this journal. Such contributions are invited.