DIRTY INFORMATION AND CLEAN CONSCIENCE: COMMUNICATION PROBLEMS IN STUDYING "BAD GUYS"*

Jim Thomas, Department of Sociology, Northern Illinois University, DeKalb, IL 60115
James Marquart, Department of Sociology, Mississippi State University, Mississippi State, MS 39762

Published in Communication and Social Structure, C. Couch and D. Maines, eds. Springfield: Charles Thomas Publisher, 1987, pp. 81-96.

*We are indebted to Leo Carroll, Peter Hall, Monica Hardesty, Ruth Horowitz, David Maines, Peter K. Manning, Gary Marx, Mari Molseed and John Van Maanen for their suggestions.

Communication--an exchange of symbols for the purpose of conveying meaning--can be a tense affair. On the one hand, communicative activity aims to reveal. On the other, symbols can be manipulated to provide multiple meanings, distorted meanings, or false meanings. Communication then becomes an act of concealment. This creates problems for those who depend upon communication as a research instrument. Ethnographers aim to discover--perhaps uncover is a better word--the manner in which participants in a culture construct, interpret and maintain their culture. However, there are many topics that people engaged in some activities may prefer to leave covered, and they then become less than fully disclosing. This may require creative discovery tactics. Ironically, once we have uncovered secrets, it may be necessary to conceal them again, as well as the means by which they were obtained, because of their discrediting potential. Such information is especially likely to arise when studying the social world of "bad guys," people who tend to have more knowledge of, or themselves tend to commit, serious social transgressions.
We call this "dirty information": once it is acquired, the researcher must make difficult decisions about how to use it. Although research methodology is normally subjected to ethical critique, researchers cannot always act ethically. We argue that ethical ambivalence and contradiction are built into the structure of the research situation, thus reducing the utility of general professional and abstract ethical principles as behavioral guidelines (cf. Maines and Kappas, 1978). Although there has been relatively little disagreement about the validity of general ethical standards (Gray, 1975, p. 7), we agree with Swidler (1986) that the broad cultural values embedded in ethical systems and professional codes serve as little more than a "tool kit" that facilitates the construction of appropriate strategies for action. We offer an alternative way of judging choices for obtaining and managing dirty information--not so much to guide those ethnographers who continually confront such problems and usually make such choices routinely--but to sensitize researchers who may not ordinarily be aware of these difficulties when passing judgment on their colleagues.

THE DILEMMAS OF DIRTY INFORMATION

Dirty information refers to data which, if revealed, could have serious repercussions for someone. That "someone" could be the researcher, the individuals or groups being researched, a research-sponsoring agency, or even persons uninvolved in the research. The repercussions include professional discreditation, social stigma, lawsuits, criminal charges, or even the death of informants. We prefer the term "dirty information" to "destructive information" (Goffman, 1971, p. 141), "guilty knowledge" (Fetterman, 1984; Marx, 1984) or "dirty data" (Van Maanen, 1982). These terms connote information illicitly obtained, for which the researcher is ethically accountable. However, not all illicit information is unethical.
In addition, the term "dirty data" is too closely associated with the problem of inaccurate or "bugged" data in quantitative research, and we prefer to disassociate the concept from problems of data coding. We have drawn explicitly from problematic information obtained in studying prisoners, but this type of information can occur among numerous other groups as well. When our scientific inquiries uncover improper kickbacks in formal organizations, child molestation between a coach and little league players, or drug transactions between teachers and students, the information becomes part of the larger analytic mosaic and poses dilemmas for researchers. Such information cannot be anticipated simply on the grounds of probability, because it is impossible to anticipate its content or the manner of revelation. The valences of the issues we raise may vary between substantive areas of research, but the nature of the problems of managing such information is constant. Hence, our discussion is primarily conceptual and is generalizable to all settings.

Dirty information raises sticky questions: How should researchers ferret out concealed information? How do researchers fulfill their obligations to protect privacy, to protect informants from research-generated repercussions, to protect sponsors (universities, grant agencies, "professional others") from embarrassment, or to keep themselves out of jail? Such questions are conventionally addressed by adducing "professional ethics" by which both research performance and scientific data are normatively evaluated. As Appell (1980) has argued in a related context, however, methodological discussions of ethics tend to provide a rhetorical strategy by which to conceal our failings rather than spur us to confront our problems. It is not always ethical behavior that the profession seeks, but rather its appearance, a cynical exercise at best, and a hypocritical one at worst. As Punch (1986, p.
37) has suggested, professional ethics, such as the ASA professional code (American Sociologist, 1968), are helpful in alerting researchers to the ethical dimensions of their work, but are not always helpful in resolving dilemmas faced in the field:

Fieldwork takes us into a potentially vast range of social settings which can lead to unpredictable consequences for researcher and researched. The ethical factors associated with the control and regulation of social scientific research are accentuated in participant observation because the fieldworker often has to be interactionally "deceitful" in order to survive and succeed. Ethical codes fail to solve the situational ethics of the field and threaten to restrict considerably a great deal of research (Punch, 1986, p. 71).

Following Punch's position, we suggest that researchers often must lie, deceive or engage in unethical behavior in order to act honorably. However, rather than evaluate judgments of such practices against lofty ethical standards or commitment to "professional ethics," we should evaluate normally "unethical" behaviors as situated activity grounded within socially contingent meanings. We suggest that rather than always attempt to follow an ethical line of behavior, researchers should recognize instead the option of choosing an honorable line of behavior. As we shall argue, the two are not identical.

One difference between ethics and honor is typified by two questions. The first, guided by ethics, is organizationally pragmatic: "Will I get into trouble?" The second, guided by honor, is self-reflective: "Can I live with myself?" Ethics, at least as practiced by social scientists, tend to be embedded in organizational norms or transcendent rules or principles. Violation of an ethical principle leads to guilt, which is based on "internalization of values" (Lynd, 1958, p. 21) derived from such fundamental cultural principles as professional standards or religious principles.
Honor, by contrast, is a discrete, socially constructed code that both prescribes and proscribes behavior as it "ought" to be conducted. Although there may be better labels, we find this term convenient to identify an alternative to ethical standards when attempting to resolve prickly problems in fieldwork. As Horowitz (1983, p. 21) has argued in a related context, honor is a code that symbolizes cultural relationships and corresponding "proper" behavior. It is quite possible to act ethically, thus avoiding the stigma of a guilty act, yet be left with the feeling that "I screwed up anyway!" This type of experience may be labelled shame, or external disapproval based on violating the expectations of a particular group in a specific context (Lynd, 1958, p. 21). Honor, of course, may be, and usually is, guided by ethical principles, but it is mediated by the cultural requisites of a given group. But where the principles of ethics are transcendent, the principles of honor are situationally contingent and group-specific. As a consequence, honorable behavior is less amenable to explicit guidelines and poses greater problems for discretionary judgment.

To paraphrase Goffman (1971, p. 142), the basic problem for the research performance is that of information control: an audience must not acquire destructive information about the situation that is being defined for them. But what happens when there are multiple audiences, multiple interpretations of the meaning of information, multiple purposes in our behavior, and--most sticky of all--multiple definitions of honor? The research enterprise is situated within a complex ecological matrix with permeable boundaries, occasionally contradictory expectations or demands, and differing systems of evaluative criteria for a performance. How we conceptualize and engage in a given project often is guided by an ambiguous configuration of goals and responsibilities either inferred from or directly demanded by the competing audiences.
The canons of science, for example, may require sins of commission, such as revelation and display, that are anathema to the preferences of informants or colleagues. Conversely, the protection of informants may require sins of commission or omission that violate professional norms or even law. Because different audiences may have different codes of honor, the honorable management of dirty information is not always easy[1]:

Strategies of action are cultural products; the symbolic experiences, mythic lore, and ritual practices of a group or society create moods and motivations, ways of organizing experience and evaluating reality, modes of regulating conduct, and ways of forming social bonds, which provide resources for constructing strategies of action. When we notice cultural differences we recognize that people do not all go about their business in the same ways (Swidler, 1986, p. 284).

Because researchers' audiences go about their business in different ways, what counts as honorable behavior may vary dramatically. Definitions of honor may be closely associated with a patron, turfmaster, paymaster, gatekeeper or others upon whom the researcher depends. Or it may be associated with "doing right by our informants" and others with whom the researcher identifies. Hence, the researcher may be forced to manage two conflicting types of information: one that may damage "common folk," and another that will threaten power brokers. As a consequence, an honorable resolution is compounded by the competing and perhaps incompatible needs of, or our commitment to, different audiences.

Strategies used to avoid the stigma of unethical behavior in the research performance can take several forms. First, as one conference critic suggested, when confronted with difficult data, researchers could stop probing and move on to a less threatening topic[2]. Second, the researcher can attempt to appease a primary audience and ignore the others.
This requires choosing between, and committing one's actions to, competing codes of honor. Third, the researcher may recode behavior or information in a way that presents honorable motives by developing a vocabulary to justify choices or redefine behaviors from a different perspective. For example, one is not "betraying a confidence" when reporting unlawful behavior to the authorities; one is instead "upholding the law."

Dirty information may arise in unanticipated ways, which further complicates honorable responses. It may be acquired innocently. Information that originally seemed quite innocuous, even inane, may create a legal brouhaha, as might occur when employee accounts of job dissatisfaction are subpoenaed for a felony trial (Brajuha and Hollowell, 1986; Hollowell, 1985). Dirty information may also arise accidentally. A casual stroll down a street with street gang members may elicit unforeseen data should a companion blow away a rival invading the turf. A routine conversation between a prisoner and researcher may be disrupted by a guard's assault on an inmate. An informant may inadvertently confess to a serious felony, information about corruption may be revealed, or one group of informants may disclose secrets of a rival group, either maliciously or unwittingly[3]. When this occurs, do we report it to the proper authority? Do we ignore it? Whatever the choice, it is a conscious one, and cannot be avoided.

Sometimes researchers may be "tested" by being given false data. The purpose is to see how the researcher will react, whether he or she can be trusted, and "whose side" the researcher is on. Typical of this is Van Maanen's (1978, p. 342) example of apparent police on-duty drinking, or the situation in which a prison inmate might reveal dramatic "secrets" to a researcher to test the reaction and consequences. When does protection become complicity? When does it begin to compromise fieldwork?
How far should we go in aiding or abetting our informants, a question raised by critics of Patrick's (1973) study of Glasgow gangs? How we respond in a field setting often is shaped by the attitudes and perspectives we acquire from those with whom we side. Especially when faced with a situation that requires immediate response, we rarely reflect on ethical or practical principles. We react just as we have been "taught" by our informants, and our transition to their perspective may be incremental and scarcely perceptible. Some might even say we become corrupt:

I was party to much discrediting information regarding the legality and propriety of police actions in Union City. On several occasions I was present when illegal acts took place and, although not directing the line of action, I was as culpable legally as any witness to such actions would be. During and after such incidents, it was clear that I had moral choices to make regarding my conduct. I made these choices as would most policemen. I kept my mouth shut. And, after a short time, I too was vulnerable to legal sanction, for I had not reported what I heard or saw (Van Maanen, 1978, p. 341).

SPECIFIC DILEMMAS: THREE SCENARIOS

Three scenarios drawn from prison research illustrate several problems in managing dirty information and display as well the difference between "ethical" and "honorable" resolutions. Our scenarios are intended as hypothetical and heuristic, and should not necessarily be construed as actual experiences of the authors or their colleagues. Their epistemological status is intentionally vague, but they are neither arbitrary nor unrealistic. They are chosen to illustrate concrete situations that can spontaneously arise and that require immediate decisions affecting subsequent research or reputation.
Although these examples are more likely to occur in unconventional research settings, they are generalizable to any setting in which the researcher is confronted with unanticipated events or information that require managing.

Serendipitous Information. Sometimes researchers inadvertently obtain more information than intended. Especially in so-called "deviant" settings, information may be revealed that places the recipient in a quandary. How far should the researcher go in pursuing this information? How should it be verified? Is it necessary to actually smoke marijuana to determine whether the accounts of drug availability in prison are factual? Colleagues typically respond that it is best to avoid learning such information. But why is remaining in ignorance necessarily an honorable course of action? What should the researcher do if he or she is confronted with a fight, or worse, with information that a "hit" has just been placed on a prisoner[4]? Consider these possible fieldnotes:

I was visiting Scooter today, and he asked me if I wanted to "do some" [drugs]. He said we had to wait until the evening shift came on duty, and he would show me how they smoked. He claimed to be high all day, but getting together with his "organization" [gang] was the best part, and I could see the "horse" [guard] who brought it in. He showed me his "stash" and a substantial sum of currency, prohibited in prison, and offered to take me through his rounds. So we smoked and drank, in full view of staff, while observing certain rituals to at least simulate covertness.

To decline the invitation to participate might jeopardize further information from informants. It would also be a lost opportunity to pursue and verify questions about "doing time." On the other hand, the risks of discovery would be catastrophic to the research, and could entail felony charges. Several questions arise: What, if anything, would tapping this information contribute to an understanding of the prison experience that is new?
How can the topic of "time" be adequately done in the absence of such basic data, and at what point do the risks begin to outweigh the benefits? The decision to either pursue or ignore information shapes the nature of the data available to examine, the status of the researcher among group members and the analytic power of our subsequent discourse. Should we ignore data that might otherwise be crucial to the study, or should we expose ourselves to risks? An "ethical" response in this scenario would include at least considering reporting guards' (if not inmates') felonious behavior to the proper authorities. If this is not viable, then surely an "ethical" researcher might decline the invitation to participate, consider withdrawing from the setting, or at least begin asking less revealing questions. Reporting infractions, however, does not seem honorable given the implied trust required to obtain the information, and the canons of sciences are hardly served by avoiding messy situations. In our view, the only honorable course of action, given commitments both to "science" and to our informants, is to keep quiet, continue developing data, and constantly assess whether the data obtained are worth the risk and unobtainable from "authorized" sources. This may violate strictures against unethical behavior, but it preserves the research goals and commitments to those who provide us with information. Researcher Culpability. On occasion, researchers themselves may find they are direct accomplices to, or even initiators of, behaviors that are not simply compromising, but blatantly illegal. Especially in full immersion studies where researchers act in the same capacity as their subjects, behaviors appropriate to the adopted role may require unconventional responses[5]. The experienced observer eventually - 7 - learns how to obtain data--by fair means or foul--that may reveal flagrant violations of both policy and law. 
But if organizational rules require behaviors that we find personally or professionally questionable, should we refuse to obey, even if it means exclusion from the field? In a prison setting, the following might occur:

I (Hall officer) had a hard time in the North Dining Hall with an inmate who budged in line to eat with his friend. Man, we had a huge argument right there in the food line after I told him to get to the back of the line. I finally got him out (of the Dining Hall) and put him on the wall. I told my supervisor about the guy right away. The inmate then yelled "yeh, you can go ahead and lock me up (solitary) or beat me if that's how you get your kicks." Me and the supervisor brought the guy into the Major's office. Once in the office, this idiot (inmate) threw his chewing gum in a garbage can and tried to look tough. One officer jumped up and slapped him across the face and I tackled him. A third officer joined us and we punched and kicked the shit out of him. I picked him up and pulled his head back by the hair while one officer pulled out his knife and said "you know, I ought to go ahead and cut your lousy head off."

Revealing such information has obvious discrediting potential for participating prison staff. Should the researcher report such blatant abuses to responsible authorities? Should such information be revealed when writing up findings? If so, how does one conceal the identity of the informant? The use of the first-person "I" in fieldnotes suggests that the researcher was also a participant, and since there were only a few staff participants, his or her identity could easily be deduced. If the researcher was one of the attackers, should this be revealed? The incident nicely displays how formal rules of prisoner control may be contravened by informal control techniques.
Multiple audiences, however, make an honorable discursive strategy for revealing these techniques rather difficult, because satisfying the "ethics" of the profession by avoiding socially improper behavior may contradict the norms and values of the group being researched. In this incident, a researcher bound to conventional ethics or norms might severely criticize a colleague who participated in violence. Such criticism, however, neglects the situated codes which the involved researcher must interpret and use to select appropriate behaviors. If, in fact, the researcher was involved in the beating, should his or her behavior be deemed unethical? Perhaps, but this seems to beg the question of "honor" owed to co-workers, of the antagonism and provocation of the inmate, and of the prison culture and its control mechanisms from which such behavior stems.

Professional vs. Personal Roles. Managing dirty information also may be complicated by researchers' discrepant roles, each reflecting the norms, expectations, culture and sanctions of a variety of social communities. Researchers often possess the same human frailties as their subjects. This occasionally impels researchers to act as "people" first, shunting aside the "professional role" and acting in a way appropriate to an emergent incident, rather than remaining dispassionate and detached. Among volatile populations, the probability of intense involvement may be high, as the following example suggests:

I was in this classroom, and Johnson was looking at a paper Highpockets left on his desk while he got a drink. Highpockets came back in, and said "I'd appreciate it if you didn't look at my stuff." Johnson said "fuck you," and Highpockets smacked him, and the fun began. Johnson is hooked up with the Green Hornets, and Highpockets was a Prince, and both gangs jumped in. First I thought they were staging it to test me, but then Buzzy smashed Dingy's nose with a chair.
I thought the guards would be in to break it up, but they never came, and the inmates later said "the police [guards] never come 'til it's over." I was afraid to jump in, but was afraid not to, so I grabbed Buzzy, my size, who was getting the shit kicked out of him by a couple of Hornets, and made like I was going to smack him for using a chair. I certainly wasn't going to pick on Charlie, who's 6-6 and about 240. I thought that by jumping Buzzy it would get him away from the Hornets' punishment and maybe shift the mood. I accidentally knocked Charlie over a chair and had Buzzy pinned against the wall, screaming at him with my fist cocked. Everybody was surprised; they stopped to see if I would hit him, so I laughed, let him go, and called him a "crazy motherfucker." Everybody laughed, and Charlie said "Man, you're crazier than us." The guards later called me in and asked what happened, and I lied and said I wasn't in the room. They walked Dingy [to segregation] because he looked the worst, and Buzzy, because he obviously had been fighting. The guards told me to be careful around "them killers," and they would keep closer watch in the future. The Princes never asked why I picked on Buzzy, but one said afterwards "Nice going. I would have done the same thing." Buzzy thanked me later, neither gang held it against me, my status seemed to increase considerably, and inmates talked more freely.

How should volatile situations be handled? When should one act like a "researcher," and when does a "real-life" response become more appropriate? Can the research persona even be so neatly defined, let alone isolated from our perceived roles? What are the risks if one is perceived to "choose sides"? How far should the researcher go to protect personal status or informant safety? How violence erupts and dissipates is crucial to understanding prison tension, inmate hierarchy and relations between gangs and between prisoners and staff.
But how are these dimensions tapped for data? Although some might criticize researcher violence as highly unethical, such a response can serve the dual purpose of halting violence and preserving the research agenda.

DISCUSSION

Choosing between competing definitions of honor does not imply "personal ethics." It does, however, require that researchers be sensitive to the multiple honor codes of their audiences, and aware that these codes may not be compatible between audiences. Ethics and honor converge on one fundamental principle: we nearly always must protect those who give us information to assure they are not inadvertently harmed. This means we respect privacy, not by not prying, but by not revealing discrediting data.

One common way of honorably respecting privacy is to obtain the consent of those we study[6]. Consent presupposes permission to discuss what we see, and in a sense absolves the researcher from "guilt" in reporting dirty information by accepting the research commitment to the canons of scientific revelation. But as Wax (1980) has made clear, the concept of consent is vague. "Prior consent" or "informed consent" may be meaningless in many settings, and the courtesy of assuring that all participants in the setting be aware of one's research identity may be inappropriate. Consider, for example, scenario three, above. In a perfectly ethical world, the researcher should first announce to the combatants that he or she is taking notes on the fight, and that smacking an inmate is nothing personal, just a communicative technique to re-establish information flow and unblock feedback loops. But we do not work in a perfect world and we are not perfect creatures. Dogmatic adherence to ethical principles, in fact, does not necessarily yield honorable solutions[7]. In addition to the conventional problems of privacy noted by Hilbert (1980), Wax (1980), Thorne (1980) and others, there are other considerations.
First, when are researchers researching, and when are they simply private citizens? Ethnography is as much a world view and way of life as it is a research tradition, and one does not turn off the senses or stop inquiring as simply as shutting off the lights[8]. Ethnographers themselves may not know when they are on-stage and when they are off. The amorphousness of ethnographic research requires, as Van Maanen (1979, p. 539) has observed, shifting, even drifting, from the original questions. We find that our questions, our data sources, or even our very research project may have been originally misconceived as new data reshape our theories and concepts. What are we to do when the original focus of "doing time" shifts to drugs and the illicit prison economy? We contend that it is simply not possible to fully disclose intentions in some circumstances. To announce to a violent gang that one wishes to study how they run their drugs and collect their debts is not only a poor research strategy, but also foolhardy. Sometimes one must skirt issues, not only because the water has not yet been tested, but because the waters are rather murky.

Second, there are many ways in which data can be hidden, and we often do not know how we will search. The most common problem probably occurs when the ethnographer does not know what questions to ask. It may not be difficult to study the academic problems of prisoners in a college class. But what happens when the source of these problems shifts to gang rape or contraband smuggling in prison? The consent, originally given, may not have been intended to authorize new inquiry into covert or illicit behaviors. Should the researcher then formally renegotiate consent and risk denial (and thus end the project)? Or should investigation continue?

Third, from whom should research consent be obtained?
In most prison systems, consent must first come from the state department of corrections, then from the warden of the institution, and finally, from the specific group under observation. Rarely are the first two granted. Should lack of formal consent preclude entering the field if a door fortuitously opens?

Finally, the protection issue may also create personal dilemmas for researchers. It may be necessary to choose between silence and disclosure long after one has left the field. Post-research litigation is one arena into which researchers are often pushed. The Mario Brajuha case (Brajuha and Hollowell, 1986; Hollowell, 1985) illustrates one dramatic example of the thorny problem of protecting research confidentiality. But there are others. In two less dramatic but no less traumatic situations, two researchers were independently faced with a similar problem. One was studying prison conditions, and the other was studying prison guards. Both researchers had obtained considerable dirty information, and both were eventually asked to testify in civil rights suits against prison administrators. The first researcher chose to testify against the brutality he had witnessed, but not without painful self-reflection:

This raised a serious question for me. When a researcher extends a promise of confidentiality to a research subject, there is no doubt that she or he is bound by that promise with respect to that person's private life. But is the promise equally binding with respect to the official behavior of a public official, or may the need for accountability to the public take precedence over a research promise of confidentiality? . . . The implication of this decision was that if I refused to testify I could be held in contempt of court and sent to the [jail], a most disagreeable sanction, since it was the prisoners and not the state who wanted my testimony (Carroll, 1980, p. 45).
The second researcher, for equally cogent reasons, chose not to testify despite ethical and legal obligations to do so. In this case, several conceptions of honor mediated the choice. To reveal information provided by those who trusted his ability to preserve secrets seemed a dishonor that outweighed the possible benefits of revelation in court. Further, to reveal entrusted information would not only jeopardize the researcher's ability to obtain information at a future date, but would also thwart subsequent researchers who entered that prison system. Neither response was necessarily "more correct" than the other. Both responses were intended to "do the right thing," but both were guided by the expectations and standards of different audiences. The former researcher avoided guilt, the latter shame.

CONCLUSION

The relationship between proper behavior and managing dirty information is paradoxical. On the one hand, deceiving an audience or engaging in research-related behavior considered professionally unethical may be the only honorable choice. On the other hand, protecting one audience may damage another. To avoid guilt, one may act shamefully; to avoid shame, one may incur guilt. By definition, paradoxes have no ready resolution. We suggest that the solutions required to manage dirty information be recognized as a paradox and judged by others as such. Rather than criticize research in which the researcher lies, deceives, or otherwise acts in ways considered anathema to immutable standards, it makes more sense to acknowledge that deception of some audience may be a requirement of research. While professional codes and rule-deontological ethics provide crucial cognitive maps when attempting to balance our social performances, they often provide contradictory precepts. These, in turn, generate a vocabulary of motive by which researchers offer caveats or rationales of honor in obligatory methods sections.
This conceals the fundamental ethical choices faced, or distorts the problematic nature of managing information in order to appease the audience of professionals who will eventually pass judgment on the enterprise. This creates a form of academic self-censorship that is anathema both to science and to honorable activity. Hence, the problem of managing dirty information is not solely a concern for the researcher. It becomes one for audiences to confront as well, and the operative question should not be "Does the behavior violate the ASA ethical code?" but instead "Did the researcher, in this given situation, act honorably?"

FOOTNOTES

[1] There may be considerable diversity in perspectives even among professional groups involved in similar research, as Burchard (1958) discovered in examining professionals' attitudes toward covert listening devices in courtroom research. See also Becker's (1967) critique of the revelation and application of research findings as means of maintaining power relations.

[2] An earlier version of this paper was presented at the Annual Symposium of the Society for the Study of Symbolic Interaction (Iowa City, 1986), at which one participant suggested withdrawal from the field as the most reasonable method of avoiding dirty information.

[3] One example of acquiring dirty information occurred when a prisoner was asked why another prisoner seemed to behave as a personal valet. This elicited a detailed account of how the first prisoner ran an extensive "loan-sharking" operation that provided power and privilege in the cellhouse. This inmate later faced related felony charges and several employees were released; had it been known that this information existed, the data could have been subpoenaed.
[4] The following provides an example of potentially life-threatening information:

    Tallman was just put in isolation; he was busted for some heavy offense, and security staff said they would reduce the charges if he cooperated [gave information]. He apparently talked, for others were soon busted, and some staff were fired for complicity. Kareem took me aside and said there was a hit just put out on Tallman, and told me the details, how it would happen in seg, and other information. He said "I just thought someone ought to know."

According to the researcher, experienced both in street life and prison culture, the informant was exceptionally reliable. If so, what should the researcher do? Is the information a "test" to see if the researcher can keep a confidence? Is it a subtle way of preserving another's life? Is it intended as little more than insight into prison existence? That researcher was faced with a choice, and for reasons which we find convincing, chose--as a matter of honor--to preserve the information and not repeat it.

[5] For an example of such responses both when conducting research and subsequent to leaving the field, see especially Maines, Shaffir and Turowetz, 1980.

[6] However, research shows that obtaining consent itself may at times be manipulative and deceptive, serving the scientist's interest in generating data (see Gray, 1975).

[7] There is occasionally an absurdity in clinging to absolute ethical principles when facing a problem. One colleague, teaching a small number of students in a large auditorium, was troubled by the disruptive behavior of students who sat in the last row and read or whispered. When asked why he didn't politely request that the class first fill the front rows, he responded that he could not; it would be too authoritarian and would violate his egalitarian principles.
[8] Maines (personal communication, 1986) has reminded us that, in recognition of this, the IRS will allow such expenses as newspapers, magazines, or cable television as tax deductions for professors who can demonstrate their research utility. Some researchers may not distinguish between private and scientific selves. Erving Goffman, for example, collected data continuously, thus maintaining a permanent "on stage" research identity.

BIBLIOGRAPHY

American Sociologist. 1968. "Toward a Code of Ethics." 3:316-318.

Appell, G.N. 1980. "Talking Ethics: The Uses of Moral Rhetoric and the Function of Ethical Principles." Social Problems, 27:350-357.

Becker, Howard. 1967. "Whose Side Are We On?" Social Problems, 14:239-247.

Brajuha, Mario and Lyle Hallowell. 1986. "Legal Intrusion and the Politics of Fieldwork: The Impact of the Brajuha Case." Urban Life, 14:454-478.

Burchard, Waldo W. 1958. "Lawyers, Political Scientists, Sociologists--and Concealed Microphones." American Sociological Review, 23:686-691.

Carroll, Leo. 1980. "The Ethics of Fieldwork: A Note from Prison Research." Pp. 41-49 in J.R. Mancin and F.A.M. Robbins (eds.), Encountering Society. Lanham, Md.: University Press of America.

Fetterman, David M. 1984. "Guilty Knowledge, Dirty Hands, and Other Ethical Dilemmas: The Hazards of Contract Research." Pp. 211-236 in D.M. Fetterman (ed.), Ethnography in Educational Evaluation. Beverly Hills: SAGE.

Goffman, Erving. 1971. The Presentation of Self in Everyday Life. Harmondsworth (Eng.): Pelican Books.

Gray, Bradford H. 1975. Human Subjects in Medical Experimentation: A Sociological Study in the Conduct and Regulation of Clinical Research. New York: Wiley.

Hallowell, Lyle. 1985. "The Outcome of the Brajuha Case: Legal Implications for Sociologists." ASA Footnotes, 13:1,13.

Hilbert, Richard A. 1980. "Covert Participant Observation: On Its Nature and Practice." Urban Life, 9:51-78.

Horowitz, Ruth. 1983.
Honor and the American Dream: Culture and Identity in a Chicano Community. New Brunswick: Rutgers University Press.

Lynd, Helen. 1958. On Shame and the Search for Identity. New York: Harcourt, Brace.

Maines, David R., and Attallah Kappas. 1978. "A Social Organizational Approach to Problems of Ethics in Clinical Research." Perspectives in Biology and Medicine, 21:606-616.

Maines, David R., William Shaffir and Allan Turowetz. 1980. "Leaving the Field in Ethnographic Research: Reflections on the Entrance-Exit Hypothesis." Pp. 261-281 in W. Shaffir, R. Stebbins and A. Turowetz (eds.), Fieldwork Experience: Qualitative Approaches to Social Research. New York: St. Martin's Press.

Manning, P.K. and J. Van Maanen (eds.). Policing: A View from the Street. New York: Random House.

Marx, Gary T. 1984. "Notes on the Discovery, Collection, and Assessment of Hidden and Dirty Data." In J. Schneider and J. Kitsuse (eds.), Studies in the Sociology of Social Problems. Norwood, N.J.: Ablex.

Patrick, James. 1973. A Glasgow Gang Observed. London: Eyre Methuen.

Punch, Maurice. 1986. The Politics and Ethics of Fieldwork. Beverly Hills: SAGE.

Swidler, Ann. 1986. "Culture in Action: Symbols and Strategies." American Sociological Review, 51:273-286.

Thorne, Barrie. 1980. "'You Still Takin' Notes?' Fieldwork and Problems of Informed Consent." Social Problems, 27:284-297.

Van Maanen, John. 1979. "The Fact of Fiction in Organizational Ethnography." Administrative Science Quarterly, 24:539-550.

Van Maanen, John. 1982. "On the Ethics of Fieldwork." Pp. 227-251 in R. Smith (ed.), An Introduction to Social Research: A Handbook of Social Science Methods, Vol. I. Cambridge (Mass.): Ballinger.

Wax, Murray. 1980. "Paradoxes of 'Consent' to the Practice of Fieldwork." Social Problems, 27:272-283.
