Hazards of Design:
Ethnomethodology and the Ritual Order of Computing

Philip E. Agre
Department of Information Studies
University of California, Los Angeles
Los Angeles, California 90095-1520
USA

pagre@ucla.edu
http://polaris.gseis.ucla.edu/pagre/

Submitted to Mind, Culture, and Activity.

This is a draft. Please do not quote from it.
Version of 31 July 1999.
9100 words.


1 Introduction

In the climactic scene of Robert Longo's film adaptation of William Gibson's short story "Johnny Mnemonic" (1986), the hero of the tale, played by Keanu Reeves, jacks into the computer system of an evil corporation. There he engages in what can only be described as a special-effects version of a traditional shamanic journey. Descending through a long tunnel, he fights with and tricks various entities in order to find and bring back the cure to a dread disease. On a relatively overt level, this scene affirms a cultural recoding of the computer worker. As late as the 1970's, the figure of the systems analyst expressed an entirely uncontested alignment between computer work and institutional power, each devoted to the construction of an emerging technical order and to the ideology that Kling (1980) has called systems rationalism. By the 1990's, however, American culture had thrown up an entirely different model, the computer hacker who encounters an already established sprawl of systems. Far from identifying with the institutional order that created the systems, the hacker regards this order as malevolent and decadent, and above all as the enemy of the utopian potential of the technology. At this level, the cultural change has nothing special to do with computers; it is continuous with the emergence of Rambo-like narratives that an entirely different Gibson (1994) has located in a virulent reaction to the debacle of Vietnam.

On another level, though, Johnny Mnemonic's journey provocatively draws on a discourse of virtual reality that has deep roots in the Western technological tradition, whose preoccupation with millennialist themes of disembodiment and rational order (Noble 1992, 1997) might be called masculine transcendentalism. These themes also plainly organize the discourses of artificial intelligence and cyberspace. Johnny Mnemonic's journey, however, is distinctive in one crucial respect. Considered as theology, the masculine transcendentalism of technology has always been resolutely monotheistic. God occupies the standpoint from which a rational order in the universe can be created and surveyed (e.g., Husserl 1970 [1954]: 66), and the engineer aims to occupy that very same standpoint as the progenitor and administrator of rational order on earth. Johnny Mnemonic, however, invokes an entirely different theology, the much more decentralized spiritual order of the shaman. The project of technological reason was founded quite specifically in opposition to traditional animism and its healing practices, and yet here the constitutive other of technological reason is found irrupting in the cultural elaborations of a paradigmatically advanced technology.

My question is, how do we evaluate such phenomena? It is surely incorrect to view Johnny Mnemonic and the other technocultural echoes of shamanism as wholly immanent in the directions of technological development; at a minimum, the narrative of the shamanic journey in Johnny Mnemonic is plainly overlaid with a series of other narratives whose sources are not obviously related. On the other hand, it is surely equally wrong to treat the phenomena of popular culture simply as arbitrary or opportunistic appropriations of technical methods that have no more affinity for any cultural themes than any others. Indeed, it is not much of an exaggeration to say that computer system designers, in their most routine practices, are attempting to construct a parallel world of disembodied information, and that these practices coevolve with a whole series of transcendentalist visions, from distance learning to digital libraries to friction-free marketplaces, that are not far distant in their metaphors and ambitions from the most extravagant fictional narratives of virtual reality.

It is clear, then, that computers in some sense inscribe forms of social imagination. The question is, in what sense? This question has been taken up by other authors in a variety of contexts. Authors such as Clement (1988), for example, have made much of the historical continuities between contemporary system design and an earlier era of industrial automation; for them, the social content of computing is to be found in the material realities of class conflict and social control. Authors like Suchman and Trigg (1993), by contrast, have sought the inscription practices of computing in the fine details of designers' work; they observe that this work consists in large measure of the successive transformation of technical problems from theoretical statement to fictional scenario to formalism to software. The contrast between these two approaches is instructive: the inscription practices of computing appear to exhibit large-scale historical regularities, and yet the work of inscription is never anything but the local, situated production of order in the sites of technical work.

I believe that both sets of authors have contributed something valuable, and I will not strongly dissent from their conclusions. Nonetheless, I believe that much more remains to be said. It will be the gist of my account that the most fundamental layer of inscription practices in computing is religious in nature, not political, and that the religious content of these practices is grounded in technical practitioners' highly evolved strategies for contending with the inherent dilemmas of their craft. In analyzing these dilemmas, I will draw on certain themes from Schutz's (1967 [1932]) phenomenological account of the cognitive basis of social order, and particularly from the ethnomethodological investigations of Garfinkel (1984 [1967]) and his school. Although I will be drawing on my member's familiarity with the methods and recurrencies of technical work, this is not an ethnomethodological investigation. I am not reporting from any specific site of practice beyond the school of hard knocks, and I offer my conclusions only in a speculative mode as questions and suggestions toward more concretely located studies. In turning from the dilemmas of computer work to the methods by which these dilemmas are worked through in practice, I will be drawing on recent anthropological work on signification and ritual. Once again, I can only offer suggestions. The big picture, to which I will return in concluding, pertains to the emerging critique of systems rationalism, and to the specifically theological meaning of ethnomethodology's respecification of the various topics of rational order in local and finite terms.

2 Reflexivity and displaced artifacts

In computer work it is customary to distinguish between designers and users. It is a constructed distinction, to be sure, and the object of a great deal of symbolic work (Thomas 1995). But I want to accept it here on a heuristic basis, schematically, to evoke some of the interpretive problems that arise in daily work around computers.

The crux of the relationship between computer designers and computer users is an artifact, the computer and its software. Marx spoke of the factory worker as confronting production machinery as something alien, and so it frequently is with computers as well. Even on those occasions when designers and users are acquainted with one another, the computer itself has typically undergone some kind of displacement in time and space from the site of its creation. This kind of displacement has been a significant theme in science studies, for example, where it has often stood for the mysterious practical work by which knowledge-claims are rendered universal through the gathering of representations and the gradual unhooking of practical accomplishments from their local conditions (e.g., Collins 1985, Latour 1988, Lynch 1985).

From an ethnomethodological perspective, what is most significant about the relationship between designer and user is the production of cognitive order in the distinct sites of their work. It is useful to compare the case of a paper note passed between intimates. Far from locating the meanings of that note in semantic conventions that might be applied to the marks on the page, ethnomethodology would draw attention to the relationship between the sender and receiver. The note will be written with a reflexive orientation to the practical circumstances and consequences of its being read, and it will be read with a similar orientation to its writing. In particular, the recipient's work of making out the practical force of the note will depend on its ascription to an author, and on the assumption that its author designed it to be read. An analysis of the reciprocal understanding of the writer and reader is thus central to any appreciation of the note itself.

Garfinkel developed these themes in his studies of coroners and clinic personnel (1984 [1967]). The records that were carefully maintained in these sites had proven useless as standardized, mimetic documents of deaths and illnesses, and this seeming deficiency was a mystery. The orderliness of the records, it turned out, was to be sought in their authors' anticipation of later circumstances. A case might always be reopened, or questions might be raised about it, and the records were adequate if they stood up in those future situations. What was crucial was the authors' understanding of those possible future situations. Those future situations necessarily remained hypothetical on any particular occasion of preparing a record, and so it was necessary to anticipate, by various methods, a broad class of potential scenarios. The general point does not relate to the antagonistic structure of the situation but to the authors' reflexive orientation to projected potentials for their work to come back round to them in the circuits of a well-familiar institutional order.

3 Inscribing discourse

Computer designers find themselves in a situation similar to that of the author of an institutional document, and I want to convey something of the methods by which they contend with it. Computers, it is commonly said, are language machines. They embody language, and they transact linguistic exchanges with their users later on. Computer designers orient themselves pervasively to the intelligibility of their artifacts; in order to be useful, a computer must provide its user with guidance in the construction of some ongoing account of what it is doing. That account is rendered in language, and designers are massively concerned with language as such. Every detail of a computer's functioning is produced through a transformation of language. The first steps of system design are akin to the procedures of certain kinds of discourse analysis: beginning with a corpus of explanations of particular work practices, the systems analyst recovers a grammar, listing the nouns and verbs that will become successively transformed into technical objects -- hardware and software (Agre 1988).

The conventional account of this work, the taken-for-granted "official" story of computer design, is that this language is used to define what the computer is supposed to do. It is a machine, say, for forecasting the weather or computing tax payments. Yet this work can be further specified as providing for the accountability of the machine's operation. The designer proceeds by constructing a grammar of accounts.

The designer is operating under two profound constraints. First, it is impossible to know the specific practical circumstances in which the machine will be used. Second, the results of the designer's analysis must be embodied in working machinery. System designers employ a specific strategy in negotiating these constraints. As a community, they maintain a repertoire of technical schemata, each consisting of a fragment of narrative form -- serial action, for example -- and a formal device for embodying that aspect of narrative in a machine (Agre 1997). Other schemata provide technical renderings of narrative categories such as objects and their attributes, actions and their outcomes, questions and their answers, and so on.

To become a computer person is to learn these schemata, and to acquire a skill that Jameson (1981: 40) called transcoding: paraphrasing others' language in terms of the technical schemata. Transcoding is a powerful and pervasive cognitive orientation. It explains, for example, why computer people have such extraordinary difficulty comprehending almost any proposition from the social sciences, and especially the interventions of ethnomethodology: when they attempt to transcode these sociological propositions, they obtain either nonsense or the most radical individualism and mentalism. The problem is not that they hold to these theoretical commitments on a substantive level, but that their schemata are directed at translating all grammatical forms into technical "things" that can be built.
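
To make transcoding concrete, here is a minimal sketch in Python of how a fragment of vernacular description -- say, "a clerk files an order, and the order is then pending or filed" -- might be rendered through the schemata of objects, attributes, and serial action. The domain and all of the names (Clerk, Order, file_order) are hypothetical illustrations of the general practice, not a rendering of any actual system.

    # Nouns become objects with attributes; verbs become methods whose
    # calls can be narrated as serial, accountable actions. All names
    # here are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Order:                   # noun -> technical object
        number: int
        status: str = "pending"    # attribute with enumerable values

    @dataclass
    class Clerk:                   # noun -> technical object
        name: str
        filed: List[Order] = field(default_factory=list)

        def file_order(self, order: Order) -> None:   # verb -> method
            # One step in a serial narrative: "the clerk filed the order."
            order.status = "filed"
            self.filed.append(order)

    clerk = Clerk("Lee")
    clerk.file_order(Order(number=1001))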

The purpose of transcoding is to convert discourse into formalism. Natural language, however, is not left behind. Rather, a connection is established between the grammar of natural language and the logic of formalism. Because natural language and formalism are entirely different sorts of things, whole generations of research on formal semantics notwithstanding, this connection is the locus of significant tensions (Suchman and Trigg 1993). These tensions are inherent in the project: designers must be confident that their designs will work, and so they narrate various scenarios, talking through their code hypothetically. Lacking access to the situational detail of any particular context of use, they must seek the coherence of these scenarios in other ways. On one level, the designer is a kind of storyteller, appealing to what Aristotle called probabilities and to the typifications of culturally organized experience. On another level, the designer must also look to the formal closure of the system by ensuring that some intuitive gloss has been given for every logical possibility that the formalism generates. These two levels of order in computer work, narrative and logical, exist in dialectical interaction, and a design feels done once that interaction has seemed to exhaust itself.
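
The logical level can be illustrated in the same hedged spirit: a sketch in which every possibility that a small formalism generates is given an intuitive gloss, so that any scenario the designer narrates has somewhere to land. The states and their wordings are hypothetical.

    # Attaching a vernacular gloss to each formal state makes closure
    # mechanical: no state can be added without also becoming narratable.
    from enum import Enum

    class RequestState(Enum):       # the formalism's space of possibilities
        IDLE = "waiting for the user to act"
        SENDING = "delivering the user's request"
        CONFIRMED = "the request has been received"
        FAILED = "something went wrong; the request can be retried"

    def narrate(state: RequestState) -> str:
        return f"The system is {state.name.lower()}: {state.value}."

    for state in RequestState:      # closure: every possibility is glossed
        print(narrate(state))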

4 Accountable machinery

Let us turn from the situation of the designer to the situation of the user. Computers, evidently, are machines that we build so that we can talk about them in a certain way, and ethnomethodology recommends that contexts of computer use are self-organizing with regard to the production of every aspect of rational order. Although computers abundantly supply the linguistic resources that users might employ in accounting for their behavior, in other words, we must still look to each occasion of computer use for the unique way in which it interprets these resources and makes them relevant to ongoing practical concerns. What the people do provides a reflexive ground for making out what the machine has done, and vice versa, and all of this is accomplished only for all practical purposes, being endlessly revisable in light of subsequent evidence. Woolgar (1994) has described this process from the related perspective of reflexive sociology.

Woolgar points particularly to the ascription of intentional predicates to machines: interpreting a machine as knowing or remembering, asking or telling, trying or failing, and so on. If the user's task is to construct ongoing accounts of the machine's behavior in these terms, then it matters whether and when, and on what grounds, such ascriptions can be made. One approach, to which I shall return, is to attempt to define the terms, to articulate general criteria for the application of intentional vocabulary to anything at all, and to artifacts in particular. Woolgar will have none of this, and instead directs our attention to what he calls the moral order of representation in a given setting: the ascriptional practices of that particular local group. These practices constitute a moral order in the sense that they hang together, and in the sense that they interconnect with a range of morally consequential matters such as the assignment of responsibility for error. These practices also change: they change over the long term with the introduction of new technologies and new ideas, but they can also change from moment to moment with changing definitions of the situation.

This account is valuable in directing our attention to the local accounting practices of a group, but it does not offer much guidance in reconstructing how these practices operate, or how they interact with the practices of design. To this end, Woolgar (1991, 1996) offers the further heuristic suggestion that the computer itself be regarded as a kind of text. Texts, of course, are open to a range of interpretations, and the practices of reading and writing are open to investigation in the ways that the literary metaphor suggests. They are also open to the forms of sociological investigation that I mentioned earlier in my sketch of the reflexivity of written notes. Within this framework, it becomes possible, in Woolgar's terms, to investigate designers' work of configuring the user, that is, predisposing the user to certain kinds of interpretation of the computer-text -- starting with the user's own self-interpretation as a user, as a person located outside of the designer's organization and outside of the computer itself.

To extend this analysis, it is useful to focus attention on the specific form of accounts of computers-in-use. If these accounts take the form of ongoing narratives of the reflexively interrelated actions of both user and machine, then we can investigate the nature of these narratives. In order to be narratives of actions, and not simply of events or states or data, both user and machine must be actively accountable as the sorts of agents that can really act, and it turns out historically that designers have been intensely occupied with providing the conditions for this kind of narration. But Schutz (1967 [1932]: 58-61) and Mead (1934: 136-142) both argue that action involves a kind of self-objectification. An action is not simply a movement, like a reflex jerk of one's knee. An action, to the contrary, is always accompanied by a conscious sense of what it would be like for the action to be complete. Action thus requires, among other things, an orientation to the future. And indeed, for several hundred years at least, technologists have sought to ascribe a future orientation to their artifacts. This has not simply been a philosophical sidelight; to the contrary, it has been central to the cultural construction and elaboration of a long series of technologies. McReynolds (1980) has recounted the case of clocks, which stood as symbols of a mechanical model of humanity not simply for their readily visible principles of operation, but specifically because the mainspring of a clock provided a metaphor, a narrative anchor, for the language of animate motivation and vital force. Cybernetics, likewise, excited a generation of technologists and technologically-inclined psychologists not because of its strictly instrumental successes, which were debatable at least at first, but because, as Wiener observed (Galison 1994), the operation of servomechanisms likewise provided a mechanistic way of talking about orientation to the future. In each case, a vast technical rhetoric was organized around a fundamental technical schema: the schema that permitted the workings of an artifact to be narrated as actions. The digital computer was founded in a similar narrative innovation; as Schaffer (1994) points out, Babbage assigned central importance to the functions of memory and anticipation in the planned workings of his Analytical Engine.

Practices of ascribing future-orientation to machines, then, have repeatedly provided the spark for extended and highly productive interactions between discursive forms and technical innovations. And yet it is equally important not to reduce the complexity of users' accounting practices to the simple ascription of intentionality. The accounts that users construct are, and of necessity must be, frequently very complicated. It is not just a matter of machines taking actions, or even of machines making assertions; the behavior of a contemporary computer is regularly accounted as conveying the actions and voices of numerous parties in various relationships. A computer that displays an e-mail message from another user, for example, is understood to be performing a complex metalinguistic operation, framing some reported speech in a matrix of symbols and speech acts and files. A data entry in a company database, likewise, is not just -- or not at all -- a computer's opinion, but more likely the opinion of someone else in the organization or elsewhere, reflexively accounted against the same background of situational contingencies as any other record. Although computers use language in ways that get successfully interpreted in specific situations of use, therefore, we should not succumb to any simple model of the machine as the agent or the author of those linguistically motivated actions. At a minimum, we need something like Goffman's (1979) typology of the distinguishable footings that a speaker might take up in producing a sequence of words, and the methods by which these footings are made reflexively accountable. Beyond that, however, we can be much more specific about the kind of character, so to speak, that a computer plays in the narrative practices of its designers and users. On one hand, designers really do inscribe into their machines the raw materials for the construction of accounts that confer agency on the machines themselves. Computers are not designed to be accountable in terms of their designers' reasons or motivations, or the alternative design choices they might have made, or even the kinds of stylistic signatures that a previous generation of literary critics sought to characterize in other sorts of texts. And yet, on the other hand, the agency that is ascribed to actual, commercially distributed software systems is precisely delineated. Virtually all systems in actual use conform to what Alan Kay (1990: 199) calls the user illusion: the accountable construction of the system's agency as flowing from specific acts of delegation by a user. This is not a limitation of the technology, I should emphasize, but a narrative convention. Artificial intelligence research has sometimes (much less often than is generally imagined) sought to construct software systems whose workings lend themselves to narration in terms of an agency of unrestricted free will, but hardly anybody finds such things useful in practice.
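
The user illusion can be suggested with another hypothetical sketch, in which the machine's repertoire is keyed to explicit requests and its record of what it has done points back to the delegating user. Nothing here is drawn from Kay or from any actual system; it merely illustrates the narrative convention.

    # The system acts only when asked, and its account of itself
    # construes each action as flowing from the user's delegation.
    from typing import Callable, Dict, List

    class DelegatingSystem:
        def __init__(self) -> None:
            self._commands: Dict[str, Callable[[], str]] = {}
            self.account: List[str] = []    # the system's story of itself

        def offer(self, name: str, action: Callable[[], str]) -> None:
            self._commands[name] = action   # what the user may delegate

        def request(self, name: str) -> str:
            outcome = self._commands[name]()
            self.account.append(f"at the user's request '{name}': {outcome}")
            return outcome

    system = DelegatingSystem()
    system.offer("save", lambda: "document written to disk")
    system.request("save")
    print(system.account)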

5 The concept of hazard

I have already indicated the central dilemma that designers face: their machines will be evaluated in terms of their usefulness in unknown situations potentially distant in space and time from the lived work of designing them. I want to refer to this cluster of difficulties as the hazards of design. I take this word "hazard" from Keane's (1997) anthropological study of ritual in Indonesia. Keane observes that power is always coupled with hazard: power is enacted and manifested through public representations, and yet representation always carries the risk of misinterpretation or infelicity. More specifically, representations are always, to use Derrida's (1982 [1972]: 315) term, iterable, capable of being repeated in an endless variety of different situations, in which they might take on a variety of interpretations and forces. Neither Keane nor Derrida would argue that these different contextualizations are arbitrary or wholly impossible to anticipate, or that the authors of representations are entirely helpless to influence the uses to which their signs are put. The point is simply that the objectification of meaning in signs, and the autonomous materiality of those signs, creates a complex landscape of potentials for the unravelling of whatever material positions those signs might be bound up in. Many representational practices can be understood in part as attempts to manage the hazards of iterability; consider, for example, the art form of the sound bite, which seeks to limit the potentials for recontextualization through the miracle of audio or video editing.

Designers resemble Garfinkel's portrayals of the formal-analytical sociologist, reifying vernacular categories at a privileged distance from the practical enmeshments of daily life, but with one very significant exception: they are liable to get blamed when things go wrong. It is crucial to understand that, as an ethnomethodological matter, this assignment of blame is just as rational, just as orderly, as the smoothly functioning scenarios of use that designers would naturally prefer to hear about. Although designers aim to facilitate the constructions of accounts of use that interrelate users and machines, they are social actors in their own right, and well-known to be parties to every situation in which their machines are used; their own identities and attributes, accordingly, are no more or less an accomplishment of situated action than anyone else's.

In ethnomethodological terms, the hazardous circumstance that Derrida calls iterability is understood as indexicality: the production of a representation's meaning on every next occasion by the members of successive settings of use. Although a designer may have some sense of control over the mimetic function of users' accounts of machine behavior -- their systematic relationships to a supposedly given ontology -- it is much clearer that the designer can have little control over the actions that users employ these accounts to perform. Put another way, although system design is centrally concerned with the lexicon and grammar of possible accounts of a machine's behavior, these design practices exert little influence over the force that particular deployments of an account might take on.

More fundamentally, designers require the practical reality of settings of use to have the opposite properties from those discovered by Schutz and Garfinkel. Whereas Schutz found the cognitive problem of order to be prior to classical sociological questions of cooperation and conflict (see Heritage 1984: 70), system designers need this problem of order to be presolved in order to maintain any sense of knowing what their systems' behavior will amount to in practice. And whereas Garfinkel emphasizes that the full phenomenal detail of any given practical setting is a self-organizing accomplishment of that setting, the designer's scenarios must presuppose that detail to be predetermined, roughly speaking to be the same phenomenal detail that the designers themselves encounter in their own sites.

6 Asymmetrical intersubjectivity

As a result of these circumstances, every computer system threatens to become a full-time breaching experiment (Garfinkel 1984 [1967]; Heritage 1984). This is the central tension of computing: even though they are laboriously and systematically designed to be intelligible as the authors of accountable actions, computers are incapable of the improvisational work of upholding the morally sanctioned cognitive order of the situations in which they supposedly participate; they thus inevitably breach innumerable maxims of interactional order to which their putative status of accountable agency would otherwise subject them. If the production of rational action presupposes the reflexive awareness and reciprocal attribution of the normative accountability of that action, then it follows that a computer cannot participate in the concerted production of rational order at all. It will be objected that computers as such have never been proven definitively incapable of such accomplishments, even if contemporary machines do not reach the necessary standard; surely it is folly to make such broad claims against the unknowable range of future technical advances. But in speaking of computers we are not talking about artifacts in general; the difficulty arises specifically because computers are designed to lend themselves to complex and consequential attributions of intentionality. The practices of computer system design comprise a historically specific formation that, popular representations notwithstanding, has not changed very much in its essentials from its early days (Friedman 1989). Technical schemata may multiply and evolve, but the methods by which computers are intendedly rendered accountable have not changed in any fundamental way, and would have to change in a fundamental way before this argument would need revisiting.

Let us consider the question more precisely. Computer system design, despite its daunting hazards, is successful in the significant sense that people relate to computers in much the same way that, on Schutz's analysis, they achieve intersubjectivity with one another (cf Reeves and Nass 1996). Schutz affirms that people cannot know one another's subjective experience in a direct way (1967 [1932]: 98-101), so that their knowledge of one another is inherently a matter of inference and their capacity to share a situation depends crucially on a series of otherwise unjustified idealizations, what Schutz (1962: 11-12; cf. Heritage 1984: 55-56) refers to as the "general thesis of reciprocal perspectives". The first idealization, an idealization of symmetry, is the interchangeability of standpoints -- the other, although like me, occupies a distinct location and possesses a distinct perspective. The second idealization, an idealization of identity, is the congruency of the system of relevancies -- as a practical situation unfolds, the other experiences the same circumstances and appreciates their consequences for action in the same way. These idealizations conflict to a certain degree and are defeasible in specific detail, and yet their massive and ongoing application as defaults is the principal basis on which intersubjectivity is maintained.

When applied to a computer, these two idealizations are a considerable and ongoing act of charity. And since they are predominantly tacit, it can easily escape notice just how much intentional ascription is really going on from moment to moment, and just how completely the machine is failing to reciprocate. This failing has been remarked by authors such as Button (1990) who have taken exception to the appropriation of, for example, conversation analysis (Atkinson and Heritage 1984, Sacks 1992) as the discursive raw material for the fashioning of technical grammars of action. Yet this critique does not help us to account for the considerable success that computer systems have sometimes had in the world, or to recover the methods by which user communities construct categories such as success and failure. Ethnomethodology, of course, recommends that such members' critiques be treated in a neutral way, and not raised up to principles of exogenous evaluation. At the same time, the ethnomethodological analysis provides resources for reconstructing the nature of an instability that seems inherent in the social project of computing, and for specifying the systemic difficulties with which both designers and users contend in making whatever accounts they do make of what is going on, here and now, in the use of a particular machine for a particular purpose. What is more, a central question for the members -- who, if anyone, is taking action here? -- also provides an analytical caution for the ethnomethodologist: just as the identity of particular actions is something accomplished by a setting's members, the enumeration of those members and the nature of their membership is likewise produced in and as the setting. It is not good enough to say that the computer is human, or to say that the computer takes actions if the people think it does. Instead, we are compelled to take seriously the proviso that these accomplishments are just for all practical purposes in the setting, those purposes themselves being provisional accomplishments of the setting's members as well. If we fall in with an uncritical anthropomorphization of the computer then we will be faced with an anomalous social situation in which the reflexivity of interpretation does admit time out, and in which many actions are not submitted to reflexive interpretation as constituting an ongoing order of activity. A shortcoming in theorists' claims for a machine would be transformed into an altogether radical shortcoming in a particular setting of action.

7 Patterns of trouble

Such are the challenges facing the otherwise understandable effort to employ ethnomethodological investigation in the analysis of the patterning of what, by members' lights, is troublesome in computer use. Suchman (1987), for example, cites cases in which a machine issued the same instruction twice by simply repeating it; the resulting sequence of interaction was perfectly orderly and accomplished any number of topics of rational order. To speak of repair work in such a setting is perfectly valid, provided that we understand that repair glosses members' categories, and that it is the interaction that the members accounted as broken, not the methodic production of rational order itself.

That being understood, ethnomethodological accounts can be given of a wide variety of breakdowns in computer use; these breakdowns will interest designers who have produced them as troublesome, as recurrently so, and so on. Ethnomethodology cannot rule on such matters, but it can provide the "what's more" (Garfinkel 1996: 6) by which they are produced. In doing so, of course, the ethnomethodologist is liable to set in motion a formalistic topicalization of interpretive themes that have been designed to refuse such treatment. Such misconstruals are inevitable only if system design carries on in exactly the same fashion as before. Whether it will remains to be seen, but that is not inevitable.

To give one example of what I have in mind, let us consider the problem of "where" one is located in using a computer. "Where" one is, whether "in" the word processor or Web browser or operating system command interface, is perhaps the central relevancy for producing what's going on in using a computer. Much of the phenomenal detail of a machine's outward states depends for its very identity on where one is; likewise, what action one is taking by typing or moving the mouse also depends on this "where". Yet many beginners are not able to see where they are, and other beginners discover that the methods they know for finding out where they are themselves depend on knowing where they are. The instructible visibility of "where" is one prominent requirement of good interface design by the lights of the interface design movement, yet experts routinely underestimate the difficulty of seeing it. Beginners are regularly befuddled, for example, by applications that are open but have no open windows, since they have learned to see where they are by inspecting certain features of the currently opened windows.

The example is trivial, but the pattern is pervasive: an expert in computing who proceeds within the frame of the natural attitude will see, as transparently evident, all manner of detail that is not at all visible to the beginner, and the expert's attempts to explain "what's wrong" or "what's going on" or "what to do next" will persistently presuppose details whose necessarily instructed visibility is actually the crux of the difficulty. This is, of course, a potential in any situation of differential expertise. What is distinctive in computing is the nature of the expertise: the expert is not just interpreting the visible evidences of an object in an instructed manner, but is recovering with some reliability the very accounts of the artifact that were originally inscribed into it by the designer. Whereas beginners will formulate accounts that dwell upon more culturally generalized features of the interface, it is as if the expert can see inside the box, or through it. Phenomenologically, the skill is similar to that of air traffic controllers who maintain a vivid imaginative sense of the whole picture in the skies, larger than and phenomenally subsuming the monitor upon which the various data about those skies are displayed. But it is different, too, in a crucial way: for the expert, the hand of the system's designer is tangible and the ontology and grammar inscribed in the system are legible. Above all, the abstractions that are real to the designer are real to the expert. This is what expertise in computer use largely is, and this communion between designer and expert user, laborious yet normally unremarkable, takes considerable pressure off the thesis of reciprocal perspectives in the case of the expert, while only intensifying it in the case of the beginner. The point is not that the expert no longer accounts for the machine's behavior in terms of actions, but that the very category of actions is transformed, or transcoded, into something resembling the technical version of the category that guided the designers. This transformation is not simply rhetorical but represents in some degree the expert's capacity to orient to the designer's dilemma, and to impute reflexively to machine and designer alike awareness of only those relevancies that were, and are, actually available to them.

8 The theology of engineering

It has often been observed that the rise and institutionalization of quantifying science in the West brought with it a "view from nowhere", a discursively constructed epistemological standpoint that has underwritten generations of claims to objective knowledge and neutral reason, as well as being a material instrument for the centralization of numerous types of authority. Recent research has begun to recover some sense of the massive campaigns of infrastructure and standardization that made the view from nowhere possible (Porter 1994). The point of this research is sometimes difficult for its subjects -- the technical specialists -- to appreciate. If the universe is ultimately governed by uniform laws then surely no special effort should be necessary to ensure that measurements made in one location are commensurable with those made in other locations. But even the most basic physical reality becomes a human reality, meaningful in ways that can have social consequences, through the mediation of institutions and their practices. Infrastructures are hybrids of the human and nonhuman (Latour 1993), and the workings of these hybrids must be sought in the locales where the work is done, and not just in the results that they authoritatively produce. Bowker (1994) has thus justly emphasized the infrastructural preconditions of computing. In order for the numbers that circulate in a computer to have any consequences worth worrying about in the human world, numbers must also flow in a much larger institutional circuitry; numbers are only meaningful relative to the practices that make them so.

Nonetheless, as I remarked at the outset, a considerable tradition identifies the subject of quantification with a certain conception, albeit often secularized, of God. This is a God, for the most part, whose sole properties are omniscience (perfect knowledge of a technically ordered universe) and omnipotence (having created it all) -- a God not so much known as implied, a discursive location projected into the skies, a supplement of technical reason. Nor, as Noble (1992, 1997) has demonstrated, can this theology be understood as marginal or eccentric, or as an extra elaboration that can be removed without wrecking the whole. It is, to the contrary, central to the Western technological tradition.

My purpose here is not to recapitulate this demonstration, but to specify its role in the lived work of computing. Computer people often rebel at the sound of this project, and understandably so, as the ordinary, settled-down-to-serious-work practice of computing is not filled with theological terms. Supernatural authority is rarely invoked as an internal component of a technical argument, and so ordinary practitioners can be forgiven if theological speculation seems to them wholly the province of philosophers and the popularizations of a field's founders and promoters.

Before arguing that this impression is misleading, let us briefly revisit the cases of Babbage and Wiener. For Babbage, the essential qualities of intelligence were foresight and vigilance, and Babbage was perfectly explicit that this characterization applied equally to the intelligence of machines, factory administrators, and God. The autonomous skill and culture of the workforce were of no account in this scheme, of course, and the factory-as-microcosm was projected as a mechanistic order under the surveillance and superintendence of intelligent machinery. Babbage's theology, then, was part and parcel of his representation of work, and specifically his systematic withdrawal of attention from its skillful, improvised, locally ordered, lived character. Babbage's mechanistic workers resembled Garfinkel's sociological dopes, and indeed Garfinkel (1996: 11) alleges that "General Ideas of the Universal Observer are commonly used in the social sciences to topicalize and justify valid knowledge of every possible thing in any possible world."

Wiener, for his part, "saw power and control as absolutely central to the very definition of cybernetics" and "the human-machine relation as a model, if not an incarnation of the bond between God and 'man' " (Galison 1994: 260). The omnipotence of the engineer, for Wiener, is not the institution of a rational order but its continual homeostatic maintenance through the agency of negative feedback. As in the case of Babbage, the defining characteristics of the engineer sufficed to identify the human being with God, that other designer and controller.

Wiener's theology, like Babbage's, portrays the universe as a mechanistic order, and in each case the purpose of God and engineer alike is to supply a uniform principle of order that obviates any more local principle. Wiener also went further; absent this imposed order, he clearly felt, the world would be chaotic, and this conception of the world as a hostile enemy had an immense influence on subsequent technical practice. The principle of feedback, after all, permits a designer to delegate the work of ordering. Even though the designer cannot know what moves the enemy is going to make, the implemented servomechanism can make those judgements on the spot, tracking the world and bounding any discrepancies between system and world. Even as it proposes a solution to the problem of unpredictable context, Wiener's figure of the enemy makes emphatic the hazards that the designer must necessarily face.

Thus theology and hazard are internally related moments of the historically evolved social location of the engineer. This is, so to speak, the negative theology of engineering; even when the vocabulary of this theology is not present, it still expresses a structure that runs deep in the practices of engineering on an ordinary level. The positive theology of engineering, equally prominent, is the already-mentioned millennialist aspect of masculine transcendentalism. Invoked in numerous forms in the propaganda for large technological projects over centuries, its core is an opposition between a technologically determined utopian future and an otherwise decadent present. Whether invoked by institutions or against them, it opposes concrete experience to an abstract vision. Neither aspect of engineering theology has any place for a cultivated attention to the lived work of accomplishing a technical order: the negative theology locates its alternative organizing principle outside of the lived space of this work, and the positive theology locates another, teleological principle outside of lived time.

9 Ritual order

A certain theology, then, can be found in the structure of engineering discourse, offering among other things an alibi for the inattention of technical practice to the local ordering of activity. This is a purely ideological analysis, however, and not an account of the situated work of designing and using computers. The specifically religious practice of computing is not to be found in overtly religious speculation by the likes of Babbage and Wiener, and it is useful to understand precisely why not. Religious issues do arise with remarkable consistency when the intellectual categories of applied computing are pushed to their limits in the discursive work of the popularizer and philosopher. For example, any discussion of whether a machine "really" thinks or "really" acts is liable to lead to a struggle with some version of the soul, and indeed I have argued elsewhere (Agre 1995) that at least one prominent class of technical methods in artificial intelligence (AI) is lineally descended from Cartesian conceptions of the soul. These questions do not often arise in the daily practical work of computing for the simple reason that this work rarely presents the general question of whether machines "really" exhibit the intentional states that are ascribed to them (cf Woolgar 1987). This "really" indexes a concern with some order of intellectual completeness and closure that goes beyond the practices of transcoding and their systematic production of technical glosses for vernacular intentional categories. AI practitioners such as McDermott (1981) have mocked the "natural stupidity" of using a grand word such as "understanding" to gloss the operation of a simple technical mechanism that does not deserve the name. But the daily work of computing proceeds largely unconstrained by such cautions, and certainly without the benefit of any elaborated critical discourse on the suitability of appropriated terms.

What computing does have is the situated practice of narrating the workings of machinery. So far as technical people are generally concerned, this narrative work is underwritten solely by the formal definitions of the terms. But these definitions do not provide for what is centrally at stake, namely the point-by-point correspondences between the elements of formalism and the empirical phenomena that the technical system is supposed to reproduce. Designers and users alike are greatly aided in establishing these correspondences by the unremarkable workings of the documentary method of interpretation, whereby the narrative and its context (the running system or the designer's scenario) are made to elaborate and specify one another. Just what the narrative is reporting emerges in large part in what the program is doing. The natural "vagueness" of ordinary language is a resource in this way for an enormous variety of situated description practices, and technical practices are no exception (cf Woolgar 1985). What is at stake in any given occasion of narrative work is the import of this term in this context, only for all practical purposes. This latter condition is crucial: in situated technical narration, words find their meanings for the purposes of a particular moment, not "really" for all possible purposes.

On the level of practice, then, the internal relation between theology and hazard is to be sought not in discourse but in the organization of practice itself. Following Keane (1997), I want to suggest that it be sought specifically in the ritual order of computer use: to use a computer successfully, at least in accordance with the kinds of success that designers project when they design, is to enact the formal order of a ritual. "Formal" here, of course, takes on two meanings, that of ritual performance and that of technical representation, and the overlap between these meanings should be regarded as a hypothesis and not a fait accompli.

A good place to start is with traditional conceptions of both ritual and technical work. As Keane (1997: 18) puts it, "[w]hen most successful, ritual performance works by a circular logic in which it creatively brings about a context and set of identities that it portrays as already existing." Such is also the case of computer system design, whose first step is the articulation of a grammar of action for the site whose activities the system is to represent (Agre 1994). Designers present this ontology and grammar as already existing in the site, as structures already immanent in that site's activities. And yet much of the work of implementing a new computer system is arranging for the activities actually to conform to the ontology and grammar of action that the system inscribes, and moreover to do so accountably -- and not just accountably to the site's members but also for purposes of capture by the machine. From keypunch data entry to barcode readers and wireless handheld transaction units, technical means are proliferating for maintaining a correspondence between the unfolding structures of human activity and the accumulating data structures of the machines. This correspondence can only be established, however, so long as the activity is accountably isomorphic to the formal structures that the design process has inscribed.
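
A grammar of action, and the capture that depends on it, can likewise be sketched in miniature. In this hypothetical example the machine admits an event only insofar as it is isomorphic to the inscribed ontology; activity that cannot be so expressed simply does not register. The action types and record fields are invented for illustration.

    # Whatever happens on the floor enters the record only as one of the
    # sanctioned action types; anything else is not expressible.
    from dataclasses import dataclass
    from datetime import datetime

    ACTION_TYPES = {"pick", "pack", "ship"}   # the inscribed grammar of action

    @dataclass
    class CapturedAction:                     # the machine's unit of activity
        worker_id: str
        action: str
        item_code: str
        timestamp: datetime

    def capture(worker_id: str, action: str, item_code: str) -> CapturedAction:
        if action not in ACTION_TYPES:
            raise ValueError(f"no rendering in the grammar: {action!r}")
        return CapturedAction(worker_id, action, item_code, datetime.now())

    record = capture("w042", "pick", "BC-1137")   # e.g., from a barcode scan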

The traditional understanding of ritual is, however, Keane argues (1997: 3), insufficiently attentive to the hazards inherent in formal representation. I will consider two of Keane's further elaborations on the traditional theory. The first pertains to the internal relationship between hazard and cosmology. The Indonesians in Keane's ethnography regard their present-day rituals as mere shadows of the more perfect rituals of their ancestors (1997: 23, 134), and in speaking formally they endeavor to speak with a double voice. Not only do they express themselves as individuals with concrete involvements and relationships, but they also endeavor to speak on behalf of their clans, ancestors included, and to live up to the elevated standard of performance that such an endeavor entails (1997: 48, 95). "These scenes of encounter", he suggests (1997: 7), "work to display and tap into an agency that is assumed to transcend the particular individuals present and the temporal moment in which they act." The models provided by the ancestors confer form and order on the proceedings, not to mention justification and prestige; like any norms, they also provide the reflexive basis for collectively establishing what is happening in the interaction from moment to moment (1997: 18, 185). And yet, Keane (1997: 24) points out, the necessity of embodying this transcendent order entails hazards with which any ritual performer must contend.

The situation is similar in computer use. Computers bring with them a transcendental order; they cannot contribute to any other kind of order, for the reasons we have seen. Keane's informants portray their actions as shadows of their ancestors', and computer people understand actual occasions of computer use in terms of the many glitches through which they depart from the transcendental order projected by the designers. The undoubted practical efficacy and cognitive structuring often afforded by computers, not to mention their justification and prestige, are bought at the price of considerable hazard.

Keane's (1997: 27) second suggestion, even more provocative though less central to the present argument, involves the representation of hazard itself. If human achievements draw their recognizability as achievements from the obstacles they overcome, then the hazards of ritual must be as visible as its material outcomes. To speak on behalf of the ancestors is to hold oneself to high standards, and the rituals that Keane describes include abundant references to their own character as ritual and thereby to the mass of constraints under which ritual performers operate. The hazards of computing are likewise represented in the innumerable disaster stories from all sides. These stories sometimes function as folklore, in which case they frequently recount the supposed consequences of a single mistaken keystroke. Many other such stories are perfectly accurate and recount unfortunate mismatches between the assumptions inscribed in a machine and the reality of its operating environment, or else the consequential misunderstanding of a machine's outward states. It would be hard to demonstrate that such stories are told with the purpose of dramatizing by contrast the successes of expert users, but that is one effect they frequently have. The hazards of design are not entirely hidden.

To the extent that it aligns itself with the abstractions that were imagined and inscribed in the process of design, computer use thus exhibits a ritual order that visibly manages the hazards of design. As a result, something of the designer's distance from the sites of use is reproduced within those very sites. The smoothest possible functioning of computer use requires that the version of local reality that has been inscribed in the workings of the machinery -- the transcoded version of the local discourse that seeks to stabilize meaning in formalisms whose consequences depend on the context in very different ways than any use of the local discourse -- be accountably reproduced all over again on each successive occasion -- not as the whole of the setting, but as a discrete element of it. To use a computer, one must learn to see one's own self and activities in the way that designers do, and in that way to construct oneself as an object of technical representation. Participants in religious rituals experience themselves as objects of divine knowledge; participants in technical rituals experience themselves as objects of technical knowledge.

10 Ethnomethodology and local knowledge

I opened with the difficult question of the relationship between two religions, the engineers' and the shamans', and with the hypothesis that this opposition might be aligned with the opposition between universal and local conceptions of order and knowledge. What sense can we now make of these matters?

The internal relation between the two forms of religion does help resolve one longstanding puzzle: the divide in technical work between permissible and impermissible attributions of intentionality to artifacts. This distinction is not just the outcome of a locally evolved moral order, I have argued, but is also a product of technical practices: those intentional attributions that can be accountably transcoded into a suitable relationship with an implementable formalism may be accepted, and others will not be. The puzzle concerns the vehemence with which computer people so regularly reject what they view as excessive and ungrounded anthropomorphism (Beardsley 1999, Shneiderman 1998: 380-385, 597-600). The answer does not lie in a generalized conservatism about intentional ascription, since the computer community broadly and the AI community in particular will happily embrace any technical transcoding that seems useful. The answer, rather, is that nontechnical intentional ascription is a kind of heresy, namely animism: the suggestion that nonhumans possess intelligence and vitality on their own account, and not in virtue of their inclusion in a universal order. Indeed, this very explanation is frequently offered by computer people who are offended by loose intentional talk, for example when such talk shows up in their own transcodings of sociological texts. The language of mysticism and magic carries great force in the technical community, as indeed in other religious communities that define themselves in opposition to shamanic cultural forms.

Nonetheless, I want to hesitate before turning the sociological dispute between ethnomethodology and the "worldwide social science movement" (Garfinkel 1996: 5) into a religious controversy. Ethnomethodologists are not shamans; the hidden realms from which they bring back such wisdom are hidden right in front of our eyes. Ethnomethodologists also believe that the methods by which people put the world together are tangible and explicable, albeit with difficulty, whereas shamans seek the powers that put the world together outside of human beings and their choices. Nothing resembling divination is found in ethnomethodology, which would surely treat such things as accounting practices like any others. Whether this approach is to ethnomethodology's credit remains to be seen; the idea that the whole world is a product of human effort has much to recommend it, particularly when the alternative is the woolly stipulations of functionalist sociology, but much can also be said against it. Ethnomethodology is, in this sense, the purest atheism (Garfinkel 1996: 8).

A more promising approach begins with the contrast between universal and local standpoints in the discursive construction of knowledge. In these terms, ethnomethodology might be viewed as one representative of an antirationalist tradition that cuts across the political spectrum, from Hayek (1948, 1952), with his insistence on the impracticability of reconciling every economically pertinent aspect of every locality in a centralized fashion, to Haraway (1991), with her insistence on the qualitatively distinct epistemological standpoints that correlate with the whole matrix of distinct social locations. Both authors denounce the universalistic standpoint that they identify with scientific reason. Intellectuals from Vico and Hegel to Marx and Lenin have imagined themselves able to encompass the whole world in their minds and thereby to uncover the telos of history (Wilson 1940). The modern theorists of locality have renounced this power, insisting instead that intellectuals are just as finite as anyone else. What is distinctive about Garfinkel's approach is his refusal to deny the practical accomplishments of formal analysis and its institutions. By seeking to respecify these accomplishments as the work of particular people in particular settings, ethnomethodology does nothing more or less than make them explicable in the first place.

What, then, of system design? The delusions of formal analysis remain no less delusional for all the efficacy of the practices that have been institutionalized in its name. Ethnomethodology is fundamentally an ethical project, one that seeks to protect indigenous language from being usurped and transcoded under the auspices of intellectual systems, whether these be sociological or technical or otherwise. It rightly refuses to have its insights reappropriated as resources for further usurpations of the same type. The proper object of ethnomethodology is methods, period, and not the reform of professional practices. At the same time, plenty of system designers are ethical people who want to do a better job. To the extent that ethnomethodology illuminates the systemic hazards of traditional modes of system design, it helps to make reform imaginable. So long as technology coevolves with religious beliefs whose roots draw nourishment from nothing deeper than a defensive reaction to its own intrinsic hazards, nothing can change. It is time to disentangle the practices of technology from bad religion and begin to align them with some deeper purpose.

Bibliography

Philip E. Agre, Surveillance and capture: Two models of privacy, The Information Society 10(2), 1994, pages 101-127.

Philip E. Agre, The soul gained and lost: Artificial intelligence as a philosophical project, Stanford Humanities Review 4(2), 1995, pages 1-19.

Philip E. Agre, Computation and Human Experience, Cambridge: Cambridge University Press, 1997.

Philip E. Agre, The Internet and public discourse, First Monday 3(3), 1998.

J. Maxwell Atkinson and John Heritage, eds, Structures of Social Action: Studies in Conversation Analysis, Cambridge: Cambridge University Press, 1984.

Tim Beardsley, Humans unite!, Scientific American 280(3), 1999, pages 35-36.

Geoffrey Bowker, Information mythology: The world of/as information, in Lisa Bud-Frierman, ed, Information Acumen: The Understanding and Use of Knowledge in Modern Business, London: Routledge, 1994.

Graham Button, Going up a blind alley: Conflating conversation analysis and computational modelling, in Paul Luff, Nigel Gilbert, and David Frohlich, eds, Computers and Conversation, London: Academic Press, 1990.

Andrew Clement, Office automation and the technical control of information workers, in Vincent Mosco and Janet Wasko, eds, The Political Economy of Information, Madison: University of Wisconsin Press, 1988.

Harry M. Collins, Changing Order: Replication and Induction in Scientific Practice, London: Sage, 1985.

Jacques Derrida, Signature event context, in Margins of Philosophy, translated by Alan Bass, Chicago: University of Chicago Press, 1982. Originally published in French in 1972.

Andrew L. Friedman, Computer Systems Development: History, Organization and Implementation, Chichester, UK: Wiley, 1989.

Peter Galison, The ontology of the enemy: Norbert Wiener and the cybernetic vision, Critical Inquiry 21(1), 1994, pages 228-266.

Harold Garfinkel, Studies in Ethnomethodology, Cambridge: Polity Press, 1984. Originally published in 1967.

Harold Garfinkel, Ethnomethodology's program, Social Psychology Quarterly 59(1), 1996, pages 5-22.

James W. Gibson, Warrior Dreams: Paramilitary Culture in Post-Vietnam America, New York: Hill and Wang, 1994.

William Gibson, Johnny Mnemonic, in Burning Chrome, New York: Ace Books, 1986. Originally published in Omni magazine, May 1981.

Erving Goffman, Footing, Semiotica 25(1), 1979, pages 1-29.

Donna J. Haraway, Situated knowledges: The science question in feminism and the privilege of partial perspective, in Simians, Cyborgs, and Women: The Reinvention of Nature, New York: Routledge, 1991.

Friedrich A. Hayek, Individualism and Economic Order, Chicago: University of Chicago Press, 1948.

Friedrich A. Hayek, The Counter-Revolution of Science: Studies on the Abuse of Reason, Glencoe, IL: Free Press, 1952.

John Heritage, Garfinkel and Ethnomethodology, Cambridge: Polity Press, 1984.

Edmund Husserl, The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy, translated by David Carr, Evanston: Northwestern University Press, 1970. Originally published in German in 1954.

Fredric Jameson, The Political Unconscious: Narrative as a Socially Symbolic Act, Ithaca: Cornell University Press, 1981.

Alan Kay, User interface: A personal view, in Brenda Laurel, ed, The Art of Human-Computer Interface Design, Reading, MA: Addison-Wesley, 1990.

Webb Keane, Signs of Recognition: Powers and Hazards of Representation in an Indonesian Society, Berkeley: University of California Press, 1997.

Rob Kling, Social analyses of computing: Theoretical perspectives in recent empirical research, Computing Surveys 12(1), 1980, pages 61-110.

Bruno Latour, The Pasteurization of France, translated by Alan Sheridan and John Law, Cambridge: Harvard University Press, 1988.

Bruno Latour, We Have Never Been Modern, translated by Catherine Porter, Cambridge: Harvard University Press, 1993.

Michael Lynch, Art and Artifact in Laboratory Science: A Study of Shop Work and Shop Talk in a Research Laboratory, London: Routledge and Kegan Paul, 1985.

Drew McDermott, Artificial intelligence meets natural stupidity, in John Haugeland, ed, Mind Design: Philosophy, Psychology, Artificial Intelligence, Cambridge: MIT Press, 1981.

Paul McReynolds, The clock metaphor in the history of psychology, in Thomas Nickles, ed, Scientific Discovery: Case Studies, Dordrecht: D. Reidel, 1980.

George Herbert Mead, Mind, Self and Society from the Standpoint of a Social Behaviorist, edited by Charles W. Morris, Chicago: University of Chicago Press, 1934.

David Noble, A World Without Women: The Christian Clerical Culture of Western Science, New York: Knopf, 1992.

David Noble, The Religion of Technology: The Divinity of Man and the Spirit of Invention, New York: Knopf, 1997.

Theodore M. Porter, Information, power and the view from nowhere, in Lisa Bud-Frierman, ed, Information Acumen: The Understanding and Use of Knowledge in Modern Business, London: Routledge, 1994.

Byron Reeves and Clifford Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge: Cambridge University Press, 1996.

Harvey Sacks, Lectures on Conversation, edited by Gail Jefferson, Oxford: Blackwell, 1992.

Simon Schaffer, Babbage's intelligence: Calculating engines and the factory system, Critical Inquiry 21(1), 1994, pages 201-228.

Alfred Schutz, Collected Papers, Volume 1: The Problem of Social Reality, edited by Maurice Natanson, The Hague: Nijhoff, 1962.

Alfred Schutz, The Phenomenology of the Social World, translated by George Walsh and Frederick Lehnert, Evanston: Northwestern University Press, 1967. Originally published in 1932.

Ben Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, third edition, Reading, MA: Addison-Wesley, 1998.

Lucy A. Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge: Cambridge University Press, 1987.

Lucy A. Suchman and Randall H. Trigg, Artificial intelligence as craftwork, in Seth Chaiklin and Jean Lave, eds, Understanding Practice: Perspectives on Activity and Context, Cambridge: Cambridge University Press, 1993.

Peter J. Thomas, ed, Social and Interactional Dimensions of Human-Computer Interfaces, Cambridge: Cambridge University Press, 1995.

Edmund Wilson, To the Finland Station: A Study in the Writing and Acting of History, New York: Harcourt, Brace, 1940.

Steve Woolgar, Why not a sociology of machines? The case of sociology and artificial intelligence, Sociology 19, 1985, pages 557-572.

Steve Woolgar, Reconstructing man and machine: A note on the sociological critiques of cognitivism, in Wiebe E. Bijker, Thomas P. Hughes, and Trevor J. Pinch, eds, The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, Cambridge: MIT Press, 1987.

Steve Woolgar, Configuring the user: The case of usability trials, in John Law, ed, A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge, 1991.

Steve Woolgar, Rethinking agency: New moves in science and technology studies, Mexican Journal of Behavior Analysis 20, 1994, pages 213-240.

Steve Woolgar, Technologies as cultural artifacts, in William H. Dutton, ed, Visions and Realities of Information and Communication Technologies, Oxford: Oxford University Press, 1996.