Accountable Artifacts:
Ethnomethodology and the Reconstruction of Computing

Philip E. Agre
Department of Information Studies
University of California, Los Angeles
Los Angeles, California 90095-1520
USA

pagre@ucla.edu
http://polaris.gseis.ucla.edu/pagre/

This is a draft. Please do not circulate it or quote from it.
Version of 31 July 1999.
4400 words.

Submitted to Mind, Culture, and Activity.


Abstract

Computer system design consists largely of work on language; whereas most engineering disciplines confer technical meanings on a limited number of words from the vernacular language, computer people draw an unbounded range of vernacular words into their formalisms. Yet technical words function differently from vernacular words, and this steady formalization of ordinary language should raise concerns. This paper sketches four approaches to the analysis of technical language, drawing on Heidegger's theory of equipment and its role in culturally organized activity, Husserl's theory of mathematization in the sciences, Schutz's theory of intersubjectivity, and Garfinkel's theory of the reflexive accountability of the relationship between system designers and users. These analyses suggest that system design might be improved by cultivating an awareness of the institutional embedding of language and language use.


1 Introduction

Eric Hobsbawm (1994) has called the 20th century an age of extremes. Every area of life has been swept by successive waves of enthusiasm and dismay whose accumulated legacy more than overwhelms us. And perhaps because of this, on the level of serious intellectual work the 20th century has been an age of suspicion. One intellectual project after another has portrayed us all as already overtaken by something vast and ambivalent, and has sought to locate a positive ethical project in the foundationless work of rolling back illusions and revealing something underneath.

The extremes of this century, for all their diversity, have had this in common, that they have all sought to appropriate and signify the methods of technical reason. And perhaps because of this, one of the most substantive intellectual legacies of the 20th century has been precisely the critique of technical reason. I identify this critique as a singular phenomenon -- it is *the* critique of technical reason -- inasmuch as its many variants share certain family resemblances that have not often been remarked upon. For example, the critique of technical reason does not locate its object in the artifacts of technology but in a larger institutionalized way of taking hold of the world, one that is both infinitely ambitious in its goals and distinctly limited in the considerations that it takes into account. Despite this commonality, the variants differ greatly in the proposals that they sketch or imply for the reform of technology.

The very notion of reforming technology remains a dangerous one, given the forceful claims that technologists have made as the custodians of reason and the devastation that antirationalist social movements have caused throughout the century. Yet despite the danger, and despite much else, I believe that the broad project of ethical suspicion toward technology has been fully vindicated in its goals, if not in its material accomplishments, and in this paper I want to develop the program of reform that emerges paradoxically from perhaps the least overtly reformist of twentieth-century research traditions, the path leading from phenomenology through ethnomethodology.

2 The trouble with technical language

I came to this topic through my own experiences as a technologist, and I want to frame my inquiry with a pretheoretical account of the dissatisfactions with technical work that propelled me into full-blown dissidence before I was halfway done with graduate school. My field was artificial intelligence. In its time, AI was in a certain sense the vanguard of computer systems design. Computing is a branch of engineering, and all engineering disciplines have in common the application of mathematics to practical problems in some sphere of human life. As such, all engineering involves formalization, by which engineers mean the process of redefining vernacular words in mathematical terms. Engineering involves talk, and much of this talk is concerned with glossing certain real or hypothetical situations in terms that, so to speak, build a bridge to mathematics.

It is not only the talk that builds the bridge, of course, but also the practices of measurement, manufacturing, graphical representation, standardization, and so on that render the material world accountable in certain terms -- that is, in certain mathematical terms. But I want here to focus on the role of talk, or more generally of language, in the methods of engineering. Among all engineering disciplines, computing is distinctive in its uses of language. Whereas a discipline such as mechanical engineering appropriates a relatively limited range of vernacular words for its work -- terms such as strength or force -- the routine work of computing consists in the appropriation of an unbounded range of words. Computers are, in a real sense, machines that we build so that we can talk about them. AI was the vanguard of computing in the sense that it sought to appropriate the most intellectually fraught elements of the vernacular language -- those that describe the intentionality and agency of human action. The history of AI, of course, has been characterized by a steady scaling-back of these ambitions, from the grand declarations of the 1950's to the studied sobriety of the present day. Nonetheless, AI was and is the project of building machines whose operation deserves to be narrated using terms such as reasoning, remembering, deciding, acting, believing, and learning. This is a controversial project within the broader field of computing (Shneiderman, 1998, pp. 380-385, 597-600), not to mention within intellectual culture as a whole (Bloomfield, 1987; Dreyfus, 1972; Taylor, 1985), and yet the actual dynamics of this project have never, in my view, been satisfactorily described.

When I first came to my interest in ethnomethodology, I was very much, as we say, a computer person. I was a graduate student in AI, thoroughly dissatisfied with the field but lacking any vocabulary in which to explain my dissatisfaction. The one thing I knew was that AI was, in a certain dimly existential sense, wrong -- that is, it portrayed human life in a way that seemed intuitively false. My friends and I who shared this sense decided that we would talk with the people who claimed to know what the problem was, and one of these people was Harold Garfinkel. Having been introduced by Lucy Suchman, I showed up in Harold Garfinkel's office at UCLA one day in 1985. I injured my knee on the way and arrived late, but Harold's real problem with me was that I was a computer person. On his understanding, the problem with computer people is pretty much the problem with sociologists: their practice of drawing upon the vernacular language, or indeed upon any and all language, as a resource for the coining of esoteric terminology that does violence to the ways that the words are used in the actual activities of daily life. His concern, very reasonably, was that I had visited him in order to draw on the language of ethnomethodology in the same fashion, and that I would try to pack his language into my bag, carry it home with me, and make computer models out of it. Although I hardly understood anything, I did understand that, and Harold started being nice to me when I said that, in my view, ethnomethodology was at bottom an ethical project, a practice of resistance to the symbolic violence of formalization. At the same time, however, I was an engineer, and I persisted in wanting ethnomethodology to be useful. I still persist in this, and my eventual goal here is to explain how.

In 1985, however, this goal was hardly well-formed, and my understanding of the problems with AI was only accidentally parallel to Garfinkel's. My own understanding focused on technical vocabulary. The way I formulated it, every technical term has two facets, the figurative and the formal. In its figurative aspect, a technical term is used to tell stories -- stories both about human lives and about the behavior of mechanisms. In its formal aspect, a technical term enjoys a mathematical definition, and its uses and implications are governed entirely by the deductive logic of the math. The difficulty is that these two uses of language are incompatible. Vernacular words and vernacular practices of narration simply do not behave in the same way as mathematics. The imaginable trajectories of narrative, for example, are ranked and ordered by the collective sense that Aristotle referred to as probability, whereas mathematics embraces with perfect equality every one of the possible deductions from a given set of premises.
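
To make this duality concrete, consider a toy sketch of my own devising -- it is drawn from no actual AI system, and every name in it is purely illustrative -- written here in Python for brevity. The vernacular word "remember" is given a formal definition:

    # A toy "agent" whose memory, in formal terms, is nothing but a list.
    class Agent:
        def __init__(self):
            self.memory = []  # the entire formal substrate of "remembering"

        def remember(self, fact):
            # Formally, "remember" means exactly this and nothing more:
            # appending a string to a list.
            self.memory.append(fact)

        def recall(self, cue):
            # Formally, "recall" means substring matching over that list.
            return [fact for fact in self.memory if cue in fact]

    agent = Agent()
    agent.remember("the meeting is at noon")
    print(agent.recall("meeting"))  # prints: ['the meeting is at noon']

In its formal aspect, "remember" here denotes the appending of a string to a list, and nothing else; in its figurative aspect, the very same word invites us to narrate the program as an agent that remembers and recalls, with all of the breadth that those words carry in vernacular storytelling. The engineer is rarely given any occasion to notice the distance between the two.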

Something important and dangerous, on this view, happens to language that is taken up by the discursive practices of technical formalization. (For a fuller account, see Agre, 1997a.) Words are pulled in two directions. In their formal aspect they become very narrow -- or, as the engineers say, they become precise. In their figurative aspect, by contrast, they simultaneously move to the opposite extreme and become shockingly broad -- they become, as the engineers say, vague. This, I had decided, is how technical people generally and AI people in particular build their bridges between the practical world and the Platonic idealities of mathematics: by taking hold of the words of the vernacular language and, one by one, stretching them all out of shape in the service of an impossible task. The task does not seem impossible to the engineers, of course, but that is because, for them, what they are doing does not involve the vernacular language in any important way. To the contrary, for them the work of formalism consists in revealing the direct correspondences, on the model of physics, between reality and mathematics (Agre, 1997b). Because engineering culture acknowledges only the barest of distinctions between reality and mathematics -- the mathematics of a model is understood as only approximating the mathematics of reality -- that culture is systematically oblivious to the transformations that it does in fact effect on the vernacular language. So far as my colleagues in AI were concerned, vernacular language is intrinsically vague to the point of worthlessness, and formalization is the process by which engineers redeem the vernacular language and make it finally useful. The idea that vernacular language has its own very different kind of precision, and that its seemingly pathological vagueness is in fact an effect of formalization and not a reason for it, was incomprehensible within that engineering culture, or indeed any engineering culture of which I was aware.

This view, if it is accurate, has momentous consequences. It suggests that technical formalism possesses a strange and parasitic dependence on vernacular language (Agre, 1992), and it suggests moreover that these technical enterprises are not redeeming language but are in fact systematically destroying it. That, anyway, is how it seemed to me during my time as a dissident in a technical field.

Since that time, I have sought to make an ethical project out of my dissatisfaction with the practices of computing. I have sought a constructive relationship to the AI community -- one that cleanly distinguishes the real intellectual utility of the field from its unfortunate uses of language -- and I have sought to make this constructive understanding useful to people whose lives are affected by computing. My purpose here, however, is to bring some analytical depth to the critique of technical language that I have just sketched. My concern is, therefore, not with substantive issues such as the proper analysis of planning or belief as empirical phenomena, but with the ways in which an understanding of language can be part and parcel of a larger organization of professional practice.

I will proceed by walking forward, so to speak, through four analyses of the issue, each necessarily brief but nonetheless building toward a reasonably useful proposal concerning the reform of technical practice.

3 Heidegger: Language as equipment

The first analysis draws upon certain themes of Heidegger -- not so much upon Heidegger's (1977 [1954]) later critique of technology as upon his early analysis of language and activity. With regard to language, Heidegger inherits a larger European tradition. Whereas Anglophone philosophers have generally supposed that we can define words, European philosophers have supposed that words define us. Put another way, Anglophone philosophers locate themselves outside of language and imagine that they can order it to their liking, whereas European philosophers locate themselves inside of language and imagine themselves as having been assigned the bottomless task of recovering awareness of the misleading baggage that has gone along with it. Heidegger is certainly in this latter camp, and he inherited the anthropological, or more precisely philological, notion that the proper aim of philosophical work was to recover an originary experience of the world.

Although he never made the connection himself, Heidegger's (1962 [1927], pp. 97-98) analysis of practical activity provides a fine analysis of the difficulty with technical language. We encounter the practical world, Heidegger said, as equipment -- that is, as beings that show up for us in the midst of routine practical activities, and that show up just in those ways in which they play a role in those activities. Although the universe of equipment can properly be analyzed, as one category of experience, as a totality, it is crucial that nobody ever encounters this totality of equipment as a being unto itself. Each of us might own a toolbox that contains certain items of equipment, but nobody owns a toolbox that contains all of the equipment in the world, because that would mean that one's whole culture would be sitting there as a single entity to be taken up as a whole. Our experience of equipment is not fragmentary -- each item of equipment is meaningfully related to other items, which are related to other items in turn -- but our experience of equipment is dispersed among a vast profusion of distinct practical situations. We are inside of this picture, not outside of it.

And so it is with the resources of language. A dictionary or a grammar may encourage the illusion that we can grasp language as a totality, but language as it is lived and used is considerably more complex than those representations of it can make out. Our experience of language is itself dispersed; the elements of language become available to us, existentially speaking, in particular activities, and show up for us in just the roles that they can play in those activities. Attempts to systematize language, on this view, necessarily falsify it, or at least they pass entirely over the phenomenon of language as an existential structure of practical activity.

On this view, the problem with technical language has two aspects. First, technical language treats items of worldly equipment as having a type of existence -- mathematical, or something just like it -- that they do not in fact have. And second, the problem with technical language is invisible because technical vocabulary is itself a dispersed network of equipment whose workings never become visible as a single coherent pattern. Having been socialized into a technical manner of speaking, AI practitioners will never have any occasion to encounter technical language as such, as opposed to particular technical terms that might assist in the modeling of this, that, or the other phenomenon. Language, this suggests, is historical in two ways: the uses of language help to organize practical activity, and among the domains of practical activity is language itself.

4 Husserl: The invisibility of formalization

Whatever its heuristic value, this Heideggerian account of language lacks much of an account of technical practice. Husserl's (1970 [1954]) _Crisis of the European Sciences_, written largely in response to Heidegger, is organized around such an account. Unlike Heidegger, Husserl says explicitly that he does not question the accomplishments of science (1970, pp. 24, 53). For Husserl, the supposed crisis in the sciences lies in a foreshortening in scientists' awareness of their own practices, and specifically their practices of mathematized representation. Husserl locates the historical origin of the problem in Galileo. He observes that Galileo was able to take for granted a considerable body of mathematical methods that had been developed in practical contexts by artisans (1970, pp. 24, 27-29, 49; cf. Hadden, 1994). Precisely because these methods had become routinized, he argued, Galileo could carry them over into the emerging realm of pure science without needing to acquire any worked-out consciousness of their inner phenomenal workings (1970, pp. 29, 48-49). For Galileo it was possible simply to *see* a certain geometrical shape, for example, whether in the heavens, or in his experimental apparatus, or in the diagrams that he drew on paper. In this founding moment of science, the work of mathematization became invisible (cf. Bowers, 1992; Knoespel, 1987; Star, 1989, 1995; Suchman & Jordan, 1997). Husserl wanted to make that work visible again -- not by ethnographic documentation but by phenomenological analysis. Perhaps because of his method, however, Husserl locates the difficulty in a structure of essentially solitary experience and not, for example, in language. The specifically linguistic work of formalization that went untheorized in Heidegger was invisible to Husserl as well.

5 Schutz: Reciprocal perspectives

With Schutz's phenomenology of intersubjectivity it becomes possible to analyze technical work generally and computer work in particular as a problem of social relationships. Again it is necessary to extrapolate. For Schutz (1967 [1932], pp. 98-101) the central problem of social life is that none of us has full access to the privileged experiences of the others. As a result, social life carries on through the ceaseless and prereflective application of the "general thesis of reciprocal perspectives" (Schutz, 1962, pp. 11-12): in making ongoing sense of one another, everyone fills in the interpretive gaps by assuming that the other person works the same way, so to speak, as oneself.

This theory provides a useful way of framing technical artifacts generally and computers in particular as occasions for social interpretation. Anyone who designs a computer faces a significant challenge: the computer will be used in distant times and places by unknown others who must find its behavior comprehensible even though the designer is not around (Agre, forthcoming). Of course that is true of any artifact, but computers are distinctive in that it is never quite clear whether the thesis of reciprocal perspectives applies to the artifact or to the designer. Computer people are in fact sharply divided on this issue. For some, it is the very definition of ethics in design that the computer be intelligible as a manifestation or extension of its designer, so that the intelligibility of the machine's function arises through a continual interpretation by the user of the designer's intention, much as if the computer were a message addressed to the user and tacked on the refrigerator (Garfinkel, 1967a; Woolgar, 1987, 1991). Whether any user experiences a computer in this fashion is, however, entirely unclear, given the institutional and imaginative gaps between designers and users in actual industrial practice (Grudin, 1991).

For AI people, on the other hand, the user is supposed to attribute a reciprocal perspective to the machine itself, treating it as the author of actions and intentions. This is not, as popularizers and philosophers often suppose, the assumption that computers "really" think or know or act; although the founders of AI have expressed views on this matter in their dual role as popularizers, this question of "really" never actually comes up in the daily practice of AI people, any more than it does in Schutz's theory -- the bracketing of that question for practical purposes is almost precisely what our interpretive charity toward one another consists in.

Schutz's theory, moreover, suggests an account of the purpose of formalization -- that is, the reason why computer people organize their language into discrete technical schemata. One reason, of course, is that this is the only way they know how to build computers. But a deeper answer is that a shared understanding of the relevant technical schemata between designer and user structures the user's otherwise underspecified task of interpreting what the machine is doing (Agre, 1999). And that in turn suggests what it really means to learn how to use a computer, namely to learn how to interpret one's own intentions and actions in technical terms. One needn't be conscious of doing this; one need simply experience the situation from one moment to the next in such a way.

6 Garfinkel: Reflexive accountability

Suggestive though it is, this Schutzian theory of computing provides only an underpersuasive account of computer users and no account at all of computer designers. A more satisfactory account can be had by moving along to Garfinkel's (1967b) ethnomethodological account of social action. On Garfinkel's account, the seemingly inherent instability of computers -- actors or actions? -- remains, as does the challenge to the designer that is posed by the distant circumstances of use. Ethnomethodology, however, directs our attention to an explanatory resource -- an overlooked elephant in the living room -- namely the computer system designer's central and pervasive concern with the accountability of the artifacts being designed. Computers are, for most purposes, accountable artifacts. A computer "works" in the deepest sense when, on a given occasion, it provides adequate direction to the user for the production of an account of its operation. If the designer were present in person, of course, then this work of accounting could proceed through a real-time collaboration that would be thoroughly infused with the situational contingencies of its production. The whole point, however, is that the work of accounting for the machine be possible without the designer's presence, and therefore without the designer's foreknowledge of the situational contingencies that will not only influence the user's interpretations but that will also influence which of the millions of theoretically possible formal sequences of behavior the machine will actually perform. The techniques of formalism, of course, help the designer considerably through the fiction that the machine's behavior, as a practical fact of social life, *can* be constrained to merely a million or a hundred billion discrete possibilities, each capable of being generated through mathematical deduction from the premises embodied in the circuitry and the interface.
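
The shape of this fiction can be suggested by a second hypothetical sketch (again my own, purely for illustration): a machine specified as a small finite-state system, whose formally possible sequences of behavior can be enumerated exhaustively by deduction from the premises of the specification.

    # A toy state machine along the lines of a bank machine's interface.
    # The transition table stands in for "the premises embodied in the
    # circuitry and the interface"; the formally possible behaviors are
    # simply the paths through it, derivable by deduction alone.
    TRANSITIONS = {
        "idle":      {"insert card": "await pin"},
        "await pin": {"correct pin": "menu", "wrong pin": "idle"},
        "menu":      {"withdraw": "idle", "check balance": "menu"},
    }

    def behaviors(state, depth):
        # Enumerate every formally possible action sequence of a given length.
        if depth == 0:
            yield []
            return
        for action, next_state in TRANSITIONS[state].items():
            for rest in behaviors(next_state, depth - 1):
                yield [action] + rest

    for sequence in behaviors("idle", 3):
        print(" -> ".join(sequence))

What the sketch cannot register, of course, are the situational contingencies -- the queue forming behind the user, the worn keypad, the half-remembered number -- that will determine which of these formally enumerable sequences the machine actually performs on a given occasion, and what account of its operation the user actually produces.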

Simply saying this, of course, does not provide us with an ethnomethodology of computer system design. That will only be found in the documentary materials of particular episodes of design, such as those of Suchman and Trigg's (1993) study of AI practitioners engaged in the lived work of aligning various notational and discursive resources through markings, gestures, and talk at a whiteboard. It does, however, tell us something about the problem that this work solves. As in the case of the Schutzian analysis, an ethnomethodology of computing will depend upon shared resources and the methods of their deployment. But by locating the phenomena of order specifically in social action, the ethnomethodological account directs our attention to designers' and users' reflexive orientations to one another's situations. If those situations are unknown in their particulars then they must be provided for in their potential multiplicity. And that is a task for which technical formalism is well suited. What I referred to as the pathology of technical language consists precisely in this: that it generates a space of potential consequences without need of those situational contingencies that are not registered in formal terms, that it does so by a systematic transformation of language, and that this transformation is specifically unremarkable.

7 Reforming technology

In speaking of pathology, I would seem to break with the Husserlian tradition that later takes shape as the ethnomethodological refusal to take sides in the disputes whose methods it documents. Most likely I do -- I am an engineer, not a sociologist, or at least my goal is not to document the accomplishments of technology but to reform its vices. I want to find the middle ground, and the theme of neutrality provides an adequately provocative lever for opening the issues. To see this, observe that I have also run thoroughly afoul of the traditional policies of computer work by blandly describing its purpose as the production of accountable artifacts. That characterization of computer work is certain to horrify AI people, for example. It will sound like nihilism because it seems to underconstrain the work -- would *any* account do, and if not then what determines *which* accounts of a device's operation are warranted? So long as the very notion of an account is derived from the designer's formalism, the problem is manageable. But users, of course, are producing their own accounts, in their own terms.

To succeed on its own terms, therefore -- that is, consciously to design artifacts that are genuinely usable -- technical practice should be led by its own ambitions to embrace something remarkably similar to the ethnomethodological policy of neutrality. Do the machines really know, intend, act, learn? We have already seen that the question does not actually come up. What does come up is the designer's reflexive orientation to the lifeworld of the user. Engineers who deserve the name ought to recognize that they are not simply designing material objects. Rather, as any number of sociologists have urged (e.g., Akrich, 1992; Haraway, 1991), they are designing hybrids of material and semiotic objects, and they are participating in concrete human relationships that are embedded in material and semiotic institutions. When a computer works or does not work, what is working or not working is this elaborately located hybrid.

Once I began to understand technical work in this way, my practical experience demonstrated to me that the main tradition of AI is, by these standards -- standards that, again, I am arguing, are internal to technical work properly understood -- a mess. The language of AI is shot through with systematic conflations that would confuse anyone, and yet so long as the linguistic mediation of formalism has remained invisible, the havoc wreaked by these conflations has gone undetected (Agre, 1997b). In order for this havoc to become visible within the horizon of technical work, ethnomethodology need not abandon its neutrality by rendering judgements on the adequacy or efficiency of particular technical designs. Engineering is capable of expanding to make those judgements itself, provided it summons the self-respect to reckon fully with the materiality of its language -- and, ultimately, with its own self as a located participant in the institutional world.

8 Conclusion

What, then, of the extremes and suspicions of the 20th century and their relationship to technology? The 20th century opened, and is now closing, with a particular form of technological enthusiasm. Epitomized by Italian Futurism and the discourse of cyberspace, this movement celebrates the acceleration of time, the overcoming of space, the economic forces that put all things into motion, and above all the capacity of particular technologies to impose their own intrinsic logic onto the human world (Kern, 1983). The leading Futurist, Marinetti, of course, became a Fascist, and the new order of technology has been denounced as Fascism by no less an authority than the Silicon Valley industry magazine _Upside_ (Malone, 1998).

But we need not resort to political epithets to make the point; it is not in the nature of technical artifacts all by themselves to impose human meanings. To regard technical artifacts in that way is bad engineering. To do well by its own lights, the design of distributed information technologies must discover the problems of institutional order -- the order that is produced nowhere except in each next occasion upon which particular artifacts are designed or used. As information technologies are increasingly designed to mediate the work by which institutional orders are produced, let us encourage information technologists to recognize their own participation in this work.

References

Agre, P. E. (1992). Formalization as a social project. Quarterly Newsletter of the Laboratory of Comparative Human Cognition 14, 25-27.

Agre, P. E. (1997a). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. C. Bowker, S. L. Star, W. Turner, and L. Gasser (Eds.), Social science, technical systems and cooperative work: Beyond the great divide. Mahwah, NJ: Erlbaum.

Agre, P. E. (1997b). Computation and human experience. Cambridge: Cambridge University Press.

Agre, P. E. (1999). Hazards of design: Ethnomethodology and the ritual order of computing. In preparation.

Akrich, M. (1992). The de-scription of technical objects. In W. E. Bijker and J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change. Cambridge: MIT Press.

Bloomfield, B. P. (1987). The culture of artificial intelligence. In B. P. Bloomfield (Ed.), The question of artificial intelligence: Philosophical and sociological perspectives. London: Croom Helm.

Bowers, J. (1992). The politics of formalism. In M. Lea (Ed.), Contexts of computer-mediated communication (pp. 232-261). New York: Harvester Wheatsheaf.

Dreyfus, H. L. (1972). What computers can't do: A critique of artificial reason. New York: Harper and Row.

Garfinkel, H. (1967a). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.

Garfinkel, H. (1967b). "Good" organizational reasons for "bad" clinic records. In Studies in ethnomethodology (pp. 186-207). Englewood Cliffs, NJ: Prentice-Hall.

Grudin, J. (1991). Interactive systems: Bridging the gaps between developers and users. IEEE Computer 24, 59-69.

Hadden, R. W. (1994). On the shoulders of merchants: Exchange and the mathematical conception of nature in early modern Europe. Albany: State University of New York Press.

Haraway, D. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs, and women: The reinvention of nature (pp. 149-181). London: Free Association Books.

Heidegger, M. (1977). The question concerning technology. In The question concerning technology and other essays (W. Lovitt, Trans., pp. 3-35). New York: Harper. Originally published in German in 1954.

Heidegger, M. (1962). Being and time (J. Macquarrie & E. Robinson, Trans.). New York: Harper. Originally published in German in 1927.

Hobsbawm, E. J. (1994). The age of extremes: A history of the world, 1914-1991. New York: Pantheon.

Husserl, E. (1970). The crisis of European sciences and transcendental phenomenology: An introduction to phenomenological philosophy (D. Carr, Trans.). Evanston: Northwestern University Press. Originally published in German in 1954.

Kern, S. (1983). The culture of time and space, 1880-1918. Cambridge: Harvard University Press.

Knoespel, K. J. (1987). The narrative matter of mathematics: John Dee's Preface to the _Elements_ of Euclid of Megara (1570). Philological Quarterly 66, 27-46.

Malone, M. S. (1998). Technofascism. Upside, August.

Schutz, A. (1962). Collected papers (vol. 1): The problem of social reality (M. Natanson, Ed.). The Hague: Nijhoff.

Schutz, A. (1967). The phenomenology of the social world (G. Walsh & F. Lehnert, Trans.). Evanston: Northwestern University Press. Originally published in German in 1932.

Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction (3rd ed.). Reading, MA: Addison-Wesley.

Star, S. L. (1989). Layered space, formal representations and long-distance control: The politics of information. Fundamenta Scientiae 10, 125-155.

Star, S. L. (1995). The politics of formal representations: Wizards, gurus, and organizational complexity. In S. L. Star (Ed.), Ecologies of knowledge: Work and politics in science and technology (pp. 88-118). Albany: State University of New York Press.

Suchman, L., & Jordan, B. (1997). Computerization and women's knowledge. In P. E. Agre and D. Schuler (Eds.), Reinventing technology, rediscovering community: Critical explorations of computing as a social practice (pp. 97-105). Norwood, NJ: Ablex.

Suchman, L. A., & Trigg, R. H. (1993). Artificial intelligence as craftwork. In S. Chaiklin and J. Lave (Eds.), Understanding practice: Perspectives on activity and context. Cambridge: Cambridge University Press.

Taylor, C. (1985). Human agency and language: Philosophical papers (vol. 1). Cambridge: Cambridge University Press.

Woolgar, S. (1987). Reconstructing man and machine: A note on sociological critiques of cognitivism. In W. E. Bijker, T. P. Hughes, & T. J. Pinch (Eds.), The social construction of technological systems: New directions in the sociology and history of technology (pp. 311-358). Cambridge: MIT Press.

Woolgar, S. (1991). Configuring the user: The case of usability trials. In J. Law (Ed.), A sociology of monsters: Essays on power, technology and domination (pp. 58-99). London: Routledge.