Date: Wed, 18 May 94 10:48:39 PDT
From: RISKS Forum <risks@csl.sri.com>
Subject: RISKS DIGEST 16.08

RISKS-LIST: RISKS-FORUM Digest  Weds 18 May 1994  Volume 16 : Issue 08

----------------------------------------------------------------------

Date: Tue, 17 May 1994 14:04:49 -0700
From: Phil Agre <pagre@ucsd.edu>
Subject: tactical research

The 17 May 1994 Wall Street Journal includes an article, excerpted from a
new book (which I have not seen) entitled "Tainted Truth" by Cynthia Crossen,
about the practice of "tactical research", the creation of customized policy
research as part of public debates.  She details the example of computerized
life-cycle analysis for cloth versus disposable diapers.  As with most
computer models, you can get a wide variety of answers depending on the
estimates you give for a large number of hard-to-measure quantitative
variables.  (How many times does a cloth diaper get changed in its lifetime?
How often do people use two diapers rather than one?)  I have heard many, many
anecdotes of computer models being manipulated to give the desired answers;
you probably have too.  It's certainly not a new phenomenon, having risen to
prominence during the zenith of the systems-modeling fad in the 1960s.

Crossen's article tries to explain how this manipulation comes about.  Are
the modelers consciously trying to fool people?  Much as I resist that answer,
out of an ideological preference for more complex and systemic kinds of
answers, that's basically what she says.  Once you start making your living
making models whose answers are convenient to certain sorts of people, she
says, a sort of treadmill gets going and it's hard to get off.  The phenomenon
is particularly important in the context of the ongoing US health care debate,
in which a blizzard of made-to-order numbers circulates through advertisements
and talk shows, all of them resting on more assumptions than you could
shake a stick at.

What's the answer?  How about a public education campaign about the concept
of sensitivity analysis?  The more reputable polling agencies might frequently
use loaded questions, but at least they feel obligated to explain that the
numbers have a statistical margin of error of +/- 3% or whatever.  Likewise,
people presenting models should expect to be asked, "what are your input
variables, and how sensitive is your answer to the range of plausible values
for each?"  That's a simple enough question that a fairly large percentage of
the population can understand and ask it.  It's not adequate, of course, since
assumptions can be built into computer models in a wide variety of ways.  But
I expect that it's sufficient to get rid of the first 90% of the bogus uses of
modeling.
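The question Agre proposes can be made concrete with a little code.  Below is
a minimal one-at-a-time sensitivity sketch in Python: a toy "energy per diaper
change" model in which each input is varied over a plausible range while the
others are held at their baseline values.  Every number and variable name here
is invented purely for illustration; none of it comes from Crossen's figures
or from any actual life-cycle study.

```python
def model(washes, per_change, load_energy, load_size):
    """Toy 'energy per change' figure for cloth diapers (arbitrary units).

    All constants are hypothetical: 10.0 stands in for the energy to
    manufacture one diaper, amortized over its lifetime of washes.
    """
    manufacture = 10.0
    return (manufacture / washes) + per_change * load_energy / load_size

# Baseline estimates and plausible ranges -- all made up for the example.
baseline = dict(washes=100, per_change=1.0, load_energy=5.0, load_size=20)
ranges = dict(washes=(50, 200), per_change=(1.0, 2.0),
              load_energy=(3.0, 8.0), load_size=(10, 30))

def sensitivity(model, baseline, ranges):
    """One-at-a-time sensitivity analysis.

    Vary each input over its plausible range while holding the others at
    baseline, and report the resulting span of model answers.
    """
    base = model(**baseline)
    report = {}
    for name, (lo, hi) in ranges.items():
        outs = [model(**dict(baseline, **{name: v})) for v in (lo, hi)]
        report[name] = (min(outs), max(outs))
    return base, report

base, report = sensitivity(model, baseline, ranges)
print(f"baseline answer: {base:.3f}")
for name, (lo, hi) in report.items():
    print(f"  {name}: answer ranges from {lo:.3f} to {hi:.3f}")
```

Even this crude exercise makes the point: if plausible settings of a single
input swing the answer by a factor of two or more, the headline number is an
artifact of the assumptions, not a finding.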

Phil Agre, UCSD

PS Here are some relevant references:

Cynthia Crossen, How "tactical research" muddied diaper debate, The Wall
Street Journal, 17 May 1994, page B1 (marketing section).

Cynthia Crossen, Tainted Truth: The Manipulation of Fact in America, New York:
Simon and Schuster, 1994.

Kenneth L. Kraemer, Siegfried Dickhoven, Susan Fallows Tierney, and John
Leslie King, Datawars: The Politics of Modeling in Federal Policymaking, New
York: Columbia University Press, 1987.

Ida R. Hoos, Systems Analysis in Public Policy: A Critique, revised edition,
Berkeley: University of California Press, 1983.

------------------------------

End of RISKS-FORUM Digest 16.08
************************
