The Social Context of Methodology
Many of us will have had experiences similar to those reported some years ago by
one teacher of psychology:
It has often been my experience in the teaching of psychology ... that my students had gained not so much as a glimmer of what I considered most important: a sense of the scientific endeavor. Mere exposure to the data and theories of biology or psychology in no way assured them of a concomitant grasp of scientific thinking or discovery (Monte, 1975, pp. xv-xvi).
This he considered "rather a grim discovery", but he tried to improve the situation by writing a textbook that would spell out for students the nature of what he called "Psychology's Scientific Endeavor". In doing so he joined the ranks of contributors to a distinct genre, one that is just about as old as experimental psychology itself. The first methodological text was published by Wilhelm Wundt in 1883, just after he had established his psychological laboratory (Wundt, 1883). Like most of Wundt's important works it was never translated into English.
Since then the genre of methodology texts has divided into one set of texts written essentially for aspiring or active research psychologists and another set directed at undergraduates, who could be regarded as consumers rather than producers of psychological knowledge. This split illustrates the dual function of methodology texts, namely, to explain their research practices to psychologists themselves, and to explain them to others. These explanations are provided in the form of procedural accounts that use a special language. Methodology texts are either written in this language or they seek to introduce students to it. This language, which is familiar to us all, deals in the verbal coinage of independent, dependent, intervening, and confounding variables, of data and hypothetical constructs, of operational definitions and of statistical significance, of experimental and quasi-experimental design, of construct, ecological and other types of validity, of antecedents and consequents, stimuli and responses, hypothesis and evidence, and so on.
Because our professional socialization was so intimately tied up with the learning of this language, it has acquired, for many of us, a taken-for-granted quality that can make us oblivious to the fact that it is a language, a form of discourse, a way of talking about things from a particular perspective, that of the investigator. Now, as Bill Bevan (1991) pointed out quite recently, what we say, what we believe, and what we do as scientists are by no means equivalent. He is in good company here, for, sixty years ago, Einstein said: "If you want to find out anything from the theoretical physicists about the methods they use, I advise you to stick closely to one principle: don't listen to their words, fix your attention on their deeds" (Einstein, 1933, p. 1). So let us take this advice and focus on what we do rather than on what we say and believe.
But how would one obtain an account of what scientists actually do? Well, one
way would be for trained outsiders to make systematic observations of
experimental situations, rather in the way that an anthropologist or
ethnographer makes observations of the customs and practices of foreign
cultures. Of course, to make sense of what they observe such observers would
have to acquire a great deal of information about the practices they are
studying, so they would not be naive observers, but they would try to maintain a
more distanced perspective than that of the participants who have been
socialized to take their own practices for granted.
Such observational studies of scientific experiments have in fact been carried
out since the nineteen seventies. They include, among others, studies of
scientists working in such areas as neurohormones (Latour and Woolgar, 1979),
plant proteins (Knorr-Cetina, 1981), high energy physics (Pickering, 1984;
Traweek, 1988), and lasers (Collins, 1985), though not, as yet, of behavioral scientists. These kinds of studies have raised some interesting philosophical issues (e.g. Hacking, 1988).
To begin with, an investigator has to choose a problem to work on, which means
that, explicitly or implicitly, he or she has to choose a particular way of
formulating the problem. That choice will depend on the way in which problems of
that kind have been formulated by the investigator's scientific predecessors,
or, more prominently in the social sciences, by the culture to which the
investigator belongs. Having chosen a problem one has to choose a method for
studying it. But no-one ever chooses from among the full range of potentially
relevant methods. At any particular time the scientific community to which the
investigator belongs will have clear preferences for and strong commitments to
certain methods rather than others. These preferences and commitments typically
change over time, but only some of that change can be accounted for by technological progress; much of it appears to be more a matter of fads and fashions, of economic considerations, of changing value priorities, and so on.
Having settled on a problem and a method, investigators collect their data. But
at this stage too there are choices to be made and a social context which
determines the outcome of those choices. In any investigative situation there is
always much more potential information than is actually used in the form of
data, and the definition of what is and what is not data depends on local
scientific traditions which assign a particular meaning to inherently ambiguous
observations. Studies of the actual practice of science have shown how the
assignment of meaning to experimental observations depends on social processes
of consensus formation, and how problematic such a consensus can sometimes be.
When the data have been collected and interpreted the study must be written up
in a form that will make it eligible for publication in an appropriate
scientific medium. At this stage the influence of social factors is so obvious
that it is impossible to overlook. There are rigid norms that scientific
publications must conform to, governing both style and content. In practice,
these norms have frequently been found to act back on the way data are
interpreted, on the choice of methods, and on the formulation of the problem.
For instance, one study of the APA Publication Manual has suggested that the style it prescribes still favors a broadly behavioristic framework (Bazerman, 1988).
All this would not matter too much if publication norms incorporated the
distilled essence of scientific rationality. However, these norms, like other
human norms, change historically, and while it is relatively easy to relate
those changes to prevailing scientific fashions and to extra-scientific
influences, it is not at all clear that historically later norms are
intrinsically more rational than historically earlier norms.
We need to introduce a common distinction between two senses of the term
"methodology" here (e.g. Kaplan, 1964). The term is often used to apply to
specific procedures, e.g. analysis of variance. But such procedures are always
applied in a broader methodological framework of which they are a part and from
which they derive their justification. The social aspects of research that I
have been talking about belong to this broader framework.
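The first of these senses, technique, is easy to display in isolation: a procedure such as the analysis of variance is just a computation, indifferent to the social circumstances under which its input numbers were produced. A minimal sketch in Python (the scores are invented purely for illustration):

```python
# One-way ANOVA F statistic computed by hand: a "technique" in the
# narrow sense -- the arithmetic knows nothing about where the scores
# came from or how the investigative situation was arranged.
from statistics import mean

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way analysis of variance."""
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    k = len(groups)              # number of groups
    n = len(all_scores)          # total number of observations
    # Between-groups sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three invented groups of three subjects each.
print(round(one_way_anova_f([4, 5, 6], [6, 7, 8], [9, 10, 11]), 2))  # prints 19.0
```

The computation is the same whoever collected the scores and under whatever social arrangements; it is the broader methodological framework, not the formula, that decides whether such numbers were worth producing at all.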
Both specific procedures, i.e. technique, and methodology in the broader sense
change over time. But they change in different ways. In the case of techniques
it makes sense to speak of progress. Later procedures generally represent an
improvement over earlier procedures. That means that in teaching techniques we
can afford to ignore history. Why bother with procedures that we know are not as
good as the ones we have now? But when it comes to broader methodological
questions we ignore history at our peril. That is because changes in the way
such questions are answered do not seem to obey the law of progress in any
obvious and uncontroversial way.
So when we are dealing with more general questions of methodology, rather than
questions of specific procedures, we had better not ignore history. If we do, we
not only run the risk of repeating the mistakes of the past; more seriously, we
run the risk of becoming trapped in currently fashionable preconceptions and
closing off discussion about the more fundamental methodological issues. In that case, teaching methodology will be a lot like teaching research ethics: we will be imparting norms rather than questioning them, expounding on ideals rather than inquiring into actual practices. Our implicit message will be that the scientific attitude must be abandoned when it comes to the study of science itself.
Rather than run that risk, I would like next to apply a historical perspective to the way in which psychologists have tried to come to terms with the social context of their methodology.
Not that psychologists have embraced this concern with great enthusiasm. Until
relatively recently they hardly seemed to notice that there was anything worthy
of concern at all, and although that is no longer the case, the tendency to
minimize these concerns is still strong. However, as early as 1933 the
Psychological Review published a
paper by a sharp young experimentalist, Saul Rosenzweig, in which he pointed out
that in psychological experiments the experimenter formed part of the social
environment of the object experimented upon. This made it not only practically difficult but also theoretically dangerous to separate
the effects of the experimenter from the effects of other experimental
variables. In other words, in psychology experimental effects are embedded in
the social situation of the experiment and cannot necessarily be generalized to
other social situations. Rosenzweig's analysis clearly implied that the effects
of manipulating specific stimulus variables did not occur in a social vacuum but
were mediated by the effects of an investigative situation of which they formed
a part. The silence that greeted this challenge to methodological orthodoxy was
deafening. Nobody took up the fundamental issues it raised, and there was not to
be any talk of the social psychology of the psychological experiment for another
three or four decades.
As far as the discipline was concerned Rosenzweig's theoretical analysis could
hardly have come at a worse time. The heyday of neo-behaviorism was just
dawning, and a very different conception of psychological investigation was
coming into its own. The year before the appearance of Rosenzweig's article had
seen the publication of Tolman's
Purposive Behavior in Animals and Men (1932). There Tolman introduced the
language of independent, dependent, and intervening variables to describe
psychological investigation, a language which had not been in use before. This
was soon taken up by other prominent experimentalists, notably Boring (1933) and
Woodworth (1934, 1938). The latter in particular exercised tremendous influence
through his widely disseminated textbooks, both at the graduate and the
undergraduate level (Winston, 1988, 1990). None of these authorities had the
slightest sympathy for Rosenzweig's emphasis on the fundamentally social nature
of psychological investigation. They represented the discipline's dominant
aspiration to be accepted as a fully fledged natural science rather than merely
a social science. The language of independent and dependent variables was
sufficiently abstract to obscure the social features of psychological
investigation which made it different from investigations in chemistry and
biology. If there were such features they could now only be described as one set
of variables among others. But that had not been Rosenzweig's point. What he had
seen was that empirical relationships of the kind established for example by
chemists depended on the possibility of separating the role of the chemist from
the role of the chemicals he or she worked with. Chemical laws need be concerned
only with the chemicals, not the chemist. But in psychological investigation the
role of the psychologist and the role of the treatments or stimulus materials he
or she applied were hopelessly confounded, as we would say. So stimulus A applied in social context A was not the same as stimulus A applied in social context B.
Another psychologist who had come to a similar conclusion a few years earlier
was Kurt Lewin, though his ideas on this question were not so accessible to an
American audience. In the classical series of experiments on such topics as
level of aspiration and the psychology of anger, which Lewin directed in Berlin
in the late nineteen twenties and early nineteen thirties
the experimenter was conceptualized as very much part of the situation to
which the subject responded. This was also the case in the well known experiments on "group climates" that Lewin conducted at the University of Iowa.
Groups of boys engaged in a task were supervised by adult confederates of Lewin
who deliberately adopted different styles of behavior so as to create three
distinct kinds of social atmosphere, or "social climate", in the groups. Lewin
labelled them "authoritarian", "democratic", and "laissez-faire". In the
authoritarian groups the adult leader dictated work assignments and techniques,
in the laissez-faire groups the boys were pretty much left to their own devices,
and in the democratic groups the adult provided options and discussed them with
the boys. We are not concerned here with the striking differences that were
observed between the behavior of the boys under different social conditions but
with the methodology used to produce these conditions. Like all experiments this
study looked at the effect of deliberately varied antecedent conditions on
behavior. But what were the variable antecedent conditions? Did they consist of
stimuli, or independent variables, in addition to the effect of the
experimenter? Obviously not, as the effect of the experimenter
was the independent variable. Note
that it is the social effect created by the experimenter, not the experimenter
as an individual, which provides the crucial antecedent conditions. In other
words, the social relationship of experimenter and subjects lies at the core of
this experimental manipulation. It is not something that can be added to or subtracted from the rest of the antecedent conditions, leaving an unchanged remainder.
The reaction of American experimenters to Lewinian methodology can only be
described as ambivalent. On the one hand, people were greatly impressed by the
practical relevance of these studies and by the boldness with which they
confronted really significant issues. On the other hand, they were also puzzled
and sceptical, because these studies did not conform to the emerging
methodological orthodoxy. Leon Festinger, one of Lewin's most prominent
admirers, still felt this ambivalence when he looked back four decades later.
"Who would have imagined doing a 'scientific experiment' in which the independent variable to be manipulated was autocratic versus democratic atmospheres", he recalls (Patnoe, 1980, p. 239). But he adds, "I still have no
conceptual understanding of what all the differences were between these
procedures" (ibid.). He carefully says, "conceptual" understanding, because on
one level he obviously did have an understanding of the difference between an
autocratic and a democratic atmosphere, as we all do. So what was this
"conceptual understanding" that proved so elusive, even after forty years?
Festinger was an enlightened person of broad interests but as an experimentalist
he was thoroughly committed to the new methodological orthodoxy that had
crystallized around the time he received his graduate training (Festinger,
1953). This was the orthodoxy expressed in the language of independent and
dependent variables to which I have already referred.
Because this is still the prevailing orthodoxy, let us try to understand the
source of Festinger's problem, using Kurt Lewin's deviant methodology as a
counterfoil. Every description and every choice of a particular methodology
involves, explicitly or implicitly, some assumptions about the nature of that to
which the methodology is being applied. If we want to be folksy about this we
can say that when we choose to eat the dish in front of us with a spoon
rather than a fork we do so because we believe that it is soup rather than salad
which has been put on the table. Or, if we want to be philosophical about it, we
can say that our methods imply ontological assumptions. So if we think we can
investigate some phenomenon by dividing it up into distinct independent and
dependent variables, it must mean we believe the nature of this phenomenon to be
such that we can safely go ahead without running into the kinds of problems we
would run into if we tried to eat our soup with a fork. Here I think is one
source of the misunderstanding between Festinger and Lewin. Festinger believed
that to be scientific you had to isolate distinct variables, and you had to
assume that you could do this without doing violence to the nature of reality,
because if you didn't, you would have to give up your faith in the effectiveness
of the scientific method. Lewin's experiments resisted an analysis in terms of
relationships among distinct independent and dependent variables, and so, in
spite of his admiration for their originator, Festinger puts "scientific
experiments" in inverted commas when he refers to them. They are interesting,
but they are not the genuine article.
Lewin, of course, came out of a different methodological tradition, one which
did not equate the scientific method with the search for functional
relationships between isolable and ontologically distinct variables. He shared
the conviction of the Gestalt psychologists that reality was not a bundle of
elements, and therefore it wasn't very smart to investigate it as though it
were. He called his approach "field theory", and though people generally grasped what he meant by that conceptually, its methodological implications were often ignored. But as Lewin's own work demonstrated, field theory legitimated,
and ultimately required, working with holistic units, like group climates, that
represented complexly patterned experimental situations. These were of course
social situations organized by the nature of the relationship between
experimenters and subjects.
Lewin was not the only psychologist whose approach was difficult to reconcile
with the methodological orthodoxy that became established between the mid
thirties and the mid fifties of this century. Analogous tendencies can be seen
in the work of social psychologists like Sherif (1953) and Asch (1952). But social psychologists had a low rank in the internal status hierarchy of the discipline (Sherif, 1979) and, far from exerting any wider influence, would themselves have to adapt to the dominant methodological paradigm if they wanted
to be taken seriously. So when the question of experimenter-subject
relationships was reopened in the nineteen sixties after a long absence from the
literature, the discussion was conducted within a methodological framework
developed by experimentalists strongly committed to the image of psychology as a
natural rather than a social science. This gave the discussion a particular slant.
The new phase in psychology's concern with the social aspects of its methodology
received its initial impetus from Martin Orne's realization that there was
something similar about the hypnotic situation and the experimental situation
(Orne, 1962). He discussed this in terms of the concept of "demand
characteristics", a Lewinian concept that indicated the existence of some
historical continuity in this area. Demand characteristics are features of
perceived situations, not isolated
stimulus elements. Orne also pointed out that applied psychologists had long had to recognize the social effects of investigative situations, most explicitly so in the well known Hawthorne studies.
The years that followed Orne's work saw an explosion of empirical studies on the
social psychology of the psychological experiment, so that by 1978 Rosenthal and
Rubin were able to entitle their position paper on the topic: "Interpersonal
expectancy effects: The first 345 studies". There is no question that this
represented a great advance over the previous period. It
certainly made it more difficult to think of experimental situations as
utterly unique among human situations in having no social character, or at least
none that mattered. To achieve that result most of the effort in the new
research area had been devoted simply to the demonstration of the existence of
interpersonal expectancy effects or of relevant intrapersonal factors, like
evaluation apprehension. But, as one of the commentators on the Rosenthal and
Rubin position paper pointed out, this empirical advance had not been
accompanied by a commensurate theoretical advance (Adair, 1978).
Among other things, this meant that the new research interest in social factors in experimentation was held to have only limited methodological implications. The rules of the game were still those that had been developed in the thirties and forties by the hybridization of elements from stimulus-response psychology, positivist philosophy and statistical procedures from applied biology. By the nineteen sixties and seventies these rules governed what was virtually the only game in town for American research psychologists, and one played by these rules if one wanted to achieve plausibility within the discipline and maintain one's own scientific self-respect.
So experiments of fairly conventional design became the favorite context for discussions of the social aspects of experimentation. This had its limitations. Methodological orthodoxy depended on offering the same abstract account of experimental situations, whether in physics, biology or psychology. Experiments were situations in which human investigators manipulated some external material in order to verify or falsify their predictions. Valid conclusions could only be reached if it was assumed that investigators and the objects of their investigation had no effect on each other apart from what could be manipulated or controlled in the experimental situation. From this point of view the unavoidable social features of human experimentation had the status of a nuisance. They were categorized as "artifacts", with an implied distinction between experimental facts and artifacts.
But how do you distinguish between facts and artifacts? The answer one finds in
the literature seems to come down to this, that artifacts are unintended
effects. In other words, we end up with a completely subjective criterion for
distinguishing fact and artifact. We become involved in this paradox when we try
to talk about experimental situations in a language that already presupposes a
particular version of what an experiment is, a point made quite a number of
years ago by Gadlin and Ingle (1975). The concept of "experimental artifact" may
be useful in the technical context of a particular experiment, but it becomes
seriously misleading if it is used to imply some distinction in principle
between the factual and the artifactual components of experiments in general.
All experimental situations are artifactual, and so are their products. The
whole point of conducting experiments is to produce artifacts, to construct
situations and to make observations that would not exist without our deliberate
efforts. Our goals and intentions are deeply implicated in the structure of
experiments, and as our goals change so does the structure of our experiments,
including of course the social structure.
Let me illustrate this with some historical examples that I have been looking at
during the last few years (Danziger, 1985; 1990). Everyone knows about the
world's first major psychological laboratory where a large proportion of the
first generation of experimental psychologists was trained. That was Wilhelm Wundt's laboratory at Leipzig, where, strange as it now seems, experimenters and subjects routinely exchanged roles.
Maybe you feel that the difference between then and now simply demonstrates
scientific progress, that people have become more sophisticated about likely
sources of error in psychological experiments and about experimental design in
general. But that interpretation is difficult to sustain in the face of three
sets of evidence. First of all, when one actually reads the early experimental
literature one finds that its authors were extremely sensitive to possible sources of error in their procedures - in some ways they were more sensitive than a modern critic, unfamiliar with those procedures, is likely to be. For
instance, there is hardly a better example of meticulous attention to laboratory
procedures and problems than Titchener's "Manual of Laboratory Practice"
(1901-1905). Secondly, the old pattern never died out completely but rather
survived in a small way in the sensation/perception area, which is not generally
associated with scientific backwardness. Thirdly, when one makes a systematic
survey of the empirical reports of early modern psychology one finds instances
where the relationship of experimenters and subjects was much more like our own
conception of what it should be. This was particularly the case where the
subjects were children or were drawn from clinical populations. In those cases experimenters and subjects did not exchange roles; the two roles remained quite separate. Subjects provided the raw data and experimenters analysed them, theorized about them and published the results. In this respect these experiments were much more like modern mainstream experiments, but in other respects they tended to be methodologically naive compared to the experiments conducted at Leipzig.
Recall my earlier distinction between technique and methodology. The notion of
progress can be unproblematically applied to technique, but not to methodology.
Now, any differences in technique between the early experimentalists and
ourselves clearly depend on more fundamental differences in methodology. We do
not have the same conception of what we are trying to do when we engage in
psychological research. We have different knowledge goals.
This concept of knowledge goals brings us back to the investigator. In order to
understand investigators' adoption of a certain methodology we have to
understand their goals, but to do that we first have to distinguish between
specific and more general goals. Investigators commonly look for specific
information relevant to a particular hypothesis. But they do not accept just any
piece of information indiscriminately. They make certain demands on the data
they are willing to accept. We have concepts like reliability and validity to
express such demands. When we do research we are only interested in certain
kinds of information, information that satisfies certain criteria. Our behavior
implies that we have general knowledge goals in addition to the specific goals
formulated in research reports. Normally, we do not need to formulate these
general goals explicitly; we can take them for granted, because they are shared
by all members of our scientific community. In fact, such shared goals are part
of what constitutes a scientific community.
Day-to-day research activity depends on never questioning the general knowledge
goals of your scientific community. That is part of what Thomas Kuhn (1970)
meant by "normal science". The price paid is the equation of methodology with
technique and the resulting conservatism with regard to fundamental
methodological change. If, however, we want to understand methodology in the
more general sense, we need to adopt a broader perspective so that we can
compare different scientific communities with each other without prejudging the
issue of which is on the right track and which is not. At this level of analysis
the concept of general knowledge goals is indispensable.
For example, the arrangements of the early psychological laboratories become
understandable as perfectly rational when one keeps in mind the general
knowledge goal pursued by these investigators. They had a very definite
knowledge object, namely, the universal features of the individual adult
consciousness. To obtain information on this object you needed sophisticated and
reliable observers of the individual consciousness. However, it was not the individuality of each consciousness that was of interest but the features common to all. There was therefore no reason why experimenters and subjects should
not exchange roles, especially as an understanding of the purpose of the
experiment was held to establish the best conditions for careful introspective
observation (Wundt, 1906; Danziger, 1980). The knowledge goals of these
investigators required sophisticated, well informed and dedicated experimental
subjects, and their methodology was designed around that requirement.
Those who did research on children or on clinically stigmatized subjects, on the
other hand, had different knowledge goals. They were ultimately interested in
deviance from some supposed adult (and usually male) norm and so had to work
with subjects they could not exchange places with, subjects who were by
definition excluded from filling the shoes of the scientific investigators.
During the period between the two World Wars American psychology increasingly
adopted a kind of research practice for which the aggregation of observations
across groups of individuals was fundamental. In this style of investigative
practice, pioneered by Francis Galton, knowledge goals had shifted away from the
individual consciousness, or the
individual organism for that matter. The new goal focused on the production and
analysis of inter-individual variance. For that you needed data from a
relatively large number of subjects, and this imposed new constraints on the
social context of methodology. As resources were limited, investigators and
subjects would have to meet for relatively brief periods and would almost
certainly be strangers to one another.
The social situations in which psychological data were now produced differed
quite considerably from the situation in earlier investigations, whether of the
laboratory or the clinical type. It seems likely that a different set of social
psychological problems would characterize each of these investigative
situations. Studies of the social psychology of psychological experiments have
tended to concentrate on one kind of experimental situation, that of the
contemporary mainstream experiment. It would be interesting to broaden this perspective and to extend social psychological analysis to other types of investigative situations.
This is particularly desirable today, because in recent years there have been a
number of innovative developments in investigative practice. After several
decades of methodological gridlock there appears to be a growing realization
that the range of social contexts which lend themselves to the systematic
collection and analysis of psychological information is much larger than the
very limited context to which most psychological investigation had been reduced.
To mention only a few, we now have discourse analysis (Potter and Wetherell, 1987) and ethogenic analysis (Marsh, Rosser and Harré, 1978), both of which are proliferating in Britain; we have various kinds of collaborative research in institutional settings (e.g. Argyris, 1985; Torbert, 1981); we have research in
cultural psychology which has adopted some of the methods of modern ethnographic
research (Stigler, Shweder and Herdt, 1990). All these methodologies are based
on a different construction of the relationship between investigators and their
subjects than the one that has characterized psychology over much of this
century. In fact, it is inappropriate to speak of "subjects" in the context of
"new paradigm" research. What we can say is that all forms of investigative
practice have participants, and that in one particularly widespread form of
practice the relationship among some of the participants takes the form of the
familiar division into asymmetrical experimenter and subject roles. But we
should stop treating this form as though it provided the only conceivable social
framework for the achievement of scientifically significant psychological knowledge.
Because of the close link between the kind of knowledge we achieve and the
social conditions under which it is produced, we always impose limitations on
our knowledge when we accept restrictions on how it is produced. Often, these
limitations are accepted quite deliberately; indeed, they are seen as serving
positive knowledge goals. But if a certain methodology becomes simply taken for
granted, thought about the kind of knowledge it yields tends to stop. Then we may end up
with a kind of knowledge we did not really want, but we accept it anyway,
because we have been taught that if we abandon a specific version of scientific
methodology we are abandoning all hope of any kind of valid knowledge.
The kind of knowledge that the conventional methodological context is very good
at producing is knowledge about the unidirectional effect of unilateral
interventions. It is also knowledge that targets statistical effects rather than
individuals. There is certainly room for such knowledge, for example, when you
are interested in what Philip Runkel (1990) calls "casting nets", that is,
finding the proportion of some population with a particular attribute, as in
public opinion or advertising research. What must be resisted, however, is the
insinuation that this kind of methodology provides the only basis for any kind
of psychological knowledge that deserves to be called scientific. Like any
methodology, net casting makes strong assumptions about the nature of the
objects to which it is applied, and these assumptions cannot be adequately
tested within the framework of the methodology (Danziger, 1988). Insofar as
these assumptions are fundamentally incorrect, blind persistence in the
indiscriminate use of such a methodology can hardly be regarded as an example of
true scientific spirit. Genuine science would seem to have more to do with care
in choosing methods appropriate to the subject matter.
If departures from methodological orthodoxy are becoming more and more frequent,
it is because traditional assumptions about our subject matter are increasingly
being questioned and traditional knowledge goals are increasingly being replaced
by new goals. Developmental psychologists like Winegar and Valsiner (1992) have
indicated that the current psychogenetic reconceptualization of the
developmental process requires "a rethinking of some of our most cherished
methodological tools: independent and dependent variables and analysis of
statistical variance" (p. 258). Others have elaborated on a kind of knowledge
required by practitioners which is fundamentally different from the kind of
knowledge supplied by the application of traditional methodology (Polkinghorne,
1992; Kvale, 1992; Shotter, 1993). Gergen (1989) and others see the goal of
psychological inquiry as lying in the direction of an enlargement and enrichment
of psychological intelligibilities and therefore needing to draw on hermeneutic
methods. Moreover, a considerable body of feminist scholarship has not only
called attention to the historical and systematic links between the social
context of research and a certain kind of knowledge (Sherif, 1979; Wallston and
Grady, 1985; Morawski, 1988; Bayer and Morawski 1992), but has gone on to
suggest alternatives (Hollway, 1989; Lykes, 1989; Morawski, 1990; Morawski and Steele, 1991).
In the face of these rapidly growing developments the task of the teacher of
psychological methodology needs rethinking. Surely it is no longer adequate to
define this task in terms of training in a prescribed set of technical skills
associated with one methodological approach. That is not to say that one set of
skills is simply to be replaced with another. Rather, we need to pay more
attention to questions of methodology as distinct from questions of technique.
That means abandoning the unfortunate tradition of pretending that one narrow
set of techniques exhausts the range of methodological options at our disposal.
It means raising questions about our knowledge goals and about the ontological
assumptions that are implied by different methodologies. And that means going
beyond the idea of methodology as the embodiment of contextless general
prescriptions and recognizing that methodology in use always involves the
implementation of some social scenario.
The methodological norms of our discipline crystallized about half a century
ago. At that time, and earlier, many prominent psychologists took a definite
interest in current developments in the philosophy of science, and their
methodological notions reflected that interest. Unfortunately, the period of
crystallization, which the discipline probably needed, gradually developed into
a period of fossilization, which the discipline probably did not need. A
methodological consensus achieved at a particular historical moment was accepted
as valid for all future time. No longer did it appear necessary to keep up with
developments outside the discipline which might be relevant to questions of
methodology. Not only did it become unfashionable to take an interest in what
was happening in the philosophy of science, but other related fields were
treated with the same disdain. More recent developments in the history and
sociology of science were widely ignored, although some of these developments
were of potentially enormous significance for our conception of psychological
research. Thomas Kuhn's widely misinterpreted work (Peterson, 1981) was a
dubious exception. But in any case, since it first appeared thirty years ago
(Kuhn, 1962), there have been many further investigations of the social context
of science, and this has led to a number of more recent formulations regarding
the historical contingency of scientific methodology. These range from Dudley
Shapere's (1984) demonstration of the historical interplay of method and content
in science to Ian Hacking's (1992) notion of "styles of reasoning" of which the
statistical style is a prime example. As Thomas Nickles (1989, 318, 321) put it:
"methodology is a human social-scientific subject and not a purely logical
subject....the methodological order and the social order are inseparable".
Perhaps the most radical, but, at least in Europe, also the most influential,
version of the new account of scientific method is to be found in the writings
of the French philosopher, Michel Foucault, who is known for his concept of a
"regime of truth". "Each society", he says, "has its regime of truth: that is,
the types of discourse which it accepts and makes function as true; the
mechanisms and instances which enable one to distinguish true and false
statements....the techniques and procedures accorded value in the acquisition of
truth; the status of those who are charged with saying what counts as true"
(Foucault, 1980, 131).
When one makes the jump from these kinds of conclusion to American textbook
discussions of psychological methodology one not only knows one has landed on a
different continent, one also feels one is in a different century. That simple
faith in the timeless effectiveness of certain techniques, mistakenly identified
with the scientific method in the singular, may be quite touching, but it
certainly seems to have more in common with the world of the late nineteenth
century than with that of the late twentieth.
The question is whether we can afford to go on educating our students in this way. If we do, are we not perhaps consigning them to the role of low-level technicians who are unable to demonstrate much understanding of the wider implications of what they are doing, and hence are increasingly less likely to be consulted about those implications? Of course, no-one is suggesting that technical questions are no longer important, or that "anything goes". What I am suggesting is that in this day and age a purely technical training is not enough. If students are to apply their technical skills wisely and creatively, they need to be able to put those skills in perspective, and for that they need something more than technical training: they need a broad, interdisciplinary education in methodology.
G. Stanley Hall Lecture, annual meeting of the American Psychological Association in Toronto, August 1993.
References
Adair, J.G. (1978). Comment. The
Behavioral and Brain Sciences, 3,
Asch, S.E. (1952). Social psychology. Englewood Cliffs, N.J.: Prentice-Hall.
Bayer, B.M. and Morawski, J.G. (1992). Experimenters and their experimental
performances: The case of small group research, 1950-1990. Presented at the
annual convention of the Canadian Psychological Association,
Bazerman, C. (1988). Codifying the social scientific style: The APA
Publication Manual as a behaviorist rhetoric. In C. Bazerman,
Shaping written knowledge: The genre and
activity of the experimental article in science (pp. 257-277). Madison: University of Wisconsin Press.
Bevan, W. (1991). Contemporary psychology: A tour inside the onion.
Boring, E.G. (1933). The physical
dimensions of consciousness. New York: Century.
Collins, H.M. (1985). Changing order:
Replication and induction in scientific practice. London and Beverly Hills: Sage.
Danziger, K. (1980). The history of introspection reconsidered.
Journal of the History of the Behavioral
Sciences, 16, 241-262.
Danziger, K. (1990). Constructing the
subject: Historical origins of psychological research. New York: Cambridge University Press.
Danziger, K. (1992). The project of an experimental social psychology:
Historical perspectives. Science in
Context, 5, 309-328.
Einstein, A. (1933). On the method of theoretical physics. Oxford: Clarendon Press.
Festinger, L. (1953). Laboratory experiments. In L. Festinger and D. Katz (Eds.),
Research methods in the behavioral sciences. New York: Dryden Press.
Foucault, M. (1980). Power/Knowledge.
Edited by Colin Gordon. New York: Pantheon.
Gadlin, H., & Ingle, G. (1975). Through the one-way mirror: The limits of
experimental self-reflection. American
Psychologist, 30, 1003-1009.
Gergen, K.J. (1989). The possibility of psychological knowledge: A hermeneutic
inquiry. In M.J. Packer and R.B. Addison
(Eds.) Entering the hermeneutic circle:
Hermeneutic investigation in psychology. Albany: State University of New York Press.
Hollway, W. (1989). Subjectivity and
method in psychology: Gender, meaning and science. London: Sage.
Kaplan, A. (1964). The conduct of
inquiry: Methodology for behavioral science. San Francisco: Chandler.
Knorr-Cetina, K. (1981). The manufacture
of knowledge: An essay on the constructivist and contextual nature of science. Oxford: Pergamon Press.
Kuhn, T.S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Kvale, S. (1992). Postmodern psychology: A contradiction in terms?
In S. Kvale (Ed.), Psychology and postmodernism. London: Sage.
Latour, B., & Woolgar, S. (1979).
Laboratory life: The social construction of scientific facts. Beverly Hills: Sage.
Lewin, K., Lippitt, R., & White, R.K. (1939). Patterns of aggressive behavior in
experimentally created "social
climates". Journal of Social Psychology, 10, 271-299.
Lykes, M.B. (1989) Dialogue with Guatemalan Indian women: Critical perspectives
on constructing collaborative research. In R.K. Unger (Ed.),
Representations: Social constructions of gender. Amityville, NY: Baywood.
Marsh, P., Rosser, E., and Harré, R. (1978). The rules of disorder. London: Routledge & Kegan Paul.
McMullin, E. (1992) (Ed.), The social
dimensions of science. Notre Dame: University of Notre Dame Press.
Monte, C.F. (1975). Psychology's scientific endeavor.
Morawski, J.G. (1988). Impossible experiments and practical constructions: The
social bases of psychologists' work. In J.G. Morawski (Ed.),
The rise of experimentation in American psychology. New Haven: Yale University Press.
Morawski, J.G. (1990). Toward the unimagined: Feminism and epistemology in
psychology. In R.T. Hare-Mustin & J.
Marecek (Eds.), Making a difference:
Psychology and the construction of gender (pp. 150-183). New Haven: Yale University Press.
Morawski, J.G., & Steele, R.S. (1991). The one or the other?
Textual analysis of masculine power and feminist empowerment.
Theory and Psychology,
Nickles, T. (1989). Justification and experiment. In D. Gooding, T. Pinch, & S. Schaffer (Eds.), The uses of experiment. Cambridge: Cambridge University Press.
Orne, M.T. (1962). On the social psychology of the psychological experiment:
With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783.
Orne, M.T. (1970). Hypnosis, motivation, and the ecological validity of the
psychological experiment. In W.J. Arnold & M.M. Page (Eds.), Nebraska symposium on motivation, 1970. Lincoln: University of Nebraska Press.
Patnoe, S. (1988). A narrative history of
experimental social psychology: The Lewin tradition. New York: Springer.
Peterson, G.L. (1981). Historical self-understanding in the social sciences: The
use of Thomas Kuhn in psychology.
Journal for the Theory of Social
Behaviour, 11, 1-30.
Pickering, A. (1984). Constructing
quarks: A sociological history of particle physics. Chicago: University of Chicago Press.
Polkinghorne, D.E. (1992). Postmodern epistemology of practice. In S. Kvale (Ed.), Psychology and postmodernism. London: Sage.
Potter, J. & Wetherell, M. (1987).
Discourse and social psychology: Beyond attitudes and behaviour. London: Sage.
Roethlisberger, F.J., & Dickson, W.J. (1939).
Management and the worker. Cambridge, MA: Harvard University Press.
Rosenthal, R., & Rubin, D.B. (1978).
Interpersonal expectancy effects: The first 345 studies.
The Behavioral and Brain Sciences,
Rosenzweig, S. (1933). The experimental situation as a psychological problem. Psychological Review, 40, 337-354.
Runkel, P.J. (1990). Casting nets and
testing specimens: Two grand methods of psychology. New York: Praeger.
Shapere, D. (1984). Reason and the search
for knowledge: Investigations in the philosophy of science. Dordrecht: Reidel.
Sherif, M., & Sherif, C.W. (1953). Groups
in harmony and tension: An integration of studies on intergroup relations.
Sherif, C.W. (1979). Bias in psychology. In J. Sherman & E.T. Beck (Eds.),
The prism of sex: Essays in the sociology
of knowledge (pp.93-133).
Shotter, J. (1993). Conversational
realities: Studies in social
constructionism. London and Newbury Park: Sage.
Stigler, J.W., Shweder, R.A., & Herdt G. (Eds.). (1990).
Cultural Psychology: Essays on
comparative human development. Cambridge: Cambridge University Press.
Titchener, E.B. (1901-1905). Experimental
psychology: A manual of laboratory practice. New York: Macmillan.
Tolman, E.C. (1932). Purposive behavior
in animals and men. New York: Appleton-Century-Crofts.
Torbert, W. R. (1981). Why educational research has been so uneducational: the
case for a new model of social science based on collaborative inquiry. In P.
Reason and J. Rowan (Eds.). Human
inquiry: A sourcebook of new paradigm research. Chichester: Wiley.
Traweek, S. (1988). Beamtimes and
lifetimes: The world of high energy physicists. Cambridge, MA: Harvard University Press.
Wallston, B.S., and Grady, K.E. (1985). Integrating the feminist critique and
the crisis in social psychology: Another look at research methods. In V.E.
O'Leary, R.K. Unger, & B.S. Wallston (Eds.),
Women, gender, and social psychology. Hillsdale, NJ: Erlbaum.
Winegar, L.T., and Valsiner, J.T. (1992). Re-contextualizing
context: Analysis of metadata and some further elaborations. In L.T.
Winegar, & J. Valsiner (Eds.), Children's
development within social context,
vol. 2: Research and methodology (pp. 249-266). Hillsdale, NJ: Erlbaum.
Winston, A.S. (1988). Cause and experiment in introductory psychology: An analysis of R.S. Woodworth's textbooks. Teaching of Psychology, 15, 79-83.
Winston, A.S. (1990). Robert Sessions Woodworth and the "Columbia Bible": How the psychological experiment was redefined. American Journal of Psychology, 103, 391-401.
Woodworth, R.S. (1934). Psychology (3rd ed.). New York: Holt.
Woodworth, R.S. (1938). Experimental psychology. New York: Holt.
Wundt, W. (1883). Logik, vol.2:
Methodenlehre. Stuttgart: Enke.
Wundt, W. (1906). Die Aufgaben der experimentellen Psychologie.
In Essays, 2nd ed. (pp. 187-212). Leipzig: Engelmann.