Saturday, May 09, 2009
[education destroyed 2] the issue of research
This [shorter] article is a continuation of the earlier post on education here and abridges the work of:
Stone, J. E., & Clements, A. (1998). Research and innovation: Let the buyer beware. In R. R. Spillane & P. Regnier (Eds.), The superintendent of the future (pp. 59-97). Gaithersburg, MD: Aspen Publishers. Via J. E. Stone and Andrea Clements, East Tennessee State University.
Quantitative versus Qualitative Research
Quantitative research includes both descriptive and explanatory studies. Descriptive studies are concerned only with establishing the existence of a phenomenon of interest--student achievement, for example. How much of it exists, where it exists, and what kinds of it exist are typical descriptive hypotheses. Explanatory studies are concerned with the causes of a phenomenon of interest.
For example, does the use of Direct Instruction improve achievement? Technically stated, explanatory studies are concerned with the discovery of functional relationships (i.e., relationships in which the state of a given phenomenon is said to be a function of a preceding event or condition).
Less technically, explanatory studies are concerned with whether a given effect is the result of a particular cause. Causal relationships are examined in experiments and experiment-like studies called quasi-experiments. More is said about experiments below.
Descriptive studies address a wide range of topics. For example, a report of average test scores for students at different schools would be descriptive. So would a study of the number of words comprising the recognition vocabulary of children at succeeding ages. Descriptive studies include a number of subtypes.
For example, studies of characteristics such as preferred types of play or ability to perform certain intellectual tasks may entail observation of fresh samples of children at successive chronological age levels. Such studies are called "cross-sectional" descriptive research. Studies that examine the same characteristics but observe the same individual children over a period of years are called "longitudinal."
Quantitative descriptive studies also include reports of correlational relationships between variables. An example of a correlational study would be one that describes the degree of relationship between family socioeconomic status and school achievement. Another example is hyperactivity's relationship to junk food consumption. Correlational studies are among those most frequently misinterpreted by users of educational research.
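The limits of a correlational finding are easy to illustrate. The following is a minimal sketch using invented numbers (the SES and achievement values are hypothetical, chosen only for illustration): Pearson's r quantifies how strongly two variables move together, but even a very high r says nothing about which variable, if either, is the cause.

```python
# A sketch of a correlational analysis with hypothetical data.
# Pearson's r measures the strength of linear association between two
# variables; it does not distinguish cause from effect, nor rule out
# some third factor driving both.

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical family SES index and school achievement scores.
ses = [2, 4, 5, 7, 8, 10]
achievement = [55, 60, 62, 70, 71, 80]

r = pearson_r(ses, achievement)
print(round(r, 3))  # a strong positive correlation -- but not proof of causation
```

A reader who mistakes such an r for evidence that raising SES would raise achievement is making exactly the misinterpretation the chapter warns about.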
Despite its current unpopularity among educators, there is a great deal of high-quality quantitative research in education. It includes disquieting descriptive findings such as falling SAT scores and reports of low math and science achievement and similarly disquieting experimental results such as those of the Follow Through project. In the opinion of the authors, quantitative research's unpopularity may well be related to its disagreeable results. Findings that affirm orthodoxy are clearly more popular.
Qualitative research in education is a growth industry. It is a type of research long used in fields such as cultural anthropology. Qualitative research relies on written description instead of objective measurement, and its findings are subject to all the vagaries associated with written descriptions of any kind. Rather than attempting to affirm hypotheses and make generalizations that are grounded on an agreed-upon objective framework, qualitative research is more concerned with description as subjectively perceived by an observer in context.
Such descriptions are thought to be more honest and realistic than descriptions that purport to be objective and at arm's length. It is a form of research premised on a postmodern, multiculturalist view of science. It argues that the objective understanding to which traditional science aspires is nothing more than an arbitrary Western convention--one educators should be free to reject.
By avoiding a focus on particular variables of interest, qualitative research presumably avoids the imposition of cultural bias. Of course such a process ignores the very information typically sought by the consumer. For example, a teacher's question about whether one teaching method produces greater achievement than another would not be answered by a qualitative study. Qualitative studies do not "prove" or "disprove" anything. They can only describe. The validity of such studies is simply an open question (Krathwohl, 1993).
The vagueness of the methods used in qualitative studies invites observer bias. Observers are necessarily selective in their observations. For example, an observer who dislikes the punishment seen in a classroom may tend to note the negative emotional reactions of students more than would a disinterested observer.
By contrast, a more impartial observer might give greater attention to the increased on-task behavior that may be effected by the use of punishment. Although there are ways to make such observations more reliable, they are far more subject to researcher bias than most quantitative reports.
Like qualitative research, action research has gained in popularity among educators. Wiersma (1995) describes it as research "conducted by teachers, administrators, or other educational professionals for solving a specific problem or for providing information for decision making at the local level" (p. 11). Action research is typically quantitative but less rigorous in design and methodology than conventional quantitative research.
The following is a classroom level example: A teacher is having discipline problems during her fifth-period class. She arranges the desks differently and assesses whether the discipline problems are reduced. A written report of her investigation, including data, analysis, and a brief discussion, would be considered action research.
Would such a finding be a sufficient basis for recommending that teachers employ rearranged desks as a means of treating discipline problems? In theory it would not. Practice, however, is another matter. Despite methodological weaknesses--in the present example, a single class sample and no control group--such findings are sometimes used to bolster proposals for new and innovative programs.
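The desk-rearrangement example above can be sketched in a few lines. The daily incident counts here are hypothetical; the point is that a before/after comparison of means in a single class, with no control group, is suggestive at best.

```python
# A sketch of the action-research example: hypothetical daily counts of
# discipline incidents before and after the desks were rearranged.
# With one class and no control group, a drop in the average admits
# many rival explanations (time of year, novelty effects, chance).

before = [6, 5, 7, 4, 6]  # incidents per day, original arrangement
after = [3, 4, 2, 3, 4]   # incidents per day, rearranged desks

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

print(mean_before, mean_after)  # prints 5.6 3.2
```

The difference looks encouraging, but nothing in this design licenses a general recommendation to other teachers.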
Pseudoresearch is a form of scholarly writing that appears to make factual claims based on evidence but, in fact, consists only of opinion founded on opinion. Previous studies are cited, but they contain only theory and opinion. Legitimate empirical reports traditionally present a review of literature that enables the reader to put new findings in context and to strengthen factual generalizations (Stanovich, 1996). However, previous studies containing only opinion do nothing to strengthen the report that cites them.
Commonsense educational claims are often supported by such "research." For example, if an expert opines that schooling is improved by greater funding and if other experts cite and endorse that original claim, subsequent reports will contain what appears to be substantiation.
If the claim seems plausible and thus goes unquestioned, it appears to gain acceptance as a fact without ever being tested. Such claims are said to be supported by "research" but it is "research" in the sense of a systematic review of relevant literature, not in the sense of studies that offer an empirical foundation for factual assertions.
Educational innovations that are consistent with popular educational doctrines are often supported by such research. The controversial but widely used whole-language reading instruction (discussed below), for example, goes unquestioned by most educators because it fits hand-in-glove with learner-centered pedagogy. It is supported primarily by favorable opinion among like-minded educators, not demonstrated experimental results.
A type of research that seems to produce empirical facts from opinion is a group-interaction process called the Delphi method (Eason, 1992; Strauss & Zeigler, 1975). However, instead of creating the appearance of empirically grounded fact from multiple reports of opinion (as does pseudoresearch), the Delphi method creates facts about opinion.
In Delphi research, the opinions of experts are collected and synthesized in a multistage, iterative process. For example, if a researcher sought to determine the future occupations open to high school graduates, he or she might consult a panel consisting of career counselors, former high school students, employers, and economists. The panelists would be asked to compose a list of prospective jobs, and they would each share their list with the other panelists.
After viewing the lists of other panelists some members might choose to change their estimations, and their changes would then be shared with the other panelists in a second round of mutual review. Ideally, three or so rounds of sharing and realignment would produce a consensus. The "fact" resulting from such a study is that experts agree about the future availability of certain jobs, not that certain jobs have a high probability of being available.
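The convergence dynamic described above can be shown with a toy simulation. The numbers and the "move halfway toward the group mean" revision rule are both invented for illustration; real panelists behave less mechanically, but the structural point holds: the procedure manufactures agreement, and the resulting "fact" is consensus among experts, not a validated prediction.

```python
# A toy sketch of iterative Delphi-style revision. Each panelist gives a
# numeric estimate; after seeing the group's responses, each moves partway
# toward the group mean in the next round. Over a few rounds the spread of
# opinion shrinks -- convergence is built into the procedure itself.

def delphi_round(estimates, weight=0.5):
    group_mean = sum(estimates) / len(estimates)
    # each panelist revises toward the group mean by a fixed fraction
    return [e + weight * (group_mean - e) for e in estimates]

estimates = [10.0, 30.0, 50.0, 90.0]  # hypothetical initial expert estimates
for round_no in range(3):
    estimates = delphi_round(estimates)

spread = max(estimates) - min(estimates)
print([round(e, 1) for e in estimates], spread)  # spread falls from 80.0 to 10.0
```

Note that the group mean never changes in this sketch; the rounds only compress disagreement around it, which is precisely why a Delphi consensus can look more authoritative than it is.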
A recent attempt to find effective institution-to-home "transition strategies" for disabled juvenile delinquents illustrates how a Delphi consensus can be confused with an empirically grounded conclusion. Following three rounds of surveys, Pollard, Pollard, and Meers (1994) concluded that the priorities identified by the panelists provided a "blueprint for successful transition" when, in fact, the surveys produced only a consensus about what may or may not prove to be a successful blueprint.
The RAND Corporation is credited with developing the Delphi technique as a means of distilling a consensus of expert opinion. Sackman (1974) has summarized its primary shortcomings. The expert status of panelists is not scientifically verifiable and neither is the assumption that group opinion is superior to individual opinion.
One other confusion about the Delphi technique pertains to its use by the leader of a deliberative body. Delphi methodology can create the appearance of consensus where none exists--a problematic outcome of a deliberative process. Technically, the Delphi technique does not force a consensus; but as a practical matter, it is designed to produce a consensus and it puts substantial pressure on dissenters for conformity to the group.
When employed by the leadership of a deliberative group, it can turn what should be an open and fair-minded exchange of views into a power struggle. Minority viewpoints can be isolated and marginalized. The result is more mindless conformity than reasoned agreement. The conclusions reached by committees and policy-making bodies can easily be distorted by Delphi methodology.
Experimental and Quasi-Experimental Research
Experiments are quantitative studies in which cause-effect relationships are tested (Campbell & Stanley, 1966). Quasi-experiments attempt the same but with certain limitations. Other studies may suggest or imply causal relationships, but their findings are far more ambiguous and subject to misinterpretation. Experiments are not foolproof, but they afford the best evidence science has to offer.
From a purely scientific standpoint, experiments are important because they attempt to answer the primary question with which science is concerned:
"What explains or accounts for the phenomenon under investigation?"
All sciences aspire to this kind of understanding. They are valuable from a practical standpoint, too, because they address the question of whether a given program, teaching method, treatment, intervention, curriculum, and the like produces expected effects.
Because schooling is intended as a means of making a difference in the lives of students, the armamentarium of professional educators should contain tools that are well tested and demonstrably effective. Ideally, they should also be convenient, cost-effective, and well received by students; but at a minimum, they must be effective.
The critical importance of experimental evidence in establishing effectiveness is not well understood by educators, but it is just such an understanding that is at the heart of knowing which research is valuable and why.
The aim of science is said to be the explanation of natural phenomena. However, the term explanation itself requires a bit of explanation. As the term is used by scientists, explanation refers to cause-and-effect explanation.
For example, a phenomenon such as achievement in school is said to be explained (or at least partially explained) if it can be shown that the presence or absence of achievement is functionally (i.e., causally) related to a preceding event or set of events termed a cause. A functional or causal relationship is initially stated in a tentative form called a hypothesis and is not considered a valid explanation until affirmed by evidence.
Experimental research is the business of collecting evidence that might support or disconfirm causal hypotheses. It entails the manipulation of a hypothesized cause for the purpose of inducing an expected effect. If a given effect (technically, a change in the "dependent variable") follows alteration of the purported cause (technically, a change in the "independent variable"), the causal hypothesis is said to be supported.
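The logic of manipulating an independent variable can be sketched with hypothetical data. Everything here is invented: the scoring function builds in both pre-existing differences between students and an assumed treatment effect, so the sketch shows the structure of an experiment, not a real result. The key feature is random assignment, which is what licenses a causal reading of the group difference.

```python
# A minimal sketch of experimental logic with hypothetical data. The
# independent variable (receiving the treatment) is manipulated by random
# assignment; the dependent variable (a test score) is then compared
# across the treatment and control groups.

import random

random.seed(42)

students = list(range(20))
random.shuffle(students)  # random assignment balances pre-existing differences
treatment, control = students[:10], students[10:]

def score(student, treated):
    base = 60 + (student % 5)            # hypothetical pre-existing differences
    return base + (8 if treated else 0)  # hypothesized treatment effect

treat_mean = sum(score(s, True) for s in treatment) / len(treatment)
ctrl_mean = sum(score(s, False) for s in control) / len(control)

print(treat_mean - ctrl_mean)  # a positive difference supports the hypothesis
```

In a correlational study, by contrast, students would have sorted themselves into the groups, and any difference in means could reflect who chose the treatment rather than what the treatment did.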
Other types of quantitative research and even qualitative research may be valuable in suggesting cause-effect hypotheses, but only experimental research can provide a direct test.
Internal and External Validity of Studies
Whether an empirical study is capable of demonstrating a causal relationship is one issue, but whether a given experiment was properly conducted is another. Moreover, even a properly conducted experiment may have limited applicability and usefulness in the "real world."
Whether the procedures used in an experiment permit valid findings is the matter of internal validity.
Whether the findings of an experiment are generally applicable to the "real world" (i.e., applicable under conditions beyond those under which the study was conducted) is the matter of external validity.
A wide variety of technical considerations can adversely influence the internal validity of an experiment. For example, the manner in which subjects were assigned to treatment and comparison groups can profoundly affect the outcome of an otherwise well-designed experiment.
Technical issues with respect to type of sampling and type of population sampled, for example, can greatly influence the external validity of a study.
Accurate assessment of these and other technical details requires considerable expertise. Even well-informed investigators may overlook significant threats to the validity of an experiment. Cook and Campbell (1979) provide an authoritative discussion of the myriad considerations that must be taken into account. Happily, there are at least three considerations that a nonexpert can examine to assess the internal validity of a study: source, convergence, and replication.
Source. If a study is reported in a peer-reviewed scholarly journal, chances are good that it meets acceptable standards of internal and external validity. Peer review typically entails blind review of a manuscript by a panel of experts selected by an editor. Panelists are not given the author's name and the author is not given the reviewers' names. All criticisms and replies are exchanged through the editor. The most reputable and selective journals use this process.
Reports reviewed only by an editor may be valid, but peer-reviewed scholarship is generally conceded to be the most credible. Again, the process is not foolproof, but it is the best science has to offer.
Unpublished reports and reports that are not subject to editorial review--grant proposals and reports of funded research such as those included in ERIC's Research in Education, for example--are of uncertain quality and should be treated as such.
Convergence. If a study's findings are generally consistent with (i.e., they converge with) the findings of other investigations in an area of research, they are generally assumed credible (Stanovich, 1996). Any competent research report will include a review of relevant literature. Consistencies and discrepancies within the existing literature and between the report at hand and previous studies are analyzed and discussed.
Articles called "reviews of literature" and "meta-analyses" are dedicated to citing and summarizing all of the findings relevant to a given topic or area of study.
Although new and revolutionary findings are sometimes uncovered by a single study, competent observations of the same or similar phenomena usually result in similar findings. Most scientific advancements come as incremental additions to understanding, not breakthroughs.
Replication. Replications are repeats of an original study by another investigator using a fresh set of subjects. The credibility of a study that has been replicated is greatly enhanced. Findings that have been replicated are considered valid even if they do not converge with other reports in the same general area of investigation. Only a small percentage of studies in the behavioral sciences are replicated, however.
The Need for Both Experiments and Field Testing
Few experimental investigations are able to fully satisfy requirements for both internal and external validity in a single study. The controls, artificial conditions, and other constraints necessary to ensure internal validity tend to interfere with external validity. Conversely, unanticipated and uncontrolled events can confound or invalidate an otherwise well-conceived study that is conducted in a natural environment such as a school.
Because of this inherent conflict, programs or interventions derived from experimental investigations should be field tested prior to implementation.
Field tests are trials of an experimentally supported finding in the classroom or clinic or other setting for which it is intended. Not infrequently they result in the discovery of limitations, cautions, and restrictions on the applicability of experimentally validated findings. Even findings that have been field tested elsewhere may lack local applicability because of peculiar local conditions.
Thus, large-scale programs, in particular, should also be locally tested on a small scale in what is called a pilot study. Pilot studies are especially important when the implementation of research findings entails significant time and energy costs for school personnel or learning opportunity costs for students.
An example of the problem of research methods is the climate debate. Both sides quote 'experts', but only their own experts, not those of the other side. Thus there appears to be quite cogent science supporting the sceptic stance that man-made global warming is not occurring, and an august body of science also supporting that it is occurring.
Both sides ignore the other's 'science' and continue to quote their own as some sort of refutation. This is 'pseudo-science' and proves nothing.
In education, it is one of the key methods of forcing through ‘educational consensus’ on ‘latest discoveries’ supporting the socialistic thrust of the powers that be in education and the results are now out there for the whole community to see.
Hover near a group of chavs and listen to their conversation for more anecdotal evidence of the plight of education in our community.