All of us engage
in actions that have some of the characteristics of formal research, although
perhaps we do not realize this at the time. We try out new methods of teaching,
new materials, new textbooks. We compare what we did this year with what we did
last year. Teachers frequently ask students and colleagues their opinions about
school and classroom activities. Counselors
interview
students, faculty, and parents about school activities. Administrators hold
regular meetings
to gauge how
faculty members feel about various issues. School boards query administrators,
administrators query teachers, teachers query students and each other. We
observe, we analyze, we question, we hypothesize, we evaluate. But rarely do we
do these things systematically. Rarely do we observe under controlled conditions.
Rarely are our instruments as accurate and reliable as they might be. Rarely do
we use the variety of research techniques and methodologies at our disposal.
The term research can mean any sort of “careful, systematic, patient study and investigation in some field of knowledge.”
Basic research is concerned
with clarifying underlying processes, with the hypothesis usually expressed as
a theory. Researchers engaged in basic research studies are not particularly
interested in examining the effectiveness of specific educational practices. An example of basic research might be an attempt to refine one or more stages of Erikson's theory of psychosocial development.
Applied research, on the other hand, is interested in examining the effectiveness of particular educational practices. Researchers engaged in applied research studies may or may not want to investigate the degree to which certain theories are useful in practical settings. An example might be an attempt by a researcher to find out whether a particular theory of how children learn to read can be applied to first
graders who are non-readers. Many studies combine
the two types of
research. An example would be a study that examines the effects of particular
teacher
behaviors on
students while also testing a theory of personality. Many methodologies fit
within the framework of research. If we learn how to use more of these
methodologies where they are appropriate and if we can become more
knowledgeable in our research efforts, we can obtain more reliable information
upon which to base our educational decisions. Let us look, therefore, at some
of the research methodologies we might use.
QUANTITATIVE AND
QUALITATIVE RESEARCH
Another
distinction involves the difference between quantitative and qualitative
research. Although we
shall discuss
the basic differences between these two types of research more fully in Chapter
18, we will
provide a brief
overview here. In the simplest sense, quantitative data deal primarily with
numbers, whereas qualitative data primarily involve words. But this is too simple
and too brief. Quantitative and qualitative methods differ in their assumptions
about the purpose of research itself, methods utilized by researchers, kinds of
studies undertaken, the role of the researcher, and the degree to which generalization
is possible. Quantitative researchers usually base their work on the belief
that facts and feelings can be separated, that the world is a single reality
made up of facts that can be discovered.
IMPLICATIONS FOR
EDUCATIONAL RESEARCH
We hope that
this brief introduction has not only stimulated your interest in what has been
called, by some, the third revolution in science during the twentieth century
(the theory of relativity and the discovery of quantum mechanics being the first
two), but that it helps to make sense out of what we view as some implications
for educational research. What are these implications?
If chaos theory is correct, the difficulty in discovering widely generalizable rules or laws in education, let alone the social sciences in general, may not be due to inadequate concepts and theories or to insufficiently precise measurement and methodology, but may simply be an
unavoidable fact about the world. Another implication is that whatever “laws”
we do discover may be seriously limited in their applicability—across
geography, across individual and/or group differences, and across time. If
this is so, chaos theory provides support for researchers to concentrate
on studying
topics at the local level—classroom, school, agency—and for repeated studies
over time to see if such
laws hold up. Another
implication is that educators should pay more attention to the intensive study
of the exceptional or the unusual, rather than treating such instances as
trivial, incidental, or “errors.” Yet another implication is that researchers
should focus on predictability on a larger scale—that is, looking for patterns in
individuals or groups over larger units of time. This would suggest a greater
emphasis on long-term studies rather than the easier-to-conduct (and cheaper)
short-term investigations that are currently the norm. Not surprisingly, chaos
theory has its critics. In education, the criticism is directed not at the theory itself but at misinterpretations and/or misapplications of it. Chaos theorists do not say that all is chaos; quite
the contrary, they say that we must pay more attention to chaotic phenomena and
revise our conceptions of predictability. At the same time, the laws of gravity
still hold, as, with less certainty, do many generalizations in education.
EXPERIMENTAL RESEARCH
Experimental research
is the most
conclusive of scientific methods. Because the researcher actually establishes
different treatments and then studies their effects, results from this type
of research are likely to lead to the most clear-cut interpretations. Suppose a
history teacher is interested in the following question: How can I most
effectively teach important concepts (such as democracy or colonialism) to my students?
The teacher might compare the effectiveness of two or more methods of
instruction (usually called the independent variable) in promoting the
learning of historical concepts. After systematically assigning students
to contrasting
forms of history instruction (such as inquiry versus programmed units), the
teacher could
compare the
effects of these contrasting methods by testing students’ conceptual knowledge.
Student learning in each group could be assessed by an objective test or some
other measuring device. If the average scores on the test (usually called the dependent
variable) differed, they would give some idea of the effectiveness of the
various methods. In the simplest sort of experiment, two contrasting methods
are compared and an attempt is made to control for all other (extraneous)
variables—such as student ability
level, age,
grade level, time, materials, and teacher characteristics—that might affect the
outcome under investigation. Methods of such control could include holding the
classes during the same or closely related periods of time, using the same materials
in both groups, comparing students of the same age and grade level, and so on. Of
course, we want to have as much control as possible over the assignment of
individuals to the various treatment groups, to ensure that the groups are
similar. But in most schools, systematic assignment of students to treatment
groups is difficult, if not impossible, to achieve. Nevertheless, useful
comparisons are still possible. You might wish to compare the effect of
different
teaching methods
(lectures versus discussion, for example) on student achievement or attitudes
in two
or more intact
history classes in the same school. If a difference exists between the
classes in terms of what is being measured, this result can suggest how the two
methods compare, even though the exact causes of the difference would be somewhat
in doubt.
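The comparison just described boils down to computing the dependent variable (average test score) within each treatment group. The following sketch illustrates this with entirely invented scores; the class names, numbers, and group sizes are assumptions for illustration only:

```python
from statistics import mean

# Hypothetical test scores (0-100) for two intact history classes
# taught with contrasting methods; all numbers are invented.
inquiry_scores = [78, 85, 72, 90, 84, 76, 88, 81]       # inquiry units
programmed_scores = [70, 74, 68, 82, 77, 65, 79, 73]    # programmed units

# The teaching method is the independent variable; the mean test
# score in each group is the dependent variable being compared.
inquiry_mean = mean(inquiry_scores)
programmed_mean = mean(programmed_scores)

print(f"Inquiry mean:    {inquiry_mean:.2f}")
print(f"Programmed mean: {programmed_mean:.2f}")
print(f"Difference:      {inquiry_mean - programmed_mean:.2f}")
```

A raw difference in means, of course, only suggests an effect; whether it is larger than chance variation would require a significance test and control of the extraneous variables discussed above.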
CORRELATIONAL
RESEARCH
Another type of
research is done to determine relationships among two or more variables and to
explore their implications for cause and effect; this is called correlational
research. This type of research can help us make more intelligent
predictions. For instance, could a math teacher predict which sorts of individuals
are likely to have trouble learning the subject matter of algebra? If we could
make fairly accurate predictions in this regard, then perhaps we could suggest
some corrective measures for teachers to use to help such individuals so that
large numbers of “algebra-haters” are not produced.
How do we do
this? First, we need to collect various kinds of information on students that
we think are
related to their
achievement in algebra. Such information might include their performance on a
number of
tasks logically
related to the learning of algebra (such as computational skills, ability to
solve word problems, and understanding of math concepts), their verbal
abilities, their study habits, aspects of their backgrounds, their early
experiences with math courses and math teachers, the number and kinds of math
courses they’ve taken, and anything else that might conceivably point to how
those students who do well in math differ from those who do poorly. We then
examine the data to see if any relationships
exist between
some or all of these characteristics and subsequent success in algebra. Perhaps
those who perform better in algebra have better computational skills or higher
self-esteem or receive more attention from the teacher. Such information can help
us predict more accurately the likelihood of learning difficulties for certain types of students in algebra courses. It may even suggest some specific ways
to help students learn better. In short, correlational research seeks to
investigate the extent to which one or more relationships of some type exist.
The approach requires no manipulation or intervention on the part of the
researcher other than administering the instrument(s) necessary to collect the data
desired. In general, one would undertake this type of research to look for and
describe relationships that may exist among naturally occurring phenomena,
without trying in any way to alter these phenomena.
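The "examine the data for relationships" step usually means computing a correlation coefficient between each measured characteristic and later achievement. A minimal sketch, with the Pearson coefficient computed from scratch and all student data invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: each position is one student.
computation = [55, 62, 70, 48, 80, 66, 74, 58]   # computational-skills score
algebra     = [60, 65, 72, 50, 85, 70, 78, 62]   # later algebra achievement

r = pearson_r(computation, algebra)
print(f"r = {r:.2f}")   # values near +1 suggest a strong positive relationship
```

Note that the researcher only collects and correlates existing scores; nothing is manipulated, which is exactly why a high r here predicts but does not explain algebra achievement.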
CAUSAL-COMPARATIVE
RESEARCH
Another type of
research is intended to determine the cause for or the consequences of
differences between groups of people; this is called causal-comparative research. Suppose a teacher wants to determine whether students from single-parent
families do more poorly in her course than students from two-parent families.
To investigate this question experimentally, the teacher would systematically
select two groups of students and then assign each to a single- or two-parent
family—which is
clearly
impossible (not to mention unethical!). To test this question using a
causal-comparative design,
the teacher
might compare two groups of students who already belong to one or the other
type of family
to see if they
differ in their achievement. Suppose the groups do differ. Can the teacher definitely conclude that the difference in family situation produced the difference
in achievement? Alas, no. The teacher can conclude that a difference does exist
but cannot say for sure what caused the difference.
Interpretations
of causal-comparative research are limited, therefore, because the researcher
cannot say conclusively whether a particular factor is a cause or a result of
the behavior(s) observed. In the example presented here, the teacher cannot be
certain whether (1) any perceived difference in achievement between the two
groups is due to the difference in home situation, (2) the parent status is due
to the difference in achievement between the two groups (although this seems unlikely),
or (3) some unidentified factor is at work. Nevertheless, despite problems of
interpretation, causal-comparative studies are of value in identifying possible
causes of observed variations in the behavior patterns of students. In this
respect, they are very similar to correlational studies.
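Computationally, a causal-comparative study looks like the experimental comparison above, except that group membership is observed rather than assigned. The records below are invented to illustrate grouping by an existing attribute and comparing means:

```python
from statistics import mean

# Invented records: (family_status, course_grade). The status is a
# pre-existing characteristic; the researcher never assigns it.
students = [
    ("single", 71), ("two", 84), ("two", 78), ("single", 65),
    ("single", 74), ("two", 80), ("two", 88), ("single", 69),
]

# Group grades by the attribute the students already have.
by_status = {}
for status, grade in students:
    by_status.setdefault(status, []).append(grade)

for status, grades in sorted(by_status.items()):
    print(f"{status}-parent mean grade: {mean(grades):.2f}")
# A difference between these means shows only that the groups differ,
# not why: family status, income, or some unidentified factor could
# be the cause.
```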
SURVEY RESEARCH
Another type of research obtains
data to determine specific characteristics of a group. This is called survey research. Take the case of a high school principal who wants to find out
how his faculty feels about his administrative policies. What do they like
about his policies? What do they dislike? Why? Which policies do they like the
best or least? These sorts of questions can best be answered through a variety
of survey techniques that measure faculty attitudes toward the policies of the
administration. A descriptive survey involves asking the same set of
questions (often prepared in the form of a written questionnaire or ability
test) of a large number of individuals either by mail, by telephone, or in
person. When answers to a set of questions are solicited in person, the research
is called an interview. Responses are then tabulated
and reported, usually in the form
of frequencies or percentages of those who answer in a particular way to
each of the questions. The difficulties involved in survey research are mainly threefold: (1) ensuring that the questions are clear and not misleading, (2) getting respondents to answer questions thoughtfully and honestly, and (3) getting a sufficient number of the questionnaires completed and returned to make meaningful analyses possible. The big advantage of survey
research is that it has the potential to provide us with a lot of information
obtained from quite a large sample of individuals. If more details about particular
survey questions are desired, the principal (or someone else) can conduct personal
interviews with faculty. The advantages of an interview (over a questionnaire)
are that open-ended questions (those requiring a response of some length) can be used with greater confidence, particular questions of special interest or
value can be pursued in depth, follow-up questions can be asked, and items that
are unclear can be explained.
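The tabulation step, reporting frequencies and percentages for each answer, can be sketched in a few lines; the faculty responses below are invented for illustration:

```python
from collections import Counter

# Invented survey responses from faculty about one administrative policy.
responses = ["approve", "disapprove", "approve", "no opinion",
             "approve", "disapprove", "approve", "approve"]

counts = Counter(responses)
total = len(responses)

# Report each answer as a frequency and a percentage of all respondents.
for answer, n in counts.most_common():
    print(f"{answer:12s} {n:2d}  ({100 * n / total:.1f}%)")
```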
ETHNOGRAPHIC
RESEARCH
In all the examples presented so far, the questions being asked involve how well, how much, or how efficiently knowledge, attitudes, or opinions and the like exist or are being
developed. Sometimes, however, researchers may wish to obtain a more complete
picture of the educational process than answers to the above questions provide.
When they do, some form of qualitative research is called for.
Qualitative research differs from the previous (quantitative) methodologies in both its methods and its underlying philosophy; in Chapter 18 we discuss these differences, along with recent efforts to reconcile the two approaches.
HISTORICAL
RESEARCH
You are probably already familiar
with historical research. In this type of research, some aspect of the
past is studied, either by perusing documents of the period or by
interviewing individuals who lived during
the time. The researcher then
attempts to reconstruct as accurately as possible what happened during that
time and to explain why it did.
ACTION RESEARCH
Action research differs from all
the preceding methodologies in two fundamental ways. The first is that
generalization to other persons,
settings, or situations is of minimal importance. Instead of searching for
powerful generalizations, action researchers (often teachers or other education
professionals, rather than professional researchers) focus on getting
information that will enable them to change conditions in a particular situation
in which they are personally involved. Examples would include improving the
reading capabilities of students in a specific classroom, reducing tensions between ethnic groups in the lunchroom at a particular middle school, or identifying better ways to serve special education students in a specified school
district. Accordingly, any of the methodologies discussed earlier may be
appropriate.
EVALUATION
RESEARCH
There are many different kinds of
evaluations depending on the object being evaluated and the purpose of the evaluation.
Evaluation research is usually described as either formative or summative. Formative evaluations are intended to improve the object being
evaluated; they help to form or strengthen it by examining the delivery of the program
or technology and the quality of its implementation. In contrast, summative
evaluations seek to examine the effects or outcomes of an object by
describing what
happens after the delivery of the
program or technology in order to assess whether the object caused the outcome.
An example of a formative evaluation product is a needs assessment report. A
needs assessment determines the appropriate audience for the program, as well
as the extent of the need and what might work to meet the need. Summative
evaluations can be thought of as either (a) outcome evaluations, which
investigate whether the program or technology appeared to have caused demonstrable
effects on specifically defined target outcomes, or (b) impact evaluations,
which are broader and attempt to assess the overall effects (intended or
unintended) of the program or technology as a whole. Evaluators ask many
different kinds of questions and often use a variety of methods to address
them. For example, in summative evaluations, evaluators often use
quasi-experimental research designs to assess the hypothesized causal effects
of a program.