This paper reviews the research concerning computer-based reading
instruction for college students. Most studies suggest that computers
can provide motivating and efficient learning and help students improve
their reading skills. However, it is not clear whether the computer
itself or the instruction delivered via the computer best accounts for
student gains.
Analysis also reveals many methodological flaws in the studies. The
conclusion is that computer-based instruction can be effectively used
with college students to improve reading skills, but that attention to
the method rather than the means of instruction is most important.
The computer revolution has hit higher education as college
students increasingly receive their instruction via computers. But along
with the widespread use of computers in classrooms have come concerns
about their effectiveness. As Tanner (1984) says, "When a new
technology is touted as having so much potential for education, its
glamour cannot be allowed to obscure the need to validate its
usefulness" (p. 37).
Research on computer-based instruction at the college level has
been published since the late 1960s and, in 1980, the first
meta-analysis was conducted (a meta-analysis uses statistical analysis
on the results of many different studies to generalize from the
findings). From the perspective of Kulik, Kulik, and Cohen's 1980
meta-analysis of 59 studies, results for computer-based instruction in
colleges look promising. Kulik et al. reported that "the computer
has made a small but significant contribution to the effectiveness of
college teaching" (p. 538) particularly in terms of student
achievement and students' attitudes toward their instruction. An
updated meta-analysis (Kulik & Kulik, 1986) reached similar
conclusions about achievement and student attitudes. Interestingly
enough, both the 1980 and 1986 meta-analyses reported almost identical
effect sizes for student achievement, 0.25 and 0.26 respectively,
meaning that while the typical control student performed at the 50th
percentile, the typical computer-using student performed at the 60th
percentile. But the most striking finding of both
meta-analyses was a "substantial savings in instructional
time" (Kulik et al., 1980, p. 537) suggesting that computers could
cut learning time to "two thirds the time required by conventional
teaching methods" (Kulik & Kulik, 1986, p. 100). Still, Kulik
et al. caution that the impact of computer-based instruction on student
achievement at the college level is not as dramatic as gains at the
elementary and secondary levels, and that other teaching methods might
be just as effective as computers. These meta-analyses provide only a
broad overview of the efficacy of computer-based instruction in college
classrooms across many disciplines, and, as will be discussed later, may
be flawed in their conclusions.
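As a check on the percentile figures cited above, the following derivation is my own illustration, not one given by the meta-analysts. An effect size d expresses the treatment-control difference in pooled standard deviation units; assuming normally distributed achievement scores, the typical treatment student's percentile rank within the control distribution is given by the standard normal cumulative distribution function Phi:

```latex
% Effect size d -> percentile rank of the typical treatment student,
% assuming normally distributed achievement scores.
\[
P = 100\,\Phi(d), \qquad
\Phi(0.25) \approx .599, \quad
\Phi(0.26) \approx .603,
\]
```

so effect sizes of 0.25 and 0.26 both place the typical computer-using student at roughly the 60th percentile of the control group, whose typical member is, by definition, at the 50th.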
Literature reviews focusing on computers to teach reading are more
cautious in their findings. Balajthy (1987), for instance, believes that
the "results of research on computer-based instruction in reading
are at best equivocal" (p. 63). He explains that while using
computer-based instruction to supplement traditional instruction is
effective, so is almost any type of supplemental instruction, whether or
not it uses computers. In a recent review examining reading achievement
in adult education, Rachal (1995) found "no significant differences
between computer-assisted instruction (CAI) and traditional reading
instruction" (p. 249) for adults reading at the ninth grade level.
To date, no meta-analysis or research review has looked
specifically at the research on the use of computers to teach
college-level reading skills. With this in mind, the goal of this paper
is to examine this body of research. First, the Educational Resources
Information Center (ERIC) database was searched, using various
combinations of the following keywords: CAI or computer-based
instruction (CBI) and reading and college or college student(s).
Bibliographies of articles and reports found through the ERIC search
were also used to locate other relevant studies. Studies were chosen
for review if they described experimental or
quasi-experimental research involving college students, at two- or
four-year institutions, using computers for reading instruction--either
reading instruction per se or reading for study in a content area class.
Only studies published since 1980 were included, because software and
hardware prior to 1980 are largely outdated. Also, repeat studies using
the same software and conducted by the same researchers were not
included; only the most recent version of the research was examined.
Expectations for Computer-Based Instruction
Much of the excitement over computer instruction is fueled by the
belief that computers can help college students, sometimes in ways human
teachers or tutors cannot. One of the most powerful arguments advanced
for computer-based instruction is that computers can individualize
instruction (Askov & Clark, 1991; Kamil, 1987; Reinking, 1987;
Seigel & Davis, 1987; Turner, 1988; Watkins, 1991). Computers should
be capable of adjusting the content of lessons or the rate of
instruction according to the learners' needs. For example, Watkins
reports on one student who needed 8 hours to complete an assignment, a
pace that would exhaust the patience of most human tutors, but not an
electronic one. Moreover, Taraban (1996) describes a program that could
advise students as to which reading strategies work best for them. The
computer monitors the student's reading behaviors and performances
on exercises or quizzes, accumulates sufficient information to make
correlations between the student's reading activities and
performance scores, then recommends strategies that have helped the
student in the past.
Computers could place more control over learning in readers'
hands, which Askov and Clark (1991) and Turner (1988) contend is
empowering for low-level readers. With long-distance capabilities
for delivering information, computers can flexibly meet
students' needs; computer instruction could conceivably be
administered at various sites at any time of the day or night (Turner).
Such accessibility has been recognized as advantageous for students who
must juggle multiple responsibilities (Askov & Clark; Turner).
Other hypothetical advantages to computer instruction have less to
do with the nature of instruction than with students' attitudes
toward their learning. Particularly for the struggling student,
observers report computer instruction can provide an important degree of
privacy; only the computer program knows the student's skill level
(Askov & Clark, 1991; Turner, 1988; Watkins, 1991). In fact, Askov
and Clark contend that computer instruction provides a certain cachet.
Students who might be embarrassed to admit they are taking a remedial
reading class can avoid stigma by saying they are attending a computer
class. Finally, researchers expect to capitalize on the excitement
surrounding new technologies to motivate students to learn. In the age
of television and video games, perhaps computers can engage students
with graphics, animation, and game-like features in ways that will make
learning fun (Kamil, 1987).
In short, optimistic educators believe computers will be patient,
responsive, personalized tutors providing extra help with assignments in
ways that engage and encourage learners. However, where there is hope,
there is also fear, the fear that computer-based instruction will prove
to be an expensive, ineffective attempt to improve learning. Askov and
Clark (1991), among others, point to the high costs of installing
hardware and purchasing software, of maintaining and upgrading
equipment, and of providing computer training for teachers, as well as
expert, technical support in the classroom or lab. Computers can crash
(Kamil, 1987) and software, once installed, can be inflexible and may
not exactly suit the needs of the course or the students (Kamil;
Watkins, 1991). Moreover, integrating computers into the curriculum
takes teacher time and energy, extra work which might cause resistance
or create resentment among faculty (Askov & Clark).
Reinking (1988-89) offers some more troubling criticisms of reading
software, arguing that most programs are neither pedagogically sound nor based on
current research about the teaching of reading. Rather, he claims,
assumptions underlying reading software development are fundamentally
flawed. For instance, programmers rely on the misconception that reading
is best taught by focusing on isolated skills, rather than on
integrating these skills into the act of reading. Often these programs
ignore the processes good readers use and instead emphasize products,
such as correct responses to multiple choice questions, and therefore do not
teach reading comprehension so much as measure it. In short, bad
software equates with poor learning. Computers, some observers warn, may
not be the panacea for education.
Research-Based Answers: Computer vs. Traditional Instruction
To test hypothetical or observed advantages and drawbacks of
computer-based instruction, we can turn to research. Typical research
studies pit computer-based instruction against traditional teaching
methods. Students in a control group might complete assignments by
filling in worksheets or reading printed texts, while students in an
experimental group might complete the same assignments using a computer
program or by reading texts on a computer screen. An early concern of
researchers was whether reading the same text in print or on a computer
has any impact on reading ability. Other important research questions
consider whether computer-based instruction has any effect on attitudes,
learning time, or student achievement.
Mode of Delivery
Studies that examine mode of delivery investigate whether the
experience of reading material on a printed page differs from reading on
a computer screen. This is important because computer-based instruction,
almost by definition, requires students to read significant amounts of
information on a computer screen. Despite some concerns that computers
might impede reading ability because of eye strain, or slow readers
down, there seems to be little difference between these two reading
methods. Fish and Feldmann (1987) found no significant comprehension
differences among sophisticated readers (graduate students) when reading
the same text on page or screen. Similarly, Askwall (1985) reported the
same text presented on a computer or on paper had no effect on
undergraduates' reading speed or comprehension. Therefore no
detectable differences seem to exist between reading information from
print and from a computer screen.
Most researchers agree that students have a positive attitude
toward learning on computers (Balajthy, 1988; Kulik & Kulik, 1986;
Kulik, et al., 1980; Lang & Brackett, 1985; McCreary & Maginnis,
1989; Mikulecky, Clark, & Adams, 1989; Wepner, Feeley, & Wilde,
1989). Only one study (Wepner, Feeley, & Minery, 1990) reported
negative student reactions toward computer-based instruction, which the
researchers attributed to "poor lab conditions" (overcrowded
lab with outdated, unreliable hardware and software) and an
"unfortunate change in instructors midway through the course"
(p. 353). More typical are Mikulecky et al.'s findings that
students' attitudes toward computer-assisted instruction were
strongly positive. Students reported on questionnaires that they enjoyed
using the computer lessons and learned from them. In this case, the
researchers maintain that students recognized the computer taught them
useful reading strategies.
This positive student attitude, however, can be problematic if
students confuse interest with effectiveness, cautions Balajthy (1988).
That is, students in Balajthy's (1988) study rated the 2
computer-based instructional methods as being more effective than
traditional workbook exercises, when, in fact, the group using workbooks
showed greater achievement gains. Balajthy conjectures that students
equate their interest in computer-based instruction with learning
effectiveness and therefore may not, if left to their own devices, be
capable of choosing the mode of instruction that would be most helpful
to them. So while computers seem to motivate learning, this same
motivation may misdirect students' attention toward unproductive
activities and therefore not pay off in achievement gains.
Computer-based instruction also appears to reduce learning time (Kulik &
Kulik, 1986; Kulik et al., 1980; Wepner et al., 1990; Wepner et al.,
1989). Wepner et al. (1990) found students using computers could
complete an entire program in the same time it took the control group to
get through two thirds of the same material. The researchers noted that
this 32% reduction in instructional time "correspond[s]
precisely" (p. 352) with Kulik and colleagues' (1980, 1986, 1991)
findings in their meta-analyses. Wepner et al. (1989) report a similar
result in an earlier study, noting that the computer's ability to
efficiently manage instruction (in this case, to calculate words per
minute read and comprehension scores, and to supply reading materials)
saved time since the computer users "consistently finished before
the allotted time while the control group sometimes had to do their
paperwork after class" (p. 8). They hypothesized that this
time-saving feature of the computer may account for students'
positive reactions toward computer-based instruction.
The one study to contradict these findings is Balajthy's
(1988) comparison of students who used traditional workbook exercises
with 2 groups who used 2 different computer programs to study vocabulary.
As noted previously, the workbook users outperformed the computer users
on vocabulary quizzes, yet these students also spent significantly less
time on the text exercises. These findings suggest the workbook
exercises were the most efficient use of student time (students learned
the most in the least amount of time), even though students rated this
method of instruction as least interesting and least effective. The fact
that students spent less time on the workbook could be explained by
their low-interest ratings, yet it is not clear why these exercises also
proved to be more effective learning tools.
An important concern for researchers has been whether
computer-based instruction improves student achievement, most often
measured in quantifiable terms such as the differences between scores
on pre- and post-tests. Many individual studies focusing
on reading skills have found computer-based instruction to be effective
for improving reading comprehension (Dixon, 1993; Grabe, Petros, &
Sawler, 1989; Kester, 1982; Lang & Brackett, 1985; Mikulecky, et
al., 1989; Price & Murvin, 1992; Skinner, 1990; Wepner et al.,
1990). Vocabulary (Culver, 1991; Lang & Brackett, 1985) and reading
rate (Culver; Wepner et al., 1990) also appear to benefit from
computer-based instruction. A closer look at these studies, though,
raises the question of whether the computer or the instruction via the
computer made the difference in student learning.
For example, studies often compare one group that receives
computer-based instruction to a control group that receives no special
instruction to show that computer-based instruction is effective. Price
and Murvin (1992) reported that a computer program supplementing the
textbook in an accounting class boosted student success rates in the
course when compared to students in previous classes who had no access
to the computer-assisted instruction. Similarly, Grabe et al. (1989)
found that students in an educational psychology class scored better on
exams when they used computer-assisted instruction to study the assigned
textbook reading as opposed to studying the textbook on their own.
Moreover, Mikulecky et al. (1989) looked at undergraduates in a biology
class who used a computer program to help them understand the reading
material as compared to a control group who studied the textbook on
their own. The computer group scored significantly higher on exams, and
even on subsequent exams, suggesting the computer had modeled and taught
students effective reading strategies. But while both treatment and
control groups worked with the same textbook for the same amount of
time, only the treatment group received instruction (through the
computer program) about how to identify, compare, contrast, and connect
key concepts in the reading, skills that were necessary to do well on
the exams.
These studies seem to indicate that computer-based instruction can
provide effective supplemental instruction. Another example is
Kester's (1982) study in which students in basic skills classes who
used computer-assisted instruction at least 2 hours a week to supplement
their regular classwork made significantly greater gains in reading
skills than students who did not engage in supplemental instruction.
Dixon (1993) found that students completing a required remedial reading
course averaged 4 years of growth in reading comprehension in the first
study and 3 years of growth in a repeat study, leading Dixon to conclude
that computer-assisted instruction is effective for remediating reading
deficiencies.
It might also be that computer-based instruction, when compared to
traditional instruction, provides a different type of instructional
experience. Skinner (1990) compared 2 groups of students using 2
different versions of the same computer program to a control group who
used text-only materials to study for a classroom management class. The
computer groups performed consistently better on quizzes than the
text-only group. Skinner hypothesized that the computer programs were
effective study tools because they gave the students immediate feedback
and were motivating. But also, students working under computer
instruction were required to complete tutorial units while the text
group had no such requirement.
Like the studies that indicate computer-assisted instruction helps
college students' reading comprehension, studies suggest
computer-based instruction can also improve students' vocabulary.
In Lang and Brackett's (1985) research, college freshmen using
computers to learn vocabulary and comprehension skills showed gains of
one to two years in grade level reading ability over the course of the
semester. This study, however, lacked a control group. Culver (1991)
reports that computer instruction can improve English as a Second
Language (ESL) students' vocabulary. Over the course of a semester,
the researcher noted an overall increase of 3.9 grade levels in
vocabulary development for students using a computerized, levelized
reading program. But, like Lang and Brackett's study, Culver's
study lacked a control group against which to compare these gains.
Computer-based instruction also seems to improve students'
reading rates. Wepner et al. (1990) concluded that reading rate for
developmental reading students improved using computer-assisted
instruction. Growth for the computer users compared to the control group
was statistically significant. In this study, however, students in the
computer group were able to finish all the assigned units while the
control group completed only two thirds of the similar text materials.
Culver (1991), too, found reading rate improvements in ESL students
using computer-based instruction in a developmental reading class. The
majority of students improved their reading speed, with an average 3.4
grade level increase for the semester. But, as mentioned previously,
this study lacked a control group.
In most of the above cited studies, computer-based instruction did
improve students' reading skills. However, attributing achievement
gains to the computer alone may be misleading, since the computer often
provided additional or different instruction that the control groups did
not receive.
On the other hand, some studies have found computer-based
instruction has little or no effect on reading skills (Burke &
others, 1992; Jobst & McNinch, 1994; Kleinmann, 1987), comprehension
and vocabulary (McCreary & Maginnis, 1989; Taylor & Rosecrans,
1986), or efficiency (Wepner et al., 1989). For example, in a study much
like many of those cited above, Burke and others (1992) placed students
into practice labs to study, either with a computer-based approach or
with a text-based approach. The researchers found no significant
difference in the achievement of the 2 groups. However, when compared to
a group who did not use a practice lab, the 2 groups who participated in
practice labs, whether computer or text-based, scored significantly
higher on a standardized reading test. This led Burke and others to
conclude that the amount of practice time, not the mode of presentation,
best accounts for differences in student achievement. Conversely,
Kleinmann was careful to set up a study that used identical text and
computer-assisted instructional materials and equal instructional time.
He found that both groups made significant gains in reading achievement,
and no significant difference in gains existed between the groups.
Kleinmann concluded that while supplemental instruction appears to be
effective for ESL students in a developmental reading program,
supplemental computer-based instruction does not seem to be any more
effective than supplemental traditional instruction. In fact, the
studies by Burke and others and by Kleinmann, which directly addressed
the question of whether achievement gains are due to more practice time
or to computer-based instruction, suggest the answer lies in additional
practice time.
Taking a different approach, Jobst and McNinch (1994) set up a
computer-based and text-based reading assignment for students in a
technical writing class. Rather than create identical study materials,
they deliberately constructed materials that would take
advantage of each method: The printed version was cheap and easy to use;
the computer version allowed for graphics and student choice about
moving around in the text. Despite the researchers' expectations of
increased achievement among the computer users, no significant
differences were found in retention of the material or in students'
exam scores. This study raised concern that the time involved in
developing computer-assisted tutorials did not pay off in student
achievement.
Analyzing student achievement when students use computers or texts
is not a simple process. Some studies of computer-based instruction
reveal that student aptitude might influence achievement. Two studies
(Price & Murvin, 1992; Skinner, 1990) suggest that poor readers can
benefit more than capable readers from computer use. Price and Murvin,
who added supplemental computer-based instruction to an accounting
class, reported their colleague's finding that
students with reading skills below college level stayed in their
accounting class and succeeded at higher rates than previous students.
Students with college level reading skills also benefited from the
computer instruction, but not as dramatically. Similarly, Skinner
concluded that computer-based instruction is effective for college
students, but particularly for those with a record of poor past
performance. Low-achieving students using computer-based instruction
scored 15% higher on quizzes than low-achieving students using
text-based study materials. Skinner hypothesizes this improvement is due
to "the structure and frequent opportunities to respond provided by
CBI" (p. 358). Indeed, as noted earlier, students working under
computer instruction were required to complete tutorial units while the
text group had access to a human tutor with no required tutorials. In
these two studies, the use of computer-based instruction to require
supplemental work may be one reason less able students improved under
the computer treatment when compared to students who studied on their
own.
In contrast, the study by Grabe et al. (1989) illustrates a more
problematic interchange between student aptitude and instructional
effectiveness. In their experiment, when students were given free access
to computer-assisted tutorials for study, the better students tended to
use the computers. These computer users outperformed their classmates on
most exams (even taking into account the fact they were better readers).
Despite these advantages to computer users, the number of students using
computer-assisted instruction over the course of the semester declined
drastically. Researchers were not sure why this was so or why less
capable students made less use of computer-assisted tutorials even
though such instruction might benefit them.
Based on the above cited studies of college students, it appears
that computer-based instruction can improve students' reading
abilities. The majority of studies indicate that computer-assisted
instruction increases student achievement. This finding, though, might
be because computer-based instruction supplements or adds new
instruction not provided to those students using "traditional
methods." In these cases, more instruction, different instruction,
or more time on task may account for the gains by computer users. Other
factors, such as student ability, may further influence achievement
gains of computer users. The structured approach of some computer-based
instruction may help less able students who are unable to study
effectively on their own.
Research-Based Answers: Computer vs. Computer Studies
Although most studies compare computer-based to text-based
instruction, a few researchers (Balajthy, 1988; Blohm, 1987; Gay, 1986;
Kulik & Kulik, 1986; Skinner, 1990; Taylor & Rosecrans, 1986)
have examined differences among various computerized instructional
methods. A key question is whether the method of computer instruction
affects student achievement.
Unfortunately, no consistent terminology describes features of
computer-based instruction. It is not clear, for instance, what makes a
program "interactive." Nevertheless, Kulik and Kulik (1986)
established three main categories of computer instruction: (a)
computer-assisted, providing drill and practice or tutorial instruction;
(b) computer-managed, providing evaluation, feedback, guidance, and
record keeping for the student; and (c) computer-enriched, serving as a
tool to solve problems or as a model to illustrate ideas or
relationships. Kulik and Kulik's (1986, 1991) meta-analyses found
no difference in achievement among these instructional methods. All
types of computer-based instruction had small but positive effects on
student learning. They concluded that college students can readily adapt
to a variety of computer-based instructional methods.
Another area of considerable interest is learner control vs.
program control in computer-based instruction. In fact, learner control
has become a field of study in itself and will only be touched on in
this paper in the context of reading instruction. In learner control
situations, subjects typically make decisions about how the computer
program operates; for instance, they may decide whether or not to
preview or review material, complete practice exercises, or do extra
work if they receive low scores. In program control, the computer guides
the learner's course through the program and usually "makes
decisions" about whether to review material or to assign extra
exercises for the learner.
Two studies (Balajthy, 1988; Gay, 1986) caution against giving poor
readers significant control over their learning, whether using
computer-assisted or traditional methods. In these studies, learner
control hurt low-aptitude students who lacked effective learning and
reading strategies. As both researchers explain, these students are
unable to accurately monitor the success or failure of their own
learning. In Gay's study, when students were given control over the
computer program to study modules on DNA structure, subjects avoided
difficult or unfamiliar material and tended to overstudy familiar
topics. On the other hand, subjects with high prior knowledge of the
topic were significantly more efficient in their use of time under
learner control than were high prior knowledge subjects under program
control or low prior knowledge subjects under either learner or program
control. Blohm (1987) also found that providing
proficient readers with learner control (in this case, computerized
access to lookup aids, such as clarification of technical language)
improved their reading comprehension. That is, competent readers
successfully monitored their own comprehension and took advantage of
computerized tools when compared to competent readers reading the same
material via computer with no lookup aids. Both Gay's and Blohm's studies
suggest that students can be given more learner control if their prior
understanding of a topic is relatively high.
In one of the few studies comparing 2 different methods of
computer-assisted instruction to traditional instruction, Skinner (1990)
allowed students to use the same computer program under guided (GUIDED)
or unguided (SOLO) conditions. Under the SOLO method, students were able
to choose which computerized tutorials to complete, while under the
GUIDED method students were required to complete entire units of
tutorials. The control group used text-only tutorials. As mentioned
earlier, Skinner found that low-achieving students benefited
significantly from computer-based instruction. The study also revealed
that students seemed to prefer the guided method of computer
instruction. That is, even when students in the SOLO group could operate
the program as they wished, most treated it like a GUIDED program. This
finding might explain the higher levels of achievement for both the
computer users and for the low-ability students, since other research
suggests program control benefits less capable students.
Skinner's (1990) results, though, are contradicted by other
studies. Taylor and Rosecrans (1986) also examined a control group
(non-computer users) and 2 different computer-assisted treatments:
students receiving computer-assisted instruction in a structured manner
and students using computer-assisted instruction during their free time
(unstructured). In this study, the control group outperformed the 2
experimental groups. In another three-way study, Balajthy (1988)
compared students using traditional workbook exercises to students using
2 different computer programs: a fast-paced video game and a slow-moving
text exercise. In this case, the workbook users outscored the 2 groups
of computer users. As in Skinner's study, students seemed motivated
by the computer-based instruction (rating it as highly effective) and
spent more time using the computers. But, unlike Skinner's
subjects, the students in Balajthy's (1988) study did not benefit
as much from the computer-based instruction as did their counterparts
who studied with text workbooks.
A tentative conclusion might be that interactive yet guided
practice (as advocated by Burke and others, 1992) is a beneficial
approach for computerized remedial reading instruction, and, for these
students, better than unaided homework. Students with more prior
knowledge about a topic (Gay, 1986), or with good reading skills (Blohm,
1987) may benefit from more control over their own learning (Grabe et
al., 1989). Still more research is needed to sort out the various
influences of the type of computer program or instructional method and
of the characteristics of the learner on achievement.
Criticisms of the Research
Although research studies should provide a more reliable, objective
assessment of computer-based instruction than anecdotal or hypothetical
observations, research has its limits and problems.
As noted earlier, the type and quality of computer programs vary
greatly. Balajthy (1987) contends that "a variety of observers have
indicated that the computer is not being well-used in the field of
education" (p. 56). He also suggests there is a "lack of
quality software" (p. 57), a point Reinking (1988-89) supports when
he argues that most reading software is neither pedagogically sound nor
based on current research about the teaching of reading. Balajthy (1987)
points out that "almost all computer-based research is based on the
programmed instruction model, which ... is presently out of favor among
reading researchers and teachers" (p. 56). In these situations, the
software or hardware limitations may also limit the research findings.
If computers are not being effectively used to teach, then researchers
will not see the results of good computer-based instruction.
More troublesome, though, are claims of flawed research studies and
meta-analyses. In an examination of the meta-analyses done on
computer-based instruction by Kulik and colleagues, Clark (1986) claims
75% of the studies used were poorly designed (based on a random sampling
of 30% of the studies included in the meta-analyses). He also notes that
over 50% of the studies he examined failed to control the amount of
instruction each group received, so that more instructional time might
account for the increased learning of the computer users. Moreover,
Clark points to studies in which the method of instruction differed
between experimental and control groups. In these studies, the type of
instruction, rather than the computer, may account for any measured
effect. Reinking and Bridwell-Bowles (1991) also contend that many
computer-based studies fail to properly control variables, such as time
on task. Again, if the computer group is spending more time studying
than the control group, this extra time, rather than the computer, might
account for differences between the two groups.
A recurring criticism of research design is failure to control for
the Hawthorne effect that tends to operate on the experimental group
(Balajthy, 1987; Clark, 1986; Reinking & Bridwell-Bowles, 1991). The novelty of
using computers, explains Balajthy (1987), might result in increased
student effort to learn. Evidence for this effect is bolstered by the
finding in the most recent meta-analysis (Kulik & Kulik, 1986) that
computer-based instruction is more effective over short periods of time
(less than four weeks) and effectiveness decreases over longer periods.
Perhaps, after three or four weeks, the novelty of using a computer wears
off. On the other hand, shorter studies might be more tightly controlled
and therefore better able to measure significant differences between the two
groups (Kulik & Kulik, 1986). Another concern is lack of control
over the "same teacher" effect. That is, if different
instructors design the curriculum and/or teach the control group and the
experimental group, differences in achievement might be attributed to
the instructor rather than to the method of instruction. As evidence of
this problem, Clark notes that when the same teacher designs both the
computer and traditional instruction, computer-based effect sizes for
college students reduce to insignificant levels.
In fact, when Clark (1986) re-analyzed the studies, controlling for
such variables as the "same teacher" effect or instructional
methods, his revised effect sizes were much lower. He concludes that
meta-analyses overestimate the effect of computer-based instruction on
achievement. It would appear that many of Clark's criticisms of the
meta-analyses apply to the studies examined in this paper. Differences
in instructional methods or time on task between control and
experimental groups may account for the differences between groups.
The Correct Question?
In light of the criticisms of the research and meta-analyses,
Clark (1986) contends that pitting computer-based instruction against
traditional teaching methods is misleading and unproductive.
When studies are correctly designed, Clark asserts, no discernible
differences in student achievement exist that can be attributed to
computers, and there is no reason to believe that there should be. The
computer, he argues, is just a delivery system for instruction. The type
of instruction, rather than the means by which it is sent to the
student, is paramount. Therefore, the question researchers could most
productively investigate is how computers might deliver good instruction
effectively and cost-efficiently.
Balajthy (1987) also believes research into computer-based
instruction could be more productive by focusing on the question,
"In what ways can the computer improve on conventional classroom
effectiveness and efficiency?" (p. 55). Unlike Clark (1986), though,
Balajthy (1987) insists that research and research reviews show
"there is no doubt that computer instruction is effective" (p.
55). Like Clark, Balajthy (1987) emphasizes identifying effective
teaching methods, then considering how computers can effectively deliver
that instruction. In this process, Balajthy (1987) advocates examining
the various student-based, computer-based, or instructional factors that
influence computer effectiveness.
As Tanner (1984) urges, we as educators should not allow the
excitement of computers arriving in our classrooms and labs to blind us
to the need to examine carefully how we use computers with our students.
It would seem, based on the examination of research included in this
review, that Clark's (1986) and Balajthy's (1987) shared emphasis on
good instruction delivered via computers, rather than on computer
instruction itself, is well placed. As educational researchers, we might
do well to heed Rachal's (1995) criticisms of computer-based research,
noting how frequently studies lacked control over treatment time, did
not randomly assign subjects to groups, or used a small number of
subjects for study. His suggestions to future researchers are excellent:
pre-testing and randomly assigning an adequate number of students to
control and experimental groups taught by equally competent instructors;
carefully documenting time on task and post-testing students after an
equal number of hours for each treatment; using appropriate software; and
reporting methodology and findings as clearly as possible.
Despite problems with the research, in light of the studies cited
here, there are some good reasons to use computers for reading
instruction with college students. Computer-based instruction can
provide motivating and efficient learning since two of the most
significant advantages to using computers are that students have
positive attitudes toward learning with computers and that computers, in
most situations, can reduce instructional time.
Moreover, computer-based instruction as a supplement to traditional
teaching methods appears to increase student achievement, though it is
not clear whether computer-based instruction itself or the instruction
given students via the computer best accounts for student gains. This
uncertainty suggests that teachers need to consider carefully the
computer program itself. Instruction should be based on sound pedagogy.
Indeed, supplemental materials or additional instruction may improve
students' reading skills even without computer aid.
Here, some evidence indicates that remedial students can benefit more
from computer instruction, but only if they are not given a significant
degree of control over their own learning. For these students, program
control or explicitly taught learning strategies might be more
advantageous. Certainly there is promise that computers can help teach
large numbers of college students, including remedial readers, but only
if they are used wisely.
Askov, E. N., & Clark, C. J. (1991). Using computers in adult
literacy instruction. Journal of Reading, 34, 434-437.
Askwall, S. (1985). Computer supported reading vs. reading text on
paper: A comparison of two reading situations. International Journal of
Man-Machine Studies, 425-439.
Balajthy, E. (1987). What does research on computer-based
instruction have to say to the reading teacher? Reading Research and
Instruction, 27 (1), 55-65.
Balajthy, E. (1988). An investigation of learner-control variables
in vocabulary learning using traditional instruction and two forms of
computer-based instruction. Reading Research and Instruction, 27 (4),
Blohm, P. J. (1987). Effect of lookup aids on mature readers'
recall of technical text. Reading Research and Instruction, 26 (2),
Burke, M., & others. (1992, March). Computer-assisted vs.
text-based practice: Which method is more effective? A version of this
paper was presented at the Annual Midwest Reading and Study Skills
Conference, Kansas City, MO. (ERIC Document Reproduction Service No. Ed
Clark, R. E. (1986). Evidence for confounding in computer-based
instruction studies: Analyzing the meta-analyses. Educational
Communication and Technology, 33, 249-262.
Culver, L. (1991). Improving reading speed and comprehension for
ESL students with the computer (Practicum Report No. FL019486). Florida:
Nova University. (ERIC Document Reproduction Service No. ED 335 960).
Dixon, R. A. (1993, March). Improved reading comprehension: A key
to university retention? Paper presented at the Annual Midwest Regional
Reading and Study Skills Conference, Kansas City, MO.
Fish, M. C., & Feldmann, S. C. (1987). A comparison of reading
comprehension using print and microcomputer presentation. Journal of
Computer-Based Instruction, 14 (2), 57-61.
Gay, G. (1986). Interaction of learner control and prior
understanding in computer-assisted video instruction. Journal of
Educational Psychology, 78, 225-227.
Grabe, M., Petros, T., & Sawler, B. (1989). An evaluation of
computer assisted study in controlled and free access settings. Journal
of Computer-Based Instruction, 16 (3), 110-116.
Jobst, J.W., & McNinch, T.L. (1994). The effectiveness of two
case study versions: Printed versus computer-assisted instruction.
Journal of Technical Writing and Communication, 24, 421-433.
Kamil, M. L. (1987). Computers and reading research. In D. Reinking
(Ed.), Reading and computers: Issues for theory and practice (pp.
57-75). New York: Teachers College Press.
Kester, D. L. (1982, August). Is micro-computer assisted basic
skills instruction good for black, disadvantaged community college
students from Watts and similar communities? Paper presented at the
International School Psychology Colloquium, Stockholm, Sweden.
Kleinmann, H.H. (1987). The effect of computer-assisted instruction
on ESL reading achievement. The Modern Language Journal, 71, 267-273.
Kulik, C.C. & Kulik, J.A. (1986). Effectiveness of
computer-based education in colleges. AEDS Journal, 19 (2-3), 81-108.
Kulik, C.C. & Kulik, J.A. (1991). Effectiveness of
computer-based instruction: An updated analysis. Computers in Human
Behavior, 7, 75-94.
Kulik, J. A., Kulik, C. C., & Cohen, P. A. (1980).
Effectiveness of computer-based college teaching: A meta-analysis of
findings. Review of Educational Research, 50, 525-544.
Lang, W. S., & Brackett, E. J. (1985, February). Effects on
reading achievement in developmental education: Computer-assisted
instruction and the college student. Paper presented at the Annual
Meeting of the South Carolina Reading Association, Columbia, SC.
McCreary, R., & Maginnis, G. (1989, May). The effects of
computer-assisted instruction on reading achievement for college
freshman. Paper presented at the Annual Meeting of the International
Reading Association, New Orleans, LA.
Mikulecky, L., Clark, E.S., & Adams, S. M. (1989). Teaching
concept mapping and university level study strategies using computers.
Journal of Reading, 32, 694-702.
Price, R. L., & Murvin, H. J. (1992). Computers can help
student retention in introductory college accounting. Business Education
Forum, 47, 25-27.
Rachal, J.R. (1995). Adult reading achievement comparing
computer-assisted and traditional approaches: A comprehensive review of
the experimental literature. Reading Research and Instruction, 34 (3),
Reinking, D. (1987). Computers, reading, and a new technology of
print. In D. Reinking (Ed.), Reading and computers: Issues for theory
and practice (pp. 3-23). New York: Teachers College Press.
Reinking, D. (1988-89). Misconceptions about reading and software
development. Computing Teacher, 16 (4), 27-29.
Reinking, D., & Bridwell-Bowles, L. (1991). Computers in
reading and writing. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P. D.
Pearson (Eds.), Handbook of reading research (pp. 310-340). New York:
Longman.
Seigel, M. A., & Davis, D. M. (1987). Redefining a basic CAI
technique to teach reading comprehension. In D. Reinking (Ed.), Reading
and computers: Issues for theory and practice (pp. 111-126). New York:
Teachers College Press.
Skinner, M. E. (1990). The effects of computer-based instruction on
the achievement of college students as a function of achievement status
and mode of presentation. Computers in Human Behavior, 6, 351-360.
Tanner, D. E. (1984). Horses, carts, and computers in reading: A
review of research. Computers, Reading, and Language Arts, 2, 35-38.
Taraban, R. (1996). A computer-based paradigm for developmental
research and instruction. Journal of Developmental Education, 20 (11),
12-14, 16, 18, 20.
Taylor, V. B., & Rosecrans, D. (1986). An investigation of
vocabulary development via computer-assisted instruction (CAI). (ERIC
Document Reproduction Service No. ED 281 168).
Turner, T. C. (1988). Using the computer for adult literacy
instruction. Journal of Reading, 31, 643-647.
Watkins, B. T. (1991). Using computers to teach basic skills.
Chronicle of Higher Education, 38 (6) A23-26.
Wepner, S.B., Feeley, J. T., & Minery, B. (1990). Do computers
have a place in college reading courses? Journal of Reading, 33,
Wepner, S. B., Feeley, J. T., & Wilde, S. (1989). Using
computers in college reading courses? Journal of Developmental
Education, 13, 6-8, 24.
Alison Kuehner has been teaching English at Ohlone College in
Fremont, California for ten years. Last year, while on sabbatical leave
earning a Master's Degree in Reading Instruction, she had time to
research this article.