Research suggests that to adequately prepare teachers for the task
of classroom assessment, attention should be given to educational
measurement instruction. In addition, the literature indicates that the
use of computer-mediated instruction has the potential to affect student
knowledge, skills, and attitudes. This study compared the effects of
traditional face-to-face instruction and computer-mediated instruction
of an undergraduate level educational measurement course on the academic
course performance, educational measurement knowledge, skills,
and attitudes of teacher education students (N = 51) at Sultan Qaboos
University in Oman, using a posttest-only control group design. Results
revealed statistically significant group differences favoring the
computer-mediated instruction. Implications and recommendations for
educational measurement instruction and research are discussed.
Educational Measurement Instruction
Assessment of students' learning is one of the many job
responsibilities of teachers (Mertler, 2003). It has been estimated that
teachers spend up to half of their professional time in classroom
assessment activities (Plake, 1993). Appropriate implementation of these
activities requires strong knowledge and skills in and positive
attitudes toward educational measurement (Alkharusi, Kazem, &
Al-Musawi, 2008; Popham, 2006). Unfortunately, many students enrolled in
educational measurement courses encounter difficulties. They view the
course as less relevant to their prospective profession as teachers,
expect it to be difficult, and often try to avoid taking it for as long
as possible (Bryant & Barnes, 1997; Hills, 1991; Kottke, 2000;
VanZile-Tamsen & Boes, 1997). As might be expected, this situation
may result in negative attitudes and poor course performance (Alkharusi,
2009). In addition, studies investigating classroom assessment literacy
and practices have repeatedly expressed a concern about teachers'
knowledge and skills in educational measurement (e.g., Alkharusi et al.,
2008; McMillan, Myran, & Workman, 2002; Mertler, 1999, 2003). These
difficulties imply that to adequately prepare teachers for the task of
classroom assessment, attention should be given to educational
measurement instruction. The present study aimed at investigating the
comparative effects of a traditional face-to-face instruction and a
computer-mediated instruction on educational measurement academic course
performance, knowledge, skills, and attitudes.
When reviewing the literature related to educational measurement,
we found only one study that specifically focused on educational
measurement instruction, and that study is dated. Muller (1974)
developed instructional materials to enable students to take a graduate
level educational measurement course by self-instruction. Results
indicated that students in the self-instructional section performed as
well as did students in the lecture-discussion section on unit exams.
Also, most of the students were very satisfied with the
self-instructional experience and the self-instructional materials.
In 1990, the American Federation of Teachers (AFT), the National
Council on Measurement in Education (NCME), and the National Education
Association (NEA) jointly developed Standards for Teacher Competence in
Educational Assessment of Students. These standards are intended to
guide the preparation of teachers in educational measurement (AFT, NCME,
& NEA, 1990). The standards hold that teachers should be skilled in
choosing and developing assessment methods appropriate for instructional
decisions; administering, scoring, and interpreting results of
externally- and teacher-produced assessment methods; using assessment
results in making decisions for individual students, planning teaching,
developing curriculum, and making school improvements; developing valid
assessment-based procedures; communicating assessment results to
students, parents, and other audiences; and recognizing methods and uses
of assessments that are unethical, illegal, or otherwise inappropriate.
Consequently, some researchers (Arter, 1999; O'Sullivan &
Johnson, 1993; Stiggins, 1999; Taylor & Nolen, 1996) have described
methods for incorporating these standards when teaching educational
measurement. For example, O'Sullivan and Johnson (1993) developed
eight instructional tasks that are matched with the AFT, NCME, and
NEA's (1990) Standards for a graduate level educational measurement
course. Educational measurement skills of the students enrolled in the
course were pretested and posttested. The results showed a statistically
significant improvement in students' knowledge and understanding of
educational measurement. Furthermore, six months after completing the
course, students reported a higher level of educational
measurement skills than new students beginning the same course. Given
the need for instruction in educational measurement, the next step in
the process is to find the most effective means of delivery.
There has been an increased emphasis on using technology to improve
teaching and learning environments. Educators have recognized that by
combining technology and pedagogy it is possible to create teaching and
learning environments that are more stimulating than traditional
classroom environments (Larkin & Chabay, 1992; Seagren &
Watwood, 1996, 1997; Zhang, 1998). In an experimental analysis of a
computer-mediated instruction, Basile and D'Aquila (2002) reported
that as a result of using educational technology students became less
bored, more motivated, and more likely to learn about the subject
matter. Asynchronous computer-mediated communication is one form of
technology application that has become a useful resource for education.
The difference between asynchronous computer-mediated and face-to-face
instruction is that a Web site replaces the classroom in the
asynchronous courses (Schulte, 2004).
Computer-mediated communication refers to "computer
applications for direct human-to-human communication" (Santoro,
1995, p. 11). The advent of computer-mediated communication has provided
tools to support teaching and learning with greater advantage (Hoskins
& Hooff, 2005). It provides electronic mail (e-mail), group
conferencing, and interactive chat capabilities; delivers instruction;
and facilitates interactivity in terms of student-to-student and
student-to-teacher interactions (Jung, Choi, Lim, & Leem, 2002).
Asynchronous computer-mediated instruction has been found to
promote collaborative learning; support independent, active, generative,
and self-paced learning techniques; and facilitate the ability to create
learning communities (Fernandez & Liu, 1999; Hiltz, 1997; Schulte,
2004). Hiltz (1995) reported that, on one hand, students indicated
greater satisfaction with computer-mediated instruction with respect to
access to the instructors, access to the educational experience,
increased participation, and ability to apply learned materials in new
contexts. On the other hand, instructors indicated improved ability on
the part of students to synthesize diverse ideas and deal with complex
issues.
Moreover, computer-mediated instruction requires students to take
responsibility for their own learning more than traditional
instructional approaches (Berge & Collins, 1995). However, Hiltz
(1997) contends that online discussion should be an integral part of the
asynchronous computer-mediated courses, and that it should be graded to
hold students responsible for the learning process. It has been
maintained that students may have more control over the progress of the
course in the asynchronous instructional environments as opposed to the
traditional ones where the instructor may need to move the class along
to accommodate the curriculum (Piburn & Middleton, 1998). With
computer-mediated instruction, students are no longer passive learners
(Berge & Collins, 1995). Instead, they become active participants in
the creation of knowledge and meaning (Berge & Collins, 1995). This
may contrast with face-to-face instruction, where instructors often
tend to dominate classroom conversations and students tend to be
more passive (Piburn & Middleton, 1998). As such, it may seem
reasonable to argue that computer-mediated instruction can shift
learning toward a more constructivist approach (Rhodes, 1999).
Social Constructivist Instruction
The social constructivist approach is based on the assumption that
individuals learn to construct their knowledge and meanings through
interaction with others (Pear & Crone-Todd, 2002). It holds that
knowledge is not presented to the individuals, but emerges from active
dialogue where people create their own learning paths and knowledge
(Hiltz, 1994). Vygotsky (1962, 1987) viewed socialization as fundamental
to the learning process. To Vygotsky, individuals construct and
reconstruct their own meaning systems through interaction with others.
The learner constructs meaningful relations between the new knowledge
acquired through interaction and his or her previously existing
knowledge (Barab, Hay, & Duffy, 1998). According to the
constructivist approach, learners communicate their knowledge to others
who provide feedback (Pear & Crone-Todd, 2002). As such, the social
constructivist approach to teaching involves a high level of
student-student and student-instructor interaction to enable students to
construct their own knowledge (Pear & Crone-Todd, 2002).
Computer-mediated communication tools can promote increased social
interactions by supporting conversation and collaboration (Jonassen,
Davidson, Collins, Campbell, & Haag, 1995).
Schutte (2005) conducted a study involving a traditional classroom
and a Web-based class that demonstrated the ability of technology to
promote collaboration. Schutte randomly divided students enrolled in a
statistics class into two groups. One group was taught in a traditional
classroom and the other in a virtual classroom presented through the
Web. The virtual students had e-mail groups, hypernews discussion, forms
input via the Web, and Internet relay chat moderated by the instructor.
The course content and requirements were the same for both classes. The
results indicated that the experimental group scored significantly
higher than the traditional group on the exams. Also, posttest results
revealed that the experimental group had significantly higher
perceptions of learning flexibility, understanding of the material, and
a greater affect toward math than the traditional group. Schutte (2005)
attributed the results to the collaborative learning promoted by
computer-mediated communication tools.
Armed with the aforementioned literature, the present study
proposed computer-mediated instruction that is grounded in the social
constructivist approach to teaching and learning. The study utilized
Moodle (modular object-oriented dynamic learning environment), which is
one of the most frequently used course management systems to support the
social constructivist approach to teaching and learning (Romero,
Ventura, & Garcia, 2008). The students were expected to learn by
studying and reflecting on the course materials in two ways: (a) by
constructing answers to potential questions posted on the course site
for them to discuss and (b) by providing feedback and assistance to each
other to help them construct their knowledge related to educational
measurement. In addition, the study proposed a combination of
technology-based collaborative learning with hands-on activities as a
way to teach the undergraduate level educational measurement course. The
advantage of this approach is that it motivates students to learn and
produces higher learning outcomes by allowing students to help one
another, learn from one another, and acquire competence in carrying out
educational measurement tasks related to their area of interest
(Fernandez & Liu, 1999). It was expected that the findings from this
study would reveal information that may equip educational measurement
instructors with instructional methods capable of assisting prospective
teachers to attain the desired competency levels in classroom assessment.
Statement of the Problem
It has been documented that the use of technology-based instruction
in a social constructivist learning environment might have the potential
to improve student academic performance, knowledge, skills, and
attitudes (Reeves & Reeves, 2008; Schutte, 2005; Tutty & Klein,
2008). Based upon our understanding of the applications of the social
constructivism and computer-mediated instruction, the problem being
addressed in this study was the comparative effects of a traditional
face-to-face instruction and a computer-mediated instruction of an
undergraduate level educational measurement course on student academic
course performance, knowledge of, perceived skillfulness in, and
attitude toward educational measurement. The primary research question
was identified as follows: Are there differences between students
exposed to a traditional face-to-face instruction and students exposed
to a computer-mediated instruction of an undergraduate level educational
measurement course with respect to academic course performance,
knowledge of, perceived skillfulness in, and attitude toward educational
measurement?
Method
Participants
The participants were 51 undergraduate teacher education students
representing art and science majors in the College of Education (COE) at
Sultan Qaboos University (SQU) in Oman. The students were enrolled in
two intact sections of an educational measurement course taught by the
same instructor during the Fall 2008 semester. The two sections were
randomly assigned to either a control group (n = 27) taught using
traditional face-to-face instruction or an experimental group (n = 24)
taught using computer-mediated instruction. Table 1 displays
characteristics of the sample along with results of the [chi
square]-test analyses comparing the distributions of the two groups in
terms of gender, major, stage in the program, whether or not they had
taken a course in teaching methods, whether or not they had taken a
course in teaching practicum, and prior experience with computer-mediated
instruction. As shown in Table 1, the majority of the participants were
female and senior. About half of the students had taken less than two
courses taught using computer-mediated instruction. Results of the [chi
square]-test analyses revealed no statistically significant differences
between the two groups on the distributions of gender, major, stage in
the program, whether or not they had taken a course in teaching methods,
whether or not they had taken a course in teaching practicum, and prior
experience with computer-mediated instruction. Also, there were no
statistically significant differences in the self-reported last
cumulative grade point average (GPA) between the two groups, t(49) =
.792, p > .05. In addition, upon entering the course, the students
rated their levels of confidence in using computers to learn educational
measurement. Results showed no statistically significant differences in
the levels of confidence in using computers between the two groups,
t(49) = 1.88, p > .05.
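The group-equivalence checks described above (chi-square tests on categorical characteristics and independent-samples t-tests on continuous covariates such as GPA) can be sketched in Python with SciPy. The counts and scores below are simulated for illustration only and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table: rows = group (control, experimental),
# columns = gender (female, male). Counts are illustrative, not the study's.
gender_by_group = np.array([[20, 7],
                            [18, 6]])
chi2, p, dof, expected = stats.chi2_contingency(gender_by_group)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")

# Independent-samples t-test on a continuous covariate such as GPA
# (simulated values for illustration only).
rng = np.random.default_rng(0)
gpa_control = rng.normal(3.0, 0.4, 27)       # n = 27, as in the control group
gpa_experimental = rng.normal(3.0, 0.4, 24)  # n = 24, as in the experimental group
t, p = stats.ttest_ind(gpa_control, gpa_experimental)
df = len(gpa_control) + len(gpa_experimental) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```

A non-significant p value in either test (p > .05) is what supports the claim of group homogeneity on that characteristic.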
The undergraduate level educational measurement course is offered
by the Department of Psychology in the COE at SQU. The goal of the
course is to have students develop knowledge, skills, and abilities
related to classroom assessment deemed essential to their
prospective profession as teachers. The course is a three credit hour
required course for all undergraduate education majors. A prerequisite
for students enrolled in this course is to have completed and passed a
course in educational objectives. Topics covered within the
undergraduate level educational measurement course are basic concepts
and principles in measurement and evaluation, teacher-made tests,
standardized tests, test and item analysis, reliability and validity,
performance assessment, grading, reporting, and communicating assessment
results. In this study, the traditional face-to-face section of the
course was a morning class taught on two different days whereas the
computer-mediated section of the course was flexible in the sense that
students could access the course Website at any time during the week.
The students chose to enroll in one section of the course and not the
other depending on whether the class time fit the timetable
specified by the SQU Deanship of Registration.
The following five instruments were utilized in this study to
collect data regarding participants' confidence in using computers
to learn, knowledge of, perceived skillfulness in, and attitude toward
educational measurement, as well as their background and demographic
information. The items of these instruments were subjected to a content
validation process. They were given to a group of five faculty members
in the areas of educational measurement and psychology from SQU. The
faculty members were asked to judge the clarity of wording and the
appropriateness of each item and its relevance to the construct being
measured. Their feedback was used for refinement of the items.
Confidence in Using Computers to Learn Educational Measurement
Levine and Donitsa-Schmidt's (1998) Computer Confidence Scale
(10 items, [alpha] = .90) was adapted to assess students' beliefs
about their ability to learn the educational measurement course using
computers. Responses were obtained on a 5-point Likert scale ranging from
1 (strongly disagree) to 5 (strongly agree). Scoring of the negative
items was reversed so that a high average rating score reflected a high
confidence level. This scale was administered at the beginning of the
study, and the internal consistency coefficient was .85 as measured by
Cronbach's alpha.
Academic Course Performance
Total points earned in the course were used to reflect
students' academic performance in the course. These points were a
summation of the points earned in the homework assignments (10%), the
quizzes (10%), the project (20%), the midterm exam (20%), and the final
exam (40%). The objectives, the content, and the questions covered in
these course requirements were similar for both sections (face-to-face
instruction and computer-mediated instruction) of the course.
Knowledge of Educational Measurement
Informed by the literature on classroom assessment literacy
(Mertler & Campbell, 2005; Plake & Impara, 1992), 32
multiple-choice items with four options, one being the correct response,
were used to assess students' knowledge and understanding of basic
principles of sound classroom assessment practices, terminology,
development, and use of various classroom assessment methods. The items
were dichotomously scored (0 = incorrect response, 1 = correct response)
with a high total score reflecting a high level of educational
measurement knowledge. These items were administered at the end of the
study, and the KR20 reliability coefficient was .84.
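The KR-20 (Kuder-Richardson Formula 20) coefficient reported above for the dichotomously scored knowledge items can be computed from a 0/1 response matrix as follows. The matrix below is hypothetical and far smaller than the study's 32-item, 51-student data:

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for dichotomously scored (0/1) items.

    scores: 2-D array, rows = examinees, columns = items.
    """
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p                                  # proportion incorrect per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Illustrative data: 6 examinees x 5 items (hypothetical, not the study's).
items = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
])
print(f"KR-20 = {kr20(items):.2f}")  # prints KR-20 = 0.90
```

KR-20 is the special case of Cronbach's alpha for binary items, which is why both appear in this instruments section; a value of .84, as reported, indicates good internal consistency for a knowledge test.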
Perceived Skillfulness in Educational Measurement
Informed by the literature on classroom assessment (Alkharusi,
2002; O'Sullivan & Johnson, 1993; Zhang & Burry-Stock,
1994), 46 items were used to assess students' perceptions of skills
in performing certain educational measurement tasks. Responses were
obtained on a 5-point Likert scale ranging from 0 (not at all skilled) to 5
(very skilled). A high average rating score reflected a high level of
perceived skillfulness in educational measurement. These items were
administered at the end of the study, and the internal consistency
coefficient was .97 as measured by Cronbach's alpha.
Attitude Toward Educational Measurement
Bryant and Barnes's (1997) Attitude Toward Educational
Measurement Inventory (29 items, [alpha] = .93) was used to assess
students' attitudes toward educational measurement. Responses were
obtained on a 5-point Likert scale ranging from 1 (strongly disagree) to
5 (strongly agree). Scoring of the negative items was reversed so that a
high average rating score reflected a more positive attitude toward
educational measurement. This inventory was administered at the end of
the study, and the internal consistency coefficient was .89 as measured
by Cronbach's alpha.
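Reverse-scoring negatively worded items and computing Cronbach's alpha, as described for this inventory and the other Likert-type scales above, can be sketched as follows. The ratings are invented for illustration and are not the study's data:

```python
import numpy as np

def reverse_score(ratings: np.ndarray, low: int = 1, high: int = 5) -> np.ndarray:
    """Reverse-score Likert ratings so a high value means a positive response."""
    return (low + high) - ratings

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha; rows = respondents, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Illustrative ratings: 5 respondents x 4 items on a 1-5 scale; item 4 is
# negatively worded (hypothetical data, not the study's).
ratings = np.array([
    [5, 4, 5, 1],
    [4, 4, 4, 2],
    [2, 3, 2, 4],
    [3, 3, 3, 3],
    [5, 5, 4, 1],
])
ratings[:, 3] = reverse_score(ratings[:, 3])  # flip the negative item first
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # prints alpha = 0.96
```

Reversing negative items before computing alpha matters: leaving them unflipped deflates inter-item correlations and can make a reliable scale look unreliable.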
Background and Demographic Information
The students were requested to provide information such as gender,
major, stage in the program, last cumulative grade point average,
whether or not they had taken a course in teaching methods, whether or
not they had taken a course in teaching practicum, and number of courses taken
with computer-mediated instruction.
In this study, two intact sections of the educational measurement
course were used. The sections were assigned to either a control group
taught using the traditional face-to-face instruction or an experimental
group taught using the computer-mediated instruction. The sections were
taught by the same instructor using the same course content, textbook,
and requirements. The course requirements included discussion, four
biweekly homework assignments (10%), four biweekly quizzes (10%), a
project of planning and constructing an achievement test to be completed
by the end of the semester (20%), one midterm exam (20%), and the final
comprehensive course exam (40%). In each section, the instructor
randomly formed working groups of three to four students per group for
discussion, homework assignments, and the project. The groups were given
a list of question stems to facilitate group discussion and dialogue
according to the reciprocal questioning approach to learning
environments (Woolfolk, 2004). The main difference between the two
sections was the use of the computer to mediate instruction and
communication.
Traditional Face-to-Face Instruction
This class met face-to-face in a traditional classroom two times
per week in the morning, for two hours, over a 16-week period. The
instruction consisted of face-to-face lectures and discussions
supplemented with readings assigned by the instructor for each week. The
class time was devoted to the transmission of information from the
instructor to the class with the working groups taking notes and
discussing the course materials in a reciprocal learning environment.
The students completed in groups the homework assignments and the
project outside the class and handed them to the instructor in the class
on the date due. The lecture notes and the readings served as the only
resources that could help students understand the topics and complete
the course requirements. The quizzes, the midterm exam, and the final
exam were all completed in the class using paper-and-pencil on scheduled
dates. There were no practice quizzes.
Computer-Mediated Instruction
The first meeting of this class was face-to-face with the instructor
to orient the students to the course nature and answer organizational
questions about it. The students in this class used Moodle, an open
source Course Management System (CMS), to access the course materials
interactively through the computer. The students had online access to the
course Website using their university username and password. The course
site included the course syllabus, the working groups, private grade
book, forums for class and group discussions intended to promote
interaction between the instructor and students as well as among the
students, short practice and mandatory quizzes, and resources connecting
students to the lecture notes and demonstrations. The lectures were not
provided through video or audio technologies. Instead, they were posted
weekly on the course site as PDF files.
The course site was structured so that one topic was covered each
week. To help provide structure and pacing for students, due dates were
posted for all topics and tasks. The students were required to log into
the course site and post responses at least once for each discussion
topic, twice per week. Although each discussion topic closed at the end
of its week, the students could refer and reply to previous posts and
discussions. Also, the students were asked to check the course site on a
regular basis for discussion questions and announcements from the
instructor and feedback from classmates.
The students completed the quizzes and the midterm exam using the
Moodle system with immediate grading and feedback, whereas the final
exam was completed manually in class using paper and pencil. The
students were asked to upload word-processed project and homework
assignments and send them to the instructor on the due date via the
course site. They were asked to use their e-mail to communicate and work
with their respective groups. In addition, they were informed that they
could obtain assistance from the instructor through in-person meetings,
phone calls, and e-mails. Finally, the students were informed that the
instructor would be able to monitor their interactions and group work on
the course site.
Prior to the beginning of the Fall 2008 semester, the authors
arranged with the course instructor to collect data from the students.
During the first class meeting and before the assignment of the sections
into traditional and online classes, the authors informed the students
that a project was being conducted to examine the effect of teaching
methods on variables assumed to be associated with teacher preparation
in educational measurement. At this time, the authors requested the
participation of the students. The students were informed that they were
not obligated to participate and that participation would not influence
a student's grade in any way. They were also told that they would be
asked to participate again at the end of the semester and, as such, they
needed to write their university identification numbers on the
questionnaires to enable the authors to match pretest with posttest
information. The students were assured that when data were coded for
statistical analyses and stored in the computer, they would not contain
university identification numbers that could identify a particular
student's responses. All students agreed to participate in the
study. They were then given the Background and Demographic Questionnaire
and the Computer Confidence Scale.
One week prior to the final course exam, the authors gave the
students a questionnaire containing items regarding knowledge of,
perceived skillfulness in, and attitude toward educational measurement.
Brief instructions were provided by the authors regarding the order of
information that was requested in the questionnaire, how to respond to
the respective items, and where to find directions that were also
included in the questionnaire. Then, the students were requested to
write their university identification number and fill out the
questionnaire. This administration was made during a regularly scheduled
class session.
Research Design and Limitations of the Study
This study employed a posttest-only control group design. The type
of instruction delivery through either a traditional face-to-face or a
computer-mediated format was the independent variable of the study.
Academic course performance reflected by the total points earned in the
course and post-measures on the knowledge of, perceived skillfulness in,
and attitude toward educational measurement were the dependent
variables.
There were several limitations to this study, including threats to
the internal and external validity. Although the high similarity of the
groups on the background and demographic variables (see Table 1)
provided evidence for homogeneity between the groups, selection bias is
a possible threat to the internal validity of the study due to the
absence of random assignment. No students dropped out during the course,
thereby minimizing attrition as a threat to the internal validity.
Instrumentation as a threat to the internal validity was minimized by
selecting post-measures that provided valid and reliable scores on the
knowledge of, perceived skillfulness in, and attitude toward educational
measurement. We did not observe students from one section discussing the
instructional treatments with students from the other section. Also, the
instructor made efforts to reduce students' awareness of and
expectations about the study. These efforts might have minimized
resentful demoralization, diffusion of
treatment, compensatory rivalry, and compensatory equalization as
threats to the internal validity. However, it may be difficult to
generalize the results of the study to other settings without the same
level of experience, motivation, and other personological
characteristics of the instructor implementing the independent variable
in this study. Thus, experimenter effect is a possible threat to the
external validity of the study. In addition, the absence of the random
selection of participants should be considered a limitation when
generalizing the findings of this study to populations that may differ
from undergraduate teacher education students in the COE at SQU in Oman.
Results
Independent samples t-tests were employed to investigate
differences between students taught using a traditional face-to-face
instruction and students taught using a computer-mediated instruction
with respect to the academic course performance and post-measures on the
knowledge of, perceived skillfulness in, and attitude toward educational
measurement. Table 2 presents means and standard deviations for the
academic course performance and post-measures on the
educational measurement knowledge, skills, and attitudes for students
exposed to the computer-mediated instruction (i.e., experimental group)
and students exposed to the traditional face-to-face instruction (i.e.,
control group). The results revealed that when compared to the control
group, the experimental group had on average higher academic course
performance, t(49) = 3.71, p < .01, d = 1.05, 95% CI = [2.62, 8.82];
higher levels of educational measurement knowledge, t(49) = 2.19, p <
.05, d = .61, 95% CI = [.29, 6.85]; higher levels of perceived
skillfulness in educational measurement, t(49) = 2.59, p < .05, d =
.73, 95% CI = [.07, .62]; and more positive attitudes toward educational
measurement, t(49) = 9.88, p < .001, d = 2.79, 95% CI = [.73, 1.11].
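The statistics reported above (t, p, pooled-SD Cohen's d, and a 95% confidence interval for the mean difference) can be reproduced for any two groups along these lines. The scores below are simulated, not the study's data:

```python
import numpy as np
from scipy import stats

def t_d_ci(x1: np.ndarray, x2: np.ndarray, conf: float = 0.95):
    """Independent-samples t, pooled-SD Cohen's d, and CI for the mean difference."""
    n1, n2 = len(x1), len(x2)
    t, p = stats.ttest_ind(x1, x2)                 # equal-variance t-test
    df = n1 + n2 - 2
    sp = np.sqrt(((n1 - 1) * x1.var(ddof=1) +      # pooled standard deviation
                  (n2 - 1) * x2.var(ddof=1)) / df)
    d = (x1.mean() - x2.mean()) / sp               # Cohen's d
    se = sp * np.sqrt(1 / n1 + 1 / n2)             # SE of the mean difference
    tcrit = stats.t.ppf(1 - (1 - conf) / 2, df)
    diff = x1.mean() - x2.mean()
    return t, p, d, (diff - tcrit * se, diff + tcrit * se)

# Simulated course scores; n values match the study's groups (24 vs. 27).
rng = np.random.default_rng(1)
experimental = rng.normal(85, 6, 24)
control = rng.normal(79, 6, 27)
t, p, d, ci = t_d_ci(experimental, control)
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, "
      f"95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

By Cohen's conventions, the reported d values of .61 to 2.79 correspond to medium to very large effects, which is why the group differences are described as favoring the computer-mediated section.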
Discussion
Classroom assessment is considered one of the competencies that
teachers must possess (AFT, NCME, & NEA, 1990), and as such some
teacher education programs require a course in educational measurement
to help teachers develop the necessary knowledge, skills, and attitudes
for the task of classroom assessment (Campbell, Murphy, & Holt,
2002). However, the dissatisfaction with teachers' assessment
literacy and practices along with the difficulties expressed by teacher
education students enrolled in the course have led us to design a
computer-mediated instruction of an undergraduate level educational
measurement course and compare its effects on the academic course
performance, educational measurement knowledge, skills, and attitudes to
a traditional face-to-face instruction. The instruction in this study
was guided by the principles of social constructivist pedagogy. The
results indicated that the computer-mediated instruction favorably
affected the educational measurement knowledge, skills, and attitudes of
teacher education students as well as their academic course performance.
These results are generally consistent with those integrating social
constructivist approaches of instruction into computer-based learning
environments (e.g., Jung, Choi, Lim, & Leem, 2002; Kearsley, 2000;
O'Donnell, Hmelo-Silver, & Erkens, 2006; Tutty & Klein,
2008).
Different approaches to learning theory support the social
constructivist learning environment for different reasons. Advocates of
information processing theory point to the value of group discussion in
helping students rehearse, elaborate, and expand their knowledge, make
connections, and review information (Woolfolk, 2004). Proponents of a
Piagetian perspective suggest that the interaction in groups can create
the cognitive conflict and disequilibrium that lead a student to
question his or her understanding of the material and try out new ideas
(Woolfolk, 2004). Vygotsky's theory suggests that social
interaction is important for learning because higher mental functions
such as reasoning, comprehension, and critical thinking originate in
social interactions and are then internalized by individuals (Woolfolk,
When compared with the traditional face-to-face class in this study,
all group members in the computer-mediated instruction class actively
and frequently participated in the group and class forums, asking
questions and giving elaborated written explanations and feedback to
each other, with careful online monitoring by the instructor, who acted
as a model and a facilitator for the discussion, the sharing of
explanations, and brainstorming. This might have provided the social
support and scaffolding that students may need during their learning
process (Woolfolk, 2004). In addition, as suggested by Palincsar and
Herrenkohl (2002), the reciprocal questioning approach employed in this
study, which required the groups to ask and answer task-related
questions, might have promoted active group and class online dialogue
and facilitated conceptual understanding and problem solving of the
learned materials. Furthermore, as indicated by Herrington, Reeves, and
Oliver (2007), not having a strict class meeting schedule, giving more
opportunities to attempt online quizzes, and the use of authentic
project and homework assignments along with immediate online written
feedback might have consolidated the learned knowledge, skills, and
attitudes of the students in the computer-mediated class.
To sum up, the findings from this study point to the conclusion
that although the time needed to deliver computer-mediated instruction
is two to three times greater than that needed for traditional
face-to-face instruction (Palloff & Pratt, 1999), the online learning
environment could provide teacher education students with many
opportunities to acquire the educational measurement knowledge, skills,
and attitudes needed for successful classroom assessment practices. The
current findings testify to the value of social constructivism in a
technology-based learning environment for enhancing educational
measurement instructional outcomes, a value that deserves careful
attention and further research. Future studies may need to consider the
effects of group composition with students of differing abilities.
Interviews with students may also validate the self-report
questionnaires and provide a deeper understanding of the phenomenon. The
absence of random selection and random assignment, as well as the
experimenter effect, limited the study findings. Thus, additional
research will be needed to determine the extent to which the findings
are applicable to other settings.
This research was supported by a grant (IG/EDU/PSYC/08/04) from
Sultan Qaboos University in Oman. The funding source had no involvement
in the conduct of the research or the preparation of the article. We
would like to thank Mr. Hilal Al-Rasheedi for providing technical
support in the design and conduct of the course.
Alkharusi, H. A. (2002). Relationship between math self-concept,
perceived self-efficacy, and attitude toward educational measurement
among College of Education students at Sultan Qaboos University.
Unpublished master's thesis, Kent State University.
Alkharusi, H. (2009). Correlates of teacher education
students' academic performance in an educational measurement
course. International Journal of Learning, 16, 1-15.
Alkharusi, H., Kazem, A., & Al-Musawi, A. (2008). Knowledge,
skills, and attitudes of preservice and inservice teachers in
educational measurement. Manuscript submitted for publication.
American Federation of Teachers, National Council on Measurement in
Education, & National Education Association. (1990). Standards for
teacher competence in educational assessment of students. Educational
Measurement: Issues and Practice, 2, 30-32.
Arter, J. (1999). Teaching about performance assessment.
Educational Measurement: Issues and Practice, 18, 30-44.
Barab, S. A., Hay, K. E., & Duffy, T. M. (1998). Grounded
constructions and how technology can help. TechTrends, 43, 15-23.
Basile, A., & D'Aquila, J. M. (2002). An experimental
analysis of computer-mediated instruction and student attitudes in a
principles of financial accounting course. Journal of Education for
Business, 77, 137-143.
Berge, Z., & Collins, M. P. (1995). Computer-mediated
communications and the online classroom: An introduction. In Z. L. Berge
& M. P. Collins (Eds.), Computer mediated communication and the
online classroom: Vol. 1. Overview and perspectives (pp. 1-10).
Cresskill, NJ: Hampton Press.
Bryant, N. C., & Barnes, L. L. B. (1997). Development and
validation of the attitude toward educational measurement inventory.
Educational and Psychological Measurement, 57, 870-875.
Campbell, C., Murphy, J. A., & Holt, J. K. (2002, October).
Psychometric analysis of an assessment literacy instrument:
Applicability to preservice teachers. Paper presented at the meeting of
the Mid-Western Educational Research Association, Columbus, OH.
Fernandez, G. C. J., & Liu, L. (1999). A technology-based teaching
model that stimulates statistics learning. Computers in the Schools, 16,
Herrington, J., Reeves, T. C., & Oliver, R. (2007). Immersive
learning technologies: Realism and online authentic learning. Journal of
Computing in Higher Education, 19, 65-84.
Hills, J. R. (1991). Apathy concerning grading and testing. Phi
Delta Kappan, 72, 540-545.
Hiltz, S. R. (1994). The virtual classroom: Learning without limits
via computer networks. Norwood, NJ: Ablex Publishing Corp.
Hiltz, S. R. (1995, March). Teaching in a virtual classroom. Paper
presented at the International Conference on Computer Assisted
Instruction, National Chiao Tung University, Hsinchu, Taiwan.
Hiltz, S. R. (1997). Impacts of college-level courses via
asynchronous learning networks: Some preliminary results. Journal of
Asynchronous Learning Networks, 1, 1-19.
Hoskins, S. L., & van Hooff, J. C. (2005). Motivation and
ability: Which students use online learning and what influence does it
have on their achievement? British Journal of Educational Technology,
Jonassen, D., Davidson, M., Collins, M., Campbell, J., & Haag,
B. B. (1995). Constructivism and computer-mediated communication in
distance education. The American Journal of Distance Education, 9, 7-26.
Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of
different types of interaction on learning achievement, satisfaction and
participation in web-based instruction. Innovations in Education and
Teaching International, 39, 153-162.
Kearsley, G. (2000). Online education: Learning and teaching in
cyberspace. Belmont, CA: Wadsworth/Thomson Learning.
Kottke, J. L. (2000). Mathematical proficiency, statistics
knowledge, attitudes toward statistics, and measurement course
performance. The College Student Journal, 34, 334-347.
Larkin, J. H., & Chabay, R. W. (1992). Introduction. In J. H.
Larkin & R. W. Chabay (Eds.), Computer-assisted instruction and
intelligent tutoring systems: Shared goals and complementary approaches
(pp. 1-9). Hillsdale, NJ: Lawrence Erlbaum.
Levine, T., & Donitsa-Schmidt, S. (1998). Computer use,
confidence, attitudes, and knowledge: A causal analysis. Computers in
Human Behavior, 14, 125-146.
McMillan, J.H., Myran, S., & Workman, D. (2002). Elementary
teachers' classroom assessment and grading practices. The Journal
of Educational Research, 95, 203-213.
Mertler, C. A. (1999, October). Teachers' (mis)conceptions of
classroom test validity and reliability. Paper presented at the meeting
of the Mid-Western Educational Research Association, Chicago, IL.
Mertler, C. A. (2003, October). Preservice versus inservice
teachers' assessment literacy: Does classroom experience make a
difference? Paper presented at the meeting of the Mid-Western
Educational Research Association, Columbus, OH.
Mertler, C. A., & Campbell, C. (2005, April). Measuring
teachers' knowledge and application of classroom assessment
concepts: Development of the assessment literacy inventory. Paper
presented at the meeting of the American Educational Research
Association, Montreal, Quebec, Canada.
Muller, D. J. (1974). Evaluation of instructional materials and
prediction of student success in a self-instructional section of an
educational measurement course. The Journal of Experimental Education,
O'Donnell, A. M., Hmelo-Silver, C., & Erkens, G. (Eds.).
(2006). Collaborative learning, reasoning, and technology. Mahwah, NJ:
O'Sullivan, R. G., & Johnson, R. L. (1993, April). Using
performance assessments to measure teachers' competence in
classroom assessment. Paper presented at the meeting of the American
Educational Research Association, Atlanta, GA.
Palincsar, A. S., & Herrenkohl, L. R. (2002). Designing
collaborative learning contexts. Theory Into Practice, 41, 26-32.
Palloff, R.M., & Pratt, K. (1999). Building learning
communities in cyberspace. San Francisco: Jossey-Bass.
Pear, J. J., & Crone-Todd, D. E. (2002). A social
constructivist approach to computer-mediated instruction. Computers and
Education, 38, 221-231.
Piburn, M. D., & Middleton, J. A. (1998). Patterns of faculty
and student conversation in Listserv and traditional journals in a
program for preservice mathematics and science teachers. Journal of
Research on Computing in Education, 31, 62-77.
Plake, B. S. (1993). Teacher assessment literacy: Teachers'
competencies in the educational assessment of students. Mid-Western
Educational Researcher, 6, 21-27.
Plake, B. S., & Impara, J. C. (1992). Teacher competencies
questionnaire description. Lincoln, NE: University of Nebraska.
Popham, W. J. (2006). Needed: A dose of assessment literacy.
Educational Leadership, 63, 84-85.
Reeves, P. M., & Reeves, T. C. (2008). Design considerations
for online learning in health and social work education. Learning in
Health and Social Care, 7, 46-58.
Rhodes, C. S. (1999, February). A transactional view of interactive
online components. Proceedings of SITE99. A report produced for the
International Conference of the Society for Information Technology and
Teacher Education, San Antonio, TX.
Romero, C., Ventura, S., & Garcia, E. (2008). Data mining in
course management systems: Moodle case study and tutorial. Computers and
Education, 51, 368-384.
Santoro, G. M. (1995). What is computer-mediated communication? In
Z. L. Berge & M. P. Collins (Eds.), Computer mediated communication
and the online classroom: Vol. 1. Overview and perspectives (pp.
11-27). Cresskill, NJ: Hampton Press.
Schulte, A. (2004). The development of an asynchronous
computer-mediated course: Observation on how to promote interactivity.
College Teaching, 52, 6-10.
Schutte, J. G. (2005). Virtual teaching in higher education: The
new intellectual super highway or just another traffic jam? Retrieved
September 4, 2005, from http://www.csun.edu/sociology/virexp.htm
Seagren, A., & Watwood, B. (1996, February). The virtual
classroom: Great expectations. Delivering graduate education by
computer: A success story. Proceedings of the International
Conference of the National Community College Chair Academy, Phoenix, AZ.
Seagren, A., & Watwood, B. (1997, February). The virtual
classroom: What works? Proceedings of the International Conference of
the Chair Academy, Reno, NV.
Stiggins, R. J. (1999). Evaluating classroom assessment training in
teacher education programs. Educational Measurement: Issues and
Practice, 18, 23-27.
Taylor, C. S., & Nolen, S. B. (1996). A contextualized approach
to teaching teachers about classroom-based assessment. Educational
Psychologist, 31, 77-88.
Tutty, J. I., & Klein, J. D. (2008). Computer-mediated
instruction: A comparison of online and face-to-face collaboration.
Educational Technology and Research Development, 56, 101-124.
VanZile-Tamsen, C., & Boes, S. R. (1997, November). Graduate
students' attitudes and anxiety toward two required courses: Career
development and tests and measurement. Paper presented at the meeting of
the Georgia Educational Research Association, Atlanta, GA.
Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: M.I.T.
Press.
Vygotsky, L. S. (1987). Thinking and its development in childhood.
In R. W. Rieber & A. S. Carton (Eds.), The collected works of L. S.
Vygotsky: Vol. 1. Problems of general psychology (pp. 311-324). New
York: Plenum Press.
Woolfolk, A. (2004). Educational psychology (9th ed.). Boston, MA:
Allyn & Bacon.
Zhang, P. (1998). A case study on technology use in distance
education. Journal of Research on Computing in Education, 30, 398-416.
Zhang, Z., & Burry-Stock, J.A. (1994). Assessment Practices
Inventory. Tuscaloosa, AL: The University of Alabama.
Hussain Alkharusi, Ph.D., and Ali Kazem, Ph.D., Department of
Psychology, College of Education, Sultan Qaboos University. Ali
Al-Musawi, Ph.D., Department of Instructional and Learning Technologies,
College of Education, Sultan Qaboos University.
Correspondence concerning this article should be addressed to Dr.
Hussain Alkharusi at firstname.lastname@example.org.
Characteristics of the Sample along with Results of the Chi-Square
Test Analyses on the Distributions of the Characteristics across the
Control and the Experimental Groups

Characteristic           Control   Experimental   Chi-square   df    p
Gender                                               1.45       1   .23
  Male                      7           3
  Female                   20          21
Major                                                0.06       1   .81
  Art                      11           9
  Science                  16          15
Stage in the program                                 0.07       1   .80
  Junior                    7           7
  Senior                   20          17
Have taken teaching                                  0.08       1   .78
  No                        8           8
  Yes                      19          16
Have taken teaching                                  2.25       1   .13
  No                        9          13
  Yes                      18          11
Prior experience                                    12.55       2   .25
  0-2 courses              10           8
  3-5 courses              13          13
  > 5 courses               4           3

Note. CMI = computer-mediated instruction.
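As a check on the group-equivalence analyses, the chi-square statistic
reported for the gender distribution can be recomputed from the raw cell
counts. The following is a minimal sketch in Python, assuming the cells
are male 7/3 and female 20/21 (control/experimental) and that the
reported value is an uncorrected Pearson chi-square; the function name
is ours, not from the study.

```python
# Hedged illustration: recomputing the Pearson chi-square for the gender
# distribution from the raw cell counts (assumed from the table above).

def pearson_chi_square(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

gender = [[7, 3],    # male: control, experimental
          [20, 21]]  # female: control, experimental
print(round(pearson_chi_square(gender), 2))  # → 1.45
```

The result matches the tabled value of 1.45 with df = 1, which supports
reading the reported statistics as uncorrected Pearson chi-squares.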
Means and Standard Deviations for Academic Course Performance and
Post-Measures on Educational Measurement Knowledge, Skills, and
Attitudes

                  Experimental group     Control group
                       (n = 24)             (n = 27)
Variable             M        SD          M        SD
Performance        81.79     4.02       76.07     6.53
EM knowledge       20.08     6.63       16.51     4.99
EM skills           3.15     0.39        2.80     0.55
EM attitude         4.11     0.37        3.19     0.30

Note. EM = educational measurement.