An examination of typical classroom context and instruction for students with and without behavioral disorders.
Scott, Terrance M.
Alter, Peter J.
Hirn, Regina G.
Education & Treatment of Children, Vol. 34, No. 4 (November 2011)
Publisher: West Virginia University Press
ISSN: 0748-8491


Classrooms are complex social systems in which teachers and students interact in a variety of ways across contexts. At issue are both the nature and frequency of teachers' use of what are typically considered effective instructional practices and the typical manner in which students respond to different teacher behaviors. This study expands upon earlier research using direct observation and coding systems to take a snapshot of how classrooms typically operate and to analyze how teacher behaviors predict student success rates. Over 1000 observations of elementary and high school classrooms were conducted during instructional contexts, and the data for both teacher and student behavior were summarized for analysis. Descriptive data on specific frequency and duration outcomes are presented for teachers and students, and possible interactions are discussed.

Teachers are asked to assume many and varied roles for students beyond that of the traditional instructor (e.g., counselor, surrogate parent, friend). The multiple dynamics of a classroom can be a challenge for any teacher. Today, however, the role of the classroom teacher is becoming even more multidimensional. As a result of the Individuals with Disabilities Education Act (IDEA), teachers are being asked to accommodate more students with emotional and behavioral disorders (EBD) in general education settings. In fact, more than 80% of students with EBD are now served in general education settings (U.S. Department of Education, 2009).

Teachers report that students with EBD engage in more disruptive classroom behavior (e.g., Wehby, Falk, Barton-Arwood, Lande, & Cooley, 2003). For students with or at risk for EBD, investigations into patterns of teacher-student interactions have produced some alarming results. These students receive less instruction, fewer instances of teacher praise, and fewer opportunities to respond (Sutherland, Lewis-Palmer, Stichter, & Morgan, 2008; Sutherland & Oswald, 2005). They also receive more reprimands and are more likely than their typically developing peers to be engaged in ongoing, coercive interactions that increase in both frequency and intensity across time (Carr, Taylor, & Robinson, 1991; Kauffman & Brigham, 2009). In other words, a cycle of behavioral exchanges occurs in which off-task or disruptive behavior elicits fewer positive interactional initiations or responses from the teacher (e.g., praise statements and opportunities to respond), leading to greater student levels of off-task and disruptive behavior.

In addition to behavioral concerns, failure to be academically successful in school is characteristic of students with EBD (Kavale & Mostert, 2004; Kauffman & Landrum, 2009; Lane, Carter, Pierson, & Glaeser, 2006; Nelson, Benner, Lane, & Smith, 2004). These students are far more likely to be deficient in basic academic skills than are their peers without such difficulties (Reid, Gonzalez, Nordness, Trout, & Epstein, 2004; Wagner, Kutash, Duchnowski, Epstein, & Sumi, 2005) and are at much greater risk of school failure (Kauffman & Landrum, 2009; Wagner, Kutash, Duchnowski, Epstein, & Sumi, 2005). For example, estimates of the prevalence of academic difficulties, especially reading and arithmetic deficits, of students with EBD range from 25% to 97% (Reid, Gonzalez, Nordness, Trout, & Epstein, 2004). Clearly, as teachers' responsibilities increase in the face of these challenging students, their moment-to-moment instructional behaviors must become more precise. In the absence of effective teacher intervention practices, both the teacher and the student with EBD tend to experience failures that often result in burnout and attrition for teachers (Zabel & Zabel, 2002), and school failure for the student (Wagner, Kutash, Duchnowski, Epstein, & Sumi, 2005).

Because the importance of classroom and teacher quality is obvious, there is no shortage of books, manuals, programs, or workshops on the subject of effective teacher practice. Furthermore, recent legislative requirements (e.g., No Child Left Behind) would seem to have illuminated the issue of teacher quality with sufficient intensity to prompt more attention to assessment of teachers and teaching. Yet despite the urgency of this need, there is no universally accepted methodology by which to assess or evaluate classroom or teaching practice. When considered in light of current teacher preparation programs at colleges and universities, this dearth is particularly shocking. How can teacher preparation programs guide teachers to create classrooms that facilitate learning to the greatest possible degree if no standard observation system for classrooms and teaching exists? Furthermore, without base rates of pivotal teacher behaviors and corresponding student behaviors, how will we know which behaviors, and to what degree, teachers must engage in to lead to student success?

Assessment Through Direct Observation

Teacher practices such as modeling (Brophy, 2006; Gleason, Carnine, & Boriero, 1990; Rosenshine, 1979), provision of opportunities to respond (Haydon, Mancil, & Van Loan, 2009; Sutherland, Alder, & Gunter, 2003), presentation of clear expectations for learning (Brophy, 2006; Rosenshine, 1976; Weinstein, 2002), and positive acknowledgement (Brophy, 2006; Good, 1984; Hattie & Timperley, 2007) have repeatedly been shown to be associated with increased student achievement. Further, recent studies have shown a connection between classroom activity and student behavior (Greenwood, Horton, & Utley, 2002; Hayling, Cook, Gresham, State, & Kern, 2007).

In the 1990s, the results of increasingly sophisticated electronic methods of classroom observation and analysis were published, demonstrating discrepancies in teacher practice based on student behavior (e.g., Gunter et al., 1993; Shores, Jack, Gunter, Ellis, & DeBriere, 1993; Shores et al., 1994). Looking at frequency counts and conditional probabilities, this research identified a general dearth of positive acknowledgement and a tendency for teachers to focus on negative behavior -- an effect that was exacerbated for students with a history of challenging behavior. More recently, computerized coding systems have been employed to look at relationships between the teacher's delivery of opportunities for student response, student behavior, and teacher praise (Stichter, Lewis, & Whittaker, 2009; Sutherland, 2000; Sutherland, Wehby, & Yoder, 2002; Sutherland & Singh, 2004). Findings suggest that rates of effective teacher practice such as praise and provision of opportunities to respond continue to be relatively low.

Methods of assessing what occurs in classrooms have a long history and a wide variety of identified methodologies. These include self-reports versus 'other' reports and direct versus indirect methods. Indirect methods include surveys and interviews, while direct methods include rating scales, checklists, and observation systems. The data they yield may be in an anecdotal, qualitative, and/or quantitative format. While different methodologies may be equally valuable depending on the purpose they serve, direct and systematic observation of overall classroom phenomena provides a concrete means of quantifying practice in an operational way that reduces bias and enhances feedback (see Yoder & Symons, 2010). Yoder and Symons (2010) identify among its advantages the reduction of the bias error inherent in self and other reports, the use of operational definitions to measure observable behaviors, and the wide array of analyses that can be conducted on collected data. With regard to classroom and teacher variables, several behaviors related to teacher-student interactions seem to dictate overall classroom and teacher quality. When these variables are examined through direct observation in a bi-directional system of teacher-student interactions, we can accurately capture the effectiveness of teacher behaviors and the corresponding student behaviors.

Teaching Behavior

Teaching behaviors are verbal and physical interactions with students that are associated with an increased probability of student success on specifically communicated outcomes (see Hattie, 2009). Thus, teaching behaviors are the behaviors that teachers use when engaging in effective instruction. Basic teaching, or so-called academic talk, is defined as the teacher explaining a concept, demonstrating a principle, or modeling a skill or activity on an academic topic and furthering the lesson/objective of the class. Reported rates of academic talk have varied in previous studies, ranging from 40.22% (Wallace, Anderson, & Bartholomay, 2002) to 69% (Stichter, Lewis, & Whittaker, 2009), with an average of approximately 50% of instructional time suggested as optimal for student outcomes (Stichter, Lewis, & Whittaker, 2009). Interestingly, while these and other reviewed studies (e.g., Roberson, Woolsey, Seabrooks, & Williams, 2004) varied across elementary and high school grade levels, there are no reported patterns of difference in rates of academic talk across grade levels or across populations of students. When considering the basic structural differences between a high school and an elementary school day, this sameness is surprising, and further investigation is warranted.

Opportunities to respond. Opportunities to respond (OTR) are curriculum-related prompts that the teacher provides to either the group or an individual student. Beyond the basic provision of instruction in the form of academic talk, the opportunity to respond has been identified as a discrete teacher behavior that is positively correlated with improved student academic and behavioral outcomes (Brophy & Good, 1986; Haydon, Mancil, & Van Loan, 2009; Kern & Clemens, 2007; Partin, Robertson, Maggin, Oliver, & Wehby, 2010; Sutherland & Wehby, 2001). OTR rates have been reported as optimal at a level of four to six responses per minute when presenting new material (CEC, 1987, as cited in Sutherland & Wehby, 2001). This is important in considering how OTR rates affect student behavior. When OTR was treated as the independent variable, rates were established on average at or above three responses per minute (Haydon, Mancil, & Van Loan, 2009; Partin et al., 2010; Sutherland, Alder, & Gunter, 2003). Unfortunately, reported OTR rates have been much lower in direct observation research of natural classroom settings for students with EBD.

Positive feedback. The use of positive feedback, including general and specific praise for student behaviors, and the role of negative feedback in the form of reprimands and correction have been well-investigated phenomena (see Hattie, 2009, for a full review of meta-analyses). Positive academic and behavioral feedback, or teacher praise, has been statistically correlated with student on-task behavior (Apter, Arnold, & Stinson, 2010) and has strong empirical support both for increasing academic and behavioral performance and for decreasing problem behaviors (Gable, Hester, Rock, & Hughes, 2009). However, it is often reported as an underused teaching tool despite the supporting research (Shores et al., 1993; Sutherland, Wehby, & Yoder, 2002). With regard to reprimands and correction, there is a continued assertion that teachers should maintain a ratio of praise to correction of 3:1 or 4:1 (Gable, Hester, Rock, & Hughes, 2009; Stichter, Lewis, & Whittaker, 2009).

Student Behaviors

With regard to measuring student behaviors, the positive relationship between students' academic engaged time and academic achievement has a strong empirical research base (Brophy, 2006; Hattie, 2009). Because of this powerful connection, it is logical to measure relationships between the aforementioned teacher behaviors and students' levels of engagement. Teacher behaviors that increase engagement can be expected to increase overall academic achievement. Active engagement is defined as engaging with instruction via choral responding, raising a hand, responding to teacher instruction, writing, reading, or otherwise completing assigned tasks.

The ultimate success of students will be heavily dependent upon the success that is facilitated in classrooms, where students spend the majority of their time. A more conceptually sound model of assessment must be dynamic, producing both descriptive and inferential data and providing sensitivity to the array of factors that vary across classrooms and contexts. We propose that the future of classroom and teacher assessment will involve (1) clearly defined and quantifiable environment, teaching, and student variables that can be used as a repeated measure; (2) variables identified from empirical literature as being associated with effective classrooms/instruction; (3) teacher and student behaviors as both independent and dependent variables -- affecting one another; (4) an ability to account for differential effects by subject, student characteristics, class size, etc.; (5) direct observation of behavior in classroom settings.

While previously identified studies have examined the roles of all of the instructional strategies identified and compared their rates of occurrence and impact on student behavior, noticeable gaps remain in the literature. First, to date individual studies have restricted their examination to either elementary-level or high school-level teachers and students. No study has examined both settings using the same methodology and definitions of behaviors to compare and contrast instructional practices at different grade levels. Second, while instructional practices for students with EBD have been compared to practices for typically developing students, this added dimension of different grade levels has not been included.

Our intent here is to demonstrate the technology for large-scale observation and to highlight questions and outcomes that are worthy of further and more systematically designed research. Thus, the nature of this research is largely exploratory. However, three questions were posed as a part of this exploration:

In what teaching practices do teachers typically engage in classroom settings?

To what degree are students engaged with instruction in a typical classroom?

Are there differences in how teachers behave in relation to students with behavioral problems, and do students with behavioral problems behave differently from students who do not have these problems?


Method

This study involved the use of handheld computers and coding software to observe and record teacher and student behaviors in real time. All observation data were uploaded to a main database, from which a large number of questions could potentially be asked and individually analyzed. In this case, the question under study concerns the differential occurrence of teachers' specific instructional practices and subsequent student behavior. After receiving approval from the university Institutional Review Board for human subjects research, the authors approached schools, which agreed to allow classroom observations with the understanding that data would be summarized without identifiers and returned to the school for informational purposes.

Setting and Participants

Two elementary schools and two high schools in both urban and rural areas near a large city in the Midwestern US made up the sample from which observations were collected. All of these schools were characterized by poverty, with free and reduced-price lunch rates above 80%. Participating schools constitute a convenience sample, as the authors were involved in other projects in these schools and principals welcomed the classroom observations. While not representing a diverse cross-section of urban and rural schools, the participating schools were ethnically diverse, averaging between 14% and 24% minority population.

Data were collected in school classrooms during instructional times that included reading, math, social studies, and science. Observers did not begin coding during non-instructional time (e.g., transitions, when teachers were returning papers, roll call) but began once the teacher was providing instruction in a content area. Because the focus of this examination was on teacher-directed academic instruction and each unique observation was specific to a particular context, observations continued as long as the instructional context remained intact, for a maximum of 15 minutes total duration. An observation was ended and erased if the academic content changed before at least 10 minutes of observation time. For example, if an observation was being conducted during a small-group reading lesson, the observation would end if the lesson switched to math; if that switch occurred prior to the 10-minute mark, the observation was erased and not used. Given that observations occurred during highly engaging instructional times, the literature supports observations of this length as sufficient to capture a valid snapshot of teacher practices and student behaviors (Sharpe & Koperwas, 2003; Sugai & Tindal, 1993; Tawney & Gast, 1984).
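The rule for keeping or discarding an observation can be sketched as a small decision function. This is an illustrative sketch of the rule just described; the function name and signature are ours, not the project's actual software:

```python
def keep_observation(elapsed_minutes: float, content_changed: bool) -> bool:
    """Apply the study's stopping rule: observations run up to 15 minutes,
    but if the academic content changes before the 10-minute mark the
    observation is discarded (erased)."""
    MIN_MINUTES = 10.0  # minimum duration for a usable observation
    if content_changed and elapsed_minutes < MIN_MINUTES:
        return False  # erased: content switched too early
    return True       # kept: reached 10+ minutes (or ran the full 15)

# A reading lesson that switched to math at 8 minutes is discarded;
# one that switched at 12 minutes is kept.
```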

No names were coded in association with either the teacher or the student in any classroom, as all observations were anonymous as a condition of the IRB. Thus, we were unable to look at individual teachers and cannot analyze the data by teacher. Except for the teachers who asked not to be included (2 total across 4 schools), all teachers were observed at least once in every school, with most observed 4-5 times. All observations were conducted using one student and the teacher as the dyad for coding. The rationale for observing a single student is that this provides a snapshot of typical interactions between the teacher and an individual student in the classroom.

In most classrooms, observers simply sat in the back of the classroom and picked a student at random from among the class. All observations were then conducted on that student and the classroom teacher as a dyad. In a subset of classrooms across two schools (one elementary and one high school) where students with or at risk for EBD had been previously identified, observers approached the teacher at the beginning of class and asked for identification (by seat or clothing color) of both a student with or at risk for EBD and another student deemed "successful" with no such issues.


Data Collection

The Multiple Option Observation System for Experimental Studies (MOOSES[TM]; Tapp, Wehby, & Ellis, 1992) observation and recording software was installed on HP[TM] iPAQ 100 handheld computers, and measurement occurred during 15-minute observation intervals at randomly selected times in academic-focused classrooms at each school. MOOSES[TM] allows users to program specific observation codes for both frequency and duration events. Observers (i.e., data collectors) sat in the back of the classroom tapping the touch screen of their handhelds to code specific behaviors. Prior to observation, basic demographic information regarding the level of school and student type (behavior or non-behavior in two schools) was recorded as part of the file name.

Observer training. The cadre of observers consisted of graduate students in education and community members who were recruited to code for a small hourly wage. All were trained by the authors using videotaped scenarios. Observers were required to demonstrate 90% reliability with video scenarios and then in classroom settings. The criterion for being allowed to collect data independently was two weeks of live classroom recording with a trainer at 90% or better reliability. Two project trainers conducted reliability measures during 20% of observations. In these cases the reliability observer and the regular observer synchronized the observation session start time and then collected data independently. Code file names were used to identify reliability files, and the MOOSES software performed the reliability calculations by code.

Reliability. The authors reviewed previous studies of this nature to develop the initial operational definitions for teacher and student behaviors. However, definitions were altered slightly during piloting procedures as deemed necessary when pilot reliability did not reach the required 80% minimum criterion. For example, initial teacher codes for "correction" and "negative feedback" were combined as negative feedback when correction could not be reliably coded. Similarly, "active teaching" and "passive teaching" were combined as "teaching" when neither could be reliably coded alone. The goal was to create definitions that captured the behavior of interest while still maintaining sufficient reliability.

Interobserver reliability data were recorded during 14% of observations (169 15-minute reliability observations). Both observer and reliability code files were entered into the database, and reliability was calculated by the MOOSES program. Reliability for duration codes produced the percentage of time during which both observers had selected the same duration code. Reliability for frequency codes was computed using a five-second window: a positive agreement required both observers to have recorded the same code within 5 seconds of one another. Reliability for frequency codes thus produced the percentage of occasions on which both observers recorded the same code within the 5-second window.
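As a rough illustration of the two agreement metrics, a simplified sketch might look like the following, assuming each observer's duration codes are recorded second-by-second and each frequency code's hits are recorded as timestamps. This is our illustration of the general approach, not the MOOSES implementation:

```python
def duration_agreement(a, b, total_seconds):
    """Percentage of the session during which two observers selected the
    same duration code. `a` and `b` map each second to the active code."""
    same = sum(1 for t in range(total_seconds) if a[t] == b[t])
    return 100.0 * same / total_seconds

def frequency_agreement(a_times, b_times, window=5):
    """Percentage of observer A's hits for one frequency code that are
    matched by a hit from observer B within `window` seconds; each B hit
    may be matched at most once."""
    unmatched = sorted(b_times)
    hits = 0
    for t in sorted(a_times):
        for i, u in enumerate(unmatched):
            if abs(t - u) <= window:
                hits += 1
                del unmatched[i]
                break
    return 100.0 * hits / len(a_times) if a_times else 100.0
```

For example, two observers who agree on "teaching" for 9 of 10 seconds score 90% duration agreement, and a frequency hit at second 10 matched by the reliability observer's hit at second 12 counts as an agreement under the 5-second window.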

Demographics and Context Code Definitions

Demographic descriptors were coded prior to beginning an observation. For purposes of this study, demographic codes identified school name, grade level, academic content (reading/language arts, math, science, social studies), and students with and without behavior problems. During observation, context codes were used to denote the focus of instruction (full class, small/sub group with peers, small/sub group with teacher, individual, and one-on-one).

Teacher Behavior Code Definitions

Teacher behaviors include both duration and frequency codes that are meant to capture specific instructional behaviors. Teaching and not teaching were mutually exclusive in that only one of these duration codes could be used at a given time. Provision of an opportunity to respond (group or individual), positive feedback, and negative feedback were all frequency codes that could only be used when the teacher was engaged in active instruction.
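One way to picture this coding scheme - mutually exclusive duration codes marking state changes, with frequency codes recorded as instants - is as a timestamped event log. The representation below is a hypothetical sketch, not the MOOSES file format:

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float        # seconds into the observation
    code: str          # e.g., "teaching", "not_teaching", "otr_group", "pos_fb"
    is_duration: bool  # duration codes change state; frequency codes are instants

def teaching_seconds(events, session_end):
    """Total seconds coded as 'teaching'. Each new duration event ends the
    previous state, reflecting the mutual exclusivity of duration codes."""
    total, state_start, state = 0.0, 0.0, None
    for e in sorted(events, key=lambda e: e.time):
        if e.is_duration:
            if state == "teaching":
                total += e.time - state_start
            state, state_start = e.code, e.time
    if state == "teaching":  # close out the final state at session end
        total += session_end - state_start
    return total
```

In a 6-minute session coded teaching (0 s), not teaching (120 s), teaching again (300 s), the function would credit 180 seconds of teaching.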

Teaching. The definition of teaching encompasses all things that teachers do in relation to instruction, coded as a duration event. The teacher is engaged in instruction by explaining a concept, demonstrating a principle, modeling a skill or activity, or going over behavioral expectations and providing behavioral performance feedback to the class (including the target student) or to the target student alone. The interaction or content must be academic and further the lesson/objective of the class. Examples included the teacher lecturing to the whole class, demonstrating how to perform a lab assignment to the whole class, and moving around the room actively supervising and encouraging students.

Not teaching. Not teaching was defined as those activities that did not meet the definition of teaching and was coded as a duration event. In this state, the teacher was not actively delivering instruction, was not academically engaging students, or was involved in an independent task with no interactions with students (i.e., no adult was engaged with students). Examples included the teacher working at his or her desk while students completed an independent activity and the teacher talking with a student about content not related to the lesson.

Providing opportunities to respond. An opportunity to respond (OTR) was defined as a teacher behavior that gave the target student an opportunity to respond to a question or request related to the academic content and was coded as a frequency event. This may have occurred either in the context of the entire group including the target student or with the student individually. The OTR could require verbal or gestural responses and was coded as an opportunity regardless of whether the student responded. All instances of OTR were required to be related to the academic or behavioral lesson, and observers waited until the teacher completed the request before coding it as an OTR. For example, the teacher may have asked the target student to demonstrate a problem by working it at the board, or the teacher may have asked the class (including the target student) to give thumbs up or thumbs down to indicate whether they agreed with a statement about a book they had read. Non-examples included instances in which the teacher was providing a correction ("Johnny, pick up your pencil off the floor.") or directions not related directly to the lesson content (e.g., "Tamara, take your notebook out.").

Delivering positive feedback. Positive feedback was defined as the teacher delivering feedback on an academic or social behavior that indicated the behavior/response to be correct and was coded as a frequency event. This may have occurred to the group including target student or individually to the target student alone. If the teacher was providing positive feedback in a sequence (makes several positive statements in a row, about the same behavior), the sequence was coded as one occurrence. New occurrences were coded when the teacher delivered feedback for a different behavior or when the instructional context changed (e.g., whole class is praised, then individual target student is praised). Feedback could be verbal (e.g., "Good work!") or nonverbal (e.g., teacher shows thumbs up to class).

Delivering negative feedback. Negative feedback was defined as the teacher delivering feedback on an academic or social behavior that indicated the behavior/response was incorrect and was coded as a frequency event. This included correction and may have occurred to the group including target student or individually to the target student alone. If the teacher was providing negative feedback in a sequence by making several negative statements in a row about the same behavior (e.g., "no" "stop that" "turn around" "quiet"), the sequence was coded as one occurrence. New occurrences were coded when the teacher delivered negative feedback for a different behavior or when instructional context changes (e.g., whole class is admonished, then individual target student is admonished). Feedback could be verbal (e.g., "That's wrong!") or nonverbal (e.g., teacher shows thumbs down to class).

Responding to student. This frequency code was used to indicate that the teacher had responded to a student's raised hand. This could mean that the teacher fully engaged the student or that the student was simply acknowledged--even if the question or request was not heard (e.g., "not right now").

Student Behavior Code Definitions

Student behaviors include both duration and frequency codes that are meant to capture specific student instructional behaviors. Active engagement, passive engagement, and off-task were mutually exclusive in that only one of these duration codes could be used at a given time. Student disruption, a frequency code, could only be used when the student was coded as off-task.

Active engagement. Active engagement was used as a duration code to note that a target student engaged with instructional content via choral response, raising-of-hand, responding to teacher instruction, writing, reading, or otherwise actively completing an assigned task (e.g., typing on computer, manipulating assigned materials). This code was indicated only when the student was doing something other than listening or observing.

Passive engagement. Passive student engagement was used as a duration code to indicate that the student was passively attending to instruction - oriented toward the teacher, a performing peer, or materials (i.e., tracking with eyes) - but was not required to do anything other than listen or observe. Examples included a student sitting quietly at a desk and facing the teacher who was instructing, and a student sitting quietly with a collaborative work group but not actively speaking, writing, or otherwise working on an activity.

Off-task. The student was coded as off-task with a duration code if he or she was not actively or passively engaged but was engaging in an activity incompatible with any assigned task. In such cases the student was neither actively engaged nor looking at the teacher or assigned work, and may or may not have been disrupting the class in some way. Examples included a target student being out of seat without permission but not bothering anyone else (had the student been bothering a peer, the coding would have included disruption as well), a target student looking away from the teacher and instructional materials and directing attention toward students playing outside the window, and a target student texting a friend.

Student disruption. Disruption was a frequency code meant to capture behaviors that are incompatible with learning. It was defined as the student displaying neither active nor passive engagement and displaying behavior that did or potentially could have disrupted the lesson (e.g., wandering out of seat, noises, bothering a peer). Disruption was coded per event, regardless of the length of the event. Thus, a student who wandered the room would be coded off-task with the duration code and receive a single code for a disruptive event. If the behavior returned to on-task for more than 5 seconds and then occurred again, another disruptive code was tallied.
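The tallying rule above (a new disruptive event only after more than 5 seconds of on-task behavior) can be illustrated with a small function; the interval representation is our assumption for illustration, not the coding software's:

```python
def count_disruptions(intervals):
    """Tally disruptive events from a sorted-able list of (start, end)
    disruption intervals, in seconds. Disruptions separated by 5 seconds
    or less of on-task behavior merge into one event; a gap of more than
    5 seconds starts a new tally."""
    if not intervals:
        return 0
    intervals = sorted(intervals)
    count = 1
    prev_end = intervals[0][1]
    for start, end in intervals[1:]:
        if start - prev_end > 5:  # returned to on-task for more than 5 s
            count += 1
        prev_end = max(prev_end, end)
    return count
```

For example, disruptions at 0-10 s and 12-20 s (a 2-second gap) merge into one event, while a further disruption at 40-50 s counts as a second event.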

Hand raised. This code was used to note any time that a student raised his or her hand as a means of eliciting teacher attention. If the student's hand was raised and he or she was not disrupting the classroom at the time, this frequency code was recorded.


Procedure

After schools agreed to participate, observers were assigned to classrooms for coding and reliability according to a weekly schedule. Observers entered the classroom, checked in with the teacher, identified a target student, and then sat in the back of the room, entering the demographic and contextual information. Observation time began once the teacher gave an initial class direction regarding expectations or tasks (e.g., "Get out your books." "OK, where'd we leave off yesterday?" "I expect you all to be working on your draft."). Observers continuously coded teacher and student behavior for 15 minutes, or until the academic content changed. Fewer than 5% of observations ended prior to the 15-minute interval. Upon completion of the observation interval, the observer saved the file and moved to the next classroom site. Once all observations for a given day were completed, observers uploaded the files from their handhelds onto their personal computers and e-mailed them to the third author, who entered them into the main project database for formatting and analysis.


Results

The following section presents a summary of the results of all observations. Results for teacher behaviors are presented first, followed by corresponding student behaviors, and then by comparisons of students with and without identified behavior problems. Finally, reliability data are presented by code.

Across the four schools, 1277 individual observations were completed, accounting for a total of 327.8 hours. Further, 581 comparative observations of students with EBD and comparison peers were conducted across one elementary and one high school.

Teacher Behaviors

During 327.8 hours of observation, teachers spent 203.6 hours (62.1% of that time) engaged in teaching and 124.2 hours (37.9% of that time) engaged in non-teaching. In addition, teachers provided a group opportunity to respond on average once every 2.04 minutes and an individual opportunity to respond to a target student once every 12.5 minutes. Teachers were observed to provide positive feedback to students on average once every 16.67 minutes and negative feedback on average once every 14.29 minutes.
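These summary figures are straightforward arithmetic on the observation totals: the percentages follow from the recorded durations, and each "once every X minutes" rate is total observed minutes divided by an event count. A quick check of that arithmetic (the group-OTR count below is our own back-calculation for illustration; the article reports only the rate):

```python
TOTAL_HOURS = 327.8
TEACHING_HOURS = 203.6
NON_TEACHING_HOURS = 124.2

teaching_pct = round(TEACHING_HOURS / TOTAL_HOURS * 100, 1)          # 62.1
non_teaching_pct = round(NON_TEACHING_HOURS / TOTAL_HOURS * 100, 1)  # 37.9

def minutes_per_event(total_minutes, event_count):
    """Average interval between events (e.g., opportunities to respond)."""
    return total_minutes / event_count

total_minutes = TOTAL_HOURS * 60  # 19,668 observed minutes
# Roughly 9,640 group OTRs over that span would reproduce the
# reported rate of one every ~2.04 minutes.
print(teaching_pct, non_teaching_pct)
print(round(minutes_per_event(total_minutes, 9640), 2))  # 2.04
```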

Instructional groupings in the classroom are presented in Figure 1. Whole group instruction, in which the teacher addressed the entire classroom, accounted for nearly half of all observation time (49%). Individual student work was the next most frequent at 27%, followed by small group led by the teacher at 14%, small group of peers at 7%, and one-on-one with the teacher, rarely seen, at 1%. During classroom time students were observed raising a hand to get teacher attention 2492 times, and teachers acknowledged the student 1429 times, or 57% of opportunities.

Student Behaviors

Coding for student disruption recorded 1256 instances, averaging one every 16.67 minutes. Further, as presented in Figure 2, during the average observation period students were observed to be actively engaged with the curriculum 39% of the time, passively engaged with the curriculum 42%, and off-task 13%, and 6% of the time was coded as "down time," indicating that no task or expectations were apparent.

Students with and without Behavior Problems

Comparing students who were and were not identified as having a behavior problem, teachers tended to provide fewer opportunities to respond, both to the group and individually, when a student with behavior problems was coded as the target. Further, Figure 3 shows that students with and without behavior problems received the same level of positive teacher feedback, averaging once every 50 minutes. However, students with behavior problems received negative feedback on average once every 10 minutes, while for students without behavior problems the rate was once every 16.67 minutes.




Reliability

Overall, duration codes had higher reliability than did frequency codes, but all codes averaged .96 with a range of .84 to .99. Reliability data are presented by code in Table 1.
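The article does not spell out the agreement formula behind these coefficients. A common convention for observational data of this kind is a smaller-over-larger index computed per code from each observer pair's totals; the sketch below assumes that convention and is not the authors' actual procedure.

```python
def agreement(observer_a, observer_b):
    """Smaller/larger agreement index for two observers' totals.

    Works for both frequency counts (events tallied) and duration
    totals (seconds coded in a state); returns a value in [0, 1].
    """
    if observer_a == observer_b:
        return 1.0  # also covers the 0/0 case
    return min(observer_a, observer_b) / max(observer_a, observer_b)

# Frequency code: one observer tallied 8 disruptions, the other 10.
print(agreement(8, 10))     # 0.8
# Duration code: 540 vs. 600 seconds coded as "teaching".
print(agreement(540, 600))  # 0.9
```

Duration totals tend to disagree proportionally less than low-base-rate event counts, which is consistent with the duration codes showing higher reliability here.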



Discussion

This study has several limitations that are described here. However, we stress that this is an initial foray into large-scale classroom assessment, representing the largest number of observations analyzing teacher and student interactions to date. Our intent has been to demonstrate the technology for large-scale observation as much as the results, and to highlight questions and outcomes that are worthy of further and more systematically designed research. Thus, the nature of this research has been largely exploratory, as we wished to take an admittedly rudimentary snapshot of teacher and student behavior in the classroom.

First and foremost, this study is limited by the fact that all observations occurred within the context of two elementary and two high schools. While the schools represented a high degree of ethnic and economic diversity, they were in no way selected systematically to represent all schools, and thus results cannot be generalized. Further, these schools were a convenience sample, in that they are schools with which the authors had a relationship. Second, the comparisons of students with and without identified behavior problems were not done in any systematic manner, with no matching of students by classroom, grade level, or other relevant variables. Again, these students were part of multiple projects that allowed overlapping data collection to increase the total number of observations. Finally, analyses are limited by the manner in which the data were collected, and thus more sophisticated analyses of moderating and mediating variables (e.g., lesson content, demographics, grade level) are not possible. However, the sample is large and provides support for much of what has been previously reported from much smaller studies. The results are meant to prompt conversation and stimulate questions for further study and cannot be generalized in any way.

Teacher Behavior

Given the reported strength of the empirical relationship between academic engagement and student achievement (see Wang, Haertel, & Walberg, 1993), we present the results in this context and discuss them in terms of implications for teaching and teacher training.

Had teaching been defined as "active teaching," requiring that the teacher be actively engaging students, the observed results for teaching would not be unexpected. But given that teaching was defined essentially as any teacher activity that involved interacting with, speaking to, or even passively observing students, the 37.9% of time coded as not teaching is alarming. More than a third of instructional time was spent with no teacher involvement of any kind. Clearly, students would be considered more likely to be actively engaged when teachers are teaching than when they are not; if not, the entire notion of having teachers must be rethought. The fact that students were observed to be actively engaged with instruction during 39% of instructional time is actually somewhat higher than might be expected given the low teaching numbers, and these data leave us to wonder what active student engagement might look like were teachers to dedicate even half of their non-teaching time to some sort of active instruction. In further consideration of the lack of teacher instruction, off-task rates of 13% also seem much lower than might have been expected.

Teachers provided opportunities for the target student to respond as part of the larger group about once every two minutes and as an individual about once every 12.5 minutes. Two points seem relevant to this result. First, because teachers were observed to teach during only 62.1% of the average instructional period, the numbers for opportunities to respond are deflated. That is, because an OTR can only be delivered when the teacher is teaching, opportunities to respond can be recalculated as once every 1.27 minutes for the group and once every 7.77 minutes for the target individual when the teacher is teaching. This seems to indicate that teachers do not necessarily need to work on increasing the frequency of OTR, as higher rates could be achieved simply by increasing the time spent teaching as opposed to not teaching. Second, although the data do not allow for a direct comparison of rooms with and without students identified as having behavioral problems, there is some evidence that teachers provide fewer opportunities to respond to both the group and the individual when a student with behavior problems is in the classroom. If true, this points to the existence of a coercive model in which teachers and students learn to control one another's behavior by refraining from interaction. Such cases might be described as both the teacher and student thinking, "If you don't bother me, I won't bother you."
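The recalculation above simply conditions each observed rate on the fraction of time teachers were actually teaching; multiplying the overall between-OTR interval by the teaching fraction yields the during-teaching interval:

```python
TEACHING_FRACTION = 203.6 / 327.8  # teaching occupied 62.1% of observed time

def interval_while_teaching(overall_interval_min, teaching_fraction=TEACHING_FRACTION):
    """Convert an interval measured over all observed time into the
    equivalent interval during teaching time only."""
    return overall_interval_min * teaching_fraction

print(round(interval_while_teaching(2.04), 2))   # group OTR: 1.27 min
print(round(interval_while_teaching(12.5), 2))   # individual OTR: ~7.76 min
```

The individual-OTR figure computes to about 7.76 minutes here versus the article's reported 7.77, a difference attributable to rounding at intermediate steps.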

Low rates of teacher feedback also raise concerns about the delivery of effective instruction. Positive feedback was delivered to the target student, as an individual or as part of a group, only about once every 16.67 minutes, and negative feedback only about once every 14.29 minutes. We must consider that such rates indicate either that students engage in negative behavior more frequently than positive or that teachers tend to ignore positive behavior more than negative. The fact that students were observed to engage in disruptive behavior only about once every 16.67 minutes (the same rate at which teachers provided positive feedback), and to be off-task only 13% of the time, would seem to support the latter. In addition, although teachers provided students with and without behavior problems the same level of positive feedback, students with behavior problems received three times more negative feedback. Further, teachers tended not to respond to students even when the student appropriately elicited attention with a raised hand; in such cases the teacher acknowledged the student's request in only 57% of opportunities. Under such conditions there would seem to be great incentive for students to act in a negative manner to garner teacher attention. But because rates of disruption and off-task behavior were generally low, we again must wonder what student engagement and behavior might look like were teachers to be more responsive to students and provide more frequent positive feedback.

The typical instructional grouping was also interesting, especially in light of the fact that a large number of the target students we observed were on IEPs for learning or behavior disorders. Overall, 76% of the average classroom instructional time was dedicated to teacher lecture to the group and individual work. The remaining time was split between small peer groups and teacher-led small group instruction, with only 1% dedicated to one-on-one instruction. While the data we have do not permit analysis of the number of classrooms with special education collaborating teachers, we find the 1% figure to be surprisingly low, as it would typically account for less than 30 seconds of the average class.

Future Research

The results presented here provide what we have described as an initial snapshot of teacher and student behaviors and interactions during instructional classroom settings. We have attempted to expand upon the several examples that have come before us by greatly increasing the number of observations and coding a larger range of both demographic and behavior variables. Our interests lie in how we might use this information to be more prescriptive in how we train teachers to implement instructional strategies. What we have discovered is that, from our observations, much of the time allocated to instruction is wasted. That is, we believe that teachers often squander opportunities to provide engagement and feedback that we know are so crucial to student learning and achievement. But in light of the limitations of this study, there is much to be followed up on to move forward with this line of research.

First, future research must scale up to expand the number, type, setting, age level, and performance level of schools in which observations occur. As the database continues to grow we will be able to be much more confident in the validity of observed results and in our ability to generalize them. Second, research must also scale down to focus on individual classroom settings and the experimental manipulation of variables such as OTR and feedback. Such thoughtful experiments will help to more precisely define the nature and number of teacher behaviors necessary to maximize student success. Finally, research must come to terms with the vast variety of demographic variables that cut across schools, teachers, and students. More sophisticated analyses, including hierarchical linear modeling, may prove useful in sorting out the relative and variable implications these variables may have in determining the most effective manner of delivering instruction.


References

Apter, B., Arnold, C., & Swinson, J. (2010). A mass observation study of student and teacher behaviour in British primary classrooms. Educational Psychology in Practice, 26(2), 151-171.

Brophy, J. (2006). History of research on classroom management. In C. M. Evertson & C. S. Weinstein (Eds.), Handbook of classroom management: Research, practice, and contemporary issues (pp. 17-43). Mahwah, NJ: Erlbaum.

Brophy, J. E., & Good, T. L. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328-377). New York, NY: Macmillan.

Carr, E. G., Taylor, J. C., & Robinson, S. (1991). The effects of severe behavior problems in children on the teaching behavior of adults. Journal of Applied Behavior Analysis, 24, 523-535.

Council for Exceptional Children (CEC) (1987). Academy for effective instruction: Working with mildly handicapped students. Reston, VA.

Gable, R. A., Hester, P. M., Rock, M. L., & Hughes (2009). Back to basics: Rules, praise, ignoring, and reprimands revisited. Intervention in School and Clinic, 44, 195-205.

Gleason, M., Carnine, D., & Boriero, D. (1990). Improving CAI effectiveness with attention to instructional design in teaching story problems to mildly handicapped students. Journal of Special Education Technology, 10(3),129-136.

Good, T. L. (1984). Teacher effects. In Making our schools more effective: Proceedings of three state conferences. Columbia, MO: University of Missouri.

Greenwood, C. R., Horton, B. T., & Utley, C. A. (2002). Academic engagement: Current perspectives on research and practice. School Psychology Review, 31(3), 328-349.

Gunter, P. L., Jack, S. L., Shores, R. E., Carrell, D. E., & Flowers, J. (1993). Lag sequential analysis as a tool for functional analysis of student disruptive behavior in classrooms. Journal of Emotional and Behavioral Disorders, 1, 138-148.

Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

Hattie, J. A. C., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

Haydon, T., Mancil, G. R., & Van Loan, C. (2009). Using opportunities to respond in a general education classroom: A case study. Education and Treatment of Children, 32(2), 267-278.

Hayling, C. C., Cook, C., Gresham, F. M., State, T., & Kern, L. (2007). An analysis of the status and stability of the behaviors of students with emotional and behavioral difficulties. Journal of Behavioral Education, 17, 24-42.

Kavale, K. A., & Mostert, M. P. (2004). Social skills interventions for individuals with learning disabilities. Learning Disability Quarterly, 27(1), 31-43.

Kauffman, J. M., & Landrum, T. J. (2009) Characteristics of emotional and behavioral disorders of children and youth (9th ed.). Upper Saddle River, NJ: Prentice Hall.

Kern, L., & Clemens, N. H. (2007). Antecedent strategies to promote appropriate classroom behavior. Psychology in the Schools, 44(1), 65-75.

Lane, K., Carter, W., Pierson, M., & Glaeser, B. (2006). Academic, social, and behavioral characteristics of high school students with emotional disturbances or learning disabilities. Journal of Emotional & Behavioral Disorders, 14(2), 108-117.

Nelson, J. R., Benner, G. J., Lane, K. L., & Smith, B. W. (2004). Academic achievement of K-12 students with emotional and behavioral disorders. Exceptional Children, 71, 59-73.

Partin, T. C. M., Robertson, R. E., Maggin, D. M., Oliver, R. M., & Wehby, J. H. (2010). Using teacher praise and opportunities to respond to promote appropriate student behavior. Preventing School Failure, 54(3), 172-178.

Reid, R., Gonzalez, J. E., Nordness, P. D., Trout, A., & Epstein, M. H. (2004). A meta-analysis of the academic status of students with emotional/behavioral disturbance. The Journal of Special Education, 38(3), 130-143.

Roberson, L., Woolsey, M. L., Seabrooks, J., & Williams, G. (2004). An ecobehavioral assessment of the teaching behaviors of teacher candidates during their special education internship experiences. Teacher Education and Special Education, 27, 264-275.

Rosenshine, B. (1976). Recent research on teaching behaviors and student achievement. Journal of Teacher Education, 27(1), 61-64.

Rosenshine, B. (1979). Content, time and direct instruction. In P. L. Peterson & H. J. Walberg (Eds.), Research on teaching: Concepts, findings and implications. Berkeley, CA: McCutchan.

Sharpe, T., & Koperwas, J. (2003). Behavior and sequential analysis: Principles and practice. Thousand Oaks, CA: Sage.

Shores, R. E., Jack, S. L., Gunter, P. L., Ellis, D. N., DeBriere, T., & Wehby, J. (1993). Classroom interactions of children with severe behavior disorders. Journal of Emotional and Behavioral Disorders, 1, 27-39.


Stichter, J. P., Lewis, T. J., & Whittaker, T. (2009). Assessing teacher use of opportunities to respond and effective classroom management strategies: Comparisons among high- and low-risk elementary schools. Journal of Positive Behavior Interventions, 11(2), 68-81.

Sugai, G. & Tindal, G. (1993). Effective school consultation: An interactive approach. Pacific Grove, CA: Brooks/Cole.

Sutherland, K. S. (2000). Promoting positive interactions between teachers and students with emotional/behavioral disorders. Preventing School Failure, 44(3), 110-115.

Sutherland, K. S., Adler, N., & Gunter, P. L. (2003). The effects of varying rates of opportunities to respond to academic requests on the classroom behavior of students with EBD. Journal of Emotional and Behavioral Disorders, 11, 239-248.

Sutherland, K. S., Lewis-Palmer, T., Stichter, J. & Morgan, P. L. (2008). Examining the influence of teacher behavior and classroom context on the behavioral and academic outcomes for students with emotional or behavioral disorders. Journal of Special Education, 41(4), 223-233.

Sutherland, K. S., & Oswald, D. P. (2005). The relationship between teacher and student behavior in classrooms for students with emotional and behavioral disorders: Transactional processes. Journal of Child and Family Studies, 14(1), 1-14.

Sutherland, K. S., & Singh, N. N. (2004). Learned helplessness and students with emotional or behavioral disorders: Deprivation in the classroom. Behavioral Disorders 29(2), 169-181.

Sutherland, K. S., & Wehby, J. H. (2001). The effect of self-evaluation on teaching behavior in classrooms for students with emotional and behavioral disorders. Journal of Special Education, 35(3), 161-171.

Sutherland, K. S., Wehby, J. H., & Yoder, P. J. (2002). Examination of the relationship between teacher praise and opportunities for students with EBD to respond to academic requests. Journal of Emotional and Behavioral Disorders, 10, 5-13.

Tapp, J., Wehby, J., & Ellis, D. (1992). The multiple option observation system for experimental studies (MOOSES). Nashville, TN: Tapp and Associates.

Tawney, J. W., & Gast, D. L. (1984). Single subject research in special education. Columbus, OH: Merrill.

Wallace, T., Anderson, A. R., Bartholomay, T., & Hupp, S. (2002). An ecobehavioral examination of high school classrooms that include students with disabilities. Exceptional Children, 68(3), 345-359.

U.S. Department of Education, Office of Special Education and Rehabilitative Services, Office of Special Education Programs. (2009). 28th annual report to Congress on the implementation of the Individuals with Disabilities Education Act, 2006 (Vol. 1). Washington, DC: Author.

Wagner, M., Kutash, K., Duchnowski, A.J., Epstein, M.H., & Sumi, W.C. (2005). The children and youth we serve: A national picture of the characteristics of students with emotional disturbances receiving special education. Journal of Emotional and Behavioral Disorders, 13(2), 79-96.


Wang, M. C., Haertel, G. D., & Walberg, H. J. (1993). What helps students learn? Educational Leadership, 51(4), 74-79.

Wehby, J. H., Falk, K. B., Barton-Arwood, S., Lane, K. L., & Cooley, C. (2003). The impact of comprehensive reading instruction on the academic and social behavior of students with emotional and behavioral disorders. Journal of Emotional and Behavioral Disorders, 11(4), 225-238.

Weinstein, R. S. (2002). Reaching higher: The power of expectations in schooling. Cambridge, MA: Harvard University Press.

Yoder, P. & Symons, F. (2010). Observational measurement of behavior. New York: Springer.

Zabel, R. H., & Zabel, M. K. (2002). Burnout among special education teachers and perceptions of support. Journal of Special Education Leadership, 15(2), 67-73.

Correspondence to Terrance M. Scott, College of Education and Human Development, University of Louisville, Louisville, KY 40292; e-mail:

Terrance M. Scott

University of Louisville

Peter J. Alter

St. Mary's College of California

Regina G. Hirn

University of Louisville
Figure 2. Percent of classroom time in which students exhibited
different levels of engagement across all observations.

                                  Percent of Time Observed
Down Time in the Classroom          6
Target Student Off-Task            13
Target Student Passively Engaged   42
Target Student Actively Engaged    39

Note: Table made from bar graph.

Table 1
Reliability by Code

Duration Codes              Frequency Codes
Teaching             .98    OTR Group            .88
Not Teaching         .95    OTR Individual       .84
Whole Group          .98    Student Hand Raise   .90
Small Group Teacher  .99    Teacher Acknowledge  .87
Small Group Peers    .94    Positive Feedback    .85
Independent Work     .96    Negative Feedback    .88
One-on-One           .87    Disruption           .86
Downtime             .90
Off Task             .90
Passive              .95
Active               .95

Overall by Code Type .96                         .89