Lessons learned about the utility of social validity.
Phillip S. Strain, Erin E. Barton, and Glen Dunlap

Education & Treatment of Children, Vol. 35, No. 2 (May 2012). West Virginia University Press. ISSN 0748-8491.


In this paper, we examine the link between evidence-based practice and social validity by describing five examples from our own research where social validity measures resulted in data that were essential to both a clinical and a research agenda on evidence-based practice. Social validity data are reviewed in the context of behavioral skill training for family members of children with autism, the implementation of a manualized approach for treating severe problem behaviors, an intervention to increase play behaviors in young children with special needs, a home coaching intervention designed to reduce child challenging behaviors, and a large-scale longitudinal study of early school failure. The examples illustrate the essential function of social validation and highlight implications for research and the future development of evidence-based practice.

Beginning with Wolf (1978), the concept of social validity has entailed three components: (a) consumers' selection of intervention targets, (b) consumers' compatibility with intervention tactics, and (c) consumers' evaluation of intervention impact. Since Wolf's groundbreaking treatise there have been many examples in the behavioral literature where investigators have employed parts, and more rarely all, of these components of social validity in specific intervention studies. While the theoretical and ethical merits of social validity have been, and continue to be, argued, it is less clear from published research how the inclusion of a social validity perspective leads to empirical knowledge that would otherwise be unavailable. In this paper we describe some of our own attempts to utilize the precepts of social validity and, more importantly, how this consumer-oriented assessment perspective revealed unique information that on occasion was at variance with conventional wisdom. The remainder of this paper is organized around five examples of social validity in action. These examples delineate the fundamental relation between social validity and evidence-based practice.

Example 1: Comprehensive Consumer Input and Behavioral Training for Families with Children with Autism

Since 1981, Strain and colleagues have operated a comprehensive early intervention program (LEAP Preschool) for young children with autism (Strain & Bovey, 2008). In addition to an inclusive preschool component, the LEAP model involves systematic behavioral skill training for adult family members. Utilizing each of Wolf's (1978) dimensions of social validity, consumers individually select where skill training occurs and what the indicators of child progress should be. They likewise determine their own comfort level and competence with selected intervention strategies, and thereby determine the length of support they receive in specific routines. How do we go about collecting these social validity data?

Related to consumer selection of where training occurs and what the indicators of child progress should be, we utilize a simple interview format whereby adult family members identify the five or six recurring child-rearing routines that cause them the most parenting stress. While there is obvious and expected variability across families, accumulated data across 30 years suggest that most families find mornings (dressing, feeding, getting off to school), outings (getting in the car, going to a novel place), bedtime (getting undressed/dressed, tooth brushing, bathing), and visitors in the home to be particularly stressful routines. Having these data early on in the evolution of the LEAP model was crucial to staffing the project such that personnel were actually available across a wide hour span to meet families' need for support in the early morning and in the evening.

Social validity assessment specific to consumers' selection of valued outcome indices within identified routines generated unexpected and quite consistent data across families. Our preconceived notion was that families would be most interested in their children displaying specific developmental skills associated with routines, such as independently dressing self, feeding, and tooth brushing. Relatedly, we also expected that families would be focused on communication skills such as expressing one's needs, wants, and discomforts. Contrary to our a priori judgments, families were most interested in completing routines in a timely fashion. And, as it turns out, in most cases families were spending lengthy amounts of time completing routines prior to skill training. From a clinical research standpoint we had perfectly acceptable baselines, as measured by time-to-completion, and consumers had an unambiguous, easily measured, and valued outcome index by which to judge the efficacy of their participation in skill training.

Consumers' ratings of their comfort level and sense of confidence in implementing strategies across their selected routines were measured repeatedly using simple 5-point rating scales. Figure 1 depicts the Jones family's progress in one routine (i.e., bedtime) along with social validity data on comfort and sense of confidence with intervention strategies.


As Figure 1 shows, the Jones family made relatively fast progress in reducing the amount of time to complete the bedtime routine with their daughter. However, it is clear that despite reaching their target time frame (20 min) for completing the bedtime routine in less than two weeks, the adult implementer (Dad in this case) never rated himself above "Somewhat Confident" in implementing the bedtime routine. This general data path was replicated across 50 families; namely, families achieved behavior change on indices of their choice long before they reported feeling comfortable with the strategies they were implementing to achieve that behavior change. As a result of having these social validity data, we elected to base our continued support to families on their "comfortability" ratings rather than on child behavior change alone. In implementing this social validity strategy with families, we found that, on average, families needed to complete skill training on only 2 to 3 routines before they generalized their skills and confidence to other routines (Cordisco & Strain, 1986). On average, it took families three months to complete skill training on initial routines.

Example 2: Comprehensive Consumer Input in a School-based Model of Behavior Support

Prevent-Teach-Reinforce (PTR) is a model designed to guide school-based teams in developing and implementing behavior support plans for students with serious problem behaviors in general and special education settings. The model offers a step-by-step, standardized process that teachers and other team members use to create and carry out individualized interventions (Dunlap et al., 2010). The model was developed, refined, and field-tested in school districts in Florida and Colorado, and data from the model's initial evaluation were published in 2009 (Iovannone et al., 2009). PTR is essentially a process for implementing research-based assessment and intervention strategies aligned with the approach of positive behavior support (Koegel, Koegel, & Dunlap, 1996; Sailor, Dunlap, Sugai, & Horner, 2009). However, a distinguishing characteristic of the PTR model is that it incorporates all of Wolf's (1978) dimensions of social validity.

Consumer input has been essential in the development of the PTR model, and it plays a vital role every time the model is implemented. Consumers in the PTR model are defined as the school-based teams, and especially the teachers, who are responsible for implementing the procedures and who are deeply invested in the behavioral outcomes. From the beginning, consumers provided detailed feedback regarding the steps of the PTR process and the manner in which the steps were described in early versions of the field manual. As an example, recommendations from teachers who participated in early field tests of the PTR model led to important modifications in the manner in which functional assessment data are obtained from school-based team members. The original version of the PTR assessment questionnaires included a series of items that required open-ended written responses. Input indicated that this form of responding required more time and effort than many teachers were willing to expend. Therefore, we modified the form of responding on the assessment tools to closed-option boxes, which greatly reduced the time required by individual team members to complete the assessment process. This type of consumer feedback was instrumental in shaping a manual that school professionals have regarded as clear, efficient, and easy to use (Iovannone et al., 2009), and which undoubtedly increased the extent to which teams proceed through the PTR assessment as intended (Strain, Wilson, & Dunlap, 2011).

Social validation of goals and procedures is built into the PTR model by having teams make the important decisions regarding specific objectives for students' behavior change, and by having teams select the specific intervention tactics that will comprise the behavior intervention plans. The PTR model begins with establishing a school-based team (Step 1) and proceeds to Step 2, in which the team works to develop the specific behavioral, social, and academic goals that are the desired outcomes of the behavior support plan. The fact that the students' teachers and other team members collaborate to specify the goals implies strongly that the goals are valued by the members of the team and that there is likely to be a greater commitment to achieving them than if they were imposed by outside authorities. The behavioral objectives then become the focus of the PTR assessment process (Step 3), during which team members share their knowledge and observations of the students' behavioral patterns.

Step 4 of the model involves the team members using assessment information to construct a behavior intervention plan for the focus student. The plan is composed of at least one intervention strategy from each of the three main procedural categories: Prevent (antecedent manipulations), Teach (instructional strategies), and Reinforce (adjustments of reinforcing consequences). A critical aspect of this step is that it is the team members (primarily the student's principal teacher) who determine which interventions will be included on the behavior intervention plan. The PTR manual (Dunlap et al., 2010) provides menus of intervention options, and the team selects those that they believe will be: (a) effective, (b) logically derived from the assessment information, and, importantly, (c) congruent with the preferences, values, and routines of those who will be chiefly responsible for their implementation. The fact that it is the intervention agents themselves who make the selections suggests that the plan is likely to have a high degree of "contextual fit" and, thereby, that the plan will be implemented with a high level of fidelity (Albin, Lucyshyn, Horner, & Flannery, 1996).

These assumptions, related to contextual fit and consumer involvement in the PTR model, were examined during the initial, randomized controlled evaluation of the PTR model (Iovannone et al., 2009). This controlled evaluation revealed that students who participated in the PTR process showed statistically significant improvements, when compared with students who received "services as usual," in levels of problem behavior, social skills, and academic engagement. Data were also obtained on teachers' perceptions of the PTR model and on the fidelity with which the procedures were implemented. The 124 teachers who had implemented the model completed a slightly modified version of the Treatment Acceptability Rating Form (TARF; Reimers & Wacker, 1988). The TARF uses a 5-point Likert scale (with "5" indicating the most positive rating) for 15 items related to teachers' attitudes regarding the feasibility and efficacy of the indicated intervention -- in this case, PTR. Overall, the teachers' ratings were very positive, averaging a score of 4.16 across the 15 items. One question, "How willing are you to carry out this (PTR) behavior plan?" produced an average rating of 4.80. Another item, "How well will carrying out the plan fit into the existing routine?" resulted in an average rating of 4.31. The point is that these teachers who had some experience implementing the PTR model described the process and the specific intervention strategies as acceptable and effective, and they indicated a willingness to use the model in the future. Relatedly, it is important to note that the implemented PTR plans, in almost all cases, involved teachers making a radical departure from their prior attempts to address problem behaviors. 
Finally, it is important to note that direct measures of the fidelity with which the PTR plans were implemented showed that more than 80% of the participating teachers implemented the behavior intervention plans accurately and with high quality (Iovannone et al., 2009). We attribute these encouraging data to the ongoing process of soliciting consumer input and incorporating that input to assure the social validity of the overall model.

Example 3: Consumer Satisfaction with Intervention Targets and Procedures in a Play Intervention

Play is a functional, generative goal for many children with special needs for at least three reasons. First, play promotes successful inclusion into classroom and community settings for young children with special needs. In fact, teaching children to play with materials and peers increases the likelihood of learning in inclusive settings (Buysse, Wesley, Keyes, & Bailey, 1996; Morrison, Sainato, Benchaaban, & Endo, 2002). Second, play provides a context for embedding evidence-based practices, teaching positive social and communicative interactions with peers, siblings, or adults, and assessing the child's strengths and needs. For example, play provides natural, contextually relevant opportunities for embedding child-focused instructional strategies targeting individualized objectives (Pretti-Frontczak & Bricker, 2004; Sandall & Schwartz, 2008). Third, play has a practical value (Garfinkle, 2004). Play is flexible; children play across settings, materials, and people. In fact, independent play (i.e., when children are engaged in meaningful behaviors with their environment, siblings, or peers) might provide caregivers with more opportunities to interact with their children and more free time.

The research on play has advanced such that there is a set of evidence-based practices for increasing the play repertoires of young children with special needs (e.g., Barton & Wolery, 2008). However, few play studies have examined the social validity of the goals, intervention, or outcomes in inclusive classrooms. This type of subjective evaluation by consumers is particularly important for informing the play literature given that the rationale for teaching play includes promoting inclusion, providing a context for embedded instruction, and freeing up time for caregivers, all of which are directly related to consumer satisfaction.

In a recent examination of a pretend play intervention, titled Project PLAY, we taught preschool teachers to implement an intervention package (i.e., a system of least prompts, contingent imitation, descriptive praise) designed to increase the pretend play skills of young children with special needs. The purpose of this study was to teach generalized play behaviors to young children with developmental delays and to develop an acceptable intervention package for preschool teachers in inclusive classrooms. The study took place across three preschool classrooms within an inclusive early childhood program that served children with special needs and children with typical development from the local community. The program director nominated teachers for the study, and these teachers then identified children who had play goals on their Individualized Education Programs (IEPs).

We designed the intervention package so that the classroom teachers systematically taught increasingly complex play behaviors and embedded language into the child's play. We employed a multiple probe design across behaviors, replicated across children, to examine the relation between the intervention package and the play behaviors of the target children. The intervention was functionally related to increases in the play and language behaviors of all children, and teachers implemented the intervention with fidelity (Barton, 2010). These results, albeit important for establishing experimental control, did not completely address the two purposes of the study (i.e., teach generalized play and develop an acceptable intervention package). Thus, we created and administered subjective questionnaires to participating teachers and to caregivers of participating children to measure the acceptability of both the study outcomes (i.e., is the child demonstrating more play behaviors outside of the intervention sessions?) and the intervention procedures, thus measuring two (i.e., procedures and outcomes) of Wolf's (1978) dimensions of social validity. The goals of the intervention were assumed to be important and relevant for the participating children because the inclusion criteria specified that the children had to have play-related goals on their IEPs. Results indicated the intervention was functionally related to increases in generalized play behaviors. In fact, both teachers and parents indicated the intervention was effective for increasing child play and language behaviors outside of the intervention sessions, within the classroom and at home, respectively. This is particularly important to note given that the intervention did not include a home component. In fact, the parents had little knowledge of the intervention procedures or outcomes, other than signing an initial consent form.
This measure of social validity suggested the intervention provided contextually relevant opportunities for targeting play and related language behaviors across settings (see Tables 1 & 2). However, teachers and parents were less likely to associate the intervention with increases in social interactions across settings. Thus, although play provides contextually relevant opportunities for social interactions, systematic, embedded instruction targeting social skills likely is necessary to impact social interactions.

We also were interested in the acceptability and feasibility of the intervention for teachers in inclusive classrooms. The five participating teachers had no prior experience with a system of least prompts or systematic teaching of play. In fact, during the initial training session the teachers expressed concerns that the system of least prompts would be too difficult to implement in the classroom as part of the daily routine. However, soon after the initial teacher training session (i.e., a one-hour training with lecture, videos, role play, and performance-based feedback), we observed the teachers using components of the intervention package (e.g., contingent imitation, a system of least prompts) outside of the scheduled research sessions with other children. Although we lacked the quantitative data to substantiate this generalized use of the system of least prompts, we included items on the teachers' social validity questionnaire to assess acceptability and use of the intervention package at the completion of the study. Despite their (and our) initial concerns, all five teachers indicated the intervention was easy to implement (see Table 2). Additionally, all five teachers indicated they used the intervention package outside of the intervention sessions and planned to continue using the strategies after the completion of Project PLAY.

In sum, both parents and teachers were largely satisfied with both the child outcomes and the intervention. Although we were confident the intervention was related to increases in play behaviors within the intervention sessions across toys, we were less confident and even skeptical about the generalized play behaviors of children at home. Furthermore, we had preconceived notions about the acceptability of the intervention package given the teachers' initial discomfort and concerns with the intervention procedures. This measurement of social validity provided us with information about perceived child outcomes across settings, acceptability of the procedures, and the teachers' generalized use of the intervention strategies. These findings have important implications for future research and application of this intervention approach (e.g., systematic instruction targeting play). Specifically, we intend to extend this research to systematically examine the generalized play behaviors of children at home and teachers' generalized and maintained use of embedded instruction targeting play.

Example 4: Consumer Satisfaction with Intervention Targets and Procedures in a Caregiver Training Intervention

Over the past several years, the Incredible Years (IY) caregiver-training program has emerged as an empirically supported, evidence-based curriculum for caregivers of children with challenging behaviors, including children at risk for or with disabilities (Joseph & Strain, 2003; Webster-Stratton, 2008). This body of research, albeit extensive, has primarily examined decreases in negative parenting practices rather than increases in positive parenting practices (e.g., Phaneuf & McIntyre, 2007). We were interested in addressing this gap in the literature by adding a home coaching component to the IY group training for caregivers of young children with developmental delays and examining its effects on positive parenting practices.

We employed a multiple baseline design (across parenting behaviors) to examine the relation between IY group training plus home coaching and changes in positive parenting practices (see Lissman, Barton, & Morgan, 2010, for a detailed description of this study). We recruited two caregivers from an IY group training series led by a behavioral specialist from an early intervention agency in an urban community in the Pacific Northwest. We selected four positive parenting behaviors based on the key components of the group training (i.e., responsive play, descriptive praise, limit setting, and appropriate responses to challenging behaviors). Thus, the goals of the coaching were directly aligned with the group training. We staggered introduction of the home coaching across parenting behaviors (within each caregiver) and measured the use of the positive parenting practices immediately prior to each coaching session.

The parents demonstrated variable and moderate increases in positive parenting behaviors with the home coaching. These results suggested that more intensive interventions (e.g., over more time or with more support) might be required to elicit robust, observable changes in positive parenting practices. Despite this limited behavioral change, caregivers reported perceived benefits from the intervention on social validity measures. We measured the social validity of the intervention procedures and outcomes with a caregiver satisfaction questionnaire created for the study, the Social Emotional Assessment Evaluation Measure (SEAM; Squires & Bricker, 2007), and the Ages and Stages Questionnaire: Social Emotional (ASQ:SE; Squires, Bricker, & Twombly, 2002). The satisfaction questionnaire consisted of 7 items related to the intervention procedures and outcomes of the study. The SEAM is a curriculum-based assessment focusing on social emotional development. This assessment tool asked caregivers to rate their child's social emotional competency across 10 benchmarks and identify any specific concerns about their child's social emotional competence. The ASQ:SE is a caregiver-completed screening tool targeting social emotional behaviors. Caregivers completed the SEAM and the ASQ:SE at pre- and post-intervention and the satisfaction questionnaire at post-intervention.

Caregivers reported being highly satisfied with the intervention procedures and outcomes. Caregivers reported a notable reduction in concerns about their child's social emotional competence and challenging behaviors on both the SEAM and the ASQ:SE (see Lissman et al., 2010). Thus, despite limited behavioral change on their part, parents reported fewer concerns about challenging behaviors (as reported on the ASQ:SE) and high social emotional competence scores (as reported on the SEAM). These results were inconsistent with our preconceived notions (i.e., that caregiver satisfaction would be related to increases in their own positive parenting practices). This measurement of social validity revealed information about consumer satisfaction that would be indiscernible with a visual analysis of the target behaviors, and it challenged assumptions related to consumer satisfaction with the intervention procedures and outcomes. This has important implications given the high rate of attrition in home-based caregiver training programs (Lees & Ronan, 2008; Reid, Webster-Stratton, & Baydar, 2004). Parents who are satisfied with the intervention and outcomes might be less likely to drop out (McIntyre, 2008).

Example 5: Consumer Satisfaction with Intervention Targets in the Context of Research on School Failure

In the early 1980s, Strain and colleagues launched a longitudinal study of 200 children in grades K-2 who were, because of multiple risk factors, at risk for school failure (Strain, Lambert, Kerr, Stagg, & Lenkner, 1983). Children were assessed repeatedly on a comprehensive battery of cognitive, academic, social, and problem behavior indices. For the most part, we found significant stability in children's performance across years (McConnell et al., 1984), suggesting that early failure in school was highly predictive of subsequent failure across grades. Within this general group trend we noted, however, that upwards of 25% of the study participants showed enormously variable behavior across years, even on cognitive measures.

This naturalistic study led eventually to the consideration of a child intervention program to address this general trend of repeated failure across grades. Borrowing directly from social validity precepts, we began the intervention development phase by asking teachers across the early elementary grades what behaviors children needed to display in order to be successful in their classrooms. Our notion was that this information would be used to select intervention targets (presumably with high social validity). With notable exceptions, most teachers in this study described a "successful" child as one who would be 12 to 24 months more developmentally advanced than children of the age range normally enrolled at a specific grade level. So, kindergarten teachers, in the majority of cases, described the "successful" student as one with the behavioral competence of a 6 or 7 year old, not of a 5 year old, and so forth.

These social validity data were not what we expected, and they led to a complete re-conceptualization of our intervention approach. First, since not all teachers responded with a "developmentally-inappropriate" profile, we thought that it would be of empirical interest to directly compare children's behavior as they were exposed, in a natural experiment, to teachers with very contrasting views of success criteria. Specifically, we continued to track children at risk across years and noted whether their teachers were or were not defining success within a reasonable developmental envelope. We also added a more continuous measure of behavior (percent of engaged time) to the protocol to increase our behavior sampling within years and to employ a metric shown to be a good proxy for skill acquisition (McWilliam & Bailey, 1995). We also added a direct observational measure of teacher behavior to the protocol; specifically, we assessed teachers' delivery of positive verbal feedback to students when they were actively academically engaged (e.g., reading an assigned text, completing a worksheet, using a counting line to complete math problems).

Figures 2, 3, and 4 show three of the possible natural experiment sequences as children at risk moved from class to class with teachers who differed widely on their definitions of success. The dependent variable on these graphs is the mean percent of time children were actively engaged across three full-day samples of behavior distributed evenly across each school year. The corollary, suggested independent variable is the probability of the child receiving positive feedback given an episode of engagement.



Figures 2-4 suggest that the overall trend toward repeated failure across grades is not necessarily predictive of individual children's performance. Relatedly, the data in Figures 2-4 indicate that multiple years of relative success or failure are not predictive of later success or failure. These data also reflect the fundamental utility of our social validity assessment of teachers' definitions of success criteria. Finally, these data show that different levels of expectations for children are highly correlated with teachers' use of positive verbal feedback for children's academic engagement.

Considering our social validity assessment data with the follow along examination of students' and teachers' directly observed behaviors, we fundamentally altered our initial plans for children-focused intervention. Put simply, the social validity data changed our perspective on the nature of the "problem" from one of student deficits to teacher attitudes and associated instructional behaviors. The intervention that followed began with instruction specific to "developmentally appropriate" expectations and progressed to class-wide instructional practices aimed at increasing teachers' use of positive feedback contingent on student engagement.



In this paper we have briefly summarized five examples from our own work in which social validity measurement has played a pivotal role in evidence-based practice. Considered collectively, these examples show how attention to social validity can: (a) influence the design of service delivery systems such that help is available when and where consumers most need it, (b) help determine when to make critical clinical decisions around scaling back the level of support to clients, (c) totally change one's perceptions about the nature of an intervention approach and the target of said intervention, (d) lead to intervention agents being willing to implement complex and radically different strategies to alter problem behaviors and teach new skills, (e) reveal important and unanticipated effects of intervention, and (f) guide future research efforts.

Taken together, these social validity examples suggest three critical implications for future research and clinical practice. First, these examples demonstrate that choosing an intervention with prior data on effectiveness does not ensure effectiveness in the present, individual case. Simply put, no psycho-educational treatment is known to be universally effective, and no psycho-educational treatment is known to be universally acceptable either. Ensuring the effectiveness and acceptability of practices and outcomes requires data collection in each individual case. Second, these examples show that the measurement of social validity is, in many cases, absolutely essential to achieving good outcomes for clients. All too often social validity assessment has been treated as a luxury item, an add-on in the manner that "park-assist" is an add-on to an automobile. We would argue that social validity assessment is more akin to a steering wheel. Third, several of these examples suggest that there is a positive correlation between practitioners "liking" an intervention (i.e., finding it acceptable and doable) and implementing that intervention with fidelity. Significantly, emerging data indicate that behavioral treatments may yield little or no benefit unless they are implemented with an extraordinarily high level of fidelity (Strain & Bovey, 2011). The precise effect associated with the "liking - implement with fidelity" correlation is unknown, but we suggest that this is an essential question for future research.


Albin, R. W., Lucyshyn, J. M., Horner, R. H., & Flannery, K. B. (1996). Contextual fit for behavior support plans: A model for "goodness of fit." In L. Koegel, R. Koegel, & G. Dunlap (Eds.), Positive behavioral support: Including people with difficult behavior in the community (pp. 81-98). Baltimore, MD: Paul H. Brookes.

Barton, E. E. (2010). Training preschool teachers to increase the pretend play and related verbal behaviors of young children with special needs. Manuscript in preparation.

Barton, E. E., & Wolery, M. (2008). Teaching pretend play to children with disabilities: A review of the literature. Topics in Early Childhood Special Education, 28, 109-125.

Buysse, V., Wesley, P., Keyes, L., & Bailey, D. (1996). Assessing comfort zone of child care teachers in serving young children with disabilities. Journal of Early Intervention, 20, 189-203.

Cordisco, L., & Strain, P. S. (1986). Assessment of generalization and maintenance in a multicomponent parent training program. Journal of the Division for Early Childhood, 10, 10-24.

Dunlap, G., Iovannone, R., Kincaid, D., Wilson, K., Christiansen, K., Strain, P., & English, C. (2010). Prevent-Teach-Reinforce: The school-based model of individualized positive behavior support. Baltimore, MD: Paul H. Brookes.

Garfinkle, A. N. (2004). Assessing play skills. In M. McLean, M. Wolery, & D. B. Bailey (Eds.), Assessing infants and preschoolers with special needs (pp. 451-486). Upper Saddle River, NJ: Pearson Education, Inc.

Iovannone, R., Greenbaum, P., Wei, W., Kincaid, D., Dunlap, G., & Strain, P. (2009). Randomized control trial of a tertiary behavior intervention for students with problem behaviors: Preliminary outcomes. Journal of Emotional and Behavioral Disorders, 17, 213-225.

Joseph, G. E., & Strain, P. S. (2003). Comprehensive evidence-based social emotional curricula for young children: An analysis of efficacious adoption potential. Topics in Early Childhood Special Education, 23, 65-76.

Koegel, L. K., Koegel, R. L., & Dunlap, G. (Eds.). (1996). Positive behavioral support: Including people with difficult behavior in the community. Baltimore, MD: Paul H. Brookes.

Lees, D. G., & Ronan, K. R. (2008). Engagement and effectiveness of parent management training (Incredible Years) for solo high-risk mothers: A multiple baseline evaluation. Behaviour Change, 25, 109-128.

Lissman, D. C., Barton, E. E., & Morgan, G. (2010). The effect of evidence-based group training plus contextualized coaching on parenting practices. Manuscript submitted for publication.

McConnell, S. R., Strain, P. S., Kerr, M. M., Stagg, V., Lenkner, D. A., & Lambert, D. L. (1984). An empirical definition of elementary school adjustment: Selection of target behaviors for a comprehensive treatment program. Behavior Modification, 8, 451-473.

McIntyre, L. L. (2008). Parent training for young children with developmental disabilities: A randomized controlled trial. American Journal of Mental Retardation, 113, 356-368.

McWilliam, R. A., & Bailey, D. B. (1995). Effects of classroom social structure and disability on engagement. Topics in Early Childhood Special Education, 15, 123-147.

Morrison, R. S., Sainato, D. M., Benchaaban, D., & Endo, S. (2002). Increasing play skills of children with autism using activity schedules and correspondence training. Journal of Early Intervention, 25, 58-72.

Phaneuf, L., & McIntyre, L. L. (2007). Effects of individualized video feedback combined with group parent training on inappropriate maternal behavior. Journal of Applied Behavior Analysis, 40, 737-741.

Pretti-Frontczak, K., & Bricker, D. (2004). An activity-based approach to early intervention (3rd ed.). Baltimore, MD: Paul H. Brookes.

Reid, J. M., Webster-Stratton, C., & Baydar, N. (2004). Halting the development of conduct problems in Head Start children: The effects of parent training. Journal of Clinical Child and Adolescent Psychology, 33, 279-291.

Reimers, T., & Wacker, D. (1988). Parents' ratings of the acceptability of behavioral treatment recommendations made in an outpatient clinic: A preliminary analysis of the influence of treatment effectiveness. Behavioral Disorders, 14, 7-15.

Sailor, W., Dunlap, G., Sugai, G., & Horner, R. (Eds.). (2009). Handbook of positive behavior support. New York, NY: Springer.

Sandall, S. R., & Schwartz, I. S. (2008). Building blocks for teaching preschoolers with special needs (2nd ed.). Baltimore, MD: Paul H. Brookes.

Squires, J., & Bricker, D. (2007). An activity-based approach to developing young children's social emotional competence. Baltimore, MD: Paul H. Brookes.

Squires, J., Bricker, D., & Twombly, E. (2002). Ages & Stages Questionnaires: Social-Emotional. Baltimore, MD: Paul H. Brookes.

Strain, P. S., & Bovey, E. (2008). LEAP preschool. In J. S. Handleman & S. L. Harris (Eds.), Preschool education programs for children with autism (pp. 249-281). Austin, TX: Pro-Ed.

Strain, P. S., & Bovey, E. (2011). Randomized, controlled trial of the LEAP model of early intervention for young children with ASD. Topics in Early Childhood Special Education, 31, 133-154.

Strain, P. S., Lambert, D. L., Kerr, M. M., Stagg, V., & Lenkner, D. (1983). Naturalistic assessment of children's compliance to teachers' requests and consequences for compliance. Journal of Applied Behavior Analysis, 16, 243-249.

Strain, P. S., Wilson, K., & Dunlap, G. (2011). Prevent-Teach-Reinforce: A model for treating problem behavior of children with ASD in regular education classes. Behavioral Disorders, 36, 1-12.

Webster-Stratton, C. (2008). The Incredible Years: Parents and children series. Leader's guide: Preschool version of BASIC (ages 3-6 years). Seattle, WA: The Incredible Years.

Wolf, M. (1978). Social validity: The case for subjective measurement, or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Phillip S. Strain and Erin E. Barton, University of Colorado Denver; Glen Dunlap, University of South Florida

Correspondence to Phillip S. Strain, CU Denver School of Education & Human Development, Campus Box 106, P.O. Box 173364, Denver, CO 80217-3364; e-mail: Phil.Strain@ucdenver.edu.
Table 1
Project PLAY Parent-Completed Social Validity
Questionnaire and Mean Scores

      Item                                                  M (n=3)

1     I noticed increases in my child's play behaviors        5.3
      at home after starting Project PLAY.

2     I noticed increases in my child's language              5.0
      behaviors at home after starting Project PLAY.

3     I noticed increases in my child's social behaviors      4.7
      at home after starting Project PLAY.

4     I would recommend Project PLAY to other teachers        6.0
      and parents of children with autism.

5     I was pleased with the outcomes of Project PLAY         5.7
      for my child.

Note. Parents rated all items on a 7-point Likert-type
scale (from 0 - 6) with 0 indicating I strongly disagree with the
statement and 6 indicating I strongly agree
with the statement.

Table 2
Project PLAY Teacher-Completed Social Validity
Questionnaire and Mean Scores

    Item                                                  M (n=5)

1   The Project PLAY was effective for increasing the       5.2
    play behaviors of my target child.

2   The Project PLAY was effective for increasing the       5.0
    language behaviors of my target child.

3   The Project PLAY was effective for increasing my        4.6
    target child's play behaviors outside of the
    research sessions in different areas of the classroom.

4   The Project PLAY was effective for increasing my        3.8
    target child's social interactions with peers
    outside of research sessions.

5   The procedures involved in Project PLAY were easy       5.6
    to implement.

6   The coaching was helpful for accurately                 5.8
    implementing Project PLAY.

7   The duration of each session was appropriate.           5.6

8   I was able to use the strategies involved in            5.4
    Project PLAY outside of the research sessions.

9   I would recommend Project PLAY to other teachers        5.8
    and parents of children with autism.

10  I will continue to use the strategies involved in       5.6
    Project PLAY with my target child and other children.

Note. Teachers rated all items on a 7-point Likert-type scale
(from 0 - 6) with 0 indicating I strongly disagree with the statement
and 6 indicating I strongly agree with the statement.