Social Validity of a Positive Behavior Interventions and Support Model

Nancy Y. Miramontes, Michelle Marchant, Melissa Allen Heath, and Lane Fischer

Education & Treatment of Children, Vol. 34, No. 4 (November 2011). West Virginia University Press. ISSN 0748-8491.
Abstract

As more schools turn to positive behavior interventions and support (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were polled regarding their perceptions of the program's social relevance, including the acceptability of its treatment goals, procedures, and outcomes. Based on participants' feedback, several areas were identified for improvement, including the amount of paperwork required for successful implementation and the practicality of implementing and adhering to program procedures. As evidenced from the findings of this study, social validity is an important consideration when evaluating school-wide programs.

Key Words: social validity, positive behavior support, social validation, contextual fit.

The criteria for evaluating behavioral support programs are changing. In addition to supplying behavioral intervention strategies that are theoretically and technically sound, programs must now be matched specifically to the people and environment affected by implementation (Albin, Lucyshyn, Horner, & Flannery, 1996). With complex local and societal needs, public educators may feel overwhelmed by plans and strategies that promise results they do not deliver. The purported effectiveness of educational programs, however, does not guarantee that each program will be equally effective in every setting (Reimers, Wacker, & Koeppl, 1987). In selecting programs, educators must evaluate each on its applicability as well as the reliability and validity of its content and measures. They must also consider the program's potential value for the specific group of consumers it will serve.

Background

Social validity was first described by Wolf in 1978 as the value society places on a product. To legitimately analyze a program, Wolf proposed that society must evaluate its effectiveness based on goals, procedures, and outcomes. This information could then be used to tailor the program to better meet the needs of the consumer. With its roots in applied behavior analysis, social validity "attempts to go beyond 'clinical judgment' to derive information from the broader social environment of the individual(s) whose behavior is being changed" (Kennedy, 1992, p. 147). This focus not only makes social validity an important concept to consider when evaluating programs, but also challenges the field to look beyond typical "clinical judgments" and recognize the value of and need for assessing consumer reaction.

A program with high social validity is responsive to consumer needs, an aspect integral to evaluating program effectiveness because social validity promotes increased fidelity and sustainability (Albin et al., 1996). When researchers consider the concept of social validity and respond to consumers' concerns, these consumers become invested in making informed choices and are more likely to offer support. Additionally, Schwartz (1991) noted that consumers who made informed choices reported increased satisfaction, and that satisfied consumers improved a program's viability.

Improving a program's viability begins by considering the dynamics between research and practice, which in the case of social validity include a disconnect between published research and applied research as it is actually carried out in the field. Educators have valid reasons for concern regarding the quality and applicability of current educational research (Carnine, 1997). Because of the increasing gap between educational research and practice (Kern & Manz, 2004) and the top-down manner in which research-based programs are typically introduced (Child Trends, 2008), social validity becomes even more critical.

Purpose of Social Validity Research

The purpose of social validity assessment is not to gather false praise for a proposed program, but to gather useful information about potential pitfalls, implementation barriers, and varying perceptions regarding the program's potential impact (Schwartz & Baer, 1991). Carnine (1997) explained that social validity is important because endeavoring to seek out consumer opinion lays the foundation for consumer trust and buy-in. Behavioral programs that tap into what matters most to key consumers help bridge the gap between research and practice (Kazdin, 1977). Furthermore, research-based interventions that address both student needs and teacher satisfaction and buy-in are likely to be successful because of their focus on key consumers.

Consumer-focused educational and behavioral programs, like positive behavior interventions and support (PBIS) and response to intervention (RtI), are particularly responsive to information gathered regarding social validation. PBIS and RtI in particular emphasize the use of collected data to monitor the effectiveness of interventions (Sugai et al., 2000). PBIS and RtI interventions are designed to be proactive rather than reactive. These systems of positive interventions are data driven in that they rely heavily on data to guide decision making, and they are based on the sustained use of research-validated practices focused on maximizing student achievement (Gresham, 2004; Sugai & Horner, 2002). With a team problem-solving model as their foundation, PBIS and RtI function on a three-tiered approach to problem identification and intervention.

The multi-tiered intervention model of PBIS and RtI is well suited for social validation assessments. One of the fundamental philosophies of PBIS and RtI is that although humanistic values should not replace empiricism, these values should certainly inform empiricism (Carr et al., 2002). Stakeholder participation is fundamental to the success of PBIS and RtI in establishing themselves as collaborative systems (Sugai et al., 2000). Their inclusive function as a support network has undoubtedly contributed to their success with systems-level change (Carr et al., 2002; Sugai et al., 2000).

Successful social validation relies heavily on a number of system dynamics to create the environment needed for effective evaluation. First, a variety of key stakeholders (e.g., consumer judges) should be surveyed (Kern & Manz, 2004). Second, the secondary and tertiary tiers of PBIS and RtI, not just the universal tier, need to be evaluated. Third, the social validation data should be used productively: for example, to produce a workable outline drawing attention to aspects that appear to need modification.

Limitations in Current Literature

The current literature pertaining to PBIS and RtI is rapidly increasing, but not without limitations. Despite repeated recommendations to assess social validity (Carr, 2007; Gresham & Lopez, 1996; Kern & Manz, 2004), researchers continue to omit reporting, or even evaluating, these important data. Even when strides are made to assess and report on social validity, these reports rarely reflect the original purposes for social validation as outlined by Wolf (1978) and Kazdin (1977), who first identified the constructs. Their original intent was to gather evaluation data from direct consumers in three distinct areas: the social significance of treatment goals, the social appropriateness of procedures, and the social importance of effects and outcomes.

In an attempt to reconsider original perspectives on social validity assessments, Kern and Manz (2004) proposed the following improvements for future research: (a) clearer definitions of the term "stakeholders," (b) adequate representation of stakeholders, and (c) adequate assessment of goals, procedures, and outcomes at the secondary and/or tertiary levels of the PBIS and RtI prevention model. These improvements redirect the purpose of social validity assessments to the original intervention goal of serving clients (Kern & Manz, 2004).

In response to numerous requests in the literature (Kern & Manz, 2004; Schwartz & Baer, 1991; Scott, 2007), the present study proposes to begin addressing these limitations by adequately defining and including the following consumer groups: teachers, administrators, and related service providers. An adequate representation of these groups will be sampled to assess their satisfaction regarding goals, procedures, and outcomes across all three tiers of PBIS and RtI. Finally, because the study targets the levels of social validity for schools currently implementing PBIS and RtI programs, a sample will be taken from those participating in a statewide initiative specifically designed to implement PBIS and RtI measures in the public school system. This statewide initiative trains schools on all matters having to do with PBIS and RtI implementation, including, but not limited to, intervention design, implementation, and progress assessment, with ongoing support through school-based intervention coaches and district support staff. The training offers schools the opportunity to learn how PBIS and RtI can work for them and helps them tailor interventions to fit the needs of each individual school. Teams are offered examples of interventions with detailed instruction on intervention design, implementation, and progress monitoring (refer to http://www.updc.org/abc/ for more information). For purposes of this research study and to maintain anonymity, this statewide program will be referred to as the Academic/Behavioral Coaching Initiative (ABC-I).

Purpose & Research Questions

Social validation is not a new concept to the field; however, its methodology is still evolving, and clarifications need to be made regarding the process of social validation. The present study proposes that social validation procedures be redirected to their primary purpose: to evaluate program goals, procedures, and outcomes at all levels of implementation. Furthermore, correlations between social validation data and treatment integrity will be evaluated to further inform overall program performance.

This study proposes to expand knowledge concerning the primary constructs of social validity by addressing the following questions:

1. What perceptions do related service providers (e.g., school psychologists, school counselors, social workers) have regarding the social validity of goals, procedures, and outcomes of positive behavioral initiatives?

2. What perceptions do teachers have with respect to the social validity of goals, procedures, and outcomes of positive behavioral initiatives?

3. What perceptions do administrators have with respect to the goals, procedures, and outcomes of positive behavioral initiatives?

4. What perceptions do the consumers involved in positive behavioral initiatives have with respect to goals, procedures, and outcomes of the secondary and tertiary levels of PBIS and RtI initiatives?

5. Is there a relationship between respondents' school-level Schoolwide Evaluation Tool (SET) scores (treatment fidelity) and their individual responses to the questionnaire?

Method

Participants and Setting

The respondents for this study were selected from a convenience sample pool of participants in PBIS and RtI programs from elementary and middle schools currently involved in a statewide behavior support initiative (ABC-I) implemented in a western state. All of the schools sponsored by ABC-I received the questionnaire developed for this study. Input for the research was provided by 35 schools in 16 school districts, distributed across urban, rural, and suburban regions, representative of the state's general population.

Of the final sample (N = 270), approximately 80-90% were direct consumers, defined as teachers for the purposes of this study, and 10-20% were indirect consumers, defined as administrators and related service personnel. Participants were asked to provide demographic information regarding professional classification and the number of years their school had participated in the program. A comprehensive summary of the participating schools that supplied information regarding their school and district is presented in Table 1.

Measures

As the primary measure for this study, a questionnaire was designed following suggestions in the literature for accurate social validity sampling. The questionnaire included 18 items, each measuring a specific judgment of social validity. The term "goals of PBIS" was rephrased as "four components of PBIS." Following the suggestion of Reimers et al. (1987) that understanding should precede measures of acceptability, the questionnaire included an item written to measure participants' perceived understanding of the model's goals and procedures.

Two questions intended to measure the first judgment of social validity (i.e., program goal acceptability) were written using the terms important and acceptance as outlined by Fawcett (1991). Two questions intended to measure the second judgment of social validity (acceptability of program procedures) employed the language "willingness to use," "given time constraints," and "willingness to recommend to others," as recommended by Kern and Manz (2004) for determining meaningful acceptance. The third judgment of social validity (program outcomes) was also addressed with two questions, both constructed to measure acceptability of treatment outcomes.

The format for the questions pertaining to treatment outcomes was taken from an outline provided by Lane and Beebe-Frankenberger (2004). The final three items of the questionnaire were written to specifically address the secondary and tertiary levels of PBIS prevention. The final questionnaire was submitted to expert review. The reviewers were selected from a pool of university associates with years of experience in designing questionnaires for research purposes. These experts provided feedback regarding the layout of the survey and the clarity of the questions, which informed the final version of the questionnaire.

To measure treatment fidelity, the ABC-I researchers used the School-wide Evaluation Tool (SET), version 2.0 (Sugai, Lewis-Palmer, Todd, & Horner, 2001), to assess and evaluate individual schools' yearly progression. This tool was specifically designed to evaluate treatment fidelity, ensuring that schools are implementing PBIS and RtI accurately. Data for the SET are collected annually: a baseline measure is in place before PBIS interventions begin, and a follow-up assessment is conducted 6 to 12 weeks after the interventions are implemented. The SET provides ABC-I with a score that places a school within a specific implementation level. Schools that receive 80% on six of the seven indicator categories are judged as having high implementation, that is, as having implemented PBIS programs efficiently and with integrity.
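
To make this decision rule concrete, the sketch below encodes the six-of-seven, 80% criterion in Python. It is a minimal illustration only, not ABC-I's actual scoring code: the subscale names and example percentages are invented, and the real SET uses its own category labels and scoring forms.

```python
# A minimal sketch of the implementation-level rule described above.
# The seven category names and the example percentages are assumptions.

def is_high_implementation(subscale_scores, threshold=80.0, required=6):
    """Return True if at least `required` of the seven SET indicator
    categories meet or exceed the 80% criterion described in the text."""
    meeting = sum(1 for score in subscale_scores.values() if score >= threshold)
    return meeting >= required

# Illustrative (invented) subscale percentages for one school.
example_school = {
    "expectations_defined": 92.0,
    "expectations_taught": 85.0,
    "reward_system": 78.0,
    "violation_system": 88.0,
    "monitoring": 90.0,
    "management": 95.0,
    "district_support": 83.0,
}

print(is_high_implementation(example_school))  # True: 6 of 7 categories >= 80%
```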

Procedures and Data Collection

The initial questionnaire was administered at the ABC-I yearly training conference, which gathered all of the PBIS school teams for ongoing training. At the close of the conference, the conference director introduced the survey as part of a new research effort between the statewide program and a university, explaining to attendees that feedback from this satisfaction questionnaire would inform future changes to the program. Participation was voluntary.

Questionnaires were included in the conference folders attendees received, and voluntary participation earned a raffle entry for one of ten iPod shuffles. Those who chose to participate were instructed to first read and sign the informed consent page and to submit their completed questionnaires in an envelope provided by their building coordinator; thus questionnaires were grouped by school. Building coordinators were instructed to seal the questionnaires in their school envelope, ensuring the privacy of individual responses, and to place the envelopes at a designated location in the conference center. Participants who wished to complete their questionnaires in private and submit them individually (not as a school) could take the survey home and mail it to the program director within the following two weeks to be included in the raffle; 15 participants chose this option.

Participants were asked to read a brief one-page explanation of terms included in the questionnaire to reduce the possibility of confusion regarding terminology. The questionnaire is included as Figure 1. Participants completed a demographics section (two questions) at the top of the questionnaire, indicating professional classification (teacher, related service provider, etc.) and the number of years their school had participated in the statewide initiative. Following the two demographic questions, participants responded to 18 statements, indicating their perception of each by checking one of the following options: strongly disagree, disagree, neutral, agree, or strongly agree.

Materials

The questionnaire was administered in paper/pencil format. The complete list of materials for this study included informed consent forms for participants, questionnaire cover pages with terminology clarifications and instructions for filling out the questionnaire, pens/pencils, conference folders, white envelopes to ensure questionnaire privacy, manila envelopes for blank questionnaires, and 10 iPod shuffles.

Statistical Analyses

Descriptive statistics were used to summarize and describe participant demographic information (years involved in ABC-I project and participant's current position). Data included percentages of participants who identified each specific professional classification and the number of schools that reported data.

The questionnaire's 18 statements were rated on a 5-point Likert scale, with points anchored by specified descriptors ranging from strongly agree to strongly disagree. Medians, means, and standard deviations were calculated for participants' responses to each statement. Because this research was exploratory and did not focus on survey development, no reliability or validity analyses were run on the survey itself. Due to the skewed nature of the data, a Spearman correlation was conducted to explore monotonic relationships between school SET scores and responses on the questionnaire.
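
To make this analytic plan concrete, the sketch below computes the descriptive statistics and a Spearman correlation for a single item. It is a minimal illustration under assumed column names (item_1, set_score) and invented sample values; it is not the study's actual analysis script.

```python
# A sketch of the analyses described above, with assumed column names:
# `item_1` holds one statement's 1-5 Likert ratings and `set_score` the
# respondent's school SET percentage (all values here are illustrative).
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "item_1":    [4, 5, 4, 3, 5, 4],
    "set_score": [95.8, 95.8, 83.3, 83.3, 100.0, 100.0],
})

# Median, mean, and standard deviation for the statement.
print(df["item_1"].median(), df["item_1"].mean(), df["item_1"].std())

# Spearman's rho suits skewed, ordinal Likert data: it tests for a
# monotonic (not strictly linear) relationship with treatment fidelity.
rho, p = spearmanr(df["set_score"], df["item_1"])
print(f"rho = {rho:.3f}, p = {p:.3f}")
```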

Results

The data are organized and presented according to consumer judges, with each subgroup identified and represented by its relevant data. The number of respondents for the entire survey totaled 270. Of these, 9.2% (n = 26) were administrators; 7.4% (n = 21) were related service providers (i.e., school psychologists, counselors, and social workers); and 57.1% (n = 161) were teachers, both general and special educators. Additionally, 11.7% (n = 33) did not respond to this question, and 15.1% (n = 29) marked the "other" category (i.e., district coaches, building coordinators, paraeducators, parent/community representatives, etc.). A comprehensive summary of these data is presented in Table 1. For ease of reporting and interpreting data, participant satisfaction was indicated when respondents reported agreeing or strongly agreeing with a statement; dissatisfaction was indicated when respondents reported disagreeing or strongly disagreeing with an item.
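
Since all group results below are reported in these dichotomized terms, a brief sketch may make the rule explicit. The function and sample ratings are illustrative assumptions, not the study's analysis code.

```python
# How satisfaction was operationalized above: a rating of 4 (agree) or
# 5 (strongly agree) counts as satisfied, 1-2 as dissatisfied, 3 as neutral.

def classify(rating: int) -> str:
    if rating >= 4:
        return "satisfied"
    if rating <= 2:
        return "dissatisfied"
    return "neutral"

ratings = [5, 4, 3, 2, 4]  # illustrative responses to one item
share = sum(classify(r) == "satisfied" for r in ratings) / len(ratings)
print(f"{share:.0%} satisfied")  # 60% satisfied
```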

Participant Groups Regarding Overall Program

Related service providers. The data show this consumer group to have been highly satisfied with the statewide program. The mode response on the majority of the questions was 4 (agree). When asked whether the program had made a positive impact on their school, 100% of respondents agreed or strongly agreed (M = 4.65, SD = .49). When asked whether the program was worth the time and effort invested, 95.2% responded positively, rating this question either 4 (agree) or 5 (strongly agree) (M = 4.57, SD = .60). Of 21 respondents, 95.2% also indicated that they would recommend the program model to other educators, while 4.8% remained neutral (M = 4.71, SD = .56). Finally, when asked about their perceptions of staff consensus or buy-in for the program, 66.7% agreed or strongly agreed with a positive statement, while 9.5% did not (M = 3.71, SD = .85).

Teachers. The results for the group of teachers, both general and special educators, were also positive. The mode response on the majority of the questions was 4 (agree). Of 161 teacher respondents, 98.1% agreed or strongly agreed that the program made a positive impact in their schools (M = 4.52, SD = .61); of 156 respondents, 94.3% agreed or strongly agreed that the program was worth their time and effort (M = 4.38, SD = .64). Similarly, 92.2% of 155 respondents agreed or strongly agreed that they would recommend the program's initiatives to other educators (M = 4.41, SD = .70). Of the twelve social validity questions, three elicited more negative responses from the teacher consumer group. On the sixth question, 72.4% of 160 teachers agreed or strongly agreed that the data collection procedures of the program were easy to implement, and 10% disagreed or strongly disagreed (M = 3.83, SD = .89). On the seventh question, 6.9% of 159 teachers disagreed or strongly disagreed that the progress monitoring procedures were practical (M = 3.85, SD = .83). In judging the paperwork required to implement the program's strategies, 68.4% of 161 respondents agreed or strongly agreed that it was reasonable, while 8.1% disagreed or strongly disagreed (M = 3.96, SD = 2.6).

Administrators. The data show this consumer group to have also been highly satisfied with the program. The mode response on the majority of the questions was 5 (strongly agree). Of 26 administrators, 100% agreed or strongly agreed that the program made a positive impact in their schools (M = 4.81, SD = .40), and 100% agreed or strongly agreed that the program improved positive school outcomes (M = 4.64, SD = .49). Likewise, on Question 2 (M = 4.62, SD = .50) and Question 3 (M = 4.42, SD = .50), 100% of administrators agreed or strongly agreed that they had increased their knowledge and skills in the application of systematic problem-solving strategies to academic and social behavior issues.

Concerning data collection procedures for the program's interventions, 73.1% of 26 administrators agreed or strongly agreed that procedures were easy, while 3.8% disagreed or strongly disagreed (M = 3.88, SD = .91). In addition, 76% of 25 administrators agreed or strongly agreed that the paperwork was reasonable, while 4% disagreed or strongly disagreed (M = 4.0, SD = .96). The majority of the administrators (96.1%; M = 4.76, SD = .44) also agreed or strongly agreed that they would recommend the program to other educators, and 100% (M = 4.65, SD = .56) agreed or strongly agreed that the program was worth their time and effort. In comparison to the other consumer groups, administrators' responses were the most positive, with the largest percentages indicating that they agreed or strongly agreed.

Responses Across Levels of Intervention

Statements 13-18 on the questionnaire concerned consumers' satisfaction with treatment goals, procedures, and outcomes across all tiers of PBIS program implementation. Of 270 respondents, 94.6% (M = 4.38, SD = .68) were satisfied with the universal/core goals of their school program; 91% (M = 4.30, SD = .74) were satisfied with the universal procedures; and 86.2% (M = 4.15, SD = .75) were satisfied with the overall outcomes at the universal level. Of 270 respondents, 86.2% were satisfied with the supplemental and intensive (second- and third-tier) goals, while 2.5% were not satisfied (M = 4.18, SD = .76). Additionally, 81.5% (M = 4.11, SD = .77) were satisfied with the supplemental and intensive procedures at their schools, and 79.7% (M = 4.03, SD = .77) were satisfied with the supplemental and intensive outcomes at their schools.

SET Scores and Social Validation Correlations

A Spearman rho correlation coefficient was calculated for the relationship between school SET scores (i.e., treatment fidelity) and participants' responses to the individual questionnaire items. A comprehensive summary of these data is presented in Table 2. Significant positive correlations were found between SET scores and Question 1 (rho(279) = .118, p < .05); Question 4 (rho(278) = .145, p < .05); Question 10 (rho(272) = .195, p < .01); Question 14 (rho(274) = .181, p < .01); Question 16 (rho(273) = .146, p < .05); and Question 17 (rho(274) = .143, p < .05). The data illustrate that respondents from schools with higher treatment fidelity used the program strategies and interventions more; agreed that the program initiatives made a positive impact; perceived their schools to have more buy-in for the program; and were more satisfied with their school's universal procedures and its supplemental and intensive goals and procedures. No significant correlations were found between SET data and the other 12 questions, and no negative correlations were found.
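
The item-by-item analysis above can be sketched as follows. The data frame, column names, and values are illustrative assumptions; the point of the sketch is how pairwise deletion of missing responses yields a different n for each item, as in the varying values (279, 278, 272, ...) reported above.

```python
# A sketch of the item-by-item correlational analysis reported above.
# Column names and values are assumptions, not study data.
import pandas as pd
from scipy.stats import spearmanr

# One row per respondent; NaN marks a skipped item, so the paired n
# varies from item to item, as in the results above.
df = pd.DataFrame({
    "item_1":    [4, 5, 3, 4, None, 5],
    "item_10":   [5, 4, 3, None, 4, 5],
    "set_score": [95.8, 95.8, 59.1, 59.1, 100.0, 100.0],
})

for item in ["item_1", "item_10"]:
    valid = df[[item, "set_score"]].dropna()      # pairwise deletion
    rho, p = spearmanr(valid[item], valid["set_score"])
    flag = "*" if p < .05 else ""
    print(f"{item}: n = {len(valid)}, rho = {rho:.3f}{flag}")
```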

Discussion

A central purpose of the present study was to redirect social validity assessments to their primary purpose: to evaluate program goals, procedures, and outcomes at all levels of intervention implementation. This was accomplished by polling consumers who are often excluded from assessments although their perceptions contribute vital information to future intervention efforts. Two recommended consumer types were included: direct consumers (teachers) and indirect consumers (related service providers and administrators) who make decisions about adopting programs and whose power, influence, and support affect intervention outcomes.

Consumer Responses: Approval and Suggestions

Related service providers, teachers, and administrators regarded several facets of the statewide program quite positively. A substantial majority noted an improvement in overall school climate and environment after implementing universal initiatives of the statewide program. A comparable majority of all participants perceived that the implemented program positively impacted their school and was worth the time and effort they invested. Additionally, a strong majority also indicated that they would recommend the program to others.

When compared to the perceptions of related service providers and administrators, teachers' responses were slightly less supportive. A plausible explanation for this difference is that teachers are the ones actually implementing program interventions, while related service providers and administrators play an indirect, supportive role. Most likely the daily effort and commitment constituted a greater burden for teachers than for other participants. When implementing programs, teachers' perceptions, buy-in, and ability to carry out the interventions should be carefully considered. Administrators were overwhelmingly positive about the program's initiatives, unanimously agreeing that they would recommend the ABC-I model to other educators, that the project positively impacted their school, and that the project increased their knowledge and skill for solving academic and social behavior challenges.

Although all three groups responded positively to survey items, all three identified the same areas as needing improvement. Accordingly, the following changes are suggested to improve future program implementation: (a) the data collection procedures could be easier, (b) the progress-monitoring procedures could be more practical, and (c) the amount of paperwork involved in implementing the program's initiatives could be reduced.

This finding is consistent with the suggestion of Carnine (1997) that research efforts should be evaluated in terms of trustworthiness, usability, and accessibility. He also noted that improvements in these areas bear strong implications for practice. Programs that are not accessible or useful are less likely to be implemented, particularly when staff have limited time and heavy workloads. This study's finding of weaker support for data collection practices and implementation procedures implies that the intervention must improve in these areas in order to be socially viable. Improving usability and accessibility for key consumers would increase the likelihood that current stakeholders will maintain PBIS and RtI programs, and perhaps ultimately the success of implementing future PBIS and RtI programs statewide and nationwide.

Supplemental and Intensive Levels of Implementation: Social Validity

Due to the importance of evaluating the secondary and tertiary levels of intervention, this study also targeted their implementation. The findings suggest that participating consumers found the secondary and tertiary goals and procedures slightly more favorable than the secondary and tertiary outcomes. This slight trend warrants further investigation into the rarely explored supplemental and intensive levels of positive behavior supports. Perhaps the difference in consumer perception between goals/procedures and outcomes at these levels indicates a need for more intensive focus on fewer indicators of progress.

Rapid progress is not likely to be apparent if behaviors improve quickly in response to universal intervention efforts, making outcomes at the secondary and tertiary levels more difficult to see. In 2000, Sugai et al. noted that despite limited resources, schools can make significant contributions by "working smarter" and creating proactive environments that sustain systemwide interventions like PBIS. In particular, procedures must be structured and targeted to address the identified problems. For this program's initiatives, a foundation for successful outcomes has been built on participants' positive perceptions of the program's socially valid goals and procedures.

What remains is to use this buy-in to "work smarter" and make the adjustments necessary for the outcomes to maintain future social validity. For instance, infusing more support at the supplemental and intensive levels may yield more immediate positive results. Additionally, fine-tuning the procedures for easier implementation may lessen the teachers' burden by increasing efficiency and reducing the amount of time required to implement interventions. Another aspect to consider is the possibility of rewarding teachers for their tireless commitment to children's education. These efforts would be especially critical at the supplemental and intensive levels of the program. Extra support for teachers would help them maintain their focused dedication to benefiting children.

Relationship of Treatment Fidelity to Social Validity

To discern whether treatment fidelity was correlated with positive consumer perceptions, this study explored the relationship between respondents' school SET scores (treatment fidelity) and their individual responses to the questionnaire. The correlational data indicated that the more accurately PBIS and RtI were implemented at a school, the more consumers agreed on the following points: (a) the initiatives made a positive impact in their school, (b) their school had consensus or buy-in, and (c) they were satisfied with the supplemental and intensive goals and procedures.

Though several significant correlations were found, responses to 12 of the 18 questions did not correlate significantly with SET scores. This majority of non-significant correlations can be attributed in part to study limitations: participants' responses were overwhelmingly positive with minimal variance, and such a restricted range reduces the potential for identifying significant correlations. The fact that no negative correlations were found between treatment fidelity and positive responses to individual questions does indicate that social validity and treatment fidelity did not act against each other.

Of interest, the questions that were significantly correlated support an assertion that the more accurately a school implemented PBIS and RtI initiatives, the more the consumers perceived these initiatives as positively impacting their school. Data collection and paperwork involved in implementing PBIS and RtI initiatives are often cumbersome and time consuming for consumers, and immediate results are rarely seen. But participants' perceptions did demonstrate that where implementation was accurate, the program slowly became valued. This finding further supports the need for and utility of gathering social validity data from important consumers. The argument could be made that accurate and faithful PBIS and RtI implementation fosters social validity, or that good contextual fit and social validity foster faithful PBIS and RtI implementation. This study's findings, while meaningful, should be interpreted in light of the potential limitations discussed in the following section.

Future Study and Practice

Limitations of this Study

The results of this study should be interpreted with caution. First, this research was exploratory, and thus the survey instrument, though based on researched principles, was not tested for validity and reliability, limiting the generalizability of the findings. Specifically, flaws in the format, structure, or wording of the questionnaire may have yielded responses affected by participants' individual misunderstandings.

Second, incomplete SET data were used in determining correlations between implementation progress and answers to individual questions on the questionnaire. More than half of the data were missing due to the failure of participating schools to collect and/or report scores. A complete data set might have yielded different results, which may or may not have contributed to stronger positive correlations. The data might have warranted stronger support for making PBIS and RtI initiatives easier to implement with high fidelity, or they could have demonstrated a negative correlation, indicating that consumers who were implementing PBIS and RtI initiatives faithfully did not rank their validity higher than those who were not. The implication is that without a complete data set, an accurate determination cannot be made about how perceptions of PBIS and RtI initiatives change as implementation improves over time. Future research should emphasize the need for accurate data keeping when implementing PBIS and RtI initiatives.

Furthermore, the study was conducted at an annual statewide conference with those who attended. This was a convenience sample that did not include all consumers of the statewide initiatives; thus attendees may not proportionately represent all participants. Additionally, paper format was the only format offered; a web-based survey distributed to all program consumers via the Internet was not provided.

Future Directions

Despite the limitations of this study, future intervention research should make social validation assessments a higher priority. Neglecting this typically underexplored area results in a deficit of meaningful data that could improve overall intervention implementation. Effective social validation assessments should incorporate suggestions in the existing literature (Kern & Manz, 2004; Lyst et al., 2005; Schwartz & Baer, 1991; Scott, 2007). Expanding consumer categories to include students, parents, and other community members would yield more global data and further inform improvement efforts. Future research that includes these additional consumers would expand the social validity literature.

In addition, future studies would benefit from evaluating the reliability and validity of survey questionnaires. A valid and reliable instrument would strengthen the generalizability of the obtained data, further informing efforts to implement socially valid programs. An improved instrument would promote the collection of social validity data and thus strengthen future studies in this area. Future research should also evaluate the correlations between treatment fidelity and social validity, a relationship that has yet to be thoroughly and formally examined.

Implications for Practice

The information presented in this study empowers consumers in two ways. First, the data provide a global view of this program's social validity and of consumers' perceptions of progress across the state. Second, the research findings inform consumers of the specific areas that are perceived as needing improvement.

The most important finding is that a notable proportion of consumers of the PBIS and RtI initiatives did not find the data collection procedures for these interventions to be socially valid. While the initiatives were found to foster positive improvement in school climate, a significant percentage of the respondents did not find these methods practical. However, small yet statistically significant correlations were found between faithful program implementation and increased social validity, indicating that the more accurately a program was implemented, the more valuable the consumers perceived its procedures to be. To increase the effectiveness of interventions, individual schools can make adjustments and alter interventions to better suit their environments and needs. With social validation data, each school can make adjustments that suit the needs of its particular group of consumers.

The larger context for this research addresses the need for meticulous research strategies that carefully consider social validity and its impact on program implementation. Carnine (1997) noted, "research is not just science, but craft and art as well. In short, researchers and other groups must begin [to] work concurrently to deal with shortcomings that undermine the value of research" (p. 520). Consumers should protect themselves from interventions that promise results they do not deliver. Social validity assessments cannot be conducted without the people; if the people require policymakers to share this information in the spirit of collaboration, then consumers' valuable opinions can be used for and not against them.

One of the fundamental philosophies of PBIS is that while humanistic values should not replace empiricism, these values should certainly inform empiricism (Carr et al., 2002). Social validation continues to be an underused measure in the literature. Because consumers are an integral part of interventions, their opinions and needs must be carefully considered. Consumer opinion cannot and should not be ignored, for it impacts every aspect of program implementation and ultimately the success or failure of interventions.

References

Albin, R. W., Lucyshyn, J. M., Horner, R. H., & Flannery, K. B. (1996). Contextual fit for behavioral support plans. In L. K. Koegel, R. L. Koegel, & G. Dunlap (Eds.), Positive behavioral support: Including people with difficult behavior in the community (pp. 81-97). Baltimore, MD: Brookes.

Bohanon, H., Fenning, P., Carney, K. L., Minnis-Kim, M. J., Anderson-Harriss, S., Moroz, K. B.,... Pigott, T. D. (2006). Schoolwide application of positive behavior support in an urban high school: A case study. Journal of Positive Behavior Interventions, 8(3), 131-145. doi: 10.1177/10983007060080030201.

Carnine, D. (1997). Bridging the research-to-practice gap. Exceptional Children, 63(4), 513-521.

Carr, E. G. (2007). The expanding vision of positive behavior support: Research perspectives on happiness, helpfulness, hopefulness. Journal of Positive Behavior Interventions, 9(1), 3-14. doi: 10.1177/10983007070090010201.

Carr, E. G., Dunlap, G., Horner, R. H., Koegel, R. L., Turnbull, A. P., Sailor, W.,... Fox, L. (2002). Positive behavior support: Evolution of an applied science. Journal of Positive Behavior Interventions, 4(1), 4-16. doi: 10.1177/109830070200400102.

Carr, J. E., Austin, J. L., Britton, L. N., Kellum, K. K., & Bailey, J. S. (1999). An assessment of social validity trends in applied behavior analysis. Behavioral Interventions, 14, 223-232. doi: 10.1002/(SICI)1099-078X(199910/12)14:4<223::AID-BIN37>3.0.CO;2-Y.

Child Trends. (2008, August). The role of organizational context and external influences in the implementation of evidence-based programs: An exploratory study (Report IV). Washington, DC: Author. Retrieved from http://www.childtrends.org/Files/Child_Trends-2008_12_17_FR_Implementation4.pdf.

Elliott, S. N. (1988). Acceptability of behavioral treatments: Review of variables that influence treatment selection. Professional Psychology: Research and Practice, 19(1), 68-80. doi: 10.1037/0735-7028.19.1.68.

Elliott, S. N., Witt, J. C., Galvin, G. A., & Peterson, R. (1984). Acceptability of positive and reductive behavioral interventions: Factors that influence teachers' decisions. Journal of School Psychology, 22, 353-360. doi: 10.1016/0022-4405(84)90022-0.

Fawcett, S. B. (1991). Social validity: A note on methodology. Journal of Applied Behavior Analysis, 24, 235-239. doi: 10.1901/jaba.1991.24-235.

Finn, C. A., & Sladeczek, I. E. (2001). Assessing the social validity of behavior interventions: A review of treatment acceptability measures. School Psychology Quarterly, 16, 176-206. doi: 10.1521/scpq.16.2.176.18703.

Fixsen, D., & Dunlap, G. (2004). A celebration of the contributions of Montrose M. Wolf and Todd R. Risley. Journal of Positive Behavior Interventions, 6(2), 121-123. doi: 10.1177/10983007040060020701.

Francisco, V. T., & Butterfoss, F. D. (2007). Social validation of goals, procedures, and effects in public health. Health Promotion Practice, 8, 128-133. doi: 10.1177/1524839906298495.

Foster, S. L., & Mash, E. J. (1999). Assessing social validity in clinical treatment research: Issues and procedures. Journal of Consulting and Clinical Psychology, 67(3), 308-319. doi: 10.1037/0022006X.67.3.308.

George, H. P., & Kincaid, D. K. (2008). Building district level capacity for positive behavior support. Journal of Positive Behavior Interventions, 10(1), 20-32. doi: 10.1177/1098300707311367.

George, M. P., White, G. P., & Schlaffer, J. J. (2007). Implementing school-wide behavior change: Lessons from the field. Psychology in the Schools, 44(1), 41-51. doi: 10.1002/pits.20204.

Gresham, F. M. (2004). Current status and future directions of school-based behavioral interventions. School Psychology Review, 33(3), 326-343.

Gresham, F. M., & Lopez, M. F. (1996). Social validation: A unifying concept for school-based consultation research and practice. School Psychology Quarterly, 11(3), 204-227. doi: 10.1037/h0088930.

Houchins, D. E., Jolivette, K., Wessendorf, S., McGlynn, M., & Nelson, C. M. (2005). Stakeholders' view of implementing positive behavioral support in a juvenile justice setting. Education and Treatment of Children, 28(4), 380-399.

Kamps, D. M., Kravits, T., Gonzales-Lopez, A., Kemmerer, K., Potucek, J., & Harrell, L. G. (1998). What do peers think? Social validity of peer-mediated programs. Education & Treatment of Children, 21(2), 107-134.

Kauffman, J. M. (1996). Research to practice issues. Behavioral Disorders, 22(1), 55-60.

Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1(4), 427-451. doi: 10.1177/014544557714001.

Kennedy, C. H. (1992). Trends in the measurement of social validity. The Behavior Analyst, 15, 147-156.

Kern, L., & Manz, P. (2004). A look at current validity issues of school-wide behavior support. Behavioral Disorders, 30(1), 47-59.

Kincaid, D., Childs, K., Blase, K. A., & Wallace, F. (2007). Identifying barriers and facilitators in implementing school-wide positive behavior support. Journal of Positive Behavior Interventions, 9(3), 174-184. doi: 10.1177/10983007070090030501.

Lane, K. L., & Beebe-Frankenberger, M. (2004). Social validity: Goals, procedures, outcomes. In V. Lanigan (Ed.), School-based interventions: The tools you need to succeed (pp. 85-127). Boston, MA: Pearson Education.

Lloyd, J. W., & Heubusch, J. D. (1996). Issues of social validation in research on serving individuals with emotional or behavioral disorders. Behavioral Disorders, 22(1), 8-14.

Lyst, A. M., Gabriel, S., O'Shaughnessy, T. E., Meyers, J., & Meyers, B. (2005). Social validity: Perceptions of check and connect with early literacy support. Journal of School Psychology, 43, 197-218. doi: 10.1016/j.jsp.2005.04.004.

McCarthy, J. A., & Shrum, L. J. (2000). The measurement of personal values in survey research: A test of alternative rating procedures. Public Opinion Quarterly, 64(3), 271-298. doi: 10.1086/317989.

McCurdy, B. L., Mannella, M. C., & Eldridge, N. (2003). Positive behavior support in urban schools: Can we prevent the escalation of antisocial behavior? Journal of Positive Behavior Interventions, 5(3), 158-170. doi: 10.1177/10983007030050030501.

McMahon, R. J., & Forehand, R. L. (1983). Consumer satisfaction in behavioral treatment of children: Types, issues, and recommendations. Behavior Therapy, 14, 209-225. doi: 10.1016/S0005-7894(83)80111-7.

Miltich, A. P. (2003). Factors related to social validity: Teacher perceptions of project early literacy-school engagement. Dissertation Abstracts International Section A: Humanities and Social Sciences, 64(6), 1971.

Montague, M., Bergeron, J., & Lago-Delello, E. (1997). Using prevention strategies in general education. Exceptional Children, 29(8), 1-12.

Ransdell, L. B. (1996). Maximizing response rate in questionnaire research. American Journal of Health Behavior, 20(2), 50-56.

Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavioral interventions: A review of the literature. School Psychology Review, 16(2), 212-227.

Safran, S. P., & Oswald, K. (2003). Positive behavior supports: Can schools reshape disciplinary practices? Exceptional Children, 69(3), 361-373.

Schwartz, I. S. (1991). The study of consumer behavior and social validity: An essential partnership for applied behavior analysis. Journal of Applied Behavior Analysis, 24(2), 241-244. doi: 10.1901/jaba.1991.24-241.

Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24(2), 189-204. doi: 10.1901/jaba.1991.24-189.

Scott, T. M. (2007). Issues of personal dignity and social validity in schoolwide systems of positive behavior support. Journal of Positive Behavior Interventions, 9(2), 102-112. doi: 10.1177/10983007070090020101.

Scott, T. M., & Barrett, S. B. (2004). Using staff and student time engaged in disciplinary procedures to evaluate the impact of school-wide pbs. Journal of Positive Behavior Interventions, 6(1), 21-27. doi: 10.1177/10983007040060010401.

Seigel, C. T. (2008). School-wide positive behavior support programs in elementary schools. Unpublished master's thesis, Dominican University of California, San Rafael, California.

Smylie, M. A. (1988). The enhancement function of staff development: Organizational and psychological antecedents to individual teacher change. American Educational Research Journal, 25(1), 1-30.

Sugai, G., & Horner, R. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child & Family Behavior Therapy, 24 (1/2), 23-50. doi: 10.1300/J019v24n01_03.

Sugai, G., Lewis-Palmer, T., Todd, A. W., & Horner, R. H. (2001). School-wide evaluation tool. Eugene: University of Oregon.

Sugai, G., Horner, R. H., Dunlap, G., Hieneman, M., Lewis, T. J., Nelson, C. M.,... Ruef, M. (2000). Applying positive behavioral support and functional behavioral assessment in schools. Journal of Positive Behavior Interventions, 2, 131-143. doi: 10.1177/109830070000200302.

Tourangeau, R., Couper, M. P., & Conrad, F. (2004). Spacing, position, and order. Interpretative heuristics for visual features of survey questions. Public Opinion Quarterly, 68(3), 368-393. doi: 10.1093/poq/nfh035.

Utah Personnel Development Center. (n.d.). Retrieved from http://www.updc.org/professional-development/.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214. doi: 10.1901/jaba.1978.11-203.

Correspondence to Michelle Marchant, Department of Counseling Psychology and Special Education, 340-B MCKB, Brigham Young University, Provo, UT 84602; email: michelle_marchant@byu.edu.

Nancy Y. Miramontes, Michelle Marchant, Melissa Allen Heath, and Lane Fischer, Brigham Young University
Table 1
Participant Information

School      Years with   SET score   Related service   Teachers   Administrators   Other
            ABC-I                    providers

School 1        1           -            0                 5            1             3
School 2        7         95.8%          1                 3            1             2
School 3        3         95.1%          0                 2            1             0
School 4        3           -            1                 3            1             0
School 5        1         51.9%          0                 4            0             0
School 6        1         49.1%          0                 4            0             0
School 7        2           -            1                 3            0             2
School 8        3           -            1                 3            1             3
School 9        1           -            1                 5            0             2
School 10       2           -            2                 4            0             1
School 11       3           -            1                 5            2             3
School 12       2         59.1%          1                 8            2             5
School 13       2           -            1                 3            2             0
School 14       3           -            0                 4            0             5
School 15       3           -            0                 4            0             1
School 16       3         100%           0                 2            0             4
School 17       3           -            1                 5            1             1
School 18       2           -            1                 6            1             1
School 19       2           -            1                 6            1             2
School 20       1         92.6%          0                 4            1             2
School 21       1         97.4%          1                 2            0             3
School 22       3         83.3%          1                 3            0             1
School 23       3         100%           1                 3            1             1
School 24       1         85.4%          0                 6            1             3
School 25       2         94.7%          1                 5            1             2
School 26       3         97.6%          0                 1            1             1
School 27       3         98.3%          0                 5            1             0
School 28       2         94.6%          0                 0            1             1
School 29       3         100%           1                 5            0             3
School 30       3         100%           0                 8            0             2
School 31       2         97.6%          0                 4            1             1
School 32       3         95.8%          0                 6            1             2
School 33       3         96.4%          0                 5            1             3
School 34       3         88.3%          1                 5            0             2
School 35       3         94.0%          0                 7            0             2

Note. The last four columns give the total number of participating consumers per school. A total of 15 schools did not include their surveys in school envelopes, so their school information is not reported above.


Figure 1
Questionnaire: ABC-I End of Year Survey

Demographic questions:

1. How long has your school been a part of the ABC-I project?__

2. Please circle your current school position: Administrator, Gen Ed Teacher, SpEd Teacher, Counselor, Related Service Provider, District Coach, Building Coordinator, Paraeducator, Parent/Community Representative, Other:__

Respondents rated each of the following statements on a 5-point scale: Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree.

1. In the past year, I used ABC-I strategies and interventions.
2. My knowledge (i.e., information learned from this program) in the application of systematic problem solving for academic and social behavior has increased.
3. My skills (i.e., personal tools gathered from program, abilities) in the application of systematic problem solving for academic and social behavior have increased.
4. The ABC-I Project made a positive impact within my school.
5. ABC-I requirements improved school outcomes.
6. ABC-I's data collection procedures were easy to implement.
7. ABC-I's progress monitoring procedures were practical (i.e., easy, feasible, useable).
8. The amount of paperwork involved in implementing ABC-I was reasonable (i.e., not asking too much, manageable).
9. Our school's administrative leadership for ABC-I was supportive (i.e., provided help, facilitated implementation).
10. Our school has staff consensus or "buy in" for ABC-I.
11. ABC-I Project was worth the time and effort invested.
12. I would recommend the ABC-I model to other educators.

The following statements address your school's tiered approach to intervention & prevention:

13. I am satisfied with our school's universal/core goals.
14. I am satisfied with our school's universal/core procedures.
15. I am satisfied with our school's universal/core outcomes.
16. I am satisfied with our school's supplemental and intensive goals.
17. I am satisfied with our school's supplemental and intensive procedures.
18. I am satisfied with our school's supplemental and intensive outcomes.


Table 2
Correlation Between School-Wide Evaluation Tool (SET) Score and Scores from End of Year Survey Questions

Question from ABC-I end of year survey                                        Spearman correlation

1. In the past year, I used ABC-I strategies and interventions.                      .118 *
2. My knowledge (i.e., information learned from this program) in the
   application of systematic problem solving for academic and social
   behavior has increased.                                                           .073
3. My skills (i.e., personal tools gathered from program, abilities) in
   the application of systematic problem solving for academic and social
   behavior have increased.                                                          .052
4. The ABC-I Project made a positive impact within my school.                        .145 *
5. ABC-I requirements improved school outcomes.                                      .101
6. ABC-I's data collection procedures were easy to implement.                        .037
7. ABC-I's progress monitoring procedures were practical (i.e., easy,
   feasible, useable).                                                               .024
8. The amount of paperwork involved in implementing ABC-I was reasonable
   (i.e., not asking too much, manageable).                                          .088
9. Our school's administrative leadership for ABC-I was supportive (i.e.,
   provided help, facilitated implementation).                                       .017
10. Our school has staff consensus or "buy in" for ABC-I.                            .195 **
11. ABC-I Project was worth the time and effort invested.                            .059
12. I would recommend the ABC-I model to other educators.                            .090
13. I am satisfied with our school's universal/core goals.                           .100
14. I am satisfied with our school's universal/core procedures.                      .181 **
15. I am satisfied with our school's universal/core outcomes.                        .104
16. I am satisfied with our school's supplemental and intensive goals.               .146 *
17. I am satisfied with our school's supplemental and intensive procedures.          .143 *
18. I am satisfied with our school's supplemental and intensive outcomes.            .114

* Correlation is significant at the .05 level (2-tailed).

** Correlation is significant at the .01 level (2-tailed).