Evidence-based practice: a framework for making effective decisions.
Authors:
Spencer, Trina D.
Detrich, Ronnie
Slocum, Timothy A.
Publication:
Education & Treatment of Children, Vol. 35, No. 2 (May 2012). West Virginia University Press. ISSN 0748-8491.
Abstract

The research to practice gap in education has been a long-standing concern. The enactment of No Child Left Behind brought increased emphasis on the value of using scientifically based instructional practices to improve educational outcomes. It also brought education into the broader evidence-based practice movement that started in medicine and has spread across a number of human service disciplines. Although the term evidence-based practice has become ubiquitous in education, there is no common agreement about what it means. In this paper, we offer a definition of evidence-based practice, provide a rationale for it, and discuss some of the main tenets of evidence-based practice. Additionally, we describe a decision-making model that features the relationships between the critical sources of influence and the chief responsibilities of evidence-based practitioners.

"Knowing is not enough; we must apply.

Willing is not enough; we must do."

--Goethe

Education has long struggled with the gap between the methods that are best supported by systematic research and those that are most widely used (e.g., Burns & Ysseldyke, 2009; Carnine, 1997; Espin & Deno, 2000; Hoagwood, Burns, & Weisz, 2002; Kazdin, 2000; Kratochwill & Stoiber, 2000). Researchers, practitioners, theorists, and policy-makers alike have speculated about the cause of this division. Researchers contend that practitioners do not understand the implications of their work or possess the skills necessary to be good consumers of science. Practitioners complain that too often research is not applicable in the real world and research findings are largely inaccessible because they are published in journals designed for researchers, not practitioners (Carnine, 1997; Greenwood & Abbott, 2001). Presumably, if practitioners were using research evidence as a basis for selecting interventions there would be no research to practice gap.

This "research to practice gap" is not just an issue for education but has been a concern across disciplines as varied as medicine and psychology. In an attempt to address the gap in psychology in 1949, the American Psychological Association established the scientist-practitioner model as a basis for training psychologists (Drabick & Goldfried, 2000). Despite the emphasis on the training of psychologists as scientist-practitioners, there continues to be great concern about the lack of science in practice and the lack of practice in the science of psychology (Chwalisz, 2003; Hayes, Barlow, Nelson-Grey, 1999). Evidence-based practice has been proposed as a means of narrowing the research to practice gap (Chwalisz, 2003).

The passage of No Child Left Behind (NCLB, 2001) was a true watershed for efforts to increase the role of research evidence in education. For the first time, the use of scientific research for making educational decisions was prominently featured in national education legislation. NCLB (2001) included more than 100 references to the use of science or scientifically based evidence as a basis for educational practice. The research to practice gap was transformed from a concern of a relatively small group of educational researchers and reformers to a national policy issue resulting in greater political and social traction than ever before. The subsequent passage of Individuals with Disabilities Education Improvement Act (IDEIA) in 2004 extended this trend and further established the use of science in education. Educational practice based on scientific evidence was no longer just a good idea; it became the law.

The mandates in NCLB (2001) and IDEIA (2004) to use scientifically based evidence have resulted in widespread interest in evidence-based practice. Originating in medicine, evidence-based practice was quickly adopted by numerous other professions because it provided a means of addressing a serious problem that has challenged many of them: the research to practice gap. A gap between research and practice means that consumers are not receiving services that are based on the best research evidence that exists and therefore may suffer from poorer outcomes and unnecessary costs associated with ineffective treatments. Many professions have recognized that evidence-based practice has the potential to bring research results into daily practice and thereby improve outcomes for consumers.

Within the medical profession, evidence-based practice has been defined as a decision-making process informed by three distinct sources of influence: (1) the best available evidence, (2) clinical expertise, and (3) client values (Institute of Medicine, 2001; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000). Major organizations including the American Psychological Association (APA, 2002), the American Speech-Language-Hearing Association (ASHA, 2005), the Institute of Education Sciences (WWC, http://ies.ed.gov/ncee/wwc/), and the National Autism Center (NAC, 2009) followed by adopting very similar definitions of evidence-based practice. Table 1 presents definitions from several organizations that are relevant to education. In these definitions, the word practice refers to the whole of one's professional activities. This is the sense in which a physician practices medicine and a lawyer practices law. Importantly, these definitions do not limit evidence-based practice to a small portion of the decisions that a professional must make; instead, they suggest that this model of decision-making should be applied pervasively across the professional's entire practice. Drawing directly from these well-established definitions from related professions, we define evidence-based practice in education as a decision-making process that integrates (1) the best available evidence, (2) professional judgment, and (3) client values and context. Similar to the use of the term practice in other professions, we use it to refer to all professional activities of an educator. We suggest that evidence-based practice should be pervasive in all aspects of the professional practice of educators. We will elaborate on these issues throughout this paper.

The word practice also has a second meaning -- a specific method or technique used by a professional. This is the sense in which one might talk about "best practices" or the practice of providing immediate corrections of errors. Within education, many have defined evidence-based practice with this sense of the word practice. Thus, within education, the term evidence-based practice is most often used to refer to a program or intervention that has been found to have strong research support (e.g., Cook, Tankersley, & Landrum, 2009; Dunst, Trivette, & Cutspec, 2002; Odom et al., 2005). This definition focuses on the important issue of the research support for a particular program or intervention; however, it leaves the broader question of how (or whether) practitioners should go about balancing this information with other constraints on problem solving outside the scope of evidence-based practice (as the term is used in this definition).

There are two main reasons why we believe it is important to adopt the broader view that is common in other professions. First, this broader view explicitly recognizes that the choice of treatments in a particular school, classroom, or individual case is an outcome of a decision-making process in which empirical evidence is one of several important influences. This decision-making process is the key to whether programs and interventions that have strong research support are implemented in schools; therefore, articulating how research evidence should be integrated into decision-making is extremely important for addressing the goal of increased use of this evidence to improve outcomes. Second, the broader view suggests that the best available evidence should influence all educational decision-making, whereas the narrower view leads to lists of well-supported treatments. These lists can be extremely useful within a decision-making process but by themselves, they cannot offer solutions to many of the challenges that educators face daily (Slocum, Spencer, & Detrich, 2012 [this issue]). These points are elaborated upon throughout the rest of this paper and in other articles in this special issue.

Table 1 Definitions of evidence-based practice in various professions.

American Psychological Association (APA, 2005)

"Evidence-based practice in psychology is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences."

American Speech-Language-Hearing Association (ASHA, 2005)

"The term evidence-based practice refers to an approach in which current, high-quality research evidence is integrated with practitioner expertise and client preferences and values into the process of making clinical decisions."

National Autism Center (2009)

"Evidence-based practice requires the integration of research findings with other critical factors. These factors include:

Professional judgment and data-based decision-making

Values and preferences of families, including the student on the autism spectrum whenever feasible

Capacity to accurately implement the interventions."

Education (Whitehurst, 2002)

Evidence-based education is "The integration of professional wisdom with the best available empirical evidence in making decisions about how to deliver instruction."

Many of the authors who have used the narrower definition of evidence-based practice also fully recognize the importance of the problem-solving processes that may lead to the use of well-supported treatments. For example, Cook, Tankersley, and Harjusola-Webb (2008) provided extensive discussion of the necessary role of professional decision making by special educators in selecting and implementing treatments. Cook and Cook (2011) recognized these issues and suggested "that educators use the term 'evidence-based education' to describe the broader process of evidence-based decision making" (pg. 7). In addition, many of the other authors who have used the narrower sense of evidence-based practice have published extensively on systems to improve educational decision-making and support for high quality implementation. We do not differ from these authors in our overall vision of educational decision-making, but we do strongly believe that there is great value in the broader definition of evidence-based practice.

We believe that diverging from the meaning of evidence-based practice that is used in other professions invites a great deal of confusion. The educational endeavor includes a variety of disciplines (e.g., special education, psychology, speech-language pathology, occupational therapy, and physical therapy), and if these disciplines adopt dissimilar notions of evidence-based practice, the risk of confusion is extremely high. In addition, practitioners from different disciplines speak to each other in different "languages," which could result in delays to effective treatment for clients. Additionally, consumers are likely to be confused by the different uses of the terms and less able and willing to participate in decision-making.

We suggest that the term evidence-based practice be reserved for the decision-making process described above and the term empirically supported treatment be used for activities that have been shown to meet specific criteria for research support. These terms correspond with usage in the extensive literature on evidence-based practice in psychology (e.g., APA, 2005; Chambless & Ollendick, 2001; Stoiber & Kratochwill, 2000). In the context of education, treatment typically refers to an intervention, program, or curriculum that addresses any academic or behavioral outcome. We would add that treatment could also refer to smaller scale strategies, tactics, and techniques. Most fundamentally, a treatment is any specific educator behavior that has an identifiable effect on student behavior. A treatment can be preventive (teaching reading to all students), concurrent (correction routines within instruction), or remedial (extra support for struggling readers) and can be applied to any level of educational service such as whole school, classroom, or individual student.

Evidence-based Practice: New Wine or New Bottle?

The increased interest in evidence-based practice has not been received without reservations from responsible practitioners across disciplines (Norcross, Beutler, & Levant, 2006). The concern may be framed as whether evidence-based practice is a new way to describe what practitioners have always done (new bottle) or a new way of practicing (new wine). It is understandable that those who have been working in educational settings may be skeptical about the evidence-based practice movement. After all, educational services have been offered for years without any explicit requirements that practitioners rely on scientific research to inform decisions about services. However, it has been argued that a service is not a service if the treatments are unevaluated (Hoagwood et al., 2002). "Service" implies the treatment is beneficial. Unless we know the benefits and risks of a treatment there is no basis for claiming it is a service. A treatment may cause harm to students and that harm may not be detected unless the treatment has been evaluated. This suggests there is an ethical responsibility to base decisions on the best available evidence. Most organizations that license practitioners have ethical statements requiring decisions to be based on scientific evidence (APA, 2002; National Association of School Psychologists [NASP], 2000; Behavior Analyst Certification Board [BACB], 2004). However, the persistent research to practice gap makes it clear that it is not always common practice to base decisions about treatments on the existing research base. Increasing the emphasis on best available evidence as a basis for making treatment decisions is indeed a new wine for many practitioners.

It could be argued (Detrich, 2008) that the evidence-based practice movement is ultimately a consumer protection movement. Narrowing the research to practice gap protects consumers by using treatments that the best evidence suggests are most likely to be effective. An equally important but less obvious protection for consumers is that in a fully realized evidence-based practice approach client values and context are also explicitly recognized as critical considerations in decision-making. Recognition of client values raises important questions of the goals of educational treatments and reminds practitioners that effectiveness can only be judged relative to established goals. When decisions are made for districts, schools, or classrooms, clients can be considered to be the larger community. When decisions are made for individual students, their specific families become important partners in the process of setting goals and evaluating potential treatments. Finally, being sensitive to the various contexts in which a treatment might be implemented, the practitioner will have to make judgments about which treatment is most appropriate and how to adapt it to best fit the relevant clinical context.

The following section elaborates on each of the three elements of evidence-based practice (best available evidence, professional judgment, and client values and context). This serves as background information for a framework of evidence-based practice, which highlights the interactive nature of these three influences in educational practice. The expanded descriptions of best available evidence, professional judgment, and client values and context also set the stage for the subsequent articles in the special issue. In Figure 1, we mark the primary responsibilities of evidence-based practitioners with squares and the sources of influence with circles. Arrows indicate the direction of influence, as one element informs the other. We recognize this process is neither as simple nor as linear as we have depicted here. In fact, practitioners are continually making decisions and the interacting influences are much too complicated to capture accurately in a diagram. For present purposes, it is useful to describe some of the basic processes related to selecting, adapting, and implementing treatments before layering on additional complexities, many of which are addressed in the other articles in this special issue.

Best Available Evidence

The evidence-based practice model asserts that the best available evidence should be one of the three main influences on educational decision-making. The term best available evidence implies that there is a range of evidence and that educators should select the best of what is available. Although seemingly simple, this concept is powerful and has far-reaching implications for educational practice. It requires that educators determine the evidence that is best for the particular decision to be made. The best evidence is that which (a) is most relevant to the decision and (b) has the highest degree of certainty. Relevance depends on how closely the evidence matches the educator's particular problem in terms of the nature of the students involved, the desired outcomes, the details of the treatment, and the school context. The certainty of the evidence depends on the methodological quality and the amount of research available. When highly relevant and highly certain evidence is available, educators should use it. When ideal evidence is not available, educators should use the best of what is available. The best available evidence may be only indirectly relevant and may fall short of the highest standards for rigorous research. Nonetheless, the mandate to use the best available evidence suggests that imperfect evidence, used wisely, is better than no evidence at all. This concept is necessary if evidence-based practice is to be pervasive in educational decision-making. If educators attend only to the highest quality evidence, evidence-based practice is limited to a small minority of educational decisions. This topic is explored thoroughly in Slocum et al. (2012 [this issue]), but for the current purposes it is important to understand that in our evidence-based practice framework best available evidence is sufficiently broad to inform selecting and adapting treatments, designing treatments locally, and relying on progress monitoring data (practice-based evidence) to evaluate impact (see Figure 1).

Perhaps not surprisingly, there are other perspectives about what is meant by the term best available evidence. In some instances, best available evidence has come to mean research with very high methodological quality. Organizations such as the What Works Clearinghouse (WWC, 2011) rely on a specific type of systematic review to develop evidence about a treatment's impact. The review process is rigorous and excludes research studies that do not meet defined quality standards. In general, randomized clinical trials, very high quality quasi-experimental designs (WWC, 2011), and single-case designs (Kratochwill et al., 2010) constitute evidence, and all other methods of developing evidence are ignored. Further, even very high quality research is excluded if it does not meet specific requirements of relevance to the specific review topics. In many instances, after reviewing the research base for a specific topic, it is determined that none or very little of the published research meets the quality and relevance standards, and as a consequence the systematic review provides limited guidance on topics that are of great importance to practitioners. For example, in a recent report the WWC (2012) reviewed the evidence for Peer Assisted Learning Strategies (PALS) as an intervention to improve adolescent literacy. Of the 97 research articles that were reviewed, none met standards and only one was found to meet standards with reservations, so it was determined that the evidence supported only the conclusion that PALS had potentially positive effects. It must be emphasized that the articles reviewed had all been through a peer-review process and found to be adequate for publication in professional journals.

This sort of review is very useful for some educational questions. In addition, it is valuable to know when this level of evidence is not available so educators can be aware that they are using less certain evidence in decision-making. However, if this is the only source of evidence that is considered to be legitimate, most educational decisions will be made without the benefit of evidence. Educators cannot put their decisions on hold until numerous high quality research studies are conducted on their particular situation. Part of the power of the concept best available evidence is that it recognizes this reality of professional practice and provides flexibility so that decisions can be as effective as possible given the evidence that is available. When evidence is incomplete and less than definitive, the educator must exercise greater professional judgment in selecting treatments. Recognizing the uncertainties involved in these decisions, evidence-based educators place greater reliance on progress monitoring to evaluate the effectiveness of their decisions.

Professional Judgment

The second component of evidence-based practice is professional judgment. Professional judgment is ubiquitous. We do not say this as a suggestion of how professional decision-making processes ought to work, but as recognition of how they must work. Decisions cannot be made without the filter of professional judgment (see Figure 1). From the initial steps of recognizing that a situation is a problem and the formulation of a problem statement through later steps of evaluating progress and judging whether the problem has been solved, decision-making simply cannot occur without judgment. Through professional judgment, the practitioner refines the other sources of influence by retaining relevant and valuable information and discarding the rest. At every juncture, practitioners must employ professional judgment to decide how to weigh the best available evidence, client values, and contextual factors, and to navigate the decision-making process.

Professional judgment is a fundamental element of evidence-based practice, but often its value and complexity are not fully recognized. The APA Presidential Task Force on Evidence-Based Practice (2005) described eight competencies that contribute to professional judgment: (1) formulating the problem so that treatment is possible, (2) making clinical decisions, implementing treatments, and monitoring progress, (3) interpersonal expertise, (4) continuous development of professional skills, (5) evaluating research evidence, (6) understanding the influence of context on treatment, (7) utilizing available resources, and (8) having a cogent rationale for treatment. The process of developing professional judgment has been a source of discussion. Some have suggested that professional judgment (also called professional wisdom) is developed purely through experience (Kamhi, 1994; Whitehurst, 2002). However, the list of competencies that contribute to professional judgment suggests more complex interrelations; the evaluation of research, monitoring progress, and continuous development of professional skills are aspects of professional judgment that identify linkages with the research base, with systematically learning from progress monitoring, and with professional education. This notion of professional judgment as a rigorous and informed aspect of professionalism is far different from the idea that judgment allows room for an attitude that anything goes based on uninformed opinion and unconstrained bias. Stanovich and Stanovich (2003) argue:

  Teachers, like scientists, are ruthless pragmatists (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe that some explanations and methods are better than others. They think there is a real world out there--a world in flux, obviously--but still one that is trackable by triangulating observations and observers. They believe that there are valid, if fallible, ways of finding out which educational practices are best. Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom (p. 35).

This captures an approach to professional judgment that complements (rather than detracts from) the best available evidence. In our evidence-based practice framework, one function of best available evidence is to sharpen practitioner judgment. It was not possible to represent how this sharpening occurs in Figure 1 without complicating the visual display of the other interactions, but the relationship between best available evidence and professional judgment is indeed vital to fully realizing this model. Given the potential sources of bias that can affect judgment, both formal research and ongoing progress monitoring can serve as moderating influences and improve the quality of decision-making. Knowledge of recommendations derived from the best available evidence in a variety of circumstances helps practitioners learn from their clinical experience by focusing their attention on variables that are most important for change. Through an interaction between evidence and direct experience of their effectiveness, practitioners are shaped into wise decision-makers.

Client Values and Context

The final component of evidence-based practice is client values and context. Client values represent the deeply held ideals of individual clients, their families, and the broader community. The inclusion of client values in the evidence-based practice framework recognizes many of the important factors that have been described as the social validity of research (Wolf, 1978). Both professional ethics and practicality demand that the values of the students, families, and communities we serve must be a fundamental contributor to decision-making. The very purposes and goals of education are socially determined, as is the range of acceptable forms of treatment. Professional ethics require that the consideration of values extend from the broad community to smaller groups and families. This ethical stance is embodied in the requirement of family involvement in the process of developing Individual Educational Plans (IEPs) as mandated by the Individuals with Disabilities Education Improvement Act (IDEIA, 2004).

On the practical level, we can see that schools exist only with the support of the larger community, and often the strength of that support can be measured through funding and other resources. Further, the effectiveness of many interventions may be partially dependent on student and family involvement. This involvement may correlate with the degree to which the goals and the nature of the treatment correspond with deeply held values. For example, an intervention to improve classroom behavior may be more effective with strong "buy-in" from students and their families. Also, if families contribute to the selection of an academic intervention, they may be more willing to support homework and efforts to promote generalization.

Recognition of client values as an important contributor to decision-making can support the overall purpose of evidence-based practice in education -- improved outcomes for students. However, we must also recognize that the interaction between client values and the other two components is not simple. One of the key roles of professional judgment is to bring the best available evidence together with client values. Client values should not dominate this judgment (e.g., "it does not matter what the research says -- we like a particular program") any more than the best available evidence without consideration of client values should drive decisions. Giving priority to client values without consideration of the best available evidence can result in wasted time, money, and resources and fail to produce meaningful outcomes. Giving special weight to the best available evidence without consideration of client values can result in treatments with low acceptability, which may fail to produce effective outcomes. In addition, low treatment acceptability can result in failure to maintain implementation even if outcomes are positive.

In addition to client values, when making decisions about treatments, practitioners should also consider the context in which treatment is to occur. The consideration of the role of context is important for selecting treatments that are most likely to produce positive results in the particular school, classroom, or situation within a classroom. Each setting includes specific resources and constraints that influence the effectiveness of a particular treatment. Contextual factors to be considered include the correspondence between the values and theoretical assumptions of a treatment and those of the implementers; the match between the resources required and the resources available in the treatment setting (including materials, time, space, and personnel); and the degree to which implementers have the training and skills to implement the treatment with adequate levels of integrity. For example, there would be little value in selecting a treatment if the school cannot afford to purchase the materials and provide the professional development necessary for high quality implementation. When several treatment options have roughly the same level of support from the best available evidence, contextual variables may determine the best choice. Treatments that are a better contextual fit may be implemented with higher quality and ultimately produce better outcomes (Albin, Lucyshyn, Horner, & Flannery, 1996). The resources required, level of training and skills, and acceptability to the implementers, although important, should not be used as the only or even primary basis for selecting a treatment with minimal research support when a well-supported treatment exists.

Evidence-based Practice Framework

In this section, we highlight some of the ways that the best available evidence, professional judgment, and client values and context interact throughout a problem solving process (including selecting, adapting, and implementing treatments). The process begins with a practical question and ends with positive outcomes for students. Many important interactions between the best available evidence, professional judgment, and client values and context are captured in Figure 1; however, it is impossible to portray the true dynamic nature of these relationships in a two dimensional illustration. Therefore, we offer this figure as a starting point for thinking carefully about an important and complex topic.

Practical Question

The evidence-based practice decision-making process is initiated by a concrete educational problem: the practitioner identifies student performance that is not adequate in some way. This may be as broad as reading outcomes for an entire school district or as specific as the social behavior of an individual with autism. The identification of performance as inadequate is itself a professional judgment based on experience, training, and context. In order to begin the systematic evidence-based practice decision-making process, the practitioner must formulate a practical question. A well-constructed question defines the population or student(s) under consideration (e.g., third grade student(s) with autism), the outcome to be achieved (e.g., increased reading comprehension), and key features of the setting (e.g., special education teacher and six students in a self-contained classroom). The question usually takes one of two forms. In a problem-based question, the practitioner asks about the best interventions to solve a specific problem. For example, "What treatment should I use to teach reading comprehension to my third grade students with autism in a small group arrangement?" This type of question asks for a comparative evaluation of all relevant treatments. Alternatively, in a treatment-based question, the practitioner asks about the effectiveness of a specific treatment. For example, "What is the evidence supporting the use of direct and explicit instruction for teaching reading comprehension to third grade students with autism in a small group arrangement?" This type of question asks about the evidence on a single treatment of interest. In the course of day-to-day practice, practitioners are likely to ask both types of questions.

Regardless of which form the question takes, practitioners should formulate the question in such a way that evidence is useful and relevant. In other words, formulation of the question must be informed by client values concerning educational goals and an understanding of the opportunities and limitations afforded by the educational context (see Figure 1). Important educational goals may be informed by state core standards, but students and families may have suggestions about which goals are priorities to them. For example, a family of a student with autism may consider social outcomes and inclusion more important than learning to count. In this example, the school's infrastructure for supporting inclusion is also relevant to the formulation of the question. Before initiating a search for the best available evidence, a great deal of professional judgment is needed to incorporate client values and the context into the question. To minimize or ignore either the client values or the school context will result in a question that is not properly formed and could misdirect the search for the best available evidence.

Selecting Treatments

A well-constructed question that incorporates client values and context guides the search for the best available evidence. Practitioners must judge the available evidence for "bestness" -- scientific strength and relevance to their question. The process of identifying the best available evidence involves interplay between the practical question, the various sources of evidence, and considerations of client values and context (see Figure 1). As evidence is encountered it must be weighed for strength and relevance to the question, and this process may require further consideration of goals and acceptable forms of treatment as well as the context in which the treatment will be implemented.

The search for the best available evidence involves multiple sources of evidence. Among those are empirically supported treatment reviews, practice guides, best practice reviews, primary research articles, and relevant principles of behavior (see Slocum et al., 2012 [this issue]). These sources of evidence can all contribute to selecting a treatment that addresses the practical question, is supported by the best available evidence, and makes sense in the particular context. An empirically supported treatment review may identify a treatment that has strong supporting evidence in which the population and setting in the research are a close match with the practice context. But this is rare. It is more likely that the match of populations and/or settings is not perfect. This requires practitioners to be sensitive to specific characteristics of their students (e.g., age, performance level, specific academic, behavioral, and cognitive strengths and weaknesses) and their context (e.g., skills of staff, available training and supervision, and other resources). Differences between the specific participants and context in the research setting and those in the practice setting require the practitioner to make judgments about the importance of these differences. Various sources of evidence could inform these judgments. The practitioner could consult practice guides and best practice reviews for guidance on whether the treatment is likely to be effective with the particular population of students and context in question. This difficult professional judgment is also informed by his or her knowledge of relevant principles of behavior. Thus, the selection of treatments is not equivalent to choosing an intervention from a list of empirically supported treatments; rather, the selection of treatments involves critical interplay between the best available evidence, client and contextual considerations, and professional judgment.

Adapting Treatments

After selection of a treatment, the evidence-based practice process involves judgments about whether the treatment as it is described in research studies, manuals, curricula or other materials must be adapted to the specific local context (see Figure 1). Practitioners must make detailed decisions about the specific features to be changed and exactly how they are to be adjusted to produce a good "contextual fit" (Albin et al., 1996) and increase the probability of a positive outcome. The best available evidence should inform these judgments and decisions, as well. This step is extremely important for success of the entire process. Failure to adapt the treatment to local circumstances may render an otherwise powerful treatment ineffective. On the other hand, adaptations that eliminate or undermine critical elements of the treatment may also render it ineffective. The evidence base informing adaptation may come from research covering the range of the treatment's variations that retain its effectiveness. These decisions can also be informed by more general evidence about effective instructional and behavioral strategies. Although this more general evidence may not be specific to the treatment in question, it may provide a very effective reference for wise decision-making and may constitute the best evidence that is available to inform these decisions.

Implementing Treatments

Selecting an empirically supported treatment is not sufficient to assure positive outcomes. It is necessary to carefully consider many issues related to implementation in a specific context. Failing to attend to issues of implementation will likely result in the failure of the treatment effort. Effective implementation requires ongoing professional judgment that can be informed by the best available evidence and include consideration of client values and context (see Figure 1). The implementation process requires careful consideration of the features of the treatment setting to assure a good contextual fit. Professional development to ensure that those who deliver the treatment have all the necessary skills is fundamentally important to effective implementation. There is likely to be tension between the requirements of training and the realities of providing training in a service delivery context. The best available evidence for effective training and professional judgment should guide decisions about how professional development is conducted in a specific practice context.

Joyce and Showers (2002) present compelling evidence that supervision and coaching are necessary to assure that the treatment is actually delivered in an effective manner. Again, there will be tension between the demands of effective supervision and coaching and the many other demands on practitioners' time. Practitioners must make decisions about how to arrange supervision and coaching so that it is effective and efficient. Effectiveness without efficiency will likely result in poor sustainability of the implementation effort. Efficiency without effectiveness will likely result in limited positive outcomes. Effective implementation has been the focus of research in recent years (Adelman & Taylor, 2003; Elliott & Mihalic, 2004; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). This research clearly suggests that implementation is a series of judgments about making an empirically supported treatment effectively fit into a specific treatment context. Evidence-based practitioners would be well served to take advantage of the best available evidence to guide and inform their judgments about implementation in their particular context.

The evidence-based practitioner relies on progress monitoring to evaluate the effects of a treatment. Progress monitoring to improve outcomes is itself supported by substantial research (Fuchs & Fuchs, 1986; Yeh, 2007). When a treatment is implemented, progress monitoring provides the best available evidence on the effects of the implementation. The data from progress monitoring provide an occasion for additional problem solving. These data must be evaluated and judgments made about whether progress is adequate and, if not, what should be done about it (Barnett, Daly, Jones, & Lentz, 2004; Witt, VanDerHeyden, & Gilbertson, 2004). The practitioner may continue to monitor outcomes, institute measures of fidelity of implementation, adjust supervision and coaching, modify treatment procedures, change treatments, or make other decisions. Evidence-based practice suggests that the best available evidence, the practitioner's experience, and relevant aspects of context should inform those choices. For example, there is a growing research base on monitoring fidelity of implementation, interpreting these results, and intervening to improve fidelity (Bartels & Mortenson, 2005; Codding, Livanis, Pace, & Vaca, 2008; Gresham, 2009; Mortenson & Witt, 1998). One of the common features for assuring high levels of integrity over time is the use of performance feedback (Mortenson & Witt, 1998; Noell, Duhon, Gatti, & Connell, 2002). Extending beyond treatment integrity, there is a large research base about the effects of performance feedback on a wide range of behaviors in a wide variety of settings (Alvero, Bucklin, & Austin, 2001; Balcazar, Hopkins, & Suarez, 1986). Much of the established research is from settings other than schools, so the practitioner will have to make judgments about the relevance of this literature to the practical problem he or she is trying to solve and the appropriateness of generalizing from this research base to the current context. Even though the practitioner is relying on the best available evidence, judgments are necessary for making decisions.

Positive Outcomes

Positive outcomes and positive social evaluations are the final arbiters of the adequacy of decisions and the basis for claiming that the initial practical problem has been solved. The decision-making process is iterative and is not complete until positive outcomes are achieved. However, the achievement of positive outcomes is also a professional judgment that can be based on the two other pillars of evidence-based practice. The best available evidence and client values have important roles in informing reasonable expectations for outcomes (see Figure 1). Research evidence may show whether the outcomes obtained are comparable to those reported for similar groups of students. Client values are also important for setting standards of performance. Although it may seem that "better" outcomes are always a clear goal, it is not always so simple. First, different communities and families may place different relative weight on various outcomes (see Strain, Barton, & Dunlap, 2012 [this issue]). Second, any decision (or lack of decision) entails opportunity costs -- choosing to devote time and resources to any course of action precludes the use of those resources to pursue other courses of action. For example, a community may place high value on both reading and math skills. Implementing a particular reading program and providing two hours per day for literacy instruction may raise reading performance. The question for the practitioner, then, is whether to devote additional time and resources to further improve reading outcomes or to devote those resources to improving math performance. The evidence-based practitioner might seek the best evidence on whether the current level of reading performance predicts success in later grades and also consider the value the community places on both reading and math. A reading program, of course, is an on-going treatment. Other treatments, such as special interventions to improve social behavior or address particular reading challenges of specific students, may be temporary. When such treatments are successful, practitioners must decide whether the treatment will continue and, if so, in what form. It may continue indefinitely in its present form or features of the treatment might be modified so that treatment is less obvious and less demanding on the support system. These types of decisions are very common in tiered decision-making approaches such as Response to Intervention (RtI; Walker & Shinn, 2010) and school-wide positive behavior supports (Sugai & Horner, 2005).

Conclusion

The basic assumption of evidence-based practice is that basing important professional decisions on the three pillars of the best available evidence, professional judgment, and client values and context is more likely to result in positive outcomes than a process that does not take advantage of these three influences in a comprehensive manner. Ultimately, this assumption has to be tested empirically. Currently, the best available evidence regarding the effectiveness of evidence-based practice is a chain of logic. Embedded in this logic is a delicate balance between research evidence and client values and context, with a great deal of confidence in practitioners' judgment to create this balance. It has to be acknowledged that being an evidence-based practitioner is extremely challenging and requires a great deal from the professional because there is no research-based guidance about how practitioners should achieve the balance. Despite a lack of evidence indicating the evidence-based practice process is effective, it is likely the best model for practitioners who seek to have a positive impact. In order to honor their professional responsibility, practitioners should seek out the best available evidence for selecting, adapting, and implementing treatments. To be maximally effective, they must do this by relying on their professional judgment aided by ongoing progress monitoring data, and by making the client an active part of the decision-making process.

We have sketched a conceptual roadmap that may serve as an initial basis for further reflection and inspection of evidence-based practice. Given the limited empirical examinations that exist, relying on such a roadmap is also a logical thing to do. However, logic is not an acceptable substitute for evidence. The impact of evidence-based practice on the research to practice gap (and on educational outcomes) is an area of investigation ripe with opportunity, one best approached through collaborative relationships between invested researchers and practitioners.

Introduction to Special Issue

This special issue of Education and Treatment of Children explores some of the many important facets of evidence-based practice. Slocum, Spencer, and Detrich examine the concept best available evidence and suggest that evidence-based practice can be most widely applied and most effective in education if this concept is understood to include multiple sources of evidence and multiple kinds of treatments. Strain, Barton, and Dunlap illustrate the importance of considering client values and preferences in the selection of intervention targets and the design of service delivery systems. From a social validity perspective, they present lessons learned about consumer input and satisfaction. The remaining articles explore the challenges and uncertainties that are inherent in the process of identifying empirically supported treatments. Slocum, Detrich, and Spencer suggest that the process of systematically reviewing and evaluating research support for treatments is a measurement process and that the concepts of measurement validity are relevant. In this article, they discuss how various aspects of measurement validity can be applied to the process of reviewing research to identify empirically supported treatments. One particularly challenging component of these reviews is quality appraisal of the studies: the process of rating the methodological quality of each research study. Wendt and Miller compare seven scales for quality appraisal of single-subject research. They describe each scale, compare each to the quality indicators proposed by Horner et al. (2005), and apply each to a set of four research studies. In a complementary article, Horner, Swaminathan, Sugai, and Smolkowski examine the fundamental logic of single-subject research and suggest how this research paradigm can come to be more influential in identifying empirically supported treatments. They clarify the specific features of data that support strong conclusions from single-subject research results. This is important for disciplined visual analysis of results, and it also provides a basis for evaluating strategies for statistical analysis of single-subject results. Susan Wilczynski draws on her experience as the director of the National Autism Center's National Standards Project to identify numerous risks that are inherent in the process of systematically reviewing research and identifying treatments. This article is an important reminder that reviewing research and identifying well-supported interventions is an extremely complex process fraught with challenges. Gardner, Spencer, Boelter, Dubard, and Jennett use the single-subject quality indicators suggested by Horner et al. (2005) to evaluate the evidence base on brief functional analysis methodology as a means of assessing the behavioral difficulties of typically developing children. It is an example of many of the key issues and challenges related to identifying empirically supported treatments (and assessments). Finally, O'Keeffe, Slocum, Burlingame, Snyder, and Bundock examine the question of whether systematic reviews that identify empirically supported treatments tend to derive recommendations that are similar to those from traditional narrative reviews and meta-analyses. They use the research on repeated readings -- an intervention to improve reading fluency -- to test the convergence of these different types of reviews. There are many intriguing questions for future evidence-based practice research in these articles.
We look forward to the continued advancement of evidence-based practice and refinement of many of the concepts discussed in these papers.

References

Adelman, H. S., & Taylor, L. (2003). Rethinking school psychology: Commentary on public health framework series. Journal of School Psychology, 41, 83-90.

Albin, R. W., Lucyshyn, J. M., Horner, R. H., & Flannery, K. B. (1996). Contextual fit for behavioral support plans. In L. Koegel, R. Koegel, & G. Dunlap (Eds.), Positive behavioral support: Including people with difficult behaviors in the community (pp. 81-97). Baltimore, MD: Brookes.

Alvero, A. M., Bucklin, B. R., & Austin, J. (2001). An objective review of the effectiveness and essential characteristics of performance feedback in organizational settings (1985-1998). Journal of Organizational Behavior Management, 21, 3-30.

American Psychological Association (August, 2005). American Psychological Association Policy Statement on evidence-based practice in psychology. Published as appendix of APA Presidential Task Force on Evidence-Based Practice (2006). Evidence-based practice in psychology. American Psychologist, 61, 271-285.

American Psychological Association (2002). Ethical principles of psychologists and code of conduct. Retrieved from http://www.apa.org/ethics/

American Speech-Language-Hearing Association. (2005). Evidence-based practice in communication disorders [Position Statement]. Retrieved from www.asha.org/policy.

Balcazar, F. R., Hopkins, B. L., & Suarez, Y. (1986). A critical, objective review of performance feedback. Journal of Organizational Behavior Management, 7, 65-89.

Barnett, D., Daly, E., Jones, K., & Lentz, F. (2004). Response to intervention: Empirically based special service decisions from single-case designs of increasing and decreasing intensity. Journal of Special Education, 38, 66-79.

Bartels, S. M., & Mortenson, B. P. (2005). Enhancing adherence to a problem solving model for middle-school pre-referral teams: A performance feedback and checklist approach. Journal of Applied School Psychology, 22, 109-123.

Behavior Analyst Certification Board (2004). Behavior Analyst Certification Board guidelines for responsible conduct for behavior analysts. Retrieved from http://www.bacb.com/consum_frame.html

Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evidence-based instructional practices in special education. Journal of Special Education, 43, 3-11.

Carnine, D. (1997). Bridging the research-to-practice gap. Exceptional Children, 63, 513-521.

Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Chwalisz, K. (2003). Evidence-based practice: A Framework for twenty-first-century scientist-practitioner training. The Counseling Psychologist, 31, 497-528.

Codding, R., Livanis, A., Pace, G., & Vaca, L. (2008). Using performance feedback to improve treatment integrity of classwide behavior plans: An investigation of observer reactivity. Journal of Applied Behavior Analysis, 40, 417-422.

Cook, B. G., & Cook, S. C. (2011). Thinking and communicating clearly about evidence-based practices in special education. Retrieved from http://www.cecdr.org/subpage.cfm?id=67726117-0544-E2C6-4E287FE0E2A6A05E.

Cook, B. G., Tankersley, M., & Harjusola-Webb, S. (2008). Evidence-based special education and professional wisdom: Putting it all together. Intervention in School and Clinic, 44, 105-111.

Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining evidence-based practices in special education. Exceptional Children, 75, 365-383.

Detrich, R. (2008). Evidence-based, empirically-supported, or best practice: A guide for the scientist practitioner. In J. K. Luiselli, D. C. Russo, W. P. Christian, & S. M. Wilczynski (Eds.), Effective practices for children with autism: Educational and behavioral support interventions that work (pp. 3-25). New York, NY: Oxford.

Drabick, D. A. G., & Goldfried, M. R. (2000). Training the scientist-practitioner for the 21st century: Putting the bloom back on the rose. Journal of Clinical Psychology, 56, 327-340.

Dunst, C. J., Trivette, C. M., & Cutspec, P. A. (2002). Toward an operational definition of evidence-based practices. Centerscope, 1, 1-10.

Elliott, D. S., & Mihalic, S. (2004). Issues in dissemination and replicating effective prevention programs. Prevention Science, 5, 47-53.

Espin, C. A., & Deno, S. L. (2000). Introduction to the special issue of learning disabilities research & practice: Research to practice: Views from researchers and practitioners. Learning Disabilities Research & Practice, 15(2), 67-68.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute (FMHI Publication #231).

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199-208.

Greenwood, C. R., & Abbott, M. (2001). The research to practice gap in special education. Teacher Education and Special Education, 24, 276-289.

Gresham, F. M. (2009). Evolution of the treatment integrity concept: Current status and future directions. School Psychology Review, 38, 533-540.

Hayes, S. C., Barlow, D. H., & Nelson-Grey, R. O. (1999). The scientist-practitioner: Research and accountability in the age of managed care. Boston, MA: Allyn and Bacon.

Hoagwood, K., Burns, B. J., & Weisz, J. (2002). A profitable conjunction: From science to service in children's mental health. In B. J. Burns & K. Hoagwood (Eds.), Community-based interventions for youth with severe emotional disturbances (pp. 327-338). New York, NY: Oxford University Press.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

Horner, R., Swaminathan, H., Sugai, G., & Smolkowski, K. (in press). Expanding analysis of single case research. Washington, DC: Institute of Education Sciences, U.S. Department of Education.

Individuals with Disabilities Education Improvement Act (IDEIA) of 2004, 20 U.S.C. § 1412(a)(5), Pub. L. No. 108-446.

Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. (Committee on Quality of Health Care in America). Washington, DC: National Academies Press.

Joyce, B., & Showers, B. (2002). Student Achievement Through Staff Development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Kamhi, A. G. (1994). Toward a theory of clinical expertise in speech-language pathology. Language, Speech, and Hearing Services in Schools, 25, 115-118.

Kazdin, A. E. (2000). Psychotherapy for children and adolescents: directions for research and practice. New York, NY: Oxford University Press.

Kratochwill, T. R., & Stoiber, K. C. (2000). Empirically supported interventions and school psychology: Conceptual and practical issues: Part II. School Psychology Quarterly, 15, 233-253.

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Mortenson, B. P., & Witt, J. C. (1998). The use of weekly performance feedback to increase teacher implementation of a pre-referral academic intervention. School Psychology Review, 27, 613-627.

National Association of School Psychologists. (2000). Professional conduct manual. Retrieved from http://www.nasponline.org/standards/index.aspx

National Autism Center. (2009). National Standards Report: National Standards Project -- Addressing the need for evidence-based practice guidelines for autism spectrum disorders. Randolph, MA: National Autism Center, Inc.

No Child Left Behind, 20 U.S.C. § 6301 et seq. (2001).

Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002). Consultation, follow-up, and implementation of behavior management interventions in general education. School Psychology Review, 31, 217-234.

Norcross, J., Beutler, L., & Levant, R. (2006). Evidence-based practices in mental health: Debate and dialogue on the fundamental questions. Washington, DC: American Psychological Association.

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137-148.

Sackett, D. L., Rosenberg, W. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ: British Medical Journal, 312(7023), 71-72.

Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). New York: Churchill Livingstone.

Slocum, T. A., Spencer, T. D., & Detrich, R. (2012 [this issue]). Best available evidence: Three complementary approaches. Education and Treatment of Children, 35(2), 27-55.

Stanovich, P. J., & Stanovich, K. E. (2003). Using research and reason in education: How teachers can use scientifically based research to make curricular & instructional decisions. Washington, DC: US Department of Education.

Strain, P. S., Barton, E. E., & Dunlap, G. (2012 [this issue]). Lessons learned about the utility of social validity. Education and Treatment of Children, 35(2), 57-74.

Stoiber, K. C., & Kratochwill, T. R. (2000). Empirically supported interventions and school psychology: Rationale and methodological issues: Part 1. School Psychology Quarterly, 15, 75-105.

Sugai, G., & Horner, R. H. (2005). Schoolwide positive behavior supports: Achieving and sustaining effective learning environments for all students. In W. L. Heward et al. (Eds.), Focus on behavior analysis in education: Achievements, challenges, and opportunities (pp. 90-102). Upper Saddle River, NJ: Pearson Education, Inc.

Walker, H. M., & Shinn, M. R. (2010). Systematic, evidence-based approaches for promoting positive student outcomes within a multi-tier framework: Moving from efficacy to effectiveness. In M. R. Shinn & H. M. Walker (Eds.), Interventions for achievement and behavior problems in a three-tier model including RTI (pp. 1-26). Washington, DC: National Association of School Psychologists.

Whitehurst, G. J. (2002, October). Evidence-based education. Paper presented at the Student Achievement and School Accountability Conference. Retrieved from http://www2.ed.gov/nclb/methods/whatworks/eb/edlite-index.html

What Works Clearinghouse. (2011). What Works Clearinghouse Procedures and Standards Handbook (Version 2.1). Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf

What Works Clearinghouse. (2012). Adolescent literacy intervention report: Peer-assisted learning strategies. Retrieved from http://whatworks.ed.gov

Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Yeh, S. S. (2007). The cost-effectiveness of five policies for improving student achievement. American Journal of Evaluation, 34, 220-241.

Trina D. Spencer, Northern Arizona University; Ronnie Detrich, Wing Institute; Timothy A. Slocum, Utah State University

Correspondence to Trina D. Spencer, Institute for Human Development, Northern Arizona University, PO Box 5630, Flagstaff, AZ 86011-5630; e-mail: trina.spencer@nau.edu.