Dissertation report on Validation of the Instructional Consultation Teams Level of Implementation Scale-Revised


ABSTRACT

Title of Dissertation: VALIDATION OF THE INSTRUCTIONAL CONSULTATION TEAMS LEVEL OF IMPLEMENTATION SCALE-REVISED

Sonja Ann McKenna, Doctor of Philosophy, 2005

Dissertation directed by:

Professor Sylvia Rosenfield, Department of Counseling and Personnel Services

Consultation has been proposed as a viable indirect service delivery system for schools (Sheridan & Gutkin, 2000), enabling teachers and other professionals to assist students by receiving support through collaborative problem solving. Researchers have delineated components and characteristics thought to be important in consultation processes (Conoley, 1981). It is challenging to determine whether the process of consultation is being implemented in the way it was intended, that is, whether it is being implemented with integrity. There is growing recognition that many research studies have not examined the treatment integrity of consultation (Gutkin, 1993). Researchers are increasingly required to assess the integrity with which consultees implement interventions designed within consultation. However, there is a gap in the literature on the treatment integrity of the consultation process itself. Instructional Consultation Teams is a collaboration model that has been used in a variety of schools (Rosenfield & Gravois, 1996). Critical components were delineated and a Level of Implementation (LOI) scale was developed (Fudell, 1992). The collaborative process element of the scale assesses consultant behaviors and determines if the consultant has implemented the critical components. However, the data are collected via self-report interviews, which may be distorted based on the respondents' perceptions (Gutkin, 1993). This study analyzed the match between 20 consultant/consultee dyads' consultation behaviors and their self-reports of those behaviors in the consultation sessions. By listening to audiotaped consultation sessions created for on-line coaching, and scoring a verification measure of consultation behaviors, the consultant/consultee dyads' interactions were assessed to determine the presence of instructional consultation critical components. The scores from listening to the audiotapes were then compared to the LOI-R interviews conducted after cases were completed. Results indicated that self-report, as measured by the LOI-R, and implemented behaviors, as measured by coding audiotapes of the sessions, were significantly related. All 23 items indicated no significant discrepancy between the self-reported behaviors and the observed behaviors. The LOI-R and audiotape scoring both indicated high levels of implementation for the 7 dimensions investigated. The LOI-R was thus considered a valid measure of instructional consultation process implementation.

VALIDATION OF THE INSTRUCTIONAL CONSULTATION TEAMS LEVEL OF IMPLEMENTATION SCALE-REVISED

By

Sonja Ann McKenna

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2005

Advisory Committee:
Professor Sylvia A. Rosenfield, Chair
Professor Chan Dayton
Adjunct Faculty Todd Gravois
Assistant Professor Cheryl Holcomb-McCoy
Associate Professor William O. Strein

©Copyright by Sonja Ann McKenna 2005

ACKNOWLEDGEMENTS

I would like to acknowledge and sincerely thank everyone who supported me in reaching this point in my academic and professional life. I truly could not have accomplished all that I have without many people's support, assistance and encouragement. My committee members, Dr. Dayton, Dr. Gravois, Dr. Holcomb-McCoy and Dr. Strein, deserve recognition for responding to many, many questions. Their enthusiasm and encouragement, as well as their professional guidance, made it possible for me to complete this project. My dissertation chair and advisor, Dr. Sylvia Rosenfield, had the vision for this project and for school psychology practice that goes beyond "the way it has always been done." Completing a long-distance dissertation from New Jersey was challenging, but the task was made much easier with the professional assistance of the CAPS support staff and the Lab for IC Teams graduate assistants. I especially acknowledge Claire Ward, for all of her assistance throughout my years at the University of Maryland, and Lauren Costas and Megan Masterson, for providing me with archived data, lists of names, dates, schools and other pieces of information sent, emailed and faxed. A special note of gratitude goes to my University classmates who assisted with the data collection and other practical details of this study: to Dr. Efrat Benn, Dr. Mary Levensohn, and Ms. Jess Parr-Smith for completing LOI-R interviews by phone in addition to their own busy schedules, to Lauren Costas for doing double duty as the main Lab for IC Teams connection prior to doing interviews while interning in Nebraska, and to Dr. Kate Cramer for meeting with me repeatedly for interrater
reliability measures and for all the encouragement and practical suggestions for how to "get it done." On the topic of encouragement, I would like to extend my sincere appreciation to my unofficial dissertation support group of University of Maryland graduates and classmates. When I needed information on what to expect and how to go about figuring things out, Dr. Kate Cramer, Dr. Lindsay Vail, Dr. Mary Levensohn, and Dr. Meryl Sirmens were always just an email away. In addition, my classmates Karen Jones and Ricia Weiner were always available for pep talks and/or commiseration, depending on the day. I cannot thank you enough! I received so much encouragement from my co-workers, friends and family that I cannot begin to express my gratitude. Working in Vernon Township School District, New Jersey, provided me with a very unexpected source of support. From the district administrators, and the teachers and support staff at Rolling Hills Elementary School, to my Child Study Team teammates, everyone has been extremely encouraging and supportive. I especially thank the CST secretaries for their personal interest in my wellbeing and for helping me in all my professional endeavors, including this project. I also recognize several friends, specifically the Newsome-Pfeiffer family and the Ramberg family, for housing me on my many trips from New Jersey to Maryland on dissertation business. I am so very blessed to have such a supportive family. I need to thank my parents and siblings (thanks for the "eternal student" jokes) for supporting me through this process. My family, including my in-laws and my extended family, never doubted that I would keep working, keep going and keep motivated to finish.


Finally, to my husband Shawn, who has been through it all with me: you have my undying love and appreciation. I could not have finished this project, and I would not have the life I always wanted, without you. Thank you for supporting me through everything.


TABLE OF CONTENTS

List of Tables ………. viii
List of Figures ………. ix
Chapter 1: The Problem ………. 1
    Instructional Consultation ………. 2
    Importance of Treatment Integrity and Level of Implementation ………. 4
        Treatment Integrity ………. 4
        Treatment Integrity in Consultation ………. 4
        Level of Implementation ………. 5
        Level of Implementation Scale for Instructional Consultation ………. 6
    Challenges with Self-reports and Interviews ………. 8
    Statement of the Problem ………. 10
    Definition of Terms ………. 11
        Instructional Consultation ………. 11
        Level of Implementation ………. 12
        Critical Dimensions ………. 12
        Level of Implementation- Tape Version ………. 12
Chapter 2: Review of the Literature ………. 14
    Consultation ………. 14
        Definition ………. 14
        General Characteristics ………. 15
        Instructional Consultation ………. 17
            Entry and contracting ………. 18
            Problem Identification and Analysis ………. 19
            Intervention Planning and Development ………. 20
            Intervention Implementation, Evaluation, and Modification ………. 21
            Termination ………. 21
    Prereferral, Problem Solving, and Consultation Teams ………. 22
        Research ………. 23
            General trends ………. 23
            Team characteristics ………. 24
            Team processes ………. 25
            Outcomes research on problem solving teams ………. 27
    Level of Implementation for School Based Problem Solving Teams ………. 29
    Level of Implementation Research for Instructional Consultation Teams ………. 31
    Challenges with Self-Report Interview Measures ………. 38
        Memory Errors ………. 38
        Interview Strategies to Improve Recall ………. 40
        Need for Verification/Validation Techniques ………. 47
    Treatment Integrity ………. 48
        Definition ………. 48
        Importance in Research ………. 51
        Treatment Integrity in School Interventions ………. 52
        Treatment Integrity in Consultation Process ………. 55
    Level of Implementation ………. 61
        Relationship Between Treatment Integrity and Level of Implementation ………. 62
        Level of Implementation in Program Evaluation ………. 63
    Summary ………. 66
Chapter 3: Methodology ………. 68
    Participants ………. 68
        Participant Selection/Recruitment ………. 69
        Descriptive Data ………. 70
            Participants ………. 70
            School systems ………. 72
    Instruments ………. 72
        Level of Implementation Scale- Revised (LOI-R) ………. 72
            Case Manager and Teacher Interview forms ………. 75
            Scale validity and reliability measures ………. 75
            Interrater reliability ………. 76
        Level of Implementation- Tape Version ………. 76
            Protocol development ………. 76
            Interrater reliability ………. 77
    Procedure ………. 78
    Data Analysis ………. 82
Chapter 4: Results ………. 84
    Research Question 1 ………. 84
        Summary of the Available Data ………. 84
        Individual Item Responses ………. 85
        Item Implementation Results ………. 88
        Dimension Data ………. 90
    Research Question 2 ………. 91
    Research Question 3 ………. 92
        Item Comparison ………. 93
        Dimension Comparisons ………. 93
            Summary data ………. 93
            Line graph data ………. 96
            Individual case analysis ………. 105
    Summary ………. 108
Chapter 5: Discussion ………. 110
    High Implementation of the Instructional Consultation Process As Determined by Level of Implementation Measures ………. 111
    Comparison of LOI-R and Level of Implementation- Tape Version Results ………. 113
        Item Comparisons ………. 113
        Dimension Comparisons ………. 113
        Individual Case Comparisons ………. 117
    Limitations ………. 119
        Unavailable Audiotapes ………. 119
        Not Applicable Items ………. 120
        Lack of Behavioral Cases ………. 121
        Solicited Cases from Agreeable Teachers ………. 121
        Participant Characteristics ………. 122
    Implications ………. 122
        High Levels of Implementation ………. 122
            Diverse participant group factors ………. 123
            Importance of training ………. 124
            Influence of audiotaping ………. 124
            Influence of coaching ………. 125
        Validation of the LOI-R ………. 125
            Evidence of validity ………. 125
            Program evaluation ………. 126
            LOI-R interview process ………. 127
    Areas for Future Research ………. 128
        Generalizability ………. 128
            High implementation ………. 128
            Cases with behavioral referral concerns ………. 130
        Awareness of Instructional Consultation Principles ………. 130
    Summary ………. 131
Appendix A: Level of Implementation Scale for Instructional Consultation Teams: Administration and Scoring Guide ………. 133
Appendix B: Match between LOI-R Items and Tape Version Items ………. 161
Appendix C: Level of Implementation- Tape Version Scoring Protocol and Interrater Reliability Training Manual ………. 166
Appendix D: Completeness of Audiotaped Case Sessions ………. 172
References ………. 173


LIST OF TABLES

1. Participant Characteristics ………. 71
2. Summary Information on Available Taped Sessions ………. 81
3. Summary Information for Scoreable Items from Tape Sessions ………. 84
4. Percentage of Dimensions Implemented as Observed by Scoring Level of Implementation- Tape Version ………. 80
5. Percentage of Dimensions Implemented as Reported in LOI-R Interviews ………. 81
6. Frequencies and Exact Significance Levels of LOI-R Item and Tape Scored Item Pairs ………. 83
7. Summary of Percentages of Level of Implementation for the Dimensions ………. 79
8. Percentage of Dimension Implementation for Case 15 and Case 16 ………. 96


LIST OF FIGURES

1. Percentages of Dimension 1- Collaborative Communication for LOI-R data and Level of Implementation- Tape Version data ………. 97
2. Percentages of Dimension 2- Contracting for LOI-R data and Level of Implementation- Tape Version data ………. 98
3. Percentages of Dimension 3- Problem Identification for LOI-R data and Level of Implementation- Tape Version data ………. 100
4. Percentages of Dimension 4- Intervention Design for LOI-R data and Level of Implementation- Tape Version data ………. 101
5. Percentages of Dimension 5- Intervention Implementation for LOI-R data and Level of Implementation- Tape Version data ………. 102
6. Percentages of Dimension 6- Evaluation and Follow Up for LOI-R data and Level of Implementation- Tape Version data ………. 103
7. Percentages of Dimension 7- Curriculum based Assessment for LOI-R data and Level of Implementation- Tape Version data ………. 106


Chapter 1: The Problem

Within the past decade, schools have attempted reforms to confront the necessity of providing effective instruction to students who have difficulties learning (Knoff, 2002). Unfortunately, many students do not make adequate progress within traditional classrooms and instructional environments. Different programs have been proposed and implemented in an effort to help these students improve their academic outcomes (Fullan, 1983; Shapiro, 1987). However, no program can be judged as successful unless there is assurance that the program is being implemented as intended (Fullan, 1983; Kovaleski, 2002; Leithwood & Montgomery, 1980). If there is no such assurance, any improvements or lack of improvement cannot be attributed to the program (Gresham & Kendell, 1987; Gutkin, 1993). There are many types of programs that are intended to help students improve their academic experiences. One approach is to use multidisciplinary teams, which are required by the 1997 Individuals with Disabilities Education Act (IDEA). Many schools use the teams solely for the mandated referral and assessment process. However, after the assessment process has been completed, many referred students are not found eligible for special education services (Will, 1986). Teachers and schools are again faced with the challenge of educating these students in traditional classroom settings. These "difficult-to-teach" students are challenging to teachers, yet often not eligible to receive special education services (Fuchs & Fuchs, 1989; Fuchs, Fuchs, Bahr, Fernstrom & Stecker, 1990). Many scholars and researchers have recommended prereferral services and interventions as a way to assist teachers in instructing these
students in general education settings and to reduce unnecessary assessments (Chalfant & Pysh, 1989; Nelson, Smith, Taylor, Dodd & Reavis, 1991). Some schools have begun innovative programs that use school-based intervention teams to provide prereferral services to students prior to being assessed for potential disabilities (Bahr, Whitten, Dieker, Kocarek, & Manson, 1999; Chalfant & Pysh, 1989; Kovaleski, 2002). Many schools use problem-solving teams to assist teachers in instructing these students in the least restrictive environment of the general education classroom (Kovaleski, Gickling, Morrow, & Swank, 1999; Rosenfield & Gravois, 1996). As Vail (1996) defines it, "A school-based problem-solving team is a collaborative problem-solving entity in schools that is composed of educational professionals representing diverse fields, participating as equally contributing members" (p. 12). One purpose of the teams is to provide teachers with assistance to support student learning in the general education environment (Fudell & Dougherty, 1989). Many school teams use forms of consultative, problem solving models (Allen & Graden, 1995). There are several different types of problem solving models used in schools, including behavioral consultation (Bergan, 1977; Bergan & Kratochwill, 1990), mental health consultation (Caplan, 1970), and instructional consultation (Rosenfield, 1987). Instructional consultation, delivered through Instructional Consultation Teams (Rosenfield & Gravois, 1996), is the focus of this investigation.

Instructional Consultation

Instructional consultation is a collaborative approach with overlap of school consultation and behavioral consultation skills (Rosenfield, 2002a), and is based on
the general problem solving stages found in other forms of consultation (Rosenfield, 1987). Instructional consultation was originally designed for individual consultant use. However, it became apparent that both consultants and consultees needed a way to conceptualize the different service delivery model and inherent assumptions, and a team-based model was developed (Rosenfield & Gravois, 1996). The team structure is an attempt to “implement the concepts of instructional consultation at the school level” (Rosenfield & Gravois, 1996, p. 21). The Instructional Consultation Team members assume that: 1) all students are learners and are able to learn when the environment and instructional tasks meet the students’ needs; 2) the focus of problem solving and intervention planning is the match between the student, instruction and the instructional task, rather than the place where the student is instructed, and 3) the school should build a problem solving community with norms of collaboration and shared expertise. Instructional Consultation Teams are comprised of members representative of the school building stakeholders, including general and special education teachers, pupil personnel staff, specialists, and a building administrator (Rosenfield & Gravois, 1996). The team members receive referrals from and work with individual teachers on classroom and student concerns. After the team receives a teacher referral, a team member is assigned as a consultant case manager to guide the instructional consultation process with the consultee teacher. Teams meet weekly to receive requests for assistance, document case processes and outcomes, evaluate team effectiveness in the school, and assess team training needs. Schools that initially implement instructional consultation undergo an annual level of implementation
assessment (Rosenfield & Gravois, 1996). Instructional Consultation Team implementation is assessed through the Level of Implementation Scale-Revised (LOI-R; Fudell, Gravois & Rosenfield, 1996).

Importance of Treatment Integrity and Level of Implementation

Treatment Integrity

Treatment integrity is the extent to which an independent variable, intervention or program is implemented as planned (Gresham, Gansle & Noell, 1993; Macmann et al., 1996; Moncher & Prinz, 1991; Peterson, Homer & Wonderlich, 1982; Reimers, Wacker & Koeppl, 1987; Telzrow & Beebe, 2002; Yeaton & Sechrest, 1981). A measure of treatment integrity is necessary to determine that outcomes of a particular program were due to the features of the program. This concept is especially important when the intervention or program is complex, such as consultation (Gresham & Kendell, 1987; Gutkin, 1993) or other problem solving processes (Macmann et al., 1996; Telzrow & Beebe, 2002).

Treatment Integrity in Consultation

It is challenging yet critical to assess the treatment integrity of consultation (Gresham & Kendell, 1987; Gutkin, 1993). Consultation is a multifaceted process. One must have assurance that all the necessary components in each of the stages are being implemented prior to drawing conclusions about the effectiveness or usefulness of a consultation program (Rosenfield, 1992). Examining the treatment integrity of an intervention is particularly important in assessments of more complex interventions (Shapiro, 1987), such as the collaborative problem solving process of instructional
consultation. As Shapiro (1987) states, "Unless assured that treatment integrity is high, conclusions drawn about treatment effectiveness will be questionable" (p. 294). In addition, it is beneficial to obtain observational assessments of treatment integrity or level of implementation (Gutkin, 1993). Self-reports or perceptions of implementation are inadequate to assess treatment integrity (Telzrow & Beebe, 2002; Witt, 1997). As Gutkin (1993) states, "Given the inherent complexity and subtleties of consultation, one cannot assume that consultation services are being delivered as intended simply because consultants honestly try to do so and believe that they have succeeded" (p. 229). There is growing recognition of the necessity of assessing treatment integrity of the interventions developed through consultation (Telzrow & Beebe, 2002; Upah & Tilly, 2002). However, there is less research on the treatment integrity of consultation processes themselves (Gresham & Kendell, 1987; Gutkin, 1993) and on the implementation of the necessary components of consultation (Fuchs & Fuchs, 1989; Fuchs et al., 1990; Gutkin, 1993). The treatment integrity of the consultation process must be evaluated prior to assessing the outcomes of consultation cases, in order to ensure that those outcomes are related to implementation of the consultation process. If the consultation processes occur as part of a more complex program, assessing the treatment integrity of the consultation process is one aspect of assessing the larger level of implementation of the program.

Level of Implementation

Level of implementation is similar to the concept of treatment integrity in that it is the assessment of the actualization of a program within a particular system. It is
“the degree to which the various elements of an innovation have been operationalized as intended” (Fudell, 1992, p. 10). Assessments of level of implementation have a set criterion level of acceptable implementation (Fudell, 1992). This criterion level is specified prior to the assessment of the program elements. Level of implementation is typically used in program evaluation and may be useful for planning the growth and evaluating the outcomes of an innovation within a system (Kovaleski, 2002; Leithwood & Montgomery, 1980).

Level of Implementation Scale for Instructional Consultation

Any innovation needs to be evaluated to determine if it is being implemented as intended (Kovaleski, 2002). To assess the level of implementation of Instructional Consultation Teams in schools, the LOI-R assessment is conducted (Rosenfield & Gravois, 1996). The purpose of the assessment is to provide schools with information on the Instructional Consultation Teams system's implementation, so that the school teams can identify training and development needs in their teams and schools. The LOI-R consists of two main components: the collaborative consultation process and the service delivery system (Fudell, 1992; Vail, 1996). Seven specific dimensions are identified as the essential characteristics for each of the two main components. The percentage implementation of the dimensions is determined by analyzing team member interview responses and examining various documentation forms. The collaborative consultation process component consists of seven dimensions, each with a varying number of behavioral indicators (Vail, 1996). The dimensions include the problem-solving steps of the collaborative process:
Contracting, during which the elements of the collaborative relationship are discussed; Problem Identification, during which the concern is defined and measured; Intervention Planning and Design, during which the strategies and techniques to address the concern are specified in detail; Implementation, during which the intervention strategies are put into place and data are collected regarding student progress and treatment integrity of the intervention; and Evaluation and Follow up, during which data are used to determine progress and need for modifications. There are two additional dimensions within the collaborative consultation process component: Clear accurate communication, which is an indication of the agreement between the case manager and referring teacher regarding the process and outcomes of the case; and Curriculum Based Assessment, which is the use of classroom based materials to define the concern, measure progress and determine outcomes. To assess the implementation of the collaborative consultation process, case managers and teachers are interviewed about their cases after the cases have concluded (Rosenfield & Gravois, 1996). Implementation is considered high if both the case manager and teacher indicate following the consultation process, and their responses are in agreement. The level of implementation of each dimension is considered adequate when practice reaches the 80% criterion level (Fudell, 1992; Vail, 1996). The LOI-R scale represents an attempt to measure the integrity with which the consultation process is being implemented within the Instructional Consultation Team model (Fudell, 1992; Rosenfield & Gravois, 1996). To assess the level of implementation of instructional consultation in schools, the assessments need to be
completed with a reliable and valid instrument. The reliability of the LOI-R scale was initially assessed through examination of inter-rater reliability (Fudell, 1992). Initial content validity was assessed through expert judgement when the scale was created. However, there is a question of whether the interview method used in the LOI-R captures the critical components of the collaborative process.

Challenges with Self-reports and Interviews

Many challenges in using interview methods have been identified (Belli, Shay & Stafford, 2001; Jobe, 2003; Jobe, Tourangeau & Smith, 1993). There are a number
techniques are used as a criterion measure of actual behaviors against which the selfreport information can be compared. The self-report measure that most closely corresponds to the validation information is considered to be the most accurate. For a number of reasons, people do not always accurately report on their own behaviors (Jobe, 2001; Pearson et al., 1994), including behaviors within consultation interactions (Gutkin, 1993; Witt, 1997). During the LOI-R interviews, participants may unwittingly inaccurately report what occurred in consultation sessions. Although the LOI-R interview process was designed to compare the independently gathered information from the consultant and consultee, additional validation of the process was seen as needed. Without validation information in the form of observations of the actual consultation sessions, the responses from the LOI-R interviews cannot be assessed to determine if the interviews reflect what transpired during the problem solving process. An additional method of measuring the validity of the LOI-R was needed to verify the accuracy of the self-report interview and determine the validity of the level of implementation scale for assessing the instructional consultation process. To assess the validity of the interview process and, thereby the treatment integrity of the consultation process for instructional consultation, this investigation compared participants’ actual behaviors with their self-reported behaviors. The consultation dyads’ actual behaviors were listened to via audiotapes of the weekly consultation sessions. These audiotapes were created as part of an on-line coaching process to provide feedback to case managers newly implementing instructional consultation in their schools. A created measure, the Level of Implementation- Tape
Version, was used as a validation technique to assess implementation of the consultation process as observed from the audiotapes. The information provided by the Level of Implementation-Tape Version served as the criterion to which the LOI -R interview responses were compared. Statement of the Problem Problem solving collaboration and different forms of consultation are being used in many schools to assist teachers with instructing students who demonstrate varying academic and social-behavioral needs (Kratochwill, Elliott & Callan-Stoiber, 2002; Zins & Erchul, 2002). Research indicates that consultation with teachers can be helpful in supporting them to instruct students with challenging needs (Fuchs & Fuchs, 1989; Fuchs et al., 1990). In addition, consultation teams are viewed as an efficient and effective way of supporting teachers (Allen & Graden, 2002; Fudell, 1992; Kovaleski, 2002; Rosenfield, 1992, 2002a; Rosenfield & Gravois, 1996; Vail, 1996). However, before student improvements and other benefits can be attributed to consultation, researchers must assess the treatment integrity of the collaborative consultation process. Currently, there is a lack of research on the treatment integrity of consultation processes (Gresham & Kendell, 1987; Gutkin, 1993), and the level of implementation of consultation innovations in schools. Since the dimensions of the instructional consultation process have been delineated by Fudell (1992) in the development of the LOI and refined through use of the LOI-R (Fudell et al., 1996; Vail, 1996), this study examined how well the LOI-R measures actual consultation practices. The validity of the LOI-R interviews was investigated to assess the measure of treatment integrity for the instructional
consultation process. Consultation practices were verified through listening to audiotapes of consultation session and scoring the Level of Implementation- Tape Version. The observed behaviors were compared with the self-reported interview results of the LOI-R scale. Research questions were: 1. What are the levels of implementation for the collaborative process dimensions, as determined by the Level of Implementation- Tape Version? 2. What are the levels of implementation for the collaborative process dimensions, as determined by the LOI-R Case Manager Interviews and Teacher Interviews? 3. What is the relationship between the levels of implementation as assessed through the LOI-R interviews and through the Level of ImplementationTape Version? Definition of Terms Instructional Consultation Instructional consultation is a collaborative, problem solving process used to assist teachers with instructing students in various educational settings (Rosenfield, 1987). It follows the same basic tenets of most consultation models including indirect service delivery, collaborative relationships, shared decision making between the consultant and consultee, and the problem solving stage-based process (Rosenfield, 2002a). This version of consultation emphasizes examining the instructional environment, assessing task demands in the student’s current setting and using curriculum-based assessment and measurement.
Level of Implementation

Level of implementation is similar to the concept of treatment integrity in that it refers to the extent to which the elements of a program or innovation have been operationalized as intended (Fudell, 1992). However, "level of implementation is a measure of the extent to which the innovation is implemented, not simply whether or not it is in place; and it provides an appraisal of the various components that determine appropriate implementation" (Vail, 1996, p. 15). Level of implementation measures are often used in program evaluation (Leithwood & Montgomery, 1980; Tharp & Gallimore, 1979).

Critical Dimensions

Critical dimensions are the characteristics and activities essential to the specified intervention model (Rubin, Stuck & Revicki, 1982). Prior to assessing implementation, the characteristics of the intervention must be defined, including critical processes, structures, and support components (Fudell, 1992). For this study, the critical dimensions of Instructional Consultation Teams have been defined by the LOI-R scale. The collaborative consultation process dimensions were of particular interest in this investigation.

Level of Implementation- Tape Version

The Level of Implementation- Tape Version is a measure created to assess the performance indicators when listening to instructional consultation audiotaped sessions. The measure was created for this study to closely mirror the items on the LOI-R interview. The Level of Implementation- Tape Version data were also used to calculate the seven dimensions of the collaborative process component. The items and
dimension calculations were compared to the items and dimension calculations as measured by the LOI-R interviews.
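
To make the scoring and comparison logic concrete, the following is a minimal Python sketch of the calculations described above, using invented data: the percentage of implemented items within a dimension, checked against the 80% criterion (Fudell, 1992), and an exact McNemar-style comparison of paired LOI-R and tape-scored items. All item labels and values here are hypothetical; the actual scoring rules appear in the LOI-R Administration and Scoring Guide (Appendix A), and the exact tests reported in Chapter 4 may differ in detail.

```python
# A minimal, hypothetical sketch of the dimension-level scoring and the
# item-level comparison described above. Item labels and scores are invented.
from scipy.stats import binom

# Paired binary item scores for one case: 1 = critical component present.
# First element: LOI-R interview score; second: tape-version score.
item_scores = {
    "contracting_1": (1, 1),
    "contracting_2": (1, 0),
    "problem_id_1": (1, 1),
    "problem_id_2": (0, 1),
    "intervention_1": (1, 1),
}

loi_r = [pair[0] for pair in item_scores.values()]
tape = [pair[1] for pair in item_scores.values()]

CRITERION = 80.0  # Fudell's (1992) criterion for adequate implementation

def dimension_percentage(scores):
    """Percentage of scoreable items marked as implemented."""
    return 100.0 * sum(scores) / len(scores)

for label, scores in (("LOI-R", loi_r), ("Tape version", tape)):
    pct = dimension_percentage(scores)
    status = "meets" if pct >= CRITERION else "falls below"
    print(f"{label}: {pct:.0f}% implemented ({status} the {CRITERION:.0f}% criterion)")

# Exact McNemar-style test on discordant pairs: under the null hypothesis of
# no systematic discrepancy between self-report and observation, the count of
# pairs scored 1 on the LOI-R but 0 on tape (b) versus the reverse (c)
# follows a Binomial(b + c, 0.5) distribution.
b = sum(1 for l, t in zip(loi_r, tape) if l == 1 and t == 0)
c = sum(1 for l, t in zip(loi_r, tape) if l == 0 and t == 1)
n = b + c
if n > 0:
    p_value = min(1.0, 2 * binom.cdf(min(b, c), n, 0.5))
    print(f"discordant pairs: b={b}, c={c}; exact two-sided p = {p_value:.3f}")
```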


Chapter 2: Review of the Literature

This review presents information on several different aspects of this study. The aspects are: 1) consultation, including the definition, general characteristics, and a description of instructional consultation; 2) school based problem solving teams, including the definition and use of level of implementation in school based problem solving teams; 3) treatment integrity, including the definition, its importance in research, prior research in school interventions, and treatment integrity in consultation processes; and 4) level of implementation, including the definition, the relationship between treatment integrity and level of implementation, research studies and program evaluations.

Consultation

Definition

The term "consultation" has been used to describe a variety of activities. Some activities for which the term consultation is used include personnel discussions with administrators, training faculty, and research planning (Johnson, 1998). The various applications stem from the different models. There are commonalities among the consultation models used in schools and for school teams (Allen & Graden, 2002; Rosenfield, 2002a; Zins & Erchul, 2002; Zins, Kratochwill & Elliott, 1993), leading to the detailing of general characteristics and features of which consultation is comprised. The common features of consultation include recognition that it is an indirect service in which the consultant works collaboratively with a change agent who then interacts directly with the client (Allen & Graden, 2002; Zins & Erchul, 2002). The
consultant and the change agent (or consultee) use problem solving steps to develop a plan that the change agent will be primarily responsible for implementing. In school settings, the change agents are typically teachers or parents, who then work directly with the student (Gutkin & Curtis, 1990).

General Characteristics

Many theorists and authors (Gutkin & Curtis, 1990; Henning-Stout, 1993; Rosenfield, 1987; Zins et al., 1993) identified several other features of consultation interactions, in addition to the characteristic use of indirect service delivery. Most consultative interactions applied in schools use active problem solving (Allen & Graden, 2002; Henning-Stout, 1993; Kratochwill et al., 2002). The approach emphasizes prevention by building consultees' skills, as well as intervention in the immediate presenting problem (Zins & Erchul, 2002). Active problem solving indicates that "consultation should serve the immediate function of remediating an identified problem. The process of determining the best path to remediation should allow the consultee (teacher, counselor, caseworker, parent) to acquire skills for responding to similar problems in the future" (Henning-Stout, 1993, p. 16-17). Another general feature of many consultation models is the collaborative nature of the interactions between the consultant and consultee (Allen & Graden, 2002; Gutkin & Curtis, 1990; Rosenfield, 2002a; Zins & Erchul, 2002). The interactions between the consultant and the consultee require working together to bring about change for the student (Gutkin & Curtis, 1990; Henning-Stout, 1993). The consultee is actively involved in all aspects of the process, particularly defining the problem, and developing, implementing and assessing the intervention plan
(Gutkin & Curtis, 1990; Rosenfield, 1987). Both the consultant and consultee are involved in learning about a student's presenting challenges and the circumstances in which the challenges occur. Once a problem has been defined, the consultant and consultee work together to determine a strategy to address the concern (Allen & Graden, 2002; Kratochwill et al., 2002; Rosenfield, 1987, 2002a). Collaboration is interactional work. In instructional consultation as in other collaborative processes, "all specific recommendations about instruction are worked through together with the teacher [consultee]" (Rosenfield, 1987, p. 11). The collaborative nature of the consultation relationship is thought to enhance the consultee's commitment to the intervention (Telzrow & Beebe, 2002; Zins, Curtis, Graden, & Ponti, 1988). Confidentiality is another key characteristic of consultation (Gutkin & Curtis, 1990). To facilitate the collaborative relationship, the consultant and consultee must share an understanding about the confidentiality of the case. Honest communication can occur when the consultant and consultee have a shared understanding of the aspects of the case that are private and aspects that are public (Conoley & Conoley, 1982; Gutkin & Curtis, 1990; Zins et al., 1988). In addition to confidentiality, the consultant/consultee dyad must have a shared understanding of the voluntary relationship of consultation (Henning-Stout, 1993; Zins & Erchul, 2002). In a truly collaborative relationship, "consultees must be aware of and willing to act on their right to exit the relationship at any time" (Henning-Stout, 1993, p. 18). In addition, consultees may choose not to implement the interventions as planned (Kratochwill & Pittman, 2002). If participation is not on
a voluntary basis, the consultee may be less engaged in the process, implement the planned intervention with less integrity and, therefore, the case outcomes may be less positive (Gutkin & Curtis, 1990; Telzrow & Beebe, 2002). Gutkin and Curtis (1990) conceptualized the power structure between consultant and consultee to be egalitarian and nonhierarchical. Other authors agree that the consultation relationship is collegial and collaborative (Allen & Graden, 2002; Conoley & Conoley, 1982; Zins et al., 1988). Additional authors view the optimum consultation relationship to be more cooperative, where the consultant is responsible for directing the process of consultation and guiding the collaborative dyad through the consultation stages (Conoley & Gutkin, 1986; Erchul, 1987; Erchul & Chewning, 1990; Johnson, 1998; Martin, 1978; Witt, Erchul, McKee, Pardue & Wickstrom, 1991; Zins & Erchul, 2002). The distinction above does not preclude all consultation interactions from being collaborative, with both parties building a shared understanding of the work they undertake (Henning-Stout, 1993). Many consultation models have proposed the idea that the consultant and the consultee bring different, but equally relevant and useful, perspectives and knowledge sets to the consultation interaction (Kratochwill et al., 2002; Rosenfield, 2002a; Zins & Erchul, 2002). Consultants may be both directive and collaborative, depending on the skills and knowledge of the consultee (Kratochwill & Pittman, 2002).

Instructional Consultation

Instructional consultation follows the basic underlying structure of other problem solving consultation models (Rosenfield, 1987, 2002a). The main level of interaction is providing indirect service delivery by a case manager to a consultee
teacher. The indirect service delivery system is typically provided through the school or district implementation of the Instructional Consultation Teams innovation (Rosenfield & Gravois, 1996). The focus of instructional consultation is the explicit examination of the instructional environment and curricular tasks to which the student is exposed (Rosenfield, 2002a). In addition, consultant knowledge of evidence based practices in instruction provides the necessary content to use during the collaborative problem solving process (Rosenfield, 1987, 2002a). The following description focuses on the consultant-consultee behaviors and processes at each of the problem solving stages, within the system-wide context of Instructional Consultation Teams (Rosenfield & Gravois, 1996). Entry and contracting. The entry and contracting stage describes a process of introducing the concept of collaboration and Instructional Consultation Teams model to schools and individuals (Rosenfield, 2002a). As stated by Rosenfield and Gravois (1996), “Entry is usually accomplished at the school and system level, and involves the decision to use consultation as a process for problem solving in a building or district” (p. 26). Contracting is the introduction of the collaborative problem-solving model to a person who may access the process via the Instructional Consultation Team to obtain assistance on a referral concern. Within contracting, the two collaborators (case manager and teacher consultee) discuss and agree upon guidelines for how they will work through the problem solving process together. Some topics for discussion include reviewing the Instructional Consultation Team’s specific processes for the particular school, reviewing problem solving stages, clarifying ownership of the referral concern, discussing time involvement and explaining data collection.
Contracting ends with an explicit agreement for the two people to work together on the referral concern. Problem Identification and Analysis. Problem Identification and Analysis has been described as the most critical stage of the problem solving process (Gresham & Kendell, 1987; Kratochwill et al., 2002; Rosenfield & Gravois, 1996). Within this stage, the initial referral concern is used as a starting point for developing a shared understanding of the problem without labeling the student or behaviors. The problem must be defined in a manner that allows the consultative dyad to “resolve the situation by moving the student toward more positive growth and development” (Rosenfield & Gravois, 1996, p. 30). In instructional consultation as in many problem solving models, problem definition should be in terms of the discrepancy between the student’s current and desired performance, and should be in language that renders the problem measurable and observable (Allen & Graden, 2002; Rosenfield, 2002a; Zins & Erchul, 2002). Through the Problem Identification and Analysis stage, the case manager and consultee assess the referred student’s skill levels and learning in the area of concern (Rosenfield & Gravois, 1996). Within the instructional consultation process, these assessment activities result in information that can be used to make modifications in the student’s instructional environment. Therefore, the instructional consultation process stresses using assessment activities that are tied to the current curriculum, such as Curriculum Based Assessment (CBA). Classroom observations can also be an important part of identifying and analyzing the problem. However, the observations must be of specific behavior, and must also assess information on the classroom
environment and the instruction (Rosenfield, 1987). The final steps of Problem Identification and Analysis are obtaining a baseline rate of the student's current functioning in the defined problem area and setting goals for the student to achieve. Intervention Planning and Development. During the Intervention Planning and Development stage, the case manager and the consultee teacher develop strategies to help the student make the gains specified within the goal setting phase of Problem Identification and Analysis (Rosenfield, 1987, 2002a; Rosenfield & Gravois, 1996). If the problem solving process has been followed and the Problem Identification stage has resulted in an observable and measurable statement of the problem, the intervention can follow from the prior processes. In other cases, the consultative dyad will need to draw on their own experiences and those of other Instructional Consultation Team members. The goal for the Intervention Planning stage is to produce a description of strategies that specify intervention techniques, necessary materials, people responsible for implementing the strategies, timing and frequency of the intervention, and a plan for assessing the effectiveness of the intervention. The plan for assessing the effectiveness must include details about how and when data will be collected and reviewed to monitor the student's progress. The strategies agreed upon in the Intervention Planning stage must be "considered realistic and reasonable to those who must actually conduct the implementation" (Rosenfield & Gravois, 1996, p. 34-35). Some researchers and theorists have hypothesized that realistic and reasonable interventions have a better chance of being implemented with integrity (Rubin et al., 1982; Telzrow & Beebe, 2002). Other factors considered in the treatment integrity of intervention
implementation are ease of implementation, positive techniques rather than negative consequences, high perceived effectiveness, and match with the environmental or classroom contexts (Telzrow & Beebe, 2002). Intervention Implementation, Evaluation and Modification. In this stage, the responsible persons conduct the intervention in the agreed-upon manner (Rosenfield, 1987, 2002a; Rosenfield & Gravois, 1996). As Rosenfield (1987) states, "It is not until an intervention is implemented that its feasibility and effectiveness are really tested" (p. 36). The intervention is evaluated using the planned monitoring system. If the student is making adequate progress, the intervention continues. If the student is not progressing or the intervention is not practical for the consultee's use, the intervention must be modified. This stage has also been called "Intervention evaluation and redesign," in which "a data based decision about continuing, modifying, or terminating the intervention is made by the teacher and the case manager" (Vail, 1996, p. 14). Termination. Termination is the formal closure of the problem-solving process (Rosenfield, 1987; Rosenfield & Gravois, 1996). This stage is important whether or not the problem has been effectively resolved. If a consultative dyad has not been successful in promoting a resolution of the student's problem, other resources need to be explored. Formal termination encourages accountability for the student's progress. If a case was successful in assisting student progress, the case manager should use the consultation time to work on other cases. There should be a process by which the consultee can re-access the case manager if new concerns arise. In a successful case,
the consultative dyad should have a formal end-point during which the successes can be celebrated.

Prereferral, Problem Solving, and Consultation Teams

There has been an increase in professional collaboration via teams in schools within the past 25 years (Allen & Graden, 2002; Kovaleski, 2002; Zins & Erchul, 2002). Schools are increasing the use of internal problem solving teams to address diverse student needs (Bahr et al., 1999; Iverson, 2002; Kovaleski, 2002). Teams are currently used for a variety of purposes, such as grade level planning, multidisciplinary issues such as the special education referral and placement process (Friend & Cook, 1997; Fudell & Dougherty, 1989), and supporting teachers in planning and implementing pre-referral services and interventions (Buck, Polloway, Smith-Thomas & Cook, 2003; Gravois & Rosenfield, 2002a; Kovaleski, 2002; Rosenfield & Gravois, 1996). There are many advantages of problem solving teams for intervention planning and implementation (Kovaleski, 2002; Vail, 1996), although school professionals need to examine the processes and efficacy of specific team functioning prior to unconditionally accepting team models (Iverson, 2002). The team consultation model can benefit teachers, students in general education settings, and the school system through practical means of reducing costs of special education assessments, freeing more time for direct services to students, and producing creative strategies that teachers are more likely to use given their involvement in intervention creation (Vail, 1996). Team consultation services in schools can be offered in a variety of structures, such as teachers helping teachers (Chalfant & Pysh, 1989),
support personnel helping teachers (Kovaleski, 2002), and combinations of professionals assisting general education teachers (Buck et al., 2003; Iverson, 2002; Rosenfield & Gravois, 1996).

Research

General trends. Use of prereferral intervention teams has become a significant factor in schools, as demonstrated by state mandated use, increasing amounts of research, and inclusion in educational professional training programs (Buck et al., 2003). In a replication of Carter and Sugai's 1989 national survey of state prereferral practices, Buck et al. (2003) found that the overall percentage of states requiring or recommending prereferral teams remained at about 70%. As reported in 2003, of 50 states and the District of Columbia surveyed, the majority either required or recommended a prereferral process (43% and 29%, respectively). As compared to results reported in 1989, general educators continued to be the primary professional group responsible for implementing the prereferral process in states that mandated those procedures (Buck et al., 2003). In 1989, Carter and Sugai reported that the three most common prereferral strategies were instructional modifications, counseling and behavior management strategies. According to Buck et al. (2003), these strategies continued to be recommended by prereferral teams, but the number of states reporting team use of instructional modifications (96%) and behavioral management (92%) increased substantially. Although the number of states requiring or recommending prereferral processes did not substantially change from when reported in 1989 to when reported in 2003 (Buck et al., 2003), the literature indicates increasing importance of school
prereferral programs and intervention assistance teams (Kovaleski, 2002; Nelson et al., 1991; Safran & Safran, 1996). It is possible that more individual school districts are opting to use prereferral teams to meet the mandates of the 1997 Individuals with Disabilities Education Act (Gravois & Rosenfield, 2002). Another recurring theme when discussing school teams is the need for training (Iverson, 2002; Rosenfield, 2002b; Gravois, Knotek & Babinski, 2002). Training on topics such as problem solving skills, leadership and group management enhances collaboration on teams (Fudell, 1992; Iverson, 2002; Thousand & Villa, 1992). When teams are comprised of various professional roles, the amount of exposure to and training in team functioning can be divergent. Adequate and appropriate training in group collaboration processes is essential (Fullan, 1991; Kovaleski, 2002; Iverson, 2002; Gravois et al., 2002; Gravois & Rosenfield, 2002; Nelson et al., 1991; Rosenfield, 2002b). Team characteristics. There is a growing research base on the characteristics that comprise effective problem solving teams (Bahr et al., 1999; Iverson, 2002; Vail, 1996). School based problem solving teams are typically comprised of educational professionals representative of diverse roles in the school system (Iverson, 2002; Kovaleski, 2002). The multidisciplinary approach has been recommended to increase the number and diversify the types of solutions offered by the team (Pugach & Johnson, 1989). A survey of state education agency personnel indicated that general education teachers, administrators and counselors were the most cited professionals having responsibility for implementing the prereferral process and heading the prereferral teams (Buck et al., 2003).


The presence of school administrators as members of problem solving teams has been debated (Kovaleski, 2002). Current research indicates that administrators are a valuable component of team membership and enhance team functioning (Kovaleski, 2002; Rosenfield & Gravois, 1996). Of team members surveyed in 121 schools in three states, 35% identified an administrator as the person who led the team, and the majority of members identified administrators as the most effective communicators (Bahr et al., 1999). Administrators' presence on the team is considered valuable and is indicated on most teams (Fudell & Dougherty, 1989; Zins et al., 1988) to demonstrate tangible support of the problem solving process (Kovaleski, 2002; Safran & Safran, 1996), and to allocate resources (Kovaleski, 2002; Rosenfield & Gravois, 1996). Iverson (2002) distinguished the manners in which problem solving teams deliver services to the consultees. In the broad participation model, the entire team meets with the person requesting assistance. In the case manager model, such as that provided through Instructional Consultation Teams (Rosenfield & Gravois, 1996), one member of the team is designated to work with the person requesting assistance. The more team members interacting at one time with the person requesting assistance, the more important process functions become (Iverson, 2002). Team processes. There is some research on team process (Iverson, 2002). The process variables affecting problem solving team functioning are of specific interest for this study, as the variables for effective team functioning are the same variables necessary for successful individual consultation (Curtis & Stollar, 2002). In fact,
according to Curtis and Stollar (2002), "the principles of collaborative planning and problem solving that apply to individual consultation are directly relevant to systems-level consultation… Collaborative one-on-one consultation and systems-level consultation are directly parallel in almost every aspect" (p. 226). Two important process variables that are present in problem solving team functioning are collaboration (Friend & Cook, 1992; Gravois, 1995; Iverson, 2002) and use of the problem solving process (Curtis & Stollar, 2002; Kovaleski, 2002; Rosenfield & Gravois, 1996; Vail, 1996). Collaboration is widely regarded as necessary for effective team functioning (Friend & Cook, 1992; Gravois, 1995; Thousand & Villa, 1992). When considering input from all members, the assembly effect bonus (Iverson, 2002), or interpersonal dependence (Thousand & Villa, 1992), is the acknowledgement that the group can accomplish more by working together than each individual working separately. This is a goal of collaborative team functioning. An important feature of collaboration is determining how team participants will work together (Allen & Graden, 2002). Group process skills, such as facilitating group communication, listening, and group decision making, are necessary for successful group functioning (Iverson, 2002). To facilitate group problem solving, training in collaboration and communication is needed. Although not all prereferral teams are specifically designated as problem solving teams, researchers acknowledge that the problem solving teams focusing on prereferral intervention design and implementation have evolved since the late 1970s (Kovaleski, 2002; Iverson, 2002). The problem solving process is the systematic approach used to identify and define a problem, design strategies to remedy the

26

problem, and evaluate the strategies and outcomes once interventions are implemented (Allen & Graden, 2002). Team-based collaborative problem solving stages are similar to the stages in instructional consultation (Rosenfield, 1987) and behavioral consultation (Bergan & Kratochwill, 1990), and include problem identification, intervention development, and intervention plan implementation and evaluation (Kovaleski, 2002).

Outcomes research on problem solving teams. Most of the research on the outcomes for problem solving teams has been in the form of satisfaction surveys and participant judgements of worth or outcomes (Fuchs & Fuchs, 1989; Fuchs et al., 1990; Fudell, 1992; Vail, 1996). These satisfaction studies generally show that teachers indicate positive responses to working with problem solving teams on student concerns (Henning-Stout, 1993; Safran & Safran, 1996; Vail, 1996). After summarizing the results of three problem solving team lines of research, Safran and Safran (1996) concluded that “educators are positive about the process, the goals and the importance of team problem solving” (p. 368).

Additional evidence on student outcomes has been gathered via teachers’ judgements of a student’s success or improvement. In general, teachers report that the interventions produce the desired effects (Chalfant & Pysh, 1989; Fuchs & Fuchs, 1989; Pugach & Johnson, 1988). Of team members surveyed in three states, the most frequently used quality index reported was teacher judgements of intervention effectiveness (Bahr et al., 1999). However, one weakness of these approaches is that assessments of satisfaction or judgements of worth do not give information about student progress and outcomes (Safran & Safran, 1996).

Some studies of problem solving team outcomes have examined student behavior directly. In the Fuchs and Fuchs (1989) study wherein teachers reported perceptions of student improvement, behavioral observations determined that there were no significant changes in the frequency of student problem behaviors. However, severity of student behavior may have been the indicator on which the teachers were focusing. In another study of directly assessed student results, Kurtalt (1990) used reading achievement scores to determine that the students whose teachers received consultation improved relative to the students whose teachers did not receive consultation from a problem solving team.

Much of the outcomes research on prereferral intervention problem solving teams has investigated the rates of student referrals to the special education evaluation process and to special education programs (Nelson et al., 1991; Safran & Safran, 1996). Nelson et al. (1991) reviewed five studies of prereferral intervention programs and found that the interventions reduced the number of students referred for special education assessment and placement. In a review of the research on three types of prereferral intervention programs, Safran and Safran (1996) found that both types of teams and the one other program reduced the number of students referred for special education assessment. The review of Teacher Assistance Teams (Chalfant & Pysh, 1989) outcomes indicated that, of the 386 students served by the Teacher Assistance Teams, only 21% were referred to special education assessment (Safran & Safran, 1996). A summary of Mainstream Assistance Teams (Fuchs & Fuchs, 1989) research indicated that teachers who used the team process referred the students for special education services less frequently.

The overall review indicated that, when team-based programs were in use, a consistent reduction in special education referral rates was found (Safran & Safran, 1996). In a program evaluation, Instructional Consultation Teams research indicated significant reductions in the percentage of students referred to special education assessments (Gravois & Rosenfield, 2002). However, other research has determined that, after special education evaluation is completed, prereferral interventions did not appear to significantly impact the number of students who were found eligible to receive special education services (Flugum & Reschly, 1994).

Most of the problem solving team studies did not assess the extent to which the participants implemented the interventions with integrity (Nelson et al., 1991), or the extent to which the teams followed the collaborative problem solving model. Without such measures, the extent to which the results can be attributed to the problem solving team’s intervention process or attributed to other factors cannot be determined. To determine that the team intervention was responsible for the outcomes, the level of implementation of team functioning needs to be assessed.

Level of Implementation for School Based Problem Solving Teams

Macmann et al. (1996) described a system for assessing problem solving and decision-making processes. Because psychologists engage in decision making in professional practices, “the technical adequacy of the entire decision making process requires scrutiny” (Macmann et al., 1996, p. 137). They described the key tasks and the reliability and validity issues that arise at each of the four major stages of problem solving. Because team-based collaborative problem solving is a complex task (Kovaleski, 2002; Iverson, 2002), assessments of the reliability and validity of the
problem solving process need to be undertaken prior to evaluating outcomes. In Pennsylvania, the state-mandated prereferral intervention teams, called Instructional Support Teams (IST), were assessed to identify high and low implementation schools and compared to schools with no teams (Kovaleski, Gickling, Morrow & Swank, 1999). The IST model is a broad participation model in which the team works through a problem solving process. In this model, a support teacher performs many of the procedural activities and assists the referring teacher with intervention implementation after the team collaboratively determines the intervention plan. Dependent academic performance variables of time on task, work completion and comprehension were compared for “at risk” students and “average” students. Results indicated that, in the high implementation schools, at risk students made significantly greater academic skill gains than in the low implementation schools and no implementation schools. In addition, the students’ performance gains were maintained and began to approximate the average students’ behaviors over time.

The level of implementation data collection used to determine high and low implementation teams was part of the state evaluation process (Kovaleski et al., 1999). In the first phase, the instrument used contained 103 items to assess the presence, absence or degree of the elements in place. In the second phase, the validation instrument was composed of seven broad areas of implementation, which were rated on a four point scale (0 = feature not in place… 3 = feature in place at model level). Because level of implementation was measured, the differences in students’ academic skills in the high implementation schools can be attributed to the problem solving process, as facilitated by the IST model.

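To make the second-phase rating logic concrete, the following is a minimal sketch of how seven area ratings on the four point scale might be aggregated to classify a school. The area labels, the use of a mean rating, and the cutoff value are illustrative assumptions; Kovaleski et al. (1999) do not report these details at this level of specificity.

```python
# Minimal sketch of a second-phase IST implementation rating. The area
# labels, mean aggregation, and the high/low cutoff are assumptions for
# illustration, not the published procedure.

IST_AREAS = ["area_%d" % i for i in range(1, 8)]  # seven broad areas (labels assumed)

def classify_implementation(ratings, high_cutoff=2.0):
    """Classify a school from seven area ratings on the four point scale
    (0 = feature not in place ... 3 = feature in place at model level)."""
    for area in IST_AREAS:
        if not 0 <= ratings[area] <= 3:
            raise ValueError("rating for %s must be between 0 and 3" % area)
    mean_rating = sum(ratings[a] for a in IST_AREAS) / len(IST_AREAS)
    return "high implementation" if mean_rating >= high_cutoff else "low implementation"

# Example: a school rated at model level on every area.
ratings = {a: 3 for a in IST_AREAS}
print(classify_implementation(ratings))  # high implementation
```
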
Level of Implementation Research for Instructional Consultation Teams

Fudell (1992) developed the Level of Implementation (LOI) scale to examine changes in the level of implementation of Project Link, an early intervention team model that used the instructional consultation process. The original LOI scale consisted of three areas corresponding to the program’s critical components: the collaborative consultation process, the specific delivery system of the program, and the supports that facilitated the development and maintenance of the program. The scale was modified and condensed to the two dimensions of the collaborative consultation process and the service delivery system.

For the LOI scale (Fudell, 1992), and the subsequent LOI-R scale (Fudell et al., 1996), interviews and record reviews are used to gather information regarding the service delivery system implementation, which includes items such as how referrals are managed, who comprises the team, and the number of cases addressed by team members. To gather information to assess the collaborative consultation process, consultant case managers and consultee teachers participate in individual interviews regarding their instructional consultation cases. The behavioral indicators for each of seven critical components are assessed through interview items addressing elements of the instructional consultation steps.

For the collaborative consultation process assessment, the case manager and teacher interview responses are scored as 1 point for the presence of an element and appropriate implementation, or 0 points for the absence of an element or inappropriate or incomplete implementation (Fudell et al., 1996). In addition, on several interview items, collaboration between the case manager and teacher is
assessed through the correspondence of the two individuals’ responses. For these items, the responses of the case manager and teacher must match for the item to be scored as 1. The percentages of implementation are calculated for each of the seven critical dimensions of the collaborative consultation process; a minimal scoring sketch follows the list below.

The collaborative consultation process consists of seven critical dimensions, each with a varying number of behavioral indicators (Fudell et al., 1996; Vail, 1996). The critical components (and behavioral indicators) are as follows:

1. Clear, accurate communication;
2. Contracting (discuss four elements of collaborative relationship, agree to work together);
3. Problem Identification (state discrepancy of demonstrated and desired behaviors, complete activities for analyzing academic problems or behavioral problems);
4. Intervention recommendations (discuss interventions based on effective teaching practices, agree on intervention selected, specify responsibilities for implementation, plan for intervention monitoring);
5. Implementation (indicate agreement about whether intervention is implemented as planned, discuss whether monitoring occurs as specified, show evidence of frequent graphing of monitoring data);
6. Evaluation and Follow up (use data to determine progress, use data to base decisions of continuing, modifying or terminating intervention);
7. Curriculum Based Assessment (use assessments reflecting an evaluation of behavior in the natural environment, focusing on the individual child and based in the curriculum; and use assessment for monitoring ongoing student progress).

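The following sketch makes the scoring rule concrete. The item identifiers and the grouping of items into dimensions are hypothetical placeholders rather than the published instrument, and the scoring is simplified to coded 0/1 responses; only the 1-point/0-point rule, the case manager/teacher match requirement, and the per-dimension percentages follow the description above. The criterion bands reflect the 80% and 65% benchmarks Vail (1996) used, discussed below.

```python
# Minimal sketch of LOI-R collaborative process scoring. Item identifiers
# and dimension groupings are hypothetical; only two of the seven
# dimensions are shown.

DIMENSIONS = {
    "clear_accurate_communication": ["com_1", "com_2"],
    "contracting": ["con_1", "con_2"],
    # ... the remaining five dimensions would be listed here
}

MATCH_ITEMS = {"con_2"}  # items requiring matching case manager/teacher responses

def score_item(item, case_manager, teacher):
    """1 point if the element was present and appropriately implemented
    (coded from the case manager's response); match items also require
    the teacher's coded response to correspond."""
    present = case_manager.get(item, 0) == 1
    if item in MATCH_ITEMS:
        present = present and case_manager.get(item) == teacher.get(item)
    return int(present)

def dimension_percentages(case_manager, teacher):
    """Percentage of behavioral indicators implemented, per critical dimension."""
    return {
        dimension: 100.0 * sum(score_item(i, case_manager, teacher)
                               for i in items) / len(items)
        for dimension, items in DIMENSIONS.items()
    }

def criterion_band(percentage):
    """Benchmark bands reported by Vail (1996)."""
    if percentage >= 80:
        return "at or above criterion level of implementation"
    if percentage < 65:
        return "far below criterion level of implementation"
    return "below criterion level of implementation"

# Example: a mismatch on a non-match item still scores from the case manager.
cm = {"com_1": 1, "com_2": 1, "con_1": 1, "con_2": 1}
t = {"com_1": 1, "com_2": 0, "con_1": 1, "con_2": 1}
print(dimension_percentages(cm, t))
# {'clear_accurate_communication': 100.0, 'contracting': 100.0}
```
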
Using the original LOI scale, Fudell (1992) examined 13 schools’ levels of implementation during their first year using the consultation teams model. She found that the schools’ levels of implementation increased over the school year. However, there were significant differences in the LOI scores, indicating that site-specific factors influenced the amount of implementation at each school.

To assess the reliability of the LOI, two raters coded the same audiotaped interview sets (approximately 20% of the total) during the first data collection (Fudell, 1992). The total inter-rater reliability was .88 (range = .79 to 1.00 for 4 interview sets). A random inter-rater reliability check was performed on three audiotaped interview sets during the second data collection. The total inter-rater reliability was .92 (range = .85 to 1.00). Test-retest reliability was also assessed during the first and second data collection periods by conducting phone interviews with two teachers and one consultant or principal one week after the initial interviews. The results totaled .78 and .88 for the two data periods, with ranges of .69 to .85 and .85 to 1.00, respectively.

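Fudell (1992) does not spell out the formula behind these coefficients; item-level percent agreement is one common and plausible computation, sketched below as an assumption rather than as the published procedure.

```python
# Minimal sketch of item-level inter-rater agreement, one plausible way
# to obtain coefficients like the .88 and .92 totals reported above;
# treat this as an assumption, not Fudell's (1992) published computation.

def percent_agreement(rater_a, rater_b):
    """Proportion of items (scored 0/1) on which two raters agree."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Example: agreement on 22 of 25 items -> 0.88.
a = [1] * 22 + [1, 0, 1]
b = [1] * 22 + [0, 1, 0]
print(round(percent_agreement(a, b), 2))  # 0.88
```
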
Vail and Strein (1997) investigated the level of implementation of Instructional Consultation Teams in 13 different schools in their first year and 10 schools in their second or third year of implementation. Using the LOI-R, results indicated that the teams’ mean level of implementation on all 14 level of implementation dimensions was relatively high. The schools’ use of the collaborative consultation process dimensions did not significantly increase in the three years examined. The implementation of the service delivery system increased from the first to the second year of implementation. Again, there was significant variation between the schools, indicating site-specific factors had a large impact on the results.

In a more detailed analysis of the results (Vail, 1996), the specific components of the LOI-R scale were examined. The schools all followed the same pattern of implementing the various dimensions. For the collaborative consultation process domain, the highest levels of implementation were found in the Entry and Contracting Dimension and the Intervention Development Dimension. These elements were considered “at or above criterion level of implementation (80%)” for both the first and second year teams. For the second and third years, the same two elements were in the criterion level along with the Problem Identification Dimension and the Collaborative Communication Dimension. The dimensions implemented the least were Intervention Evaluation for both the first and second or third year teams, and Curriculum Based Assessment for the first year teams (Vail, 1996). These dimensions were judged to be “Far below criterion level of implementation (< 65%).” The comparison of first year and second and third year teams indicates increases in some components offset by decreases in others, lending stability to the overall level of implementation across the years.

The Instructional Consultation Teams (Rosenfield & Gravois, 1996) model of service delivery was developed through research that can be assessed through a framework proposed by Tharp and Gallimore (1979) for intervention in complex social problems (Gravois & Rosenfield, 2002). Throughout its use, the Instructional
Consultation Teams innovation has incorporated a program evaluation design to assess program integrity and level of implementation. The initial evaluative model of the Instructional Consultation Teams entailed assessments of training, implementation and outcomes. In a comprehensive program evaluation, the authors presented a summary of 23 different studies on various aspects of Instructional Consultation Teams used to develop and refine the model (Gravois & Rosenfield, 2002). To further enhance the evaluation, the authors defined theory linking the program design to the intended purposes.

Gravois and Rosenfield (2002) also used the verifiable criteria for confirmatory program evaluation (Reynolds, 1998, as cited in Gravois & Rosenfield, 2002) to demonstrate a causal relationship between the use of Instructional Consultation Teams and the reduction in the number of students evaluated for and placed into special education programs. The consistency criterion indicates that causal inference is strengthened if the program demonstrates similar effects across different populations at different times and under different types of analyses and model specifications. By evaluating three different studies of the impact of Instructional Consultation Teams on the rates of student referrals to and placement into special education programs, consistency evidence was presented.

In the first study discussed by Gravois and Rosenfield (2002), 10 schools demonstrated a 27% decrease in special education referrals and a 25% decrease in special education placement during the schools’ first year of implementation of Instructional Consultation Teams, as compared to the prior year. The following year, four additional schools began Instructional Consultation Teams and indicated a 55%
decrease in the number of students placed in special education programs. In the second study discussed, the percentages of the school population receiving special education services in 13 schools implementing Instructional Consultation Teams were compared to the percentages in 20 schools serving as comparison sites (Gravois & Rosenfield, 2002). In addition, pre-post comparisons were demonstrated. Results indicated that the comparison schools’ pre-implementation percentages averaged 12.55%, while the Instructional Consultation Team schools’ percentages averaged 14.14%. During Instructional Consultation Teams implementation, the comparison schools’ percentages remained at an average of 12.18%, while the Instructional Consultation Teams school percentages declined to an average of 11.99%.

The third study investigated patterns of referrals to special education assessment and placement after students were served through the Instructional Consultation Teams or through a different school based prereferral intervention team (Gravois & Rosenfield, 2002). Within the 20 schools, significantly fewer students served through Instructional Consultation Teams were referred to or placed in special education services. In addition, significantly fewer African-American students served through Instructional Consultation Teams were referred to, or placed in, special education in comparison to the African-American students served by the other school teams.

The summative results of the above studies provide confirmatory evidence that Instructional Consultation Teams reduces the number of students evaluated and placed in special education services (Gravois & Rosenfield, 2002). The findings were
consistent across varying populations, times, places, and study methodologies. The consistency criterion is based on the assumption that the program can be articulated, has treatment integrity, and the program theory can be adequately measured (Reynolds, 1998, as cited in Gravois & Rosenfield, 2002). As in any evaluation of intervention implementation, prior to attributing the outcome effects to the intervention, treatment integrity of the intervention must be established. From the inception of the Instructional Consultation Teams model, schools’ implementation has been assessed using the level of implementation measure (Fudell, 1992; Rosenfield & Gravois, 1996).

Problem solving teams are playing an increasingly important role in school functioning (Chalfant & Pysh, 1989; Iverson, 2002; Kovaleski, 2002; Vail, 1996). They now serve many diverse purposes (Bahr et al., 1999), including providing prereferral services prior to accessing special education services (Buck et al., 2003). Schools are examining ways to create more collaborative and efficiently functioning teams (Fullan, 1991; Rosenfield, 1992). Collaborative problem solving teams are one proposed method of providing services to students and their teachers in a more efficient and effective manner (Fuchs & Fuchs, 1989; Fuchs et al., 1990; Kovaleski, 2002). There is increased research on the general trends, composition, processes and outcomes of problem solving teams. However, outcomes from problem solving teams cannot be attributed to collaborative team functioning unless assessments are undertaken to ensure appropriate levels of team implementation.

Although research is currently focused on evaluating the integrity with which the interventions proposed by the problem solving teams are implemented in the
classroom (Telzrow & Beebe, 2002; Upah & Tilly, 2002), there continues to be a lack of research regarding treatment integrity and level of implementation for the problem solving process of consultation services and teams (Gresham & Kendell, 1987; Gutkin, 1993). In addition, reliable and valid measures are needed to adequately assess the integrity and levels of program implementation.

Challenges with Self-report Interview Measures

In contrast to many problem solving team models, the Instructional Consultation Teams innovation has been subject to measures of implementation prior to assessing the outcomes of the program (Gravois & Rosenfield, 2002). The level of implementation assessment via the LOI-R (Fudell et al., 1996) includes a specific examination of the implementation of the consultation process, as conducted between each consultant/case manager and consultee/referring teacher. Several items from each of the separate interviews with the case managers and teachers are then compared for response matches for the items measuring collaboration. These interviews, in combination with other information provided via the LOI-R, appear to yield an accurate representation of how Instructional Consultation Teams is implemented in a certain school. It is especially important to note that the LOI-R contains an assessment of the consultation process. However, because the level of implementation measure relies on self-report interviews, the LOI-R interview measures themselves also need to be subject to verification.

Memory Errors

There are many challenges to using self-report information collected through interview measures (Belli et al., 2001; Jobe, 2003; Jobe et al., 1993). Theoretical
models for memory structures (Tourangeau, 2000), as well as theoretical and experimental evidence for poor memory and responding (Belli et al., 2001; Jobe, 2000, 2003; Tourangeau, 2000), demonstrate many ways in which respondents may give inaccurate information in interviews. However, research has provided several ideas for improving self-report information collected via interviews (Croyle & Loftus, 1994; Pearson et al., 1994; Suchman & Jordan, 1994).

There are at least three general types of material in memory, including facts, knowledge of how to do things, and personal experiences (Tourangeau, 2000). The personal experiences type of memory is also termed autobiographical memory, which is the subject of most self-report interviews (Jobe, 2003; Tourangeau, 2000). The self-report information collected via the LOI-R Case Manager Interview and Teacher Interview (Fudell et al., 1996) can be termed autobiographical memory. There is agreement that personal memories are stored as “mini-narratives” regarding the story of the individual’s experiences based on intentions, actions and outcomes (Tourangeau, 2000). The information probed for during the LOI-R interviews concerns events that occur within the case manager’s and teacher’s personal experiences.

There are a number of challenges to obtaining accurate information from people’s memory of autobiographical events. Tourangeau (2000) identified four major classes of memory problems. These problem classes are encoding, storage, retrieval and reconstruction. During encoding, memory is impacted by a person’s initial processing of the event. If the event is processed superficially or with minimal representation, it is less likely to be remembered at a later time. During storage, an event may be incorporated into a person’s long term memory. Storage can be
positively impacted through rehearsal or elaboration of the initial event, and also may be subject to judgements of current beliefs and inferences.

Retrieval failure occurs when information is stored, but is not accessible for conscious recall (Tourangeau, 2000). The most impactful retrieval problem appears to be the passing of time (Belli et al., 2001; Tourangeau, 2000). As Tourangeau states, “No single variable seems to have such a profound effect on the accessibility of a memory than its age” (2000, p. 36). Theories indicate that memory decay occurs because of the interfering effects of later, similar experiences. At least four empirical functions have been proposed to account for the relationship between the amount of retained information and the retention interval (Tourangeau, 2000).

When retrieval yields partial results, details of experiences and events can be reconstructed. Reconstruction errors include people’s tendency to report on their current attitudes and behaviors while attributing them to the past, and the tendency to estimate the frequency of events rather than to attempt to recall and count each occurrence of an event. Another reconstruction error occurs when the respondent attempts to “fill in” missing details of a recalled experience (Tourangeau, 2000). Frequently the respondent uses generic details of typical events for a situation, rather than the actual memory of the situation itself. In autobiographical memory, the respondent may attempt to make the memory conform to an existing understanding when filling in the details.

Interview Strategies to Improve Recall

Research from cognitive psychology has suggested ways in which to improve the recall of participants in retrospective self-report interviews (Jobe, 2003;
Tourangeau, 2000). Memory can be improved by addressing the encoding and storage problems by making an event more salient and emotionally impactful or by increased rehearsal. However, the retrospective interviews used in research are frequently measures of incidental memory, or events that people did not know that they would be asked to remember (Jobe, 2000; Pearson et al., 1994). Therefore, strategies to improve recall frequently focus on remedying retrieval and reconstruction errors.

Addressing retrieval problems in a variety of ways can improve recall. One strategy indicates that allowing the respondent more time can improve recall (Tourangeau, 2000). The types and number of cues given to jog the memory can also improve a respondent’s ability to recall information. Researchers state that understanding the way in which memory is organized can be beneficial in identifying the best way to access the stored information (Belli et al., 2001; Croyle & Loftus, 1994). In addition, structuring interviews in the manner in which events are remembered is a technique to improve the accuracy of autobiographical memory (Belli et al., 2001).

In traditional survey interviewing methodology, standardized questions are developed and intended to be administered without variation, although trained interviewers may deviate from wording standardization (Schober, Conrad & Fricker, 2004). The standardized administration is intended to avoid response bias from variations in wording and to reduce training and administration costs (Belli et al., 2001). However, there have been calls in the literature for a more responsive interview methodology, designed to use the conversational aspects of interviewer-respondent interactions (Suchman & Jordan, 1994). In addition, researchers have
investigated using collaborative conversational techniques to improve the accuracy of self-report interview responses (Belli et al., 2001; Conrad & Schober, 2000; Schober et al., 2004; Suchman & Jordan, 1994).

Interviews are inherently social interactions (Suchman & Jordan, 1994). However, when using standardization techniques, the common interactions used in social conversations are suppressed. In ordinary conversations, the participants themselves have local control over the topic, flow and depth of the interaction. In contrast, the interview is externally controlled by the questionnaire author, who is not present. Social conversationalists can accommodate specific listeners and circumstances. However, in standardized interviews, interviewers may have to administer questions that are not understood by respondents or are not applicable due to the respondents’ prior responses. Conversational behaviors, such as re-explaining to correct for misunderstandings, making inferences based on prior responses, and avoiding irrelevant questions, can be useful in gaining more accurate information from self-report interviews (Suchman & Jordan, 1994). Also, standardization typically does not allow respondent elaboration or personal input, which can lead to questionable accuracy. When the interviewer does not use these conversational behaviors and, for example, poses irrelevant questions, the respondent can become less involved with the interview process. As a result, the interview may produce less valid information.

Several researchers have conducted experimental studies to compare the results of the standardized interview format with the conversational interview format (Conrad & Schober, 2000; Schober et al., 2004; Schober & Conrad, 1997). In their
first study, Schober and Conrad (1997) conducted a laboratory experiment by giving fixed scenarios to study participant respondents so that the level of complexity of the responses could be randomized. Respondents were given either a complicated response set or a straightforward response set. Professional interviewers were trained to use one of five interviewing techniques conducted by phone. In all cases, interviewers were to first read the items as worded. The experimental groups were one strict standardization group, leaving the interpretation to the respondent; two respondent-initiated groups, providing clarification if explicitly asked by the respondent; and two mixed initiative groups, providing clarification if the interviewer felt the respondent needed it or if asked by the respondent. Clarification consisted of two assigned types: the interviewer could read all or part of the standardized definitions, or the interviewer could paraphrase the concepts in his or her own words.

Results indicated that, for the straightforward scenarios, interviewer coding was extremely accurate in both the standardized and conversational groups (Schober & Conrad, 1997). When presented with the complex scenarios, the responses were quite inaccurate in the standardized interviews, but increased in accuracy when the different conversational interviews were used. Interestingly, the responses were most accurate when the interviewer paraphrased the concepts and was able to provide clarification as he or she felt the respondent needed it, rather than waiting until the respondent requested clarification. In an article comparing this study with another, Schober et al. (2004) state, “comprehension accuracy was poorest for the most strictly standardized interviews” (p. 180).

In a follow-up study, Conrad and Schober (2000) examined the extension of these findings in a non-laboratory study. Using professional interviewers, actual telephone respondents were interviewed first using the standardized interview and, one week later, were interviewed a second time using exactly the same interview items, but using either the standardized interview or the conversational interview. The conversational interviewers were instructed to say whatever they needed to in order to assure that the respondent understood the intent of the questions, whether the respondent directly asked for clarification or not.

Results of this study indicate that the conversational interview respondents changed their responses more than the standardized interview respondents from the first to the second interview (Conrad & Schober, 2000). The changed responses appeared to be more in line with the information the survey was seeking. When asked about purchases, fewer than 60 percent of the items that were listed for the standardized interview respondents were considered accurate for inclusion in the data. As the authors state, “Conversational interviewers helped respondents apply the concepts to their circumstances along the lines the survey designers intended, and this produced the intended understanding substantially more often” (Conrad & Schober, 2000, p. 20).

To investigate actual interview practices, Schober et al. (2004) used the same scenarios from the Schober and Conrad (1997) study. Professional Census Bureau interviewers were instructed to conduct face-to-face family interviews exactly as they typically do. The agency training was somewhat conflicting, as manuals stated that interviewers were to read the questions exactly as written and to use only non-directive
probes, although training videos indicated that clarification of questions at the respondent’s request was acceptable.

Results indicated that, in general, the professional interviewers used strict standardization for over 80% of all questions (Schober et al., 2004). However, there was substantial variability in the interviewers’ styles. Of the 11 interviewers, one followed the strictest standardization procedures for all interview items. Four were highly standardized, providing definitions in response to questions. Three interviewers deviated from standardization for at least 4 of the 12 questions in each interview. Results indicated that the interviewers who deviated from standardization the most actually obtained greater information accuracy than those using the traditional standardized method (Schober et al., 2004). This effect was especially apparent for the questions with the complex scenarios. In sum, the authors state, “Allowing interviewers to use some of the collaborative resources of ordinary conversation (providing respondent-initiated or scripted clarification) is better than denying all of them (strictly standardized interviewing), but even better is allowing interviewers to collaborate more as they do in spontaneous conversation” (Schober et al., 2004, p. 185).

In a different line of research, Belli et al. (2001) experimentally investigated the benefits of the Event History Calendar in comparison to the traditional question-list survey instrument. The Event History Calendar was formulated to use memory structures to promote the narrative style of remembering. The methodology uses different cuing mechanisms, such as top-down cuing, sequential cuing, and parallel cuing, to enhance recall. In addition, interviewers can use flexible conversational
interviewing to promote comprehension of the survey items. Participant respondents were interviewed using the Event History Calendar or the question list. Results were compared to the data from the previous year collected using the question list.

Overall, results indicated that using the Event History Calendar yielded higher-quality retrospective reports in comparison to the question list (Belli et al., 2001). Respondents reported that the Event History Calendar was easier to understand than the question list. Interviewers reported that they preferred administering the Event History Calendar, although they viewed it as posing more problems for respondents in remembering past events. The interviewers’ perceptions of respondent problems may be because the Event History Calendar calls for the recall of more information, with finer details, than traditional question lists.

Research indicates that flexible interviewing, which allows the interviewer to depart from the scripted questions, does not adversely impact memory for autobiographical information (Belli et al., 2001; Conrad & Schober, 2000; Schober et al., 2004; Schober & Conrad, 1997). In fact, more flexible interview methodologies that tap into the way in which memories are created “have demonstrated considerable potential to enhance recall for events that occurred several years previously” (Belli et al., 2001, p. 2). These conversational interactions also can yield more accurate interview information, as the respondents develop a shared understanding of the questionnaire meaning through scripted or unscripted information (Schober et al., 2004).

There is a growing research base regarding the utility of creating flexibility within the interview process (Belli et al., 2001; Conrad & Schober, 2000; Schober et al., 2004; Schober & Conrad, 1997). Although the standardization of interview
questions is intended to increase the validity of the information collected, the rigidity of the process can lead to inappropriate and inaccurate information (Suchman & Jordan, 1994). By using a collaborative approach and viewing interviews as an interactional exchange, an interviewer can use conversational behaviors to assist the respondent in providing more relevant and accurate information. If the validity of data from interview measures is defined as the extent to which the question is heard and responded to as it was intended to be, using a collaborative approach (Suchman & Jordan, 1994) or structuring a way for explanations to be offered (Belli et al., 2001; Schober et al., 2004) is appropriate. A way to increase validity is to increase the stability of the interview item meanings by achieving joint understandings of the interview items and process (Suchman & Jordan, 1994) and by allowing the interviewer to respond to confusion or suspected misunderstandings (Schober et al., 2004). Although increased training costs and interview lengths were cited as limitations of this interview approach (Belli et al., 2001; Schober et al., 2004), the collaborative construction of interview meaning is likely to yield more accurate and more useful information due to increased understanding from participants (Suchman & Jordan, 1994).

Need for Verification/Validation Techniques

Regardless of the strategies used to increase the validity of interview data, the information gained through self-reports should be subject to objective verification (Croyle & Loftus, 1994) or validation techniques (Jobe, 2003). Verification and validation techniques are used as criterion measures to judge the veracity of the information provided from the self-report measure. Research investigating the
validity of self-reports using observations of the behaviors in question is, as Croyle and Loftus state, “sorely lacking” (1994, p. 96). Observational measures of the behaviors that the interview respondent is reporting are challenging to implement. However, verification of information obtained during self-report interviews is important in assessing the utility of the interview methodology and the quality of the data collected through that interview process. Only after a researcher confirms that the participant engaged in the behaviors that the participant reported should the information be used for additional purposes.

Treatment Integrity

Treatment integrity is the concept of an intervention being implemented as intended. It is especially important for complex interventions and innovations (Yeaton & Sechrest, 1981; Telzrow & Beebe, 2002), such as consultation behavior. This section presents several topics, including the definition of treatment integrity, treatment integrity in consultation, and the evaluation of treatment integrity.

Definition

Treatment integrity is the extent to which an intervention was implemented and conducted as planned (Yeaton & Sechrest, 1981). The intervention, or intended program, is the independent variable in experimental studies (Peterson et al., 1982). Gresham et al. (1993) also defined treatment integrity as the degree to which an independent variable is implemented as intended. Related terms include intervention adherence and intervention fidelity, which both refer to the degree to which an intervention is implemented as planned (Moncher & Prinz, 1991; Telzrow & Beebe, 2002).

Treatment integrity affects the interpretation of any program or intervention outcome. If an intervention is not implemented as intended, the resultant effects may be different from those anticipated. The outcomes may not be due to the planned intervention, since any changes made during implementation may have substantially altered the intervention. Interpretations of outcomes are dependent not only on treatment integrity, but also on treatment effectiveness, treatment acceptability, and social validity (Shapiro, 1987). The factors of treatment effectiveness, treatment acceptability, and social validity interact with treatment integrity in multiple ways and must be considered when evaluating an intervention’s effects (Telzrow & Beebe, 2002).

Treatment effectiveness, or “strength of treatment” (Yeaton & Sechrest, 1981), is related to the degree of change and the maintenance and generalizability of the change due to the particular intervention (Shapiro, 1987); it refers to the likelihood that a certain intervention or treatment will have the intended outcome for the participants (Yeaton & Sechrest, 1981). Interventions with high effectiveness have a greater probability that the intended effects will be evident in the outcomes.

Treatment effectiveness is linked to ease of implementation and treatment integrity (Telzrow & Beebe, 2002). Many interventions with potentially high treatment effectiveness (i.e., “strong” interventions) are challenging to implement. A strong intervention may not be effective if it is improperly implemented. If an intervention is not implemented as intended, it may not be as effective as expected.

When a planned program or treatment is complex or tedious, has a long duration, or involves many participants, there is less likelihood that the intervention will be implemented as intended (Shapiro, 1987).

Treatment acceptability is the degree to which the change agent agrees with the proposed or implemented intervention (Shapiro, 1987). Treatment acceptability is an important factor in the consideration of treatment integrity of a particular intervention (Reimers et al., 1987). The person or people responsible for implementing the treatment determine the acceptability of an intervention (Rosenfield & Gravois, 1996). Some studies have demonstrated that if intervention implementers find the treatments unacceptable, they are less likely to implement them as intended, although other researchers have found that there may be less of a link than initially hypothesized (Telzrow & Beebe, 2002).

Social validity refers to judgements about the social significance of the treatment goals, the perceived appropriateness of procedures and the social importance of the planned intervention, as determined by the change agents implementing the proposed intervention (Wolf, 1978). Telzrow and Beebe (2002) have stated that, to increase the treatment integrity of professionals implementing interventions, the “so what?” test should be applied when selecting behaviors and setting goals for intervention. The “so what?” test refers to the idea that, if the student improves in the behaviors targeted by the intervention, the student will accrue meaningful gains with positive impact on life functioning.

Importance in Research

It is necessary to examine the treatment integrity of any program or intervention because of the relationship between treatment integrity and the outcomes (Peterson et al., 1982). Information about the integrity of a treatment needs to be assessed prior to drawing conclusions about the effectiveness of the intervention. If treatment integrity is not investigated, one cannot be certain that the interventions were not changed or modified in some way. Therefore, if change is found in the dependent variable, it is not certain whether the outcomes were due to the planned intervention, the variations, or other extraneous factors. Without treatment integrity, it is not possible to know whether an outcome is related to an intervention (Gresham et al., 1993).

One requirement of treatment integrity is clarity regarding the intervention design. If an independent variable is not described in detail in a research article, others who want to replicate the findings will have difficulty determining if they are attempting the same intervention or if they have changed an element or technique. In addition, without assessment and documentation of treatment integrity, it is difficult to compare studies that attempt to demonstrate the replicability of a technique, program or intervention (Moncher & Prinz, 1991).

Although treatment integrity is important, assessments of integrity are not included in most research articles. Peterson et al. (1982) and Gresham et al. (1993) reviewed reports of experimental studies from the Journal of Applied Behavior Analysis. When reviewing the journal from 1968 to 1980, Peterson et al. found that most studies provided a definition for the independent variable, but did not provide a description of any accuracy checks used. Gresham et al. extended this inquiry using
studies published from 1980 to 1990 to determine if professionals in the field had improved in reporting accuracy checks after the publication of the prior review. They determined that only 15.6% (25 of 158 studies) adequately met both criteria of defining the independent variable and reporting accuracy checks on that variable.

The lack of information on treatment integrity in empirical studies is not limited to behavioral interventions. When evaluating 359 studies in the fields of clinical psychology, behavioral therapy, psychiatry, and marital/family therapy, Moncher and Prinz (1991) found a similar lack of reporting of treatment integrity measures. They rated the information provided about treatment integrity, specifying areas of promotion and verification of the correct treatment, data collection and training. Fewer than 6% of all studies reported using the three procedures of providing manuals, using supervision and examining the intervention events. About 26% of studies contained reports of the training utilized.

Treatment Integrity in School Interventions

There are few studies of treatment integrity in consultation process and practice (Gutkin, 1993). Most of the available studies of treatment integrity in schools focus on the integrity with which teachers implement intervention plans developed through consultation (Gresham & Kendell, 1993; Noell, Witt, Gilbertson, Ranier & Freeland, 1997; Telzrow & Beebe, 2002; Witt, Noell, LaFleur & Mortenson, 1997). Insights can be gained by reviewing these studies in preparation for examining the treatment integrity of the consultation process.

Gresham and Kendell (1987) published an examination of school consultation research methodology. They noted difficulties conducting school consultation
research, such as the complexity of the process and the costs in time and expense. They also noted that no consultation study had included an assessment of the integrity of the treatment developed during the consultation sessions. In the years following this finding, consultation research methodology has improved, and some empirical examinations have assessed the treatment integrity of teachers’ use of consultation interventions (Jones, Wickstrom & Friman, 1997; Noell et al., 1997; Witt et al., 1997).

Several school consultation studies have specifically examined the treatment integrity of teacher-implemented classroom interventions developed through consultation (Jones et al., 1997; Noell et al., 1997; Witt et al., 1997). These investigations assessed the number of steps the teachers completed in a specific academic intervention. Each step yielded a permanent product, such as a graded paper or a reward sticker in place. The number of permanent products served as the treatment integrity measure; a computational sketch of this type of measure follows below. In the Noell et al. and the Witt et al. studies, the number of correctly completed steps decreased rapidly within the first few days of implementation.

Jones et al. (1997) extended the research by providing consultation to three teachers in a residential treatment setting for adolescents. The independent variable of interest was the teachers’ use of attention to students’ appropriate, on-task behavior. The intervention developed during the consultation sessions was to provide contingent reinforcement for student on-task behavior. This intervention was based on the facility’s existing approach of providing praise and points for appropriate behavior.

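Both integrity measures used in these studies reduce to simple proportions: the share of intervention steps with a permanent product in place (Witt et al., 1997; Noell et al., 1997), and the share of observation intervals in which the planned teacher behavior occurred (Jones et al., 1997, reported below). The following is a minimal sketch with hypothetical step names and observation records; the studies themselves do not publish code or data structures.

```python
# Minimal sketch of the proportion-based integrity measures described
# above; the step names and observation records are hypothetical.

def step_completion_integrity(products_found):
    """Percent of intervention steps with a permanent product in place
    (e.g., a graded paper or a reward sticker), as in Witt et al. (1997)
    and Noell et al. (1997)."""
    return 100.0 * sum(products_found.values()) / len(products_found)

def interval_adherence(intervals):
    """Percent of fixed-length observation intervals (two minutes in
    Jones et al., 1997) during which the teacher delivered the planned
    contingent reinforcement."""
    return 100.0 * sum(intervals) / len(intervals)

# Example: three of four steps left permanent products -> 75% integrity.
example_products = {"graded_paper": True, "sticker": True,
                    "points_sheet": True, "graph_updated": False}
print(step_completion_integrity(example_products))  # 75.0
```
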
During baseline data collection, contingent reinforcement ranged from 0 to 13% (the percentage of two-minute intervals during which a positive consequence was delivered by the teacher contingent upon the students’ on-task behavior; Jones et al., 1997). After developing intervention plans, the teachers’ percentage of adherence to the intervention plans ranged from 0 to 56%. The first author then provided performance feedback to the teachers by stating the percentage of times the student was on task and the teachers provided the appropriate attention. In the performance feedback condition, the teachers’ adherence to the intervention plan rose to the range of 30 to 100%. Although increases were found during the consultation and performance feedback phases, all three teachers responded with low levels of treatment integrity during the ‘consultation alone’ phase. As the authors concluded:

These findings challenge the assumption that traditional behavioral consultation results in adequate levels of treatment integrity, but lend support to recent empirical investigations …suggesting that simply asking a teacher to implement consequences may result in inadequate levels of integrity (Jones et al., 1997, p. 324).

The above studies lend evidence to the importance of treatment integrity, and the assessment of treatment integrity, particularly in consultation. The teachers in the studies may have genuinely believed they were implementing the intervention as intended. However, without an assessment of teacher behavior in the classroom, there is no assurance that an intervention is implemented as planned during the consultation sessions.

Researchers have identified mechanisms to increase the integrity of a treatment intervention developed within consultation interactions (Jones et al., 1997; Noell et al., 1997; Witt et al., 1997). Parallel to the Jones et al. (1997) study, both Witt et
al. (1997) and Noell et al. (1997) found that the number of steps that the teachers completed increased after the consultant began providing the teacher with daily feedback. For the Witt et al. and Noell et al. studies, feedback was information about the permanent products collected. The feedback consisted of the consultant reviewing the number of completed steps and the importance of the steps that were missed the prior day. As in the Jones et al. (1997) study, when given daily feedback, teachers increased the number of completed steps and increased the treatment integrity of the interventions.

Developing scripts that list the treatment intervention steps is another method researchers have found to increase the integrity of the intervention (Ehrhard, Barnett, Lentz, Stollar & Reifin, 1996). Consultants collaboratively developed scripts stating each behavioral step of the intervention with parents or teachers of four preschool children. These steps were written as checklists of the steps to be completed in the intervention. When using the scripts, both parents and teachers implemented the intervention as planned. Treatment acceptability also interacted with treatment integrity, as teachers and parents expressed satisfaction with the interventions.

Treatment Integrity in Consultation Process

The research studies reviewed above investigated the treatment integrity of the interventions developed within the consultation relationship. However, one cannot know whether the process of consultation had integrity, or whether consultation was implemented as it was intended to be. It is challenging to evaluate the integrity of the process of consultation (Gresham & Kendell, 1987; Gutkin, 1993), but it is important to do so for several reasons. Competence in consultation is becoming more important for
practitioners (Jones, 1999; Rosenfield, 2002b). Particularly in schools, consultation is a service in which more school psychologists are engaging to benefit students served and teachers requesting services (Gravois & Rosenfield, 2002; Gresham & Kendell, 1987; Kratochwill et al., 2002; Rosenfield, 2002a).

Consultation process research has addressed some important aspects of what comprises appropriate collaborative consultation work sessions. Some of these features, which were discussed in prior sections, include collaborative problem solving, voluntary commitment from the consultant and consultee, communication behavior, and collaborative interpersonal relationships (Allen & Graden, 2002; Henning-Stout, 1993; Kratochwill, Elliott & Callan-Stoiber, 2002; Rosenfield, 2002a). However, the treatment integrity of the consultation process has not been adequately examined (Gutkin, 1993). Without treatment integrity assurance, including a detailed definition of the consultation independent variable and systematic checks on that variable, research into consultation processes and outcomes is challenging to interpret.

In addition to being necessary for methodological rigor in research, assessments of treatment integrity would be helpful for practitioners (Fuchs & Fuchs, 1989; Fuchs et al., 1990; Gresham & Kendell, 1987; Johnson, 1998; Jones, 1999). The characteristics identified in process research are beneficial for practitioners, as they represent skills necessary for effective consultation (Fuchs & Fuchs, 1989; Fuchs et al., 1990; Gutkin, 1993). Identification of the essential characteristics could provide a method for practitioners to assess themselves on these skills (Jones, 1999). In addition, because of the similarity of the problem solving process as used in both
individual consultation and system-wide consultation (Curtis & Stollar, 2002), assessments of the treatment integrity of the consultation process could lend themselves to the assessment of problem solving team functioning.

Consultation implementation has been partially addressed in the consultation literature (Gravois & Rosenfield, 2002; Gutkin, 1993; Henning-Stout, 1993; Kratochwill et al., 2002; Rosenfield, 2002a). Researchers are beginning to identify characteristics that assist the consultation process and yield better outcomes (Allen & Graden, 2002; Henning-Stout, 1993). Several researchers are using component analysis to differentiate the essential elements and processes of consultation (Fuchs & Fuchs, 1989; Fuchs et al., 1990).

Fuchs and Fuchs (1989) and Fuchs et al. (1990) attempted to delineate the consultation process by “seeking to identify a most effective and efficient means” of consultation (Fuchs & Fuchs, 1989, p. 261). Using a component analysis of behavioral consultation, they assigned three groups to differing levels of consultation. Level 1 included Problem Identification and Problem Analysis. Level 2 included Problem Identification, Problem Analysis and Plan Implementation. Level 3 included Problem Identification, Problem Analysis, and Plan Implementation, as well as an optional stage of implementation evaluation. They also observed the students affected by the consultation interventions to compare outcomes with levels of consultation.

Results indicated that all consultation dyads assigned to levels 1 and 2 conducted the sessions with integrity (Fuchs & Fuchs, 1989). However, the consultation dyads in level 3 did not complete the final stage of the consultation process. In comparison to the control group, greater percentages of the three consultation groups’
students demonstrated improved behavior as assessed by teacher ratings of decreased severity (75%, 88%, and 63% improved in the level 1, 2 and 3 groups, respectively, compared to 29% improved in the control group). Unfortunately, due to the lack of differentiated implementation of the level 3 group, the researchers could not conclude that increased stage implementation results in better student outcomes. In addition, the plans developed by the consultation dyads did not include the monitoring or data collection necessary for the interventions selected. This oversight was an apparent lack of integrity for the consultation process being studied.

In a follow-up study, Fuchs et al. (1990) again assigned four groups of participants to the same conditions as the previous study (three levels of behavioral consultation and one control group). Additional methodologies included increasing the frequency of student observations, comparing student behavior to a comparison peer’s behavior, developing a list of interventions from which the consultation dyads could select, and assessing the integrity with which the selected interventions were implemented. The intervention selected most frequently was behavioral contracting.

Results indicated that teachers in all three of the consultation conditions complied with the monitoring and data collection procedures specified by the intervention plans (Fuchs et al., 1990). In addition, all dyads in the most inclusive level of consultation, which included evaluating the intervention plan and making any needed modifications, determined that the students had met the contracted goals or were making progress. The teachers in this group chose not to use the evaluation stage, again calling into question the integrity of the consultation process. In true collaborative consultation, consultees must be free to not engage in parts of the
problem solving process (Allen & Graden, 2002; Henning-Stout, 1993). However, if the problem solving stages are not applied with integrity, the results are difficult to attribute to the consultation process (Kovaleski, 2002; Zins & Erchul, 2002). Outcomes indicated that the students in the consultation groups achieved their contract goals during 66% of the monitoring sessions (Fuchs et al., 1990). There were no significant differences between the levels of consultation and the percentages of contract goals achieved. However, the students in the least inclusive consultation group (Level 1-Problem Identification and Problem Analysis only) did not significantly reduce the initial discrepancy between their behavior and that of observed comparison peers. Students in the more inclusive consultation groups significantly reduced initial discrepancies of target behaviors. Fuchs et al. (1990) indicate that components of behavioral consultation are “important and additively related” (p. 508). However, because of the lack of discrimination between the higher levels of consultation (level 2 and level 3), the component analysis only assessed the beginning stages of the consultation process. The authors cite the irony that in their prior study (Fuchs & Fuchs, 1989), a comparative component analysis could not be fully conducted due to poor intervention implementation, whereas in the currently discussed study, the interventions were so effective that they did not need to be modified. Each modification of the consultation plan represents a lack of treatment integrity for the consultation process being studied. The available research does not allow professionals to determine experimentally what aspects of consultation are critical (Fuchs & Fuchs, 1989; Fuchs
et al., 1990). The prescribed consultation practices are based on theoretical models that are only beginning to be investigated with experimental rigor (Gravois & Rosenfield, 2002; Kovaleski, 2002). Current investigations focus primarily on the treatment integrity with which consultees are able to implement proposed interventions (Telzrow & Beebe, 2002; Upah & Tilly, 2002), rather than on the integrity with which the consultation process itself is conducted. Research has not adequately assessed treatment integrity of the consultation process to determine if professionals and researchers are implementing all the features theorized to contribute to positive consultation outcomes (Fuchs & Fuchs, 1989; Fuchs et al., 1990; Gresham & Kendell, 1987; Gutkin, 1993). Assessing professionals' implementation of the identified features of consultation can lead to informed judgments about professionals' competencies in consultation. Currently, competence is assumed if a person has completed a certain amount of training (Jones, 1999). However, classroom training and practicum hours do not necessarily ensure competence (Anton & Rosenfield, 2000; Gravois et al., 2002). Because consultation is complex, assessing the consultation process is challenging (Gresham & Kendell, 1993). Within different consultation models, there may be some dimensions that are essential and some that are flexible (Rosenfield, 1992). Some researchers have proposed that a unique set of assessment or evaluative tools should be used for each model of consultation, because of the different emphases and facets of the different models (Jones, 1999). To determine if the process of consultation is being implemented as intended,
observational methods need to be used (Gutkin, 1993; Jones et al., 1997). According to Gutkin (1993), "Without assurances of treatment integrity for the consultation process, it is not possible to determine what intervention process is actually being examined in any given study" (p. 230). Multiple observations and program evaluation techniques can be beneficial in evaluating whether the process of consultation is being implemented as intended and with treatment integrity.

Level of Implementation

As stated by Fudell (1992), "level of implementation is the degree to which the various elements of an innovation have been operationalized as intended. It is measured by an evaluation of the extent and accuracy with which the defined critical dimensions of the model have been put into practice" (p. 10-11). Programs or innovations are composed of critical dimensions, which are activities and characteristics essential to the existence of the program (Rubin et al., 1982). Critical dimensions, including separate process, structure, and support components, are measured by observing the program as it is being implemented by the system adopting the innovation. Level of implementation is similar to treatment integrity in that it is an assessment of whether an intervention is being implemented in the intended manner. In assessing level of implementation, the individual components of an innovation, program, or intervention are each assessed for treatment integrity. After determining the treatment integrity of the components of the program, researchers can verify that a program is actually in place. Level of implementation measures assess the critical components of the program, as well as the relationship between the
components in the program or innovation. Measuring level of implementation and treatment integrity is not trivial. Prior to drawing conclusions about a program's effects and outcomes, the implementation of the model must be assessed. As stated by Kovaleski (2002), "demonstrating that prereferral teams are effective in meeting the needs they were intended to address is critical…Before implementation, school districts should put in place procedures to collect ongoing data that can be used for program evaluation" (p. 649). If the level of implementation is not assessed, researchers and program providers cannot be assured that the intervention was implemented to the degree intended, or in the intended form, and any measured outcomes cannot be attributed to the innovation or program.

Relationship Between Treatment Integrity and Level of Implementation

Many researchers use the terms treatment integrity and level of implementation to refer to the same process of examining the extent to which an intervention or program was implemented in the way it was intended. Two differences between treatment integrity and level of implementation have been explicated. Within the concept of level of implementation, a set criterion level of program operationalization is required prior to beginning the assessment (Fudell, 1992; Rubin et al., 1982). In addition, this standard is a form of judgment based on a predetermination of what criterion level is acceptable for the program in question. In level of implementation assessment, a set criterion level of performance for each critical component is identified prior to the investigation of implementation (Rubin et al., 1982). In assessing level of implementation of a process, intervention or
program, a percentage of the number of dimensions present is calculated to determine the overall degree to which a program is "put in place" in a particular setting (Fudell, 1992). With the a priori determination of criterion levels for high, average, and low levels of implementation, conclusions can be drawn as to the degree of implementation realized by a particular facility or program (Wang, Nojan, Strom & Walberg, 1984). Leithwood and Montgomery (1980) describe a process for evaluating curriculum program integrity. They delineate three areas in which methodology must be specified by program evaluators: identifying the practices specified by the more general program policies and tenets, describing the actual implementation to compare to the intended practices, and identifying discrepancies between the intended program and the actual implementation practices. Assessing the implementation process is described as a subjective task: "Judging the 'degree' of implementation depends on both features of the implementer's behavior and the point of view of the judge" (Leithwood & Montgomery, 1980, p. 198). Within level of implementation measures, the criteria by which subjective judgments are made are specified before the assessments are completed.

Level of Implementation Research in Program Evaluation

Many level of implementation studies are incorporated into evaluations of programs. The program evaluation influence on the level of implementation research is evident, as many researchers use measures of implementation when conducting program evaluations (Gravois & Rosenfield, 2002). Measurements of implementation are needed in all program evaluations in order to determine the effectiveness of the program outcomes (Fullan, 1983).
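The evaluation logic described above reduces to simple set arithmetic: compare the intended practices with the observed practices, identify the discrepancies, and classify the resulting percentage against a priori criterion levels. A minimal sketch follows; the practice names and the high/average/low cut-points are hypothetical illustrations, not values drawn from the studies cited.

# Minimal sketch of a level-of-implementation calculation: compare intended
# practices to observed practices, list the discrepancies, and classify the
# resulting percentage against a priori criterion levels. All names and
# cut-points below are hypothetical.

def implementation_summary(intended: set[str], observed: set[str]) -> tuple[float, set[str]]:
    """Return the percentage of intended practices observed and the discrepancies."""
    pct = 100.0 * len(intended & observed) / len(intended)
    return pct, intended - observed

def classify(pct: float, high: float = 80.0, average: float = 65.0) -> str:
    """Classify against criterion levels fixed before data collection."""
    if pct >= high:
        return "high implementation"
    if pct >= average:
        return "average implementation"
    return "low implementation"

intended = {"collaborative problem solving", "data collection", "goal setting", "plan evaluation"}
observed = {"collaborative problem solving", "data collection", "goal setting"}

pct, missing = implementation_summary(intended, observed)
print(f"{pct:.0f}% implemented ({classify(pct)}); missing: {sorted(missing)}")
# 75% implemented (average implementation); missing: ['plan evaluation']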
Program evaluations have been used to investigate a variety of school innovations, programs, and changes. The approach of evaluating the system's components prior to determining effectiveness has allowed national and international studies to be compared, and determinations of effectiveness to be drawn (Fullan, 1983; Stoll, Wikeley, & Reezigt, 2002). Prevention researchers have investigated the extent to which prevention programs have examined level of implementation and treatment integrity (Domitrovich & Greenberg, 2000). Of the 34 effective prevention program studies reviewed, 11 linked some form of level of program implementation assessment with the participant outcomes. In addition, 59% (20 studies) included information about assessment of program treatment integrity or level of adherence, and only 21% (7 studies) indicated assessment of more than one implementation dimension. Fullan (1983) conducted an extensive review of the educational programs funded by Follow Through national grants. In this critique of previously published program assessment results, Fullan expresses concern that the degree of implementation was not assessed in the prior evaluations. To accurately assess the effectiveness of any innovative program, the program's critical components must be described, operationalized, and then assessed. The typical assessment techniques may include interviews, observations, and document analysis. Three types of variables affect the assessment of program implementation: model attributes or characteristics; implementation strategies, including training of the change agents; and district and school factors. As Fullan states, "The implementation perspective is
critical for both the planning and the evaluation of new models and programs" (p. 224). Level of implementation assessments have been used in a variety of educational initiatives in which program evaluations are conducted (Fullan, 1993; Mirel, 2001). Recent evaluations have been conducted on school reform initiatives such as creating high schools with smaller populations (High Time, 2003), using block scheduling for high school instruction (Tan et al., 2002), implementing school-wide behavioral intervention systems (Eber, Lewis-Palmer, & Pacchiano, 2002), including students with severe disabilities in general education settings (Hunt & Goetz, 2002), and addressing bullying behaviors (Stevens, Van Oost & De Bourdeaudhuij, 2001). Other evaluations have focused on school problem solving team implementation, functioning, and outcomes as related to district, school, teacher, and student functioning (Friedland & Walz, 2003; Friend & Cook, 1997; Hunt & Goetz, 2002; Johnson, 2000; National TEEM Outreach, 2001; O'Sullivan & Page, 2000; Ward, Korinek & McLaughlin, 1998). These large-scale evaluations repeatedly demonstrate the importance of evaluating level of program implementation prior to assessing outcomes. When the impact of a national funding initiative for school reform was evaluated, the results were difficult to assess because many of the funded programs lacked appropriate measures of implementation and evaluation (Mirel, 2001). In an ethnographic evaluation of school collaboration teams, Gerstl-Pepin and Gunzenhauser (2002) commented on the challenges of researching interpretations of the collaborative processes and the accompanying issues of race, class, and epistemological
assumptions. When program evaluations include assessments of level of implementation, logistical challenges can be identified (National TEEM Outreach, 2001). Several program evaluations (Friend & Cook, 1997; Johnson, 2000; O'Sullivan & Page, 2000; Ward et al., 1998) used the evaluation methodology and outcomes for further program definition and refinement, as did Gravois and Rosenfield (2002) when evaluating implementation and outcomes of Instructional Consultation Teams.

Summary

Schools are faced with increasingly complex challenges, including many "difficult-to-teach" students (Fuchs & Fuchs, 1989; Fuchs et al., 1990). Many schools are using school based problem solving teams for more than the federally mandated special education assessment process (Bahr et al., 1999; Chalfant & Pysh, 1989; Gravois & Rosenfield, 2002). Teams are being tapped as sources of collaboration and consultation to foster teachers' professional development and skill growth (Iverson, 2002; Kovaleski, 2002; Zins & Erchul, 2002), as well as to improve student outcomes (Fullan, 1992; Rosenfield, 1992, 2002a; Rosenfield & Gravois, 1996). Unfortunately, research on the fundamental aspects of consultation and collaboration has not kept pace with the increase in consultation use. Researching consultation processes is challenging, yet important (Gresham & Kendell, 1987; Gutkin, 1993; Telzrow & Beebe, 2002). A vital aspect of consultation, and an important research topic, is the treatment integrity of the consultation process and the level of implementation of programs that support collaborative indirect service
delivery. Increased attention to consultation processes can benefit schools and students as school professionals increase their use of the indirect service delivery model. This study provides information on an existing measure of implementation and integrity of instructional consultation. The critical components of instructional consultation have been delineated by prior research (Fudell, 1992). A level of implementation measure, the LOI-R, has been developed, utilized, and revised based on research (Gravois & Rosenfield, 2002; Rosenfield & Gravois, 1996). However, the current LOI-R measure relies on self-report of consultation behaviors, and research indicates that self-report may not accurately reflect the implementation of an intervention or program. To fully assess the implementation of the instructional consultation process, an observational measure of the consultation sessions was needed. This study fulfilled that need by assessing the match between the reported behaviors and the actual behaviors in which consultation dyads engaged when conducting instructional consultation.
Chapter 3: Methodology

Participants

The participants of this study were 20 case manager-teacher consultation dyads. The case managers were school-based practitioners who had previously attended a 20-hour Instructional Consultation Team workshop. The initial workshop training occurred during the 2001-2002, 2002-2003, and 2003-2004 school years. Practitioners then elected to take an instructional consultation case and receive individual email-based coaching. To participate in case coaching, the school-based practitioners serving as case managers needed to engage teachers from their school communities to serve as consultees in the consultation cases. As per the coaching suggestions, case managers could choose to solicit teachers with whom they would work (Vail, 2003). For the on-line coaching component, case managers were required to audiotape their case sessions for coaching purposes. Case managers taped their sessions, and then mailed the tapes and supporting documentation to their coaches. Coaches responded to the case managers' taped sessions via email. The coaches' feedback was returned to the case managers prior to the following consultation session with the teachers, so that the case managers could incorporate the feedback into their sessions without delay. Participating teachers gave written consent for the case sessions to be audiotaped. In addition, teachers and case managers gave consent for their taped sessions to be used in this study.
Participant Selection/Recruitment

The case managers and teachers who consented to participate in this study were recruited during three different time periods. During the 2001-2002 school year, seven practitioners elected to participate in on-line coaching for their first instructional consultation case as case managers. These case managers were initially contacted by phone for participation in this study. If they agreed, the case managers were asked to provide contact information for the teachers with whom they worked during their coached instructional consultation case. All seven case managers verbally agreed to participate and gave teacher contact information, and additional information and consent forms were mailed to their school or home addresses, depending on their preference. Six case managers sent back consent forms, but two teachers were not able to be contacted. Of the four case managers and teachers who gave written consent, all the necessary data, including adequate numbers of taped sessions and the LOI-R interviews, were collected from three cases. During the 2002-2003 school year, five additional practitioners participated in the on-line coaching sessions. These practitioners and the teachers with whom they conducted the consultation cases were phoned with the request to participate. During the phone conversations, it was ascertained that the participants had completed the LOI-R interviews as part of their schools' Instructional Consultation Teams implementation process. These participants were asked to provide consent for permission to use the archived LOI-R interview data as well as their taped sessions. No further participation was requested. If they agreed, additional information and consent forms were mailed to their preferred addresses. Three of the five consultation
dyads' case managers and teachers returned consent forms and had an adequate number of tapes and the LOI-R interviews. During the 2003-2004 school year, 47 practitioners were scheduled to participate in on-line coaching. Packets including study information, consent forms, and return envelopes were mailed to the case managers at their school addresses. For each case manager, an additional packet was included with a handwritten note requesting that the participant pass the packet on to the teacher with whom he or she completed the on-line coaching case. Of the 47 mailed requests for participation, 19 did not respond. One mailing was returned by the post office. Three people returned the packets indicating that they were not involved in on-line coaching. Five case managers returned signed consent forms, but the teachers did not; unsuccessful attempts were made to reach those teachers by phone. Four consultation dyads had completed consent forms, but the case manager or teacher could not be reached to schedule the LOI-R interview. Two consultation dyads had completed consents and completed interviews, but their case session tapes were missing. From the 2003-2004 on-line coaching participants, 13 cases had signed consent forms and all necessary data.

Descriptive Data

Participants. Consultant case managers and consultee teachers shared some similar characteristics and positions within the schools in which the cases took place. The majority of participants were female and Caucasian. The majority of teachers were general education or unspecified specialty teachers with a mean of over nine years of experience. For the case managers, the positions and years of experience
reported may not have been indicative of the amount of career experience the case managers had in schools. Of the case managers who indicated that they served as Instructional Consultation (IC) Team facilitators, several indicated that they had more than 20 years of experience in non-IC Team facilitator roles. This information leads to the supposition that the three case managers who reported only their roles as IC Team facilitators may have served more years in their schools than reported for the facilitator role (see Table 1).

Table 1

Participant Characteristics
_________________________________________________________________
                                          Case Manager    Teacher
Gender
  Female                                       17            17
  Male                                          3             3
Race
  Black                                         0             1
  Caucasian                                    18            18
  Unspecified                                   2             1
Position
  Teacher (General Ed. or not specified)        4            18
  Teacher (Special Education)                   5             1
  ICT/IST Facilitator                           7             0
  School Psychologist                           5             0
  School Counselor                              2             0
  Teacher/Reading Consultant                    2             1
Years Experience
  Range                                       1-35          1-29
  Mean                                        10.3           9.6
_________________________________________________________________
School systems. During the 2001-2002, 2002-2003, and 2003-2004 school years, practitioners from over 25 school districts in six states engaged in the on-line coaching component of the Instructional Consultation Teams training. Typically, a school system will send one practitioner from each of several different schools within the district for the 20-hour training, with the option for practitioners to continue with the on-line coaching component. The school districts in which the participants in this study conducted their instructional consultation cases were located in three states. Thirteen cases were conducted in seven different Maryland school districts. Six cases were conducted in four North Carolina school districts. One case was conducted in a Michigan school district. Eighteen of the 20 cases took place in elementary schools; the remaining two took place in middle schools.

Instruments

Level of Implementation Scale-Revised (LOI-R)

The original LOI scale was initially developed for use in a collaborative consultation prereferral intervention program called Project Link (Fudell, 1992). To assess the level of implementation, Fudell operationalized the critical components of the collaborative consultation teams program being implemented in Project Link schools. The Project Link developers and facilitators then judged the components to be relevant and representative of the team intervention. Originally, three components were defined: 1) the collaborative consultation process, 2) the procedural system for delivering the process to the school community, and 3) elements that encouraged further evaluations using the LOI scale. The two final components were combined to form what is currently termed the delivery system
component (Fudell et al., 1996). Other changes for the Case Manager Interview and Teacher Interview included reformatted interview protocols, additional wording to clarify existing interview items, and the addition of interview items regarding model delivery. Appendix A contains an updated LOI-R administration manual (Gravois, Fudell & Rosenfield, 2005) and the interview protocol. The current LOI-R scale comprises two components. The collaborative consultation process component is defined as "a stage-based method of problem solving, utilizing interactive, non-hierarchical relationships among professionals with diverse areas of expertise" (Fudell et al., 1996, p. 189). The service delivery system component is defined as "the structure by which the collaborative consultation process is delivered by a team to a school" (Fudell et al., 1996, p. 190). Information needed to assess the implementation of both components is collected through interviews with various school personnel, including team members and those receiving services from the team, and through record review (Fudell, 1992; Vail, 1996). This investigation focuses on the collaborative process dimensions. Within the collaborative process component, individual interviews with the case manager and the referring teacher are used to determine if the consultation dyad completed the instructional consultation stages with integrity. These interviews are conducted after the case has at least reached the stage of Intervention Evaluation; they can also be conducted after the case has concluded. At the point of the interviews, the consultation dyad typically has engaged in at least seven consultation sessions, which may have occurred over two to three months. Within a school that has an operational Instructional Consultation Team, the LOI-R interviews are usually
conducted at the school with all the case managers and their consultee teachers at either the midpoint or toward the end of the school year. Through the self-report interviews, the presence or absence of each collaborative communication element is assessed. The items are scored from information gathered regarding collaboration between the consultant case manager and consultee teacher, assessment activities conducted, content of the intervention planned, assessment of the intervention, and use of the data for decision-making (Fudell, 1992; Fudell et al., 1996). In addition, agreement between the interview responses of the case manager and teacher is used to determine if several of the items were implemented with integrity. To earn a score of 1, an item must be assessed as implemented correctly and the two interviewees' responses must match. If the case manager's responses, the teacher's responses, or both indicate that an element was implemented incorrectly, or if the case manager's and teacher's responses do not match, the item is scored 0. The seven collaborative process dimensions each comprise several items, and dimension implementation is calculated as the percentage of the items earning a score of 1. The levels of implementation of dimensions are assessed within school systems in comparison to the criterion level of implementation of 80% (Vail, 1996). Implementation of 75% to 79.9% is considered "Approaching criterion level of implementation." Implementation of 65% to 74.9% is considered "Below criterion level of implementation." Implementation below 65% is considered "Far below criterion level of implementation."
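The scoring logic just described can be reduced to a minimal sketch. The match rule and criterion bands follow the description above; the example item scores are hypothetical.

# Sketch of LOI-R collaborative-process scoring: an item earns 1 only when
# the element was implemented correctly AND the case manager's and teacher's
# interview responses match; a dimension is the percentage of items scoring 1.

def score_item(implemented_correctly: bool, responses_match: bool) -> int:
    return 1 if implemented_correctly and responses_match else 0

def dimension_percentage(item_scores: list[int]) -> float:
    return 100.0 * sum(item_scores) / len(item_scores)

def criterion_band(pct: float) -> str:
    """Criterion bands used for school-system comparisons (Vail, 1996)."""
    if pct >= 80.0:
        return "At or above criterion level of implementation"
    if pct >= 75.0:
        return "Approaching criterion level of implementation"
    if pct >= 65.0:
        return "Below criterion level of implementation"
    return "Far below criterion level of implementation"

# Hypothetical dimension of four items: three implemented with matching
# responses, one where the dyad's answers disagreed.
scores = [score_item(True, True)] * 3 + [score_item(True, False)]
pct = dimension_percentage(scores)  # 75.0
print(criterion_band(pct))          # Approaching criterion level of implementation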
Case Manager Interview and Teacher Interview forms. The LOI-R Case Manager Interview and Teacher Interview forms are used to script the LOI-R interviews (Fudell, 1992; Fudell et al., 1996). For each case, the two interviews are conducted separately, so that agreement between the case manager's and teacher's responses can be compared. The Case Manager Interview consists of 17 possible items, and the Teacher Interview consists of 18 possible items. Two items on each interview form pertain to the service delivery system component, not the collaborative process component. In addition, different items are scored depending on whether the referral concern is a behavioral or an academic issue. Scale validity and reliability measures. Content validity of the original LOI scale was assessed through an expert panel composed of the model developers and four district facilitators who assisted schools in adopting the Project Link model (Fudell, 1992). The panel members judged that the scale adequately measured the critical components. Some changes in wording and scoring to allow for variations between schools were suggested. Interrater reliability and test-retest reliability were assessed on the original LOI. For interrater reliability, two data collectors scored four audiotaped LOI interviews (two case manager-teacher dyads; Fudell, 1992). Interrater reliability was calculated by dividing agreements by the sum of agreements and disagreements. Reliability on the four interviews ranged from .79 to 1.00, and total interrater reliability was .88. Interrater reliability was rechecked at the second data collection; reliability ranged from .85 to 1.00, and total interrater reliability was .92. Test-retest reliability was assessed during the initial data collection period. Interviews were re-administered by
phone to two available teachers and one case manager within one week of the initial interview. Reliability ranged from .69 to .85, and total test-retest reliability was .78. During the second data collection, interviews were re-administered to two teachers and a principal; reliability ranged from .85 to 1.00, and total test-retest reliability was .88. These levels of reliability were found to be adequate for use of the LOI scale. Interrater reliability. The present study investigated the interrater reliability of the LOI-R interview process. For consenting participants, the LOI-R interviews were audiotaped. Interrater reliability assessments were conducted on four cases. After the interviewers scored the cases, the investigator scored the four cases using the audiotaped interviews. Using the percentage of agreements divided by the sum of agreements and disagreements, the interrater reliability ratings ranged from 90% to 96.8%, which is considered to be within the acceptable range.

Level of Implementation- Tape Version

Protocol development. A protocol for scoring the taped case sessions was developed, and is found in Appendix B. The Level of Implementation- Tape Version was used as the criterion for comparison for the self-report interview responses. It was developed to closely mirror the item wording of the LOI-R Case Manager Interview and Teacher Interview. The 18 case manager interview items and the 17 teacher interview items used for calculating the dimensions for the collaborative consultation process were operationalized. The items were also reworded to account for listening to the case sessions as a third person and to exclude second-person pronouns. When listening to the tapes of the case sessions, the scorer determined if the case manager and teacher addressed the components within the consultation
session. Each case session audiotape was reviewed for the presence or absence of the critical components. Interrater reliability. The Tape Sessions Scoring Protocol was piloted on two cases. A graduate student trained and experienced in both instructional consultation and LOI-R interviewing underwent a 90-minute training regarding listening to the tape sessions and scoring the Level of Implementation- Tape Version. A manual containing specific scoring guidelines was reviewed. Logistical concerns were discussed, such as how to account for missing tapes and how to respond if the tape sessions did not indicate the item response. In addition, the graduate student was given an opportunity to ask questions regarding the protocol and the match with the LOI-R interview items. After the training, the manual was updated to include the information about which the graduate student had asked (see Appendix C). The graduate student and the investigator listened to the first pilot case separately, and then met to compare scoring and discuss differences. Using Cohen's kappa, interrater agreement was calculated as .96. The graduate student and the investigator listened to the second pilot case separately, then met to discuss scoring differences. When comparing protocol scoring, there were no differences on the scoring of the items, and interrater reliability was calculated as 1.00. Interrater reliability is considered satisfactory when the obtained kappa is greater than .70 (Cohen's Kappa, 2004). Therefore, the interrater reliability for the Level of Implementation- Tape Version as assessed on the pilot cases was satisfactory.
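Both reliability indices used in this study are straightforward to compute: percent agreement (agreements divided by the sum of agreements and disagreements) and Cohen's kappa, which corrects for chance agreement. A minimal sketch follows, using hypothetical rater codes rather than the actual study data.

# Sketch of the two interrater reliability indices used in this study, with
# the .70 satisfactory threshold for kappa noted in the text.
from collections import Counter

def percent_agreement(r1: list[int], r2: list[int]) -> float:
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1: list[int], r2: list[int]) -> float:
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: summed products of each category's marginal proportions.
    p_exp = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical presence/absence codes from two scorers of the same tape.
r1 = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0]
r2 = [1, 1, 0, 0, 1, 1, 0, 1, 1, 1]
kappa = cohens_kappa(r1, r2)
print(f"agreement = {percent_agreement(r1, r2):.2f}, kappa = {kappa:.2f}, "
      f"satisfactory: {kappa > 0.70}")
# agreement = 0.90, kappa = 0.78, satisfactory: True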
After the pilot cases were assessed, the investigator listened to the audiotapes and scored the first four cases of the study using the Level of Implementation- Tape Version. The interrater reliability was then reassessed using the fourth case. The trained graduate student listened to the fourth case and completed the Level of Implementation- Tape Version. Using Cohen's kappa, interrater reliability was calculated at .92. The graduate student and the investigator discussed and resolved differences in scoring, and final item scoring for this case was based on the resolution of those differences.

Procedure

Participants were solicited from existing case manager-teacher dyads that had audiotaped their case sessions for the on-line coaching requirement. The case managers were school-based practitioners who received workshop training in Instructional Consultation Teams, and then participated in on-line coaching for an actual instructional consultation case. Two case managers obtained cases from referrals to the existing referral systems within their schools. At least 13 case managers solicited the participating teachers to engage in consultation for the case managers' practice cases. The Instructional Consultation Teams training in which the case managers participated involved a 20-hour workshop focusing on developing the knowledge and skills required to be an effective instructional consultant (Gravois et al., 2002; Vail, 2003). The topics addressed in the training included explication of the critical components of the Instructional Consultation Teams model, assumptions of instructional consultation, collaborative communication skills, the problem solving
process and stages, Curriculum Based Assessment/instructional assessment, and using the Student Documentation Form (SDF). After completing the training, all workshop attendees had the opportunity to engage in on-line coaching to gain feedback on their use of the skills learned in the training. The process of on-line coaching was developed to address time and logistical constraints of providing feedback to newly trained consultants on their developing skills (Vail, 2003). Coaching for instructional consultation cases consists of several activities, which are cyclical in nature. First, the case manager and coach engage in a pre-conference to select the focus skills for development and practice, and to determine a method of collecting data on the case manager's use of the skills. Next, the case manager meets with the referring teacher, conducts the consultation session, and collects data on the case manager's use of the identified skills. Third, the coach and case manager conduct a coaching conference to review the data, and then cycle back to decide on a continued focus of skill development. In the on-line coaching process, the case manager engages in consultation sessions with the teacher, audiotapes the session, and mails the tape and any case documentation to the coach (Vail, 2003). The coach listens to the audiotape and sends a coaching response by email. Coaches are also available to respond to any direct emailed questions posed by the case managers as they conduct their cases. The on-line coaches were school psychologists experienced with instructional consultation, having completed a two-semester course through the University of Maryland College Park School Psychology program and having conducted cases in their internships and/or at their job sites (Vail, 2003). In
addition, coaches were experienced in traditional coaching, having served as coaches in schools that were newly implementing Instructional Consultation Teams. The coaches received a Coaching Manual to assist with the logistics of the coaching process. The case managers who participated in the on-line coaching course were provided with a manual addressing steps for recording and sending tapes to coaches, suggestions for finding a teacher consultee and for taping, and guide sheets for the problem solving stages (Vail, 2003). The manual stated the requirement of taping at least five consultation sessions: a minimum of one for contracting, two for problem identification, one for intervention design, and one for intervention evaluation. After case manager-teacher dyads completed their cases, the LOI-R Case Manager Interview and Teacher Interview were administered, if needed, by research staff other than the investigator. The interviews were conducted either face-to-face or by phone. With the permission of the case manager and teacher, the interviews were audiotaped, as per standard LOI-R administration. There was variation in the amount of time that passed between the final consultation session and the LOI-R interviews. In some schools, interviews were conducted within a month of the cases' conclusion. For other cases, in which the research staff conducted the interviews by phone, several months passed between the completion of the case and the interviews. At the conclusion of the consultation cases, the audiotapes created for coaching purposes were scored using the Level of Implementation- Tape Version. Cases had varying numbers of sessions taped, and the sessions varied in length (see
Table 2). The majority of cases had five or more sessions, with a range of 6 minutes, 30 seconds to 18 minutes, 40 seconds.

Table 2

Summary Information on Available Taped Sessions
_________________________________________________________________
                                                Time
                                    _____________________________
Session Type                         n    Average    Min      Max
Contracting                         18     6:31      4:20    14:07
Problem ID                          19    17:40      7:27    29:42
Problem ID (2nd session)            15    15:11      7:22    31:00
Problem ID (3rd session)             9    17:33     10:02    29:17
Problem ID (4th session)             3    12:33      5:28    19:17
Problem ID (5th session)             1     8:08
Int. Design                         19    18:10      6:10    26:57
Int. Design (2nd session)            4    14:49      5:37    23:02
Int. Design (3rd session)            1     4:10
Int. Implement/Evaluation*          15     9:50      1:23    18:05
Int. Implement/Evaluation (2nd)*     4    10:24      3:21    17:37
Closure                              2     7:55      3:07    12:42
_________________________________________________________________
*Note. For 5 cases, the Closure stage was included with either the first or second session of the Intervention Implementation/Evaluation stage.
The investigator listened to each of the session tapes available while noting the presence or absence of the critical components and scoring the items as 1 or 0. While listening to the tapes to score the Level of Implementation- Tape Version, the length of each session was timed. In addition, qualitative notes were taken regarding subjective judgments of instructional consultation process implementation.

Data Analysis

To respond to the research questions, several data analyses were performed. For question 1, "What are the levels of implementation for the collaborative process dimensions, as determined by the Level of Implementation- Tape Version?" and question 2, "What are the levels of implementation for the collaborative process dimensions, as determined by the LOI-R Case Manager Interviews and Teacher Interviews?" the same types of data analyses were performed: frequency and percentage data regarding item and dimension scores were calculated. Quantitative and qualitative analyses were conducted for question 3: "What is the relationship between the levels of implementation as assessed through the LOI-R interviews and through the Level of Implementation- Tape Version?" To compare the LOI-R interview responses to the criterion Level of Implementation- Tape Version, item comparisons were assessed using the McNemar test. McNemar tests were conducted for all LOI-R and Level of Implementation- Tape Version item pairs for which there was variability in ratings and more than one case. McNemar tests were not completed for items with all "Yes" ratings or for the five items that pertained only to behavioral cases.
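A minimal sketch of the exact McNemar computation on paired dichotomous item scores follows, assuming the scipy library is available; the example scores are hypothetical. Only the discordant pairs carry information, and the exact two-tailed p-value is a binomial test of the discordant split against a chance probability of .5.

# Sketch of the exact McNemar test for paired dichotomous scores.
from scipy.stats import binomtest

def mcnemar_exact(loi: list[int], tape: list[int]) -> float:
    yn = sum(l == 1 and t == 0 for l, t in zip(loi, tape))  # LOI-R yes, tape no
    ny = sum(l == 0 and t == 1 for l, t in zip(loi, tape))  # LOI-R no, tape yes
    if yn + ny == 0:
        return 1.0  # perfect agreement: no calculation needed
    return binomtest(min(yn, ny), yn + ny, 0.5, alternative="two-sided").pvalue

# Hypothetical scores for one item across 20 cases: one YN and three NY
# discordant pairs yield an exact two-tailed p of .625.
loi  = [1] * 16 + [1, 0, 0, 0]
tape = [1] * 16 + [0, 1, 1, 1]
print(round(mcnemar_exact(loi, tape), 2))  # 0.63, no significant discrepancy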
Dimension comparisons between the LOI-R interview results and the Level of Implementation- Tape Version criterion results were analyzed using summary percentage data and graphical representations. In addition, qualitative information collected when listening to the taped sessions was helpful in examining individual cases. After comparing the LOI-R information and the Level of Implementation- Tape Version results, some cases required additional investigation of the individual Case Manager Interview Form and Teacher Interview Form to determine patterns of discrepancies.
Chapter 4: Results

Results are described in this chapter. To answer research question 1, the following are presented: a summary of the items that were scored using the Level of Implementation- Tape Version when listening to the audiotapes, a description of the critical components and dimensions implemented, and an analysis of the dimensions that were calculated. To answer research question 2, a summary of the dimensions implemented during the Instructional Consultation Teams cases, as assessed by the LOI-R Case Manager Interviews and Teacher Interviews, is presented. Research question 3 is addressed through a comparison of item implementation as assessed by the criterion Level of Implementation- Tape Version used when listening to the taped sessions and by the LOI-R interviews, and a comparison of the dimensions calculated from the criterion Level of Implementation- Tape Version in contrast to the dimensions as reported in the LOI-R interviews. Graphical representations of dimension comparisons and discussion of individual cases are also included in the chapter.

Research Question 1

What are the levels of implementation for the collaborative process dimensions, as determined by the Level of Implementation- Tape Version?

Summary of the Available Data

The audiotaped sessions revealed a substantial amount of information. However, there were gaps in the information provided. For several cases, unavailable tapes made it challenging to assess each item on the Level of Implementation- Tape Version to determine if the critical components were present and not taped, or if the
case manager consultants did not address those concerns during the sessions. To add to the complexity, several tapes were mislabeled. Overall, the majority of the cases had audiotaped data for the majority of sessions. Of the 20 cases, 11 were considered to be complete. Two cases without contracting sessions and one case without the Problem Identification stage yielded a great deal of scoreable information because those cases reached the Intervention Implementation/Evaluation stage. Of the six cases that did not reach the Intervention Implementation/Evaluation stage, several yielded less information than the cases that were missing other stages.

Individual Item Responses

In all 20 cases, the majority of items could be assessed when scored with the Level of Implementation- Tape Version (see Appendix D). Of the 20 total cases, 15 had information available to determine responses for 84.6% to 100% of the items on the Level of Implementation- Tape Version; the remaining five cases had more limited information. One case had scoreable responses for each item (100% scoreable). Ten cases had all scoreable items except for one or both of the final two items: "C17) Did it appear that the teacher participated in all meetings (including IC Team meetings) during which the referral problem was discussed?" and "T17) If it was specified during the taped sessions, state what the teacher did with the completed referral/request for assistance form." These items are included within Dimension 1- Collaborative Communication, but are also used for tracking systems implementation of Instructional Consultation Teams in a school environment.
For many participants, these two items were not applicable. For Item C17, the response that demonstrates appropriate implementation indicates that the teacher participated in all meetings during which the referral question was discussed, including the teacher attending Instructional Consultation Team meetings during which the specific case would be discussed. For Item T17, the response that demonstrates appropriate implementation indicates that the referring teacher submitted the referral/request for assistance form to the Instructional Consultation Team by placing it in a designated location. In many of the schools in which the cases took place, the case managers were learning the process in order to introduce the Instructional Consultation Teams model to the school. Twelve cases took place in schools that did not have Instructional Consultation Teams at the time the cases began. The remaining 8 cases took place in schools that were beginning Instructional Consultation Teams, but the teams were likely not fully functioning during the times the cases took place. Since these schools did not have regular Instructional Consultation Team meetings, the teacher would not have participated in those meetings, and there typically were no designated locations for the referring teachers to place the completed referral/request for assistance forms. In addition, because the case managers were learning this new problem solving model and were receiving coaching on their cases, at least 13 and as many as 17 case managers solicited cases from teachers. As part of the coaching process, the case managers were instructed to select teachers they suspected would be receptive to the instructional consultation process and willing to audiotape their sessions (Vail, 2003).
Of the 20 cases, 3 appeared to have all sessions taped but still could not be scored on one item that was not readily apparent from listening to the sessions. The item states: "T13) Describe what type of information was collected during the intervention and how often the information was collected. Was the information graphed/charted?" In these cases, the presence or absence of the case manager and/or referring teacher graphing the data could not be determined by listening to the taped sessions. Six cases were missing session tapes, and, therefore, not all items could be scored with the Level of Implementation- Tape Version. Two of the six cases were missing the first tape, which should contain the contracting stage. For these two cases, the presence or absence of the first and/or second item could not be ascertained; in one case, the response to the second item could be heard within the second, subsequent taped session. Four of the six cases with missing tapes were missing the final case session. Therefore, the presence or absence of the items addressing the final stage of instructional consultation could not be determined. For the cases with missing tapes, it was difficult to determine the presence or absence of several of the Level of Implementation- Tape Version items. It is possible that the case managers did not complete the problem solving stages with high levels of implementation. It is also possible that the consultation dyads completed the appropriate stages within their later, untaped sessions. It could not be determined if these items were unscoreable due to a lack of case session tapes, or if the items were not present within the case managers' and teachers' sessions. For example, after an intervention strategy was planned, it was not apparent if the case manager and teacher determined how the effectiveness of the strategy was to
be monitored. The case manager and teacher discussed case progress, but in general, subjective terms, without relying on data. In later sessions, the case manager and teacher may have revisited the subject of data collection and planned for objective monitoring and data-based decision-making. However, these discussions were not apparent from the available taped session information. Although there were cases with data points missing for various reasons, the response rates for the items were all above 70%, except for two items (see Table 3). With the exception of the 5 items pertaining to cases addressing behavioral concerns, the items of the LOI-R were able to be validated using the information from the audiotapes. Of the 19 non-behavioral items, 11 had 18, 19, or 20 scoreable responses. Two items (C17 and T17) are related to Instructional Consultation Team functioning, and may not have been relevant in most cases. These items had the lowest response rates, and were the only items for which the response rates were below 70%. Five items pertain only to cases with behavioral concerns. Only one case within this data set addressed a behavioral concern; therefore, the behavioral items were not able to be validated.

Item Implementation Results

The overall score for each item, indicating the presence or absence of the item, was also high. The scoring on the Level of Implementation- Tape Version is 1 point for the presence of the item or 0 points for its absence. Excluding the behavioral items, because there was only one case addressing a behavioral concern, the range of mean scores per item was .74 to 1.00 points. The majority of items (18 of 24, including behavioral items) obtained a mean score of .94 points or higher.
Table 3

Summary Information for Scoreable Items from Tape Sessions
_________________________________________________________________
Item number      n     Percent    Mean score     SD
C1/T1           18      90.0         .94         .24
C2/T2           19      95.0         .84         .37
C3/T3           20     100.0         .95         .22
T4              19      95.0        1.00          0
T5              19      95.0        1.00          0
C4              20     100.0        1.00          0
C5              19      95.0        1.00          0
C6              19      95.0         .74         .45
T6a              1     100.0        1.00          0
C7a              1     100.0        1.00          0
C8/T8a           1     100.0        1.00          0
C9/T7a           1     100.0        1.00          0
C10a             1     100.0         .00          0
C11/T9          20     100.0        1.00          0
C12/T10         20     100.0        1.00          0
C13/T11         19      95.0        1.00          0
C14/T12         17      85.0        1.00          0
T13             14      70.0         .79         .43
T14             17      85.0        1.00          0
C15             17      85.0        1.00          0
T15             15      75.0         .87         .36
C16/T16         16      80.0         .88         .34
C17              9      45.0        1.00          0
T17              4      20.0        1.00          0
_________________________________________________________________
a behavioral item.
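As a quick arithmetic check, the mean score and (sample) standard deviation in any row of Table 3 follow directly from its scoreable responses. A minimal sketch for item C6, which was present in 14 of its 19 scoreable cases:

# Reproduce the Table 3 row for item C6: 14 of 19 scoreable cases scored 1.
from statistics import mean, stdev

c6_scores = [1] * 14 + [0] * 5
print(round(mean(c6_scores), 2), round(stdev(c6_scores), 2))  # 0.74 0.45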

Dimension Data

The Level of Implementation- Tape Version responses were used to calculate the percentage of implementation for each of the seven dimensions that comprise the collaborative process section of the LOI-R. For Dimension 1- Collaborative Communication, unavailable items C17 and T17 were not included in the calculations, corresponding to the procedure for unavailable/inapplicable data in the calculations from the LOI-R interviews. The level of implementation of the dimensions, as assessed by the Level of Implementation- Tape Version scored by listening to the audiotaped sessions, was within the acceptable range. No dimension was implemented at a level below the criterion level of 80% (Vail, 1996), and three dimensions were implemented at 100% (see Table 4).
Table 4

Percentage of Dimensions Implemented as Observed by Scoring Level of Implementation- Tape Version
_________________________________________________________________
Dimension                           n     Mean     Min      Max
1 - Collaborative Communication    20     96.3     88.8    100.0
2 - Contracting                    19     89.5     50.0    100.0
3 - Problem Identification         20     94.3     66.7    100.0
4 - Intervention Development       20    100.0    100.0    100.0
5 - Intervention Implementation    18    100.0    100.0    100.0
6 - Evaluation & Follow Up         17     82.4      0.0    100.0
7 - Curriculum Based Assessment    20    100.0    100.0    100.0
_________________________________________________________________
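A minimal sketch of how such a dimension percentage can be computed when some items are unscoreable, as with items C17 and T17 above, follows. The exclusion rule comes from the description above; the grouping of items shown is illustrative only, not the actual composition of Dimension 1.

# Sketch: dimension implementation as the percentage of scoreable items
# present, with unavailable/inapplicable items excluded from the denominator
# rather than counted as absent. The item grouping here is illustrative.
from typing import Optional

def dimension_pct(item_scores: dict[str, Optional[int]]) -> float:
    scoreable = [s for s in item_scores.values() if s is not None]
    return 100.0 * sum(scoreable) / len(scoreable)

dim = {"C1": 1, "T1": 1, "C2": 1, "T2": 1, "C16": 1, "T16": 0,
       "C17": None, "T17": None}  # None = unscoreable in this school
print(f"{dimension_pct(dim):.1f}%")  # 83.3%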

Research Question 2

What are the levels of implementation for the process dimensions implemented, as determined by the LOI-R Case Manager Interviews and Teacher Interviews? As assessed by the LOI-R interview process, all but one of the dimensions were implemented at or above the acceptable 80% criterion level (Vail, 1996; see Table 5).
Table 5

Percentage of Dimensions Implemented as Reported in LOI-R Interviews
_________________________________________________________________
Dimension                           n    Mean     SD     Min      Max
1 - Collaborative Communication    20    89.2    20.8    29.4    100.0
2 - Contracting                    20    97.5    11.2    50.0    100.0
3 - Problem Identification         20    93.3    11.1    71.4    100.0
4 - Intervention Development       20    85.0    31.5     0.0    100.0
5 - Intervention Implementation    20    95.0    15.4    50.0    100.0
6 - Evaluation & Follow Up         20    78.8    34.7     0.0    100.0
7 - Curriculum Based Assessment    20    96.3     9.2    75.0    100.0
_________________________________________________________________

The levels of implementation for each of the dimensions were relatively high as determined by the traditional LOI-R interview and scoring process. As assessed by the interview process, implementation was above 90% for four of the seven dimensions. The lowest level of implementation was 78.8%, which is below the 80% criterion level considered to indicate adequate implementation (Vail, 1996).

Research Question 3

What is the relationship between the levels of implementation as assessed through the LOI-R interviews and through the Level of Implementation- Tape Version?
Item Comparison

For all 26 item comparisons, there were no significant differences between the proportions of agreement on the presence or absence of the items as assessed by the LOI-R interview process and by listening to the audiotapes of case sessions (see Table 6). Using the McNemar test, the proportion of "yes" responses for a particular item did not differ between the two measures. For 10 of the 26 comparisons, the McNemar test calculations were not necessary because of perfect agreement between the LOI-R and the Level of Implementation- Tape Version; in these cases, the unanimous presence of "yes" responses indicated that there was no difference between the two methods of assessing implementation of the items. In addition, five comparisons were of behavioral items, for which there was only one case. Of these five items, only one comparison needed to be calculated; the other four indicated perfect agreement.

Dimension Comparisons

Summary data. Summary data indicated that there was a high level of overall agreement between the mean percentages implemented for each of the seven dimensions as measured by both the LOI-R interview and the Level of Implementation- Tape Version (see Table 7). This result is not unexpected, given the high degree of implementation for all dimensions as assessed by both methods.
Table 6

Frequencies and Exact Significance Levels of LOI-R Item and Tape Scored Item Pairs
______________________________________________________________________________
                        Frequencies                 Exact sig.
Item pair        YY     NN     YN     NY            (2-tailed)
C1/T1            16      0      1      1            1.00 (NS)
C2/T2            16      0      3      0             .25 (NS)
C3/T3            16      0      1      3             .63 (NS)
T4               17      0      0      1            1.00 (NS)
C5               18      0      0      1            1.00 (NS)
T5               18      0      0      1            1.00 (NS)
C6               14      1      4      0             .13 (NS)
C11/T9           17      0      0      3             .25 (NS)
C12/T10          17      0      0      3             .25 (NS)
C13/T11          17      0      0      2             .50 (NS)
C14/T12          16      0      0      1            1.00 (NS)
T13              11      1      2      0             .50 (NS)
T15              11      0      2      2            1.00 (NS)
C16/T16          12      0      2      2            1.00 (NS)
C17               7      0      0      1            1.00 (NS)
______________________________________________________________________________
Note. YY = presence of item (score of 1) on both the LOI-R interview and the Level of Implementation- Tape Version. NN = absence of item (score of 0) on both the LOI-R interview and the Level of Implementation- Tape Version. YN = presence of item (score of 1) on the LOI-R interview and absence of item (score of 0) on the Level of Implementation- Tape Version. NY = absence of item (score of 0) on the LOI-R interview and presence of item (score of 1) on the Level of Implementation- Tape Version.

Table 7

Summary of Percentages of Level of Implementation for the Dimensions
_________________________________________________________________
Dimension                          Taped Sessions   LOI-R Interview
1) Collaborative Communication          96.3             89.2
2) Contracting                          89.5             97.5
3) Problem Identification               94.3             93.3
4) Intervention Development            100.0             85.0
5) Intervention Implementation         100.0             95.0
6) Evaluation and Follow Up             82.4             78.8
7) Curriculum Based Assessment         100.0             96.3
_________________________________________________________________

Although there was not perfect agreement between the taped session data and the interview results, there were commonalities. Cases scored the lowest percentage of implementation on Dimension 6- Evaluation and Follow Up as assessed by both the LOI-R interview process and the Level of Implementation- Tape Version. Two of the most highly implemented dimensions as assessed by the Level of Implementation- Tape Version, Dimension 5- Intervention Implementation and Dimension 7- Curriculum Based Assessment,
were also two of the three most highly implemented dimensions as assessed by the LOI-R interview. Line graph data. The line graphs for each of the seven dimensions allow a one-to-one comparison of the LOI-R interview results and the Level of Implementation- Tape Version results for each case. The data tended to display similar patterns for the percentages of dimensions implemented as assessed by the two measures. In addition, when large discrepancies between the two measures' scores were found in a particular case, the individual interview responses from the case manager and the teacher for that case were compared. For Dimension 1- Collaborative Communication, 7 of the 20 cases had perfect agreement between the LOI-R calculations and the Level of Implementation- Tape Version calculations, with implementation of 100% as assessed by both measures. For 11 cases, the LOI-R calculations and the Level of Implementation- Tape Version calculations diverged by the scores of one or two items. For Case 15 and Case 16, the data were largely discrepant, with the Level of Implementation- Tape Version calculations indicating 100% implementation and the LOI-R calculations indicating 33% and 29% implementation, respectively. When listening to the taped sessions, the consultation dyads completed the necessary components of the dimension. However, when discussing the cases with the interviewer for the LOI-R interview process, the case managers and teachers did not report completing the elements, and their responses frequently did not indicate agreement (see Figure 1).
Figure 1. Percentages of Dimension 1- Collaborative Communication for LOI-R data and Level of Implementation- Tape Version data.

For Dimension 2- Contracting, of the 19 cases for which dimensions were calculated, 14 had perfect agreement between the LOI-R and the Level of Implementation- Tape Version data, with implementation of 100% as assessed by both measures (see Figure 2). Four cases (Cases 3, 5, 13, and 14) had lower tape version scores (measuring 50% implementation) than LOI-R scores (measuring 100% implementation).
Figure 2. Percentages of Dimension 2- Contracting for LOI-R data and Level of Implementation- Tape Version data.

For Dimension 3- Problem Identification, of 20 cases, 12 had perfect agreement between the LOI-R calculations and the Level of Implementation- Tape Version calculations, with implementation of 100% as assessed by both measures (see Figure 3). During the sessions for Case 1 and Case 2, the consultation dyad did not engage in goal setting. For Cases 9, 11, and 19, the dyads did not specify the terminal goal for the concerns that they were addressing. In contrast, with the exception of Case 16, the participants specified terminal goals during the LOI-R interview.


Figure 3. Percentages of Dimension 3- Problem Identification for LOI-R data and Level of Implementation- Tape Version data.

For Dimension 4- Intervention Development, all 20 cases were scored at 100% implementation as assessed by the Level of Implementation- Tape Version. Of these, 15 had perfect agreement with the LOI-R interview data (see Figure 4). Of the remaining cases, three (Cases 3, 18, & 19) had scores of 67% from the LOI-R interview. For each case, the session information indicated that the case manager and teacher completed all of the elements of the Intervention Planning stage. However, during the LOI-R interview for Case 3, the case manager's and teacher's responses both indicated a behavioral intervention, but the case manager also mentioned an academic intervention that the teacher did not mention. For Case 19, the LOI-R interview indicated that the case manager's and teacher's responses did not indicate agreement on the intervention to be implemented. For Case 18, the teacher's response did not match the case manager's in describing the method of determining the effectiveness of the intervention. The case manager indicated that data were collected; the teacher's response indicated a more informal method of assessment. Cases 15 and 16 scored 0% for this dimension as measured by the LOI-R interviews, but scored 100% implementation as assessed by the Level of Implementation- Tape Version.


Figure 4. Percentages of Dimension 4- Intervention Development for LOI-R data and Level of Implementation- Tape Version data.

For these cases, when listening to the taped information, it appeared that the case managers and teachers completed the elements of the intervention planning stage. However, during the interview, the case managers' and teachers' responses did not agree. It appeared that the teachers were not aware of the data collection that the case managers completed and used to make decisions regarding the students' progress.

For Dimension 5- Intervention Implementation, of the 18 cases for which the dimension was calculated, 17 showed agreement, with 100% implementation as assessed by both the LOI-R calculations and the Level of Implementation- Tape Version calculations (see Figure 5). For Case 15, the taped information indicated that the consultation dyad completed all of the elements necessary for the dimension. However, during the LOI-R interview, the teacher's response indicated that she and the case manager did not meet on a regular basis to determine if the intervention was being implemented as planned.


Figure 5. Percentages of Dimension 5- Intervention Implementation for LOI-R data and Level of Implementation- Tape Version data.


For Dimension 6- Evaluation and Follow Up, of the 17 cases for which the dimension was calculated, 8 had perfect agreement between the LOI-R interview and the Level of Implementation- Tape Version, with implementation of 100% as assessed by both measures (see Figure 6). The remaining case scores were more disparate than for the other dimensions.


Figure 6. Percentages of Dimension 6- Evaluation and Follow Up for LOI-R data and Level of Implementation- Tape Version data.

Case 1 demonstrated opposing scores for Dimension 6- Evaluation and Follow Up: the LOI-R interview indicated 100% implementation, while the taped information indicated 0% implementation. For this case, during the LOI-R interview the case manager and teacher stated that they collected data during the intervention, and that they used those data to assess the student's progress and make decisions regarding the intervention plan. However, when listening to the taped information, it appeared that these elements were not in place. In addition, it appeared that the teacher relied on informal measures to assess the student's progress.

For Case 9, the LOI-R interview indicated 25% implementation while the taped information indicated 100% implementation for Dimension 6. Although the case manager and teacher charted the student's progress using curriculum based assessment data during the taped case sessions, during the LOI-R interviews the teacher reported that the data were inconsistently graphed. In addition, it was indicated that the data were not used to make decisions regarding the student's progress or the intervention.

For Case 11, the LOI-R interview indicated 50% implementation, while the taped information indicated 100% implementation for Dimension 6. For this case, both measures indicated that the case manager and teacher collected and graphed student data, and, when listening to the taped sessions, it appeared that they used the data to make decisions regarding the student's progress and any changes to the intervention. However, the LOI-R interviews indicated that the case manager and teacher did not describe using the data for decision making, but instead described informal observations to determine student progress.

For Case 13, the inverse appeared to occur. Within the sessions, the case manager and teacher appeared to be using informal information to assess student progress. However, during the LOI-R interview, both reported using data to make decisions regarding the discontinuation of the intervention because the student had met the goal. It may be that, although the taped information revealed part of the problem solving process, the case manager and teacher used the data to modify the intervention so that they could better assess the student. They appear to have continued the case in sessions beyond those that were taped.

For Dimension 7- Curriculum Based Assessment, all 20 cases were scored at 100% implementation as assessed by the Level of Implementation- Tape Version. LOI-R interview data indicated agreement of 100% implementation for 17 cases (see Figure 7). The remaining three cases (Cases 2, 16, and 19) were scored at 75% implementation as assessed by the LOI-R interviews. For each of these cases, during the taped sessions, the case manager and teacher discussed the use of curriculum based assessment, including analysis of entry-level skills, error analysis, and specification of a terminal academic goal. However, within the LOI-R interviews for these cases, either the case manager, the teacher, or both did not detail each of the elements of the curriculum based assessment used to conduct academic analysis for the student's concern.

Individual case analysis. Case 15 and Case 16 displayed a high degree of variability in dimension scores when comparing the results of the LOI-R interview and the Level of Implementation- Tape Version. For these two cases, the LOI-R Case Manager Interview and the Teacher Interview were reviewed to determine where the differences occurred.


Figure 7. Percentages of Dimension 7- Curriculum Based Assessment for LOI-R data and Level of Implementation- Tape Version data.

For Dimension 1- Collaborative Communication and Dimension 4- Intervention Development, both cases were assessed at 100% implementation as scored by the Level of Implementation- Tape Version, but scored substantially lower as assessed by the LOI-R interview. On Dimension 5- Intervention Implementation and Dimension 6- Evaluation and Follow Up, one or both cases were missing item scores, so the dimension percentages could not be calculated (see Table 8). Qualitative information regarding the cases indicated that Case 15 and Case 16 achieved lower implementation scores for different reasons.

Table 8
Percentage of Dimension Implementation for Case 15 and Case 16
_________________________________________________________________
                                      Case 15            Case 16
Dimension                          LOI-R    Tape      LOI-R    Tape
1) Collaborative Communication      33.0    100.0      29.0   100.0
2) Contracting                     100.0    100.0      50.0   100.0
3) Problem Identification           71.0    100.0      71.0    83.0
4) Intervention Development          0.0    100.0       0.0   100.0
5) Intervention Implementation      50.0    100.0      50.0     NA
6) Evaluation and Follow Up          0.0      NA        0.0     NA
7) Curriculum Based Assessment     100.0    100.0      75.0   100.0
_________________________________________________________________
Note. NA = Not able to be calculated due to missing items.

For Case 15, the case manager and teacher appeared to be using the instructional consultation process correctly. However, only the first three sessions' audiotapes were available for analysis. Therefore, Dimension 6- Evaluation and Follow Up could not be calculated using the taped data. In addition, examination of the Case Manager Interview responses and the Teacher Interview responses revealed that the case manager and teacher did not accurately report what transpired within the consultation sessions. Therefore, the LOI-R interview results indicated lower implementation than may have actually occurred.

In contrast, although the Case 16 case manager and teacher completed many of the elements needed for the dimensions, it appeared that they did not follow the instructional consultation process. Within the taped sessions, the case manager and teacher discussed many of the elements of the consultation process. However, it appeared that they did not implement an intervention or use data for decision making about the student's progress and the integrity of intervention implementation. From the taped information, it was unclear if the consultation dyad assessed whether the intervention was implemented as planned. It also appeared that the case manager was bringing the case to closure within the fifth session, although the consultation dyad had not evaluated the intervention or tracked the student's progress. Although it appeared that the case manager was concluding the case, many of the items were not scored as "0," indicating nonimplementation, because it was possible that the consultation dyad may have revisited these issues during future, untaped sessions. When comparing the results of the tape analyses with the LOI-R interview results, it appears unlikely that the consultation dyad addressed these issues in future sessions. The individual LOI-R interviews represent a summary of the complete instructional consultation process, and both participants indicated that they did not complete critical elements of the process. Their interview responses indicated that they did not implement the intervention implementation and evaluation steps with integrity.

Summary

Results of this study indicated that, overall, the consultation dyads implemented the instructional consultation process with high integrity. This result was found when assessing implementation using the LOI-R interview measure, as well as using the Level of Implementation- Tape Version. Dimension scores were at the criterion level of 80% (Vail, 1996) and higher, with the exception of one dimension as assessed by the LOI-R. When comparing the results of the LOI-R interview and the criterion information obtained by listening to the tapes and scoring the Level of Implementation- Tape Version, there was a high degree of agreement between the two measures. The LOI-R interview process reflected the implemented consultation process, thus validating the LOI-R Case Manager Interview and Teacher Interview as a measure of the level of implementation for the collaborative process dimensions of Instructional Consultation Teams.

The item matches between the LOI-R interview information and the Level of Implementation- Tape Version, while not perfect, were above 70%. These levels were similar enough to determine that the participants were reporting the skills and behaviors in which they engaged. In addition, the patterns of dimension implementation were similar in that both measures indicated the highest and lowest levels of implementation for the same dimensions. Individual case information revealed similarities in patterns of implementation as assessed by both the LOI-R interviews and by the Level of Implementation- Tape Version. When examining dimension matches and individual case manager and teacher interviews, there were individual case differences that may serve to illustrate further directions of exploration for practice and research. However, overall results indicated that the LOI-R interview process captures the behaviors of consultation dyads engaged in instructional consultation. These results give additional evidence of the validity of LOI-R interviews as a measure of the integrity of the instructional consultation process.

Chapter 5: Discussion

Consultation consists of a complex set of behaviors, skills, and knowledge (Gutkin, 1993). Assessing the treatment integrity and level of implementation of consultation practices is a challenge that researchers and practitioners continue to address (Telzrow & Beebe, 2002). In previous consultation research, the assessment of treatment integrity generally has addressed the integrity with which the consultee implements the planned intervention. There has been less research on the consultation process itself. The research that has measured the integrity of the consultation process has relied on variations of self-report measures, such as interviews, checklists, and permanent product assessments. There is very little research that provides validation techniques for assessing the veracity of the participants' self-reports. This study is unique because it compares what people say with what they do in the context of the instructional consultation process.

The present study used an observational measure as a validation technique to assess the validity of the LOI-R Case Manager and Teacher Interviews of the Instructional Consultation Teams model. This study compared participants' self-reported behaviors against the criterion measure of behaviors observed when listening to consultation session audiotapes. The results of this study lend additional validity to the LOI-R interview process. Results indicated that the LOI-R interviews reflect the consultation behaviors in which case managers and teachers engage. This evidence of validity supports the use of the LOI-R interviews for assessing instructional consultation implementation.

When listening to the audiotaped consultation sessions, the Level of Implementation- Tape Version was used to assess the presence or absence of the critical components of which the instructional consultation process is comprised. The completeness of the audiotapes for each case affected the amount of data gathered. However, the data available were sufficient to determine levels and patterns of implementation. The Level of Implementation- Tape Version item results were directly compared to the LOI-R interview item results. The dimension scores calculated using the Level of Implementation- Tape Version items were also compared to the dimensions as calculated based on the LOI-R interview items.

Results from the Level of Implementation- Tape Version and the LOI-R interviews indicated that implementation of the instructional consultation process for the majority of cases was assessed to be high by both measures. In addition, interview results generally matched the criterion behavioral observation results. Overall results indicated that the LOI-R interview process is a valid method for obtaining information on the actual case manager and teacher behaviors occurring within the instructional consultation case sessions.

High Implementation of the Instructional Consultation Process as Determined by Level of Implementation Measures

Both the Tape Version and the Interview Version of the LOI demonstrated a high level of implementation of the instructional consultation process in these 20 cases. Results of the Level of Implementation- Tape Version represented the criterion to which the self-report interview results were compared. The observational information indicated that there was a high level of implementation of the seven collaborative process dimensions of the instructional consultation process in the studied cases, although there were lower levels of implementation for certain items. For the Instructional Consultation Teams level of implementation assessments conducted in schools, 80% is considered to be the LOI-R criterion for adequate implementation (Vail, 1996). Using the information as observed by listening to the taped sessions, mean implementation was assessed at 100% for three dimensions: Dimension 4- Intervention Development, Dimension 5- Intervention Implementation, and Dimension 7- Curriculum Based Assessment. The other four dimensions (Dimension 1- Collaborative Communication, Dimension 2- Contracting, Dimension 3- Problem Identification, and Dimension 6- Evaluation and Follow Up) were implemented at high levels, from 88.7% to 95.6%. The means of Dimension 4- Intervention Development and Dimension 7- Curriculum Based Assessment were comprised of percentage scores from all 20 cases.

As calculated using the self-reported information from the LOI-R interviews, the levels of implementation of the seven collaborative consultation dimensions were also quite high. Self-report interview responses indicated that Dimension 2- Contracting was the most highly implemented, with a mean implementation of 98.8% for all 20 cases. Three other dimensions (Dimension 5- Intervention Implementation, Dimension 7- Curriculum Based Assessment, and Dimension 3- Problem Identification) were assessed as implemented at mean percentages above 90%. Dimension 1- Collaborative Communication and Dimension 4- Intervention Development were implemented with mean percentages of 86.4% and 85%, respectively. Only one dimension's mean implementation rate fell below the LOI-R 80% criterion level; Dimension 6- Evaluation and Follow Up was assessed at 78.8% implementation.

These levels of implementation, as assessed by listening to the taped sessions or by the interviews, are quite high, at and above the LOI-R criterion level of implementation, especially for beginning consultants. In addition, these high levels of implementation were obtained across a diverse group of participants with different training groups, training dates, implementation dates, geographical areas, and school districts.

Comparison of LOI-R and Level of Implementation- Tape Version Results

Item Comparisons

To investigate the validity of the LOI-R Case Manager Interview and Teacher Interview process, the information reported by the participants via the LOI-R interviews was compared to the criterion of behaviors observed via the Level of Implementation- Tape Version, as scored by listening to the session audiotapes. First, the individual items that were used to calculate the dimensions were compared across the two methods. Item comparisons indicated that there were no items for which there was a significant difference between the proportion of indications of element presence and indications of element absence, as determined by the McNemar test.

Dimension Comparisons

Additional validation of the LOI-R interview process was provided by the high levels of agreement between the levels of implementation of the seven collaborative process dimensions when the traditional LOI-R interview results were compared to the Level of Implementation- Tape Version results. The overall percentages of implementation of the dimensions as measured by the Tape Version and the LOI-R interviews were similar. All dimensions were calculated as implemented between 78.8% and 100%. Similarities were observed in the rank orders of the levels of the dimensions implemented. For example, Dimensions 4, 5, and 7 were the three most highly implemented as assessed by the Tape Version, while Dimensions 2, 5, and 7 were the three most highly implemented as assessed by the LOI-R interviews.

The most discrepant levels of implementation for a single dimension were found on Dimension 2- Contracting. This discrepancy is surprising, given that this dimension typically is assessed as being implemented at high levels via the LOI-R interview process within schools beginning Instructional Consultation Teams (Vail, 1996). In addition, this dimension's implementation appears to remain at high levels as consultants gain additional experience with the instructional consultation model. Self-report interview responses indicated that the dimension was the most highly implemented as assessed by the LOI-R interviews (mean = 97.5%), congruent with the Vail (1996) study, which used the LOI-R interviews. In contrast, the observational results indicated the second lowest level of implementation as assessed by the Level of Implementation- Tape Version information (mean = 89.5%).

This discrepancy may be due to the retention interval (Tourangeau, 2000). During the LOI-R interview, the interviewees are asked to recall what was discussed at the first session. This session likely occurred at least two to three months prior to the interview. In general, the more time that passes between acquiring information and retrieving that information, the less likely it is to be accurate upon retrieval (Rubin & Wenzel, 1996; Tourangeau, 2000). Participants may not have had clear recall of the first consultation session, so they were not as accurate when reporting their behaviors during the LOI-R interviews. Participants also may have had a tendency to base their self-reports on their memories of what occurred within the perspective of the ensuing consultation sessions (Vail, 2003). Research on self-report information obtained via interviews has demonstrated that, when asked to recall an attitude or behavior in retrospect, participants generally report their present behaviors while attributing the present circumstances, attitudes, and behaviors to the past (Pearson et al., 1994). As the case managers and teachers worked through the consultation process, their understanding of the elements typically discussed in the Contracting session may have evolved. Due to the case managers' and teachers' experiences within the cases, they may have been reporting on their understanding of the dimension elements at the time of the interviews, not their recall of behaviors in which they engaged during the first session.

Dimension 6- Evaluation and Follow Up demonstrated the most individual case variability between the LOI-R interview scores and the Level of Implementation- Tape Version scores. The overall mean implementation scores for the dimension revealed lower implementation than the other dimensions, as assessed by both the LOI-R calculations (78.8% mean implementation) and the Level of Implementation- Tape Version calculations (82.4% mean implementation). The lower levels of implementation and lesser consistency between cases were not unexpected, given that this dimension is one that schools beginning implementation of Instructional Consultation Teams often have challenges putting into place (Vail, 1996). In addition, there is growing evidence that the critical components of the dimension, such as using data for decision making, may be more challenging than other consultation skills for beginning case manager consultants to implement (Vail, 2003).

There was evidence of high implementation for Dimension 7- Curriculum Based Assessment (CBA), as assessed by the criterion Level of Implementation- Tape Version (100%) and as assessed by the LOI-R interviews (96.3%). This high level of implementation is unusual because there is evidence that this is one of the more challenging dimensions to implement within schools beginning the Instructional Consultation Teams process (Vail, 1996). Beginning consultants and their coaches have subjectively reported that this dimension is one of the most difficult to implement (Vail, 2003). Although there was little difference between the taped and interview versions, the discrepancy may be due, in part, to the scoring requirements of the LOI-R interview. It appeared that, in cases where case managers and teachers did not report 100% implementation, often the teacher did not report that the consultation dyad engaged in the CBA activities. This pattern of the teacher not indicating that the dyad conducted CBA occurred even when the teachers were present during the sessions in which the case managers conducted CBA or in which the case managers discussed the results with the teachers to plan the interventions. Both participants need to indicate that an element is present to obtain a positive score for that item on the LOI-R interview.
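This both-informants requirement is the crux of the LOI-R item scoring: an item is credited only when the case manager and the teacher independently report the element. A minimal sketch of the rule, with hypothetical response flags rather than actual LOI-R items:

    def score_cross_referenced_item(case_manager_reported, teacher_reported):
        # The item scores 1 only when both informants indicate the element
        # was present; disagreement or joint absence scores 0.
        return 1 if case_manager_reported and teacher_reported else 0

    # A teacher not reporting CBA activity zeroes the item even when the
    # case manager reports it, the Dimension 7 pattern described above.
    print(score_cross_referenced_item(True, False))  # 0
    print(score_cross_referenced_item(True, True))   # 1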

Self-report can be negatively impacted if the information to be recalled was not salient for or unique to the participant (Pearson et al., 1994). Within the coaching cases, the case managers were encouraged to complete the CBA activities to gain experience. The teachers in these cases did not appear to conduct CBA activities themselves and, therefore, may not have had enough personal experience to accurately self-report during the LOI-R interview regarding those behaviors.

Individual Case Comparisons

Reporting differences were observed in the dimensions for several individual cases. In many instances of disagreement between the two measures, the behaviors for the dimensions' items were observed to have occurred within the sessions, yet the case managers or teachers did not report the components within the interview. In other instances, the LOI-R interview information indicated that there was not agreement between the case manager and teacher regarding the case process or content. For several cases, complete sets of session audiotapes were not available, which may have influenced the similarity of results between the LOI-R and the Level of Implementation- Tape Version. As documented by the graphs in Chapter 4, there were overall high levels of agreement between the two methods by case for each of the seven dimensions. Although the agreement was not perfect, the graphs indicate that the overall levels of implementation were similar when assessed through both measures.

Case 15 and Case 16 appeared to be unusual in comparison to the other cases in the data set. When these cases were examined individually, it appeared that the levels of implementation were more variable across the two measures than for the other cases. Qualitative data indicated that each of these cases demonstrated different patterns that account for their variability. The Case 15 consultation dyad appeared to use the consultation process correctly, but during the LOI-R interviews the participants did not accurately report the behaviors in which they engaged during the actual sessions. It may have been that the case manager and teacher did not recall that they had engaged in the behaviors about which the interviewer was asking. Alternatively, if the participants did not understand the intent of an interview item, they may have provided inaccurate information (Jobe, 2001). As a related reason for the inaccurate reporting, the case manager and teacher may have accurately remembered and reported the behaviors, but did not indicate the links between the behaviors and the consultation processes. Several of the LOI-R interview items call for interviewees to state the purpose of the consultation elements. For example, the case manager and teacher both need to indicate that they conducted CBA for the purpose of clarifying the referral concern.

When considering Case 16, different hypotheses were generated regarding why the case manager's and teacher's interview responses differed from the behaviors observed when listening to the case sessions. Qualitative observations of the taped case sessions indicated that the case manager and teacher did not "get" the idea of the instructional consultation process. The case obtained high levels of implementation on some dimensions as assessed by the Level of Implementation- Tape Version. However, because of the lack of audiotaped sessions, these high levels may have been due to the inability to distinguish between elements not put in place and elements that would be addressed in future, untaped sessions.

The LOI-R interview process was able to assess the lower levels of implementation of this case. However, unlike Case 15, where the dyad may not have been able to articulate the link between elements and rationale but performed the consultation behaviors, the participants of Case 16 appeared to be unable to articulate the rationale because they did not adequately implement those behaviors.

Limitations

There were several factors that served as limitations for this study: 1) unavailable audiotaped sessions, 2) items that were not applicable due to schools' lack of Instructional Consultation Teams, 3) dimensions that could not be calculated for each case, and 4) a lack of behavioral cases. In addition, there were factors that lessened the generalizability of these results, including case managers soliciting cases for practice and the homogeneity of participant characteristics.

Unavailable Audiotapes

Some cases did not have an audiotape for each of the consultation sessions. In cases where tapes were missing, it was sometimes challenging to determine if the critical components were implemented with integrity. It was more difficult to determine if an element was not present. If a behavior was demonstrated in a taped session, it could definitively be scored as present. If a behavior was not demonstrated in a taped session, the evaluator could not determine if it would be demonstrated in future, untaped sessions. Therefore, it could not be counted as definitively absent. This limitation was especially problematic for cases in which actual implementation of the consultation process was low.

When cases were missing tapes and items could not be scored using the Level of Implementation- Tape Version, the dimensions comprised of those items could not be calculated. This situation was potentially problematic for items that were used in several different dimensions. For example, LOI-R items C1 and T1 are both included in the calculations of Dimension 1- Collaborative Communication and Dimension 2- Contracting, and several other items are used to calculate more than one dimension. Fortunately, enough cases had scorable items for the majority of the dimensions. All dimensions could be calculated for at least 17 cases. In this manner, all seven dimensions could be validated using information from the data set of 20 cases.

Not Applicable Items

Most of the schools in which the cases took place did not have Instructional Consultation Teams, or the systems to support Instructional Consultation Teams, in place while the cases were being conducted. Some items on the LOI-R were not applicable to the cases because the items referred to Instructional Consultation Team functioning, and thus those items could not be validated through this study. Item C17 refers to the teacher attending all meetings at which the referral concern is discussed. Item T17 refers to the systems issue of what the teacher did with the completed referral/request for assistance. Because most schools did not have active teams, these items were scored as "not applicable" for the majority of cases. These two items were not validated by this study because of the lack of Instructional Consultation Teams in the schools.

Lack of Behavioral Cases

A limitation of this study was the lack of cases addressing behavioral concerns. Of the 20 cases, only 1 addressed a behavioral concern. Due to this low number, the behavioral items were not validated.

Solicited Cases from Agreeable Teachers

Because the case managers were learning the instructional consultation process and working in schools without Instructional Consultation Teams, many indicated that they solicited a teacher with whom to work. Part of the guidance for the on-line coaching component is for case managers to select a teacher with whom they would be comfortable working and who would be open to working with the case manager as he or she learned the Instructional Consultation Teams model (Vail, 2003). The teachers selected by the case managers, who agreed to work with the case managers as they underwent coaching, may have differed in some important respect from teachers who typically request assistance from Instructional Consultation Teams. Therefore, the high levels of implementation and the match between the levels of implementation as measured by the LOI-R and the Level of Implementation- Tape Version may not be obtained when participant teachers are not solicited by the case managers. Additional research into the generalizability of these results to other cases, in which teachers follow more typical referral patterns, could be interesting and potentially informative.

Participant Characteristics

The results of this study may be less generalizable due to the lack of variability in participant demographic characteristics, although the participants were diverse in other respects. The majority of participants in both the case manager and teacher roles were Caucasian women. There were three men who participated as case managers and three men who participated as referring teachers; five of the six were Caucasian. Only three participants identified their race as a category other than Caucasian. Due to the lack of diversity in participant characteristics, the results obtained within this study may not be generalizable to groups of people with more diverse demographic characteristics than were represented in this study.

Implications

The results indicated that there was a high level of implementation of the instructional consultation process, as assessed by the LOI-R interview and the Level of Implementation- Tape Version, and that there was a high degree of match between the levels of implementation as assessed by the two measures. First, the level of implementation was high for beginning case manager consultants. The consultation dyads were able to engage in complex behaviors and skills with high integrity, as measured by the traditional LOI-R self-report interview and by the Level of Implementation- Tape Version observational measure. Second, the Level of Implementation- Tape Version enabled the researcher to observe the behaviors in the consultation sessions by listening to audiotapes of the instructional consultation sessions, and then compare those observations to the self-report LOI-R interview.

Using the observational results allowed a criterion measure to verify the accuracy of the self-report interview.

High Levels of Implementation

The mean percentages of implementation of all dimensions are very impressive, especially given that the case managers were novice consultants. When schools undergo the LOI-R process, implementation of 80% is used as a benchmark for a school's overall implementation, including the cases of case managers who are more experienced with instructional consultation (Vail, 1996). Beginning case managers may not be expected to implement the instructional consultation process with as high a degree of integrity. However, it appears that the case managers and teachers who participated in this study were able to implement the majority of components with adequate, and above, levels of integrity. These high levels of implementation were found when the instructional consultation process was assessed through both the LOI-R interview process and the investigator listening to audiotapes of the consultation sessions to score the Level of Implementation- Tape Version measure.

Diverse participant group factors. The high levels of implementation in this study are particularly instructive, as the participant group was diverse in terms of training group, training times, school setting, and geography. Different groups of participants in this study received their initial training and conducted their first case with the on-line coaching component during various school years. Participant dyads were from different states across the country. Some of the participant dyads worked within the same school districts, but did not work at the same school buildings.

The consistently high level of implementation across time and settings indicates that the Instructional Consultation Teams model is being implemented with integrity in many different locations and during different school years. Although there were many varying factors among the participant group, the consistent feature was high implementation. The high level of implementation reveals other implications manifested through this study.

Importance of training. One major implication of the high levels of implementation concerns the presence of training. The behaviors as observed via the Level of Implementation- Tape Version indicated that case managers often were able to conduct all aspects of the instructional consultation process with high levels of integrity, including dimensions such as Curriculum Based Assessment, which may be more difficult for beginning consultants (Vail, 2003). The high levels of implementation for beginning consultants indicate the importance of providing sound initial training, follow-up coaching, and ongoing training. This training may be especially important as case managers complete their first cases.

Influence of audiotaping. The high rates of implementation may have been due, in part, to the audiotaping of the case sessions (Bernard & Goodyear, 1998) and the coaching received by the participant case managers (Vail, 2003). Audiotaping case sessions for coaches' reviews to give developmental feedback to the case managers may have resulted in the case managers being more reflective of their own practices within the consultation sessions. The use of audiotapes for supervision can prompt beginning professionals to be more aware of their own performance (Bernard & Goodyear, 1998). Although the coaching relationships were non-evaluative and hence not considered supervision (Vail, 2003), taping the sessions may have caused the case managers to be more aware and reflective of their own behaviors. This heightened awareness and reflection may have prompted the case managers to engage in the instructional consultation behaviors at high rates of implementation.

Influence of coaching. The high rates of implementation also may have been influenced by the presence of coaching for the case managers. In an investigation of the email component of the coaching process, Vail (2003) coded the coaches' written feedback on the case managers' session tapes. In at least two instances, coaches noted that the case managers had neglected to implement critical components, such as determining baseline functioning and defining goals prior to planning the intervention. After receiving feedback from their coaches, the case managers then implemented the elements in the following sessions. The presence and input of the coaching component allowed the case managers to receive feedback if they were not implementing the instructional consultation process with integrity. Coaching assisted the beginning case managers in implementing all of the necessary elements in the process, increasing appropriate implementation.

Validation of the LOI-R

Evidence of validity. One major implication of this study is the evidence of validity for the case manager and referring teacher interview components of the LOI-R scale. The results of this study indicate that, when compared to the criterion of observed behaviors as assessed via the Level of Implementation- Tape Version, the LOI-R Case Manager Interview and Teacher Interview measure captures the collaborative process dimensions of instructional consultation.

For many self-report interviews, researchers have found that participants are not always able to give accurate information regarding personal behaviors and experiences (Jobe, 2001). Using a validation technique of comparing the self-report interview responses to the criterion measure of behaviors as observed via listening to session audiotapes, it was determined that the participants in this study were able to accurately report their behaviors within the consultation sessions. This match between participant self-report and observed behaviors strengthens the justification for using an interview methodology to determine treatment integrity of the instructional consultation process. When giving self-reports, people may report engaging in behaviors that they did not complete (Pearson et al., 1994). As a whole, the participants of this study did not report engaging in behaviors that they did not accomplish. Overall, they accurately reported engaging in behaviors that were observed from listening to the audiotapes.

Program evaluation. This study's comparison of people's actual behaviors to their self-reported behaviors has important implications for the use of the LOI-R scale for program evaluation. When assessing a program's outcomes, researchers must ensure that the program is implemented in the manner in which it is intended to be, prior to attributing any outcomes to the program. The additional evidence of validity provided by this study adds to the initial reliability and content validity work done by Fudell (1996), demonstrating that the LOI-R scale is a valid method of assessing implementation of the instructional consultation process and of the Instructional Consultation Teams innovation.

The results indicate that, when the LOI-R scale assesses high implementation, the outcomes of Instructional Consultation Teams may be accurately attributed to the innovation. Program evaluation research of the Instructional Consultation Teams innovation can be used to assess a variety of outcomes resulting from the interventions provided through instructional consultation. Decision makers such as directors of special education, associate superintendents, and other administrators in school districts are able to obtain an accurate measure of their schools' implementation. The administrators also gain assurance that the outcomes attributed to Instructional Consultation Teams are linked to appropriate implementation of the model.

LOI-R interview process. The veracity of the self-report information, as compared to the observed behaviors assessed by the Level of Implementation- Tape Version, has implications for the process of the LOI-R interviews. The high levels of accuracy of the LOI-R interview responses may be due to the collaborative, conversational approach undertaken by the interviewers. The manner in which the LOI-R interviews are conducted already applies the recommendation of using collaborative construction of meaning to increase interview validity (Suchman & Jordan, 1994). The LOI-R Case Manager Interview and Teacher Interview items are intended to be presented initially without variation. The LOI-R interviews begin with open-ended, non-leading questions (i.e., "Tell me about some of the activities you and the case manager [teacher] undertook to better define the problem"). If an interviewee does not initially give information about the specific items of interest (CBA, instructional levels), the interviewer then probes more directly, using conversational language, for the specific information. The interview process allows the interviewer to explain items, redesign or restate questions based on an interviewee's prior responses, or recognize inappropriate questions for a particular interviewee. In addition, the LOI-R interviewers are experienced in Instructional Consultation Teams and collaborative problem solving. They know what information each interview question is intended to draw out. When using collaborative construction of meaning, the LOI-R interviewers use conversational interactions as a means to assist the interviewees in understanding the intent of the questions, thereby obtaining more accurate information from the interviewee. By using their experience and knowledge, the interviewers are able to standardize the interpretation of the interview items for the interviewee, as recommended by Suchman and Jordan (1994). In this manner, the LOI-R interview process may yield more valid information and be less fraught with difficulties than other self-report interview instruments.

Areas for Future Research

Generalizability

This study demonstrated that the LOI-R interview process captures the level of implementation of the consultation process as it is conducted in instructional consultation cases. However, this study had several factors that limit the generalizability to other situations, which should be addressed in future research.

High implementation. The cases examined in this study were conducted as part of a training sequence in which beginning consultants received systematic coaching on these specific cases (Vail, 2003). In the cases presented in this study, the case managers were able to receive feedback on the implementation of each of the consultation stages. If they did not address an element of the instructional consultation process, the coaches most likely gave the case manager feedback regarding the element and suggestions for readdressing it. Due to this feedback, the case managers should have implemented the cases with high levels of integrity, as was demonstrated. However, had coaching not been provided, the result may have been lower implementation. In addition, research demonstrates that when supervisees are aware that their behaviors are being observed, their behaviors may change (Bernard & Goodyear, 1998). The presence of coaching and the taping of the sessions may have made the case managers more aware of their behaviors, hence increasing their implementation. In addition, audiotaping and coaching may cause case managers to be more self-reflective about their own interactions in the consultation sessions. This heightened level of self-awareness may also have contributed to the high levels of implementation observed in these cases. Because participant case managers audiotaped their sessions, were coached, and received feedback on implementation, the data may not be representative of implementation for beginning case managers who are not in the same circumstances. The results may not be generalizable to cases in which the case managers, after receiving training, do not tape their sessions and undergo on-line coaching. Future research may include comparisons of the implementation of the instructional consultation process between participant case managers who tape their case sessions and receive on-line coaching, and case managers who do not.

In addition, the high levels of implementation may not be generalizable to cases in which the case managers are more experienced with the instructional consultation process. Because the case managers in this study were new to the process of instructional consultation, they may have been very cautious about veering too far from the prescribed steps and stages. During the taped case sessions, several case managers remarked to the referring teachers that they were using their manuals as the dyad proceeded through the consultation process. It may be that, as consultants become more experienced with the instructional consultation process, they do not rely on or refer to the manual as much as new consultants do. The result may be that consultants with more experience implement instructional consultation with less integrity than observed in this study. Future research may include comparing the levels of implementation between case managers who have experience with the instructional consultation model and those who are instructional consultation novices.

Cases with behavioral referral concerns. One of the limitations of this study, as stated earlier, was that only one case addressed a behavioral concern. More research needs to be conducted with cases addressing behavioral concerns to validate the interview items for such cases.

Awareness of Instructional Consultation Principles

Within the comparisons of individual case LOI-R interview items and the Level of Implementation- Tape Version items, an interesting pattern arose. Frequently, the consultation dyad appeared to complete the consultation stages and steps needed to demonstrate adequate implementation during the audiotaped sessions. In contrast, when responding to the LOI-R interviews, the case manager, the teacher, or both participants did not report having completed the elements necessary to indicate implementation. It may be that the case managers and teachers were unaware of the language used to describe and assess the elements of the instructional consultation stages. As an alternative hypothesis, the reason for the discrepancy may be that the participants were aware that they were completing the elements, such as collecting data within the intervention, but did not recognize the reason for completing the element, such as making decisions regarding student progress. Future research may lead to answers regarding the reasons for mismatches between the self-reported behaviors within the LOI-R interview process and the actual behaviors as observed within instructional consultation sessions. Additional research may inform better practice as professionals become aware of ways to assist consultation dyads not only in implementing the consultation process with high integrity, but also in reflecting accurately on the rationale for their practice.

Summary

This study represents an ongoing assessment of the methods of measuring the implementation of instructional consultation within schools. The information gained through this study indicates the validity of the LOI-R Case Manager Interview and Teacher Interview process as a means of gathering information about what actually occurs within instructional consultation sessions. The high levels of implementation, as assessed by both the interview measure and listening to taped data of the consultation sessions, indicated that training and coaching appear to be important mechanisms for case managers to apply and refine their newly learned instructional consultation skills. The high level of agreement between the self-report information and the observed information indicates that the LOI-R interview is a valid means of obtaining information regarding case manager and teacher behavior within the consultation sessions.

The high level of implementation and the evidence of validity of the LOI-R interview measure have significant implications for the Instructional Consultation Teams project (Rosenfield & Gravois, 1996). The evidence of validity is important for the ongoing program evaluation of Instructional Consultation Teams. Outcomes can be attributed to Instructional Consultation Teams in schools when the LOI-R scale assesses high implementation. In addition, because consultation is a complex set of behaviors, more information gained about consultation in a variety of contexts and situations will better inform future practice.


APPENDIX A

LEVEL OF IMPLEMENTATION SCALE FOR INSTRUCTIONAL CONSULTATION TEAMS

ADMINISTRATION AND SCORING GUIDE

Todd Gravois Rosalyn Fudell Sylvia Rosenfield

Reprinted with permission of the authors.


Table of Contents

Overview
General Directions for Administration
Specific Directions for Administration
Item Scoring
Principal Interview
Team Survey
Case Manager Interview
Teacher Interview
Form Review
Goal Attainment
Appendix A (Critical Dimensions of IC-Teams)
Appendix B (Level of Implementation Scale)


LEVEL OF IMPLEMENTATION SCALE FOR INSTRUCTIONAL CONSULTATION TEAMS

OVERVIEW

The Level of Implementation Scale (LOI) is comprised of several interviews and record reviews which provide information on the collaborative process and delivery system involved in the Instructional Consultation Team model. Each aspect of the Level of Implementation Scale is designed to corroborate the presence of a specific Critical Dimension Indicator (see Appendix A). In general, each administration of the Level of Implementation will consist of the following:

Team Survey
Principal Interview
Case Manager Interview(s)
Referring Teacher Interview(s)
Documentation/Form Review(s)

The number of Case Manager Interviews, Referring Teacher Interviews, and Form Reviews conducted varies according to the number of active cases in progress and at the intervention stage. The LOI Scale provides both formative and summative information regarding the progress each team has made in implementing the model. The scale is formative in that the information collected can be utilized to identify future training needs, specific needs regarding faculty awareness of the team and its process, and needs regarding the collaborative and delivery variables of the model. The scale is summative in that an acceptable level of implementation is desired (80%). This acceptable level of implementation is reflected in the overall benchmark resulting from the administration of the LOI Scale.
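The summative benchmark described above reduces to one aggregate: indicators corroborated across all administered components, divided by indicators scorable, judged against the 80% criterion. A minimal sketch of that summary, with hypothetical component counts:

    CRITERION = 80.0  # acceptable level of implementation

    def overall_implementation(component_counts):
        # component_counts maps a component name to a
        # (indicators_present, indicators_scorable) pair.
        present = sum(p for p, _ in component_counts.values())
        scorable = sum(s for _, s in component_counts.values())
        return 100.0 * present / scorable

    counts = {
        "Team Survey": (9, 10),
        "Principal Interview": (7, 8),
        "Case Manager Interviews": (40, 46),
        "Referring Teacher Interviews": (38, 46),
    }
    pct = overall_implementation(counts)
    print(f"{pct:.1f}%", "meets" if pct >= CRITERION else "below", "criterion")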


DIRECTIONS FOR ADMINISTRATION:

Establishing Rapport: The Level of Implementation Scale's (LOI's) interviews should be administered using an objective, conversational approach. To establish rapport, it is helpful to inform Team Members, Case Managers, and Teachers that information is being collected about the "process" of their work rather than the success or failure of the intervention strategies. It is useful to indicate to respondents that the information collected is confidential and will only be shared as part of an overall team level of functioning. No individual data will be reported. However, if an individual Case Manager so desires, an Individual Case Manager Profile will be provided upon request. The Individual Case Manager Profile will only be shared with the involved Case Manager. No such information will be available for referring teachers.

General Administration Guidelines: The following key points are emphasized regarding the general administration and scoring of the LOI Scale.

• One half-hour should be scheduled for each interview.

• Some items are cross referenced. This means that, to receive a positive score, there must be substantial agreement between two informants to determine whether the critical dimension is present. There are no requirements for perfect agreement. Most cross referencing in scoring occurs within the interviews of Case Managers and the respective Teachers with whom they are consulting.

• Scoring of the scale should occur after all interviews and record reviews are completed. Many interview questions are "cross referenced" and all information must be collected prior to assigning a score.

• Administer the Case Manager and Teacher interviews separately. The Case Manager Interview should be administered prior to the Teacher Interview whenever possible. When interviewed first, the Case Manager typically provides more detailed responses, which assists in effective prompting during the subsequent Teacher Interview.

• Extra care and time should be taken to build rapport when interviewing referring teachers. Because referring teachers have not been involved in extensive training, and may not be comfortable with the idea of being interviewed, efforts should be made to clarify the purpose of the LOI Scale and its impact on the IC Team and individual team members. In addition, alternative wording and prompting may be required during the teacher interview. Some wording may be unfamiliar or new to teachers (i.e., data, intervention, etc.), and interviewers are encouraged to simplify or choose alternative terms in order to facilitate accurate responses. As an example, interviewers may substitute "information" for "data" or "strategies or techniques" for "intervention". Because the Teacher may not be as specific or detailed as the Case Manager in providing responses, at times it may be necessary to ask directly whether or not some aspect of the collaborative process occurred. The use of more direct questions, based upon Case Manager information, may be appropriate when there are indicators that the teacher is speaking of similar situations but not offering full descriptions.

• The goal of each question is to investigate whether or not the indicated Critical Dimension (noted in parentheses) is present. Begin with general questioning (such as the questions presented within the interviews) and then progress to more specific and directed questioning if necessary. Use alternative wording, prompting, and direct questioning in order to acquire a fuller understanding of the processes employed. Notations should be made in the comments section to indicate the types of prompting or alternative questioning used.

• Items within shaded areas are typically not administered directly, but instead are scored based upon other responses or review of records.

• Form reviews may be conducted during the interview or immediately following the case manager/teacher interview. A request may be made to have the case manager/teacher leave the form so that it may be reviewed and then placed in the holder's mailbox upon completion of the review.

SPECIFIC DIRECTIONS FOR ADMINISTRATION:

A mid-year administration of the LOI provides formative information for the team in terms of its progress and continued training needs. This mid-year administration, combined with an end-of-the-year administration, provides summative information regarding the team's overall Level of Implementation. Hence, it is the end-of-the-year summation of all administrations of the LOI which provides benchmarks as to the IC-Team's Level of Implementation.

A. Principal Interviews

It is best to administer the Principal Interview prior to the scheduled Team Survey. The Principal Interview may be administered in person, or by phone if a face-to-face interaction is difficult to arrange prior to the Team Survey. The Principal Interview must be administered during the mid-year LOI. However, administration of the interview at the end of the year is at the team's and interviewer's discretion. For example, if there are 100% positive scores on the mid-year Principal Interview, an end-of-the-year interview can be foregone. The Principal Interview should be conducted with the building principal as a first choice. An Assistant Principal may participate in the interview process if they have taken primary responsibility as the administrative representative on the team. A notation should be made if the Assistant Principal was the respondent.


B. Team Survey

The Team Survey is administered during a regularly scheduled IC Team meeting at the mid-year administration of the LOI. Again, an end-of-the-year administration of the Team Survey is at the discretion of the interviewer, facilitator, and team. All team members should be encouraged to participate in answering the questions. Items centered on Delivery System Forms may be presented during the general team meeting or may be conducted with the designated Systems Manager at a separate time.

C. Case Manager and Referring Teacher Interviews and Review of Forms

1. Interview Time. Approximately one half-hour should be scheduled for each interview.

2. Selecting Cases to Be Interviewed for Teams at Initiation and Early Implementation. During the initiation and early implementation phase of the IC Team process (when all team members have yet to take cases and the team has not reached 80% implementation), all case managers are to be interviewed, provided the following conditions are met:
• Only cases which have reached the Intervention Implementation Stage of Problem Solving are interviewed.
• Only one interview need be conducted with each Case Manager. For example, if a team member is Case Manager for three cases, only one of those three cases need be selected for interview. A random selection process is suggested in determining which case to interview.
• Cases are interviewed only once. However, an exception may be made if the Case Manager specifically requests that the case be re-interviewed to provide information for continued training, OR if a Case Manager requests that a case be re-interviewed because a different problem has been defined since the first interview.

3. Selecting Cases to Be Interviewed for Teams at the High Implementation and Institutionalization Phase. During the latter stages of implementation and into early institutionalization of the IC Team process (when all team members have previously been interviewed at least once and the team has achieved 80% implementation), a random process may be used in selecting cases to be interviewed. In addition, the following should be considered a guideline (a sketch of these selection rules appears after item 4 below):
• All new team members should be included in the interview process.
• An adequate representation of the team should be interviewed to provide an ongoing measure of team functioning. For example, at least 50% of case managers should be included in the random interview process. In addition, all new team members should be interviewed.

4. Recording Responses. Specific directions for administering and scoring the Case Manager and Teacher Interviews are provided in the following sections. Interviewers should read these sections thoroughly prior to administering the LOI. Because many items are cross-referenced between the Case Manager and Referring Teacher Interviews, it is necessary for interviewers to record responses to items for later comparison. Enough information should be recorded in the spaces provided, and in the comments section, to ensure adequate interpretation at a later time. It is imperative to record verbatim or summarized statements of respondents' responses whenever a "blank" is provided.
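The case-selection rules above lend themselves to a short procedural sketch. The following Python sketch is one possible reading, under the stated assumptions; the data structures and the way the 50% sample is filled around new members are illustrative.

```python
import random

def select_cases(cases_by_manager, new_members, high_implementation):
    """cases_by_manager: dict mapping each case manager to a list of that
    manager's cases that have reached the Intervention Implementation stage.
    Returns a dict mapping each selected manager to one randomly chosen case."""
    if high_implementation:
        # Random sample of at least 50% of managers, always including new members.
        veterans = [m for m in cases_by_manager if m not in new_members]
        k = max(1, round(0.5 * len(cases_by_manager)))
        n = min(len(veterans), max(0, k - len(new_members)))
        sampled = set(new_members) | set(random.sample(veterans, n))
    else:
        # Initiation/early implementation: every manager, one case each.
        sampled = set(cases_by_manager)
    return {m: random.choice(cases_by_manager[m])
            for m in sampled if cases_by_manager.get(m)}
```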


ITEM SCORING

PRINCIPAL INTERVIEW (P 1 - P 5)

P 1 through P 3 are based upon the Principal's opening response to the first question regarding team composition.

P 1: Score Yes if, within the Principal's description of the team, there is representation from both general and special education classroom teachers.

P 2: Score Yes if, within the Principal's description of the team, the membership is between 8 and 14 members.

P 3: Score Yes if, within the Principal's description of the team, the majority of teacher representation is from general education when considering other "specialist teachers."

P 4: Score Yes if the Principal attends a majority of team meetings and is currently a case manager or has been within the last calendar year.

P 5: Score Yes if regular team meetings are indicated and the response matches the team response (Tm 1).


TEAM SURVEY (Tm 1 - Tm 10)

Tm 1: Score Yes if regular meeting times are indicated and the response matches the Principal's response (P 5). Regular team meetings should occur no less than once every other week, and preferably once per week.

Tm 2: NOTE: THIS SCORE MAY BE ADJUSTED DEPENDING ON THE TEAM'S PHASE OF TRAINING.
PHASE 1 SCHOOLS: Score Yes if the facilitator or systems manager is indicated.
PHASE 2-3 SCHOOLS: Score Yes if team members indicate a designated systems manager. Score No if the facilitator is indicated as the systems manager. The goal is to have a systems manager separate from the facilitator. This both helps the facilitator focus on facilitating the team meeting rather than doing the clerical work, and begins to develop other team members' participation in the process. (A sketch of this phase-dependent rule follows Tm 4.)

Tm 3: NOTE: THIS SCORE MAY BE ADJUSTED DEPENDING ON THE TEAM'S PHASE OF TRAINING.
PHASE 1 SCHOOLS: Score Yes if the principal, facilitator, or systems manager is specified as organizing and leading team meetings.
PHASE 2-3 SCHOOLS: Score Yes if the Systems Manager is specified as organizing and conducting business aspects of the team meetings (i.e., records updates, distributes new case assignments).

Tm 4: Score Yes if the Systems Manager is specified as receiving the referral form/request for assistance from teachers, or if there is a designated file/box in which referrals are placed. In rare cases, schools have divided the duties of the systems manager so that an individual other than the designated systems manager receives teacher referrals. In such cases, Score Yes if the person designated to receive referrals is recognized throughout the school as the exclusive entry point to the team.
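A minimal sketch of the phase-dependent Tm 2 rule, assuming a simple string encoding of who the team names as systems manager; the encoding is hypothetical.

```python
def score_tm2(phase, named_role):
    """named_role: 'facilitator', 'systems_manager', or another value."""
    if phase == 1:
        return named_role in ("facilitator", "systems_manager")
    # Phases 2-3: a designated systems manager separate from the facilitator.
    return named_role == "systems_manager"

print(score_tm2(1, "facilitator"))   # True: acceptable in Phase 1
print(score_tm2(2, "facilitator"))   # False: facilitator should not double up
```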

Tm 5: Score Yes if a single Case Manager is assigned for each referral.

Tm 6: Score Yes if cases are in progress and the team indicates a procedure by which members are kept abreast of individual case progress. Terms such as updates, reviews, and discussion are sufficient to score Yes.


Tm 7: NOTE: THIS SCORE MAY BE ADJUSTED DEPENDING ON THE TEAM'S PHASE OF TRAINING.
PHASE 1 SCHOOLS: Score Yes if 3 of 4 responses are checked.
PHASE 2-3 SCHOOLS: Score Yes if 4 of 4 responses are checked.

Tm 8: NOTE: THIS SCORE MAY BE ADJUSTED DEPENDING ON THE TEAM'S PHASE OF TRAINING.
PHASE 1 SCHOOLS: Score Yes if 3 of 4 responses are checked.
PHASE 2-3 SCHOOLS: Score Yes if 4 of 4 responses are checked. (A sketch of this counting rule follows Tm 8.)
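The Tm 7/Tm 8 thresholds reduce to a small counting rule, sketched below; the function and its parameters are illustrative only.

```python
def score_checked(phase, checked, total=4):
    """Phase 1 schools need 3 of `total` responses checked; Phases 2-3 need all."""
    required = total - 1 if phase == 1 else total
    return checked >= required

print(score_checked(1, 3))   # True
print(score_checked(3, 3))   # False
```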

ITEMS Tm 9 AND Tm 10 ARE TO BE COMPLETED WITH THE SYSTEMS MANAGER:

Tm 9: Score Yes if the Referral Form/Request for Assistance is available and includes the indicated information.

Tm 10: Score Yes if the Tracking Form is available, is being accurately utilized, and includes ALL indicated information.


CASE MANAGER INTERVIEW (C 1 - C 20)

C 1: Score Yes if an Entry/Contracting interview was conducted, all the indicated aspects are checked (4 of 4), and the response generally matches the Teacher's response (Tr 1). If required, prompt the Case Manager by asking directly whether an aspect was reviewed.
Alternative Wording Suggestions: Other prompts or questions include: "Tell me what you told the teacher about the Instructional Consultation process"; "Tell me how you described the problem-solving process during your first meeting." If the consultant remains unclear, you may address the information which should occur at Entry and Contracting through more directed questions. For example: "At your first meeting with the teacher, did you talk about how the two of you were to collaborate and what it means?"
Note: If the Case Manager covers all 4 indicated aspects and the teacher covered 3 of the aspects but is not sure about one (such as confidentiality), a Yes score would be appropriate, as it "generally matches." (A sketch of this "generally matches" rule follows C 3.)

C 2: Score Yes if the Case Manager indicates a mutual agreement to engage in problem solving and the response matches the Teacher's response (Tr 2).

C 3: Score Yes if the Case Manager describes the referral concern in terms of a discrepancy between current and desired performance and the response matches the Teacher's response (Tr 3).
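As a concrete illustration of cross-referenced scoring and the "generally matches" rule in C 1, the sketch below models substantial agreement between two informants' accounts. The set representation and the three-of-four threshold are assumptions for illustration; the manual relies on interviewer judgment rather than a fixed formula.

```python
# Illustrative cross-referenced scoring: an item is scored Yes only when the
# two informants' accounts substantially (not perfectly) agree.

def substantially_agree(case_manager_aspects, teacher_aspects, threshold=0.75):
    """Each argument is the set of aspects an informant reported (e.g., the
    four Entry/Contracting aspects). 'Substantial' is modeled here as at
    least `threshold` of the Case Manager's aspects also being reported by
    the Teacher."""
    if not case_manager_aspects:
        return False
    overlap = len(case_manager_aspects & teacher_aspects)
    return overlap / len(case_manager_aspects) >= threshold

cm = {"stages", "collaboration", "time", "confidentiality"}
tr = {"stages", "collaboration", "time"}       # unsure about confidentiality
print(substantially_agree(cm, tr))             # True: 3 of 4 "generally matches"
```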

Questions C 4 through C 6 are administered for academic concerns. Items C 7 through C 10 are administered for behavioral concerns. Item C 4 is administered for all concerns.

C 4*: (Always administered, for both academic and behavioral concerns.) Score Yes if the Case Manager indicates that activities were undertaken to determine that the student had adequate entry-level skills to participate in the current curriculum demands. Instructional Assessments or Curriculum-Based Assessments (these terms are interchangeable) should be conducted in areas relevant to the concerns described in C 3. Activities that can be included in a Yes score include: conducting running records of current reading material, conducting an Instructional Assessment/CBA, word search procedures, review of Dolch or vocabulary lists, review of math work samples, and assessment of math performance using curriculum material.
Alternative Wording Suggestions: If the term CBA is not mentioned, ask if any assessments were conducted. "Describe what you did to assess the student's academic functioning. What material did you use? What goal did you want the student to reach?"

C 5: (Omitted if the referral concern is centered solely on a behavioral concern.) Score Yes if the Case Manager indicates that further analysis of the student's academic functioning was conducted around targeted areas of concern. A Yes score may be given if the Case Manager indicates that task or error analysis was conducted to further identify a specific or targeted area of concern. Examples include phonics skills analysis, probes of specific math facts or skills, etc. Not applicable for behavioral concerns as indicated in C 3.

C 6: (Omitted if the referral concern is centered solely on a behavioral concern.) Score Yes if the Case Manager describes the terminal goal or desired performance for the academic concern presented. Not applicable for behavioral concerns as indicated in C 3.

C 7: (Omitted if the referral concern is centered solely on an academic concern.) Score Yes if actions were taken to assure that the behavior was not a result of academic difficulties or a mismatch between student needs and the instructional environment. Not applicable for academic concerns as indicated in C 3.

C 8: (Omitted if the referral concern is centered solely on an academic concern.) Score Yes if actions were taken to identify and isolate the setting or situation in which the behavior occurred. These include direct observations of the student within the classroom setting, self-monitoring techniques, review of permanent products, or interviews which could be substantiated with any of the above. Not applicable for academic concerns as indicated in C 3.

C 9: (Omitted if the referral concern is centered solely on an academic concern.) Score Yes if actions were taken to identify antecedents and consequences relevant to the behavior of concern. These include direct observations of the student within the classroom setting, self-monitoring techniques, etc. Not applicable for academic concerns as indicated in C 3.

C 10: (Omitted if the referral concern is centered solely on an academic concern.) Score Yes if desired performance is specified for the behavioral concern indicated. Not applicable for academic concerns as indicated in C 3.

Questions C 11 and C 12 are not administered directly, but are instead scored based upon the Case Manager's response to the preceding question.


C 11: Score Yes if the Case Manager's description of strategies or interventions matches the Teacher's description (Tr 9) and logically relates to the identified problem (C 3 & T 3).

C 12: Score Yes if for each primary strategy there is specification of who, when, what, and how often in the Case Manager's response, and the response matches the Teacher's response (Tr 10).

C 13: Score Yes if the Case Manager describes the plan to monitor the strategy/intervention, the description matches the Teacher's description (Tr 11), and it logically relates to the identified problem (C 3 & T 3).

C 14: Score Yes if the Case Manager indicates that efforts were made to ensure that the intervention was operationalized as planned and the response matches the Teacher's response (Tr 12).
Alternative Wording Suggestions: Relate back to the previous questions on intervention strategies (C 11 and C 12). "You've described agreeing upon a particular strategy. After it was implemented, did you both meet to discuss how it was being implemented and whether there were any difficulties or changes needed?"

C 15: Score Yes if the Case Manager indicates regularly scheduled meetings in which monitoring of the intervention/strategy occurred and the response matches the Teacher's response (Tr 14).

C 16: Score Yes if the decision to change, terminate, or continue the intervention was based upon data and the response matches the Teacher's response (Tr 16).

C 17: Score Yes if the Case Manager indicates that the teacher was included in all IC Team meetings in which the case was discussed, beyond brief updates, and the response matches the Teacher's response (Tr 18).

C 18: Score Yes if the Case Manager indicates that the teacher was an active participant in choosing, developing, and implementing the intervention.

C 19: Score Yes if seven (7) or fewer school days passed between the receipt of the referral and the first contact (Entry/Contracting) with the Case Manager. (A sketch of this school-day check follows C 20.)

C 20: Score Yes if the Case Manager has data generated from this case or can indicate that data is available with the referring teacher.
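The C 19 timing rule can be sketched as a school-day count. The version below approximates school days as weekdays, which is an assumption; a real calendar would also exclude holidays and closures.

```python
from datetime import date, timedelta

def school_days_between(referral, first_contact):
    """Count weekdays after the referral date up to and including first contact."""
    days, d = 0, referral
    while d < first_contact:
        d += timedelta(days=1)
        if d.weekday() < 5:        # Monday=0 .. Friday=4
            days += 1
    return days

def score_c19(referral, first_contact, limit=7):
    return school_days_between(referral, first_contact) <= limit

print(score_c19(date(2005, 3, 1), date(2005, 3, 9)))   # 6 school days -> True
```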

PILOT ITEM: For items C 11 and C 12, rate the intervention 1, 2, or 3 as to the extent to which the intervention is based upon best practices of behavioral and instructional principles.


TEACHER INTERVIEW (T 1 - T 18)

T 1: Score Yes if the Teacher's response indicates that an initial entry and contracting interview was conducted and in general matches the key aspects checked in Case Manager question C 1. Prompting may occur to substantiate the Case Manager's response.

T 2: Score Yes if the Teacher indicates an agreement to work with the Case Manager and Team and the response matches the Case Manager's response (C 2).

T 3: Score Yes if the Teacher describes the referral concern in terms of a discrepancy between current and desired performance and the response matches the Case Manager's response (C 3). The Teacher may need prompting and alternative wording.
Alternative Wording Suggestions: Use alternative wording and prompting such as: "Tell me where the student was functioning when you began, and where you expected him/her to be functioning as an end result of your working with the IC Team/Case Manager."

T 4: Score Yes if the Teacher indicates that there was an assessment of the student's academic skills and instructional level as part of the problem identification activities.

T 5: Score Yes if the Teacher indicates that the assessments conducted were from classroom-relevant material and focused upon the individual student rather than simply comparing the student to a norm group.

T 6: Score Yes if the Teacher indicates that, for a behavioral concern, an assessment of the student's academic skills and instructional level was conducted relevant to the times/situations in which the behavioral concern occurs.

T 7: Score Yes if the Teacher indicates that antecedents/consequences were analyzed as part of problem analysis.

T 8: Score Yes if the Teacher indicates that settings and situations were explored as part of problem analysis.

Questions T 9 and T 10 are scored based upon the Teacher's response to the preceding question and are not asked directly.

T 9: Score Yes if the Teacher's description of strategies or interventions matches the Case Manager's description (C 11).

T 10: Score Yes if for each primary strategy there is specification of who, when, and what is involved in the intervention, and the Teacher's response matches the Case Manager's response (C 12).


T 11: Score Yes if the Teacher describes the plan to monitor the strategy/intervention and the description matches the Case Manager's description (C 13).
Note: The plan to monitor refers to the actual procedure by which they were going to determine student progress, rather than simply looking at the chart. An example of the expected response would be, "Each week the student reads a passage from his/her text and we collect his/her Correct Words per Minute." These items (C 13/T 11) are intended to ensure that both the case manager and the teacher understand the means by which they would collect data and monitor whether the strategy is actually working.

T 12: Score Yes if the Teacher indicates there was consensual agreement that the intervention was operationalized as planned and the response matches the Case Manager's response (C 14).
Alternative Wording Suggestions: Relate back to the previous questions on intervention strategies (T 9 and T 10). "You've described agreeing upon a particular strategy. After it was implemented, did you both meet to discuss how it was being implemented and whether there were any difficulties or changes needed?"

T 13: Score Yes if the Teacher's descriptions of the type of information and collection procedures support that the intervention plan was being monitored as described in T 11, and that there was frequent graphing/charting of measurement data, weekly or under another regular schedule supported by a rationale.

T 14: Score Yes if the Teacher verbally indicates regularly scheduled meetings and the response generally matches the Case Manager's response (C 15).

T 15: Score Yes if the Teacher's response recognizes that success or lack of success is judged by the data collection procedures indicated to monitor progress (or other appropriate objective information).

T 16: Score Yes if the decision to change, terminate, or continue the intervention was based upon data and the response matches the Case Manager's response (C 16).

T 17: Score Yes if the Teacher submitted the completed referral form to the systems manager (or designated location) and the response matches the Team response (Tm 4).

T 18: During first-year interviews, Score Yes if the Teacher indicates positive responses for 2 of 3 choices. For subsequent interviews, Score Yes if the Teacher indicates positive responses for 3 of 3 choices.


FORMS (F 1 - F 10)

For each Case Manager/Teacher pair interviewed, the Student Documentation Form (SDF) should be reviewed. A request may be made of either the case manager or the teacher to leave the form for review, with an indication that it will be returned to the facilitator or placed in the appropriate school mailbox.

F 1: Score Yes if the SDF is available and has the identifying information completed (i.e., names, grade, school, case manager, etc.).

F 2: Concern 1: Score Yes if the GAS is completed for Steps 1-4. Includes a general statement of concern, an indication that instructional level was considered, an observable/measurable statement of current performance, and a measurable short-term goal with a time specified.

F 3: Concern 1: Score Yes if there is an operational definition of Concern 1. Includes specification of the what, when, and where of the behavior.

F 4: Concern 1: Score Yes if for Concern 1 there are 3-5 baseline data points, a clearly marked vertical axis (or one that can reasonably be deduced from information contained in the operational definition), and post-intervention data entries graphed on a weekly or other regular basis with a rationale provided. (A sketch of this graphing check follows F 10.)

F 5: Concern 1: Score Yes if for Concern 1 there is specification of the what, when, how often, and who of the intervention; there is an indication of intervention implementation (either by a heavy line on the graph or a notation in the Consultation Summary); and there are indications of monitoring of intervention progress (either continued intervention if there is progress toward the goal, or a changed intervention if there is no progress). NOTE: a score of Yes requires that some change in the intervention occur after 6 weeks of intervention if the goal is not obtained.

F 6: Score Yes if Page 4 of the SDF contains dates of consultations, brief summaries of consultation contacts, and indication of follow-up meetings and tasks.

Items F 7 - F 10 should be administered for cases with a second identified concern that has reached the intervention implementation stage:

F 7: Concern 2: Score Yes if the GAS is completed for Steps 1-4. Includes a general statement of concern, an indication that instructional level was considered, an observable/measurable statement of current performance, and a measurable short-term goal with a time specified.

F 8: Concern 2: Score Yes if there is an operational definition of Concern 2. Includes specification of the what, when, and where of the behavior.

F 9: Concern 2: Score Yes if for Concern 2 there are 3-5 baseline data points, a clearly marked vertical axis (or one that can reasonably be deduced from information contained in the operational definition), and post-intervention data entries graphed on a weekly or other regular basis with a rationale provided.

F 10: Concern 2: Score Yes if for Concern 2 there is specification of the what, when, how often, and who of the intervention; there is an indication of intervention implementation (either by a heavy line on the graph or a notation in the Consultation Summary); and there are indications of monitoring of intervention progress (either continued intervention if there is progress toward the goal, or a changed intervention if there is no progress). NOTE: a score of Yes requires that some change in the intervention occur after 6 weeks of intervention if the goal is not obtained.
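The graphing criteria shared by F 4 and F 9 can be expressed as a small check. In the sketch below, the 10-day spacing tolerance used to approximate "weekly or regular" entries is an assumption, as is the use of datetime.date objects for data-entry dates.

```python
from datetime import date

def score_graph(baseline_points, post_dates, rationale_for_schedule=False):
    """baseline_points: the baseline data values; post_dates: sorted dates of
    post-intervention data entries on the graph."""
    if not (3 <= len(baseline_points) <= 5):          # 3-5 baseline points required
        return False
    gaps = [(b - a).days for a, b in zip(post_dates, post_dates[1:])]
    roughly_weekly = bool(gaps) and all(g <= 10 for g in gaps)
    return roughly_weekly or rationale_for_schedule

post = [date(2005, 1, 7), date(2005, 1, 14), date(2005, 1, 21)]
print(score_graph([1, 2, 2], post))   # True
```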


GOAL ATTAINMENT (GA 1 - GA 2)

Note: Only one rating is provided for each concern.

GA 1: Concern 1: Rate using only one of three options (a sketch of this rating rule follows GA 2):
• Use Rating A if Concern 1 has a minimum of one (1) baseline point, a short-term goal established, and two (2) data points post-intervention.
• Use Rating B if Concern 1 has a minimum of one (1) baseline point and two (2) data points post-intervention.
• Use Rating C if Concern 1 has no baseline data, no short-term goal established, and/or limited data points post-intervention.

GA 2: Concern 2: Rate using only one of three options:
• Use Rating A if Concern 2 has a minimum of one (1) baseline point, a short-term goal established, and two (2) data points post-intervention.
• Use Rating B if Concern 2 has a minimum of one (1) baseline point and two (2) data points post-intervention.
• Use Rating C if Concern 2 has no baseline data, no short-term goal established, and/or limited data points post-intervention.
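The GA 1/GA 2 options reduce to a simple decision rule, sketched below with illustrative parameter names.

```python
def goal_attainment_rating(n_baseline, has_short_term_goal, n_post):
    """Return Rating 'A', 'B', or 'C' for one concern."""
    if n_baseline >= 1 and has_short_term_goal and n_post >= 2:
        return "A"
    if n_baseline >= 1 and n_post >= 2:
        return "B"
    return "C"                       # missing baseline/goal and/or limited data

print(goal_attainment_rating(3, True, 5))    # A
print(goal_attainment_rating(1, False, 2))   # B
print(goal_attainment_rating(0, False, 1))   # C
```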


APPENDIX A

CRITICAL DIMENSIONS OF IC TEAMS


CRITICAL DIMENSIONS OF INSTRUCTIONAL CONSULTATION TEAMS

Indicators

Collaborative Consultation Process - A stage-based method of problem solving utilizing interactive, nonhierarchical relationships among professionals with diverse areas of expertise is routinely utilized by the staff for classroom-based problems.

1. At all stages, interactions between the case manager and referring teacher are characterized by accurate, clear communication. (a) Effective communication is evidenced by the teacher and case manager having the same perceptions of issues discussed, or an understanding of the other's perception.

For each case, the following stages are sequentially implemented until the problem is satisfactorily resolved:

2. Contracting: (a) An interview between the consultee and consultant has been conducted in which the following have been discussed: (1) the consultation process; (2) the meaning of collaboration; (3) the time involvement; and (4) confidentiality. (b) There is evidence of a mutually agreed upon contract to engage in the problem-solving process.

3. Problem Identification: (a) There is a statement of discrepancy, from the consultee's perspective, between desired and actual performance for the referred child. (b) For academic problems, the following activities are completed: (1) analysis of entry-level skills using curriculum-based assessment; (2) analysis of the targeted academic task; (3) specification of the terminal goal in behaviorally descriptive terms. (c) For behavioral problems, the following activities are completed: (1) analysis of immediate antecedents/consequences; (2) analysis of setting and situation; (3) statement of desired behavior.

4. Intervention Recommendations: (a) Intervention recommendations based on effective teaching practices are produced by team members/case managers/teachers. (b) A consensual decision is reached on recommendations to implement. (c) There is evidence of the specification of who is responsible for what, when. (d) A plan for monitoring the effectiveness of the intervention is developed.

5. Implementation of Intervention: (a) There is consensual agreement between the consultant and consultee about the extent to which the specified plan has been operationalized. (b) The plan is monitored as specified. (c) There is evidence of frequent graphing of measurement data.

6. Evaluation and Follow-up of Intervention: (a) Data are used to determine level of progress. (b) The decision to terminate, continue, or change the intervention is based on data.

7. Curriculum-based assessment is a method to determine baseline levels of academic functioning from the student's own curriculum, in order to monitor ongoing performance and determine the success/failure of an intervention. (a) The assessment reflects an evaluation of academic behavior in the natural environment. (b) The assessment focuses on the individual child rather than on a normative group. (c) The child is tested on material from the instructional curriculum. (d) The assessment method used is appropriate for continuous monitoring of student progress in order to alter interventions as needed.

Delivery System - The structure by which the collaborative consultation process is delivered by a team to a school is developed and maintained.

8. In each building, a permanent support team is specified. It is characterized by: (a) Representation from general and special education and pupil support services personnel. (b) Presence of the building administrator as a regular and active team participant. (c) A team comprised of between five and nine permanent members. (d) The majority of teacher representation is from general education.

9. There is a referral process by which teachers and staff can access the team. (a) A referral form (or request for assistance) which, at a minimum, includes the teacher's and student's names, a brief statement of the problem, and the teacher's available time to meet is readily available. (b) A person to receive the referral form has been designated.

10. The referring teacher becomes a part of the process by participating in all problem-solving activities. (a) The referring teacher becomes a temporary team member, participating in all meetings which focus on the referral problem. (b) The referring teacher is actively involved in planning and implementing the intervention.

11. A designated systems manager is specified, whose role includes: (a) Organization of team meetings. (b) Receipt of referral forms from consultees. (c) Monitoring of the status of all cases.

12. For each referral, a case manager is assigned whose role includes: (a) Timely initial contact with the consultee (within 7 days). (b) Collection and organization of all data. (c) Monitoring of all consultation contacts. (d) Reporting to the team on case progress.

13. The functions of the team are clearly specified and engaged in. (a) There is evidence of formal or informal needs assessment to determine the team's own needs. (b) A plan including goals, activities, and consultants is developed each year. (c) Regular meeting times and place are specified. (d) Team business includes review of new referrals, case updates, case problems, and team process. (e) There is evidence that the team allots time for practice in specified areas of the consultation process. (f) Teams engage in maintenance activities including: (1) regular team processing of issues and concerns; and (2) reflection on the team's effectiveness through self-assessment and evaluation.

14. A tracking process to ensure systematic record keeping in order to document the delivery system is in place. (a) There is an up-to-date tracking form indicating the status of all cases, reported at 4-6 week intervals. (b) There are up-to-date monitoring forms for individual cases summarizing all consultation contacts. (c) Student Progress Forms are completed detailing the referral concern stated as a discrepancy between current and desired performance, goals, and interventions. A graphic display of data is available for each case.


APPENDIX B

LEVEL OF IMPLEMENTATION SCALE


LOI CASE MANAGER INTERVIEW

CASE MANAGER'S NAME: ___________________________________
TEACHER'S NAME: __________________________________________
FIRST NAME OF REFERRED CHILD: ___________________________

(Form columns: Process | Delivery | Comments)

C1 Y N (Tr1): At your first meeting, how did you explain the problem-solving process to _____________? (2a)
_____ Consultation stages
_____ Meaning of Collaboration
_____ Time to meet
_____ Parameters of confidentiality
_____ _________________________________

C2 Y N (Tr2): Did _________ agree willingly to work with the IC Team? (2b) YES NO

C3 Y N (Tr3): Describe the initial referral concern. What concerns did you and the teacher focus upon? What was the current/baseline performance, and what goals were established for the concern(s)? (3a) ______________________________________

What activities did you and ________ undertake to identify the presenting problem? (Check the activities described by the Case Manager to identify the academic or behavioral problem - VERBAL DESCRIPTIONS.)

ACADEMIC (3b):
C4 Y N ___ Analysis of entry level skills using CBA/Inst. Assessment
C5 Y N ___ Analysis of targeted academic task(s) (i.e., TASK OR ERROR ANALYSIS). Specify how? ________________________________________
C6 Y N ___ Specification of terminal goal. What? _________________________________

BEHAVIOR (3c):
C7 Y N ___ Possibility of academic problem assessed (3b).
C4* Y N ___ Analysis of entry level skills using CBA/Inst. Assessment in main academic areas or during the time in which the behavior occurs.
C8 Y N ___ Analysis of setting and situation. How? _________________________________
C9 Y N ___ Analysis of antecedents/consequences. How? _________________________________
C10 Y N ___ Specification of desired behavior. How? _________________________________


LOI CASE MANAGER INTERVIEW (CONTINUED)

(Form columns: Process | Delivery | Comments)

What strategies or interventions did you agree to implement? Describe them. Who was responsible for each aspect? When was the intervention to take place?
Strategy _______________________________ Who? _______________________________ When? _______________________________
Strategy _______________________________ Who? _______________________________ When? _______________________________

C11 Y N (Tr9): There is agreement between Case Manager and Teacher as to which interventions to implement, and the strategy relates to the identified concern? (4b)

C12 Y N (Tr10): There is evidence of specification of who is responsible for what, when in intervention development? (4c)

C13 Y N (Tr11): How was the effectiveness of the strategy/intervention to be monitored? (4d) _____________________________________

C14 Y N (Tr12): Did you and the teacher meet to determine whether the intervention/strategy was implemented as planned? YES NO
Did you and the teacher agree as to how much modification was needed, if any? (5a) YES NO

C15 Y N (Tr14): How many times did you have scheduled/formal meetings with the teacher to discuss case progress? (5b) ____________________________________________

C16 Y N (Tr16): How was the decision to modify, continue, or terminate the intervention made? (6b) YES if based upon data/information; NO if not based upon data/information.

C17 Y N (Tr18): Did ________ participate in all meetings (including IC Team meetings) during which the referral problem was discussed, that is, beyond brief updates? (10a) YES NO

C18 Y N: Did the teacher actively plan and make the decision as to which intervention to implement? (10b) YES NO

C19 Y N: How much time passed between the teacher's request for assistance (date of referral) and your first meeting? (12a) _________________________________

C20 Y N: Do you or (teacher) have data generated from this case? (12b) YES NO


LOI TEACHER INTERVIEW

SCHOOL: __________________________________________________
CASE MANAGER'S NAME: ___________________________________
TEACHER'S NAME: __________________________________________
FIRST NAME OF REFERRED CHILD: ___________________________

(Form columns: Process | Delivery | Comments)

T1 Y N (C1): What was your understanding of what the IC Team (collaborative problem-solving) process would be after your first meeting with the Case Manager? (2a) _________________________________________________

T2 Y N (C2): Did you agree to work on [the student's] problem with the Case Manager and Team? (2b) YES NO

T3 Y N (C3): Describe the initial referral concern. What concerns did you and the case manager focus upon? What was the current/baseline performance, and what goals were established for the concern(s)? (3a) _____________________________________

What are some activities that you and the case manager undertook to better define the problem?

ACADEMIC (3b):
T4 Y N ___ Assessment of student's academic skills and instructional level. (7)
T5 Y N ___ Assessments conducted in classroom material and focused upon the individual student rather than a norm group. (7b; 7c)

BEHAVIOR (3c):
T6 Y N ___ Assessment of student's academic skills and instructional level relevant to times/situations of behavioral concern. (3a)
T7 Y N ___ Analysis of antecedents/consequences
T8 Y N ___ Analysis of settings and situations

What strategies or interventions did you agree to implement? Describe them. Who was responsible for each aspect? When was the intervention to take place?
Strategy _______________________________ Who? _______________________________ When? _______________________________
Strategy _______________________________ Who? _______________________________ When? _______________________________


LOI TEACHER INTERVIEW (CONTINUED)

CASE MANAGER: _______________________ / TEACHER: ____________________

(Form columns: Process | Delivery | Comments)

T9 Y N (C11): There is agreement between Case Manager and Teacher as to which interventions to implement, and the strategy relates to the identified concern? (4b)

T10 Y N (C12): There is evidence of specification of who is responsible for what, when in intervention development? (4c)

T11 Y N (C13): How was the effectiveness of the strategy/intervention to be monitored? (4d) _________________________________

T12 Y N (C14): Did you and the case manager meet to determine whether the intervention was implemented as planned? YES NO
Did you and the case manager agree as to how much modification was needed, if any? (5a) YES NO

T13 Y N: Describe what type of information was collected during the intervention. How often was the information collected? (6a) ______________________________________
Was information graphed/charted? (5c) ______________________________________

T14 Y N (C15): Did you have scheduled meetings with the case manager to discuss the student's progress? (5b) YES NO

T15 Y N: After participating in the IC process for this case, how would you rate the outcome (listen to all choices, then decide): "We achieved…
___ much more than expected
___ somewhat more than expected
___ what was expected
___ somewhat less than expected
___ much less than expected"
How do you know? (6a) _______________________

T16 Y N (C16): How was the decision to continue, modify, or terminate the intervention made? (6b) YES if based upon analysis of data; NO if not based upon data.

T17 Y N (Tm4): What did you do with the completed referral/request for assistance form? (9b; 11b)

T18 Y N: Did you feel that you were a contributing part of the problem-solving process? ___________________ To the IC Team (10a)? _______________________ That your input was valuable? (10b) _____________________________________

APPENDIX B

Match Between LOI-R Items and Tape Version Items

LOI-R Case Manager (C) and Teacher (T) Interview Item:
(C 1) At your first meeting, how did you explain the problem-solving process to [teacher]?
____ Consultation stages
____ Meaning of collaboration
____ Time to meet
____ Confidentiality
____ ______________________________________
(T 1) What was your understanding of what the IC team (collaborative problem-solving) process would be after your first meeting with the case manager?

Level of Implementation - Tape Version Item:
C1) During the first meeting, how did the case manager explain the problem-solving process to the referring teacher? Did the case manager explain/discuss: Consultation stages? Meaning of collaboration? Time to meet? Parameters of confidentiality? Other?
T1) What did the teacher's understanding of the IC Team process (collaborative problem-solving process) appear to be after the first meeting with the case manager?

(C 2) Did [teacher] agree to willingly work with the IC-Team? (T 2) Did you agree to work on [student’s] problems with the case manager and the team? (C 3) Describe the initial referral concern. What concerns did you and the teacher focus upon? What was the current/baseline performance and goals established for the concern(s)? (T 3) Describe the initial referral concern. What concerns did you and the case manager focus upon? What was the current/baseline performance and goals established for the concern(s)?

C2; T2) Did the teacher agree willingly to work on the student’s problem with the case manager (and IC Team)?

C3; T3) Describe the initial referral concern. What concerns did the case manager and the teacher focus upon? What were the current/baseline performance and goals established for the concern(s)?

[Case Manager] What activities did you and [teacher] undertake to identify the presenting problem? (Check the activities described by the case manager to identify the academic or behavioral problem.)
Academic:
(C 4) Analysis of entry level skills using CBA/Inst. Assessment
(C 5) Analysis of targeted academic task(s) (i.e., TASK OR ERROR ANALYSIS). Specify how?
(C 6) Specification of terminal goal. What?
Behavior:
(C 7) Possibility of academic problem assessed
(C 4) Analysis of entry level skills using CBA/Inst. Assessment in the main academic areas or during the time in which the behavior occurs.
(C 8) Analysis of setting and situation. How?
(C 9) Analysis of antecedents/consequences. How?
(C 10) Specification of desired behaviors. How?

[Teacher] What are some of the activities that you and the case manager undertook to better define the problem?
Academic:
(T 4) Assessment of student's academic skills and instructional level
(T 5) Assessments conducted in classroom material and focused upon the individual student rather than the norm group
Behavior:
(T 6) Assessment of student's academic skills and instructional level relevant to times/situations of behavioral concern
(T 7) Analysis of antecedents/consequences
(T 8) Analysis of settings and situations

For Identified Academic Concerns: What were some activities that the case manager and the teacher undertook to better define the problem? T4) Assessment of student’s academic skills and instructional level T5) Assessments conducted in classroom materials and focused upon the individual student rather than norm group C4) Analysis of entry level skills using CBA/Instructional assessment C5) Analysis of targeted academic tasks (i.e., Task or error analysis). Specify how. C6) Specification of terminal goal. Specify what: For Identified Behavioral Concerns: What are some activities that the case manager and the teacher undertook to better define the problem? T6) Assessment of student’s academic skills and instructional levels relevant to the times/situations of behavioral concern. C4) Analysis of entry level skills using CBA/Instructional assessment in the main academic areas or during the time in which the behavior occurs. C7) Possibility of academic problem assessed C8; T8) Analysis of setting and situation. How? C9; T7) Analysis of antecedents/consequences. How? C10) Specification of desired behavior. How?


[Case Manager & Teacher ] What strategies or interventions did you agree to implement? Describe them. Who was responsible for each aspect? When was the Intervention to take place? Strategy: Who: When: Strategy: Who: When: (C 11) (T 9) There is agreement between Case Manager and Teacher as to which interventions to implement and strategy relate to identified concern? (C 12) (T 10) There is evidence of specification of who is responsible for what, when in intervention development? (C 13) (T 11) How was the effectiveness of the strategy/intervention to be monitored? (C 14) Did you and the teacher meet to determine whether the intervention/strategy was implemented as planned? YES NO Did you and the teacher agree as to how much modification was needed, if any? YES NO (T 12) Did you and the case manager meet to determine whether the intervention was implemented as planned ? YES NO Did you and the case manager agree as to how much modification was needed, if any? YES NO

Briefly describe intervention(s):

C11; T9) Is there agreement between the Case Manager and Teacher as to which interventions to implement? Does the strategy relate to the identified concern? C12; T10) Is there evidence of specification of who is responsible for what, when, in the intervention development? C13; T11) How was the effectiveness of the strategy/intervention monitored? C14; T12) Did the teacher and case manager meet to determine whether the intervention/strategy was implemented as planned? Did the teacher and case manager agree as to how much modification was needed, if any?


(T 13) Describe what type of information was collected during the intervention. How often was the information collected? Was the information graphed/charted?

T13) Describe what type of information was collected during the intervention and how often the information was collected. Was the information graphed/charted? Did it appear that the case manager and teacher graphed/charted the data during the sessions? Did one member bring the completed work to the session? Please describe: Did it appear that the case manager and teacher used/worked with the Student Documentation Form (SDF) within the sessions? Please describe:

(T 14) Did you have scheduled meetings with the case manager to discuss the student’s progress? (C 15) How many times did you have scheduled/formal meetings with the teacher to discuss case progress?

T14) Did the teacher and case manager have scheduled meetings to discuss the student’s progress? C15) Approximately how many times did it appear that the case manager and teacher have scheduled/formal meetings to discuss case progress? T15) Based on the teacher’s responses within the sessions after participating in the IC process for this case, estimate how the teacher would rate the outcome: “We achieved… a) much more than expected, b) somewhat more than expected; c) what was expected; d) somewhat less than expected; e) much less than expected.” Does it appear that this comment would have been based on data?

(T 15) Was the intervention successful? How do you know? [OR] (T 15) After participating in the IC process for this case, how would you rate the outcome (listen to all choices, then decide): “We achieved __ much more than expected, __ somewhat more than expected; __ what was expected; __ somewhat less than expected; __ much less than expected.” How do you know?


(C 16) How was the decision to modify, continue, or terminate the intervention made? YES if based upon data/information NO if not based upon data/information (T 16) How was the decision to continue, modify, or terminate the intervention made? YES if based upon data/information NO if not based upon data/information (C 17) Did [teacher] participate in all meetings (including IC Team meetings) during which the referral problem was discussed, that is beyond brief updates? (T 17) What did you do with the completed referral/request for assistance form?

C16; T16) How was the decision to modify, continue or terminate the intervention made? Was the decision based on data?

C17) Did it appear that the teacher participated in all meetings (inc. IC Team meetings) during which the referral problem was discussed? T17) If it was specified during the taped sessions, state what the teacher did with the completed referral/request for assistance form?

(C 18) Did the teacher actively plan and make the decision as to which intervention to implement?
(T 18) Did you feel that you were a contributing part of the problem-solving process? To the IC Team? That your input was valuable?
(C 19) How much time passed between the teacher's request for assistance (date of referral) and your first meeting?
(C 20) Do you or [teacher] have data generated from this case?


APPENDIX C
Level of Implementation- Tape Version Protocol C1) During the first meeting, how did the case manager explain the problem solving process to the referring teacher? Did the case manager explain/discuss: Consultation stages? Y N Meaning of Collaboration? Y N Time to meet? Y N Parameters of confidentiality? Y N Other? ____________________________________ T1) What did the teacher’s understanding of the IC Team process (collaborative problem solving process) appear to be after the first meeting with the Case Manager? __________ ________________________________________________________________________ C2; T2) Did the teacher agree willingly to work on the student’s problem with the case manager (and IC Team)? Y N C3; T3) Describe the initial referral concern. What concerns did the case manager and the teacher focus upon? ____________________________________________________ What were the current/baseline performance and goals established for the concern(s)?___ ________________________________________________________________________ For Identified Academic Concerns: What were some activities that the case manager and the teacher undertook to better define the problem? T4) Assessment of student’s academic skills and instructional level Y N T5) Assessments conducted in classroom materials and focused upon the individual student rather than norm group. Y N C4) Analysis of entry level skills using CBA/ Instructional assessment Y N

C5) Analysis of targeted academic tasks (i.e. Task or error analysis) Y N Specify how:___________________________________________ C6) Specification of terminal goal Y N Specify what:__________________________________________ For Identified Behavioral Concerns: What are some activities that the case manager and the teacher undertook to better define the problem? T6) Assessment of student’s academic skills and instructional levels relevant to the times/situations of behavioral concern. Y N C4) Analysis of entry level skills using CBA/Instructional assessment in the main academic areas or during the time in which the behavior occurs. Y N C7) Possibility of academic problem assessed. Y N C8; T8) Analysis of setting and situation. Y N How? _________________________________________ C9; T7) Analysis of antecedents/consequences. Y N How? _________________________________________ C10) Specification of desired behavior. Y N How? _________________________________________ Briefly describe intervention(s): _____________________________________________ _______________________________________________________________________


C11; T9) Is there agreement between the Case Manager and Teacher as to which intervention(s) to implement? Y N
Does the strategy relate to the identified concern? Y N
C12; T10) Is there evidence of specification of who is responsible for what, when, in the intervention development? Y N
C13; T11) How was the effectiveness of the strategy/intervention monitored? ____________________________________________________________________
C14; T12) Did the teacher and case manager meet to determine whether the intervention/strategy was implemented as planned? Y N
Did the teacher and case manager agree as to how much modification was needed, if any? Y N
T13) Describe what type of information was collected during the intervention and how often the information was collected: ______________________________________
Was the information graphed/charted? Y N
Did it appear that the case manager and teacher graphed/charted the data within the sessions? Y N
Did one member bring the completed work to the session? Y N
Please describe: __________________________________________________________
Did it appear that the case manager and teacher used/worked with the Student Documentation Form (SDF) within the sessions? Y N
Please describe: __________________________________________________________
T14) Did the teacher and case manager have scheduled meetings to discuss the student's progress? Y N
C15) Approximately how many times did it appear that the case manager and teacher had scheduled/formal meetings to discuss case progress? ________________________
T15) Based on the teacher's responses within the sessions after participating in the IC process for this case, estimate how the teacher would rate the outcome: "We achieved… a) much more than expected; b) somewhat more than expected; c) what was expected; d) somewhat less than expected; e) much less than expected." Did it appear that this rating would be based on data? Y N
C16; T16) How was the decision to modify, continue, or terminate the intervention made? _________________________________________________________________
Was the decision based on data? Y N
C17) Did it appear that the teacher participated in all meetings (inc. IC Team meetings) during which the referral problem was discussed? Y N
T17) If it was specified during the taped sessions, state what the teacher did with the completed referral/request for assistance form: _________________________________


Level of Implementation - Tape Version
Interrater Reliability Training

Materials:
• LOI-R interview protocols (Case Manager and Teacher forms)
• Level of Implementation - Tape Version protocols

Introduction: Thank you for agreeing to assist as an interrater reliability rater. I know that you have taken time away from your own work, and I appreciate it. Since you have conducted LOI interviews and are familiar with the scoring, I will mainly address the potential differences between the interview scoring process and the taped case session scoring process as measured by the Level of Implementation - Tape Version protocol. I am asking you to listen to tapes of consultation sessions from entry and contracting through termination (or until the case manager and consultee stopped taping their sessions). We are going to review the IC stages and the behavioral elements of which each stage is comprised, so that when we listen to the tapes individually, we can score the Tape Version protocols in the same way.

We'll begin by listing the different stages of the IC process and how they correspond to the LOI dimensions. There are several indicators for each of the seven dimensions. Following are general descriptions of the dimensions and their behavioral components:
1) Clear, accurate communication;
2) Contracting (the case manager discusses the IC process, particularly the problem-solving stages, data collection, and confidentiality);
3) Problem Identification (analysis of the concern using classroom-based measures and data specifically collected to measure progress; defining the behavior in clear, objective, and measurable terms);
4) Intervention Recommendations (determining the details of the intervention, including the specifics of time, method, personnel, and monitoring procedures);
5) Implementation (discussion of needed modifications or troubleshooting for practicality problems);
6) Evaluation and Follow-up (discussion of data to determine progress and to make decisions about continuing, modifying, or terminating the intervention);
7) Curriculum-Based Assessment (use of the curriculum to assess the student's strengths, weaknesses, and entry-level skills).

In examining the Tape Version protocol, one can identify where the different components of the IC process are assessed. While listening to the sessions, you will indicate whether the consultation dyad completed the elements of the IC process. In most cases, to indicate completion you would make a notation describing the manner in which they completed the element, then circle Y for yes to indicate that the element was included in the session discussions. If you do not hear evidence of the dyad completing the element, you will initially leave the item blank, as the dyad may discuss the element in future sessions. However, if the dyad has moved on through the next IC stage and you still have not heard evidence of the element's completion, you will score it as "N" for no.

The following are irregularities that may occur when scoring the Tape Version protocol based on the taped sessions, rather than assessing level of implementation through the LOI-R interview forms.

1) In several instances, a particular element will be discussed over several sessions. For example, the consultation dyad may not discuss all of the contracting elements within the first session. The consultant may readdress a particular point, such as following the problem-solving steps, in the second or third session. If a particular element is addressed in any of the sessions, credit is given for the completion of that item. However, if there is a notable delay, or if the elements are significantly out of sequence and you do give credit for element completion, please note the irregularities on the Tape Version protocol (tape number, how the topic arose, what the circumstances were like within the session, etc.).

2) The consultation dyad may discuss several different options for completing an element before deciding which to use. For example, a consultation dyad may discuss and/or collect baseline data on several different student concerns prior to identifying the concern for which they will develop the intervention. Likewise, a dyad may develop and/or implement several different interventions while in the stage of intervention development. In these situations, please note all the concerns or interventions discussed, but score the Tape Version protocol based on the concern for which they use a consistent intervention, and based on the intervention for which they collect consistent data used to make decisions about the student's progress.

3) There may be instances in which an element seems not to apply to the particular case. For example, in developing a homework chart for increased work completion, the analysis of antecedents/consequences and the analysis of settings and situations may not appear to be applicable. If a particular element was not addressed but also did not seem applicable, please note N/A, but SCORE the item as "N."
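Agreement between two raters scoring the same tapes on these Y/N protocols is conventionally summarized with Cohen's kappa (see the reference list). The following is a minimal sketch for dichotomous Y/N scores; the example score lists are purely illustrative.

```python
def cohens_kappa(rater1, rater2):
    """rater1, rater2: equal-length lists of 'Y'/'N' scores on the same items."""
    n = len(rater1)
    if n == 0 or n != len(rater2):
        raise ValueError("need two equal-length, non-empty score lists")
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1, p2 = rater1.count("Y") / n, rater2.count("Y") / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)   # chance agreement, two categories
    if expected == 1.0:
        return 1.0                             # degenerate: no room for disagreement
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(list("YYNYN"), list("YYNNN")), 2))   # 0.62
```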


The following table summarizes the Collaborative Process Domain of the Level of Implementation Scale- Revised (Vail, 1996).

1. Collaborative Communication
• Teacher and case manager have the same perception of the issues discussed, or an understanding of the other's perception

2. Entry and Contracting
• Consultant and consultee discuss the consultation process, the meaning of collaboration, the time involved, and confidentiality
• Consultant and consultee reach a mutual agreement to engage in the process

3. Problem Identification
• Statement of discrepancy between desired and actual student performance
• Academic concerns: a) analysis of entry-level skills using CBA; b) analysis of the targeted academic task; c) specification of the goal in behavioral terms
• Behavioral concerns: a) analysis of antecedents and consequences, setting, and situation; b) statement of the desired behavior

4. Intervention Development
• Intervention based on effective teaching practices
• Consensual decision on which interventions to implement
• Specification of who is responsible for what and when
• Plan for monitoring the effectiveness of the interventions

5. Intervention Implementation
• Consensual agreement between consultant and consultee about the extent to which the specified plan is operationalized
• Plan is monitored as specified
• Measurement data are graphed frequently

6. Evaluation and Follow-up
• Level of progress determined by data
• Decision to terminate, continue, or change the intervention based on data

7. Curriculum-Based Assessment
• Assessment reflects an evaluation of academic behavior in the natural environment
• Assessment focuses on the individual, rather than the normative group
• Child is tested in curriculum material
• Assessment method used is appropriate for continuous monitoring of student progress in order to change interventions as needed

(Fudell, Gravois & Rosenfield, 1994. Critical Dimensions of Instructional Consultation Teams, from Level of Implementation Scale for Instructional Consultation Teams: Administration and Scoring Guide. In Vail, 1995, p. 201)
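Because the goal of this training is agreement between raters on these Y/N items, and Cohen's kappa is the agreement index cited in the References, a minimal sketch of that computation for two raters' scores follows. The ratings shown are hypothetical and serve only to illustrate the statistic, not to report study data.

    # Minimal sketch of Cohen's kappa for two raters' Y/N item scores.
    # The ratings below are hypothetical; assumes chance agreement < 1.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c]
                       for c in set(rater_a) | set(rater_b)) / (n * n)
        return (observed - expected) / (1 - expected)

    # Example: two raters scoring the same ten protocol items.
    a = ["Y", "Y", "N", "Y", "Y", "N", "Y", "Y", "Y", "N"]
    b = ["Y", "Y", "N", "Y", "N", "N", "Y", "Y", "Y", "N"]
    print(round(cohens_kappa(a, b), 2))  # 0.78 for this hypothetical pair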


APPENDIX D

Completeness of Audiotaped Case Sessions

Completeness (number of cases): sessions taped
• Fully Complete (6 cases): Contracting; Problem Identification; Intervention Design; Intervention Implementation/Evaluation; Closure
• Complete Without Closure (5 cases): Contracting; Problem Identification; Intervention Design; Intervention Implementation
• Majority Complete (6 cases): Contracting; Problem Identification; Intervention Design
• No Contracting, With Closure (1 case): Problem Identification; Intervention Design; Intervention Implementation/Evaluation + Closure
• No Contracting, Without Closure (1 case): Problem Identification; Intervention Design; Intervention Implementation/Evaluation
• No Problem Identification, Without Closure (1 case): Contracting; Intervention Design; Intervention Implementation/Evaluation
(Total: 20 cases)
___________________________________________________________________


REFERENCES

Allen, S. J., & Graden, J. L. (1995). Best practices in problem solving teams. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology III. Washington, DC: National Association of School Psychologists.

Allen, S. J., & Graden, J. L. (2002). Best practices in collaborative problem solving for intervention designs. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 565-582). Bethesda, MD: National Association of School Psychologists.

Anton, J. M., & Rosenfield, S. (2000, February). A survey of preservice consultation training and supervision. Poster session presented at the annual meeting of the National Association of School Psychologists, New Orleans.

Bahr, M., Whitten, E., Dieker, L., Kocarek, C., & Manson, D. (1999). A comparison of school-based intervention teams: Implications for educational and legal reform. Exceptional Children, 66, 67-83.

Belli, R. F., Shay, W. L., & Stafford, F. P. (2001). Event History Calendars and question list surveys: A direct comparison of interviewing methods. Public Opinion Quarterly, 65(1), 45-74.

Bergan, J. R. (1977). Behavioral consultation. Columbus, OH: Merrill.

Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum Press.

Bernard, J. M., & Goodyear, R. K. (1998). Fundamentals of clinical supervision (2nd ed.). Boston: Allyn & Bacon.

Buck, G. H., Polloway, E. A., Smith-Thomas, A., & Cook, K. W. (2003). Prereferral intervention processes: A survey of state practices. Exceptional Children, 69, 349-360.

Caplan, G. (1970). The theory and practice of mental health consultation. New York: Basic Books.

Carter, J., & Sugai, G. (1989). Survey on prereferral practices: Responses from state departments of education. Exceptional Children, 55, 298-302.

Chalfant, J. C., & Pysh, M. V. (1989). Teacher assistance teams: Five descriptive studies on 96 teams. Remedial & Special Education, 10, 49-58.

Cohen's kappa: Index of interrater reliability. Retrieved December 30, 2004, from http://www-class.unl.edu/psycrs/handcomp/hckappa.PDF


Conoley, J. C., & Conoley, C. W. (1982). School consultation: A guide to practice and training. Elmsford, NY: Pergamon Press.

Conoley, J. C., & Gutkin, T. B. (1986). School psychology: A reconceptualization of service delivery realities. In S. N. Elliott & J. C. Witt (Eds.), The delivery of psychological services in schools: Concepts, processes and issues (pp. 393-424). Hillsdale, NJ: Erlbaum.

Conrad, F. G., & Schober, M. F. (2000). Clarifying question meaning in a household telephone survey. Public Opinion Quarterly, 64, 1-28.

Croyle, R. T., & Loftus, E. F. (1994). Improving episodic memory performance of survey respondents. In J. M. Tanur (Ed.), Questions about questions: Inquiries into the cognitive bases of surveys (pp. 95-101). New York: Russell Sage Foundation.

Curtis, M. J., & Stollar, S. A. (2002). Best practices in system-level change. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 223-234). Bethesda, MD: National Association of School Psychologists.

Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11(2), 193-221.

Eber, L., Lewis-Palmer, T., & Pacchiano, D. (2002). School-wide positive behavior systems: Improving school environments for all students including those with EBD. In A system of care for children's mental health: Expanding the research base. Proceedings of the Annual Research Conference, February 25-28, 2001, Tampa, FL. (ERIC Document Reproduction Service No. ED465253)

Ehrhardt, K. E., Barnett, D. W., Lentz, R. E., Jr., Stollar, S. A., & Reifin, L. H. (1996). Innovative methodology in ecological consultation: Use of scripts to promote treatment acceptability and integrity. School Psychology Quarterly, 11(2), 149-168.

Erchul, W. P. (1987). A relational communication analysis of control in school consultation. Professional School Psychology, 2, 113-124.

Erchul, W. P., & Chewning, T. G. (1990). Behavioral consultation from a request-centered relational communication perspective. School Psychology Quarterly, 5, 1-20.


Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32, 1-14.

Friedland, B. L., & Walz, L. M. (2003, March). Evaluating teaming skills in a rural university clinical experience: Continuation across two summers. In Rural survival: Proceedings of the annual conference of the American Council on Rural Special Education (ACRES), Salt Lake City, UT. (ERIC Document Reproduction Service No. ED476215)

Friend, M., & Cook, L. (1992). Interactions: Collaboration skills for school professionals. New York: Longman.

Friend, M., & Cook, L. (1997). Student-centered teams in schools: Still in search of an identity. Journal of Educational and Psychological Consultation, 8, 3-21.

Fuchs, D., & Fuchs, L. S. (1989). Exploring effective and efficient prereferral interventions: A component analysis of behavioral consultations. School Psychology Review, 18, 260-279.

Fuchs, D., Fuchs, L. S., & Bahr, M. W. (1990). Mainstream assistance teams: A scientific basis for the art of consultation. Exceptional Children, 57, 128-139.

Fuchs, D., Fuchs, L. S., Bahr, M. W., Fernstrom, P., & Stecker, P. M. (1990). Prereferral intervention: A prescriptive approach. Exceptional Children, 56(6), 493-513.

Fudell, R. (1992). Level of implementation of Teacher Support Teams and teachers' attitudes toward special needs students. Unpublished doctoral dissertation, Temple University, Philadelphia.

Fudell, R., & Dougherty, K. (1989). Teacher Support Teams: State of policy and description of elements. Unpublished manuscript.

Fudell, R., Gravois, T., & Rosenfield, S. A. (1996). Appendix A: IC-Team LOI-Revised. In S. A. Rosenfield & T. A. Gravois, Instructional consultation teams: Collaborating for change. New York: Guilford Press.

Fullan, M. (1983). Evaluating program implementation: What can be learned from Follow Through. Curriculum Inquiry, 13(2), 215-227.

Fullan, M. (1991). The new meaning of educational change. New York: Teachers College Press.


Gerstl-Pepin, C. I., & Gunzenhauser, M. G. (2002). Collaborative team ethnography and the paradoxes of interpretation. International Journal of Qualitative Studies in Education, 15, 137-154.

Gravois, T. A. (1995). The relationship between communication use and collaboration of school-based problem-solving teams. Dissertation Abstracts International, 56(11A), 4324. (University Microfilms No. AAI9607765)

Gravois, T. A., Fudell, R., & Rosenfield, S. A. (2005). Level of Implementation Scale for Instructional Consultation Teams: Administration and scoring guide. Unpublished document.

Gravois, T. A., Knotek, S., & Babinski, L. M. (2002). Educating practitioners as consultants: Development and implementation of the Instructional Consultation Team Consortium. Journal of Educational and Psychological Consultation, 13(1&2), 113-132.

Gravois, T. A., & Rosenfield, S. (2002). A multi-dimensional framework for evaluation of Instructional Consultation Teams. Journal of Applied School Psychology, 19(1), 5-29.

Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26(2), 257-263.

Gresham, F. M., & Kendell, G. K. (1987). School consultation research: Methodological critique and future research directions. School Psychology Review, 16(3), 306-316.

Gutkin, T. B. (1993). Conducting consultation research. In J. E. Zins, T. R. Kratochwill, & S. N. Elliott (Eds.), Handbook of consultation services for children (pp. 227-248). San Francisco, CA: Jossey-Bass.

Gutkin, T. B., & Curtis, M. J. (1990). School-based consultation: Theory, techniques, and research. In T. B. Gutkin & C. R. Reynolds (Eds.), The handbook of school psychology (2nd ed., pp. 577-611). New York: Wiley.

Henning-Stout, M. (1993). Theoretical and empirical bases of consultation. In J. E. Zins, T. R. Kratochwill, & S. N. Elliott (Eds.), Handbook of consultation services for children (pp. 15-45). San Francisco, CA: Jossey-Bass.

High time for high school reform: Early findings from the evaluation of the National School District and Network Grants Program. (2003). Washington, DC: American Institutes for Research. (ERIC Document Reproduction Service No. ED476004)


Hunt, P., & Goetz, L. (2002). Inclusive reform in urban schools through peer-to-peer support from school teams. Directed research projects: Educating children with severe disabilities in inclusive settings (Final project report, October 1997-September 2000). San Francisco, CA: San Francisco State University. (ERIC Document Reproduction Service No. ED469843)

Iverson, A. M. (2002). Best practices in problem solving team structure and process. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 657-669). Bethesda, MD: National Association of School Psychologists.

Jobe, J. B. (2000). Cognitive processes in self-report. In A. A. Stone, J. S. Turkkan, C. A. Bachrach, J. B. Jobe, H. S. Kurtzman, & V. S. Cain (Eds.), The science of self-report: Implications for research and practice (pp. 25-28). Mahwah, NJ: Lawrence Erlbaum Associates.

Jobe, J. B. (2003). Cognitive psychology and self-reports: Models and methods. Quality of Life Research, 12, 219-227.

Jobe, J. B., Tourangeau, R., & Smith, A. F. (1993). Contributions of survey research to the understanding of memory. Applied Cognitive Psychology, 7, 567-584.

Johnson, T. L. (1998). An analysis of request-centered relational communication within behavioral consultation using a sample of practicing school psychologists. Unpublished doctoral dissertation, Iowa State University.

Johnson, S. (2000). Intervention/prevention program evaluation, 1998-99: Eye on evaluation. Raleigh, NC: Wake County Public School System. (ERIC Document Reproduction Service No. ED438317)

Jones, G. (1999). Validation of a simulation to evaluate instructional consultation problem identification skill competence. Dissertation Abstracts International, 60(12A), 4317.

Jones, K. M., Wickstrom, K. F., & Friman, P. C. (1997). The effects of observational feedback on treatment integrity in school-based behavioral consultation. School Psychology Quarterly, 12(4), 316-326.

Knoff, H. M. (2002). Best practices in facilitating school reform, organizational change, and strategic planning. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 235-253). Bethesda, MD: National Association of School Psychologists.


Kovaleski, J. F. (2002). Best practices in operating pre-referral intervention teams. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 645-655). Bethesda, MD: National Association of School Psychologists.

Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, P. R. (1999). High versus low implementation of Instructional Support Teams. Remedial and Special Education, 20(3), 170-183.

Kratochwill, T. R., Elliott, S. N., & Callan-Stoiber, K. (2002). Best practices in school-based problem-solving consultation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 583-608). Bethesda, MD: National Association of School Psychologists.

Kratochwill, T. R., & Pittman, P. H. (2002). Expanding problem-solving consultation training: Prospects and frameworks. Journal of Educational and Psychological Consultation, 13(1&2), 69-95.

Kurtalt, S. K. (1990). Collaboration in the classroom: Implementing consultation-based prereferral intervention as the service delivery system of five elementary multi-disciplinary teams. Dissertation Abstracts International, 51(00A), 3018. (University Microfilms No. AAG9034633)

Leithwood, K., & Montgomery, D. (1980). Evaluating program implementation. Evaluation Review, 4, 193-214.

Macmann, G. M., Barnett, D. W., Allen, S. J., Bramlett, R. K., Hall, J. D., & Ehrhardt, K. E. (1996). Problem solving and intervention design: Guidelines for technical adequacy. School Psychology Quarterly, 11(2), 137-148.

Martin, R. (1978). Expert and referent power: A framework for understanding and maximizing consultation effectiveness. Journal of School Psychology, 16, 49-55.

Mirel, J. (2001). The evolution of the New American Schools: From revolution to mainstream. Washington, DC: Thomas B. Fordham Foundation. (ERIC Document Reproduction Service No. ED461945)

Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11, 247-266.


National TEEM Outreach: Successfully including young children in kindergarten and subsequent general education classrooms (Final report, October 1998-September 2001). (2001). Burlington, VT: Center on Disability and Community Inclusion. (ERIC Document Reproduction Service No. ED464433)

Nelson, J. R., Smith, D. J., Taylor, L., Dodd, J. M., & Reavis, K. (1991). Prereferral intervention: A review of the research. Education and Treatment of Children, 14, 243-253.

Noell, G. H., Witt, J. C., Gilbertson, D. N., Ranier, D. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general educational settings through consultation and performance feedback. School Psychology Quarterly, 12(1), 77-88.

O'Sullivan, R. G., & Page, B. (2000, April). Collaborative evaluation of Schools Attuned. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA. (ERIC Document Reproduction Service No. ED441824)

Pearson, R. W., Ross, M., & Dawes, R. M. (1994). Personal recall and the limits of retrospective questions in surveys. In J. M. Tanur (Ed.), Questions about questions: Inquiries into the cognitive bases of surveys (pp. 65-94). New York: Russell Sage Foundation.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15(4), 477-492.

Pugach, M., & Johnson, L. J. (1989). Prereferral interventions: Progress, problems and challenges. Exceptional Children, 56, 217-226.

Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavioral interventions: A review of the literature. School Psychology Review, 16(2), 212-227.

Rosenfield, S. (1987). Instructional consultation. Hillsdale, NJ: Lawrence Erlbaum Associates.

Rosenfield, S. (1992). Developing school-based consultation teams: A design for organizational change. School Psychology Quarterly, 7, 27-46.

Rosenfield, S. (2002a). Best practices in instructional consultation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV. Bethesda, MD: National Association of School Psychologists.


Rosenfield, S. (2002b). Developing instructional consultants: From novice to competent to expert. Journal of Educational and Psychological Consultation, 13(1&2), 97-111.

Rosenfield, S. A., & Gravois, T. A. (1996). Instructional consultation teams: Collaborating for change. New York: Guilford Press.

Rubin, D. C., & Wenzel, A. E. (1996). One hundred years of forgetting: A quantitative description of retention. Psychological Review, 103, 734-760.

Rubin, R., Stuck, G., & Revicki, D. (1982). A model for assessing the degree of implementation in field-based educational programs. Educational Evaluation and Policy Analysis, 4, 189-196.

Safran, S. P., & Safran, J. S. (1996). Intervention assistance programs and prereferral teams. Remedial & Special Education, 17, 363-370.

Schober, M. F., Conrad, F. G., & Fricker, S. S. (2004). Misunderstanding standardized language in research interviews. Applied Cognitive Psychology, 18, 169-188.

Shapiro, E. S. (1987). Intervention research methodology in school psychology. School Psychology Review, 16(3), 290-305.

Stevens, V., Van Oost, P., & De Bourdeaudhuij, I. (2001). Implementation process of the Flemish antibullying intervention and relation with program effectiveness. Journal of School Psychology, 39(4), 303-317.

Stoll, L., Wikeley, F., & Reezigt, G. (2002). Developing a common model? Comparing effective school improvement across European countries. Educational Research and Evaluation, 8, 455-475.

Suchman, L., & Jordan, B. (1994). Validity and the collaborative construction of meaning in face-to-face surveys. In J. M. Tanur (Ed.), Questions about questions: Inquiries into the cognitive bases of surveys (pp. 241-267). New York: Russell Sage Foundation.

Tan, S.-L., Callahan, J., Hatch, J., Jordan, T., Esmond, N., & Burnham, B. (2002). An evaluation of the Millard High School block schedule. Utah. (ERIC Document Reproduction Service No. ED477714)

Telzrow, C. F., & Beebe, J. J. (2002). Best practices in facilitating intervention adherence and integrity. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 503-516). Bethesda, MD: National Association of School Psychologists.


Tharp, R. G., & Gallimore, R. (1979). The ecology of program research and evaluation: A model of evaluation succession. In L. B. Sechrest (Ed.), Evaluation studies review annual (pp. 39-60). Beverly Hills, CA: Sage Publications.

Thousand, J. S., & Villa, R. A. (1992). Collaborative teams: A powerful tool in school restructuring. In R. A. Villa, J. S. Thousand, W. Stainback, & S. Stainback (Eds.), Restructuring for caring and effective education: An administrative guide to creating heterogeneous schools (pp. 73-108). Baltimore: Paul H. Brookes.

Tourangeau, R. (2000). Remembering what happened: Memory errors and survey reports. In A. A. Stone, J. S. Turkkan, C. A. Bachrach, J. B. Jobe, H. S. Kurtzman, & V. S. Cain (Eds.), The science of self-report: Implications for research and practice (pp. 29-47). Mahwah, NJ: Lawrence Erlbaum Associates.

Upah, K. R., & Tilly, W. D., III. (2002). Best practices in designing, implementing, and evaluating quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 483-501). Bethesda, MD: National Association of School Psychologists.

Vail, P. L. (1996). Instructional Consultation Teams: Analysis of level of implementation over two years and its relationship with team collaboration. Unpublished master's thesis, University of Maryland, College Park.

Vail, P. L. (2003). On-line coaching of consultation skills: Through the eyes of coaches and consultants. Unpublished doctoral dissertation, University of Maryland.

Vail, L., & Strein, W. O. (1997, August 17). Instructional Consultation Teams: Analysis of level of implementation over two years and its relationship with team collaboration. Poster presented at the annual convention of the American Psychological Association, Chicago.

Wang, M., Nojan, M., Strom, C., & Walberg, H. (1984). The utility of degree of implementation measures in program implementation and evaluation research. Curriculum Inquiry, 14, 249-286.

Ward, S. B., Korinek, L., & McLaughlin, V. (1998). An investigation of intervention assistance teams at a preservice level. School Psychology International, 19, 279-286.

Will, M. C. (1986). Educating children with learning problems: A shared responsibility. Exceptional Children, 52, 411-416.


Witt, J. C. (1997). Talk is not cheap. School Psychology Quarterly, 12, 281-292.

Witt, J. C., Erchul, W. P., McKee, W. T., Pardue, M. M., & Wickstrom, K. F. (1991). Conversational control in school-based consultation: The relationship between consultant and consultee topic determination and consultation outcome. Journal of Educational and Psychological Consultation, 2(2), 101-116.

Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher usage of interventions in general educational settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693-696.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49(2), 156-167.

Zins, J. E., & Erchul, W. P. (2002). Best practices in school consultation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 625-643). Bethesda, MD: National Association of School Psychologists.

Zins, J. E., Kratochwill, T. R., & Elliott, S. N. (1993). Current status of the field. In J. E. Zins, T. R. Kratochwill, & S. N. Elliott (Eds.), Handbook of consultation services for children (pp. 1-12). San Francisco, CA: Jossey-Bass.

Zins, J., Curtis, M., Graden, J., & Ponti, C. (1988). Helping students succeed in the regular classroom. San Francisco: Jossey-Bass.



