
ABSTRACT

Title of dissertation: AN ANALYTIC CASE STUDY OF THE EVALUATION REPORTS OF A COMPREHENSIVE COMMUNITY INITIATIVE

Angela Katherine Frusciante, Doctor of Philosophy, 2004

Dissertation directed by: Professor Hanne B. Mawhinney

This study is an analytic case study of the evaluation reports of the Neighborhood and Family Initiative (NFI). NFI was a ten-year, Ford Foundation-sponsored comprehensive community initiative (CCI) in four low-income neighborhoods in four United States cities. The NFI evaluation was longitudinal, interdisciplinary, and multi-tiered. Through this study of the eleven publicly released evaluation reports, I found that the evaluators not only wrote about CCIs and evaluation but also evidenced evaluation as part of a loosely linked network supporting urban community development. The knowledge community addressed in the study is the Aspen Roundtable on Comprehensive Community Initiatives, a national coalition supporting the discussion of evaluation appropriate to community initiatives. The study involved the identification of reporting dimensions from descriptive analysis, evaluation lessons from the evaluators' documented interpretations, and change constructs from my theoretical concerns. The study resulted in a discussion of issue areas to be addressed in understanding evaluation reporting of complex social and policy initiatives. These issue areas included: community organization building versus coalition formation; comprehensiveness as a lens for change; audience; institutional distancing; and learning, knowledge development, and education.

With the study, I also provide an innovative methodological approach to analyzing change through the language evaluators put to initiative reporting. The qualitative approach involved devising a process for analyzing not only description and written evaluator reflection but also change in evaluator interpretations. Unlike qualitative approaches that emphasize only themes as recurrences over time, the approach of this study centered on ideas as clusters that changed in configuration over time.

AN ANALYTIC CASE STUDY OF THE EVALUATION REPORTS OF A COMPREHENSIVE COMMUNITY INITIATIVE

by

Angela Katherine Frusciante

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2004

Advisory Committee:
Professor Hanne B. Mawhinney, Chair
Professor Howell Baum
Professor Barbara Finkelstein
Professor Meredith Honig
Professor Sylvia Rosenfield

©Copyright by Angela Katherine Frusciante 2004

DEDICATION

To my mom Rose, my elementary school principal Charles O'Hara, and my K-adult students, who all supported me in my learning that social injustice, even when uncovered in the halls of education, is never right, is never tolerable, and is never justified, not even in the pursuit of knowledge. And to those members of the research profession who nurtured me, cared for me, laughed and silently cried with me, but nonetheless welcomed me -- values and all.


TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER ONE: INTRODUCTION
    Comprehensive Community Initiatives and Evaluation in Context
    NFI as a Case for Understanding CCI Evaluation Reporting
    Analytic Case Study
    Definition of Key Terms

CHAPTER TWO: CONCEPTUAL CONTEXT
    Comprehensive Community Initiative Background
    Comprehensive Community Initiative Concepts
    Evaluation Approaches Influencing CCI Evaluation
        Evaluation for Social Program Development
        Evaluation for Social Change
    Evaluation in the Context of Community Initiatives
    CCI Evaluation Reporting

CHAPTER THREE: METHODOLOGY
    Qualitative Research of CCI Evaluation
    Case Study
    Case Selection
    Methods
        Data
        Analytic Questions
        Techniques
            Coding Primary Data
            Graphic Displays
            Textual Summaries (Including Visuals)
            Analytic Memoing
        Investigative Iterations
            Immersion into the Data and Segmenting
            Visual Diagramming of Text Units
            Analytic Layering
            Construct Definition
    Challenges to Credibility of Change Analysis Using Documents
    Trustworthiness Standards
        Standard of Reflexivity
        Standard of Descriptive and Interpretive Coherence
        Standard of Process Adherence


        Standard of Transferability
    Trustworthiness Approaches to Standards
        Identifying Data
        Using Description
        Providing an Audit Trail
        Being Transparent

CHAPTER FOUR: CASE STUDY FINDINGS
    NFI Evaluation as a Case of Learning About Evaluation Reporting
        NFI Central Organizations as Members of a Knowledge Community
        NFI Structural Change as Initiative Decentralization
        NFI Context as the Knowledge Community Boundaries
        NFI Evaluation Purpose and Structure for Learning
    NFI Evaluation Reports as Public Knowledge Development
        The 1992 Chapin Hall Report
        The 1993 Chapin Hall Report
        The 1993 and 1994 Michigan Reports
        The 1995 Chapin Hall Report
        The 1997 Chapin Hall Report
        The 1998 Milwaukee Report
        The 1999 Chapin Hall Report
        COSMOS 2000
        The 2000 Chapin Hall Reports
    Dimensions in Relation to Surrounding Literature
    Topical Questions as Lessons Documented by NFI Evaluators
    Documenting Change in Reporting
        Description Change Constructs
            Development as a Change Construct
            Resource as a Change Construct
            Participation as a Change Construct
        Evaluation Change Constructs
            Internal and External Communication as Change Constructs
            Data as a Change Construct
            Outcomes as a Change Construct
            Context as a Change Construct
    Reporting Issues
        Reporting and Comprehensiveness
        Reporting and Communication
        Reporting and Funding
        Reporting and Sustainability
        Reporting and Knowledge Norms
        Reporting and Decentralization
        Reporting and Knowledge Communities


CHAPTER FIVE: DISCUSSION
    Review of the Problem, Purpose and Questions That Guided the Study
    Overview of the Study Process and Findings
    Discussion of Findings
        Community Organization Building vs. Coalition Creation
        Comprehensiveness as a Lens for Change
        Audience in Evaluation
        Understanding Institutional Distance
            Independence
            Communication
            Data Leveraging
        Learning, Knowledge Development, and the Educational Potential of CCI Evaluation Reporting
    Study Meaning to Evaluation Approaches
        Purpose of Evaluative Work
        Data Interpretation
        Evaluator Roles
    Interim Conclusions
    Reflection on Limitations of Studying the Reporting of a Changing Initiative
    Contributions of Study to Policymaking, Theory-Development Within Initiatives, and Evaluation Language Practice
    Language in Reporting: Implications for Future Research

APPENDICES
    A. Selection Process for National Initiative
    B. Narrative Criteria
    C. Description Information Matrix
    D. Description Information Text
    E. Evaluation Overview Information Matrix
    F. Selection of Information Search Locations

REFERENCES


LIST OF FIGURES

1. Aspen Roundtable Evaluation Heuristic
2. CCI Evaluation Literature
3. Analytic Approach
4. Visual Diagramming
5. NFI Evaluation Problems, Purposes and Challenges
6. Chapin Hall 1992 Report Diagram – Development
7. Chapin Hall 2000 Report Diagram – Development
8. Chapin Hall 1995 Report Diagram – Participation


LIST OF TABLES

1. Primary Data
2. Trustworthiness Approaches


CHAPTER ONE

INTRODUCTION

There are many ideas to share about the Ford Foundation's Neighborhood and Family Initiative (NFI), a national community development initiative that took place in four United States urban neighborhoods. In 1990, the Ford Foundation began funding what was to become a ten-year demonstration initiative. There were many people involved and just as many views. There was also the potential for changed views, because the initiative was to involve learning and because lessons were to be shared publicly. The initiative funding included support for evaluation that, according to initial program reports, would contribute to both the learning and the public reporting of the initiative. The evaluators recognized and acknowledged some of these lessons. Other changes, as areas of potential learning, were not stated but were evidenced in the ways in which the evaluators documented the initiative.

In the public reports, national evaluators came to refer to NFI as a comprehensive community initiative (CCI). CCIs are approaches to neighborhood change within which participants plan and implement strategies to address geographically targeted issues related to development. NFI evaluators documented the programmatic adjustments of NFI and described the initiative structure as it changed over the decade of development funding. They described a centralized initiative structure that became decentralized as local collaboratives began to take on responsibility for making decisions appropriate to the circumstances influencing development in their communities. However, evaluators also revealed, in their description, an evaluation structure that remained predominantly centralized in the reporting of the process of the initiative.

The NFI "national" evaluators claimed to use the evaluation process to build theory and stated that they had a participatory intent in conducting the evaluation with the various members and contributors of the initiative. At times, the national evaluators also described the evaluation as ethnographic. Throughout the evaluation reports, the NFI evaluators reflected on the challenges of the evaluation process and the changes that took place in evaluation responsibility. In the final reports, the NFI evaluators began to refer to a "theory-of-change" evaluation approach, the language used by the Aspen Roundtable on Comprehensive Community Initiatives for Children and Families (Aspen Roundtable). This shared terminology is not surprising: the Ford Foundation funded both NFI and the Aspen Roundtable, and the director of the evaluation firm that conducted the NFI national evaluation served as co-chair of the Aspen Roundtable and was a member of the Roundtable's steering committee on evaluation. The NFI evaluation and the Aspen Roundtable thus overlapped in membership, funding, and focus, linking NFI both to the local circumstances addressed by the initiative and to a national knowledge community represented by the Aspen Roundtable. Because NFI evaluation reporting occurred at the same time as public discussions about the challenges of CCI evaluation that is both theory-based and participatory, the evaluation reports provided a medium for understanding CCI evaluation reporting as situated within the ideas of a broader knowledge community.

When funded as part of a nonprofit initiative such as NFI, CCI evaluation reports are themselves products of public investment made possible with monies that are set aside through tax incentives. Tax exemptions for nonprofits are allocated with the understanding that the specific nonprofit is engaging in public activities that would otherwise be conducted by the government (Hawks, 1997, p. 8). Because of this public investment, readers might conclude that the NFI evaluators would have shared publicly the model of comprehensive development they claimed to be refining (Chaskin, 1992). Readers might also expect that the NFI evaluation documents would have provided a picture of evaluation as it relates to comprehensive initiatives and that the evaluators would have outlined their developing theory. However, although NFI evaluators engaged in a ten-year description of a model for comprehensive development, there is no evidence that a theory for evaluation was developed.

Despite the omissions, the NFI reports do have public value. The reports offered details about NFI as a community initiative, and they provided a snapshot of the way in which NFI evaluators framed the initiative and changed that framing over the course of Ford Foundation evaluation funding. Therefore, the NFI reports, as evaluation reports, offered evidence of CCI evaluation reporting. Even without an explicit evaluation theory, this evidence provided a means to identify issues important to CCI evaluation reporting. In the context of community initiatives and the framing of CCIs, evaluation is a phenomenon experienced by those involved and constructed as participants reflect on their involvement and give language to their experiences and understanding. As evidence of CCI evaluation, the NFI reports served as textual data for examining how evaluators put language to evaluation within a comprehensive community initiative. Because the reports were written and released over the course of NFI, they also offered opportunities to identify change in evaluation reporting over time and, through change, to think about learning and education as related to evaluation reporting. Using the reports to study CCI evaluation allowed me both to explore evaluation reporting and to maximize the public investment in NFI by using the reports to develop what the evaluators did not -- a public understanding of CCI evaluation reporting.

The NFI evaluation became a case that I utilized for understanding CCI evaluation reporting through an analysis of change as it occurred over time. I was interested in NFI as a case even though I did not participate directly in NFI. Rather, I came to an understanding of NFI from my analysis of the evaluation reports that were produced and publicly released by evaluators working within the initiative. I came to this understanding after spending more than a decade studying and working within the field of social development and evaluation. My experience within social development and evaluation has been holistic in nature. I have explored social initiatives from a variety of perspectives, working at the local, regional, and national levels; in the private, public, and nonprofit sectors; and in community training, education, and policy research. I have worked with groups that held perspectives including historical, architectural, political, psychological, educational, economic, legal, and anthropological views. I have worked with both quantitative and qualitative data from basic, applied, and participatory stances.

With this experience as a backdrop, I wanted to learn something, from studying NFI, about how community initiatives looked in reports and how evaluators communicated CCI evaluation through their reports. Having been involved in evaluation, I was doubtful about the ability of the reports to assist me in understanding and engaging in CCI evaluation. I assumed that the writings in publicly available reports would not provide as deep or informed an understanding as writings, such as journal articles, geared toward professional and academic audiences with specific expertise. I was skeptical of the ability of professional evaluators to achieve comprehensiveness in their evaluating and to reflect on their own involvement in the enterprise of evaluation. I was also curious about the ability of evaluators to offer publicly valuable information given the pressures of private philanthropic control over funding. I wanted to see if my skepticism was justified and to know what was left, in written form, of the CCI evaluations. I wanted to find out if there was anything more to learn from these documents about how to understand evaluation within CCIs. I hoped to demonstrate that, through systematic analysis of reports, it would be possible to maximize the learning from the publicly sanctioned private investment in an example of CCI reporting. In the process of this study, I did learn from the reports, and I was also able to utilize the text to reveal issues that were related to evaluation reporting but that were evidenced rather than discussed in the reports themselves.

Comprehensive Community Initiatives and Evaluation in Context

The term comprehensive community initiative (CCI) was used to describe one approach to neighborhood reform geared toward improving quality of life for children and families in communities disadvantaged by poverty (Baum, 2001; Brown, 1996; Fraser, Kick, & Williams, 2002; Roundtable on Comprehensive Community Initiatives, 1997). Supporters of CCIs sought to focus attention geographically and to attract investment, realign and mobilize local and institutional resources, identify and develop social capital, and increase civic engagement (Brown, 1996; Kingsley, McNeely, & Gibson, n.d.; Stone, 1996).

CCI approaches are grounded in a legacy of local, state, and nationally supported neighborhood development efforts that have taken place within various funding structures and policy mandates. Nineteenth century settlement houses, the Federal Community Action and Model Cities programs of the 1960s, the Community Development Corporations of the past 30 years, and a variety of grassroots efforts provide guideposts for the history of neighborhood development (Baum, 2001; Fraser, Lepofsky, Kick, & Williams, 2003; Kubisch, Fulbright-Anderson, & Connell, 1998; O'Connor, 1995; Stone, 1994).[1] Supporters of these different community initiative strategies have included citizens, professionals, public representatives, and private philanthropic contributors.

[1] For a detailed history of community initiatives, the reader may want to look to work by historian Alice O'Connor and publications from the Urban Institute.

Like some of the previous community initiatives, CCIs were designed to promote local participation (Chaskin & Abunimah, 1997; Roundtable on Comprehensive Community Initiatives, 2002; Stone, 1996), systemic approaches to development (Brown, 1996; Spruill, Kenney, & Kaplan, 2001; Stone, 1994), and mobilization of resources to address development issues in targeted geographic areas (Chaskin, 1997; Chaskin & Abunimah, 1997). Unlike past community efforts that focused on internal organizational issues of community-based entities, horizontal relationships within community systems, or categorical program impacts of services for specific individuals, the study of CCIs has brought a focus on the multiple dimensions of community initiatives as they involve complicated combinations of strategies situated within complex contexts. Dynamism is sometimes regarded as the hallmark of contemporary community initiatives. However, the vagueness of these initiatives in addressing complexity and achieving synergy among development strategies raises concerns about both the legitimacy of CCI work and evaluative reports of their importance and success.

In relation to concerns about community initiatives, including those considered comprehensive, one area of inquiry that has received increased and sustained interest is the evaluation of initiatives (Fraser et al., 2002; Hollister & Hill, 1995; Murphy-Berman, Schnoes, & Chambers, 2000; O'Connor, 1995; Petersen, 2002; Schulz, Israel, & Lantz, 2003; Springer & Phillips, 1994; H. Weiss, Coffman, & Bohan-Baker, 2002). Numerous researchers have discussed their experiences with evaluation, have offered new tools and new ways of understanding evaluation, have conducted analyses of evaluation findings, and have commented on methodological concerns for establishing legitimacy (Connell, Kubisch, Schorr, & Weiss, 1995; Fulbright-Anderson, Kubisch, & Connell, 1998; Mattingly, Prislin, McKenzie, Rodriguez, & Kayzar, 2002; Petersen, 2002; Schulz et al., 2003). A few researchers have also sought to systematically study evaluation approaches and their use (Christie & Alkin, 2003; Henry & Mark, 2003; Nichols, 2002; Segerholm, 2003). These various concerns have also been raised in relation to the understanding of large-scale community initiatives that have received funding to support longitudinal evaluation.

Longitudinal evaluations have occurred with the support of funding from large foundations like the Ford Foundation. Evaluations of foundation-supported initiatives are products of privately generated public investment (Hall, 2003). As a publicly funded activity and as a form of public reporting, evaluation has a potential purpose in linking community initiatives to a broader audience. As a public endeavor, the understanding of community initiatives and their evaluation may also influence CCI success, since the public message of CCIs may contribute to addressing relevant contextual factors and creating conditions supportive of increased investment in community development. With this public purpose, discussions of the process of evaluation go beyond methodological rhetoric to the heart of the design and use of publicly sanctioned social investment and the rights of citizens to claim ownership over the knowledge developed and reported within and throughout funded initiatives. Evaluation is thus itself a public good (Segerholm, 2003), making discussions of evaluation a concern of interest to participants other than fund managers.

However, evaluation has traditionally been utilized solely for objective program monitoring to inform managers of why programs fail (Scriven, 1997; Sechrest, 1994; Stufflebeam, 1994). Evaluation has sometimes been used as a learning tool inside an organization and for the purpose of program improvement or efficiency (Christie & Alkin, 2003; Owen & Lambert, 1998). On occasion, evaluation has itself been understood in an involved and participatory orientation, with multiple stakeholders taking part in planning and informing the organization's activities (Brandon, 1998; Cousins & Earl, 1992; Nichols, 2002). In each of these views of evaluation, researcher concern has been focused on specific organizational issues and often on categorically targeted outcomes. However, Greene (2000) emphasizes that evaluation is distinguished from other forms of research by the "explicit value dimensions of its knowledge claims, by the overt political character of its contexts, and by the inevitable pluralism and poly-vocality of its actors" (p. 981). As Greene also notes, although learning may be crucial to the functioning of organizations engaged in a process of ongoing adjustment and improvement, evaluation embraces a socio-political role. Moving the understanding of evaluation beyond attention to existing individual and organizational behavior means that issues of evaluation also move beyond a focus on existing conditions and toward possibilities of learning and change.

Researchers have addressed the idea of evaluation being related to social and political dynamics and have considered the implications of understanding evaluation as a socio-political activity (Connell et al., 1995; Fulbright-Anderson et al., 1998; Segerholm, 2003; Springer & Phillips, 1994; H. Weiss et al., 2002). As Weiss wrote about the complexity of evaluation, researchers may have multiple allegiances:

    He has obligations to the organization that funds his study. He owes it a report of unqualified objectivity and as much usefulness for action as he can devise. Beyond the specific organization, he has responsibilities to contribute to the improvement of social change efforts. Whether or not the organization supports the study's conclusions, the evaluator often perceives an obligation to work for their application for the sake of the common weal. On both counts, he has commitments in the action arena. He also has an obligation to the development of knowledge and to his professions. As a social scientist, he seeks to advance the frontiers of knowledge about how intervention affects human lives and institutions. (C. H. Weiss, 1972, p. 8)

With the awareness of this complexity, public readers of evaluation may now critique reports with attention to socially and politically aware perspectives, such as an understanding of coalitional activity. For example, researchers have questioned evaluation as an important tool to use in influencing public policy (Henry & Mark, 2003; Springer & Phillips, 1994). Researchers have discussed evaluation with respect to concerns about nonprofit investment (Fine, Thayer, & Coghlan, 1998; Hall, 2003; Hattrup McNelis & Bickel, 1996), and some researchers have concentrated on evaluation as it might be used to focus the investment in planning and implementation on targeted results (Murphy-Berman et al., 2000; Nichols, 2002; Rossi, 1999). In some cases, researchers have addressed evaluation in terms of understanding community initiatives as important to social change goals (Baum, 2001; Petersen, 2002; Sawicki & Flynn, 1996; Treno & Holder, 1997). Others have focused on community evaluation as an integral process for addressing urban issues, either in relation to poverty or to general improvement of quality of life in urban areas (Connell & Aber, 1995; Fraser et al., 2002; Sawicki & Flynn, 1996).

The ideas of evaluation have thus been expanded from a mechanism for monitoring categorical funds or a tool for organizational learning into evaluation for public policy, nonprofit investment, poverty alleviation, and improved quality of life in urban areas. This expansion calls for enhanced understandings of evaluation, as reported, as a process of knowledge development dependent upon the purposes and contexts within which learning is to occur. In the processes of knowledge development, heuristics have been created for understanding evaluation in relation to various types of social services (Finkelstein & Croninger, 1997; Mattingly et al., 2002). Writings have also highlighted overall challenges to the assumptions and practice of community initiative evaluation, noting their complexity and limitations (Baum, 2001; Berkowitz, 2001; Edelman, 2000; Fraser et al., 2002; Gambone, 1998; Hollister & Hill, 1995; Rossi, 1999; Segerholm, 2003). However, the community development field has been limited in addressing knowledge development. For example, Berkowitz (2001) stated:

    It is common in community development writing to acknowledge that real-world community interventions are convoluted, multi-faceted, or, in a word, messy. And they are. But the community coalitions under consideration may simply be too messy, too unruly to be tamed by traditional scientific methodology as presently understood. Scientific method is not too strong, but too feeble, not quite up to the task at hand. This is a difficult position to take with full seriousness, because it exposes our own weaknesses past the point of comfort. Still, it's an open question… The method drilled into most of us has been to narrow one's vision, to stuff nature into tiny compartments, to isolate small sets of variables, to consider them apart from their social context, and then to suggest such pigeonholing reflects the world. That, in some measure, is the nature of social scientific inquiry. This approach may work up to a point, but with phenomena as complex as coalitions that point may have been reached… (p. 224)

Presenting evaluative challenges and responding with practical research tools has been one option adopted by social scientists attempting to understand complex phenomena (Donaldson & Gooler, 2003; Schulz et al., 2003). Researchers have also conducted empirical studies of cases of community evaluation to provide general professional implications (Milligan, Coulton, York, & Register, 1998; Murphy-Berman et al., 2000). However, researchers have been limited in their study of cases of evaluation for understanding CCI evaluation reporting. This limitation may be an oversight or, as Berkowitz (2001) notes, a reluctance on the part of researchers to examine the limitations of their own approaches for addressing messiness in community development. To address the gap in understanding evaluation reporting, I utilized a specific case in order to raise issues relevant to CCI evaluation reporting and to identify change in reporting, in order to understand CCI learning and knowledge development.

NFI as a Case for Understanding CCI Evaluation Reporting

Hypothetically, if a researcher were to look for an ideal situation for understanding evaluation reporting in CCIs, she might look for a case where evaluation was an explicit part of a multi-year community initiative, thus providing a longitudinal record of the reporting. She would want an evaluation conducted throughout the initiative process, with attention to both goals and process, so as to identify the initiative understandings of the evaluators themselves. This process focus might provide statements of potential learning. She would look for an evaluation that received consistent resources and management throughout its timeframe, and an evaluation whose reports were publicly available throughout the chronology of the initiative, to ensure the availability of reports as the data necessary for the study.

There are practical considerations of community initiatives that prevent researchers from finding this ideal for researching CCI evaluation reporting. Community initiatives, when funded, are usually funded modestly. With many competing demands for resources, evaluation is often conducted with a minimum of financial resources, technical assistance, and associated expectations. Funded initiatives may not be long-term or, when long-term, may change dramatically with shifting funding policies and transitions in management and leadership that may also influence the conduct of evaluation. Even in cases where evaluation support is high, as with large-scale national demonstration projects, evaluation may occur without connection to existing knowledge. Evaluation may be targeted only for internal use, may be censored to the point of being mundane, may be focused on outcomes to the exclusion of process, or may be reported publicly only in summative reports with no evidence of the learning that may have occurred throughout the evaluation process itself.

Within these practical contexts, CCI evaluation participants struggle to engage in evaluation that is both credible in documenting the complexity and dynamism of CCIs and supportive of the collaborative and community-based principles of CCIs. It is therefore not surprising that discussions of CCI evaluation practice have focused on challenges rather than on the dimensions of holism, engagement, intensity, and informed action that researchers note are supposed to characterize CCIs. Common are comments about evaluations cancelled before public reports were released, stunted learning processes, and program money spent on internal reports rather than on reports publicly leveraged for meaningful understanding by broader audiences. These tendencies are consistent with the observation that publicly reported evaluations risk attracting much of the conflict, political pressure, and blame for any perception that an initiative has failed to meet stated goals. However, despite the importance of public reports to perceptions of the success of CCIs, researchers have not studied CCI evaluation reports.

The Ford Foundation's Neighborhood and Family Initiative (NFI) offered an opportunity to address CCI evaluation reporting. Although it faced all of these concerns, the NFI evaluation provided evidence for understanding CCI evaluation reporting because NFI was an example of CCI evaluation that also included many of the ideal characteristics that I have outlined. NFI was a community development initiative in four localities – Detroit, Memphis, Hartford, and Milwaukee. Unlike many community initiatives, NFI funding was long-term, covering approximately ten years of activity. From early in the initiative, evaluation was reported as integral to the demonstration purpose of the initiative (Chaskin, 1992). Evaluation was conducted over the course of the initiative rather than only at the completion of funding, and foundation managers allocated resources for evaluation reporting to be included throughout the initiative. The approach to NFI evaluation was supported through technical assistance provided by national intermediaries. NFI evaluators claimed to have engaged in a form of evaluation that was process-oriented, reported periodically and publicly, and involved specific reflection by the evaluators on the evaluation process itself (Chaskin, Chipenda-Dansokho, & Toler, 2000). Although the NFI evaluators did not initially use "theory-of-change" language and literature, they claimed to be developing theory and conducting evaluation with a participatory intent (Chaskin, 1992), a focus consistent with the Aspen Roundtable description of theory-of-change evaluation.

Critical to this study, NFI evaluators released eleven reports over the course of the initiative. Unlike evaluations that are summative in nature and focused on a notion of completion, the NFI reports gave attention to process in the initiative at the same time that they provided evidence of that process over time. This evidence was suitable for my use as primary data. The central national organizations involved in NFI also wrote about the practice of CCIs and CCI evaluation, writings that provide literature to deepen the understanding of the NFI evaluation reporting. The NFI evaluation was connected, through participant membership, to the work of the Aspen Institute Roundtable on Comprehensive Community Initiatives for Children and Families, a coalition of individuals who came together to advance the discussion of CCIs and their evaluation (Connell et al., 1995; Fulbright-Anderson et al., 1998). The ideas of this group contributed to the literature and, for purposes of this study, also provided contextual data for understanding the ideas of CCI evaluation.

Still, even when evaluation, as in NFI, includes attention to process and outcomes, is well funded, takes place over the course of an initiative, and is supported by scholarly ideas, understanding evaluation reporting is difficult. My first reading of the NFI evaluation reports left me wondering how to locate key areas of change that would direct attention to potential learning and knowledge development. I wondered how to interpret the writings of evaluators as they engaged in evaluation reporting and publicly reflected on their concepts of, and participation in, CCIs. I turned to the idea of conducting a case study to understand the NFI reports as part of a CCI evaluation approach. I sought first to understand issues that occurred in this example of evaluation reporting, and then to situate these issues within the context of the emerging research concerns of CCI evaluation.

Analytic Case Study

With this analytic case study, I explored NFI evaluation reports as primary data. I identified concepts in the evaluation reports and explored the change in concepts over time, asking what these changes might reveal about the learning and the knowledge development of the initiative. I then used the analytic approach to develop an understanding of the reporting of CCI evaluation.

To foster an analytic approach, I embraced a qualitative orientation. Sharan Merriam is a noted scholar of qualitative research, including case study procedures. According to Merriam (2001), qualitative research is an umbrella term that includes research focused on understanding how people make sense of the world (p. 6). She explains that case study is often used to focus on a single phenomenon of interest and that the term heuristic "means that case studies illuminate the reader's understanding of the phenomenon under study" (p. 30). The term analytic is used to refer to studies of language (Audi, 1999). For the purpose of this case study, I utilized an analytic approach to interpret the language of the reporting of CCI evaluation. Consistent with Creswell's (1998) suggestions, I addressed this purpose with an overarching research concern and several sub-issues, each of which I formulated as a question:

    What can I understand about evaluation reporting through the evaluation language of CCI evaluation reports?
    What are the CCI evaluation concepts in the evaluation reports, and how do these concepts change over the course of the initiative?
    What do these changes reveal about the learning and the knowledge development of the initiative?
    How might the change constructs that I developed from the evaluation concepts contribute to understanding CCI evaluation reporting?
    How can these reported concepts inform our understanding of the educational potential of CCI evaluation reporting?

These research questions span the range of questions laid out by Maxwell (1996), who points to three types of qualitative research questions:

    Descriptive questions ask about what actually happened in terms of observable (or potentially observable) behavior or events. Interpretive questions, in contrast, ask about the meaning of these things for the people involved: their thoughts, feelings and intentions. Theoretical questions ask about why these things happened [and] how they can be explained. (pp. 59-60)

With these types of questions, my study is based on my use of reports as text for studying a case of CCI evaluation reporting.

A methodology for this study needed to assist me in addressing a number of challenges related to utilizing reports as text for analysis. The design needed to allow me to draw from text as empirical data to be analyzed, and the study required that I situate primary data, in the form of the evaluation reports, in relation to the broader literature of a knowledge community writing about CCI evaluation. I also needed an approach that would facilitate the identification of change constructs from the evaluation text and would enable me to utilize the study of text and the analysis of change constructs to develop an understanding of CCI evaluation reporting. Therefore, I sought a methodology that would provide an analytic process for incorporating a combination of types of data. The methodology I used developed during interaction with my data, as I worked with the data and addressed standards of trustworthiness to support the quality of my study.

By addressing these methodological needs, my intent was to contribute to theory, policy, and practice through the dissemination of insights to potential CCI participants. CCI participants may include community initiative funders who want to understand the products of evaluation and explore the ways in which they can maximize evaluative investment. Evaluation facilitators might want to use this study to learn more about ideas of evaluation reporting. CCI participants might include professional practitioners (e.g., social service providers and development personnel) and residents seeking to understand the evaluative reporting to which they might contribute. The study might assist policymakers concerned with the conduct of evaluation reporting or educators seeking to understand the reported learning of evaluation funders, practitioners, residents, and decision-makers. Finally, the study holds insights for an audience of social policy researchers interested in evaluation ideas about reporting issues relevant to addressing complex community concerns and quality-of-life issues in disadvantaged neighborhoods.


In the following chapters, I share the findings of this study. I have also drawn from the work of Sharan Merriam (2001) in creating a writing structure suited to the nature of the study. Merriam suggests that there is no standard format for a case study report; rather, the report should suit its purpose and audience. Merriam (2001) asserts that all qualitative reports discuss the problem investigated, the way the study was conducted, and the findings that resulted. Having conducted a case study, I sought to convey the analytic work that I had done. This meant that I wanted a writing structure that would enable me to introduce and develop important concepts related to my study, elaborate the analytic methods that I utilized, describe the case, and expand upon my findings by discussing how lessons from the study could contribute to the ideas of evaluation reporting.

According to Merriam, there are some guidelines for case study reporting. The problem that gives rise to the study should be presented in initial sections and should include reference to the literature, a description of the theoretical framework for the study, a problem statement, a purpose statement, and research questions. Merriam stated that the methodology section should be included in the main text of the study, particularly when speaking to research audiences, and that the methodology should include information about sample selection, data collection and analysis, and approaches to addressing reliability and validity. Case study reports should also include findings and what the researcher has come to understand about the phenomenon. Often a findings section includes quotes, references, and documentary evidence. The discussion section includes what the researcher makes of the findings and any unique contribution that the study makes to the knowledge base.


I have organized my dissertation into five chapters. Chapter Two includes the conceptual context of the problem -- an elaboration of CCI concepts, evaluation ideas, and concerns of evaluation within CCI contexts -- and a review of literature. I have provided a figure indicating that the study of evaluation reports is lacking in CCI research. Chapter Two also includes a framing of the interpretive nature of my study and the methodological issues that I had to consider in the development of an analytic approach. Chapter Three is a discussion of that approach, including an outline of the data, questions, techniques, and investigative iterations that I utilized to understand the text of the evaluation documents. Chapter Three also includes commentary on the challenges of the analysis and the ways in which I addressed the trustworthiness of my study.

In Chapter Four, I present an overview of the NFI evaluation case and offer a detailed description of findings organized from multiple views consistent with Maxwell's (1996) types of research questioning. In addressing the questioning, I describe each NFI report, highlighting key concerns revealed in the reports, evaluation ideas as discussed by evaluators, and overviews in which evaluators describe the initiative at each point in time. I identify dimensions that inform my understanding of NFI evaluator statements in relation to broader Aspen Roundtable CCI writings, and I then utilize topical questions to compare NFI evaluator-identified lessons to evaluation lessons as reported by the Aspen Roundtable. This description of reports, as a first view, is consistent with Maxwell's descriptive interest, wherein the researcher presents observable events. In this case study, the reports are the observable events.

I then approach the reports according to a cross-report analysis of the key topical issues related to the initiative and its evaluation. I also address the reports with attention to the relationship of NFI evaluation lessons to the ideas of the Aspen Roundtable. This second view -- analyzing the reports according to the deeper meanings that emerge for the evaluators across the time-line of reporting and with respect to the lessons that NFI evaluators discussed -- is consistent with Maxwell's interpretive questioning. For Maxwell, interpretive questioning involves asking about the meaning of events for the participants. In this study, I was also concerned with meaning as it emerged in the evaluator statements throughout the reporting. I therefore discuss the change constructs that I derived from the content analysis of segments of text. These I drew from primary data that included the text of the eleven public NFI evaluation reports. In this way, I address Maxwell's theoretical interest by seeking to understand the deeper changes that were revealed in the evaluators' statements. The change constructs were revealed in my analysis of primary data rather than simply stated by the evaluators themselves. With attention to descriptive, interpretive, and theoretical questioning, I sought to deepen the understanding of the evaluation reporting by identifying and questioning change, so that I could then discuss these findings in relation to NFI and broader discussions of CCI evaluation.

In Chapter Five, I discuss my findings as they relate to possibilities of learning and knowledge development through evaluation reporting and to broader discussions of evaluation. I provide a review of the problem, purpose, and questions that guided my study and an overview of the study process and findings. I elaborate upon emphasis areas that emerged through the study. These include the distinction between community organization building and coalition creation, complexities of NFI evaluators' use of the term comprehensiveness as a lens for change, issues of audience in evaluation, and the complexities of understanding institutional distance in relation to CCI evaluation. I conclude the discussion of findings with issues of learning, knowledge development, and education related to CCI evaluative reporting. Also in Chapter Five, I reflect on the process of my study and the limitations of the analytic approach. I present my thoughts on the contributions of the study to policymaking, theory development in community initiatives, and evaluation language practice. I conclude the study with specific implications for future research about, and for, CCI evaluation language development.

Through the structure of the dissertation, I have attempted to draw the reader into a narrative within which I have viewed CCI evaluation reporting first through literature and then through multiple approaches to documenting the issues raised in various iterations of analysis of the reports. In this way, I sought to bring the reader into deep understandings of issues and then back out to broader questions. This movement culminates in a discussion of change in reporting as it relates to issues of learning and knowledge development in theory-driven evaluation. As the reader chooses her own movement through the narrative, I ask her to consider the ways in which, from beginning to end, I have addressed the dimensions of CCIs as the strands that run through my thinking. How is it that ideas of holism run through my descriptions of approaches to evaluation? How is it that I demonstrate engagement through understandings of the processes of analytic approaches? How do I explore the ideas of intensity in CCIs through my own processes of reflection? How do I demonstrate issues related to informed action as I construct CCI reporting itself as an action?


By following the chapter content and the narrative, readers -- including funders, residents, practitioners, researchers, policymakers, and social scientists -- may utilize this dissertation in a number of ways that I have intended. I imagine they may also use it in ways that I have not intended. The dissertation can be used to illuminate the intrinsic value of NFI as a case. In this way, the reader would look to the findings to understand the reporting of comprehensive initiatives and the processes of evaluation. Another approach would be to focus attention on the instrumental nature of the case, looking at the problem identification and discussion for understanding the nature of evaluation reporting within complex initiatives and the interaction between evaluation designs and processes. Readers interested in analytic case study methodology and framework development may also read the report with attention to the analytic layering of the research process.

Because I have engaged in an analytic case study wherein I was myself developing an understanding throughout the narrative of the study, I utilize Chapters Two and Three to discuss key terms related to my study. I provide here a brief definition of terms, as I came to utilize them in my analytic development, in order to give the reader an overview as she reads further for elaboration and contextualization of these concepts.

Definition of Key Terms

Analytic: A classification of research that relies on the systematic examination of text through an interpretive process. Analytic studies are designed to deepen the understanding of the meaning of the text.


Case Study: An approach to research that centers on the ability of the researcher to identify distinct boundaries of the phenomenon and to utilize multiple types of data in exploring the phenomenon.

Comprehensive community initiative (CCI): An approach to neighborhood development in which a structure is provided within which participants may create various strategies for community development. The actual structure and activities of CCIs vary according to the ideas of community development as they are influenced by the providers of resources and those contributing their time through participation.

Change construct: A cluster of ideas that coalesce around a single concept, are rich in data, and occur in various configurations over the time-span of the reporting of an initiative.
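For readers who think computationally, the notion of a cluster of ideas "changing in configuration over time" can be made concrete with a toy illustration. The study itself relied on qualitative hand-coding of report text, not software, so the following Python sketch is purely hypothetical: the report years, the focal concept, and the co-occurring codes are invented stand-ins. The sketch shows only the underlying logic of a change construct -- the set of codes clustered around one concept, compared as configurations across reporting years.

```python
# Hypothetical sketch of tracing a change construct across dated reports.
# The study's actual analysis was qualitative; no such script was used.
from collections import defaultdict

# (report_year, focal_concept, co-occurring code) triples a coder might record
coded_segments = [
    (1992, "development", "planning"),
    (1992, "development", "funding"),
    (1995, "development", "funding"),
    (1995, "development", "collaboration"),
    (2000, "development", "collaboration"),
    (2000, "development", "sustainability"),
]

# Group the codes that co-occur with the focal concept in each report year
clusters = defaultdict(set)
for year, concept, code in coded_segments:
    if concept == "development":
        clusters[year].add(code)

# A change construct appears as the cluster's shifting configuration:
# codes that enter, persist, or drop out between reporting years.
years = sorted(clusters)
for prev, curr in zip(years, years[1:]):
    gained = sorted(clusters[curr] - clusters[prev])
    lost = sorted(clusters[prev] - clusters[curr])
    print(f"{prev} -> {curr}: gained {gained}, lost {lost}")
```

Run on the invented data above, the sketch reports that "planning" gives way to "collaboration" and later to "sustainability" around the concept of development. This mirrors, in miniature, the contrast the definition draws: a theme analysis would ask only whether "development" recurs, whereas a change construct asks how the configuration of ideas around it is reshaped from report to report.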


CHAPTER TWO
CONCEPTUAL CONTEXT

Comprehensive Community Initiative Background

A comprehensive community initiative (CCI) is a framework for developing reform strategies in communities. Through CCIs, individuals who serve as funders, practitioners, and residents work together for neighborhood change. According to Baum (2001), these initiatives are “community” initiatives both because communities are the focus of the initiatives and because the initiatives involve an adherence to the idea that communities are “instruments of their own change” (p. 147). There have been three precursors to current community initiatives. One predecessor of CCIs is an approach to communities that relies on ideas of service integration. Service integration approaches are sometimes referred to by names such as coordinated services or linked services. These efforts have focused on coordination but have often remained entrenched in ideas of reform geared toward categorically funded programs, which are often focused on the individual issues of specific populations (Stone, 1994). Target populations are subgroups of individuals designated as sharing some trait or similar service need (Treno & Holder, 1997). Treno and Holder noted that targeting a population is useful when a problem is located solely within that target population, but that this approach is limited because the effects of any program tend to last only as long as the program itself, with the community structures left unchanged. The community structures themselves then continue to
“generate replacement at-risk individuals” (Treno & Holder, 1997, p. 135). Therefore, the program investments do not result in sustainable change at the community level. A second precursor to CCIs was an orientation to community development that emanated from the ideas of initiatives that were geographically focused and considered to be neighborhood-based or grassroots. These approaches tended to embrace notions of empowerment and asset development and were often designed to encourage resident awareness and participation in the leveraging of resources to influence community change (Kretzmann & McKnight, 1993; Stone, 1994). Whether manifested as discrete nonprofit organizations, coalitions of private and public participants, or less formal resident voluntary action groups, community-oriented approaches shared a grounding in two beliefs. As Chaskin and Abunimah (1997) described, “one is a philosophical belief in the democratic process and its appropriate connection to local associational action” and “the other is a pragmatic belief in the ability of decentralized approaches to provide more connected, responsive, and coordinated strategic action” (p. 3). Community-based approaches included efforts at physical and economic development as well as those emphasizing social organizing for participation in public policy (Peirce & Steinbach, 1987; Stoecker, 1997; Twelvetrees, 1996). Stone (1994) suggested that, although many of the same themes emerge in service reform approaches and community development approaches, the two remain differentiated. For service reform, the task may be described as one of “improving the lives of children and families where they live,” while for community development, it might be one of “improving the life of communities in which children and families live” (p. 9). For both service integration and even some community development approaches, though, the focus on individual programs or
collections of programs remains a limiting feature, with emphasis on a conception of community issues as isolated rather than systemic in nature (Center for the Study of Social Policy, October 1996). A third orientation is therefore a community systems approach, which differs from programmatically focused initiatives in its emphasis on communities as “complex living systems whose elements are individual beings” (Spruill et al., 2001, p. 105) rather than on communities as containers of issues. A community systems orientation embraces the idea that the reasons for troublesome social issues, as well as the strategies for the alleviation of issues, are primarily interconnected rather than individual in nature. Since societal problems “are the result of the social, economic, and structural relationships within community systems,” they must always be targeted as “aggregate-level problems” (Treno & Holder, 1997, p. 135). However, even with a systems approach, conceptualizations of some community-based strategies are often limited by the treatment of communities as de-contextualized from the larger structures and policies that influence local conditions (Brown, 1996). The combination of a systems approach to community and an awareness of holistic contexts beyond the locality has led to community approaches that take on an embedded or even multiply-centered orientation to understanding social issues and interventions. The community initiatives specifically called CCIs, which began in the 1980s and early 1990s, are examples.


Comprehensive Community Initiative Concepts

Over the past decade, researchers of various types of community initiatives have provided in-depth understandings of the challenges of these initiatives. Topics of concern have included governance and community decision-making processes (Chaskin, 2003; Chaskin & Abunimah, 1997; Chaskin & Garg, 1997; Chaskin & Peters, 2000) and questions of the nature of collaboration, coalition building, and citizen involvement (Chavis, 2001; Connor, 2003; Foster-Fishman, Berkowitz, Lounsbury, Jacobson, & Allen, 2001; Himmelman, 2001; Kaye, 2001; Schulz et al., 2003; Twelvetrees, 1996; White & Wehlage, 1995; Wolff, 2001a, 2001b). Communication and issues of consensus have sometimes been foregrounded within issues of collaborative planning and development (Baum, 1994; Fischler, 2000; Innes, 1995; Innes & Booher, 1999b; Nichols, 2002). Community has been identified as a social unit that involves a system of shared ideas, and social capital has been questioned as a characteristic of neighborhoods (Chaskin, 1997; Petersen, 2002; Spruill et al., 2001; Temkin & Rohe, 1998). Issues of community building have been placed within the context of urban policy (Clavel, Pitt, & Yin, 1997; Fraser et al., 2002; Hula, Jackson, & Orr, 1997; Temkin & Rohe, 1996) and poverty alleviation efforts (Fraser et al., 2002; Stone, 1996), with descriptions given of specific organizing attempts wherein community building was treated as an essential concept of development (Baum, 1997; Connor, 2003; Fraser et al., 2003; Medoff & Sklar, 1994; Stoecker, 2003). In addition, community building structures, such as those of community development corporations (CDCs), have been described and critiqued (Clavel et al., 1997; Peirce
& Steinbach, 1987; Stoecker, 1997). The growth of CDCs in the 1980s led researchers to question the effect that formalization would have on grassroots efforts for the poor. Within the community development field, there have been debates about the potential for increasing the numbers of incorporated community organizations. There have been expressed hopes that these formal organizations would expand to include more of the middle class, but there have also been noted fears that, unless these organizations could be brought together into a larger coalition, the increased formalization would serve to further disenfranchise those in poverty (Clavel et al., 1997). These concerns spurred discussions about development approaches as they relate to civic capacity building (Chaskin, 2001; Chaskin, Joseph, & Chipenda Danoshka, 1997; Connor, 2003; Kingsley et al., n.d.). In these ways, contemporary researchers have added to literature that has influenced the field of community development and connected concepts of community to ideas of urban policy and social reform. One influential piece of this literature is Arnstein’s (1969) conceptualization of a hierarchy of citizen participation, which she described using the metaphor of a “ladder.” The rungs symbolized increasing levels of participation; from bottom to top, the rungs from nonparticipation to citizen power included manipulation, therapy, informing, consultation, placation, partnership, delegated power, and citizen control. In her ladder, Arnstein revealed that there could be a focus on participation that resulted in nothing more than tokenism, a concern echoed in the focus of contemporary organizers (Stoecker, 1997, 2003). In questioning community activity, Warren (1978) focused on notions of vertical relations and horizontal relations to describe the relationships between the local unit and the larger society, and between the local unit and other local units.


Warren (1973) also questioned the dichotomy of truth and love in community orientations. For Warren, truth referred to a notion of the absolute of a value, an adherence to a moral superlative with which an individual claims to hold an idea greater than oneself, thus justifying a belief in the lesser value of those holding different views. Love, for Warren, is an orientation with which an individual sees the essential worth of all human beings despite views of truth. According to Warren, individuals hold these two orientations at the same time and only feel a tension when the values come together, as they often do in the field of community change. Scherer (1972), too, was concerned with ideas of love, emphasizing that the difference between communities and institutions has to do with the concept of roles. She asserted that communities enable an individual to have a more integrated existence than do institutions that require strict role adherence. She cautioned against accepting a simplified dichotomy and opened up the question of the process of communication within social activity as associated with the concept of networks for communication. Scherer asked “Is community talk?” and wrote: John Dewey recognized that communication is at the heart of any community: we can only share in common what we can communicate with others. Communication -- the process of receiving and sending messages -- is, in fact, the life-blood of all social structures…Sociologically speaking, communication is the means by which the shared perspectives of the group, the agreed-upon understandings that permit existence, bind men to each other, reflect current social behavior, and actually mould future actions. All collectivities have some recognized channels of communication. But today we face new problems. The sheer quantity of information sent out by means of steadily improving technological instruments, and the increase in the number of channels from which men may select messages, is overwhelming. In the past, because they were isolated and self-sufficient social centres, communities provided effective screening devices to insulate members against conflicting and unrelated messages from outside. As these conditions have vanished, it has become impossible for communities to exclude other messages completely, although I would like to suggest that modern forms of community still serve as
clearing houses in which messages that are non-related or out of tune with the communal belief system are discarded. One method of sorting is by sending messages along private and personal channels that overlap at some points, which is basically the concept of social networks… (pp. 104-105) Dennis Poplin (1972) identified various networks of ideas of community by providing an overview of community theories and methods for research. He brought issues of community activity and community leadership together as he reframed the functionalist study of community as a study of human action. He asked “could we not gain much by using human action itself as a unit of analysis?” (p. 180). Along with Scherer, he emphasized community as a phenomenon suitable for focused research, whereas Marris and Rein (1967) provided an analysis of community, not as a concept itself, but as the central focus of intentional social reform. Marris and Rein’s opening paragraphs provide a snapshot of social efforts: A reformer in American society faces three crucial tasks. He must recruit a coalition of power sufficient for his purpose; he must respect the democratic tradition which expects every citizen, not merely to be represented, but to play an autonomous part in the determination of his own affairs; and his policies must be demonstrably rational….No other nation organizes its government as incoherently as the United States. In the management of its home affairs, its potential resources are greater, and its use of them more inhibited than anywhere else in the world. Its policies are set to run a legislative obstacle race that leaves most reforms sprawling helplessly in a scrum of competing interests. Those which limp into law may then collapse exhausted, too enfeebled to struggle through the administrative tangle which now confronts them, and too damaged to attack the problems for which they were designed. (p. 7) With the emphasis on reform, Marris and Rein contributed to ideas of community action and discussed the challenges of conducting research within reform efforts. They emphasized the tensions between experimental research requirements and the practical needs of programs seeking to provide immediate benefits to their target community.


Marris and Rein did not present a solution but rather described the experiences within the 1960s Ford Foundation reform efforts. CCIs emerged from this same history as opportunities for focusing on community as a target of social reform, and they faced all of the difficulties of other community initiatives and encompassed all of the questions of research approach raised in this literature. Yet as comprehensive models, CCIs also face the challenges of moving toward holistic understandings of community systems and the dynamics of neighborhood change through community action. However, holism was not addressed fully in the early literature of community initiatives or community action. CCIs are grounded in an ideological stance, one supported by many nonprofit foundations, of the devolution of authority for increased local action. This stance encompasses the idea that successful change processes must meaningfully involve those individuals who are targeted as the beneficiaries of that change (Baum, 2001; Brown, 1996; Kubisch, Weiss, Schorr, & Connell, 1995). CCIs embody an inherent discomfort with the lack of representation of low-income residents in policy processes and a dissatisfaction with the extensive bureaucracies that make it difficult for citizens to coordinate services to meet even their most basic needs (Chaskin & Peters, 2000). CCIs provide structures within which engagement can take place. The nature of this participation, the effectiveness of CCIs in fostering meaningful and legitimate involvement by citizens, and the ability of any CCI to provide a context of advocacy for participants are issues for participants to address. The emphasis on questions of approach for social betterment makes the work of CCIs intensive in attention to the causes of social problems and the factors believed to
hinder the effective alleviation of these problems. Supporters of CCIs are explicit in their critique of social structures that contribute to disinvestment, disempowerment, and poverty and in their intention to alter these through CCI processes. Lack of coordination between service providers, categorical and symptom-focused service delivery systems, bureaucratization, limited organizational, institutional, and advocacy mechanisms in poor communities, and racism are just some of these problems (Kingsley et al., n.d.; Stone, 1994, 1996; Stone & Butler, 2000). The awareness of conflict based on issues of cultural and racial power, diversity, and identity is not unique to the work of CCIs. However, the explicit efforts of CCI supporters to bring together members from differing social and economic positions with those more commonly positioned in professional and policy circles serve to draw these issues from the external context to deep within CCI functioning. If holism is an enduring feature and engagement and intensity are key aspects of CCIs, then informed action is a cornerstone. By informed action, I am referring to action by participants who are aware of their integral role as local participants in collectively mediating and influencing larger economic, social, and policy contexts. Supporters of CCIs often claim to embrace notions of information sharing as part of the effort to enhance the effectiveness of community initiatives (Stone, 1994). Yet the focus also marks a desire, on the part of researchers, funders, and policymakers, to maximize the social learning, systems change potential, and credibility associated with community initiatives. As indicative of this informational focus, those community initiatives that have attracted funding have often done so with a claim to being demonstration projects designed to share learning beyond the participants.


Although various researchers embrace different emphases as they define CCIs in terms of holism, engagement, intensity, and informed action, the Aspen Roundtable researchers provided a synthesis. Writing in 2002, and looking back over more than ten years of use of the term CCI, researchers of the Aspen Roundtable released a document highlighting CCI characteristics based in the concepts of comprehensiveness and community building. The characteristics included that:

• They are initiatives rather than projects or programs. This means that CCIs have a prescribed beginning and end. Their funding lasts longer than a traditional grant (usually 5-10 years)… A funder’s goals usually serve as a catalyst...
• They have an explicitly comprehensive approach. CCIs operate on the premise that problems in poor communities have many interrelated causes….They aim to foster synergistic interactions…
• They promote deliberate, community-based planning, grounded in the history of the community and the interests of community residents…
• They rely on governance structures or collaborative partnerships within the community…
• They draw on an array of external organizations for technical assistance, research, and other supports…
• They seek partnerships between the community and external sources of political and economic power…
• They have a learning component… (Kubisch et al., 2002, pp. 13-14)

Even with this synthesis, within the existing literature there is still little consensus about definitions or about the range of initiatives that may appropriately be classified under the umbrella term of CCI. Neither are there hard and fast distinctions about the number and combination of reform strategies that participants may utilize in addressing local issues, about why specific strategies are used, or about how strategies contribute to CCI missions. Rebecca Stone (1996), director of the Core Issues in Comprehensive Community Building Initiatives project, summarized the state of the CCI field when she asserted that “the rate of project development and practice had far outstripped our learning….Put bluntly, the field knows more about what it’s doing than about how or why” (p. viii). CCI evaluators face the challenging task of addressing the what, how, and why of the complex and changing initiatives that they seek to describe and understand. In addition to the shifting nature of CCI definitions, there are various reasons why the task of evaluating CCI work is challenging. Participants may each have a different understanding of CCI engagement. CCI participants may strive for comprehensiveness whether or not they achieve it in programming (Brown, 1996). Participants may attempt multiple interventions simultaneously, and efforts may both interact and depend upon one another, making it difficult to isolate the influences of any given strategy (Baum, 2001). CCI supporters may embrace the desire to develop political strength among residents of disadvantaged neighborhoods (Chaskin & Brown, 1996). This desire may be present whether or not supporters openly advocate for or against any specific policy that impacts those neighborhoods. CCI advocates may espouse a notion of local representation whether or not there are clear structures in place for designating this representation or for being accountable to identifiable constituents (Chaskin & Garg, 1996). The basic ideals of comprehensiveness embedded in ideas of community action may also be at odds with the realities of conflict that lead to policy change in American society (Marris & Rein, 1967, pp. 226-230). Finally, CCIs may themselves change over time in response to external circumstances and opportunities.


In relation to change, the notion of information sharing as a means for developing learning systems has characterized the attempts of CCIs (Springer & Phillips, 1994). More broadly, discussions of community collaboration draw from ideas that complex systems can adapt and change when information is communicated throughout the system in a dialogic interaction (Innes, 1995; Innes & Booher, 1999a, 1999b). According to Innes, approaches to consensus-building that bring together multiple interests in dialogue have the potential to prompt social learning and innovation. As integral to CCIs, this sharing of information for the purpose of consensus-building becomes a notion of effecting policy in real time (Stone, 1994). However, the lines between information use for social learning versus political advocacy have fluctuated with the emphasis of each federal administration (O'Connor, 1995). The Aspen Roundtable has also seen shifts in membership during times of federal political transitions. Likewise, community initiatives have fluctuated in their call for, and sometimes resistance to, informational processes. Efforts at community-level indicators (Coulton, 1995a; Sawicki & Flynn, 1996) and results-based approaches to accountability (Schorr, Farrow, Hornbeck, & Watson, 1994) are examples of efforts to effect systemic change through the utilization of information. Through these approaches, the search for meaningful indicators occupied the attention of evaluators during the 1990s, with attention given to developing measures that could help in monitoring change in communities (Coulton, 1995a; Coulton & Hollister, 1998; Sawicki & Flynn, 1996). This trend came with a pervasive concern with the credibility of information. According to Stone (1994), information identification and sharing actually face many credibility obstacles, including those that are context-oriented,
psychological, and structural. She explains that the context of initiatives, multiply layered and including a variety of participants, causes uncertainty as to whose information is relevant and who has the obligation or permission to share information. Psychological barriers relate information to issues of power and the risks associated with sharing anything other than success stories. Structural characteristics often allow only for the minimum of data collection and few opportunities for multiple participants to interact meaningfully with this information (Stone, 1994). Still, there is hope among supporters of an information sharing emphasis that the utilization of systematically identified information can both support internal confidence as to the appropriateness, viability, or success of strategies and strengthen learning claims made to external audiences. Over the past decade, private foundation managers interested in comprehensive community initiatives and their learning potential have invested both time and resources into the design and conduct of evaluations that supporters espouse to be congruent with the characteristics and missions of CCIs (Chaskin & Garg, 1996; Kubisch et al., 1995; O'Connor, 1995; Stone, 1994). Evidence of foundation investment in evaluation includes the publication of evaluation reports of a number of initiatives. A few examples of these publications are the evaluations of the Annie E. Casey Foundation’s Rebuilding Communities Initiative, the Edna McConnell Clark Foundation’s Neighborhood Partners Initiative, and the Surdna Foundation’s Comprehensive Community Revitalization Program (Roundtable on Comprehensive Community Initiatives, 2002). Various city, state, and federal funders have also supported community-based efforts and their evaluation (Roundtable on Comprehensive Community Initiatives, 2002; Wilder & Rubin, 1996). In addition, private and public supporters, for more than a decade, have invested time and
resources into a number of dissemination venues for highlighting the work of CCIs and the unique evaluation necessary to complement CCI missions. Venues include websites and symposia through which funders, professionals, researchers, and initiative participants have come together in forums for identifying and elaborating upon strategies for evaluating CCIs. Given the complexity and dynamism of social change efforts, it is not surprising that social and policy researchers have noted the limited analyses that have actually been conducted “across levels of the system, [and] taking into account the full range of governmental, professional, familial, cultural, and economic actors and perspectives” (Finkelstein & Croninger, 1997, p. 4). Challenges to evaluating CCIs in ways that provide understanding of their dynamism, complexity, and systemic nature have not stopped claims that there are approaches to evaluation that can be used to both understand and support the work of CCIs. Approaches discussed in the field of social evaluation provide a backdrop of issues that contextualizes the discussion of CCI evaluation.

Evaluation Approaches Influencing CCI Evaluation

CCI evaluation is set within a history of evaluation ideas and approaches. Evaluation emerged as a practice of program monitoring and impact assessment during the post-World War II era, when evaluation became prominent as a part of budgeting and policy decision-making (O'Connor, 1995; Patton, 1997b). In the 1970s, the field became populated enough for the development of a professional association -- the Evaluation
Research Society -- and an evaluation network (Chelimsky, 1995; Patton, 1990, 1997b). By 1978, the journal Evaluation Quarterly had been established for the study of evaluation (Hall, 2003), and by 1984, the American Evaluation Association was formed, with evaluation reaching international importance (Patton, 1997b). With these milestones as a backdrop, the attention given today to evaluation as an integral component in the funding of community initiatives is evidence of the key role that evaluation continues to play in social initiatives (Fraser et al., 2002; Hall, 2003; Rossi, 1999). Within the history of the evaluation field, there have been calls for continued strengthening of the discipline of evaluation as a unique contribution to social life (Scriven, 1994). However, there are also ongoing debates about what evaluation is, about the role that evaluation should play in programs, and about the range of possible approaches for engaging in evaluation to support the goals and missions of social initiatives (C. H. Weiss, 1998; H. Weiss et al., 2002). Many evaluation debates focus on the search for increasingly rigorous and objective methods for meeting scientific standards and involve a view of evaluators as distant observers monitoring program output for managers. Owen and Lambert (1998) noted that, within a managerial focus, evaluation has increasingly become about developing indicators to assess organizational performance. There has been a move toward measuring process within evaluation (Smith, 1994). However, Sechrest (1994) lamented that the focus on process has led the field of evaluation to stagnate by shifting the focus away from the measurement of outcomes as an indication of whether programs work. According to Chelimsky (1994), former president of the American Evaluation Association, the field of evaluation actually does have a seemingly insatiable desire for
basic research, resources, and new measures and methods. Even in the meta-understanding of evaluation ideas themselves, Mark, Henry, and Julnes (1999) have called for an adherence to realism and have supported the use of linear matrices for describing the distinct elements of evaluation planning and practice. From these discussions, it might appear that evaluation ideas have become consumed by the search for more rigorous designs and indicators of organizational productivity in the form of service delivery. While few in the evaluation field would argue that evaluation should turn its back on credibility or shy away from its role in addressing reality and the outcomes of service and value, Patton (1990) has argued that the traditional discussions of rigor solely for managerial efficiency and monitoring reduce evaluation to its “lowest level” (p. 50). Schwandt (1992) also expressed this fear as he called for a “morally engaged evaluation practice”: No part of this call for a morally engaged evaluation practice should be interpreted to mean that we must choose between a technical means-end examination of program and moral examination…However, I do fear that we are defining the new horizons of evaluation practice largely in terms of improved systematic searches for scientific answers to problems. I am less than sanguine that continual refinement of our abilities to collect and interpret data really can offer any new insights. Does a portfolio approach to individual achievement claiming more authentic measurement or a program theory constructed from a causal model make that much difference in the way we live as program administrators, as teachers, as students, and parents? Shouldn’t we, as evaluators, have something to say about the way we live? (Schwandt, 2000, pp. 141-142) Evaluation, like community development, thus has been opened to broader questioning and critique of methods, designs, and world-views, from the technical-rational approaches of managerial efficiency to morally engaged social inquiry. In response, evaluators have
adopted multiple approaches for engaging in problems of social importance and have continued to raise questions about the role of evaluation in community development. For instance, in the 1960s and 1970s, changes in the conceptualization of authority in relation to ideas of science met with an increasing emphasis on pluralism and made way for alternative paradigms in evaluation approaches (Alkin, 2004; Greene, 2000). The move brought to light approaches that openly and explicitly address tensions of pluralism and questions of authority, as well as critique the purpose of evaluation. Carol Weiss (1972) noted that evaluation itself could be based on both overt and covert purposes for its conduct. She thus emphasized the political character of evaluations, noting that evaluation is a political activity in three ways: first, because political processes bring evaluation into being; second, because the results are fed into decision-making processes; and third, because evaluation involves a political stance on the part of evaluators who choose to undertake specific studies (C. H. Weiss, 2004). This environment of critique and of social and political awareness opened up space for understanding the problem of evaluation in terms not only of what evaluation is but of how evaluation can be an integral part of organizations and coalitions seeking to maximize their social and political involvement (Fraser et al., 2002; Greene, 2000; Henry & Mark, 2003; Lincoln, 1994; H. Weiss et al., 2002). Even though some evaluators, theorists, and funders have endeavored to focus evaluation on the knowledge needs of the programs and social issues that evaluation is to address, Lincoln suggested that overall evaluators had “lost sight of the truth that science is about knowing” (Lincoln, 1991, p. 2). Lincoln refocused attention on the art and science of evaluation and revived the questions of meaning in relation to social programs. Therefore, although much of the emphasis in the
evaluation field has been on monitoring and objective impact, there have been ongoing efforts by scholars such as Lincoln, Finkelstein, Weiss, O’Connor, and Baum to comment on socially meaningful evaluation. With such commentary, traditional evaluation, as monitoring, has come to exist alongside a host of approaches that embrace interpretive understandings of evaluation and a variety of participatory and engaged stances. Examples of these alternatives include evaluation for social program development and evaluation for social change.

Evaluation for Social Program Development

Evaluation as a mechanism for social program development is evidenced in approaches alternatively called formative evaluation, developmental evaluation, and stakeholder or utilization-focused evaluation. Of these, formative evaluation marks the earliest departure from the idea of externally based, objective outcome assessment (Rossi, 1999). Formative approaches have placed evaluation as a component of the program development process. Evaluation thus becomes a diagnostic tool and serves the role of producing empirical data so that decision-makers can improve program design and implementation (Rossi, 1999). According to Patton (1994), formative evaluation helps programs to prepare for summative evaluation by providing information in areas thought to impact goal achievement. Distinguished from formative evaluation is Patton’s approach to developmental evaluation. In developmental evaluation, there is no anticipation of summative evaluation; rather, evaluation takes place as a part of the ever-changing nature of programs trying to respond to dynamic environments (Patton,
1994). Developmental evaluation therefore requires a concept of partnership, with the evaluator often invited into an organization to support evaluative questioning on an ongoing basis (Patton, 1997b, p. 104). Like formative evaluation, developmental evaluation adheres to the notion of data as used for programmatic improvement, with the primary participants frequently being managerial professionals. Stakeholder evaluation has broadened the questions of use and of the intended users of programs (Christie & Alkin, 2003; Patton, 1994). It involves exploration of the ways in which the process of evaluation might be incorporated within attempts at organizational development (Nichols, 2002; Shula & Cousins, 1997). The evaluators in stakeholder approaches come to play the role of mediators, fostering the inclusion of the ideas of various interested parties and bringing credible indicators of program process and outcomes to decision-makers. These approaches serve to involve multiple participants in the evaluation process, a process intended as a feedback mechanism for the efficient management of programming. Stakeholder approaches may resemble approaches toward democratic involvement, yet their primary purpose for involving stakeholders is to increase the validity of evaluation findings in order to support better decision-making (Brandon, 1998). Without regard to the rationale for involvement, empirical study of evaluation use does support the idea that involvement increases participant satisfaction with evaluation (Fine et al., 1998). However, critique of the limited use of evaluation by decision-makers, despite participant satisfaction, has led to utilization-focused evaluation, with an emphasis placed on prompting intended use by intended users (Patton, 1997b). According to Patton, there are diversions that may pull evaluators away from the utilization purpose of
evaluation and from engagement with participants in support of use. Possible diversions include evaluators making all the decisions about evaluation, gearing an evaluation to an anonymous “audience” as a stakeholder group, targeting organizations rather than the individuals in the organizations, focusing on decisions rather than decision-makers, assuming that funders are the primary users, waiting until the reporting to think about users, or shying away from engagement altogether (Patton, 1997b, pp. 52-57). With approaches of evaluation for program development tending to keep the evaluator in the primary role of technician working alongside other professionals (Huberman, 1995), the position of evaluator remains one of value neutrality (Mathison, 2000). Participation, as it occurs in these approaches, comes with the researcher’s intention of increasing the use of evaluation, with the involvement of stakeholders encouraged in order that the evaluation information will be the focus of practical application and decision-making (Christie & Alkin, 2003; Fine et al., 1998). Although participant involvement in evaluation may increase the usefulness of the evaluation, the reverse has not always held true. In a cross-case study, Cousins (1996) found that although researcher participation in evaluation in organizations aided the evaluation results, the highest level of researcher participation did not necessarily yield the greatest results in improving the evaluation process or practitioner engagement in evaluation. Higher levels of researcher involvement sometimes even negatively impacted the success of the evaluation (p. 20). His findings indicate that questions of the type and intent of researcher participation are open for discussion in relation to evaluation. Tharp and Gallimore (1982) have asserted that the conditions for researchers to have the greatest impact on the quality of programs are often not present in programs.
They explain that, in order to maximize a program’s growth, five conditions must be present for the inquiry process: time, stability of values and goals, stability of funding, evaluator authority, and administrative ability to maintain evaluation pressure (Tharp & Gallimore, 1982). Without these ideal conditions, supporters of community initiatives are left wondering how to address evaluation. For Mark, Henry, and Julnes (1999), the focus on utilization of evaluation findings is one way to address programs and policies and is the key to understanding evaluation as integral to broader democratic processes and to the development of institutions that support social betterment. However, discussions of transformative approaches to evaluation raise questions, not of the utilization of findings, but of the utility of expectations that change will occur through existing institutional structures. Rather, transformative approaches to evaluation highlight the possibilities of learning for social change by providing space for questioning processes of change as working not only within, but also perhaps beyond and through, democratic structures.

Evaluation for Social Change

Although evaluation approaches for social change may be geared toward programs, the emphasis goes beyond the functioning of the program and includes ideas of social issues that interact with or within social programs and initiatives such as CCIs. Examples of evaluation approaches that have a transformational or social change purpose include participatory evaluation, deliberative democratic evaluation, empowerment evaluation, and theory-based evaluation. Mertens (2002) noted that transformative
theory is an umbrella term encompassing the idea that research can be emancipatory, with approaches geared to supporting marginalized groups. She noted the commonalities amongst the various transformative positions. According to Mertens, the commonalities include awareness that “knowledge is not neutral, but is influenced by human interests; that all knowledge reflects the power and social relationships within society; and that an important purpose of knowledge construction is to help people improve society” (p. 104). As Mertens explains, the term transformative as applied to research approaches is often associated with ideas of constructivism and learning for social change. Researchers also assert that “individuals and groups learn by interpreting, understanding, and making sense of their experiences,” and that learners are therefore active participants in their own knowledge development (Preskill & Torres, 2000, p. 28). Evaluation approaches may embrace a recognition that evaluation always exists within a social system and authority structures, and that there is a need to explicitly link evaluation to those larger social structures (Henry & Mark, 2003; House & Howe, 2000; Segerholm, 2003). For Rossman and Rallis (2000), evaluation as learning involves the natural and active process through which an “individual transforms data” in order to use it for other purposes. By data, they refer to any sensory input and describe that: A learner receives input (data) and immerses herself in the data; she reflects on data, forming patterns and making meaning; insights emerge. She then applies her insights and tries out new ideas or actions. (p. 56) Also for Rossman and Rallis, learning takes on a social quality in that a learner interacts with her environment to make sense out of the data. When evaluators are involved in this process, they become co-creators of the knowledge. When the knowledge is focused on
social change, the transformation takes on another dimension as the dialogue for understanding shifts “from knowing through talking to knowing through action” (p. 56). Within organizations, evaluators may take on new roles in order to facilitate learning (Preskill & Torres, 2000; Rossman & Rallis, 2000). Evaluators may adopt a transformative approach to evaluation, calling for a deep understanding of the intent and characteristics of initiatives and the opening up of possibilities of evaluation use for maximizing social influence. Although the conversation around transformational approaches can remain one of program or initiative improvement, improvement is always associated with an attention to social understanding or change (Henry & Mark, 2003; Springer & Phillips, 1994). In addition, transformative evaluation processes that incorporate communication, rather than solely information collection, also offer the possibility of learning with active involvement on the part of those who have traditionally not participated in evaluation processes. The focus becomes not one of outcomes alone but of the rethinking of the desired outcomes of social interventions and of how evaluative questioning can best encourage communication throughout systems (Springer & Phillips, 1994). Various types of transformative evaluation can therefore address social change in community systems. Participatory evaluation, in its multiple forms, is an extension of the earlier stakeholder evaluation, with a focus on deepening the utilization of evaluation through increasingly engaged participation (Cousins & Earl, 1992). Participatory evaluation as conceived by Cousins and Earl (1992) replaces the widespread input of stakeholder evaluation with more intense interaction between evaluators and a smaller number of organizational personnel. The underpinning of this approach involves ideas of
organizational learning that include “integrating new constructs into existing cognitive structures” and expanding the opportunities for “social interpretation of information” (p. 401). Similarly, the concept of deliberative democratic evaluation supports the goal of participation through an emphasis on inclusion (of relevant interests), dialogue (to understand interests), and deliberation (grounded in reason, evidence, and valid argument) (House, 2004; House & Howe, 2000). The emphasis is on bringing together stakeholders in engagement that solicits communication of their interests and processing of what these interests mean to understandings of the value of a program (Mathison, 2000). However, deliberative democratic evaluation, because of its emphasis on reasoned participation and structured argument, precludes the possibility of evaluation without a shared basis for understanding reality. The approach therefore prohibits evaluation within highly diverse groupings and the possibility of change through communication of diversity. When used within environments that are institutional and hierarchical in nature rather than participatory and democratic, evaluation has the potential to contribute to social change by including, in the conversation of change, the voice of those without authority in a structure (MacNeil, 2000). Participatory and democratic evaluation, even when undertaken within institutional settings, marks an attempt to embrace principles of democracy in efforts to support the communication of the voice of particular groups (Mathison, 2000). These approaches emerge from a critical theory orientation and deal with reform of organizations through the raising of consciousness (Huberman, 1995). Forms of critically oriented evaluation have, as their goal, the inclusion of traditionally silenced voices within or outside an organization or institutional structure. When
participatory evaluation approaches involve concern with the issues and needs impacting marginalized groups, Fetterman (1996) calls this type of evaluation empowerment evaluation. The difference between empowerment evaluation and organizational forms of participatory and democratic evaluation approaches is one of degree rather than absolutes. Some proponents of an empowerment approach claim that evaluation should be dynamic and responsive to the “life cycle” of the program and should incorporate training for improvement as well as “advocacy,” “illumination,” and “liberation” (D. Fetterman, 1996, p. 6). In this way, empowerment evaluation brings evaluators into relationships with organizations in roles very different from that of the traditional outside observer or program developer (D. Fetterman, 1997). However, the type and intensity of engagement have caused concerns for theorists in the evaluation field. In contrast to his own utilization-focused approach to program development, Patton (1997a) notes that Fetterman’s empowerment evaluation is rife with problems of clarity. According to Patton, Fetterman’s book, Empowerment Evaluation: Knowledge and Tools for Self-Assessment, failed to adequately distinguish between collaborative, participatory, and empowerment approaches, and it failed to fully address either the issues of accountability or self-assessment (Patton, 1997a). Patton also indicates that there is tension around the language of empowerment and a need to address ideas of self-determination and the roles of empowerment evaluation. Stufflebeam (1994) further cautions against throwing away the professional status and standards of a field and warns that empowerment evaluation could be used as a “cloak of legitimacy to cover up highly corrupt or incompetent evaluation activity.” He states that:
a loose, open approach to evaluating and interpreting data permits authority figures to press their advantage and impose their self-interests with relative immunity to external review regarding the logic, philosophical base, and defensibility of their judgments and decisions. (Stufflebeam, 1994, p. 326) Contrary to this “loose” or potentially corrupt characterization of transformative evaluation approaches such as empowerment evaluation, Mertens (1999) described one characteristic of transformational evaluation as involving a depth of understanding by the evaluators -- a depth that requires the evaluator to be involved within the community affected by the evaluation. Other evaluation approaches that aim toward social change require the evaluator to be engaged, not just in the general community, but in the most basic assumptions of the understanding of the specific initiative as it is to operate within a community. In this way, the evaluative emphasis is on uncovering the structured logic often hidden within communities. Gaining attention in evaluation, and thus also influencing ideas related to evaluation of CCIs, is the idea of theory-based evaluation (C. H. Weiss, 1997). Evaluators have used theory-based evaluation to assist in understanding the how and why questions of a program (Donaldson & Gooler, 2003; Hasci, 2000; C. H. Weiss, 1997). This evaluation approach involves opening up the logic of programs for review through processes for indicating the beliefs and assumptions underlying ideas of social intervention and change. In practice, the approach consists of focusing attention on outcomes, approach, and context (Murphy-Berman et al., 2000; Schnoes, Murphy-Berman, & Chambers, 2000). Although researchers tend to treat these separately in evaluation, in theory-based approaches the ideas of outcome, approach, and context come together in the intent of generating evaluative understanding.


In attempting to elicit the underlying assumptions related to expectations that a program might have a certain desired result, theory-based evaluators pay attention to implementation theory (the chain of implementation) and program theory (the assumptions about how implementation achieves outcomes) (C. H. Weiss, 1997). When used for policy understanding, theory-based evaluation goes further to differentiate a policy’s theory of a problem, theory of a desired outcome, and theory of intervention (J. A. Weiss, 2000), and more recently has included attention to theories of sustainability (H. Weiss et al., 2002). Theory-based evaluation is distinct from forms of formative evaluation in that it aims to distinguish theories as a way to structure evaluation and because it can therefore be directed explicitly toward participant concerns with mechanisms of change (C. H. Weiss, 1997). According to supporters, a theory-of-change approach is particularly suited to social programs where dynamism precludes control, and thus where random assignment and control groups are neither desirable nor possible. These are programs where the complexity and social nature of the program do not allow for replication but require a deeper understanding of lessons in order to assist in the incorporation of learning within other unique programs (Hasci, 2000). The strength of theory-of-change evaluation is its espoused focus on the construction of knowledge rather than a preoccupation with isolated methods for data collection. Gambone (1998) states that data collected without theory are limited to description, but data connected to theory produce knowledge. Theory-of-change evaluation incorporates data as supporters seek to move the construction of knowledge -- as the linking of theory and information -- to within the realms of dynamic social initiatives.


With a theory-of-change approach, evaluators are called to integrate their experience with knowledge development within the boundaries of social initiatives and to facilitate the inclusion of participants in that knowledge development. Theory-of-change evaluation has therefore been promoted for its potential to (a) concentrate attention on specific aspects of a program, (b) make possible the aggregation of results into broader knowledge, (c) encourage an openness about what practitioners are intending to do and why, and (d) influence policy and popular opinion (C. H. Weiss, 1995). Use of the approach has the potential to help in building rapport with program staff, building cooperation and buy-in, and encouraging reflective practice (Huebner, 2000). The approach is also appealing since multiple theories may be simultaneously relevant in any given program (J. A. Weiss, 2000). Through the approach, complexity can be embraced rather than simplified. Nevertheless, theory-based evaluation, by definition, is neither exclusively formative nor inherently participatory but may be adjusted to the setting and nature of the evaluation task (Rogers, Petrosino, Huebner, & Hasci, 2000). Social change approaches, including theory-of-change evaluation, often share an embedded notion of learning through participation in the evaluation process itself. Learning in evaluation processes can be individual but is most often socially constructed (Preskill, Zuckerman, & Matthews, 2003). In order for the construction to be effective, the use of processes should be intentional (Preskill et al., 2003). Social construction in relation to community initiatives thus takes on a form of intentional consensus building around initiative meaning. As Innes and Booher (1999a) noted in relation to evaluating collaborative planning, in consensus building, process and outcome criteria meet and are informed by notions of communication. Lincoln too suggested that evaluators are facing
a changing social and political context that is postmodern in orientation. As such, communicating their commitments becomes a requirement in order for evaluators to take part effectively in a more clearly activist-oriented world (Lincoln, 1994). For Lincoln, action involves the communication of value in relation to social initiatives. Constructivism involves focusing on meaning-making activities, thus requiring a self-reflexive stance by evaluators, who are expected to come into evaluation with social change goals (Lincoln & Guba, 2004). This construction-oriented stance is shared by Carol Weiss (1998) in her suggestion that use is not merely a transfer of lessons but also entails an active engagement on the part of users. She suggested: …we cannot transfer (and use) evaluation findings mechanically from one place to another. However, certainly, we can gain important information about what happens in the sites studied, and we can use that information as illustration and metaphor of what can happen under similar conditions elsewhere. (p. 29) If transferring findings is about learning and not simply sharing outcomes, then the issue of communicating value indeed becomes integral to evaluation. For engagement to occur in participatory and learning-oriented environments, evaluators need to develop a “faith in others’ innate abilities, a desire to work with people, and a tolerance for imperfection” (Garaway, 1995, p. 98). This involves a sense of commitment to mutual learning and caring about participant interpretations developed through evaluations. Given the emergence of community initiatives within neighborhoods that have traditionally not been an active part of interpreting mainstream initiatives influencing policy, it is not surprising that new evaluation stances have been identified and socially aware manifestations of evaluation have emerged. The ideals of engagement of many community-based initiatives have demanded the participation of disenfranchised groups.


The desires for learning have led to calls for evaluation concepts and practices that can contribute to deep understandings of community initiatives and to action-oriented approaches to communication for comprehensive change.

Evaluation in the Context of Community Initiatives

There is general agreement that evaluating community initiatives, whether or not they are intentionally comprehensive, is a task full of the complexities of evaluating any social action (Baum, 2001; Edelman, 2000). Debates continue over what type of evaluation is congruent with the nature of community change, which processes of evaluation are most likely to support initiative influence, and how to address the challenges of evaluation for community initiatives (Baum, 2001; Chaskin, 2000; Connell et al., 1995; Edelman, 2000; Fraser et al., 2002; Fulbright-Anderson et al., 1998; Murphy-Berman et al., 2000; Rossi, 1999; Sawicki & Flynn, 1996; Stone, 1996). I focused this component of my literature search on the literature surrounding CCIs to help me identify gaps in research in the field. I highlighted the terms community initiative and community evaluation and drew upon literature from a variety of sources, including: (a) disciplinary and field journals, such as those serving the disciplines of sociology, psychology, and anthropology and the fields of education, community organizing, planning, and public administration; (b) topic journals, such as those focusing on issues of community psychology, urban affairs, and civil society; (c) agency publications in the public and nonprofit sectors, such as those from government bureaus and foundations; and (d) reports from research and training centers that engage in community initiative
research. My search was indicative of an issue-specific, multidisciplinary bounding of the emerging field of community initiative evaluation. Within the literature, researchers each describe selected components of community initiatives. Some of these descriptions have become fairly general and even cliché in their usage. For example, community initiatives are complex. Within initiatives, the concept of community is at best variously defined, at worst ill defined, and boundaries are not easily identified or stabilized. When community boundaries are defined, basic data do not usually exist for small areas or for the exact geographic area relevant to the initiative. Although social research methodologies are useful, some of the standards of traditional social science, such as establishing conditions of controlled comparison, are ethically improper because they would mean that needed services would be intentionally denied to one community. Social science methodologies based on ideas of control may also be impractical, given the dynamism of community initiatives. The participatory intent of many community initiatives means that individuals with varying evaluation awareness and skill are brought together in the research endeavor. The immediate requirements of the change agendas of community initiatives -- which seek to influence social contexts, policy, or implementation -- are at odds with long-term systematic processes for knowledge development and thus place competing demands on community initiative evaluation. Community initiatives often have ambitious agendas of social influence, including addressing complex and deep-seated issues such as poverty, racism, and inequity. Finally, given these agendas, there is often resistance on the part of directors, participants, and even evaluators to collecting any data, for fear that the data will be used
to support unreasonable expectations for initiatives; initiatives will not be able to show success related to such complicated and wide-scale issues. To calls for community initiative evaluation, researchers have responded with discussion of various approaches that mirror the program development evaluation approaches present in the evaluation field (Christie & Alkin, 2003; Nichols, 2002). One example is the neighborhood indicators movement, which involved collecting quantitative community data and using that data within a participatory process involving both residents and experts interested in improving the outcomes of interventions. The focus of critique by researchers was on the accessibility and the validity of data. In the literature, concerns for the dynamics of resident involvement in the indicator process appeared only in passing, with attention to problems with resident involvement in research sometimes framed as conditions of community pathology (Sawicki & Flynn, 1996). Additional approaches have focused on program dynamics for increased use of data and improved evaluation (Nichols, 2002). However, despite cautions that program development evaluation approaches were not aligned with community initiatives, theorists such as Rossi (1999) have continued to advocate for a diagnostic or need-based approach to evaluation. Within community initiative literature, the term ecological is sometimes used in order to move understandings beyond ideas of pathology and need to systemic change. Ecological assessment (Goodman & Wandersman, 1996) has focused attention on the complexity of community initiatives and the need to understand systems and contextual influences. However, within comprehensive community research, the term ecological takes on a redundancy, given that the notions of community and comprehensiveness overlap the term ecology in meaning.


Even though CCIs share a focus with ecologically oriented community-based initiatives, it is crucial to study CCIs as separate entities because of their explicit inclusion of the concept of comprehensiveness. Comprehensiveness is an elusive term that can as easily be applied to ideas of participant inclusion as to the needs of communities or to approaches to understanding social activity. Whatever the meaning given to the term comprehensive, its inclusion alone makes CCIs unique. Although other community initiatives can involve a notion of ecology as a separate characteristic, can be adapted easily to the rhetoric of categorical implementation, or can be conveniently situated within a particular industry such as housing or health, CCIs retain their embrace of comprehensiveness no matter their context. The ideas of evaluation thus must also always be consistent with an intention of comprehensiveness. In relation to comprehensiveness, supporters of CCIs often seek to interrupt categorical approaches and work across programmatic and systemic boundaries (O'Connor, 1995; C. H. Weiss, 1995) in an attempt to address physical, social, and economic issues and their interconnections (Brown, 1996; Stone, 1994). With CCIs, supporters also seek to effect change in multiple arenas such as the individual, the neighborhood, and larger state and national policy circles (Roundtable on Comprehensive Community Initiatives, 1997; Stone, Dwyer, & Sethi, 1996). At the same time, CCIs are expected to involve private, public, and nonprofit entities in addressing social issues. In other words, the work of CCIs is intended to embrace holism, or the awareness that neighborhood life is embedded within a larger socio-political context (Connell et al., 1995; Fulbright-Anderson et al., 1998).


Amid local, state, and national political shifts, CCI supporters find themselves increasingly pressured to attract and justify investment into ideas of holism. As a result, some initiatives emphasize innovative evaluation processes for use in documenting CCIs and in attracting and sustaining support for comprehensiveness. Parallel to discussions of CCIs is thus the discussion of the type of evaluation appropriate to understanding CCI approaches, challenges, and accomplishments. According to the Aspen Roundtable, CCIs are particularly difficult to evaluate because of their horizontal complexity in working across sectors and systems. In their comprehensiveness, CCIs are also influenced by contextual issues beyond the initiatives themselves, so CCI evaluation needs to be flexible and constantly changing in order to capture a broad range of outcomes. To meet these challenges, there is a growing body of literature suggesting the need to reframe the ideas of evaluation and to explore constructive strategies for leveraging evaluation investment into the strengthening of CCI work (Brown, 1996; Connell et al., 1995; Fulbright-Anderson et al., 1998; Stone, 1994). Members of the Aspen Roundtable have produced much of the CCI evaluation literature. Within the evaluation-specific publications of the Aspen Roundtable, authors have adopted the theory-of-change approach proposed by Carol Weiss as the ideal approach for conducting CCI evaluation. The Roundtable has released two major publications outlining the history of CCI evaluation, ideas about evaluation, and challenges to conducting CCI evaluation. The Roundtable has also provided discussions of what has occurred in particular sites that were trying to utilize a theory-of-change approach to evaluation (Connell et al., 1995; Fulbright-Anderson et al., 1998).


According to Carol Weiss (1995), a theory-of-change approach is based on the task of making explicit the tacit assumptions underlying any program. She notes that whether or not community initiatives are based on an explicit theory, there is always an implicit theory, and often many theories, underlying a social effort. Weiss also asserts that CCI evaluators should take as their task a surfacing of those theories in enough detail that the theories can be examined and data can be collected to explore the ways in which these theories hold or break down throughout an initiative. In this way program evaluators can help in determining which theories are best supported by evidence. Weiss also noted that, although the emphasis in theory-based evaluation is not the collection of outcome indicators, the approach does lend itself to the collection of data related to the emerging theories and thus the collection of interim indicators of a program’s success. In this way, the approach addresses the “pitfalls” of past community evaluations, where emphasis was placed on immediate individual-level change with no way of explaining the “how and why effects” of longer-term program interventions (C. H. Weiss, 1995, p. 86). A theory-based approach to evaluation enables a deeper understanding of how and why a program works rather than just to what extent it works (C. H. Weiss, 1995, 2004). In this way, theory-of-change evaluation approaches have the potential to support social and policy learning (Connell et al., 1995). Mapped onto the ideals of Weiss’s approach, however, are challenges to CCI theory-of-change evaluation. As Weiss herself admits, the approach comes with difficulties associated with theorizing, measurement, testing, and interpretation. As she writes, there is complexity involved in surfacing theories in that the analytic stance required is different from the “empathetic, responsive, and intuitive stance of many practitioners”
who may like to work in gestalts rather than pulling apart ideas (C. H. Weiss, 1995, p. 87). The issues of complexity are joined by the challenges of building consensus, the political risks associated with a community releasing its theory, and political pressures to keep evaluation tied to current policy concerns. Theories of change in CCIs also may not lend themselves to generalization to other settings. Another challenge is that theories of change are difficult to measure and are often too general to be amenable to testing because of the difficulty of determining the exact conditions that supported the theory (C. H. Weiss, 1995). Throughout the Aspen Roundtable publications, researchers provided a variety of discussions about the challenges of, and recommendations for, the practice of theory-of-change evaluation in CCIs. Researchers have built conceptual models to help in providing guidance and to support a research base for theories of change (Connell & Aber, 1995). Other researchers, such as Coulton (1995b), addressed issues of identifying indicators of both communities and contexts. Identifying boundaries in order to develop outcome measures in the absence of random assignment and controlled comparison groups proved challenging (Hollister & Hill, 1995). Identifying data appropriate for measurement (Coulton & Hollister, 1998; Gambone, 1998) and processes to establish causality (Granger, 1998; Hebert & Anderson, 1998; Milligan et al., 1998) were also difficult. Finally, a theory-of-change approach presented new challenges for evaluators, who found themselves adding to their repertoire the political, educational, and methodological skills required to operate effectively as participants in the complex CCI environment (Brown, 1995, 1998; Milligan et al., 1998; Philliber, 1998).


In an attempt to adapt the theory-of-change approach to evaluation practice, Aspen Roundtable evaluators reflected on their approach to implementing theory-of-change evaluation (Hebert & Anderson, 1998; Milligan et al., 1998; Philliber, 1998). Some shared their specific approaches. For example, Connell and Kubisch (1998) proposed a “start from the end and work backward” process with a series of steps to be adhered to after the larger questions of who participates and how the process will be guided were answered. These steps include identifying: 1) long-term outcomes, 2) penultimate outcomes, 3) intermediate outcomes, 4) early outcomes, 5) initial activities, and 6) resource mapping (p. 22). An additional series of steps described in the Roundtable writings included articulating the theory, identifying benchmarks, designing methods to measure, collecting data, conducting analysis, modifying theories, and providing feedback (Milligan et al., 1998). Overall, the Roundtable writings addressed the major problems with past community initiative evaluations, the potential of a theory-of-change approach for addressing community evaluation, the problems associated with CCI evaluation, and topics of learning as described in the form of discussion, reflections, and recommendations for evaluation. Figure 1 provides a heuristic summarizing the key issues conveyed in Aspen Roundtable writings.


Figure 1: Aspen Roundtable Evaluation Heuristic

[Figure 1 is a diagram; its panels summarize the key issues as follows.]

Problems with past community evaluations: reliance on individual-level data; inability to explain the how and why effects of interventions (Weiss, 1995, p. 86).

Theory-based evaluation's potential as a community evaluation approach: concentrates evaluation attention and resources on key aspects of a program; facilitates aggregation of evaluation results into a broader base of theoretical and program knowledge; asks program practitioners to make assumptions explicit and reach consensus; evaluations addressing assumptions may have more influence on both policy and popular opinion (Weiss, 1995, p. 69).

Challenges of CCI evaluation: horizontal complexity; vertical complexity; contextual issues; positivist stance to measurement; flexible and evolving interventions; general statements may not be testable; broad range of outcomes; not reproducible in other communities; absence of a comparison community (Kubisch et al., 1995, pp. 3-5; Weiss, 1995, pp. 87-89).

Challenges of theory-of-change evaluation: complexity in theorizing, first in involvement, second in consensus, and third in the public release of the theory.

Specific approaches to CCI evaluation and learning (discussion, reflections, recommendations): research-based frameworks for analysis of design and interventions to provide a “lens” for analysis of programs specific to a field like youth development (Connell & Aber, 1995); miscellaneous recommendations for evaluation practice (e.g., confusion, resources, skepticism, disagreement, planning) (Philliber, 1998; Kagan, 1998; Hebert & Anderson, 1998); indicators and measurement issues, both outcome and contextual (Coulton, 1995; Gambone, 1998); issues of evaluator roles (Brown, 1995, 1998); approaches to the problems of comparison (counterfactual, unit of analysis, and boundary definition) (Hollister & Hill, 1995); availability and use of small-area data (Coulton & Hollister, 1998); specific steps for generating outcome expectations (Connell & Kubisch, 1998); working with multiple stakeholders (Milligan et al., 1998); issues of positive causality (Granger, 1998).


Although much of the CCI evaluation writing has been produced by Aspen Roundtable members, in more than a decade of use the term CCI has expanded beyond this group. Researchers have written case studies documenting experiences with CCI evaluation (Murphy-Berman et al., 2000; Petersen, 2002; Schnoes et al., 2000); some have used the term in critique of a theory-of-change approach. Berk and Rossi wrote: So far, however, theory has not lived up to its promise in evaluation research. To begin, there is no agreement on what constitutes theory. For some evaluation researchers, a mere typology qualifies… For other evaluation researchers, any set of statements that link causes to effects qualifies. It does not matter how precise the statements are, whether they are internally consistent, whether they can be examined with data, or whether they are consistent with past empirical work and past theory supported by research. (1999, pp. 32-33) In studying evaluation practice, Christie found that a majority of evaluation researchers do not report utilizing theoretical frameworks in their practice or, when they do report utilizing theory, suggest that they use only part of a theory (Christie, 2003). When theory is utilized, as in Aspen Roundtable CCI evaluation, evaluation becomes different from other forms of systematic learning. Evaluation becomes focused on communication with the individuals or groups of individuals beyond those directly engaged in the learning processes. Although action learning, reflective practice, or organizational participation may have a public manifestation, public reporting is not inherent in the concept of either learning or action, nor is knowledge construction through theoretical questioning for use beyond the initiative always the expectation. These differences make addressing reporting crucial in CCI evaluation as distinguished from the traditional concepts of community indicators, categorical monitoring, organizational learning, or even action or participatory learning for organizational effectiveness. However, in CCI evaluation, theory-of-change reporting is made challenging because of
the adherence to a notion of comprehensiveness in spite of categorical or isolating forces. Despite the challenges, within the Aspen Roundtable writings and beyond, the issue of reporting theoretical understandings has been given relatively little attention, leaving limited understanding of both the importance and the challenges of reporting CCI evaluation.

CCI Evaluation Reporting

Literature supporting comprehensive community initiative evaluation, such as that evidenced in the Aspen Roundtable publications, includes history of, and advocacy for, CCI evaluation (Fulbright-Anderson et al., 1998; O'Connor, 1995; Stone, 1996; C. H. Weiss, 1995). Writings involve discussions of specific models and designs of CCI evaluations (Milligan et al., 1998; Murphy-Berman et al., 2000; Petersen, 2002; Schnoes et al., 2000), commentary about the potential of indicators, measures, and information use in evaluation (Coulton, 1995b; Coulton & Hollister, 1998; Gambone, 1998; Hebert & Anderson, 1998; Petersen, 2002), challenges specific to the roles of evaluators in CCI evaluation approaches (Brown, 1995, 1998), and discussions of the overall challenges and opportunities of evaluating CCI complexity (Connell & Kubisch, 1998; Hollister & Hill, 1995). What remains limited in the Aspen Roundtable and broader community development literature is attention to the use of evaluation reports for CCI understanding of areas of holism, engagement, intensity, and informed action, and to the challenges specific to CCI evaluative reporting. When they are addressed at all, evaluation reports as products of evaluation receive dismal commentary or outright dismissal. Hall (2003) noted: …evaluation, while framed with the same rhetoric of rationality and purposiveness, in practice has taken on a very different function. Results-oriented
boards demand proof of foundation efficacy, but are indifferent to evaluation findings. Foundation management pressures staff to do evaluation, but does not use the information it generates in planning. Foundation staff do evaluation, but generally lack the resources or the competence to do it with any rigor. Grantees are compelled to participate in evaluation, but – in instances where they have access to its products – seldom find them useful. (p. 33) Even within the Aspen Roundtable writings, reporting, when addressed at all, was often embedded within other discussions or given passing rather than detailed attention. One exception comes in the Aspen Roundtable's second report on evaluation, wherein Connell and Kubisch (1998) explain that theory-of-change reports are attempts to cover both process documentation and outcomes in order to then explain how and why initiatives are working. As they describe, traditional evaluation reports often covered long-term outcomes with little interim information, or were so concerned with process that they gave little attention to whether programs were working or to explaining the links between activities and outcomes. Even with the attention given to CCI evaluation as having the potential to inform various stakeholders, contribute to social and policy learning, and contribute to knowledge, CCI writings usually do not include an emphasis on evaluation reporting. Evaluation reports offer one form of knowing and communicating about CCIs and CCI evaluation. However, my search indicated that the understanding of CCI evaluation is incomplete in this area. Neither researchers, professional evaluation practitioners, nor local community participants have analyzed actual CCI evaluation reporting as a key component of the CCI evaluation approach. Figure 2 illustrates my analysis that existing
CCI evaluation literature is missing scholarly attention to CCI evaluation reporting and thus that the understanding of CCI evaluation approaches is incomplete.

Figure 2: CCI Evaluation Literature

[Figure 2 diagrams the writings about CCI evaluation approaches -- CCI evaluation challenges and opportunities; CCI evaluation models and designs; CCI indicators, measures, and information use; and CCI evaluator roles and relationships to participants -- with CCI evaluation reporting marked by a question mark as the missing area.]

Situated within the complexity of a technologically advanced society, the world of evaluation is plagued with uncertainty; there are few guidelines, or even questions, to assist in the interpretive acts of public reporting amid the decentralized complexity within which CCI evaluators find themselves. Caracelli and Preskill (2000) indicated how the 21st century poses challenges for evaluators. The environment within which evaluators work is complex both because the evaluation community holds different paradigms and because of the external environment. Among external conditions, they note that technology, global concerns, and the wealth of publicly available information pose technical challenges as well as
challenges of interpretation and presentation of information for a diverse audience (Caracelli & Preskill, 2000). In addition, there is an increasing concern that this complexity is compounded by the challenging multi-organizational structures receiving funding and requiring evaluation (Frederick, Carman, & Birkland, 2002). The trends in government of focusing on demonstrable outcomes, embracing devolution as a possible approach to service delivery, and involving nonprofit organizations in an environment of complex networks of service providers all contribute to the complex arena within which contemporary evaluators must operate (Frederick et al., 2002). Within this environment, it is surprising that the field of evaluation has not yet embraced the importance of examining reporting as an integral part of the endeavor and perhaps one of the most critical areas of evaluation in complex environments. The few texts that have dealt with evaluation reporting acknowledge the importance of evaluation communication, but they read more like composition guides than serious attempts at understanding reports as an integral facet of evaluation (Morris, Fitz-Gibbon, & Freeman, 1987; Torres, Preskill, & Piontek, 1996). Throughout various evaluation approaches that involve stakeholders and participants, reporting has been noted as needing to be geared toward those stakeholders. Stake (2000) referred to feedback occurring throughout evaluation processes and emphasized the need to consider audiences when reporting. Stronach, Halsall, and Hustler (2002) focused on the funder as a primary audience for evaluation reports, and commented on the ways in which pleasing the funder influences evaluation reporting: At the same time, he was aware in ways not made clear in the report that the impact measures that the sponsors required could not be realistically met…This is a normal condition of “policy hysteria,” and indeed of life. And yet reporting had to correspond to the ways in which the evaluation proposal “parroted” material
from the funder’s documentation. Outcome measures known to be unavoidably contaminated were therefore accepted as measurement objectives, and at the same time the sponsors were reassured that their desire for outcome indicators that were “reliable and valid” would be met… Our point here is not that evaluation can be seen as flawed, especially in retrospect. Nothing new there. Nor that evaluation fails to offer definitive judgment. It is more that “reporting” is never a collation of methodologically justified findings without also being a tremendous admixture of other influences. Some of these are a legacy of the exigencies of the bidding process, some a careful reading of what “heuristically” might be viable as “formative” feedback, or as a summative account that would be read in a particular political context in specific ways, and that might have consequences for future evaluation business. (pp. 180-181) The support for evaluation within CCIs and the emergence of theory-of-change approaches have expanded discussions of evaluation and serve to move evaluators toward a more enhanced and detailed understanding of theory and practice within the shifting contexts in which CCI evaluation is embedded. However, even with all of the discussion about comprehensiveness, CCI evaluators may have missed the potential of comprehensiveness by failing to question, through empirical analysis of evaluation reports, how evaluators put language to their work. Rather, to endeavor to understand the construction of evaluation is to embrace, as Schwandt (2002) has done, evaluation as a form of social practice, “shifting” the analysis of “what it means to perform evaluation practice...from mental acts directing conduct, to practice, or performance of social conduct” (p. 173). Evaluation thus becomes “an economic, socio-political, and cultural institutional practice” and, “as an institution in its own right…evaluation practice accrues and exercises power to define the socio-political world” (p. 174). Madison (2000) too has documented the way in which the use of language serves to construct social problems and in turn entails consequences for the range of appropriate responses to those conceptions. She supported the social change importance of evaluative language. With reference to
policy, Cabatoff (2000) emphasized that it is a focus on language that moves evaluation beyond ideas of utilization by individual stakeholders to concepts of policy communities, and with this move confronts the potential to influence policy change. With these notable exceptions in mind, the attention to reporting has been minimal in comparison to texts about evaluation design, measurement, roles, and the overall challenges of evaluation. Together these exceptions comprise an emerging strand in evaluation research highlighting an area that is crucial to deepening understandings of social change efforts such as CCIs and CCI evaluation.


CHAPTER THREE: METHODOLOGY

Qualitative Research of CCI Evaluation

Qualitative researchers focus on the holism and complexity of situations and issues and, in that complexity, acknowledge multiple dimensions of meaning and interrelationships (Creswell, 1998; Marshall & Rossman, 1999; Schram, 2003; Stake, 1995). Stake (1995) writes that analysis “essentially means taking something apart” and involves “seeing how parts relate to each other and to other types and to putting the instance back together in a meaningful way” (pp. 71-75). As an analytic study of reporting, the methods I utilized allowed me to pull apart the data, examine it, relate parts of the reports to each other, develop categories and larger concepts, and engage in a process of reflection that also kept me cognizant of my role in bounding the study. For Merriam (2001), analytic studies are also different from descriptive studies because of their “complexity, depth, and theoretical orientation” (p. 38). Although I entered the study with a basic sense of the case and the data I was using, as I engaged with the text my questions became more emic, or related to the embedded meaning of the case. Particularly, as I engaged in working with the textual data, my processes for making meaning through analytic layering became focused, and I became aware of the multiple types of questions I was using to understand the meaning of the data. The techniques for the study also became more refined and congruent with the issues and data of the study as I proceeded with analysis of the selected case.


Case Study

In research, some theorists consider the case the object of study, whereas others understand case study as a methodology (Creswell, 1998). A case study may also be the report resulting from research. Therefore, the term case study is used to describe the content of study, the process of study, and the product of study (Merriam, 2001). Furthermore, case studies are conducted in order for researchers to describe, explore, explain, interpret, or evaluate (Gall, Gall, & Borg, 2003; Merriam, 2001; Yin, 1994, 1998) and are particularly useful when the issue explored is complex and consists of multiple variables for understanding that issue (Merriam, 2001). Case studies can be particularistic; they can focus on a particular situation. Case studies can be descriptive, providing rich, thick description. Case studies can also be heuristic, as when “case studies illuminate the reader’s understanding of the phenomenon under study” (Merriam, 2001, p. 29). In this section, I focus on case study as a process for research and specifically on qualitative approaches to case study for developing an understanding of CCI evaluation reporting. Case study methodologists differ on the focus of case study even though their definitions and concerns often overlap. For Yin (1994), case study is employed when “how” and “why” questions are desired, when there is little control over the situation being studied, and when that situation is a contemporary one (p. 6). Stake (1995) asserts that there cannot be a precise definition of a case study but rather refers to a case as itself “a specific, complex, functioning thing” (p. 2). To address this complexity, qualitative case studies are often holistic in nature, with attention to multiple aspects of a situation
(Merriam, 2001; Yin, 1994). Case study may involve the interaction between the emic (insider’s) and etic (outsider’s or researcher’s) perspectives of a phenomenon (Gall et al., 2003; Merriam, 2001; Stake, 1995), and there is often a sincere effort made on the part of researchers both to hear the views of participants and to acknowledge multiple realities even if they are contradictory (Stake, 1995). Since philosophical stances toward case study and types of case methods differ, various terminology is used. However, the essence of case study is wholeness. For research purposes, wholeness presents itself as the need for a research topic to be bounded for study (Merriam, 2001; Stake, 1995; Yin, 1998). There are various ways of engaging in bounding, each of which places a differing understanding on the nature of that which is to be studied. For example, Yin (1994) addresses the notion of bounding by equating the definition of the “unit of analysis” with the definition of the case (p. 22). He emphasizes that the research questions must point definitively to a specific unit of analysis and that keeping the unit of analysis similar to existing case studies is essential for comparability to established research. However, as Yin (1994) observes, the variables of a phenomenon are often inseparable from its context. To be appropriate for case study, the phenomenon must be bounded either intrinsically (Merriam, 2001) or in relation to its context so that it can serve as a unit of analysis (Miles & Huberman, 1994). Although emphasis varies, simply put, there must be a way to suggest what the case is and what is outside of the case. Stake (1995), rather, treats each case as a “system” that has its own inherent “boundaries” and “working parts” (p. 2). Cases are thus “instances of a phenomenon,” and case study design is an approach to developing an in-depth understanding of
phenomena through these instances or situations (Gall et al., 2003, p. 436), rather than having direct comparability to other cases. Similarly, Miles and Huberman (1994) frame a case as “a phenomenon of some sort occurring in a bounded context” and graphically present a circle with a heart in the center. The heart is the focus of the study, while the circle “defines the edge of the case: what will not be studied” (p. 25). However, in studies that address change over time and are interested in the educative quality of social initiatives, as my study is, bounding becomes not static but rather an ongoing part of the research process itself. By engaging in a qualitative case study, case researchers may position themselves to explore a topic holistically, allowing the specific boundaries of the case to change as understandings change. Engaging in the process of bounding gives researchers the opportunity to understand the interactions within a case system as situated interactions occurring within a context. Cognizance of the interaction between the emic and the etic perspective is crucial, as is concern with holistic, possibly multiple, and contextualized understandings of a topic. These joint concerns lead to an attention to the nature of qualitative case study and a focus on the analytics of my research approach. Because of their ability to attend to complexity, case study approaches, in their analytic forms, lend themselves to understanding reporting as situated within CCIs. Within the process of my analysis, I brought together various ways of questioning, reflection on my own experiences from different views, and multiple types of data around the same topic, and I used the analytic process to build an understanding of reports as they changed over time.


Case Selection

In order for me to study CCI evaluation reporting as a case, an actual CCI evaluation was needed. A case study may be intrinsic, whereby the unique characteristics of the specific case are worthy of study in and of themselves, or the case may be instrumental because exploring its related issues can help in understanding other similar cases (Stake, 1995, 2000). For an instrumental case study, as my study is, there must be criteria for selecting the case in order to maximize the learning that is accessible around the particular issue (Stake, 1995, 2000). Because of my research interest in CCIs, evaluation, and evaluation reporting, I used the following criteria for selecting an initiative to study. These conditions, and thus criteria for selection, included:

• A topic related to CCIs that could be located within a particular bounding -- CCI evaluation.

• An identifiable enactment of that topic as bounded by organizations associated with the specific evaluation -- CCI evaluation reporting.

• The availability of primary data that can be sampled to inform the understanding of the CCI evaluation reporting -- CCI evaluation reports.

Given my research focus, I was interested in selecting a single nationally funded program, supportive of neighborhood-based development, and involving evaluations. It was also crucial that this initiative be publicly linked to a broader group of individuals who engaged in a public discussion of CCI evaluation theory and practice. This was necessary to provide a broader discussion of ideas of evaluation from the perspective of a larger community.


To begin the search for a national program, I identified publications and the website of the Aspen Institute’s Roundtable on Comprehensive Community Initiatives. The Roundtable is an identifiable group that supports the discussion and practice of CCI evaluation and is a prominent source of information about CCIs and CCI evaluation. I then developed a list of the nationally supported programs funding CCIs as described on the Roundtable website. I mapped out primary membership on the Roundtable and utilized the website to explore further the evaluation information related to the evaluation firms involved and the evaluation publications generated in relation to these initiatives. Since the CCIs listed on the website did not contain immediately recognizable links to all Roundtable members, I then listed the remaining members as noted in the 1998 Roundtable publication on CCI evaluation (Fulbright-Anderson et al., 1998). This process provided me with a snapshot of the broader network of associated individuals and organizations and a finite set of initiatives from which to select purposefully a CCI evaluation (see Appendix A for a complete listing of CCIs considered). To reach a final selection of a nationally funded initiative consistent with my research interests, I considered the following criteria:

• To ensure the availability of a broader network related to the initiative, I considered the extent to which the national funder was linked to the publicly organized research group (Aspen’s Roundtable) as evidenced through financial support and membership. I also noted the extent to which the evaluators of the initiative were linked (as members) to Aspen’s Roundtable.

• To ensure consistent investment into evaluation, I considered the investment into evaluation as indicated by the length of time of the evaluation and the production of evaluation documents.

• To establish that there was a connection to neighborhood development, I identified an initiative that included evaluation of specific neighborhood initiatives.

• To ensure the availability of primary data, I considered the extent to which initiative evaluation documents were publicly available.

Of the CCIs listed on the Roundtable website, the Ford Foundation’s Neighborhood and Family Initiative most closely met these criteria. NFI was uniquely suited to this study for a number of reasons. The funder and evaluation intermediaries were both represented on the Roundtable. NFI supporters sustained investment in NFI for approximately ten years. NFI funding was invested in CCI activities that included evaluation activities. Evaluation was conducted over the course of the initiative and evaluation reports were produced, with some reports publicly released. Sampling in this study was not only purposive but also “theoretically driven,” with choices made in relation to the conceptual question to be addressed rather than with a notion of “representativeness” (Miles & Huberman, 1994, p. 29). Sampling decisions occurred at three points in the study. As described, the first was a purposive sampling of NFI as the case to be addressed; NFI was purposefully selected to meet my criteria. The second sampling decision involved the selection of evaluation reports as the primary data for the study. The third and final point of sampling was concerned with the segmenting
of data from which meaning units would be identified. I describe these decisions as they occurred in my discussion of the methods for the study.

Methods

Case study involves data collection that is in-depth and comes from multiple sources; data is also often very detailed in contextual content (Creswell, 1998). The flexibility of case study to address issues holistically, through the incorporation of multiple sources of data and a variety of methods, is particularly supportive of understanding a phenomenon within a real-life context, especially when the boundaries between the phenomenon and context are not clear (Merriam, 2001; Yin, 1994, 1998). According to Merriam (2001), case study researchers can utilize any methods to gather data. A case study researcher gathers as much information about the problem as possible with the intent of analyzing, interpreting, or theorizing about the phenomenon... Rather than just describing what was observed…the investigator might take all the data and develop a typology, a continuum, or categories that conceptualize different approaches to the task…The level of abstraction and conceptualization in interpretive case studies may range from suggesting relationships among variables to constructing theory. The model of analysis is inductive. Because of the greater amount of analysis in interpretive case studies, some sources label these case studies analytical. (p. 38) Because of my focus on language, I turned to content analysis as my analytic approach. Content analysis emerged as a quantitative science with positivist notions of replicability and validity; it was used predominantly as a means of documenting communication and media messages and predicting their impacts on audiences (Krippendorf, 1980; Neundorf, 2002). However, according to Krippendorf (1980), content analysis is unique because of its context emphasis.


For content analysis, more so than for other techniques, the research design as a whole must be appropriate to the context from which the data stem or relative to which data are analyzed…Categories have to be justified in terms of what is known about the data’s context. Content analysis research designs have to be context sensitive. There must be some explicit or implicit correspondence between the analytical procedure and relevant properties of the context. (p. 49) Although content analysis emerged as a predominantly quantitative, albeit contextualized, approach, qualitative forms have also emerged alongside the quantitative versions (Merriam, 2001; Potter, 1996). Writing from within media studies, Potter framed his exploration of qualitative research around the study of meaning making. According to Potter, content analysis, as a methodology, is particularly suited to exploring cases when there is acknowledgement that meaning is made by individuals and thus is evidenced through messages or signs of the associated experience. For purposes of this study, I was interested in the public documents produced through CCI evaluation and what could be learned about CCI evaluation from the text of actual evaluation reports. Embedding content analysis within a qualitative case study approach allowed me to look at different levels and types of messages as documented within evaluation reports. The content analysis process for this study involved the coding of data and the creation of categories to describe and classify the content (Merriam, 2001). The study involved the establishing of overall research questions to be addressed; these questions remained as guideposts as I moved through the analysis. Consistent with the flexible ideal of qualitative research and the evolving nature of research questions (Merriam, 2001; Schram, 2003), I refined the questions throughout the process, allowing the questions to develop from etic (or outsider) issues based in past experience or literature into emic issues (those grounded in the case itself) (Stake, 1995). Creswell (1998) describes the qualitative analysis process as a spiral including loops for data managing,
reading and memoing, describing, classifying and interpreting, and representing and visualizing (p. 143). More specifically, my analysis process involved identifying message units applicable to my research questions and then analyzing the reports in relation to individual messages, to messages across the reports, and to messages as they occurred over the time span of the initiative. To address this complexity, my analysis process involved the interaction of data, questions, and techniques occurring together throughout a series of investigative iterations.
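
To make the idea of following messages across a report series concrete, the sketch below shows one plausible way such tracking could be arranged in code. It is a minimal illustration only, not the study's actual procedure: the code labels, excerpts, and helper function are hypothetical, though the years correspond to the report series listed later in Table 1.

```python
from collections import defaultdict

# Hypothetical coded message units: (report_year, code_label, excerpt).
coded_units = [
    (1992, "collaboration", "..."),
    (1995, "collaboration", "..."),
    (1995, "sustainability", "..."),
    (2000, "sustainability", "..."),
]

def messages_over_time(units):
    """Group coded message units by code label, ordered by report year,
    so that a single message can be followed across the report series."""
    timeline = defaultdict(list)
    for year, code, excerpt in sorted(units):
        timeline[code].append((year, excerpt))
    return dict(timeline)

for code, occurrences in messages_over_time(coded_units).items():
    years = [year for year, _ in occurrences]
    print(f"{code}: appears in reports from {years}")
```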

Data

Sampling refers to both the “how” and the “why” of data selection processes (Potter, 1996). As to why certain data were used, the researcher is guided by either convenience sampling or purposive sampling: for convenience sampling, efficiency is the predominant concern, while for purposive sampling the specific data need is predominant (Potter, 1996). The choice of data for this study was purposive, although access to information was also a concern. The NFI evaluation reporting itself is complex in that, within the reports, the primary or secondary nature of the reporting is vague. The NFI evaluation involved a two-tiered approach, with evaluation occurring both “nationally” and “locally” (Chaskin, 1992). The national evaluators also utilized locally produced data and sometimes were involved in site interactions locally. The local evaluators participated at times in training, conversation, or meetings with other local sites and also with members of the national organizations. The data included documents whose authors were the people personally
involved in the event (Gall et al., 2003; Merriam, 2001) as well as documents involving accounts of events by authors not present (Gall et al., 2003; Merriam, 2001). For the purposes of this study, I treated the entirety of the publicly available NFI reports as primary data for my study without attending to whether the NFI evaluators themselves were reporting from an observer or secondary standpoint. The primary data for the study came from the series of publicly available documents describing the process and outcomes of NFI. Publicly available means that, as someone not directly involved in the initiative, I was able to obtain the documents either electronically through a public website, through mail order for a fee, or with a simple email request or phone call to the producers of the documents. The primary data for the study included the following documents listed in Table 1.

Table 1: Primary Data

1992. Chaskin, R. Chapin Hall Center for Children. The Ford Foundation's Neighborhood and Family Initiative: Toward a model of comprehensive neighborhood-based development.

1993. Chaskin, R., & Ogletree, R. Chapin Hall Center for Children. The Ford Foundation's Neighborhood and Family Initiative: Building collaboration: An interim report.

1993. Grant, L. M., & Coppard, L. C. Community Foundation for Southeastern Michigan. Neighborhood and Family Initiative local evaluation: May 1993.

1994. Grant, L. M., & Coppard, L. C. Community Foundation for Southeastern Michigan. Neighborhood and Family Initiative local evaluation: May 1994.

1995. Chaskin, R., & Joseph, M. Chapin Hall Center for Children. The Neighborhood and Family Initiative: Moving toward implementation.

1997. Chaskin, R., Chipenda-Danoshka, S., & Joseph, M. Chapin Hall Center for Children. The Ford Foundation's Neighborhood and Family Initiative: The challenge of sustainability.

1998. Johnson, J. Planning Council for Health and Human Services. The Milwaukee Harambee Neighborhood and Family Initiative: Outcomes-based evaluation report covering the period July 1, 1996 – June 30, 1998.

1999. Chaskin, R., Chipenda-Danoshka, S., & Richards, C. J. Chapin Hall Center for Children. The Neighborhood and Family Initiative: Entering the final phase.

2000. No author credited. Cosmos Corporation. Common data collected for the Ford Foundation's Neighborhood and Family Initiative: Neighborhood indicators.

2000. Chaskin, R., Chipenda-Danoshka, S., & Toler, A. K. Chapin Hall Center for Children. Moving beyond the Neighborhood and Family Initiative: The final phase and lessons learned.

2000. Chaskin, R. Chapin Hall Center for Children. Lessons learned from the implementation of the Neighborhood and Family Initiative: A summary of findings.

* Note: At the time of data collection for this study, there were no reports publicly available from the foundations or evaluators of the Hartford or Milwaukee sites.

I approached the data with the intent of identifying meaning units. According to Gall, Gall, and Borg (2003), a meaning unit is “a section of the text that contains one item of information and that is comprehensible even if read outside of the context in which it is embedded” (p. 453). Because of my focus on the multiplicity of meaning and on the interaction between meaning and context, during the analysis I identified meaning units with attention not to the provision of information but to whole concepts. A meaning unit in my study was always at minimum a sentence, to ensure the potential of a whole concept with a stated or implied subject, verb, and object included. Meaning units may have been as short as one sentence or as long as a few pages, dependent upon the amount of text needed to capture the thought about the particular
concept. Meaning units may have been multiply labeled in analysis if there were aspects within the sentence that referred to various concepts. Within the primary data, there were also two sets of data or segments that I identified, for both convenience and purpose, and utilized for eliciting findings from the NFI evaluation reports. First, I drew from descriptive overviews that were included in each evaluation report. Analysis of these overview statements provided a basic snapshot of the way in which NFI was framed at that point in time within each evaluation report. The second dataset was drawn from the entire body of evaluation report text and included statements that evaluators made about the initiative evaluation. From these segments, I identified change constructs to note areas wherein evaluation learning was evidenced throughout the reports. These were passages that included any reference to the term evaluation or any derivative of the root of the word evaluation. With this segment of data, I sought understandings that the evaluators shared in terms of the concepts and processes of evaluation. Together the change constructs added to my understanding of the primary documents and contributed to my interpretive framework. I also used the primary data as whole texts for the evaluation findings, for refining my learning, and for checking the change construct development against the full evaluation texts.
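
As an illustration of this second segmenting step, the sketch below approximates, in code, the extraction of passages containing the root of the word evaluation. It is a minimal sketch, not the study's actual procedure: the regular expression, the crude sentence splitter, and the sample text are assumptions introduced here for illustration, and the study's meaning units could span a sentence to several pages rather than single sentences.

```python
import re

# Matches "evaluation" and derivatives of its root: evaluate, evaluator,
# evaluative, and so on (hypothetical pattern for illustration).
EVAL_ROOT = re.compile(r"\bevaluat\w*", re.IGNORECASE)

def split_sentences(text):
    # Crude sentence splitter; the study segmented by whole concepts,
    # which could run from one sentence to a few pages.
    return re.split(r"(?<=[.!?])\s+", text)

def evaluation_passages(report_text):
    """Return candidate meaning units that reference the root 'evaluat'."""
    return [s for s in split_sentences(report_text) if EVAL_ROOT.search(s)]

sample = ("The evaluators met with residents. The local evaluation "
          "began in 1993. Residents planned a festival.")
print(evaluation_passages(sample))
# ['The evaluators met with residents.', 'The local evaluation began in 1993.']
```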

Analytic Questions

In combination with the data, I utilized a series of analytic questions (Merriam, 2001) to focus my attention on the messages documented in the NFI reports. These questions included topical questions that guided my gathering and focusing of
information (Creswell, 1998; Stake, 1995), critical questions that helped me to look deeper into the messages (Marshall & Rossman, 1999), and reflective questions (Glesne & Peshkin, 1992; Maxwell, 1996). The reflective questions enabled me to examine my emerging analysis against the backdrop of professional and experiential understandings that I have gained over the past ten years of working in various areas of community assessment. Topical questions are those questions that elicit the specific information needed to describe the case (Stake, 1995, p. 25). I used topical questions at various points within the data analysis as I came upon information that I needed to order and examine. For example, as I reviewed the description statements, I recognized the need for a table that provided basic details about the initiative, and I utilized a series of simple questions to organize the information available. All of the topical questions required a low level of inference and were directly related to the exact words in the text. Critical questioning may be thought of as a frame of reference rather than a specific list of details to be identified. In critical questioning, I continually asked and made notes on questions such as: So what? Why? How? To what end? From whose perspective? Based upon what evidence? In relation to which concept? The genesis of critical questions is not explicitly identifiable or specifically related to the details of the research questions. Rather, critical questioning comes from immersion into the data as well as from the literature, current understanding of the phenomenon, and simple curiosity about the phenomenon being studied.


In addition to curiosity, Merriam (2001) noted that qualitative research requires an acceptance of ambiguity, as there are no set step-by-step processes. The researcher must be intuitive and sensitive to context and variables within it: including the physical setting, the people, the overt and covert agendas, and the nonverbal behavior. The researcher must be sensitive to the information being gathered. What does it reveal? How can it lead to the next piece of data? How well does it reflect what is happening? Finally, the researcher must be aware of any personal biases and how they may influence the investigation. (Merriam, 2001, p. 21) Merriam adds that, given that the researcher is the primary instrument for the research, there is a connection between the researcher’s “worldview, values, and perspectives” (p. 22). Qualitative methodologists thus often support the idea of reflecting on their relationship to the subject and ideas being explored, emphasizing the need to include a reflective process in qualitative analysis (Glesne & Peshkin, 1992; Maxwell, 1996; Schram, 2003). In order to understand the experiential aspects of my questioning and interpretations, I engaged in reflective questioning throughout the analysis. Reflective questioning started at the very beginning of the design of the study with the selection of the topic and with the choice of a qualitative approach to understanding. Through reflective questions, I was able to explore the layers of meaning involved in my interpretation of the data. For me, reflective questioning, like critical questioning, was more a process than a list of questions; some initial questions included: How does this relate to a past experience? Is this what I thought the data would show? Is the data confirming what I already know, or is there something more here? If my worldview were different, how might I see this differently? How do my background and experience influence how I interpret this data?


Techniques

Combined with these data and questions, I utilized four qualitative analysis techniques. Three of these techniques coincide with Miles and Huberman’s (1994) simultaneously occurring components of qualitative analysis -- data reduction, data display, and conclusion drawing and verification. In relation to the content analysis approach for this study, I refer to these components as the coding of textual units, the generation of data displays, and the writing of interim textual summaries. The fourth technique I utilized was analytic memoing (Maxwell, 1996; Miles & Huberman, 1994). Coding, as utilized in content analysis, is a process of identifying categories to apply to segments of text. Text may be broken apart, allowing the researcher to treat segments as individual messages that may contribute to the understandings of a larger piece of work (Potter, 1996). Data displays are graphic representations, such as matrices, diagrams, and drawings of information or thoughts, that emerge in relation to the analysis of qualitative data (Miles & Huberman, 1994; G. W. Ryan & Bernard, 2000). Textual summaries involve the writing up of ideas as a way for a researcher to begin to link thoughts and explore or verify emerging understandings. Writing, used in this way, is not a final representation but rather an ongoing process of analysis (Miles & Huberman, 1994; Richardson, 2000). In addition to the above components, I utilized analytic memoing, which became a process of applying and documenting the topical, critical, and reflective questions that occurred in relation to each of the techniques and to other thoughts that emerged in analysis.


Coding Primary Data

Unlike linear inquiry processes, content analysis often involves the coding of raw data in conjunction with the development of broader categories (Merriam, 2001). Although Marshall and Rossman (1999) refer to coding as a phase in the analysis to follow the generation of themes and patterns and categories, I utilized coding as a technique rather than an explicit phase. The labeling of data or coding thus became an ongoing part of the analysis process rather than a discrete stage. Miles and Huberman (1994) suggest that: Coding is analysis. To review a set of field notes, transcribed or synthesized, and to dissect them meaningfully, while keeping the relations between the parts intact, is the stuff of analysis. This part of analysis involves how you differentiate and combine the data you have retrieved and the reflections you make about this information. (p. 56) I utilized two initial iterations of analysis to explore and then identify data. The two approaches included exploratory labeling and descriptive coding. The exploratory labeling occurred first as I became familiar with the data. I then read the documents and placed labels on the text highlighting immediately apparent ideas about CCIs. I utilized a number of electronic searches based on word usage in order to explore any obvious patterns that might have emerged in relation to the labels I had identified. Due to the use of a computerized analysis program, each of these explorations resulted in another label being added to the applicable units of text. I considered these steps exploratory labeling (rather than explicit coding) because they involved a process of labeling that was immersion focused rather than systematically grounded.


I then proceeded to code the text based on the stated structure that the evaluators placed on the text through the table of contents for each report. I utilized the table of contents as an indication of the major concepts that the evaluators emphasized, and I engaged in systematic coding with reference to the specific words (or derivations of the roots of words) that the authors used. My intention with this initial coding was to be systematic and also to remain directly linked to the word usage of the authors. This latter form of coding may be referred to as “descriptive coding,” or coding that does not involve interpretation on the part of the researcher (Miles & Huberman, 1994, p. 57). This coding was useful for immersion into the data but was too broad to be useful in focusing my analysis.
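
A rough sense of what this root-based descriptive coding involved can be given in code. The sketch below labels a text unit with whichever table-of-contents roots (or derivations of those roots) appear in it; the root list and matching rule are hypothetical illustrations, not the actual coding scheme used in the study.

```python
import re

# Hypothetical roots drawn from report tables of contents; the study used
# the evaluators' own words and derivations of their roots.
TOC_ROOTS = ["collaborat", "implement", "sustain", "neighborhood"]

def descriptive_codes(text_unit):
    """Attach a non-interpretive label for each root found in the unit."""
    return [root for root in TOC_ROOTS
            if re.search(rf"\b{root}\w*", text_unit, re.IGNORECASE)]

unit = "The sites moved toward implementation of neighborhood plans."
print(descriptive_codes(unit))  # ['implement', 'neighborhood']
```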

Graphic Displays

Data displays are visual depictions of data or of ideas that the researcher is drawing from the data. Displays can be useful for both visualizing ideas and facilitating thinking (Maxwell, 1996, p. 80; Miles & Huberman, 1994). “In data analysis, they [graphic displays] serve two other key functions as well: data reduction and the presentation of data or analysis in a form that allows it to be grasped as a whole” (Miles & Huberman, 1994, p. 91). To understand parts and wholeness, I utilized a variety of data displays common to qualitative research. These included matrices (e.g., time-sequenced, role-focused, and organization-focused), flow charts, and tables.


Textual Summaries (Including Visuals)

Maxwell (1996) differentiates strategies that are used to focus on similarity or sorting into categories, such as coding, from strategies that are relational in orientation, which he calls “contextualizing strategies,” used to “look for relationships that connect statements and events within a context into a coherent whole” (p. 79). For my study, I utilized both textual summaries and visuals to explore connections of ideas and data within the context of my questioning. Textual summaries occurred as I sought to bring together ideas that were generated during the coding processes. These occurred throughout the coding and also were a major part of the first draft writing of the study report. Visual summaries also occurred at all stages of the analysis as I sought to represent various insights and possible conceptual linkages between the data, the questions, and my emerging understanding of the data. Visual summaries differed from data displays in that they were more inferential in nature, linking together emerging concepts (sometimes with actual data), rather than solely listing and configuring data excerpts or information extracts.

Analytic Memoing

According to Maxwell (1996), "Memos are primarily conceptual in intent. They don't just report data; they tie together different pieces of data into a recognizable cluster, often to show that those data are instances of a general concept" (p. 72). Analytic memoing occurred throughout the design and analysis of the study. I utilized memos to document my thinking and my responses to the topical, critical, and reflective questioning, both to enhance the analysis (Maxwell, 1996; Miles & Huberman, 1994) and to provide an audit trail of thoughts and processes. My analytic memos early in the process tended to be freeform, whereas memos during the later analysis were often structured more explicitly around a particular emerging issue that I wanted to think about systematically. I recorded memos in different ways depending on where I was when a thought occurred and what medium was available, including lined notebooks, blank sheets of paper, and computerized memos and displays. For memos documented in conjunction with the electronic processing and coding of the data, a qualitative data management program enabled me to link each memo directly to the text unit I was reviewing or coding when the idea or question arose.
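The essential linkage -- each memo tied to the text unit under review when it was written -- can be sketched as a small data structure. The identifiers, dates, and notes below are hypothetical; the actual linking was handled by the qualitative data management program.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnalyticMemo:
    """A dated memo linked to the text unit being reviewed or coded."""
    text_unit_id: str           # e.g., "report1999-s102" (hypothetical ID)
    written: date
    note: str
    tags: list[str] = field(default_factory=list)

memos = [
    # An early, freeform memo.
    AnalyticMemo("report1992-s014", date(2003, 2, 3),
                 "Why does 'collaborative' carry so much weight here?"),
    # A later memo structured around an emerging issue.
    AnalyticMemo("report1999-s102", date(2003, 9, 17),
                 "Emerging issue: evaluation tiers and division of labor.",
                 tags=["evaluation", "structure"]),
]

# Retrieve every memo attached to a given text unit.
linked = [m for m in memos if m.text_unit_id == "report1999-s102"]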

Investigative Iterations

I utilized these data, questions, and techniques together throughout a series of investigative iterations. These iterations were loosely defined temporal stages of the analysis that marked the primary focus of my analytic attention at that point in the process. The iterations included immersion into the data and segmenting, visual diagramming of text units, analytic layering, and change construct definition. Figure 3 represents the analytic approach.

Figure 3: Analytic Approach [diagram: the primary data, the questions (topical, critical, reflective), and the techniques (coding, graphic displays, textual summaries, analytic memoing) feed four iterations: immersion into data and segmenting, visual diagramming of text units, analytic layering (descriptive, cluster, path), and change construct definition]

Immersion into the Data and Segmenting

My first stage of analysis was immersion into the data. Marshall and Rossman (1999) refer to this phase as "organizing the data," or the process through which researchers become "familiar" with data "in intimate ways" (p. 153). During this phase, I utilized the techniques of coding, visual displays, and memoing to acquaint myself with the primary reports. I noted major ideas that were privileged in the documents and reflected on the thoughts that puzzled or intrigued me. The result of this immersion was iterations of coding schemes, a series of memos, and a better grasp of the nature of the data and of the challenges of utilizing formally represented textual data. Through this process, I familiarized myself with the language and formal structuring of the evaluation reports. I recognized that an arbitrary designation of a unit of analysis, and computerized searching for individual words alone, would not suffice for capturing the meanings embedded within the evaluation reports.

As I sought to understand the nature of the data I had accessed, I struggled with when to utilize the whole dataset and when to focus on strategic portions of it. Stake emphasizes the importance of selecting the data most useful to the study and "spending the best analytic time on the best data" (Stake, 1995, p. 84). I recognized that I needed to segment the evaluation reports according to my research focus, which resulted in a treatment of the text in three components. First, I sought an overall understanding of the evaluation reports through a reading of their entire text, and I returned to the entire text as whole documents to be explored in relation to main ideas. Second, I wanted to understand the evaluators' description of NFI. I discovered that each evaluation report contained an overall statement, early in the report, that served as a general description -- a statement in which the evaluators told what the initiative was to them at that point in time. I segmented this text to use in analyzing the general descriptors of the initiative. Third, focusing on my primary interest in evaluation, I segmented out every passage in which the evaluators discussed the concept of evaluation. This "evaluation" text I set aside for the most intense analysis.

During this process of working with the data, coding, and segmenting the "best data," I also noted that linkages between ideas in the text risked being lost, whether I focused on the text in the linear structure of the reports or in dissected fashion. I needed a way to "see" the ideas of the text without losing the connections of ideas to one another, and to understand these excerpts as both individual ideas and parts of a whole. I addressed this need for linkages through a visual diagramming of text units.
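This three-way treatment of each report can be sketched as a segmentation step. The predicates below are deliberate simplifications, hypothetical stand-ins for judgments I made by reading rather than mechanical rules actually used in the study.

def segment_report(sentences: list[str]) -> dict[str, list[str]]:
    """Split one report into the three components used in the analysis."""
    # 1. The whole text, kept intact for readings of main ideas.
    whole_text = sentences

    # 2. The early overall descriptor statement (here, naively: early
    #    sentences naming the initiative).
    descriptors = [s for s in sentences[:20] if "NFI" in s]

    # 3. Every sentence in which the evaluators discuss evaluation,
    #    set aside for the most intense analysis.
    evaluation_talk = [s for s in sentences if "evaluat" in s.lower()]

    return {"whole": whole_text,
            "descriptors": descriptors,
            "evaluation": evaluation_talk}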

Visual Diagramming of Text Units

I used visual diagrams in the analysis of text units. Very early in my treatment of the data, I recognized that my dissecting of data segments risked becoming haphazard and that I risked losing the linkages of ideas to one another, to their context within the reports, and to my research purpose. Because of my concern with remaining close to the text, and with not only identifying patterns or themes but also understanding ideas and change over time, I needed a process for identifying configurations of ideas as they centered around major concepts. Once I segmented the text into the descriptor statements and the evaluation statements, I performed a visual diagramming of each sentence. This diagramming became the primary data with which I continued to work in the analysis. Figure 4 includes a sample diagramming of an evaluator statement referring to neighborhood-focused comprehensive development.

Figure 4: Visual diagramming [diagram of a sentence: "neighborhood focused comprehensive development" linked to "involves formation of strategies," "implementation of strategies," and "harness interrelationships between spheres of action," with the spheres branching into "social," "physical," and "economic"]

Visual diagramming is different from graphic display and visual summary in that it is akin to the grammatical diagramming of sentences (without the grammatical labeling). Words are separated but kept linked to their main sentence structure, a form I utilized to help in analyzing the individual text units as a preparatory step for the analysis of change over time. Using visual diagramming, I could then "see" shifts in configurations of ideas as they related to central ideas.
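In computational terms, each diagrammed sentence can be treated as a small tree whose root is the central concept and whose branches are the linked ideas. The sketch below renders the Figure 4 example this way; the dictionary structure and relation labels are my hypothetical encoding, not the study's actual notation.

# Each key is a concept; values group the ideas linked to it.
diagram = {
    "neighborhood focused comprehensive development": {
        "involves": ["formation of strategies",
                     "implementation of strategies"],
        "harness": ["interrelationships between spheres of action"],
    },
    "interrelationships between spheres of action": {
        "spheres": ["social", "physical", "economic"],
    },
}

def linked_ideas(concept: str) -> list[str]:
    """Collect every idea linked to a concept, across relation labels."""
    return [idea
            for ideas in diagram.get(concept, {}).values()
            for idea in ideas]

print(linked_ideas("neighborhood focused comprehensive development"))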

Analytic Layering

Once I identified the segments of text to be analyzed and diagrammed that text into a usable visual form, I engaged in analytic layering to draw out the meaning of the text. Stake (1995) asserts that qualitative research utilizes "ordinary ways of making sense" (p. 72). For case study researchers, this is sometimes a process of "direct interpretation" and at other times an act of "aggregation of instances until something can be said about them as a class" (Stake, 1995, p. 74). Miles and Huberman (1994) distinguish descriptive codes, which involve little interpretation, from pattern codes, which are more inferential:

Pattern codes are explanatory or inferential codes, ones that identify an emergent theme, configuration, or explanation. They pull together a lot of material into more meaningful and parsimonious units of analysis. They are a sort of metacode. (p. 69)

Miles and Huberman add that patterns can also take the form of themes or emerging constructs. Gall, Gall, and Borg (2003) define construct development as "bring[ing] order to descriptive data" (p. 440), similarly to Miles and Huberman's (1994) descriptive coding. Patterns can be relational or causal, depending upon the nature of the relationship identified, and the term "themes" refers to the presence of recurring features and patterns that are explanatory in nature (Gall et al., 2003, p. 440). Merriam (2001), in contrast, focuses on categories and emphasizes the need to identify the varying analytic levels of categories and to ensure that categories related to the research are mutually exclusive, sensitizing, and conceptually congruent. Across the qualitative methodology literature, then, there is apparent variation in terminology with respect to the research process and the levels of inference involved in developing items called codes, categories, patterns, themes, and constructs.

The primary goal of my analytic layering was to move from a descriptive analysis, through a form of pattern analysis (using the idea of clusters), toward the identification of key constructs related to CCI evaluation. I utilized coding and categorizing to help in these transitions. I envisioned utilizing the constructs both for understanding in their own right and as anchors for further analysis in relation to the whole body of primary text. My analytic layering process did involve assigning labels to text: I utilized coding as a technique to capture my understandings of the text rather than as a stage or process, as Miles and Huberman suggest. I utilized category creation at multiple levels of inference, as suggested by Merriam, yet I focused more intently on opening up meaning and looking for linkages or partial overlaps than on ensuring mutual exclusivity. In striving for constructs, I understood them as occurring at a more advanced level of the analytic process rather than as descriptive and directly linked to observations, as Gall, Gall, and Borg suggest. Finally, during the analysis, I recognized that I needed to incorporate the notion of change directly into the analysis process, and doing so led me to an idea of paths. The analytic layering thus led me from descriptive layering, through clustering as a categorizing approach, to path layering as a process for identifying change constructs.

The first type of analytic layering was descriptive layering. This involved reviewing the diagrams of the meaning units and marking the major ideas as documented by the evaluators. It was thus a process of working with the data as represented in the reports and as I could see the data through the diagrams of meaning units. Even in its visual form, I consider this identification process descriptive because of the low level of inference involved (Miles & Huberman, 1994).

The second process of analytic layering -- cluster layering -- involved my review of the idea diagrams with the intent of identifying clusters of concepts. At this stage, I recognized that many ideas were involved in the discussion of the initiative and its evaluation but that some ideas were richer in text than others. As I tried to make sense of the evaluation reports, the need for rich data revealed itself: there were nearly as many ideas in the text as there were words, yet most were isolated concepts with little associated text, thin concepts that had associated text but provided little supporting information for understanding central concepts, or simply transitions between concepts. An example of an idea that was both isolated (not connected meaningfully to other text) and thin (without much clarifying description) was the following statement from a Michigan evaluation report, referring to the idea of "challenge": "During that time we have seen the collaborative grow and develop while facing the many challenges of new community based organizations" (Grant & Coppard, 1994, p. 54). A richer textual unit referring to the concept of "challenge" was:

Over the course of the Initiative's implementation, the two-tiered evaluation has faced several challenges. First, there has been a degree of confusion among participants about the division of labor, focus, and responsibility of each tier, as well as their relationship to each other. Second, the national evaluation has informed the broader field but has been less useful for sites. Third, local evaluations were slow to get started, have been uneven across sites, and have been plagued by problems of evaluator selection, turnover, and limited resources. (Chaskin, Chipenda-Danoshka, & Richards, 1999, p. 15)

In the latter text, associated sentences described the idea of challenge, and there began to be information to help in identifying components of the understanding of initiative evaluation. During this stage, I therefore sought to identify concepts that were related to rich ideas rather than isolated or thin in their usage. Rich data is a prerequisite for qualitative research, and for this study "rich" referred to more than informational detail about a concept: in their visual form, rich concepts presented themselves as the main concept within a clustering of linked ideas.

In thematic forms of qualitative case study, this step might have led directly to the identification of patterns or relationships. Yet for this analytic study, where change over time was the focus, I was particularly concerned with incorporating the notion of change into the construct identification process. Although elaborate texts discuss qualitative approaches to analysis for identifying general themes and meaning structures, less has been written about what the insertion of the concept of time, or paths, into a qualitative analysis does to that analysis. In this study, I explicitly introduced time as a dimension, both by including the concept in my original questioning and by treating the data analytically in relation to its temporal positioning during the initiative. As I proceeded with analytic layering, I looked at concept clusters across the initiative and documented the linkages between ideas. These linkages were another step in helping to identify which concepts were central and which were elaborations on a key idea. A path layering for documenting the linkages within clusters of ideas helped me in two ways. First, it helped me begin to identify concepts that may have seemed central but were not occurring over time; this elimination process was addressed more fully in construct definition. Second, it helped me clarify which concepts within the clusters were indeed the central concepts to be explored because of their emergence as evidence of CCI evaluation.
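A hedged sketch of the cluster and path layers, assuming each diagram has been reduced to a (year, central concept, linked ideas) record, follows; the records and the richness threshold are invented for illustration, since in the study richness was judged by reading rather than by a fixed cutoff.

from collections import defaultdict

# Hypothetical records: one per diagrammed text unit.
units = [
    (1992, "challenge", ["evaluator turnover"]),
    (1995, "challenge", ["tier confusion", "uneven local starts"]),
    (1999, "challenge", ["tier confusion", "limited resources",
                         "evaluator selection"]),
    (1995, "transition", []),  # an isolated, thin idea
]

# Cluster layering: a concept is "rich" when enough ideas link to it.
clusters = defaultdict(list)
for year, concept, ideas in units:
    clusters[concept].extend(ideas)
rich = {c for c, ideas in clusters.items() if len(ideas) >= 3}

# Path layering: trace how each rich concept's configuration of linked
# ideas changes across the reported years.
paths = {c: sorted((y, tuple(i)) for y, cc, i in units if cc == c)
         for c in rich}
print(paths["challenge"])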

Construct Definition

The layering processes resulted in a list of concepts that emerged as main concepts in each of the segments of my text -- the description statements and the evaluation statements. These main concepts appeared to have rich data associated with them in the form of linked ideas that could help in understanding the central concepts. Gall, Gall, and Borg (2003) might classify this analytic concept as a theme, or the "salient, characteristic features of a case" (p. 439). However, with respect to the idea of change, I was not looking for a consistent idea that recurred over time, as a recurring behavior might be described as characteristic by Gall, Gall, and Borg. Rather, I needed to distinguish among ideas of differing conceptual levels, concepts that recurred in the same way over time, and constructs that emerged from configurations of change.

I was in search of the constructs that would help me in responding to my questions about the evaluation reporting in relation to knowledge development. As I entered the data, my definition of a construct was broad -- "the issue areas within which debates occur about the initiative." As I explored the data, I recognized that there were many ideas that might have been labeled a construct. The analytic layering helped me to work with the data at the same time that I was clarifying what I meant by the term construct. My initial definition helped to narrow the number of possible constructs, but it was not adequate: it would have led to constructs that were perhaps not rich enough to study or that did not explicitly encompass the notion of change integral to my study. I struggled with the idea of a construct and recognized that any definition of construct, in order to be analytically useful, must encompass the research intent. For purposes of this study, this meant that the definition of construct needed to include attention to the needs of inquiry and to the intent of the research -- in this case, the notion of change. A definition of "change construct" emerged as I came to understand the idea of a construct in my study not as an idea that emerged naturally from the text, but rather as an idea that emerges within the context of analytic concerns. The idea of construct became about clusters of ideas -- not dislocated concepts. It became about ideas that were rich enough in data to be studied and that occurred over the course of the initiative, so as to lend themselves to an understanding of change in ideas as reported over time. A construct in this study, then, can be more accurately defined as a cluster of ideas that coalesce around a single concept, are rich in data, and occur in various configurations over the reported time-span of the initiative. With this definition, I identified a number of constructs in relation to the two datasets of text that I had analyzed.
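The working definition can be restated as a filter over the paths produced by the layering. The sketch below is a schematic rendering of the three criteria (a single coalescing concept, richness, and varying configurations over time); the thresholds are hypothetical, since the study applied the criteria through reading rather than fixed cutoffs.

def is_change_construct(concept_paths: list[tuple[int, tuple[str, ...]]],
                        min_years: int = 3) -> bool:
    """Apply the definition of a change construct to one concept's path.

    concept_paths holds (year, configuration) pairs, where a
    configuration is the tuple of ideas linked to the concept that year.
    """
    years = {year for year, _ in concept_paths}
    configurations = {config for _, config in concept_paths}
    rich = any(len(config) >= 2 for config in configurations)
    recurs = len(years) >= min_years       # occurs over the time-span
    changes = len(configurations) > 1      # in varying configurations
    return rich and recurs and changes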

Challenges to Credibility of Change Analysis Using Documents

The credibility of a study concerns the quality of its conclusions with respect to their fit with the experience they depict. Researchers provide a variety of understandings and approaches for establishing the strength of the interpretations arrived at through qualitative inquiry. Careful to differentiate from positivist notions of validity, Creswell (1998) refers to "verification" rather than validity and calls for trustworthiness relative to particular traditions or research perspectives. Toward the goal of trustworthiness, the use of methods does not guarantee credibility; rather, methods are the processes for reaching that goal. Several challenges must be addressed in an analytic case study in general, and specifically in an approach that relies solely on written public documents. In relation to the former, Stake (1995) notes that the "logical path" to the assertions researchers make is often not apparent:

What we describe happening in the classroom and what we assert do not have to be closely tied together. For assertions, we draw from understandings deep within us, understandings whose derivation may be some hidden mix of personal experience, scholarship, assertions of other researchers…Ultimately the interpretations of the researcher are likely to be emphasized more than the interpretations of those people studied, but the qualitative case study researcher tries to preserve the multiple realities, the different and even contradictory views of what is happening. (p. 12)

The latter relates primarily to the use of data drawn from experiences that have already been not only interpreted but also formally represented. The use of written texts raised questions about the documents relied upon for the study, the analytic processes, and the researcher's ability to make credible inferences from the data and represent these intelligibly. The challenges of content analysis fall under the overall concern of the "span of inferential reasoning," and as Marshall and Rossman (1999) note, document review procedures can lead the researcher to "miss the forest while observing the trees" (p. 117). The researcher is thus dependent on the "goodness" of the research question, and the study's quality is dependent on the researcher's ability to be "resourceful, systematic, and honest" (p. 135). In an analytic study, these are unavoidable cautions, responded to with systematic and careful analysis and attention to the underlying principles of specific challenges.

As in any research, the researcher must pay attention to the quality of the interpretations and the paths through which findings are derived. She must also be attentive to the ways in which the factors surrounding the study, including her own experiences, influence her understandings. I identified this as an issue of reflexivity. This concern relates not only to thought processes but to the ways in which thought processes occur. For an analytic study, demonstrating descriptive and interpretive coherence in relation to the findings is challenging because the researcher does not have the benefit of continual feedback and questioning other than with the texts themselves. The researcher must also describe findings to readers solely through text, without the benefit of the multifaceted observation that might occur during interviews or participant observation. The concern of descriptive and interpretive coherence then relates to the internal processing of information as well as the researcher's ability to take readers along with her in that process.

The challenges of analysis also included the possibility of multiple interpretations of a similar event. To the extent that a document analysis utilizes a variety of sources as primary, secondary, and contextual data, interpretive balance became a concern. In addition, keeping track of a variety of materials as they go through multiple iterations of analysis calls into question the researcher's adherence to a systematic and intentional process and the documentation and description of that process for readers. A final issue involves the challenge of transferability and the potential utilization of the research in other settings. Together, these challenges and how they are addressed comprise the trustworthiness of the study, or the research strength in relation to establishing the credibility of the findings. I addressed these challenges and notions of trustworthiness through a number of standards toward which I aimed and four specific approaches that I used to achieve those standards. The standards include: the standard of reflexivity; the standard of descriptive and interpretive coherence; the standard of interpretive balance; the standard of process adherence; and the standard of transferability. The approaches included description, process adherence, transparency in interpretation and ethical stance, and a derivation of triangulation through which I paid attention to potential multiple understandings.

Trustworthiness Standards

Researchers approach the issue of research credibility with various concerns. Yin (1994) refers to case study validity in a manner consistent with positivist framings of research. Issues such as construct validity (the goodness of a measure), internal validity (the demonstration of relationships), external validity (generalizability), and reliability (replicability of operations and results) are of importance (Gall et al., 2003; Yin, 1994). As Gall, Gall, and Borg point out, interpretive studies, in their rejection of positivist notions, require their own criteria (p. 461). There are many differing views of criteria in interpretive studies, and much depends on the particular study and the aims of the researcher. Some researchers have tried to reframe positivist criteria, and others have developed new criteria for credibility appropriate to analytic research (Creswell, 1998; Lincoln & Guba, 2000; Maxwell, 1992). As Stake (1995) points out:

Every informant's personal reality is not equally important, either epistemologically or socially. Some interpretations are better than others. People have ways, not infallible but practical ways, of agreeing on which are the best explanations. So do philosophers. There is no reason to think that among people committed to a constructed reality, all constructions are seen to be of equal value. One can believe in relativity, contextuality, and constructivism without believing that all views are of equal merit. Personal civility or political ideology may call for respecting every view, but the rules of case study research do not. (pp. 102-103)

For analytic studies, the analysis process is ongoing from start to finish, and therefore the researcher must be cognizant of the analytical decisions made throughout the process (Potter, 1996). Qualitative researchers thus address threats to validity as part of the entire process (Maxwell, 1992). The key to approaching validity in a qualitative case study is thus repeatedly asking what decisions are being made by the researcher and how one's interpretations might be wrong (Maxwell, 1992). In addition to this general questioning, which occurred throughout the study, I sought trustworthiness in a number of ways. As Potter (1996) noted, authors such as Lincoln and Guba and Marshall and Rossman adhere to a notion of trustworthiness, although the components of trustworthiness differ among researchers. I sought to establish trustworthiness in my study through attention to standards and the use of approaches for achieving those standards.

Standard of Reflexivity

Reflecting on the what and why of inquiry decisions, and on the possible impact of decisions on the research product, is key to achieving an overall standard of reflexivity, through which the researcher continually questions her own choices as contextualized in her experiences and frameworks. In this way, the credibility of a study involves awareness of how the researcher's purposes are infused throughout a study and how the researcher deals with questions of potential researcher bias (Maxwell, 1992). Traditionally, researchers have been asked to avoid bias by distancing themselves from past experience in order to make rational judgments about their research approach and strategies. We have also been asked to "bracket" experience to render it an aside to participant experience. Yet today there are calls for researchers to engage more directly with their own experience as they perform inquiry and produce research texts. Being aware of researcher influence upon interpretations is necessary in a study, and such awareness strengthens the study's confirmability (Miles & Huberman, 1994). The process of addressing researcher bias is not one of trying to eliminate the reasons a researcher conducts a study and understands data in certain ways, but rather one of understanding the influences that these reasons and perspectives have upon the study. Addressing researcher bias is therefore inherently a self-reflexive act of coming to understand the multiple "selves" involved in the research endeavor. Citing Reinharz (1997), Lincoln and Guba (2000) refer to "research based selves, brought selves (the selves that historically, socially, and personally create our standpoints), and situationally created selves" (p. 183). They frame the self-reflexive act as a "conscious experiencing of the self as both inquirer and respondent, as teacher and learner, as the one coming to know self within the processes of research itself" (Lincoln & Guba, 2000, p. 183). Addressing researcher bias is thus a process of engagement with one's experience. When reflected upon, experiential knowledge can be extremely valuable in providing important insights to a study. Maxwell (1996, 1998) calls for researchers to write an experience memo, while Glesne and Peshkin (1992) ask researchers to work through the various "I's" of researcher subjectivity:

In short, the subjectivity that originally I had taken as an affliction, something to bear because it could not be foregone, could, to the contrary, be taken as "virtuous." My subjectivity is the basis for the story I am able to tell. It is a strength on which I build. It makes me who I am as a person and as a researcher, equipping me with the perspectives and insights that shape all that I do as a researcher, from the selection of a topic clear through to the emphases I make in my writing. Seen as virtuous, subjectivity is something to capitalize on rather than to exorcise. (Glesne & Peshkin, 1992, p. 104)

Marshall and Rossman (1999) simply refer to writing a researcher biography to orient one to that which she brings into a study.

For this study, I relied upon a reflective process that was integrated within the questioning essential to the research design. Through this self-reflective process, I became more aware of the generation of my interpretations and better able to present substantiated interpretations or, where useful, multiple interpretations of similar issues. Reflexivity therefore became uniquely integral to the holistic nature of this study. Because of my former experience in multiple roles in relation to the topic of evaluation, my reflection supported the study in providing an internal form of multiplicity. The self-reflective process enabled me to see from multiple positions and, as Gall, Gall, and Borg (2003) suggest, drew my attention to my own positioning as a means of strengthening the research (p. 461).

Standard of Descriptive and Interpretive Coherence

As Maxwell (1992) describes, reactivity refers to the influence that the researcher might have on the setting as she conducts the study. Reactivity is of little concern in document analyses because the documents were produced without the researcher's involvement (Marshall & Rossman, 1999). Since it can be conducted without the researcher's presence in the event, content analysis is considered "unobtrusive and non-reactive" (p. 117) and thus does not fall prey to validity concerns about researcher presence during the study. For this study, however, it is precisely the lack of researcher presence that opens content analysis to credibility threats. These threats pertain to the accuracy of description and the insightfulness of interpretations, requiring that the researcher be attentive to the descriptive and interpretive coherence of the study.

Description, as referred to in this study, involves low-level inferences made about reported accounts of an occurrence (Maxwell, 1992). Validity for Krippendorf (1980) is related to the researcher's careful attention to the symbolic nature of text as it relates to the meanings of its producers as they report occurrences. For analytic studies, engaging in analysis requires the researcher to conceptualize relationships in the data. Strauss and Corbin (1990) refer to this as "theoretical sensitivity," a personal researcher quality of "having insight, the ability to give meaning to data, the capacity to understand, and capability to separate the pertinent from that which isn't important" (Strauss & Corbin, 1990, pp. 41-47). Attention to the levels of inference occurred throughout the analysis of my study and was embodied in the process of analytic layering, during which I continually referred back to the text as I refined and built upon layers of categorizing. To ensure qualitative engagement with the texts and theoretical sensitivity, I conducted systematic and documented collection, management, and ongoing identification and analysis of the primary data for the study. This supported my confidence in making descriptive or interpretive statements and allowed me to revisit the analysis in order to check or refine those statements. I also revisited my analysis by referring back to the original units of text from which I drew meaning. To demonstrate descriptive and interpretive coherence, I included excerpts of the actual documents and the contextualizing data that I used to form descriptions. Including appropriate data for the reader in this way is supported as an approach to strengthening credibility (Marshall & Rossman, 1999).

Standard of Process Adherence

Methodologists often offer stages or processes for qualitative research. For example, Marshall and Rossman (1999) state that analytic procedures can be categorized into six phases: a) organizing the data; b) generating categories, themes, and patterns; c) coding the data; d) testing the emergent understandings; e) searching for alternative explanations; and f) writing the report (p. 152). Although qualitative studies may encompass such categories as defined by methodologists, their potentially iterative nature shifts the credibility focus from adherence to predetermined stages to ensuring an audit trail for the process as it was engaged. Therefore, although quality concerns of research studies can be addressed through systematic processes, process adherence is itself a concern in analytic studies. Because even the most basic levels of this study (e.g., the questions and methods) were open to development at any time, it was crucial to ensure that once a process was begun, all data were treated in the same and complete way. I explicitly designated exploratory stages of analysis as the times when I was trying out various ways of segmenting and classifying text, or when I was working with pieces of text to develop the process I would use for systematic analysis. The times I labeled as coding, or analytic layering, were the structured aspects of the study, during which I ensured that any change in my process was applied to all of the text with which I was working. It was from these systematic encounters with the text that I drew my findings.

For example, it was during the exploratory aspects of the study that I began labeling data and realized that I needed a visual representation of data in order to see it in a way that would assist my analysis. I experimented with a couple of ways of doing this. Once I decided upon a form of visual diagramming, the diagrams I had experimented with and the associated text were set aside, and I then systematically applied the visual diagramming to all of the segments of text with which I was working. A similar incident occurred within the systematic analytic layering. After going through two series of layering, I recognized that a variation on an analytic layer would better help my understanding. I went back to the segments of text I had already analyzed and systematically applied that layer of analysis. This attention to process adherence concerns the treatment of data rather than the specific coherence or quality of the interpretations themselves, although it contributes to that quality as well. In this study, I maintained my attention to process adherence by documenting changes that occurred in the analysis, utilizing a system of coding that could be revisited and viewed at each stage of the analysis, and reflecting upon my reasons and thinking for changes in process.

Standard of Transferability

Concepts such as external validity or generalizability refer to the extent to which the interpretations developed in the research process have importance beyond the specific study (Miles & Huberman, 1994). As an analytic case study, the concern of my research was not that specific findings be set forth as if they would be the same in other cases, but rather that the theoretical understandings and framework developed could be useful in other settings. Similarly, Miles and Huberman pose "utilization, application, [and] action" as criteria of quality (p. 278). Debates have occurred about how involved a researcher should be in the ways in which the research calls for changes. The potential to contribute to change can also be understood as an issue of transferability, or the extent to which the researcher enables utilization. Use, although not amenable to documentation within a study, was also an explicit intention of mine. I intend, of course, to promote the use of the learning beyond this particular study. Within the study, I supported the concept of use by reflecting upon, and being explicit about, possible avenues of use for those in various positions in relation to evaluation reporting.

Trustworthiness Approaches to Standards

I utilized multiple approaches to meet the demands of the above standards. The approaches worked together as a whole to address the challenges and specific standards, yet each approach was also tied more explicitly to certain standards than to others. Table 4 provides an overview of the relationship of approaches to standards.

Table 4: Trustworthiness Approaches

                                          Standards
Approaches                  Reflexivity   Descriptive and          Process      Transferability
                                          interpretive coherence   adherence
Identifying data                X                                      X
Using description                               X                                      X
Providing an audit trail        X               X                      X
Being transparent                               X                                      X

Identifying Data

The analytic processes of coding, memoing, and analytic layering required that data be manipulated in various ways, analyzed from different perspectives, and linked in multiple ways. Managing this complexity, and being able to refer back to various points in the process, was essential to ensuring the standards of reflexivity and process adherence. I utilized NVIVO, a qualitative software package, to assist with basic labeling and management of materials. This involved identifying each of the materials utilized and keeping track of the organizational and authorial ownership of these materials, along with their dates of publication and stated purposes.
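The record keeping itself was done in NVIVO, but its substance can be sketched as a simple registry of identification records; the field names and the sample entry below are hypothetical illustrations rather than the study's actual records.

from dataclasses import dataclass

@dataclass(frozen=True)
class SourceDocument:
    """Identification record kept for each material used in the study."""
    doc_id: str
    title: str
    organization: str          # organizational ownership
    authors: tuple[str, ...]   # authorial ownership
    year: int                  # date of publication
    stated_purpose: str

registry = [
    SourceDocument(
        doc_id="nfi-natl-1999",                  # hypothetical ID
        title="NFI national evaluation report",  # placeholder title
        organization="Chapin Hall Center for Children",
        authors=("Chaskin", "Chipenda-Danoshka", "Richards"),
        year=1999,
        stated_purpose="Document initiative implementation and lessons.",
    ),
]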

Using Description

In studies that draw upon the interpretations of others, it is important to document and acknowledge the actual data that the researcher utilized. Throughout my analysis, I connected the readers to the evaluation report data through my description of the reports and my sharing of report excerpts. This description enabled the readers of the study to interact with the reports and to follow a chain of reasoning in relation to the data I was using. The public availability of the reports also made it possible for readers to review the entire evaluation reports, offering another avenue for readers to draw their own conclusions about the trustworthiness of the study.

Providing an Audit Trail

Documenting the processes and decision-making that happened throughout the study (Miles & Huberman, 1994) was the way that I addressed the possibility of auditing. I consistently documented and dated both reflective and interpretive transitions in the form of analytic memos. I sought to document key decisions in the research development in order to provide an audit trail and evidence of the systematic nature of the study. To support the systematic nature of the study, I also periodically referred back to the multiple forms of questioning integral to the design of the study. Reflexivity, descriptive and interpretive coherence, and process adherence were supported through this documentation. The data management integral to this documentation included paper files, a word processor, and NVIVO. I used NVIVO primarily to track data and its relationship to codes and to perform basic searches of word usage in the text. The word processor and paper files were used in conjunction with the program so that I could keep track of additional materials and memos. The data management approach allowed me to document and track multiple types of coding, insights and thoughts, and levels of analysis.

To support an audit trail, the management structure of the study also had to allow for segments of data to be coded, brought together, rearranged, and multiply coded. The data had to be arranged and separated without losing the connections to the whole from which it came (e.g., whole report, whole organization, whole initiative), and the text had to be continually connected to memos related to ongoing insights. In addition, ongoing interpretations of the text had to be connected to the text to support researcher reflexivity and theoretical sensitivity. This ability to link data explicitly to each other and to memos and interpretive tasks was critical to the development of a trustworthy analytic study. The consistent linking allowed me to engage in a complex analysis without losing connection to the systematic process. It also helped me to reflect upon, revisit, and make transparent my decisions and insights as they occurred throughout the process. The creation of an audit trail, as a process supporting the systematic and intentional nature of the research, ensured a flexible yet consistent adherence to the design of the study. This systematic management ensured an audit trail and process adherence, and it also supported the approach of transparency, allowing me to show my own thinking as it evolved.

Being Transparent

Making transparent the processes and decisions of the research is another approach to trustworthiness. According to Potter (1996), among the challenges to the standard of quality in qualitative research is the explicit revealing, by the researcher, of methods, methodology, researcher assumptions, types of data, decisions about evidence, and possible counterarguments. It is through transparency, or the description of analytic processes and interpretive decisions, that it is possible for readers to actively engage the deeper meanings of the study in order to determine the level of correspondence with their own real-world situations. For analytic studies, in order for a reader to be able to understand the possible learning for his or her own settings, the researcher must have achieved a standard of transparency in revealing key aspects of the total research design and enactment. I embraced transparency in this study by being explicit about the research approach, the data used, and the paths to interpretation of the data. Where possible, I also utilized graphic displays to include, within the text or appendices, as much relevant data as possible so that readers could assess the interpretations and draw their own uses from seeing the interpretive substance. This inclusion was also an encouragement to readers to closely examine and utilize the research within their own contexts.

Lincoln and Guba (2000) add that postmodern treatments of validity and ethics are often intertwined: "The way in which we know is most assuredly tied up with both what we know and our relationships with our research participants" (Lincoln & Guba, 2000, p. 182). In this way, attention to transparency encompasses issues of ethics in relation to the researcher's stance. This congruence does not mean that specific ethical issues should not be highlighted. Particularly with the use of content analysis, the author's stance may seem distant from the phenomenon, making attention to transparency in ethical stance even more important. In this study, the most important things for the reader to know are the following:

• To my knowledge, there are no immediate financial links between myself and the members of the specific initiative being studied. There may be university-related linkages to the foundation and research group being investigated, but I was not aware of alignments that would have prohibited my engaging in the study as designed.

• As this study is one of document analysis, the issue of ethical relationships to member texts is a crucial one. The case nature of the study and the public nature of the data being used made anonymity unfeasible. An ethical relationship to member texts then became one of being explicit about the substance and paths to interpretation.

• I considered an ethical relationship to the CCI members. Where information was not obtainable through intermediary channels, such as libraries, websites, or publication ordering processes, I requested information from the relevant organizations. In these instances, I was clear as to my desire to use the information for research purposes. As this is a specific case study, no assurances of confidentiality or anonymity were guaranteed in exchange for written documents. The nature of this as a qualitative study, one where membership and continuation of the discussion of the topic is ongoing, made these ethical issues unavoidable.

• I considered my ethical stance in relation to a professional community. There are, of course, multiple communities to which this study speaks, yet the initial ethical relationship of prominent concern was my commitment to qualitative research. It became important for me to address the study with attention to general issues of quality related to language, social science, and case study, as well as the more specific standards of the qualitative research tradition with which I was aligned. Appendix B includes information about the resources used to support the overall quality of the study. This checklist served to remind me of qualitative concerns and served as an additional quality review.

• An ethical relationship to the reader and society was not a separate issue but a culmination of the issues of trustworthiness. To the extent possible in the reasonable space of the study write-up, I made clear the information utilized and the interpretive processes engaged to come to my representations. It was my expectation that my study would contribute to deeper understandings of sociopolitical life and to CCIs and their evaluation. Through these understandings I intended to contribute to theory, policy, practice, and social action (Marshall & Rossman, 2000) in a way that is respectful of human dignity and rights, and conducive to the expanding of socially creative capacity.

CHAPTER FOUR: CASE STUDY FINDINGS

According to national evaluation reports, the Ford Foundation's Neighborhood and Family Initiative (NFI) was a ten-year community development initiative that came to be called a comprehensive community initiative (CCI). Through NFI, foundation managers invested funds in neighborhood development in targeted areas of four cities: Detroit, Hartford, Milwaukee, and Memphis. The NFI managers sustained financial investment in planning and implementation over a ten-year period, with some extensions in the timing and distribution of funds. Managers funded an evaluation of the initiative, and evaluators explored processes as well as indicators in order to document and support the CCI mission and goals. The NFI evaluators also suggested that they were interested in developing theory and that they had a participatory intent in the evaluation. As part of the evaluation, managers directed funds into the production of publicly disseminated evaluation documents, and, as part of the evaluation approach, evaluators documented their reflections on the evaluation process.

In this chapter, I first provide a general background of the NFI reports as a case situated within a knowledge community. The knowledge community is distinguished by the CCI and CCI evaluation literature of the national organizations of NFI and the Aspen Roundtable. Consistent with Maxwell's (1996) research concerns, I then present my analysis of the data in relation to descriptive, interpretive, and theoretical concerns. In order to report on the primary data, I have organized the description of each report with attention first to the major concepts addressed, then to the evaluation ideas as presented by the evaluators, concluding with ideas from the primary overview descriptions of each report. I then present dimensions, the areas covered in the collection of NFI evaluations. Next, I address the evaluators' interpretations of CCI evaluation as it occurred in NFI, by analyzing the evaluators' descriptions of the initiative over time and their descriptions of evaluation over time. I present the challenges and lessons shared by the NFI evaluators, organized in relation to categories representing Aspen Roundtable writings. Lastly, I present change constructs, which I utilized to address theoretical concerns by questioning change as evidenced in NFI reports. In my presentation of dimensions, lessons, and change constructs, I bring in the surrounding literature to provide contextual information about how the NFI reporting is situated within a broader knowledge community.

NFI Evaluation as a Case of Learning about Evaluation Reporting

According to NFI evaluation reports, fund managers and evaluators came to classify NFI as a contemporary form of initiative called a comprehensive community initiative (CCI). The Ford Foundation's funding of NFI began in 1990 and continued into the year 2000, with some extension in the distribution of funds in later years. NFI involved central organizations that evaluators categorized as either national or local. Grounded in the history of the Ford Foundation's community development work, NFI was launched in 1990, originally housed under the Urban Poverty Program. Approximately $3 million in operating and program support was granted to each of four local sites. The Ford Foundation provided dedicated support for technical assistance and evaluation and set aside an additional $3 million total to be awarded, via an investment fund, for use in specific development projects (Chapin Hall Center for Children, 2002). According to the Ford Foundation, the NFI design was intended to foster a "local base" of resident involvement, "inclusive partnerships" for development, a "comprehensive approach" to neighborhood issues, and "empowerment" for sustained benefits for individuals, families, and neighborhoods (Ford Foundation, n.d.). According to Chapin Hall evaluators, the design of the initiative explicitly provided for decisions about outcomes and strategies to rest with the local initiatives. Local community foundations served as fiduciary agents and local institutional support for the collaboratives addressing neighborhood needs.

Through the initiative, the foundation managers directed funds into neighborhoods in four cities. Each neighborhood had a median household income lower than that of its corresponding city, a higher percentage of households classified as below the poverty level, and lower educational attainment among persons 25 and over (Chaskin, 2000). Each neighborhood also had a higher unemployment rate than its corresponding city, and each city had a higher unemployment rate than its associated Metropolitan Statistical Area (COSMOS Corporation, 2000). Managers therefore directed NFI funds into neighborhoods with indicators of high poverty, low educational attainment, and high unemployment relative to their corresponding cities and metropolitan regions. The sites each allocated some of their funding for local evaluation, but they had varying degrees of success in incorporating a local evaluator and producing evaluation reports. Of the four community foundations that served as fiscal managers for the collaboratives, two released their evaluation reports publicly. The other community foundations and evaluators indicated that evaluation reports were either not available or not for public distribution.

NFI Central Organizations as Members in an Initiative

NFI reports referred to central organizations that comprised the national structure of the initiative, including the "national" evaluation. NFI also included local organizations and evaluators involved in the evaluation. The central national organizations were the Ford Foundation, the Center for Community Change (CCC), and the Chapin Hall Center for Children. Henry Ford and his son Edsel founded the Ford Foundation in 1936. Operating locally in Michigan until 1950, the Foundation then expanded to national and international programming. Over the years, the Foundation diversified assets and discontinued its holding of Ford Motor Company stock; by the end of 2001, the Foundation's portfolio was estimated at $10.7 billion. At the time of the initiative, the Ford Foundation's headquarters were located in New York City (Ford Foundation, 2002). The Center for Community Change was founded in the 1960s to provide assistance to community based organizations. According to the NFI reports, CCC worked with the local sites in interpreting the Ford Foundation charter and engaging in strategic planning. CCC also provided technical assistance on operational issues and contributed, at times, to evaluation technical assistance and documentation. The Chapin Hall Center for Children is a policy research and development center located at the University of Chicago, with roots dating back to 1860. The establishment of Chapin Hall as a policy center took place in 1986 under director Harold Richman, who also served as co-director of the Aspen Institute's Roundtable on Comprehensive Community Initiatives and was involved in the Roundtable's steering committee on evaluation (Chapin Hall Center for Children, 2001). During Richman's directorship, members of Chapin Hall published on various issues, including CCIs and CCI evaluation, and Chapin Hall researchers conducted the NFI "national" evaluation.

The NFI local evaluations were each funded with Ford Foundation grants through the community foundations working with NFI collaboratives in each of the four local sites. The local evaluators did not remain consistent at the sites, nor were reports released throughout the entire initiative. Public reports were available for two of the sites, with reports released in Michigan in 1993 and 1994 and in Milwaukee in 1998. The community foundations and local evaluators in Memphis and Hartford did not release reports to me. COSMOS Corporation, a Maryland-based organization, provides "applied research and evaluation, technical support, and management assistance aimed at improving public policy, private enterprise, and collaborative ventures" (Chaskin et al., 2000, p. 156). Directed by Robert Yin, an expert in positivist approaches to case study, COSMOS Corporation contributed to the last years of the NFI evaluation by producing a report of local indicators common to the four sites and by providing technical assistance and direction in evaluation to the local collaboratives.

NFI Structural Change as Initiative Decentralization

Throughout the evaluation reports, evaluators described the organizational structure of NFI. According to evaluators, NFI as a whole began with ten organizations involved: the Ford Foundation, the Center for Community Change, the four local community foundations, and the four local collaboratives (Chaskin, 1992). In the 1993 Chapin Hall report, the evaluators described the national structure in terms of three central organizations (the Ford Foundation, CCC, and Chapin Hall) and four "issues at play": the NFI charge provided by the Ford Foundation, technical assistance, cross-site communication, and evaluation (Chaskin, 1993, pp. 49-52). By the 1997 report, the Chapin Hall evaluators described a structure that included split foundation oversight of NFI; this split occurred because of changes in program management responsibilities at the Ford Foundation. Chapin Hall evaluators also documented the provision of intermediary services to the NFI collaboratives (Chaskin, Chipenda Danoshka, & Joseph, 1997). By the 2000 Chapin Hall report, the evaluators wrote about the initiative as the local collaboratives decided whether to continue working through the funding structure of the local community foundations (Chaskin et al., 2000). By this time, Chapin Hall had given up its technical assistance role to handle only the national evaluation, and the Ford Foundation had hired a separate consultant to handle communication with the collaboratives. By the end of the initiative, evaluators described an initiative that had changed from a centralized, three-organization structure -- with specific intermediaries selected and funded directly by the Ford Foundation and guiding the local process of interpretation, action, and documentation -- to a decentralized structure. Within the decentralized structure, the local collaboratives accessed resources such as technical assistance and local evaluation from various providers and communicated with the Ford Foundation and with each other through any of a few avenues, one being a Ford Foundation-funded communication consultant and another a cross-site learning team.

NFI Context as the Knowledge Community Boundaries

NFI funding included support for both local and national organizations that conducted evaluation. In the case of national organizations, evaluators sometimes also released writings about CCIs and evaluation that may have included data from NFI. Descriptions of Chapin Hall writings show an interest in issues of CCIs, data links to the Ford Foundation's NFI evaluation, and also publication links to the Aspen Roundtable, with Chapin Hall writers participating in Aspen Roundtable writings such as Voices from the Field (Chapin Hall Center for Children, 2001). Overlapping with NFI funding was the Ford Foundation's support, through funds and membership, of the development of the Aspen Roundtable -- a research group dedicated to supporting the work of CCIs and CCI evaluation. Activities of this research group are evidenced in the convening of the Aspen Institute's Roundtable on Comprehensive Community Initiatives (Roundtable) and the formation of the Roundtable's Steering Committee on Evaluation.

The Aspen Institute itself, within which the Roundtable exists, was created in 1950 by Walter Paepcke, chairperson of the Container Corporation of America. His vision centered on supporting reflection and dialogue about society and culture. Today, the Aspen Institute is housed in twelve offices in six United States locations and four additional countries. These locations include Washington, DC, Aspen, Chicago, Santa Barbara, New York (three offices), Berlin, Italy, France, and Japan. The Aspen Institute's work was enacted through a variety of policy programs, one of which was the Roundtable on Comprehensive Community Initiatives. The Roundtable began in 1992 within the National Academy of Sciences and transitioned to the Aspen Institute in 1994 (Connell et al., 1995). The Roundtable also included the Steering Committee on Evaluation, which was begun in 1994 to "resolve the lack of fit that exists between current evaluation methods and the need to learn from and judge the effectiveness of comprehensive community initiatives" (Connell et al., 1995, p. viii). The Aspen Roundtable's membership and funding have involved participation by a number of foundations and public agencies that have also supported evaluation of community initiatives (e.g., the Ford Foundation, the Pew Charitable Trusts, the Annie E. Casey Foundation, HUD, and the Department of Education). With this support, the Roundtable has produced publications, has maintained an electronic site for information about CCIs and CCI evaluation, and has offered funding for the testing of new evaluation strategies. For example, in 1995 and 1998, the Aspen Roundtable published Volumes I and II of New Approaches to Evaluating Community Initiatives. The Aspen Roundtable's website served as an example of an online dissemination venue, from the 1990s through 2003, for literature about CCI evaluation (Roundtable on Comprehensive Community Initiatives, 2002). As described earlier, on the website, the Roundtable's explicit work through 1999 focused on describing perspectives from participants working in CCIs, exploring key issues of evaluating CCIs, developing and sustaining informational internet-based resources for CCIs, and examining evaluation approaches to community development.

The Aspen Institute's Roundtable was therefore a public manifestation of a group of individuals engaged in research and with explicit commitments to CCIs and CCI evaluation. These commitments were evidenced in the Roundtable's name, its public focus on comprehensive initiatives, its expressed concern with approaches to evaluating CCIs, its publications on CCIs and their evaluation, its electronic website, and its members' public work in both community development and evaluation.

A review of Roundtable publications in 1995, 1997, and 1998 provided data to trace Roundtable membership throughout the 1990s. Analysis indicated that representation came from four types of entities: universities, foundations, government, and other organizations. Universities included both private and public institutions, with deans, directors of centers, and department faculty serving on the Roundtable. Representatives from private foundations included presidents, executive directors, program directors, and program officers, with the Ford Foundation listed as a member through the 1997 publication. Local, state, and national governments were also represented. Examples of participating government offices included the White House, the Department of Housing and Urban Development, the City of Minneapolis, and the Maryland State Department of Education. Senior and middle management officials of government offices served on the Roundtable. "Other" organizations were primarily nonprofit research, evaluation, service, and consulting firms, with both directors and staff of these serving on the Roundtable. Members sometimes provided funding, sometimes representation, and sometimes both. Some members of the Roundtable had previously worked for the Ford Foundation.


For example, Robert Curvin, former director of the Urban Poverty Program, which originally housed NFI at the Ford Foundation, and a former member of the Aspen Roundtable on Comprehensive Community Initiatives, commented on Chapin Hall's NFI evaluation approach, stating:

Chapin Hall doesn't come at a problem from just one angle or a single disciplinary point of view….Perhaps even more important, they have a willingness to unpack complex phenomena -- and the occasional mushy idea -- and make them clearer. (Chapin Hall Center for Children, 2002)

The overlap in time, membership, and content focus among Chapin Hall, the Aspen Roundtable, and NFI indicates a possible knowledge community within which it can be expected that ideas and practices of evaluation might be shared. Although this study is focused on the NFI evaluation from 1990 through 2000, a review of a 2002 Aspen Roundtable publication showed significant changes in the Roundtable membership. Throughout the 1990s, there was general movement of individuals and organizations in and out of the Roundtable. However, by 2002, only one publicly funded university retained membership, and all but one government department had withdrawn from membership, with the only remaining government representative coming from the level of city council. By 2002, what had been through the 1990s a mixed membership of private and public entities had become more solidly composed of privately funded entities.

NFI Evaluation Purpose and Structure for Learning

According to Chapin Hall evaluators, the fund managers of NFI invested in evaluation to support theory development and participation (Chaskin, 1992). The Chapin Hall Center for Children produced the majority of publicly released NFI evaluation reports. According to Chapin Hall evaluation and promotional materials, the NFI approach to evaluation was unique in its two-tiered (national and local) structure, in its addressing of complexity, in its interest in theory, and in its participatory intent. The NFI funding of CCI evaluation also included funding of local evaluators, and the Chapin Hall evaluation reports included information about the activities in each of the local sites. Chapin Hall released seven evaluation reports over the ten-year funding of NFI. The COSMOS Corporation provided an additional local indicators report and, in coordination with COSMOS and Chapin Hall, local evaluators released three reports about collaborative activities in two of the neighborhoods. Eleven NFI evaluation reports were publicly available; together they formed the body of text for this study. The NFI evaluation is an example of an actual CCI evaluation, and the reports include information about both NFI and its evaluation. As part of the research design, Chapin Hall evaluators documented their initial assumptions about issues they believed to be crucial to the learning of the initiative:

There is a set of assumptions imbedded in the preceding brief description that needs to be examined. The description includes assumptions about the nature of "community" and its relationship to geographically defined areas referred to as "neighborhoods." It also includes beliefs about planned development, and the need to address the wholeness of individuals' and families' lives through integrated, comprehensive strategies. Finally, it includes convictions regarding governance, empowerment, and the role of participation in formulating and implementing policy. (Chaskin, 1992, p. 3)

The Chapin Hall evaluation was to help in understanding governance structure just as NFI, as an initiative, was itself an "attempt to design a process through which to structure action" (Chaskin, 1992, p. 3). According to the 1992 NFI report, theory was thought to have been missing from the previous 1970s Ford Foundation community initiatives.


Although the Chapin Hall evaluators did not utilize the phrase "theory-of-change" to describe their approach to NFI evaluation, their stated interest in developing theory and in embracing a participatory intent mirrored the concerns of CCI evaluation as documented in the Roundtable evaluation publications, which centered on a "theory-of-change" approach. Throughout the NFI evaluation, the evaluators commented on the attempts at, and challenges to, this development and participation. Towards the end of the Chapin Hall evaluation, the NFI evaluators did bring in the language of "theory-of-change," although not directly when describing their own approach to evaluation.

The NFI Evaluation Reports as Public Knowledge Development

Of the eleven reports publicly released in relation to NFI, the Chapin Hall evaluators labeled seven as "national" evaluations; these were produced by Chapin Hall. Chapin Hall evaluators labeled the other four of these reports as "local" evaluations. One local evaluation was produced by the COSMOS Corporation and was commissioned by the Ford Foundation; two were written by local evaluators funded through the Community Foundation of Southeast Michigan; one was written by local evaluators and funded through the Milwaukee Foundation. Local evaluations from Hartford and Memphis were not publicly available. In each of the reports, the evaluators described key issues related to the initiative as well as the progress made on evaluation. Each report also included an introductory snapshot that provided information about the way in which the evaluators framed the initiative at that point in time. A description of the key concepts addressed in each report, of evaluation progress and issues, and of overall descriptions of the initiative as included in reports at specific points in time provides a background to the major concepts that emerged throughout the evaluation. The description also provides a vehicle for me to highlight key dimensions of evaluative reporting about the initiative, dimensions that emerged in my analysis.

The 1992 Chapin Hall Report

The Ford Foundation's Neighborhood and Family Initiative was launched in 1990 with the identification of four community foundations in four neighborhoods where collaboratives were to be developed to support geographically based community development. The Center for Community Change (CCC), a national intermediary, was originally involved in working with the sites on strategic planning, assessment, and documentation of the initiative. However, in 1992, it was the Chapin Hall Center for Children that released the first of the Neighborhood and Family Initiative public reports, entitled Toward a model of comprehensive neighborhood-based development. In their first report, the Chapin Hall evaluators wrote about the start-up of the initiative. The Chapin Hall evaluators described the neighborhoods, giving an overview that included information about demographics, local institutions, key services, and context information about the neighborhoods in relation to the characteristics of their surrounding areas. The evaluators also outlined the collaborative structure for each site, including the number of members, their demographics, and their professional or resident status. Chapin Hall evaluators stated that the affiliations of those individuals connected the collaborative as a whole to outside organizations.


For each of the sites, the Chapin Hall report included an overview of preliminary issues to be addressed by each collaborative. Examples of these issues included housing, education, economic development, empowerment, and family and personal development. The 1992 report also included appendices containing both the Ford Foundation's charter for the initiative and each collaborative's charge. CCC, the technical assistance provider initially chosen by the Ford Foundation, worked with each of the collaboratives to interpret the Ford Foundation charter in order to create the charge that each collaborative would use in the planning process. According to the Chapin Hall evaluators, the initiative design included the development of local collaboratives that were not incorporated organizations but rather would work through community foundations that were to serve as fiduciary agents. The role of the collaboratives was to serve, not as representatives of institutional interests, but as a "gathering of perspectives, skills, and people with access to resources" (Chaskin, 1992, p. 16). The collaborative structure was also to foster citizen participation, with the design assumption that, to be successful, the collaboratives needed to draw from local knowledge about needs and opportunities. In this way, the Chapin Hall evaluators compared the NFI collaboratives to former community efforts, including the Gray Areas Program, Community Action Agencies, and Community Development Corporations. The collaboratives, according to the Chapin Hall evaluators, supported planning and decisions about the division of labor necessary for the accomplishment of collaborative goals. They wrote:

The neighborhood collaborative is the corporeal instantiation of the concepts of collaboration and participation upon which NFI is built. It is the primary mechanism through which the conceptual bases of the Initiative will be tested in action… It is charged with the examination of neighborhood strengths, weaknesses, opportunities, and needs and with strategic planning for the Initiative. A purposefully diverse collaborative membership is meant to bring together a wealth of perspectives, skills, knowledge, and access to resources. It is believed that this range of perspectives and experiences will facilitate new thinking and the development of comprehensive, integrated strategies for neighborhood revitalization, and will foster collaborative relationships within and beyond the neighborhood. (Chaskin, 1992, pp. 33-34)

The basis of NFI, as reported by the Chapin Hall evaluators, was therefore to encompass comprehensive development and the integration of strategies. Their rationale was that social problems were interrelated and that multiple problems were often present together in geographically defined areas of low-income residents. According to the Chapin Hall evaluation documents, integration of strategies was needed to go beyond comprehensiveness -- understood as a group of separate projects -- to projects that were linked together in ways that could leverage them into greater change. In relation to comprehensive integration, the evaluators noted that NFI was an effort to "design a process through which to structure action, and to demonstrate and learn from a general approach" (Chaskin, 1992, p. 3). Despite some references to the idea of demonstrating, in their report, Chapin Hall evaluators cited Marris and Jackson (1991) in describing NFI as an example rather than a demonstration:

The difference is subtle but profound. An example can inspire, inform, warn, encourage: unlike a demonstration, it does not pre-empt decisions about what to do another time, nor promise certain outcomes. It presents new possibilities and insights, but it does not prove anything. Demonstrations are confined to the simplified conditions which make them replicable, but examples are everywhere: and they provide a much richer if less reliable guide to action. (Chaskin, 1992, p. 52)

In relation to the NFI evaluation purpose, Chapin Hall evaluators described their research intent as "an examination of the process of the Initiative, leading to an analysis of the structure of action under NFI in each site" (Chaskin, 1992, p. 53).


They provided three central purposes of the evaluation, which they repeated throughout the initiative reporting. These were:

1) to refine, through conceptual exploration, Ford's model of comprehensive, participatory community development; 2) to document the process of implementation and evaluate the significance of the developing model; and 3) to investigate the implications of what is learned and explore the ways in which the Initiative can inform similar endeavors. (Chaskin, 1992, no page)

As part of the research plan, Chapin Hall evaluators laid out their assumptions for concepts such as community, neighborhood, participation, and collaboration. Within their assumptions, they argued that because an ideal community is nonexistent in urban America, they "must therefore define communities heuristically, with reference to a particular problem we seek to solve" (Chaskin, 1992, p. 10). Evaluators noted that, to this end, their first report provided the "building blocks for the construction of a coherent theory of development" (Chaskin, 1992, p. 3). They also stated that there was a participatory intent to their evaluation, with the data collection strategies each relying on the "collaboration and input of local participants" (Chaskin, 1992, p. 53). In the 1992 report, the Chapin Hall evaluators provided a snapshot of the initiative, giving an overview at that point in time. The evaluators described the initiative in terms of comprehensive development that would involve "the implementation of strategies that harness the interrelationships among social, physical, and economic development," which they said "have historically been treated as separate spheres of action" (Chaskin, 1992, p. 1). They described the purpose of creating a collaborative governance structure in the neighborhoods:


Through this governance structure, by investing in the support and development of local leadership, and by integrating development strategies to address physical, social and economic needs and opportunities within the targeted neighborhoods, the Initiative seeks to revitalize and empower whole communities and the individuals and families who live in them. (Chaskin, 1992, p. 1)

The 1992 Chapin Hall evaluation report raised the key issue of comprehensiveness as integral to the initiative. This idea continued throughout the evaluation reports. I follow with a description of each report.

The 1993 Chapin Hall Report

The Chapin Hall 1993 evaluation report, Building collaboration: An interim report, was the most difficult to obtain of all the Chapin Hall NFI evaluation reports. Whereas the other Chapin Hall reports were listed online and available either electronically or by mail, the 1993 report was not included in listings with the other reports. I realized the report was missing from my data when evaluators referred, in later reports, to the reports that had already been released. Phone calls to Chapin Hall did not result in my obtaining a copy of the report, so I retrieved it from one of the only three libraries (nationally) that I was able to identify as holding copies. The 1993 report detailed the collaborative process and the challenges faced, including issues of representation on the initiative as delineated by resident status, sector affiliation, race, ethnicity, and gender. The report also described the changes in collaborative structure throughout the first years of the initiative. The report indicated that CCC provided guidelines to help collaboratives in selecting members. These guidelines included the idea that membership should be mixed, with "grassroots leaders" classified as low-income residents, "bridge people" classified as neighborhood professionals and entrepreneurs, and "movers and shakers" who were people from public and private organizations (Chaskin, 1993, p. 8).


As noted by the Chapin Hall evaluators, these individuals did not represent their neighborhoods or organizations, but rather were chosen through interviewing and networking conducted by the community foundations. The intent of the initiative design, as discussed by Chapin Hall evaluators, was to bring together various people "on equal footing" who were to engage in assessing the neighborhood, planning, and overseeing implementation, instead of having these processes run solely by professionals (Chaskin, 1993, p. 21). The 1993 report also included documentation of meeting attendance and descriptions of organizational relationships. The evaluators noted that there had been challenges in NFI related to the different languages and different types of knowledge among collaborative members, but that trust had been built through the collaborative process. For example, Chapin Hall evaluators documented different understandings of process timing, with residents growing impatient while professionals tended to be comfortable anticipating action during planning processes. The evaluators also documented some specific collective successes that had already occurred, including one collaborative's success in "persuading" its community foundation, as its fiduciary institution, to redirect investment toward minority institutions (Chaskin, 1993, p. 34). The Chapin Hall evaluators documented the changes in collaborative structure and noted the fluidity of these structures and the willingness of the collaborative members to change in response to shifting goals.


The 1993 description outlined the emerging complexities of the collaborative structures as each developed some form of working groups to address aspects of their endeavors. However, according to Chapin Hall evaluators, there were collaborative challenges related to integrating plans from various workgroups into a comprehensive approach. Integration was thus sometimes addressed by having individuals serve on more than one committee (Chaskin, 1993, p. 26).

But the best evidence of the degree to which each collaborative member understands and carries the weight of the charge to integrate strategies will probably be the strategic plans themselves, as well as the perspectives provided by participants individually…Thus, there are, at least potentially, organizational mechanisms in place to facilitate thinking comprehensively about the integration of strategies beyond the forum that the full collaborative provides. (Chaskin, 1993, p. 27)

The Ford Foundation set up a cross-site committee to address locally the same issues of communication that were becoming problematic for the national initiative as a whole. Addressing the national initiative, the Chapin Hall evaluators offered a reconceptualization of the structure of the initiative, noting that there was continued confusion over the notion of integration, with "neither technical assistance, cross-site communication, nor the conceptual exploration of the issues in Chapin Hall's first report" serving to help clarify the issue (Chaskin, 1993, p. 50). The 1993 Chapin Hall report included an outline of the strategic planning process that was utilized by CCC. Although the planning model was linear, the evaluators wrote that, in practice, the process had been iterative, with some phases beginning before others were completed and with later phases leading to renewed questions about previous phases. In addition, various efforts were begun in response to opportunities rather than because of completed planning.


Evaluation, as described in 1993, was also a complex process needing to occur at "several levels," and the evaluators cited lessons learned about needing to "reach several audiences" (Chaskin, 1993, p. 55). The rationale for a two-tiered model of evaluation was described in the 1993 report, with acknowledgment that the national evaluators relied upon the local collaboratives for data related to outcomes. Chapin Hall evaluators described the ways in which evaluation work brought them into contact and communication with the local sites. The evaluators described their intent to establish an ongoing dialogue between the national and local evaluations in order to support the linkages needed for the evaluation and the development of a "common understanding of the lessons and implications" of NFI (Chaskin, 1993, p. 56). However, according to Chapin Hall evaluators, the compartmentalization of work and tensions in relationships had interrupted the linkages between them and the local collaboratives. The Chapin Hall evaluators emphasized their need for local documentation in order to conduct the evaluation and reiterated their concern that limitations in local documentation would prohibit the national evaluation. Among the other evaluation limitations noted by Chapin Hall evaluators was their use of ethnographic methods:

Although our research design uses different methodologies, the core strategy is essentially ethnographic…the national evaluation relies most heavily on our qualitative interviews and guided observations during the course of our fieldwork at each site. This method allows us to consider a range of perspectives on the conduct of the Initiative, formulated to a large degree in the words and within the cognitive and cultural frameworks of each respondent. It does not, however, allow us to go beyond our relatively small panel of respondents (to the neighborhood at large, for example), or to focus on concrete outcomes of the process. Further, while the ethnographic approach offers an excellent forum for exploratory research and for formulating hypotheses and drawing informed conclusions regarding (the collaboratives') process issues, its powers of formal analysis and ability to model the dynamics of collaborative action are limited. (Chaskin, 1993, p. 59)


Chapin Hall evaluators suggested that on-site ethnographers might be helpful in supporting the analysis, although an alternative approach they discussed was network analysis. The Chapin Hall evaluators stated their belief that network analysis would provide a formal approach to mapping coalitions around specific issues and to "concretiz[ing] relationships within the collaborative context" (p. 59). Acknowledgement of the lack of feasibility of this approach was followed by the statement that it might become important to use evaluation to support the collaboratives in their "broker" or "mediator" role in order to understand connections between networks (Chaskin, 1993, p. 59). The mediator role was elaborated upon in the 1993 overview of the initiative, which included the following statement:

Bringing together this broad range of participants may well generate as much conflict as cooperation; their joining through the NFI structure represents a determined investigation -- an exploration of the possibilities and challenges of broad-based relationship building and cross-sectoral collaboration. (p. 1)

The overview statement included reference to NFI as a CCI, defined as a prescribed structure that, in addition to fostering collaboration, would develop and support local leadership. The changes documented in collaborative structure, the tensions noted in relationships within the given structure, and the evaluators' presentation of issue areas as structural components of the initiative all brought the concept of structure, as a reporting dimension, to the foreground in the evaluation.


The 1993 and 1994 Michigan Reports

The 1993 and 1994 Michigan reports provided descriptions of the evaluation process that occurred between the collaboratives and the evaluators. According to the evaluators, in this process the collaboratives agreed that the evaluation would focus on outcomes rather than process, that the evaluation would be formative, what sources of data they would rely upon, and what roles the evaluators and collaboratives would have in conducting the evaluation. Formative, according to the evaluators, meant that the "findings of the evaluation would be used to reshape the project" (Grant & Coppard, 1993, p. 3). The reports included lists of outcomes and related activities along with the data obtained through specific collection methods such as focus groups, questionnaires, and program review discussions. In 1993, the evaluators were asked by the collaborative to consider program development activities as outcomes, since much of the time was spent on efforts to build collaborative structure. In the reports, the local evaluators listed outcomes as specific action statements. For example, the outcome to "improve physical, social, and economic environment" included statements such as "increase number of local jobs filled by local residents" (Grant & Coppard, 1993, p. 6). In the 1993 report, the stated mission of the initiative was "to develop an ideal community where people are employed and where a mix of cultures and people of all income levels and ages live among fine institutions" (Grant & Coppard, 1993, p. 2). The 1994 report also included listings of collaborative activities along with raw data from the various data collection efforts such as questionnaires. The evaluators presented information within the framework of their evaluation processes, with results used to provide details of key efforts.


Activities included specific projects, the creation of implementation organizations, and results related to broader concepts such as community outreach. The evaluators described how they supported evaluation through the provision of written forms with which the collaboratives could consistently document their activities. In the effort to support evaluation, in the 1994 report the evaluators also documented collaborative participants' perceptions of what evaluation meant to them and shared the language with which members discussed ideas of evidence and data. The 1994 report was the last of the evaluation reports released publicly from this site, so it was not publicly reported how the evaluators utilized this information about member meaning.

The 1995 Chapin Hall Report

The 1995 Chapin Hall report, entitled Moving toward implementation: An interim report, included updates about issues such as planning, collaborative participation, and changing collaborative organizational structures. For example, the evaluators discussed a critical issue raised in the collaboratives when members were hired as consultants. Although evaluators reported a rationale that the employment of members would serve as capacity building as well as pay individuals fairly for their work, they noted that the professionals on the committees were hired as consultants and allowed to keep their collaborative membership. However, according to evaluators, when grassroots members were hired, they were hired as staff and required to relinquish their membership (Chaskin & Joseph, 1995, p. 18).


The 1995 report focused attention on the "mission, funding and institutional auspice" of the initiative and commented on local frustration over the "passive posture" of the Ford Foundation, noting the desire of participants for more clarity from the foundation (Chaskin & Joseph, 1995, p. 68). According to Chapin Hall evaluators, in efforts to foster community development, the Ford Foundation remained non-directive. Chapin Hall evaluators noted that, although there were changes at the Foundation -- including having five different program officers influencing the initiative by the reporting in 1995 -- the nondirective philosophy remained consistent. However, this nondirective philosophy was not always met with approval from local sites that were looking for more guidance. Despite the nondirective approach, reports that collaborative members were being paid for services were met with a swift response from the Ford Foundation and clear requirements that conflict-of-interest rules be drafted and applied by the collaboratives. In 1994, Chapin Hall evaluators had taken on the evaluative technical assistance provided to the local sites. Described as part of the institutional support for the initiative, evaluation appeared in the 1995 report with acknowledgment of the lack of coordination between the technical assistance that CCC had provided for evaluation and the technical assistance that Chapin Hall provided. Despite technical assistance challenges, the report included the idea of evaluation as an anticipated feedback mechanism to clarify the influence of project and neighborhood level outcomes on goals and objectives. In addition, the national evaluation was to concern:

itself with a cross-site analysis of the collaborative-building, strategic-planning, and project-implementation processes. It focuses on the usefulness and viability of the Initiative's guiding principles and the possibilities and pitfalls presented by the organizational structures and processes put in place centrally and at each site. By following the process as it unfolds, it hopes to draw from the particular experiences of the participants general lessons regarding the intent, structure, and conduct of NFI. (Chaskin & Joseph, 1995, p. 84)

The Chapin Hall evaluators reiterated the tensions around integration and the need for greater communication within collaboratives in order to ensure that the community work was integrated across increasing numbers of committees. According to Chapin Hall evaluators, in some sites meeting time was to be dedicated to communicating evaluation findings as well as to fostering coordination of information. However, the Chapin Hall evaluators admitted that coordinating evaluation technical assistance did not appear to be working. In the 1995 report, the Chapin Hall evaluators returned to the idea that the local collaboratives should explore interrelationships between social, physical, and economic needs and opportunities but stated that integration was still not the "primary driving force" behind the programs and activities of the collaboratives (Chaskin & Joseph, 1995, p. 49). Chapin Hall evaluators noted that most of the collaborative members utilized the notion of comprehensiveness rather than integration, if any idea was used at all. Throughout the evaluation, the Chapin Hall evaluators alluded to the various meanings given to the notion of comprehensiveness but noted that the idea seemed to have been of little use in program development. The Chapin Hall evaluators documented three approaches to addressing comprehensiveness. One approach involved collaboratives trying to integrate projects. Another involved collaboratives trying to link projects at a "strategic level." The third involved collaboratives using a strategic "lens" to understand community issues (Chaskin & Joseph, 1995, p. 63). The evaluators continued to communicate the tensions, one of which was the tension between the "categorical planning and implementation structure" of the collaboratives and the task of integration (Chaskin & Joseph, 1995, p. 93).


The Chapin Hall evaluators had suggested that the task of integration was to be alleviated, in part, by having different individuals with different perspectives and organizational connections serving as collaborative members. In discussing evaluation itself, the national evaluators commented on their connection to the local assessments and their provision of evaluation technical assistance to the local sites. They restated their reliance on local sites for data. The Chapin Hall evaluators also commented on their difficulty with speaking to a range of audiences, most specifically the difficulty of communicating evaluation findings to the local collaboratives. The Chapin Hall evaluators admitted that the linkages between local and national evaluation had been minimal and that there was a lack of clarity around how the Chapin Hall technical assistance in evaluation was to work with the CCC technical assistance in planning. They also described the evaluation work that was done at the local sites:

In developing their strategies, several sites attempted to address concerns in addition to the development of a particular kind of local assessment "product." Some of these concerns included: 1) the exploration of "nontraditional" and "participatory" evaluation methods; 2) the desire to build relationships among and strengthen the capacity of local researchers; 3) the inclusion, in the evaluation process, of neighborhood residents and other local constituencies; and 4) the development of a kind of check on or protection against the possible conclusions drawn by the national evaluation. (Chaskin & Joseph, 1995, p. 86)

Despite these concerns in their reports, the national evaluators outlined their attempts to convince local sites to utilize assessment as a feedback mechanism for local work and explained that evaluation could be a tool for accountability and for leveraging resources.


The description of the initiative in the Chapin Hall 1995 report emphasized the idea of NFI creating "circumstances under which a working model for neighborhood-based, integrated development could be generated," with action "set within" an operational structure guided by principles (Chaskin & Joseph, 1995, p. 1). The first of these principles included collaboration and citizen participation, with the second focusing on the idea of comprehensiveness. However, as early as 1995, the national evaluators were questioning the value of ideas of comprehensiveness for guiding action. With the initiative's reported focus on NFI as providing a structure for action, this questioning by the national evaluators becomes a central reporting concern.

The 1997 Chapin Hall Report

The 1997 Chapin Hall report, titled The Ford Foundation's Neighborhood and Family Initiative: The challenge of sustainability, focused on issues pertaining to the future of the collaboratives. It also included updates on issues such as participation and the specific activities at the local NFI sites. The report documented a change, across the sites, in collaborative structure as the committees that earlier were "structured around substantive areas of programmatic planning -- housing, education, economic development," started to shift toward "organizational maintenance, financing, and fundraising" (Chaskin, Chipenda Danoshka et al., 1997, p. 38). At the same time, according to the 1997 report, a shift occurred in local collaborative membership from "representative categories" toward "substantive expertise" (Chaskin, Chipenda Danoshka et al., 1997, p. 40).


The evaluation documents provided background information about decision pressures regarding governance structure issues.

Within this structure, a more critical influence on programmatic planning and project implementation has been a set of competing motivating factors including arising opportunities within the local context, networks of association that provide access to these opportunities, and issues of control and the need to act within particular funding periods. (Chaskin, Chipenda Danoshka et al., 1997, p. 51)

According to the Chapin Hall evaluators, these three factors, in relation to each collaborative's focus, drove program implementation. These changes and pressures also occurred with an increase in the formality of procedures in the collaboratives. According to the Chapin Hall evaluators, formality increased in all four collaboratives, with three of them considering incorporation. The Chapin Hall evaluators also concluded that the attempts at comprehensiveness had largely turned into program development that "followed parallel categorical streams of activity, with projects developed in large degree in response to emerging opportunities in the local environment" rather than because of the Ford Foundation funding (Chaskin, Chipenda Danoshka et al., 1997, p. 5). The sites became increasingly different as the organizations moved further from their original charges in order to adapt to local conditions (Chaskin, Chipenda Danoshka et al., 1997, p. 4). According to the evaluators, the idea of comprehensiveness served as a lens for looking at development strategies. According to Chapin Hall evaluators, Ford Foundation funding changed significantly in 1996 when the Ford Foundation allowed the local collaboratives to choose their own technical assistance providers other than CCC. At that time, the sites also started to address options for long-term survival, with incorporation into nonprofit organizations as one option considered. The evaluators noted that the tendency toward incorporation was in part due to pressures to monitor activities (Chaskin, Chipenda Danoshka et al., 1997, p. 99).


Evaluation in the Chapin Hall 1997 report included an update on the evaluation design and process. The evaluators reiterated that the two-tiered design of the study was an appropriate approach for providing the local sites with the necessary flexibility and the national evaluation with the information to conduct a cross-site analysis.

Reasonable indicators of success, it was argued, should be developed by the collaborative as they refine their strategic plan, and locally driven documentation and analysis should provide both formative feedback on their progress and ultimately, summative reports on their success and failures. (Chaskin, Chipenda Danoshka et al., 1997, p. 91)

Yet in the Chapin Hall reports through 1997, evaluators documented concerns that the national evaluation and local evaluations were occurring separately, with limited information sharing between them. Chapin Hall evaluators explained that this was due to the national and local evaluations "differing in scope, stage of development, focus, methodology, and reporting mechanisms, and [being] conducted under the aegis of different institutions" (Chaskin, Chipenda Danoshka et al., 1997, p. 92). According to national evaluation reports, local evaluation success was also constrained by the limited funding allocated to local evaluation as a key area of programmatic concern. In the 1997 report, the Chapin Hall evaluators provided increased description of the efforts they had made to work with local sites in understanding evaluation as a feedback mechanism and in building data-collection mechanisms to supply information for the initiative. However, Chapin Hall evaluators admitted that this intensive work had begun in 1994 but ended in 1995, when there had been concerns that Chapin Hall's dual role as evaluator and technical assistance provider was inappropriate.


At the same time, however, local participants were challenging the role of the Chapin Hall evaluators, wanting them to play a "proactive role" in making recommendations and communicating with the other national organizations. In response, Chapin Hall evaluators documented that they had clearly stated that this was not a role that the Ford Foundation had encouraged for Chapin Hall (Chaskin, Chipenda Danoshka et al., 1997). In the overview of the 1997 report, the evaluators described the initiative exactly as it was described in the Chapin Hall 1995 report. Two previously articulated principles were reiterated: first, that strategies should be "viable, relevant, and equitable to the people who will be affected," and second, that neighborhood development strategies required attention to social, physical, and economic needs and opportunities within the neighborhoods and beyond (Chaskin, Chipenda Danoshka et al., 1997, p. 1). In the 1997 report, the national evaluators revisited the notion of comprehensiveness.

A central goal of NFI is to explore the extent to which development strategies that look comprehensively at the interrelationship among the physical, economic, and social conditions of the neighborhood are likely to have a growing, synergistic and substantial impact on the neighborhood as a whole. (Chaskin, Chipenda Danoshka et al., 1997, p. 51)

As stated in earlier reports, according to Chapin Hall evaluators, the idea of comprehensive integrated development provided a "lens" for looking broadly at development strategies. However, by 1997 the Chapin Hall evaluators had already concluded that comprehensiveness did not drive planning and implementation. Rather, the 1997 report documented the questioning that took place as collaboratives explored the best ways to meet the needs of survival while also influencing the local issues that matched their goals.


The 1998 Milwaukee Report

The Milwaukee report included a description of project activities categorized into areas including redevelopment, business development, employment, housing, and leadership. The report included local evaluator recommendations along with a response from the community foundation about the evaluation itself. The description of the evaluation process covered a preparation stage and an implementation stage. According to the Milwaukee evaluators, the preparation stage included a presentation by evaluators to collaboratives in order to explain the evaluation approach and the needs in relation to evaluation. The data collection and implementation involved three tiers of data collection:

- Tier I: Collecting data on present collaborative members.
- Tier II: Conducting interviews with project participants, present and former.
- Tier III: Extracting information from Harambee residents, discerning their knowledge of, and participation in, NFI activities in their community. (Johnson, 1998, p. 7)

The evaluators also outlined their intent that the evaluation be participatory, with the inclusion of administrators, staff, and collaborative members in the process of shaping the evaluation. The overview of this one publicly released Milwaukee report included, in some sections, verbatim portions of the national Chapin Hall evaluation reports. The Milwaukee initiative was described as intending to "create the circumstances under which a working model for neighborhood-based, integrated development would be generated," with action under the initiative "set within" an operational structure and with the "organizational outline" adhering to central principles (Johnson, 1998, p. 1). Only one Milwaukee report was released publicly.

The 1999 Chapin Hall Report

The Chapin Hall 1999 report, entitled The Neighborhood and Family Initiative: Entering the Final Phase, was an interim report released before the final evaluation documents. Although the 1997 report dealt with questions of sustainability, the 1999 report did so with a more immediate and urgent timeline because of the impending completion of the initial NFI funding. The Chapin Hall evaluators referred to the time period as a "critical juncture" and the "final phase" of the initiative (Chaskin et al., 1999, p. 1). They noted that the announcement of the final Ford Foundation funding came in 1997 and that the announcement resulted in increased pressure for local sites to leverage funds for sustainability. One result of the pressure was that the collaboratives reexamined their purpose and niche. The Chapin Hall evaluators noted that, although the collaboratives had originally considered themselves facilitating organizations, in considering longevity beyond the Ford Foundation grant, the collaboratives were increasingly considering direct implementation roles. According to Chapin Hall evaluators, the Ford Foundation funding from 1994 through 1996 had emphasized programmatic development and spurred new project ideas. However, a shift occurred in 1996, with the collaboratives bringing in all program funding from outside sources, since the Ford Foundation decision at that time was to fund only operational support. According to Chapin Hall evaluators, with the finality of the Ford Foundation funding, collaboratives faced decisions about how to bring in funds for both operating and programmatic expenses. In light of these issues, the Chapin Hall evaluators emphasized two levels of learning to come from the evaluation. The first, or national, learning was to contribute to the field -- funders, policymakers, practitioners, and researchers. The second, or local, learning was to contribute formative feedback for the collaboratives along with summative information about progress. The Chapin Hall evaluators described barriers to this learning, stating:

Although collaboratives have elaborated some goals in clear and actionable ways…many of the goal statements remain at a very general level. A local "theories-of-change" evaluation approach attempting to connect strands of activity to neighborhood change goals was not consistently engaged, and the use of 'logic models' guided by COSMOS… is relatively new. (Chaskin et al., 1999, p. 16)

In addressing the issue of learning from the "ground up" at this critical time in the initiative, the Chapin Hall evaluators referred to the need for more systematic documentation related to both individual and organizational level outcomes (Chaskin et al., 1999, p. 16). The evaluators referenced the national COSMOS Corporation collection of existing administrative data for NFI as one possible solution for acquiring data for the national evaluation, but noted that the COSMOS indicator work was not intended to document change in relation to the initiative. The Chapin Hall evaluators advocated for decentralization of data collection to provide data for understanding change. However, they acknowledged the difficulty of data collection given the variation in skills of evaluators and collaboratives at the local sites. The Chapin Hall evaluators emphasized that, because of these limitations, decentralization would require ongoing technical assistance.


The descriptive overview provided by the national evaluators in the 1999 Chapin Hall report was considerably different from those found in the previous reports. It was the only overview in which the principles of the initiative were not discussed. Rather, statements referred to the struggles of collaboratives facing the end of initial funding and the efforts of each local collaborative to establish a continuing identity. The Chapin Hall evaluators' overview focused on the questioning by the participants in the initiative.

After this period, NFI as a national demonstration will be over. What will be left, what will have been accomplished, and what will continue to develop in the wake of NFI as a formal initiative are the questions that participants are grappling with and attempting, through their actions, to answer. (Chaskin et al., 1999, p. 1)

With this statement, Chapin Hall evaluators framed the concept of action as a way to address issues of sustainability and alluded to the uncertainty with which the collaboratives addressed their futures beyond the original funding.

The 2000 COSMOS Report

The COSMOS Corporation study, directed by Robert Yin, suggested indicators for the four neighborhoods and documented the changes in these indicators over the period of NFI. Drawing from agencies that housed data, COSMOS documented information about business development, unemployment, real estate and housing, public education, crime, and traffic accidents. Each of these, COSMOS evaluators noted, was intended to capture an aspect of social and economic development in the neighborhoods. The evaluators provided maps, charts, tables, and graphs showing the change in indicators across the NFI timeline. The report did not address any methodological issues. The evaluators did not include overview information about the process of indicator selection in relation to NFI processes. The report did not include information about the NFI process or make any claims about causality. From the report, it would appear that the indicator work happened virtually in isolation from the local initiative process, since COSMOS evaluators did not discuss the NFI structure, the decisions about data made by the local sites, or the use of data in local evaluation. Neither did the COSMOS evaluators discuss their technical assistance to the local sites for developing local logic models, a process noted in the Chapin Hall reporting.

The 2000 Chapin Hall Reports

There were two Chapin Hall reports released in 2000. The first was entitled Moving beyond the Neighborhood and Family Initiative: The final phase and lessons learned. The second was entitled Lessons learned from the implementation of the Neighborhood and Family Initiative: A summary of findings. Since the summary report was shorter than the original and used much of the same text as the longer version, I utilized the full report for purposes of this overview. In the Chapin Hall 2000 report, there was a reiteration of the central principles of the initiative, including ideas of comprehensive change, organizational collaboration, and citizen participation. As described in the 2000 report, the initiative was to "develop sustainable processes, organizations, and relationships that would address the physical, social, and economic circumstance of poor neighborhoods and their residents" (Chaskin et al., 2000, p. 3). This was to be done by creating synergy among strands of development activity. Cited examples of strands included housing, economic development, human service provision, and organizing. The Chapin Hall evaluators explained that the idea of the interrelationship of social, physical, and economic needs and opportunities went beyond the comprehensiveness of categorical approaches toward "the weaving of strategies into a strategic whole" (p. 3). With respect to comprehensiveness, the Chapin Hall evaluators concluded that the idea had encouraged a broader view but that the concept had not helped in implementation. According to Chapin Hall evaluators, the organizing of a wide array of activities into synergistic change had not occurred in NFI. In the 2000 report, Chapin Hall evaluators also paid attention to the institutional support structure of the initiative, describing changes at the Ford Foundation, such as staff turnover and shifting funding policies, that had influenced the initiative. The evaluators clarified and described the original intent that NFI would encourage community foundations to engage in philanthropy aligned with community development principles and would strengthen their relationships with local neighborhoods. However, according to Chapin Hall evaluators, any changes in the community foundations could not be attributed directly to NFI. The Chapin Hall evaluators commented that the NFI funding actually had stretched the community foundations beyond their accustomed roles and had resulted in a more cautious tone among community foundations about taking part in future national initiatives.


The 2000 Chapin Hall report included a compiled listing of all NFI reported actions with their associated strategic focus. The Chapin Hall evaluators discussed the openness of the original funding, commenting that the "theory of change that linked the principles, through initiative action, to expected outcomes" had not been defined by the foundation. Although the national evaluators included strategic foci in the list of actions, they did not document whether the local sites had explicitly identified these connections or whether the national evaluators were assuming these links. The Chapin Hall evaluators did suggest that the initiative actions had been predominantly small "discrete" projects rather than explicitly connected strategies (Chaskin et al., 2000, p. 97). In the 2000 report, Chapin Hall evaluators stated that the collaboratives were beginning to engage in political processes but noted that they usually did so as a reaction to external decisions and not as a planned strategy. According to evaluators, as the newly created independent organizations were just beginning to become visible in their advocacy and influence, they did not yet have the "political savvy or financial clout to influence high-level players" (Chaskin et al., 2000, p. 60). The 2000 report included a synthesis of the initiative activity as categorized by key issues, including collaborative role and functioning, leveraging resources, programmatic activity, neighborhood planning, and institutional support. The report also included lessons learned from the initiative. The purpose of the evaluation was repeated as it had appeared in the initial 1992 evaluation report:

From the beginning, the evaluation had three central purposes: (1) to refine, through conceptual exploration, Ford's model of comprehensive, participatory community development; (2) to document the process of implementation and evaluate the significance of the developing model; and (3) to investigate the implications of what is learned and explore the ways in which the initiative can inform similar endeavors. (Chaskin et al., 2000, p. ix)


The Chapin Hall evaluators explained that sustainability as a category in the initiative had not been addressed as a priority until funding changes were made by the Ford Foundation and the final timeline of funding had been announced. The Chapin Hall evaluators acknowledged the challenges with the original design of the initiative and with addressing comprehensiveness in CCI evaluation. Evaluators faced challenges such as:
• lack of clear expectations,
• lack of collaborative and community interest in evaluation,
• difficulty integrating local evaluation activities and findings into planning and implementation,
• lack of technical support,
• lack of faith in the possibility of really tracking outcomes,
• lack of trust in the endeavor as a whole. (Chaskin et al., 2000, p. 103)

The 2000 report included a detailed appendix outlining collaborative-related local activities by listing the focus, activities, roles and participants, goals addressed, action taken, and results. However, the Chapin Hall evaluators noted that there were issues beyond those raised in implementation that influenced their understanding of outcomes. They wrote: Lack of clarity regarding goals and outcome expectations; the extent to which such objectives shift over time; limitations on access to and relevance of existing data at the neighborhood level; a reluctance to collect data that focuses on neighborhood-level change given the relative scale of intervention; the difficulty of attributing causality without appropriate comparisons; and the limited capacity to collect and manage data locally in relatively efficient and unobtrusive ways. (Chaskin et al., 2000, p. 102) The Chapin Hall evaluators suggested that a theory-of-change approach might have helped in responding to these issues.


Despite evaluation challenges, the Chapin Hall description in the 2000 report reemphasized the demonstration purpose of the initiative and the intent of the evaluators to speak to practitioners and policymakers. The description read as follows: In 1990, the Ford Foundation launched the Neighborhood and Family Initiative (NFI). One of the earliest of what have come to be known as comprehensive community initiatives (CCIs), NFI was eventually to become a 10-year effort that sought to strengthen a single neighborhood in each of four cities and improve the quality of life of the families who live in them. It was also a demonstration project, designed to explore the usefulness and viability of a set of principles and a general approach to community development, and to provide lessons for policy makers and practitioners engaged in similar work in the field. (Chaskin et al., 2000, p. 3) By the 2000 overview, the Chapin Hall evaluators had returned to a notion of the initiative as a demonstration rather than their earlier adherence to the idea of an example, and the evaluators continued to state that their evaluation approach faced many challenges. Although they had begun to utilize theory-of-change references, the Chapin Hall evaluators repeated, as they had done early in their work, their preference for network analysis, which they believed would provide a formal modeling of issues within the initiative and would be a useful approach to CCI evaluation.

Reporting Dimensions in NFI and Chapin Hall Writings

The dimensions of comprehensiveness, structure, action, influence, and, to some extent, sustainability come to the fore in analysis of the description of NFI by Chapin Hall evaluators. That the evaluation revealed these dimensions of reporting raises questions about other issues necessary for understanding the reports themselves as part of broader concerns related to knowledge communities. The literature about CCIs that


surrounds NFI does address the dimensions that the NFI evaluators did, albeit in different ways and with different emphases. However, when viewed by organization (Ford Foundation, CCC, and Chapin Hall) and when placed in the context of Aspen Roundtable writings, the differences in understanding of these dimensions and the relationships between them are highlighted. The Ford Foundation literature links a comprehensive approach with notions of partnership (between government, the private sector, foundations, community residents, neighborhood organizations, and citywide leaders) and with the idea of community empowerment. More specifically, comprehensiveness comes to mean that collaboratives “look holistically at neighborhoods and families” in attempts to “strengthen both individuals and the community” through various efforts and services. The notion of a whole being greater than its parts is thereby noted as the purpose of comprehensiveness (Ford Foundation, n.d.). As in NFI, the idea of comprehensiveness is thus an area to be addressed by the initiative and, when focused locally, to be addressed also by the local collaboratives. For the Ford Foundation, the structure of success includes partnerships between various professional communities, including the financial, foundation, corporate, government, and CDC communities ("Perspective on partnerships," 1996). Structure and action come together in the Ford Foundation's commentary on poverty alleviation efforts and the need for a “comprehensive national attack” on poverty, including a strong federal commitment to urban policy (Thomas, 1991, pp. 3-12). For Franklin Thomas, then president of the Ford Foundation, success can come from a CDC approach to community programs emphasizing involvement of community members and self-help. However,


according to Franklin Thomas, enhancing and sustaining the CDC efforts also requires the building of a support system including training and financial intermediaries (Thomas, 1991). Ultimately, influence is the goal; he writes, “This kind of empowerment brings respect and opportunity to people and increases their ability to affect policy” (Thomas, 1991, p. 11). In NFI, Ford’s approach to structure became collaboratives and partnerships, with the goal of empowerment focused on participation in projects and leadership development (Ford Foundation, n.d.). As documented in the Ford Foundation’s NFI charter, structure also involved partnerships, this time specifically involving community foundations and neighborhood collaboratives. At the time of the Charter, CCC was noted as the primary intermediary in the structure. The primary action at the time of the Charter included neighborhood needs assessments and planning for revitalization. As indicated by the Ford Foundation, comprehensiveness was to be understood as a lens to understand needs and to develop strategies (Chaskin, 1992). The action that was to result from the planning was for the collaboratives to make suggestions to the community foundations about how to spend the Ford Foundation funding pool. As documented in the Charter: Because existing resources and public entitlements will always exceed by many factors the special resources of the targeted funding pool, the collaborative’s efforts will focus on creative ways to: redirect existing resources and improve ongoing programs and development activities; identify opportunities where modest resources can catalyze new responses; strengthen neighborhood leadership; and build community in the broadest sense. A significant emphasis will be on activities which create and sustain informal networks and connections among residents and reinforce a sense of belonging to and responsibility for the neighborhood. Ultimately, the collaborative aims to stimulate a critical mass of neighborhood development activity powerful enough to generate hope and a belief both within and outside of the neighborhood that the Initiative can result in substantial change. (Chaskin, 1992, p. 66)


In this way, the Ford Foundation set out that the ideas of comprehensiveness, intended NFI structure, and action were to come together for both influence and sustainability through development and hope. The Center for Community Change also posed ideas about these key dimensions. The focus of its work is on supporting grassroots action for influencing policies and institutions to improve neighborhoods. Comprehensiveness, according to the CCC 2004 website, refers to the assistance that CCC provides to community groups. For NFI, the structure of that assistance became a CCC strategic planning model that connected the NFI charge to local collaboration through six phases of planning. These phases included developing the organization and process, which included developing a “commitment to strategic planning”; assessing the environment; identifying the strategic issues; formulating the strategy; developing the plan; and implementing the plan. The CCC model included needs assessment and use of data in the development and planning phases and then involved assessment of action, which prompted adjustment leading to future action, resulting in a continual or sustained process. When translated into NFI local charges, as facilitated by CCC, the focus of the work became planning, with some charges focused on the plan itself and others focused on the process for developing a plan. Within the guidelines adopted for the plan or planning were the Ford Foundation concepts as interpreted by the local collaboratives. Within the charges, comprehensiveness was most often interpreted in reference to community issues, with a desired understanding of and attention to the relationships among them. Although the charges included some ideas about planning processes, partnerships, and leadership development, actions were addressed, not as predetermined by the Ford charter, but as they were to be


developed in the collaborative planning processes. Likewise, the paths to influence were not predetermined; again, the collaboratives were to come to ideas about strategies and desired impact through the planning process. Structure, within the collaborative charges, came to refer mostly to the collaborative structure and the desired partnerships. Although the word structure was not used in the charges, it appeared that structure was addressed in the collaborative membership, which was to include residents and individuals from the private and public sectors. Structure was also addressed in the mention of desired linkages and partnerships that were to be developed between the collaboratives and local organizations and institutions. Sustainability was not mentioned explicitly in the collaborative charges, although one might understand the emphasis on partnerships as carrying an underlying interest in continuance. Chapin Hall’s researchers also released materials focused on CCIs. According to Chapin Hall promotional materials, Chapin Hall evaluators were credited with being involved from the beginning of CCIs, which were referred to as the “current wave of community building initiatives” (Chapin Hall Center for Children, 2001, p. 34). Comprehensiveness in Chapin Hall publications was varied, as it was in the NFI reports. Sometimes it referred to combinations of issues perceived to be relevant to low-income neighborhoods and, at other times, the term was used to draw attention to interrelationships between issues, between needs, or between activities (Brown, 1996; Stone, 1994, 1996). At still other times, comprehensiveness either characterized development, was closely connected to the concepts of community building and development, or was even set at odds with the concept of community building (Chaskin, 1999; Stone, 1994, 1996).


In the writings, as in NFI, there is a notion that having a comprehensive lens focusing holistically on community initiatives is to be desired, however difficult it may be and however limited the notion may be in implementation (Stone, 1994, 1996). In some writings, emphasis is placed on comprehensiveness as bringing together disciplines of human services such as “comprehensive services, service integration, system reform” and “community development” (Brown & Garg, 1997; Stone, 1994). In yet another use, comprehensiveness in Chapin Hall writings connoted the bringing together of sectors (Brown & Garg, 1997). Again, this was consistent with NFI references to collaboration between the private, public, and nonprofit sectors. In addressing structure, Chapin Hall writers provided an even more diverse range of references to the notion of structure, from the identification of types of neighborhood infrastructure (Stone, 1996) to more specific discussions of the power structures in CCIs (Brown & Garg, 1997; Stone & Butler, 2000). Included among these were:
• the various structures that might exist for community initiatives or interventions themselves (Brown, 1996; Stone, 1994, 1996),
• the types of structure for specific aspects of an initiative, such as funding or management (Brown & Garg, 1997),
• the types of governance structures that initiatives may seek to develop in neighborhoods (Brown & Garg, 1997; Chaskin, 1999; Stone, 1996),
• the structures of the institutional entities or organizations that might be involved in or influence initiatives, such as foundations and community agencies (Brown & Garg, 1997; Stone, 1996), and
• the local institutions, government bureaucracies, and social and socioeconomic structures within which initiatives are set (Brown, 1996; Stone, 1996; Stone & Butler, 2000).

The term action is used sporadically throughout the Chapin Hall writings, although the various intentions attributed to CCIs can be considered actions. Examples of areas of action include community building and community capacity (social capital) building, neighborhood governance formation, organizing or mobilization of people and resources, and leadership development (Brown, 1996; Chaskin & Brown, 1996; Stone, 1996). Less often identified as a CCI action is the changing of local institutions (Chaskin, 1999; Stone, 1994). In Chapin Hall writings, initiatives are noted for their demonstration purpose, an action in itself, of showing the possibilities and lessons of variously structured initiatives and specific initiative approaches. In these cases, evaluation is often brought into the CCI discussion, although not usually referred to as an action (Brown, 1996; Brown & Garg, 1997; Stone, 1996; Stone & Butler, 2000). Although the types and direction of action varied across the Chapin Hall writings, the influence desired of comprehensive initiatives was ultimately related to poverty alleviation, better services, and community empowerment in low-income neighborhoods in the interest of resource generation (Brown, 1996; Brown & Garg, 1997; Chaskin, 1999; Stone, 1994, 1996). In some cases, policy influence is specifically stated (Stone, 1994), as is the desire to connect communities to systems and institutions (Stone, 1996). As in the NFI reports, the emphasis of these influences and the ideas about strategies to achieve them varied across reports, as they do in the initiatives themselves and as they may even change over the course of initiatives.


Sustainability is a term less often addressed in Chapin Hall writings than in NFI writings. The building of community capacity or ownership is itself taken as the sign of sustainability of community change (Brown, 1996; Chaskin, 1999; Stone, 1996). In general, the challenges to initiatives were also understood as the challenges to sustaining initiatives. Those writers who discuss evaluation implicitly or explicitly associate learning about challenges and successes, and the related knowledge development, with sustaining investment in community initiatives and development (Brown, 1996; Brown & Garg, 1997). However, the discussion of evaluation is not without debate. As Kubisch wrote in a volume edited by Stone (1996): Because we do not have history on our side, we need to devise ways to create political space to keep the CCI field moving forward. The less exact we are required to be about what we are doing, the less room there is for detractors to challenge our assumptions and to hold us prematurely accountable for results. (p. 38) The Aspen Roundtable writers bring to the fore broad ideas of CCIs as well as participant experiences and perceptions of CCIs. In introductory or overview statements, comprehensiveness is less often defined than posed as a solution to the problems of community building and development, of categorical poverty alleviation interventions, and of the failure to address interconnections among neighborhood issues, as on the Aspen Roundtable website (2002). However, for the Roundtable writers, comprehensiveness is understood as encompassing social, economic, and physical sectors; interconnections between these areas of circumstance or condition, need, and opportunity; and the bringing together of a range of actors (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). In addition, comprehensiveness is sometimes referred to in Aspen writings as relating to a development process (Kubisch et al., 2002; Roundtable on Comprehensive


Community Initiatives, 1997). Although the linkages between individual, family, and community circumstances are mentioned in both Chapin Hall writings and NFI reports, in Aspen writings they are explicitly connected to ideas of comprehensiveness, as is the idea of communities as complex systems (Kubisch et al., 2002). Similar to Chapin Hall writings, structure in Roundtable writings relates most often to the neighborhood governance forms put in place through community initiatives (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). However, also like Chapin Hall writings, structure sometimes refers to neighborhood infrastructure, initiative funding structure, or the social, institutional, resource, systems, and power structures influencing the communities and initiatives (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). Unique to the later of the Roundtable Voices from the Field writings is a shift from the earlier notions of structure to an explicit consideration of the leadership structures (formal and informal) of a community (Kubisch et al., 2002). In addition, although not referred to in structural terms, it is in these later Roundtable writings that writers begin to speak of specific aspects of an ecology of change, connecting their work to theoretical writings and bringing the idea of the types of individuals involved in change to the foreground (Kubisch et al., 2002, p. 18). This focus on actors appears a departure from the focus on three spheres of activities that characterized the Chapin Hall writings as well as the NFI reports. The actions referred to by Roundtable writers mirror those of the Chapin Hall writers, including attention to generation and mobilization of resources, community organizing, capacity building, strengthening social relations or social capital, and


supporting empowerment, leadership development, and community governance (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). However, in the Roundtable writings, evaluation and data usage are more explicitly linked to the purpose and work of CCIs. In addition to ideas of influence similar to those in the Chapin Hall writings -- relationships, institutions, resource streams, and policy and political change -- the idea of change occurring at the individual or family level, the neighborhood level, and the systems level is pronounced in the Roundtable writings (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). As in Chapin Hall writings, sustainability of community initiatives and their work is said to be linked to their successes, with an emphasis on learning from and sharing lessons and achievements (Roundtable on Comprehensive Community Initiatives, 1997). The dimensions that emerged through descriptive analysis of the evaluation reports offer areas for further exploration of the reports' contribution to knowledge communities.


Topical Questions as Lessons Documented by NFI Evaluators

The overview statements and evaluation statements of the NFI evaluation reports also provided information related to topical questions regarding CCIs and CCI evaluation. The statements made by NFI evaluators indicated their acknowledgement of the interpretations of specific lessons about CCIs and CCI evaluation. Cross-report analysis of evaluators’ statements served to enhance understanding of the initiative and directed attention to how evaluators documented their understanding of the initiative over time. Within these descriptions were indicators of the component identifiers that the evaluators attributed to the initiative. For example, in 1992, the first of the Chapin Hall evaluation reports included the following description of the initiative: The Neighborhood and Family Initiative is a community development initiative sponsored by the Ford Foundation and launched through the agency of community foundations in four cities (Detroit, Hartford, Memphis, and Milwaukee). The Foundation has submitted, for local exploration and implementation, a general statement of philosophy - conceptual concerns to be tested by demonstration in four different sites upon which action under NFI is to be based. This philosophy is based on two guiding principles. The first is a notion of neighborhood-focused, comprehensive development. It involves the formation and implementation of strategies that harness the interrelationships among social, physical, and economic development, which have historically been treated as separate spheres of action. These development strategies are to be employed within a geographically defined area: the "neighborhood." The second principle is that it is necessary to have the active participation, in both planning and implementation, of residents and stakeholders in the neighborhood targeted for development. In NFI, participation is organized initially through a collaborative governance structure that links community foundations (as Ford's mediators and fiscal managers of the Initiative at the local level), representatives of neighborhood interests, and representatives of potential internal and external resources. These representatives, drawn from both neighborhood residents and public and private organizations with an identifiable stake in the neighborhood, comprise the operational core of NFI: the neighborhood "collaboratives." The collaboratives are conceived of as the


generative body for planning, monitoring, and coordinating the implementation of action under NFI. Through this governance structure, by investing in the support and development of local leadership, and by integrating development strategies to address physical, social, and economic needs and opportunities within the targeted neighborhoods, the Initiative seeks to revitalize and empower whole communities and the individuals and families who live in them. (Chaskin, 1992, p. 1) This description provided basic information about the framing of the initiative and thus how the evaluators reported the initiative. They described the initiative in relation to questions of when, what, where, by whom, who, what for, how, by what approach, and upon what principles. The description appeared in the 1992 report (answering the question “when”). The evaluators described the initiative as a community development initiative (answering “what”) that involved four cities: Detroit, Hartford, Memphis, and Milwaukee (“where”). The initiative had been sponsored by the Ford Foundation and launched through the agency of community foundations (“by whom”). The initiative involved community foundations and representatives of neighborhood interests and of potential internal and external resources, drawn from both neighborhood residents and public and private organizations (“who”). The intent of the initiative was to “revitalize and empower whole communities and the individuals and families who live in them,” and the Foundation was interested in exploration and implementation of a philosophy, or conceptual concerns, to be tested by demonstration in the four sites (“what for”). A collaborative governance structure was to be used to link the community foundations and representatives. The collaboratives were conceived of as the generative body for planning, monitoring, and coordinating the implementation of action under NFI (“how”). Through the governance structure, there was to be investment in the support and development of local leadership,


and the integration of development strategies to address physical, social, and economic needs and opportunities within the targeted neighborhoods (“by what approach”). Two principles were set forth as guidance for the initiative -- neighborhood-focused comprehensive development and active participation of residents and stakeholders in planning and implementation (“upon what principles”). Appendices C and D provide a table and an overview of the information related to these topical questions that I found in the description statements. Likewise, each report included statements providing an overview of the evaluation at that point in time. The topical information that emerged from the evaluation overview statements centered on evaluation-related project descriptions, evaluation purpose, evaluation or report focus, and evaluation process. Appendix E provides the data from the evaluation overview statements. Taken together, the information in Appendices C, D, and E provides a chronological view of description and evaluation overview information throughout the initiative reports. This represents the basic information a reader might glean from the descriptions provided in the NFI evaluation. However, through analysis of the reports, a deeper understanding also emerges about the evaluation. The NFI evaluators were writing their evaluations at the same time that the Aspen Roundtable was engaging in writings about the nature of CCI evaluation. As I have documented, the Chapin Hall evaluators intended for the evaluation to be theory-based and participatory. Similarly, the Aspen Roundtable’s discussions of theory-of-change evaluation focused on the challenges of evaluating CCIs, the challenges related to theory-based evaluation, and the shifting roles for evaluators in terms of new approaches to participation. In addition, in the Aspen Roundtable writings, some writers outlined


specific approaches to theory-based evaluation, and all shared learning in the form of discussions, reflections, or recommendations for practice. Throughout the NFI evaluation, Chapin Hall evaluators repeatedly commented on the challenges to the evaluation and the specific occurrences of the NFI evaluation. Although their challenges were in line with the Aspen Roundtable writings, the Chapin Hall evaluators identified specific and often practical challenges related to the vision (or lack thereof) for the initiative and evaluation, the complexity of the initiative, data issues, and relationship challenges. In practice, as documented by NFI evaluators, these challenges were many. The evaluation was slow to get started, and funding for the local evaluations was included within programmatic funding. The local sites were reluctant to use funds for evaluation until Ford Foundation reporting requirements approached and the need for evaluation was imminent (Chaskin, 1993). As early as the 1993 reporting, there was a documented lack of communication among the central organizations (CCC, Chapin Hall, and the Ford Foundation) about the expectations of evaluation and the responsibilities for technical assistance (Chaskin, 1993). According to Chapin Hall evaluators, the national evaluation approach provided reports that were too infrequent, too long, and too general to be of use to the local sites (Chaskin, Chipenda Danoshka et al., 1997). Consistent throughout the evaluation reports were the Chapin Hall evaluators’ claims that the national and local evaluation remained disconnected and that the attention and resources given to local evaluation were disproportionate to the task of documenting neighborhood-level change in relation to initiative projects. Throughout the reporting, the Chapin Hall evaluators documented some adjustments that they had made to communicate with the local sites (e.g., interim


memos, informal conversations with collaboratives, interactive forums). However, these did not rectify the basic lack of interaction between the national evaluators and the collaboratives and between the local evaluators and their collaboratives. There was also very little baseline data collected at the local sites (Chaskin et al., 1999). According to Chapin Hall evaluators, the national evaluation was based on the idea that the local sites would provide much of the project and neighborhood data. Chapin Hall evaluators indicated that they had to fill in the gaps because of the limitations of the local evaluations. According to Chapin Hall evaluators throughout their NFI reports, just as there was a disconnect between the national and local evaluations, the local evaluations also did not develop in an integrated fashion with the local strategic planning. By the last years of the initiative funding, when the Chapin Hall evaluators reflected on the initiative challenges, they emphasized that little could be done at the end of the initiative to build interest in remedying the evaluation challenges faced throughout the initiative. Each local site experienced challenges to its specific approach to evaluation, and the Chapin Hall evaluators documented these challenges. The local sites experienced a lack of resources and evaluator turnover (Chaskin et al., 2000). Attempts at developing teams of researchers from various disciplinary or cultural backgrounds were not successful in providing a collective evaluation approach (Chaskin & Joseph, 1995). Local attempts at using participatory methods to build a learning community met with challenges and ultimately needed to focus on documentation. Figure 5 provides a summary of NFI evaluators’ ideas about CCIs in relation to evaluation.


Figure 5: NFI Evaluation Problems, Purposes, and Challenges

Problems with past community evaluations:
- A theory of development was missing from the Ford Foundation programs (Chaskin, 1992).
- A clearly defined ideal community does not exist in urban America, so community must be defined heuristically in relation to a specific problem (Chaskin, 1992).
- Past efforts did not provide documentation for analysis to advocate for the Ford Foundation approach and to refine its assumptions (Chaskin, 1993).

NFI evaluation purpose:
- Examination of process leading to an analysis of the structure of action under NFI in each site.
- Is an example, not a demonstration (Chaskin, 1992).
- Two-tiered approach, with each local site having its own approach.

Challenges of NFI evaluation:
- Lack of clarity of Ford Foundation expectations for evaluation.
- Lack of understanding about potential benefits of evaluation.
- Lack of interest by participants in evaluation.
- Lack of faith in the possibility of tracking outcomes in community initiatives.
- Combination of roles -- evaluators documenting the framework and also refining it.
- Learning needs to occur on several levels.
- Needs to reach several audiences.
- Variation in sites does not allow for predetermined measurable objectives.
- CCI scope is broad, and the CCI field of action is confounded with extraneous influences.
- CCI dynamics are complicated and nonlinear.
- Overarching goals are often too broad and ambitious to be easily evaluated.
- Building understanding from the ground up requires extensive documentation and is resource intensive.
- Qualitative data are needed to help understand process and impact.
- Quality data about communities and community circumstances are difficult to find.
- When available, data may be controlled by people outside the initiative.
- Reliance on subjective perceptions rather than independent information.
- No clear correlation between cause and effect.
- Unlikely that neighborhood change will occur and be measurable in the time of funding.
- Community members differ in ability to ask questions and engage in data collection.
- National evaluation is reliant on local sites for data and information.
- Tensions between collaboration and compartmentalization in evaluation technical assistance.
- Control over information, with local sites protecting against national researcher interpretations.
- Being responsive to collaborative needs is difficult.
- Lack of trust, by community residents, in researchers.
- Local research teams did not function well, and the participatory learning community did not last.


NFI national evaluators also found challenges in their own approach, an approach that they labeled as ethnographic in nature. Limited resources had prevented extensive on-site ethnographic work, and the combination of Chapin Hall roles (e.g., technical assistance and evaluation) was considered difficult. Chapin Hall evaluators came onto the local evaluation work and technical assistance late in the process and then shifted their roles midway to focus solely on the national evaluation rather than technical assistance. According to the Chapin Hall evaluators, the two-tiered approach had faced many challenges. However, the NFI evaluators did use the experience of evaluation to document some of the lessons learned in trying to address CCI evaluation challenges (Chaskin et al., 2000). The NFI evaluators did not cite the Aspen Roundtable throughout the NFI reports. However, the NFI evaluator lessons overlap with the issues elaborated upon in the Aspen Roundtable CCI evaluation literature and with contributions made by the Chapin Hall researchers in Aspen reports and other published articles. Using research-based frameworks: Unlike the Aspen Roundtable writers, the Chapin Hall evaluators did not discuss how research-based frameworks were to help inform the local evaluation activity. However, in their own discussion of their approach to research, they did draw from prior research to help in discussing their conceptual orientation. In their final NFI report, Chapin Hall also drew from the work of other CCIs and the evaluation literature to frame lessons learned from NFI. As Chapin Hall researchers, Stone and Butler (2000) also noted that CCIs have received criticism from researchers for being based on “largely untested assumptions about the nature of community isolation, the mechanisms through which that isolation can change, and the role of philanthropy and other institutions in promoting change strategies” (p. 1). CCIs, such as NFI, do face


decisions about whether their evaluative purpose is to demonstrate a specific model or to adapt that model as the initiative proceeds (Brown, 1996). As an example of a CCI evaluation, the NFI evaluation begins with a seemingly research-based discussion but does not include a revisiting of the research framework during or at the end of the evaluation reporting. Approaches to problems of comparison: Chapin Hall evaluators noted the difficulties with cross-site analysis within NFI, primarily because of the differences among the local sites. The two-tiered evaluation approach was used, according to Chapin Hall evaluators, to provide a means for cross-site analysis while also allowing the local sites to engage in strategies specific to their own contexts. When referring to other initiatives, the Chapin Hall evaluators did not focus on trying to compare NFI to other national initiatives but rather placed their learning alongside the learning of the other CCIs. Issues of positive causality: Chapin Hall evaluators indicated that no causality was assumed in the NFI evaluation, both because the initiative and its evaluation were meant to be exploratory and because of the impossibility of establishing a comparison so as to document what would have occurred without the initiative (Chaskin et al., 2000). Instead, the NFI national evaluation became a process documentation. Indicator and measurement issues: The Chapin Hall evaluators discussed the problems that indicators and measurement posed for the NFI evaluation. They repeatedly noted that the task of measurement was that of the local sites but that the local evaluators met with mixed success in trying to document project outcomes and virtually no success in addressing measurement of larger contextual issues. According to Chapin Hall evaluators, the COSMOS indicator report did provide some contextual data


about the neighborhoods but did not directly address any connections between NFI activities and changes in neighborhood indicators. The Chapin Hall evaluators noted that it would be in combining project data with neighborhood indicators that an understanding could be built about reasonable expectations of change (Chaskin & Joseph, 1995). However, according to Chapin Hall evaluators, the local sites tended to focus on project activity, in part due to resource constraints and in part due to the difficulty of addressing comprehensiveness. There was also a lack of clarity related to the Foundation’s expected outcomes, along with shifts in those outcome expectations (Chaskin et al., 2000). Roundtable and Chapin Hall researchers also noted the methodological issues related to the complexity of community initiatives and the difficulties both with documenting change over time and with attributing causality to those measures that are possible (Brown, 1996; Roundtable on Comprehensive Community Initiatives, 1997; Stone, 1994). As is suggested in Roundtable writings, evaluators tend to adapt their evaluation to the stage and pace of the initiative (Roundtable on Comprehensive Community Initiatives, 1997). The NFI reports document the ways in which evaluation and evaluation technical assistance were shifted over time to try to deal with issues of fit with the needs of the initiative. Working with multiple stakeholders: The NFI evaluation reports included discussions of the general difficulty of involving multiple stakeholders in the evaluation process. Chapin Hall evaluators admitted repeatedly that there was no consistent or instrumental connection between them and the local sites and that the process of developing and refining evaluation questions did not involve local participants (Chaskin et al., 2000). According to Chapin Hall evaluators, the local evaluators also


noted difficulty communicating with their collaboratives, especially in terms of providing useful data and feedback during the collaborative processes. The national evaluators noted difficulty in providing information to multiple stakeholders or audiences, including the local sites. Chapin Hall evaluators commented that some of this difficulty was related to reporting issues, with reports not being in a format or timeframe useful to various audiences (Chaskin et al., 2000). Similarly, Chapin Hall researchers noted that evaluation is filled with misunderstandings, tensions, and fears; evaluators can be pulled in many directions, and it is difficult to maintain good relationships with both funders and community participants. CCI research requires skills not always possessed by researchers, such as understanding community dynamics and comfort with diversity, and traditional methodological training does not fit community change processes (Stone & Butler, 2000). Issues of race and trust are also of concern (Stone & Butler, 2000), as is cultural sensitivity, as evaluators struggle to work with multiple clients and audiences (Brown, 1996). Issues of evaluator roles: Chapin Hall evaluators repeatedly commented on the lack of clarity around national evaluator roles and the lack of a directive from the Ford Foundation for establishing these roles. Even without a directive, the local evaluators that released public reports were very explicit about their proactive negotiations with the local collaboratives with respect to establishing an understanding of roles and expectations (Grant & Coppard, 1993, 1994; Johnson, 1998). When challenged by the local sites to take on more proactive roles, the national evaluators refused and claimed that the Ford Foundation had not encouraged this. They did note, as early as the 1993 report, that part of the NFI challenge was for participants to break out of overly compartmentalized roles


and responsibilities in order to collaborate effectively (Chaskin, 1993). With objectivity sometimes called into question, CCI evaluators are faced with a choice among various roles, dependent upon the purpose of the evaluation (e.g., formative, requiring evaluators to provide feedback; capacity building, with evaluators providing technical assistance; or co-inquiry, with evaluators serving to democratize and demystify the research process) (Brown, 1996; Kubisch et al., 2002; Stone, 1994, 1996). Specific steps for generating outcome expectations: Both NFI local sites that released evaluations did so with an explanation of the ways in which they negotiated the evaluation with the collaboratives. The emphasis in these cases was predominantly on negotiating data processes, such as how data would be collected and managed (Grant & Coppard, 1993, 1994; Johnson, 1998). Chapin Hall evaluators emphasized that local sites tried to control data in order to protect against outside interpretation. Although NFI evaluators began with directions about developing outcomes from within the initiative, Roundtable writers present the use of outside standards as a possible option for CCIs, although not one that usually fits with the purpose of the initiatives. The tendency in the writings is to focus on the goals determined by the initiative itself (Roundtable on Comprehensive Community Initiatives, 1997). Whatever standards or outcomes are used, too much is often promised in order for initiatives to receive funding, with time and complexity raising questions of how much can reasonably be accomplished during the funding period (Brown, 1996; Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997; Stone, 1994). The NFI evaluators suggested that the evaluation itself would help in addressing the issue by determining reasonable outcomes, but this promise did not manifest during the NFI evaluation reporting.


Availability and use of small-area data: The COSMOS indicators report provided the closest connection to ideas of small-area data, with the Chapin Hall evaluators admitting that they had expected the local sites to collect local neighborhood data and that this had not transpired in a useful way other than through limited neighborhood surveying. According to Chapin Hall evaluators, baseline data were not collected both because of a lack of availability and because of a lack of ideas about what might occur. According to Stone, the challenges of data go beyond lack of clarity. There are challenges related to the relevance of data to different audiences and disciplines, as well as challenges to information sharing. The latter include psychological issues, as information is related to power, and the structural impediments built into the design of initiatives, such as similar professionals talking only with each other and limited staff to collect data (Stone, 1994). Miscellaneous recommendations for evaluation practice: In the NFI reporting, the Chapin Hall evaluators also provided some miscellaneous recommendations for evaluation practice. Examples of these included spending more time at the sites, especially when using an ethnographic approach; balancing the needs of documentation with the programmatic work being done locally; and the need for greater communication between the local sites and the national evaluators (Chaskin, 1993). The Chapin Hall evaluators noted that greater attention should be paid to the broader initiative structure in addition to the focus on the local sites. The Michigan evaluators provided insights on utilizing forms to help their local collaborative collect consistent data across projects (Grant & Coppard, 1993, 1994). Other recommendations for practice can be inferred from the multiple challenges and lessons that are spread throughout the reports.


Some of the most prominent have to do with dedicating resources specifically to evaluation, dedicating local staff solely to documentation and information management, and integrating evaluation into initial planning activities. Within Chapin Hall and Aspen Roundtable writings, additional suggestions were also highlighted, including the need to choose an emphasis for evaluation -- formative, summative, social learning for generalizable lessons for policy and research, or capacity building (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). The question of ownership of data and knowledge is also important to CCI evaluation, with the priorities of various audiences to be considered not only in the types of information needed but in what they want to learn (Brown & Garg, 1997). A theories-of-change approach was cited as a possible way to create a shared framework, build trust, and facilitate the discussion between the Ford Foundation and local collaboratives (Brown & Garg, 1997). In their last NFI report, Chapin Hall evaluators provided the most specific recommendations for addressing the continuation and contributions of evaluation as they referred to the promise of theory-of-change approaches. They suggested that the following “elements” should be in place:

A rational and well supported process that explicitly (and from the beginning) ties strategic planning activities to evaluation requirements, identifies objectives and appropriate measures, collects baseline data across sites, and establishes management information systems at each site that can be maintained by local actors (CCI staff, CBOs) who are provided with dedicated resources and support to do so.

Expectations that are explicitly aligned with likely initiative effects in order to establish an appropriate approach to the question of outcomes at the individual, organizational, and neighborhood levels including an explicit focus on relational networks and establishing a counterfactual. This requires making strategic choices about both what is important and what is likely to change, and applying resources accordingly.

In cases where there are two (or more) “tiers” of evaluation activity performed by different actors, a unified management structure with a clearly defined division of labor between national and local evaluators, clear and agreed upon lines of accountability and agreed upon mechanisms for collaboratively sharing instruments, data, and analyses.

Sufficient and dedicated resources for local evaluation activities including support for building capacity of initiative governance structures and other local organizations to collect and use data and research results effectively. (Chaskin et al., 2000, p. 103)

This, one of the final Chapin Hall evaluator statements about NFI, read much like a list of traditional evaluation needs. The Chapin Hall transition from an attempt at building theory to a lament over the poor evaluation conditions provides a view of the NFI evaluation as lacking at best and devoid of learning at worst. This information and discussion about evaluation challenges and suggestions also offered evidence, beyond the evaluators’ commentary, that additional change may have occurred in relation to the initiative, supporting the idea that change itself poses challenges to CCI evaluation. For example, the descriptions sometimes focused on the notion of comprehensive development and at other times focused on the integration of development strategies. At times, the descriptions referred to communities and families as targets and, at times, individuals within communities. Sometimes the descriptions included reference to individuals with access to resources working together with residents, but at other times there was reference to representatives of resources and interests. At still other times, the descriptions included reference to institutions and actors as the initiative participants. There were also signs of change occurring in the way researchers characterized evaluation in their evaluation overview


statements. In these statements, there was an emphasis on evaluation as the construction of a theory of development. Sometimes the focus was on understanding impact and, at other times, on developing understanding within an analytic framework. In some statements, the evaluators said evaluation was focused on drawing from experiences or documenting processes of interpretation and, at other times, on processes of empowerment. The evaluators discussed aspects of the evaluation with a focus on the data but also referred to agreements made, in interactions between evaluators and collaborative participants, about the operation of the evaluation itself. Through this information, changes began to emerge in relation to the evaluators’ ideas about the nature of CCIs in relation to CCI evaluation. However, a chronological list of text segments and a map of challenges alone did not make visible the changes that evaluators attributed to the evaluation or the lessons documented by the NFI evaluators.

Documenting Change in Reporting

In addition to what the evaluators said, the reports were also evidence of changes in reporting as the initiative shifted from a centralized structure to increasingly decentralized decision-making on the part of local collaboratives. The change constructs that emerged give indication not of what the evaluators said but of what changes they represented in the evaluation reports. The change constructs included three description issues -- development, resources, and participation -- and five evaluative issues -- internal communication, external communication, data, outcomes, and context.


Description Change Constructs

I found that there were changes in three aspects of the overall descriptions of the initiative as provided by the evaluators. I have called these description change constructs; they include development, resources, and participation. In my analysis, I sought to deepen my understanding of these change constructs by exploring the ways in which the evaluators came to frame these key concepts in relation to the community initiative. I utilized these change constructs to provide a scaffold for understanding CCI evaluation reporting.

Development as a Change Construct

Over the course of NFI, changes occurred within the descriptive statements that the evaluators used to introduce each of the evaluation reports. The Chapin Hall evaluation report descriptions brought the concept of development to the foreground. In these statements, development was described as directly associated with concepts of community, neighborhood, and comprehensiveness. Throughout the full text of the evaluation reports, authors utilized the concept of development in a number of ways. Development was used as a descriptor, such as in the use of terms like development initiative, development strategies, and development activities. Development was something that was to be directed toward other concepts such as in developing strategies, developing leadership, and developing resources. Development was also an outcome that the evaluators expected to result from collaboration. Development was itself a concept


described or defined by other concepts, such as in comprehensive development and community development, or as in the notion of developing an “ideal community where people are employed and where a mix of cultures and people of all income levels and ages live among fine institutions” (Grant & Coppard, 1993, p. 2). Although various conceptions of development appeared throughout the reports, examples of the configurations of the concept in relation to notions of quality or effectiveness provide insight into the structure of the idea as reported in the NFI evaluation. Figures 6 and 7 diagram the concept of development as described by evaluators in the 1992 and, later, the 2000 Chapin Hall overview descriptions in the NFI evaluation reports.


Figure 6: Chapin Hall 1992 Report Diagram -- Development

[Concept diagram. Principle 1: neighborhood-focused comprehensive development, involving the formation and implementation of strategies that harness interrelationships between the social, physical, and economic spheres of action, to be employed in neighborhoods. Principle 2: active participation of residents and stakeholders in the neighborhood, in both planning and implementation, organized through a collaborative governance structure -- a mechanism that, by investing in the support and development of local leadership and by integrating development strategies, addresses physical, social, and economic needs and opportunities within neighborhoods.]

Figure 7: Chapin Hall 2000 Report Diagram -- Development

[Concept diagram. The Ford Foundation, with broad and ambitious goals, launched the Neighborhood and Family Initiative, which came to be a ten-year effort and a demonstration project exploring the usefulness and viability of a set of principles and a general approach to community development. NFI sought to develop, support, and create sustainable processes, organizations, and relationships among strands of development activity (housing, economic development, human service provision, and organizing) to address the social, physical, and economic circumstances of poor neighborhoods and residents, in ways that the combination of activities would lead to changes greater than the sum of the parts.]

In the 1992 evaluation descriptions, evaluators were concerned with the ways in which the collaborative governance structure would serve to develop local leadership and would support the formation and implementation of strategies. By 1993, the evaluation focus in descriptive overviews had shifted to the ways in which institutions and actors collaborate in order to foster the use of resources for development strategies. Evaluators also emphasized the need for a prescribed structure to assist in the integration of strategies. By 1995 and 1997, model building had become the focus, with attention to the role of an initiative in creating circumstances that might allow the generation of a model for development. By 1999, as the initiative funding was nearing completion and the collaboratives were making decisions about their future, the evaluation ideas surrounding development came to be directed toward implementation of development activities. In the last of the evaluation reports in 2000, the concept of development was described, not in relation to implementation, but as entailing “sustainable processes, organizations, and relationships to address the physical, social, and economic circumstances of poor neighborhoods,” for the purpose of creating synergy among strands of development activity (Chaskin et al., 2000, p. 3). Throughout the reports, the concept of development therefore moved from ideas of developing local leadership and of development as harnessing interrelationships between social, physical, and economic issues, to developing strategies around social, physical, and economic issues. There were also descriptions of development involving processes to take advantage of essential interrelatedness and to make use of interrelations between social, physical, and economic needs and opportunities. Although the configurations of concepts of development and associated


concepts changed, in the national evaluation the notions of physical, social, and economic categories of community remained consistent. Despite the consistency in conceptual categories, within both the national and local evaluation reports, development activities varied in the way that they were labeled or defined. For example, in the Michigan 1994 report, activities were separated into major grants, action grant programs, outreach programs, and program development. The evaluators claimed that the evaluation was “driven by a set of outcome measures” adopted by the collaborative, but they seemed to make no attempt to relate these outcomes to the programmatic activities (Grant & Coppard, 1994, p. 10). In one section of the Milwaukee report, activities were categorized into job development, health care, revolving loan fund, housing collaborative, and leadership development. In other sections, activities were grouped in relation to categories such as redevelopment, business development, employment, housing development, community outreach, and youth council. The reports do not reveal whether the categories of social, physical, and economic remained unquestioned, because the reflection on background assumptions that the Chapin Hall evaluators explored publicly did not include mention of these categories. As explained in the 1992 report, Chapin Hall evaluators drew the idea of social, physical, and economic development, not from past assumptions, but rather from a paper by Kravitz and Oppenheimer-Nicolau (1977). In this paper, the authors described the concept of the integration of three spheres of development -- family development, community development, and economic development. According to the Chapin Hall report (1992), these terms “correspond in substance to the concepts of social, physical,

183

and economic development discussed in this paper” and provide the conceptual framework for organizations to explore (p. 27). As I explored these categories, I found that, in the timeline of NFI, the categories of social, physical, and economic first appeared in the Ford Foundation charge to the local collaboratives.

What the four collaboratives share is a commitment to testing the notion that their neighborhood development strategies will benefit from a comprehensive lens, that is, attention to the interdependence of physical, economic, and social factors. (Chaskin, 1992, p. 66)

Similarly, another Ford Foundation publication produced during the NFI initiative described these categories as follows:

The movement toward neighborhood-based community development – now more than 30 years old – was born of a desire by neighborhood residents, especially those in poor areas, to shape the economic, physical, and social life of their communities. (“Perspective on partnerships,” 1996, p. 1)

Additional Ford promotional materials also included these categories. A “Works in Progress” pamphlet referred to a focus on the “development and enhancement of physical, economic, and social assets” (Ford Foundation, n.d., p. 1). The same pamphlet included these categories with the notion of an “integrated approach to social, physical, and economic development” (Ford Foundation, n.d., p. 6). In an organization description, the Center for Community Change, as the intermediary that helped the NFI collaboratives interpret the Ford Foundation charter, stated that its mission included providing assistance to help “poor people develop the power and capacity to improve their communities and change policies and institutions that affect their life” (Chaskin et al., 2000, p. 156). However, when involved directly with NFI, CCC, in coordination with the local collaboratives, produced charges including
statements about the social, physical, and economic categories. The charges, derived by each collaborative with CCC assistance, each included a description of the interpretation of the charge, including initiative guidelines. Despite variations in other aspects of the charges, all four charges included similar statements that related the idea of comprehensiveness to economic, social, and physical issues in the community. CCC’s strategic planning process, utilized with the NFI collaboratives, included a series of phases:
• Develop the organization and process
• Assess the environment
• Identify the strategic issues
• Formulate the strategy
• Develop the plan
• Implement the plan. (Chaskin, 1993, pp. 65-73)

Within the CCC strategic planning model description, there were no references to the categories of social, economic, and physical issues, circumstances, strategies, or strands of development. Nor was there any explanation of whether, or how, these categories were to be included within a development model. As outlined in my findings, development emerged as a change construct in the NFI reporting. Development was initially categorized into the areas of social, economic, and physical, and throughout the NFI reporting the evaluators (local and national) described a variety of categories. However, by the end of the national evaluation, the reports reflected the same categories as presented first in the Ford Foundation charter to the local sites. This persistence of conceptual categories led me to question whether this categorization was unique to NFI or whether the categories had emerged in other literature as well.

Beyond the NFI evaluations, the categories of social, economic, and physical issues did emerge in Chapin Hall publications about community development. In synthesizing the work of approximately 50 comprehensive local initiatives, Brown (1996) wrote: “What all these programs share conceptually is an appreciation of the interdependence of physical, economic, and social development strategies and a desire to create synergy among them” (p. 162). In the article, Brown went on to describe aspects of “community life” differently, including economic opportunity, physical development, safety, well-functioning institutions and services, and social capital (1996, p. 164). She noted that:

Comprehensive, in this case, does not mean that all five spheres of activity must be addressed at the same time, nor does it mean that simultaneous but independent initiatives necessarily add up to a comprehensive approach. Rather, a comprehensive lens assures attention to the interrelationships among areas as a way to understand the neighborhood’s needs and strengths and to shape development strategies that are most likely to have a synergistic impact over time. (1996, pp. 164-165)

She went on to discuss the difficulties of ensuring an integrated and comprehensive approach from categorical funds and the desire to understand policy impact at the neighborhood level. Stone (1994), too, referred to the categories of “social physical and economic development” and the “social, physical and economic lives of children, families and communities” (pp. 5-11). Yet in 1996, Stone referred to alternative categories of social, structural, and economic aspects of community revitalization (Stone, 1996, p. viii). In other Chapin Hall-listed reports discussing community capacity, power, and race issues within comprehensive initiatives, the categories of physical, economic, and social were not prominent (Stone & Butler, 2000). Rather, Chaskin and Brown (1996) provided categories for development including human capital, social capital,
physical infrastructure, economic infrastructure, institutional infrastructure, and political strength. On the Aspen Institute’s Roundtable on Comprehensive Community Initiatives website, comprehensive development was related to the strengthening of all “sectors of neighborhood well-being, including social, educational, economic, physical, and cultural components” (Roundtable on Comprehensive Community Initiatives, 2002). Additional Aspen Roundtable Voices from the Field publications categorized comprehensiveness as addressing circumstances, opportunities, and needs of neighborhoods but then focused specifically on social, economic, and physical “sectors,” “conditions,” or development (Kubisch et al., 2002, p. 1; Roundtable on Comprehensive Community Initiatives, 1997, p. 8). Within the Aspen publications, comprehensiveness was further described as involving the integration of “economic, social, political, physical, and cultural” issues (Kubisch et al., 2002, p. 22). With reference to development, the categories of social, physical, and economic issues, circumstances, strategies, or conditions persisted despite alternative categorizations in the surrounding literature and even in the local NFI evaluation text. These categories were not included in NFI reports as assumptions to be questioned, and the categories seemed to coincide with the Ford Foundation ideas of development rather than being grounded in NFI local experiences. In the Aspen Voices from the Field writings, as well as throughout the NFI reports, the concept of integration is never fully addressed as an associated concept, although it is often used to describe a perhaps higher goal than the mere compilation of discrete categories of activities. As a change construct, then, development became central to the ideas of evaluation reporting as evaluators were
faced with narrative decisions about how to mediate the experiences of the local initiatives, the conceptualizations of the intermediaries’ research, and the larger CCI development writings as exemplified in the Aspen Roundtable writings.

Resource as a Change Construct

The changes and consistency in ideas of development point to the potential of funding to contribute to development ideas that in turn may guide the work of community collaboratives. As seen in NFI, funding as a resource can come through categorical programming or through operational funding to a collaborative, or it can be considered a type of resource to support activities that develop certain aspects of a collaborative. Resources also emerged as a change construct, as configurations of ideas varied throughout the NFI reporting. In the Chapin Hall reports, the concept of resources was clustered with the notions of internal and external, development and exploitation, collaboration, public and private sector, new and available, and ideas of inside or outside a community. The idea of resources was presented in the evaluation variously as including types of resources (such as training, consulting, staffing, managing, and coordinating), locations of resources, and processes (such as representation) through which resources were made available to the initiative. In the 1992 Chapin Hall report overview description, the evaluators presented the collaborative governance structure as the hub for generating resources for community development. Alternatively, in the 1993 report, resources themselves were to be developed. In the 1995 and 1997 report descriptions, resources were to be sought out and
utilized. However, it was in the 1999 Chapin Hall report that a significant change occurred in the way evaluators described resources, giving indication of a key conceptual characteristic influencing the understanding of the term. At that time, the initiative funding from the Ford Foundation was coming to an end, and a shift took place in how evaluators referred to resources in the description statement of the reports. During the period of initial funding, the concept of resources in the descriptive statements was open for collaboratives to define, generating a variety of indicators of a resource-full collaborative. However, when the national funding was coming to an end, the Chapin Hall reports indicated that new “sources of funding” were to be sought, with resources becoming more narrowly defined as monetary. By the 2000 Chapin Hall report, the entities that were using available resources or seeking out new sources were called actors (not representatives) and organizations (not institutions). Although not categorized as such, the 2000 Chapin Hall report includes summaries of the local activities that give indication of the ideas of resources that emerged from the actual work of the collaboratives. These included:
• Program investment funds
• Dues support
• Networking through job placements and referrals to social service agencies
• Training such as leadership workshops and youth development
• Technical assistance including meeting facilitation, grant-writing, planning and strategizing support, grant applications, development of policies, procedures, and criteria, needs identification, and program development
• Outreach through lobbying, publicity, information dissemination, and government communication
• Staffing through administration, management, and organizing of events
• Equipment provision
• Manual and skilled labor

The CCC development model utilized for supporting the NFI communities in identifying and understanding resources treated resources as a question in the implementation stage of the strategic planning, with collaboratives identifying and mobilizing specific resources after their goals were established (Chaskin, 1993). Ford Foundation materials produced during the time of NFI and related to community development did not include discussion of specific resources needed for change in communities but rather identified professional groups needed for partnerships to support development. These groups included the financial community, the foundation community, the corporate community, government officials, and community development corporations (“Perspective on partnerships,” 1996). The Ford Foundation charter for NFI included the following statement challenging new thinking about participation:

In the implementation phase of the Initiative, the collaboratives will make recommendations to the community foundations for the use of a funding pool that will be set aside for the Initiative. The funding pool will augment regular public and private resources being devoted to the neighborhood, while challenging these same sources to participate in new and creative ways in the neighborhood’s development. (Chaskin, 1992, p. 66)

Resources were most often treated without specific description, with the emphasis on access to resources or on obtaining resources. What constituted resources only occasionally received specific labels. For example, researchers made reference to foundation resources devoted to community initiatives (Brown & Garg, 1997), professional resources (Stone, 1996; Stone & Butler, 2000), resources that come from within the community (Stone, 1994, 1996), resources from a network of CBOs (Chaskin, 1999), and human, social, and financial resources (Stone, 1996). Stone, in secondary writings, also discussed funding sources related to community collaboration (Stone, 1994) and, in a journal article describing characteristics of communities that have capacity, Chaskin wrote:

The second characteristic of a community with capacity is the existence of a level of commitment on the part of particular individuals, groups, or organizations that take responsibility for what happens in the community and that invest time, energy, and other resources in promoting its well being. (Chaskin, 1999, p. 6)

The concept of resources was thus consistently treated as integral to community development and the work of collaboratives. However, a CCI model presented in the Aspen Roundtable’s Voices From the Field report showed linear linkages between goals, principles, operational strategies, and programs but did not include the question of resources within the model (Roundtable on Comprehensive Community Initiatives, 1997). Analysis of the Aspen Institute’s Voices From the Field reports (1997, 2002) revealed that sometimes resources were referred to as technical and at other times individual people were described as local resources; in other cases resources were described as “external structures” that affect communities (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). Despite the considerable
variation throughout the NFI reports, as in writings about community development, when the Ford Foundation funding came into question, the use of the term resources became narrower in meaning in the reporting done by evaluators documenting the initiative.
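
To make the cluster-comparison logic concrete, the following minimal sketch (in Python) diffs the concepts associated with resources between adjacent report years. It is illustrative only: the year-to-concept sets are my own condensations of the report language discussed above, not coded data from the study, and the names are hypothetical.

    # Compare concept clusters around "resources" across report years.
    # The sets are illustrative condensations, not coded study data.
    resource_clusters = {
        1992: {"governance structure as hub", "generating resources"},
        1993: {"resources to be developed", "internal and external"},
        1995: {"seek out", "utilize", "public and private sector"},
        1999: {"seek out", "new sources of funding", "monetary"},
    }

    years = sorted(resource_clusters)
    for earlier, later in zip(years, years[1:]):
        dropped = resource_clusters[earlier] - resource_clusters[later]
        added = resource_clusters[later] - resource_clusters[earlier]
        print(f"{earlier} -> {later}")
        print(f"  dropped: {sorted(dropped)}")
        print(f"  added:   {sorted(added)}")

Run over the full set of clusters, such a comparison surfaces exactly the kind of shift described above: the concepts entering the 1999 cluster are monetary, while the broader notions of generating and developing resources drop away.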

Participation as a Change Construct

Although participation might be classified as a type of resource, participation, as treated in the NFI evaluation reports, emerged as a separate change construct with shifting emphases in the description statements. Throughout the body of the NFI evaluation reports, the notion of participation was often coupled with other concepts. For example, evaluators discussed active participation and citizen participation. Chapin Hall evaluators seemed to assume participation was occurring through collaboration or a prescribed structure. As described in overviews of the NFI evaluation reports, Chapin Hall evaluators discussed participation as a means toward ensuring viability, relevance, and equity in development strategies, with specification that participation should also be meaningful and active. The text of the Chapin Hall 1995 report indicated that the Chapin Hall evaluators identified the issue of citizen participation as an operational issue related to one of the initiative principles and treated it as parallel to the idea of institutional collaboration as another operational issue of the same principle (Figure 8). In the NFI evaluation descriptions, citizens were consistently separated from those individuals assumed to have institutional affiliation. Within the description statements of the evaluation reports there were no indications of the difference or linkages between
the concepts of meaningful and active, nor explanation of how institutions were to collaborate with participating actors.


Figure 8: Chapin Hall 1995 Report Diagram -- Participation

[The diagram depicted the 1995 report’s treatment of principle 1 and its two parallel operational issues. Citizen participation: neighborhood residents participate actively and meaningfully in planning and implementation phases, necessary in order for the set of strategies to be viable, relevant, and equitable to people affected inside the neighborhood. Institutional collaboration: collaboration fostered among relevant institutions and relevant actors throughout the larger community, who utilize available public and private sector resources and seek out new ones.]

Participation as a change construct in NFI reporting was accompanied, in surrounding literature, by the concern for “deep and representative community participation” that appeared in literature about foundation concerns (Brown & Garg, 1997, p. 6). Researchers also raised concerns about participation in their questioning of CCIs (Stone, 1996). Chaskin wrote:

How much participation (and of what sort) is necessary to promote a meaningful connection between organization and its constituency is unclear. Indeed, it is unclear how much is even possible, given the costs (time, energy, money, reputation) that may accrue to both the organization and the potential participants, and the lack of clarity (and often faith) on the part of many residents regarding likely benefits. (Chaskin, 1999, p. 20)

The Ford Foundation charter for NFI referred to funding as a means toward encouraging public and private sources to participate in “new and creative ways in the neighborhood’s development” (Chaskin, 1992, p. 66). The NFI local charges did not specify how to achieve these goals, and the local collaboratives often had difficulty with the nondirective stance of the Ford Foundation. Empowerment, for some researchers, was associated with learning, with the educational aspects of participation highlighted and with researchers explicit about their desire to understand participant meanings (Stone & Butler, 2000). Researchers supported the study of CCIs, claiming that the “experiential learning arising from participation in the CCI process” would contribute to understanding and improved funding strategies (Brown & Garg, 1997, p. 22). At times, the notion of participation was qualified in relation to the type or arena of participation. In these cases, the focus was on discussions of levels of participation, participants as members of boards, participants as involved in leadership development programs, and organizations as participants (Kubisch et al., 2002), with the attitudes of individuals toward participation often being the focus of
inquiry (Kubisch et al., 2002; Roundtable on Comprehensive Community Initiatives, 1997). Researchers also noted that involvement takes place in a complex arena, with constraints and tensions. For example, researchers argued that residents participating in CCIs “may doubt their right to participate or their ability to do so” (Kubisch et al., 2002, p. 42). According to researchers, throughout a CCI process, participants may also come to expect that their “experiential knowledge” would be respected by others (Roundtable on Comprehensive Community Initiatives, 1997, p. 46). The way in which researchers viewed the empowering aspect of participation was evident in their discussion of the processes by which CCIs were used as a strategy for development. Researchers noted that CCIs focused on developing mechanisms for resident participation and the desire for a “participatory, representative, and empowered governance structure,” even though such structures slow the process of development (Kubisch et al., 2002, p. 28). Participatory research was also mentioned, and noted as itself a means toward enhancing the empowerment of participants (Stone, 1996, p. 56). For example, participatory research was described as involving participants as “researchers” in knowledge production (Connell et al., 1995, p. 217). In discussing evaluation and participation, researchers claimed that “if a participatory process for defining evaluation methods and measures is used, it could reduce various tensions within an initiative” (Kubisch et al., 2002). However, as described in the NFI evaluation reports, participation in evaluation caused its own forms of tension, without a clear indication of the impact that evaluation tensions might have had on the ideas of development, the leveraging of resources, or the initiative overall.


Evaluation Change Constructs

There were five change constructs that emerged in reference to evaluation as a concept: internal communication, external communication, data, outcomes, and context. The data used to develop these change constructs came from those segments of text in which the evaluators discussed the concept of evaluation, as delineated by their use of any word derived from the root word of evaluation. The evaluation change constructs provide a deeper understanding of the issues evidenced in those NFI reporting segments that related to the evaluators’ understandings of evaluation.
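
As a minimal illustration of this segmenting step (in Python, assuming plain-text report files with paragraphs separated by blank lines; the pattern and helper name are hypothetical, not the instruments used in the study):

    import re

    # Keep any paragraph containing a token built on the root "evaluat-"
    # (evaluation, evaluators, evaluated, and so on).
    EVAL_ROOT = re.compile(r"\bevaluat\w*", re.IGNORECASE)

    def evaluation_segments(report_text: str) -> list[str]:
        """Return the paragraphs of a report that use the root word."""
        paragraphs = [p.strip() for p in report_text.split("\n\n") if p.strip()]
        return [p for p in paragraphs if EVAL_ROOT.search(p)]

Applied to the publicly released reports, a filter of this kind would yield the pool of evaluation-related segments from which the five constructs could then be coded by hand.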

Internal and External Communication as Change Constructs

Communication served as a concept through which changes were evidenced in the NFI evaluation reports. The evaluators noted communication as it occurred internal to the initiative. Although less often, they also described communication as it occurred with external entities such as local institutions. Internal to the initiative, there were different types of communication, such as communication across sites, informal communication between the central organizations, communication between the local collaboratives and Ford Foundation program managers, between the collaboratives and their community foundations, and between the community foundations and the Ford Foundation managers. According to Chapin Hall evaluators, difficulties in communication were addressed throughout the initiative. When sites indicated that they felt detached from the Ford Foundation
decisions about the initiative, Ford Foundation managers brought in a consultant to facilitate communication. However, throughout the initiative, Chapin Hall evaluators repeatedly stated that there was difficulty in communication between the local and national evaluations. In response, Chapin Hall made some attempts to alter its methods of communication by providing more frequent summaries of findings and engaging in informal discussions with the local collaboratives. As early as the 1993 Chapin Hall report, Chapin Hall evaluators documented that communication between Chapin Hall and the local sites was “actually infrequent, making the links between national assessment and local assessment limited” (Chaskin, 1993, p. 41). These issues continued throughout the evaluation reporting. Chapin Hall evaluators noted that direct communication occurred between them and the local sites for purposes of “developing surveys, discussing issues and ideas, and lessons and methodologies” (Chaskin, 1993). There was also difficulty in communication between the central national organizations – CCC, Chapin Hall, the Ford Foundation – as evidenced in the repeated Chapin Hall comments about confusion over roles in evaluation. Despite the attention given in NFI reports to the difficulties of communication, it remained a key area where change occurred, although seemingly without resolution. The Chapin Hall evaluators continually pointed to the lack of communication and blamed this lack for the deficiencies in the evaluation. By 1995, a shift had occurred when Chapin Hall’s role in cross-site communication was separated out from the evaluation endeavor. In this year, Chapin Hall evaluators said that evaluation activities were limited, and they blamed this on the
requirement that evaluators utilize evaluation for communication rather than just providing detailed analysis of work and progress. At this point, the Chapin Hall evaluators stated that it was communication from the collaboratives that was needed to make the evaluations useful to the sites. They described their attempts, and the results, of making documents available to participants and receiving feedback on the national evaluation reports. In 1995, the evaluation also began to include statements about the desire of the Chapin Hall evaluators to engage in formal network analysis of communication rather than the ethnographic approach to the evaluation that they claimed to be using. According to Chapin Hall evaluators, throughout the initiative, the Ford Foundation’s response to the limitations with communication was to create a separate mechanism for direct communication between the sites and the foundation: a consultant. The Ford Foundation’s response resulted in more formal separation between communication and evaluation as functions of the initiative and between the national evaluation, technical assistance, and local assessments. By the 2000 report, the Chapin Hall evaluators commented on how the roles and functions, including communication, had become distributed:

Different initiatives have tried to address this broad range of needs through various ways of structuring and supporting provision by a number of types of technical assistance providers to a variety of different kinds of recipients, from groups of residents to organizational collaboratives to individual CBOs. In this way, the range of roles and functions required to support initiative action – establishing and maintaining commitment to a guiding mission; fostering communication among participants; collecting, analyzing, and presenting data; promoting effective planning; supporting outreach and organizing; developing management systems and staff capacity – have been distributed by different initiatives to different constellations of providers, and roles have been traded off among funders, evaluators, intermediary organizations, independent consultants, and providers of specific kinds of technical assistance. Depending on how this has
been structured, there have been more or fewer problems with coordination, more or less tension around the source of authority and lines of accountability, and technical assistance has been more or less responsive and effective. (Chaskin et al., 2000, pp. 95-96)

This admission followed the Chapin Hall 1997 report, in which the lack of communication between sites was blamed for causing the local sites to develop differently. Issues related to external communication were reported less often in the NFI reports than issues of internal communication, and the risks associated with external communication were not prominent. However, prominent in the discussion of evaluation were the Chapin Hall claims that they and the local evaluators met with difficulties in sharing findings with the full range of audiences. The Chapin Hall evaluators indicated that the needs of the local collaboratives were quite different from those of the primary audience for the national evaluations, which included policymakers, researchers, and others in the community development field. In this way internal audiences, such as the collaboratives and participants, and external audiences, such as policymakers, were separated as needing different forms of communication. Highlighted differences included the type of language used, the level of detail, and the timing of evaluation reports: local participants needed a less professionalized language, encountered more frequently, in order to influence local collaborative decisions, whereas the external national audiences required more generalized information, with timing less integral to decision-making and a language that was more professionalized.

Stone (1994) highlighted an understanding of communication as it occurred within CCIs, yet her conversation about information sharing did not address the complexities of communication within CCI structures. She wrote:

A common thread connecting the issues and questions identified in this section is the necessary but surprisingly difficult tasks of collecting, documenting, and sharing information. Because these comprehensive initiatives strive to effect policy change in real time, there is an even more urgent need to learn while doing, instead of waiting for evaluations at the end of the road…. Most important, it involves the willingness to share unfiltered information... Sharing information, committing an idea to paper or computer screen, and allowing other people in on day-to-day problems of implementation or theoretical disagreement, challenge the standard operating procedure of most institutions involved in these endeavors. Communication and information sharing has to be seen as a good in and of itself – with benefits to the information provider as well as the information recipient. For this, the risks associated with discussing problems must be reduced. (p. 17)

The challenges of communication were rarely addressed in any detail within the Aspen empirical writings about the practice of CCIs. NFI evaluation reports included documentation of the complaints of local collaboratives about the lack of communication from the Ford Foundation regarding decisions related to the initiative. The community foundations, too, indicated feeling left out of the decisions about the initiative. Researchers emphasized that communication between sites and funders is indeed often filled with “dishonest communication” (Brown & Garg, 1997, p. 1). Stone commented that, in order to work effectively, communication must be seen as valuable to both parties and that the risks of discussing problems must be limited (Stone, 1994, p. 17). As documented in an Aspen Voices from the Field report, CCI staff require excellent communication skills because of the breadth of goals and the range of participants and constituencies involved (Roundtable on Comprehensive Community Initiatives, 1997). However, within the NFI report sections on evaluation and within the
extended writings of the organizations involved, the specific communication skills required for successful CCIs were not elaborated, nor were the difficult issues of identifying internal and external communication needs, as they relate to the notions of CCI evaluation, addressed.

Data as a Change Construct

The Chapin Hall description of the data used for their NFI national or cross-site analysis included process data, site-produced documents, data on perceptions and attitudes of residents, and data about the neighborhood. The national evaluators relied upon the local assessments to provide data about the local initiatives and admitted assuming that the contextual data (organizational, cultural, political, and social-structural) would come from the local assessments as well. However, as early as the 1995 report, the Chapin Hall evaluators noted the limitations in the collection of local data (Chaskin & Joseph, 1995). Chapin Hall evaluators suggested that collecting adequate local data would require putting in place data collection mechanisms for everyday administrative use in the local collaboratives. They made a case for increased data collection at the local sites and stated that collecting local data would also assist collaboratives in clarifying goals, linking together information about projects for greater understanding, and contributing to an understanding of reasonable neighborhood-level data. Chapin Hall had been brought into the technical assistance role with the local sites in 1994. By 1995, a shift had already occurred in the NFI evaluation services, with Chapin Hall discontinuing its work with the local sites. The stated reason was “in
part to avoid the confusion and complications caused by the assumption of a dual role (evaluation and technical assistance)” and “in part due to an inability to provide the ongoing, dedicated staff time required while continuing to address its core responsibilities” (Chaskin, Chipenda Danoshka et al., 1997, p. 85). Prior to 1994, CCC had been involved in the local assessments. After the Chapin Hall evaluation technical assistance concluded, the COSMOS Corporation was responsible both for providing neighborhood indicators and for supporting the sites in their evaluation. Despite these multi-faceted attempts to support local evaluation, by the last reports, Chapin Hall evaluators were reiterating that the local data had not yet provided adequate documentation to support the national evaluation and that the national evaluators had to fill in for the data not collected (Chaskin et al., 2000). Chapin Hall evaluators did note that some of the issues around data collection included the tendency of the local participants to protect information from national evaluators, reportedly because of negative experiences with outside researchers. However, two of the local sites did release evaluation reports, including data collected in response to specific project activities. In the Michigan 1993 evaluation report, the local evaluators were labeled as consultants who handled the evaluation process. The 1994 Michigan report also included information about the formative nature of the evaluation. “Formative” in the Michigan reports meant using evaluation data to influence collaborative activities, rather than documenting or understanding process (Grant & Coppard, 1993, p. 3). As consultants, the Michigan evaluators wrote about their negotiations with local collaboratives about data needs and responsibilities for collecting data. Evaluators indicated their desire to learn from
participants how they envisioned evaluation. In the Michigan 1994 report, evaluators shared responses to a question about the meaning of evaluation to participants. The local participants related evaluation to four types of data: observational and counting data, presence of tangible products, client satisfaction surveys, and archival measures. Throughout the national NFI evaluation text, data was also discussed in relation to its type, the challenges associated with obtaining it, and the responsibility for collecting it. Although some of the data concerns of the national and local evaluations were similar, the Chapin Hall evaluators repeatedly noted that local data was insufficient and that providing technical assistance around data conflicted with Chapin Hall’s role as an outside national evaluator. One of the challenges that Chapin Hall evaluators cited throughout the reports related to the tendency of the locally collected data to be based on subjective perceptions rather than independent data. The local evaluations tended to include listings of responses to specific questions whereas, in the national evaluation reports, similar data was reported in the form of quantified responses, charts, diagrams, and graphs. In the 1999 Chapin Hall report, new barriers became the focus for the Chapin Hall evaluators. According to the Chapin Hall evaluation reports, minimal baseline data had been collected at the beginning of the initiative, limiting their ability to evaluate the initiative. To increase data collection, the Chapin Hall evaluators encouraged decentralization of data collection.

Although tracking such program-level outcomes is at least part of the intent of local evaluations using the logic model as an organizing technique, evaluation is expensive, and local evaluators may be attempting to cover too much for too little, and are relying, in some cases, on thin data (e.g., sparse documentation) to draw
their conclusions. In trying to cover both progress toward outcome goals and process issues as they arise, local evaluation resources are further stretched. (Chaskin et al., 1999, p. 17)

The Chapin Hall evaluators also noted the potential limitations in decentralizing data collection, including lack of local interest, commitment, skill, and resources, and limited ongoing technical assistance. They also acknowledged that, at the end of an initiative, there might be less incentive to work on evaluation. The Ford Charter and the local charges had not paid explicit attention to the nature or use of data within the process of the initiative. As documented by Chapin Hall evaluators, local participants were disturbed by the lack of direction given by the Ford Foundation, one area being data collection. In a separate study, Brown and Garg (1997) commented on the complicated nature of data collection within funded initiatives. They wrote:

The complexities of the relationship between a CCI and its funder or funders make gathering reliable information from either party a difficult task. This reality was reinforced for us early on in the study, when we realized that we would not be able to create a candid interview situation or obtain data that were sufficiently complete and nuanced unless we agreed not to reference particular individuals or initiatives in the report. (p. 23)

There was a tendency for researchers, within the NFI reporting and outside of it, to present quotations as data and to associate those data with types of individuals (e.g., sponsors, directors, residents) rather than with specific individuals set within an understanding of a targeted case. For example, data in the form of quotations was also identified in Voices from the Field reports. In these reports, quotations were often attributed to those holding various roles in the initiative, with all respondents listed at the end of the document rather than associated with individual quotes (Kubisch et al., 2002;
Roundtable on Comprehensive Community Initiatives, 1997). Although the literature by organizations involved in NFI dealt mostly in qualitative data, the NFI reports themselves (both local and national) utilized a combination of numerical and verbal data to document the initiative outcomes.

Outcomes as a Change Construct

Even though they documented that a reason for limited data collection in the initiative was a lack of faith in the possibility of documenting outcomes in CCIs, Chapin Hall evaluators framed the national study as an approach to understanding possible outcomes. In the 1992 report, Chapin Hall evaluators set out their initial task, stating:

In addition to eliciting operational lessons, such a process assessment will provide essential information on how to construct reasonable expectations for such an initiative. It will illuminate the specific dynamics of action and the inherent constraints, conflicts, and opportunities presented during its implementation. Further, by relating changes in the neighborhoods to the processes of strategic planning and program implementation, a process assessment can begin to clarify the types and degrees of outcomes that might be looked for. Of course, the evaluation cannot attribute direct causal relationships between action taken under the Initiative and broad objective measures of neighborhood change. The measure of such change will, however, help to anchor our understanding of the process of the Initiative within the specific local contexts of each site, and will help us to make informed judgments as to the possibility of change, as well as to draw some thoughtful conclusions regarding the efficacy of the approach represented by NFI. (Chaskin, 1992, p. 53)

By 1993, the Chapin Hall evaluators described the NFI approach to evaluation as being in opposition to traditional evaluations because of the differences in addressing outcomes. For Chapin Hall evaluators, traditional evaluations focused on “predetermined outcomes” standardized across sites and narrowly focused on quantifiable outcomes (Chaskin, 1993,
p. 55). In contrast, the Chapin Hall evaluation focused on both doing and learning, with an emphasis on understanding the “impact process has on product or outcome” (Chaskin, 1993, p. 55). According to the Chapin Hall evaluation, the two-tiered design of the evaluation was intended to allow for variation at the local level in outcome expectations. The local assessments, with technical assistance, were to meet both the needs of the local collaboratives and the needs of the national evaluation. As they discussed the outcome issues related to the evaluation progress at each of the sites, the Chapin Hall evaluators emphasized a common tension between a focus on outcomes and a focus on process, and the reliance on participants’ subjective perceptions to measure progress toward outcome goals. They discussed the challenges faced by local sites as they decided to focus on projects but still grappled with how to show a relationship between project-level outcomes and neighborhood change, and a relationship between strategies and outcomes (Chaskin & Joseph, 1995). Chapin Hall evaluators emphasized the importance of the local assessments to the national evaluation and noted that local assessments should focus on outcomes but might also include some process. Chapin Hall evaluators expected that the evaluation would help them in understanding reasonable outcomes and how to document process. Yet the evaluators admitted that their evaluation was exploratory in nature and could only give some ideas about how to further develop evaluation for comprehensive initiatives. The Chapin Hall evaluators noted, as early as 1993, that despite the comprehensive focus, evaluation would “inevitably” result in a focus on “targeted strands of neighborhood outcomes, based on particular sets of programmatic activity” (Chaskin, 1993, p. 59).

Chapin Hall evaluators claimed that the local challenges arose from varying abilities to collect data, measure outcomes, and provide feedback to the local collaboratives. They also noted the difficulty at the local sites with distinguishing project-level outcomes from project-level activities and outputs. In the 1997 report, Chapin Hall evaluators recommended that the focus of the national evaluation might need to shift from cross-site analysis to local contexts, with the provision of technical assistance (Chaskin, Chipenda Danoshka et al., 1997). Chapin Hall evaluators described a number of barriers to the evaluation. They included, as a barrier, a lack of local focus on outcomes as opposed to the existing focus on process. This limited local focus undermined the potential of the evaluation to offer detailed understandings of outcomes at the “individual, organizational, and neighborhood levels.”

One barrier is a lack of clarity regarding goals and outcome expectations. Although collaboratives have elaborated some goals in clear and actionable ways (particularly to the extent they are connected with specific projects), many of the goal statements remain at a very general level. A local "theories-of-change" evaluation approach attempting to connect strands of activity to neighborhood change goals was not consistently engaged, and the use of "logic models" guided by COSMOS (the TA provider for local evaluation) is relatively new. In some cases, attempts to understand outcomes rely largely on collaborative members' perceptions of success toward outcome goals; in others, the outcomes to be measured – "leadership skills," "capacity building" – are too broadly labeled to provide guidance on how to recognize them. (Chaskin et al., 1999, p. 16)

The Chapin Hall report categorized evaluation barriers as technical, motivational, incentive-related, and related to perceived usefulness, and noted that one result was that little baseline data was collected and that aligning data with goals was difficult. However, two of the sites released outcome reports, with Michigan evaluators explaining that their outcomes were decided with the collaborative members. They
described this work with the collaboratives as a process that made the report different from one based on pre-decided outcomes. The 1998 Milwaukee report referred to its evaluation as outcome-based and included a description of the process by which those evaluators also worked with collaboratives to identify outcomes. The report included lists of outcomes related to specific questions asked of those involved in the initiative. Along with the evaluation difficulties, Chapin Hall evaluators noted that programmatic activity was actually targeted to few people and was relatively traditional. Also, according to Chapin Hall evaluators, evaluation follow-up with individuals was too expensive, and even with a logic model approach, sites were expected to cover too much with the limited resources. They suggested drawing from existing administrative data to support local evaluation data, as they noted the COSMOS indicators work was doing, although this indicator work did not appear to address change over time.

Skepticism regarding the possibility of capturing individual- and community-level outcomes stemmed largely from the limited resources and capacity of the local evaluations, the lack of clearly defined outcome objectives, and the broad range of outcome targets. In addition, the value of attempting to track community level outcomes was questioned on the basis of unreasonable expectations: given the relatively low level of resources provided to change relatively large and complex neighborhoods, it is unlikely that measurable change would occur at the neighborhood level over the time frame of the initiative. (Chaskin et al., 2000, p. 102)

Chapin Hall evaluators therefore blamed the limited outcome results on a “lack of faith in the possibility of really tracking outcomes” (Chaskin et al., 2000, p. 102). The COSMOS report dealt in indicators rather than referring to outcomes of program activities or any specific researchable questions. The ideas of process outcomes and
project outcomes, and the differences in comprehensive outcomes in reporting, were not dealt with in detail. The Ford Foundation charter and the CCC development models for NFI did not address outcomes. However, the idea of outcomes was addressed in that outcomes were set up with “productive capacity centered largely on creating jobs and providing services” and documented as in opposition to the planning and advocacy work that organizations conducted (Chaskin, 1999, p. 23). In other research, programmatic processes of community development were often related to enhanced outcomes (Roundtable on Comprehensive Community Initiatives, 1997, p. 24). Authors wrote:

The principles of community, comprehensiveness, participation, collaboration, democracy, empowerment, and capacity building have served community-change initiatives well, in some ways. They have drawn attention and sometimes significant resources to poor neighborhoods. They have shifted the focus from categorical, remedial approaches to holistic, asset-based, developmental ones. The process of applying the principles has driven community revitalization efforts to produce real outcomes – for businesses, jobs, housing, services – and vital connection among organizations and individuals. (Kubisch et al., 2002, p. 75)

Throughout Voices from the Field writings, authors documented the tensions between understandings of process and product in CCIs (Roundtable on Comprehensive Community Initiatives, 1997). However, it was noted that, with community building as an outcome as well as a principle, the documenting of CCI change may need to include specific understandings of community capacity before change can be assessed (Chaskin, 1999). The difficulty of documenting a broad range of outcomes, as is necessary in CCIs, and the need to show outcomes in order to keep participants interested in the initiative were both challenges faced in relation to outcomes (Kubisch et al., 2002). In the NFI
reports, a broad range of outcomes were indeed discussed. The final reports ended less with specific outcomes described by evaluators explaining what they had learned, or with how outcomes could be associated with broader neighborhood-level change, than with comments on how difficult outcomes were to achieve and to document in a comprehensive initiative (Chaskin et al., 2000). As evidenced by NFI reports and additional organizational writings, the challenges to outcome assessment lay both in the ability of sites to identify outcomes and in their ability to demonstrate them.

Context as a Change Construct

Although context was a concept discussed in less detail than the other change constructs, it is an important one because of its relationship to concepts of evaluation. In many ways, the whole story of NFI is a story about the interplay between groups of participants and broader arenas of context, including organizations, cities, national initiatives, and larger social and political issues. According to evaluators, the NFI collaboratives were intended to address contextual factors of their neighborhoods. These factors included the inequalities documented in their neighborhoods as compared to the broader city and metropolitan area. The Chapin Hall evaluators’ discussions evidenced varying conceptualizations of the idea of context. Chapin Hall evaluators made reference to local context as it related to the development of the charge for the local initiatives.

The governing principles and the general operational structure were developed centrally by the Ford Foundation. The principles were crafted into a six-point "charge" by the Center for Community Change in conjunction with site representatives, but each local initiative has substantial freedom to interpret the
charge with reference to local context and local needs and to plan accordingly. Ultimately, the NFI charge was modified by each site, and it is the local charges (and their subsequently developed strategic plans) that guide the collaboratives. (Chaskin & Joseph, 1995, p. 1)

The report, as with earlier reports, included references to local contexts and also to the failure of the evaluation to describe broader contextual issues. The evaluators reported that, through the evaluation, they were learning about how the sites had “interpreted and operationalized the principles given their own purposes and contexts…[rather] than … about the inherent value and usefulness of the principles themselves” (Chaskin & Joseph, 1995, p. 92). Examples of contextual issues given included “organizational, cultural, political, and social-structural influences” (Chaskin & Joseph, 1995, p. 84). The Chapin Hall evaluators offered a diagram of what they considered the operational context of the initiative, including the organizations involved and types of relationships. These were limited to municipal boundaries and the national organizations and did not extend to a broader national arena within which the initiative as a whole would function. Missing from the diagram depicted by the Chapin Hall evaluators was the national evaluation, even though the evaluators acknowledged, in the text, their participant role within the initiative. The addition of the term “operational” to context appeared to have shifted the understanding of context from social, cultural, and political influences to the organizations included in the depiction. By the 2000 Chapin Hall reports, the evaluators described the Ford Foundation’s role as creating a “strategic context for action,” including making “decisions regarding target cities, major objectives, participating institutions, and central goals” (Chaskin et al., 2000, p. 4). In this way, the initiative, as a
whole, became an attempt not to address contextual factors but to create a context within which action could be facilitated in order to address local factors. Chapin Hall evaluators also discussed context in relation to the concept of local, and they noted that measuring change helped them to “anchor” their understanding of initiative process within the “specific local contexts of each site” (Chaskin, 1992, p. 53). The Chapin Hall evaluators noted that the local assessments did not describe the broader context of the initiative. The evaluators added that the ethnographic method they used did not allow for in-depth attention to neighborhood context. They asserted that a network analysis would be more suitable to “map shifting coalitions around given issues, and to concretize relationships within the collaborative context” (Chaskin, 1993, p. 59). The Chapin Hall evaluators told of the lack of attention by the national evaluation to this mapping. They noted that the national evaluation had not attempted to understand the social and cultural context provided by the neighborhoods. They stated that they had assumed that the local assessments and planning activities would provide contextual data about the circumstances of neighborhoods and change. They suggested that developing a richer understanding would require a more contextualized examination of the program. Aspen Roundtable authors referred consistently to CCIs as a context within which discussions occurred and issues arose; one example was the discussion of the insider and outsider tensions between residents of a community and other individuals who might contribute to the community through their involvement in a CCI (Roundtable on Comprehensive Community Initiatives, 1997). Emphasis on the individual in relation to context also occurred in discussions of the presence of foundations in CCI work, wherein the need to understand one’s presence in a “different” context was emphasized (Brown
& Garg, 1997, p. 1). Context was also discussed in relation to community, most importantly as the resources that surround a community (Stone, 1996, p. 95). Researchers discussed issues of demographics and the associated issues of power and race in a community (Stone & Butler, 2000), with Stone discussing context more specifically in relation to the research role of CCI evaluation:

Obstacles to information sharing can be divided into contextual, psychological, and structural issues. Information is always embedded in a context that influences the likelihood of its being shared. Most of these initiatives have many layers, from the direct-service level to the community-based governing body, to the program officer at a sponsoring foundation (and, by implication, the leadership of the foundation or other sponsoring agencies), to the evaluation. While the obligation to share information within this hierarchy is usually well-established, individuals at each level and within the different cooperating institutions may be quite uncertain about what kinds of information (from observation to hard data) are appropriate to talk about outside the bounds of the initiative. (Stone, 1994, p. 17)

Stone asserted that community could also be considered a “context for change,” meaning the location where empowerment could occur (Stone, 1994, p. 9). Although the NFI evaluators did not use “theory-of-change” language until their last reports, their theory development and participatory intentions mirrored those of the Aspen Roundtable evaluation writings. The change constructs, as concepts, could likewise be traced throughout the writings of the involved organizations, and membership overlaps indicated the possibility of idea sharing across evaluation work. The continued analysis of the dimensions, lessons, and change constructs led to specific reporting findings of NFI as contextualized by the writings of surrounding organizations and the Aspen Roundtable.

Reporting Issues

Just as the NFI reports documented programmatic changes, shifts in evaluation occurred throughout the initiative as well. Initially, CCC was responsible for working with the sites to incorporate assessment into early planning activities. According to Chapin Hall, this incorporation of assessment did not occur. After strategic planning was close to complete, the Ford Foundation brought in Chapin Hall evaluators to provide technical assistance to the local sites. This role did not last, and the Chapin Hall responsibilities became confined to conducting the national evaluation, with the local sites then receiving additional evaluation technical assistance from the COSMOS Corporation. In addition to these changes were changes in the funding of local evaluation. Initially, evaluation funds were included in overall site funding. As the Chapin Hall evaluators noted, local sites often chose to allocate funds to programmatic efforts rather than to the evaluation of local activities. Later in the initiative, the Ford Foundation dedicated funds specifically for local evaluation, as it had done for the national evaluation. However, according to Chapin Hall evaluators, the resources for local evaluation were dedicated too late in the initiative and these funds were limited, thus constraining the possibilities of adequate data collection and reporting. As I entered my study, I expected that, because of the clear connections in membership with the Aspen Roundtable and because of the statements of the Chapin Hall evaluators that they were conducting theory development and were interested in participation, the NFI evaluation would follow the ideals of a theory-of-change approach as espoused by the Roundtable. Indeed, the challenges documented by the
Chapin Hall evaluators mirror those noted in Aspen Roundtable writings. However, as documented in the NFI reports, more often than not, the NFI evaluators told of how they had adhered to a two-tiered approach that separated out process understanding and outcome understanding rather than integrating national and local participation into a theory-building process. Although learning is a key element of Aspen Roundtable espoused evaluation, the NFI evaluation was lacking, if not in the actual learning, at least in the presentation of that learning as theory, as evidenced in the lack of theory development. Because of the level of difficulty in addressing comprehensiveness in reporting, it is not surprising that NFI evaluators documented struggles as they reflected on their evaluation approach. Although they might have addressed these challenges by using ideas of a theory-of-change approach to address the notion of comprehensiveness and might have enthusiastically engaged an idea of holism by involving multiple types of participants, the Chapin Hall evaluators admitted they did not. Rather, the Chapin Hall evaluators shared their own skepticism about engaging local participants in theory development. Coupled with the fact that the same local evaluators did not conduct evaluation throughout the initiative and that technical assistance for evaluation was provided by, at a minimum, three different national intermediaries, the theory-of-change approach was not consistently engaged. That only two local sites released evaluation reports is further evidence that learning about evaluation reporting of a theory-of-change and participatory approach was not demonstrated in the NFI evaluation. It would seem obvious that NFI evaluators would have embraced their espoused approach even if it was not a theory-of-change approach. However, from early in the
reporting, Chapin Hall evaluators told of the limitations of what they considered an “ethnographic” approach. They repeatedly advocated for the use of a formal network analysis rather than the approach they were utilizing. Also from early in the reporting, NFI national evaluators documented their expectation that the evaluation would result, not in a notion of comprehensiveness, but rather in the inevitable documentation of categorical strands of activity. Evaluators referred to comprehensiveness as a lens for understanding rather than as programmatically useful, and the reports were filled with comments about the difficulty of using that lens to document NFI activity. It might be expected that the NFI evaluation reports, even if resulting from a theory-based approach, might not offer candid reflection because of the limitations and confidentiality issues associated with reporting on a single case. However, in NFI, the evaluators did offer some reflection on evaluation. Analysis showed that key components of the Chapin Hall evaluators’ documentation of their evaluation approach remained the same from the first report to the last report, even though they described multiple changes that occurred throughout the initiative. Although the NFI evaluators included information about their evaluation and reflected on their research process in those reports that were publicly released, they did not provide a framework for understanding or development of CCI evaluation models. One might additionally expect that report and article writings by the same professional group of individuals involved in CCI evaluation would provide a deeper understanding than the actual evaluations. To the contrary, the NFI evaluation documents often offered a depth of detail that professional articles did not. Chapin Hall evaluators
also drew upon this detailed analysis to speak about CCI issues through additional reports and articles that focused on specific issues of community development. Therefore, throughout the NFI reports, the Chapin Hall evaluators repeatedly referred to their desire to conduct a formal network analysis rather than doing the “ethnographic” work they had begun, work that they claimed was limited in its ability to provide for formal study and the “concretizing” of relationships. By the final NFI evaluation reports, the Chapin Hall evaluators had changed their language of evaluation and concluded that, in order for a “theory-of-change” evaluation to work, certain conditions needed to be in place. Among these were the need for evaluation to be well funded and to include established data collection and management systems. Additional requirements included clear objectives, goals, and associated baseline data integrated with planning processes, explicit and aligned expectations for outcomes, an identified counterfactual, choices made early about what is likely to change, a clear management structure and division of labor, mechanisms in place for sharing data, and resources for capacity building to maintain all of the above. These conditions read like those that experts in the field of community development and CCI evaluation have stated are not present and perhaps are not desirable in the context of community initiatives that are funded to include goals of learning and empowerment. With respect to learning about evaluation, my presentation of background and report description, dimensions, evaluation lessons, change areas, and associated findings may seem a dismal portrayal of the NFI evaluation given the publicly sanctioned private funds invested in the initiative and the stated intent of evaluators to be developing theory and supporting participation. However, because NFI reports were released
publicly, the evaluators did leave a trail of hope for public learning. Analysis of the reports also revealed specific findings related to the evaluative reporting over the course of NFI.

Reporting and Comprehensiveness

The reporting of NFI, an initiative that was an example of a comprehensive community initiative, was conceptually distinct from the reporting about community development processes and social programming. In their reporting, Chapin Hall evaluators discussed development and programming but evidenced the reporting of community coalition action. In the descriptive findings, I stated that there were five areas that were revealed in NFI public evaluation reports as the “dimensions” that were addressed by the evaluators in reporting. These dimensions included ideas of comprehensiveness, structure, influence, action, and sustainability. In coming to these dimensions, I found that they were different from the areas the Chapin Hall evaluators claimed as significant to address in relation to NFI ideas of community development and governance structure (e.g., community, neighborhoods, planned development, wholeness of individuals’ and families’ lives, integrated and comprehensive strategies, governance, empowerment, and participation in implementing policy). This revealed that, in NFI evaluation reports, what was stated as important about initiatives was not necessarily what was reported, and therefore not necessarily what was considered important to know about community development. The concept of a “comprehensive lens,” as addressed in the NFI evaluation reporting, clarified little with respect to understanding the interconnections between
various aspects of NFI and change. The Chapin Hall evaluators admitted that the concept of comprehensiveness was not helpful in describing NFI implementation but was more useful as a lens to understand the work of community initiatives. Although the national evaluators advocated for focusing on integration rather than comprehensiveness, the term comprehensive persisted. This tendency to address difference, not in the interconnections and multiplicity, but rather by offering increasingly higher levels of conceptual perspective -- such as the notion of a lens -- was also evidenced in the two-tiered design utilized to capture comprehensiveness. In the reporting, there is evidence of a resultant polarity. In the national tier of the evaluation, the evaluators consistently drew upon concepts of social, economic, and physical to categorize any of the actions taken by the local collaboratives. In the local tier, evaluators often resorted to changing categories with a focus instead on extensive lists and detailing of actions. There thus appears to be a tension between evaluators integrating ideas of development into increasingly encompassing categories and the detailed reporting of community actions as conceptualizations.

Reporting and Communication

As evidenced in NFI reporting, evaluators addressed communication challenges by taking on more limited evaluative functions rather than by meeting the Chapin Hall evaluators’ stated need for less compartmentalization of roles. This limiting was manifested in the evaluation approach’s mirroring of linear portrayals of change. Throughout the reports, the Chapin Hall evaluators told of how participation did
not work, how communication had failed, how evaluation was slow to get started, and how participant interest in evaluation was limited at best and ended prematurely. By the Chapin Hall evaluators’ own admission, the NFI evaluation met with many challenges and never fully gained integration throughout the planning and implementation of the initiative. Throughout NFI, the Chapin Hall evaluators admittedly kept their distance from local participation, refusing to take on participatory roles that some local participants had requested. In efforts to connect evaluation to the local collaboratives, Chapin Hall evaluators claimed to have adjusted some of their reporting mechanisms in order to provide more useful feedback to the local sites, but admitted that these attempts did not alleviate the communication challenges that existed between the local and national evaluations. NFI’s two-tiered approach to evaluation thus risked becoming two separate evaluations: the Chapin Hall evaluators stated that the local participants guarded against the interpretations of the national evaluators and that the national evaluators had to fill in for the local data that, they repeatedly bemoaned, had not been delivered by the local collaboratives. In the NFI evaluation, the communication challenges were never resolved. Chapin Hall evaluators repeatedly blamed lack of communication for hindering the evaluation and then blamed the expectation that evaluation would help with communication for burdening the evaluation over time.

Reporting and Funding

As documented in NFI reports, local evaluation implementation followed Ford Foundation funding mandates. However, NFI reporting reveals that influence
was related, not only to the hierarchy of institutional funding structures, but also to a hierarchy of linear time in the funding of the initiative. National evaluators received dedicated and longitudinal initiative funding throughout the initiative, but local evaluators were initially subject to each local collaborative’s determination of evaluation need at various points in time. Although evaluators claimed that the Ford Foundation exerted more control early in the process and then took a non-directive stance, the early decisions related to evaluation persisted throughout the initiative. These decisions included selection of the national evaluator, reporting responsibilities, the horizontal relationships, and related horizontal communication between national organizations.

Reporting and Sustainability

As an issue, sustainability was more often reported at the end of NFI reporting than at the beginning. The meaning of necessary resources was increasingly reported as distinctly monetary. Throughout the initiative, as funded by a single foundation, the issue of sustainability remained beneath the surface. In evaluation reporting, activities were described, but little was revealed about the processes for activity decision-making or whether sustainability was incorporated into the early decision-making processes about what activities would most lead to ongoing change. Early in the NFI reporting, partnerships and collaborative building were emphasized, with resources broadly construed in relation to the members that would come together. Toward the end
of the initiative, as Ford Foundation funding was ending, the concept of resources appeared to be reported as more specifically a need for another centralized funding agent. In the NFI reporting, local evaluation was not fully addressed as part of sustainability, despite claims made by Chapin Hall evaluators that local evaluation could be used to leverage resources for continued local development. However, national evaluators did leverage NFI data into their own professional journal articles and organizational reports. Nevertheless, within the NFI reports, Chapin Hall evaluators made claims about the importance of evaluation and the need for local sites to engage in, and provide data for, the national study. One argument that they used was that evaluation could help a local collaborative leverage resources, but they were not clear as to how this leveraging was to occur. As reported, in NFI, the resources dedicated to national evaluation were not leveraged into consistent technical assistance, hours of contribution to data collection, or systematic reporting throughout the initiative. Rather, the NFI evaluation funds resulted in little local reporting, years of unmet requests for local data, and national reports wherein Chapin Hall evaluators repeatedly blamed lack of local participant skill for any content or communicative failures in the evaluation. Despite conjecture as to how evaluation as language might be leveraged, there were no clear avenues reported. However, writers of national evaluation reports clearly leveraged their data into journal writing and report writing, bringing NFI data to broader understandings of the topics they addressed.

Reporting and Knowledge Norms

Evaluation reporting -- and therefore the decisions about the type of language, acknowledged research method, and style and focus of reports -- was conducted, not only in the context of scholarly ideas, but also in the context of discipline and field-based norms. The prevalence of governance-related issues in reporting was not surprising given this ongoing focus in the national principal investigator’s research writing. However, given the stated desire for comprehensiveness and the local attempts at interdisciplinary work, it was surprising that so little reported attention was given to issues such as strategic decision-making, culture, learning, and other emphases that might have fallen under the label of comprehensive lens or might emerge as issues relevant to complex structures.

Reporting and Decentralization

As evidenced in reports, the programmatic structures of NFI appeared to be decentralizing, while the evaluative structures of NFI solidified in national evaluative authorship and the persistence of conceptual categories. The result was a predominantly centralized evaluative reporting. The struggles that the NFI evaluators reflected upon in their reports alluded to decision-making processes that took place during shifting funding mandates, foundation management changes, and ongoing change in the local collaboratives.

The NFI sites began their work with a planning and implementation purpose with charges, adapted from the Ford Foundation charter, to clarify the parameters of this work. However, as the initiative progressed, sites also began to take on challenges prompted by their local conditions and the nature of collaboratives within a context of needs and opportunities. The Chapin Hall evaluators documented NFI sites’ attempts at prompting institutional change, as well as hints of political involvement. As the sites moved toward incorporation, thus seeking to disconnect from their community foundations, they moved toward a traditional reaction to the tensions of collaboration. The NFI reports include descriptions of the changes in initiative structure over its ten years of funding. The NFI reports document a structure that decentralized as local collaboratives took on decision-making responsibility. The reports also include statements about some of the challenges of evaluation design and process throughout the initiative. However, analysis of the NFI evaluation reports shows that, on their own, the reports do not offer a story about the changes that occurred in evaluative understanding and the development of innovations in research approaches for decentralizing initiatives. The NFI “local” evaluators released too few reports with too little depth to demonstrate their ideas of evaluation and how those ideas changed over the course of NFI. The NFI evaluators claimed to be documenting the structure of action put into place as part of the Ford Foundation initiative and by way of the requirements of funding guidelines and a charter. As part of that structure, intermediaries assisted the local collaboratives in interpreting the Ford Foundation charter and creating charges to guide their local work. As I have documented, although the Ford Foundation was reported to
adhere to a non-directive approach encouraging the sites to build from the ground up, the language utilized within the evaluation put into effect a structure of interpretation based on perceptions of the categories that defined the field of community development. Although Chapin Hall evaluators claimed that the governance structure of the local collaboratives changed according to local conditions and opportunities, and the work and choice of technical assistance providers changed throughout the initiative, the structure of interpretation set into place did not change. For example, grounded, not in the experience of the sites, but rather in a theoretical framework perpetuated within the evaluation, through Ford Foundation writings, and with the facilitation of intermediaries, the categorization of social, physical, and economic issues remained intact. At the end of evaluative reporting, the Chapin Hall evaluators, in the “national” evaluation, came back exactly to where they had begun, documenting doubts about the possibilities of their own approach. The Chapin Hall evaluators ended, not with ideas of improvements in participatory and theory-based evaluation for complex and decentralized initiatives, but with recommendations that would appear to return evaluation ideas to traditional positivist notions of centralized control and quantity. That the structures of language have clear longevity within approaches to community change, even when seemingly sturdier structures and ideas change and decentralization occurs, raises questions for understanding the role of reporting within CCIs and the possibilities of utilizing reporting to support change.

Reporting and Knowledge Communities

To the extent that the NFI case involves a loosely linked knowledge community, the analysis of the case indicates that the CCI evaluation community that surrounded NFI came together based on similar concepts rather than similar definitions of, or reported approaches to, those concepts. Although the NFI evaluation reports appeared to be situated within the broader writings about CCI evaluation as distributed by the Aspen Roundtable, the NFI reports offer evidence of divergence from, rather than adoption of, the theory-of-change approach. The national evaluators also denounced shifting concepts of the role of evaluation in relation to communication and, from beginning to end, documented how difficult it was to incorporate new approaches to evaluation. Although the Roundtable writings would have been available as early as 1995, the Chapin Hall evaluators did not discuss a theory-of-change approach until the final reports. Instead, they wrote of an ethnographic approach and network analysis even at the time that they claimed to focus on theory development and participation. I have studied NFI reports in order to contribute insights about CCI evaluative reporting that can help in understanding theory development and participation as it was presented through the NFI reports. This contribution to understanding an example of a CCI evaluation continues to be important because, although NFI has ended as a funded initiative, CCIs continue to be possible approaches -- supported by public investment -- to neighborhood development. Evaluation is also still on the agenda of the largest of private funders; this agenda was evidenced in a search of the websites of the twenty-five largest
foundations as determined by annual giving (as listed by The Foundation Center in 2003). For example, in 2003 the W.K. Kellogg Foundation’s website included the following statement: Our grantees are encouraged to develop a logic model, or theory of change, for their projects...A logic model helps to clarify the expected results – short, intermediate, and long-term outcomes, and identifies how the project’s activities will contribute to achieving those outcomes…Evaluation is sometimes seen as an intrusive requirement that takes time away from the “real” work of programming. We believe that effective evaluation provides program practitioners with valuable information that leads to more effective programs...Some projects do very novel or high risk work, which calls for a greater depth of evaluation to help to understand and improve the work…We encourage you to think differently about evaluation, and to make a firm commitment to evaluate your project and share the results with the Kellogg Foundation and others. Together we can move evaluation from being a stand-alone monitoring process to an integrated and valuable part of program planning and delivery. (Evaluation toolkit: Overview, 2003) As this statement shows, evaluation continues to be perceived by funders as important to funded initiatives. However, the embedded notions of the potential of evaluation, along with my analysis of the NFI reporting, leave open questions about notions of learning, knowledge development, and the educational potential related to evaluative reporting. There is therefore continued need, on the part of those interested in evaluation, to discuss community initiative evaluation and to develop deeper understandings of evaluation’s role in strengthening the work of community initiatives. This discussion I take up in the final chapter, as I discuss findings about reporting as they relate to ideas about evaluation intended for social program development and social change.

CHAPTER FIVE: DISCUSSION

In this chapter, I examine the understandings that I gained through the process of this study. I begin by reviewing the problem, purpose, and questions that guided my study. I then present an overview of the study process and an outline of key findings. I discuss these findings as they relate to literature about evaluation, and I provide a summary of the study’s contributions to evaluation approaches. I reflect on some of the issues of studying the reporting of a changing initiative and on the challenges that the topic posed to my research approach. After presenting study contributions to policymaking, theory-development, and evaluation practice, I end with thoughts on new directions for conducting research for CCI evaluation.

Review of the Problem, Purpose, and Questions That Guided the Study

A loosely linked knowledge community that is represented by the Aspen Roundtable has addressed the issues of comprehensive community initiatives. The Aspen Roundtable work included attention to the issues of evaluation. Theory-of-change evaluation, as applied to CCI evaluation, was the approach given most attention in the writings of the Aspen Roundtable. Roundtable writers addressed ideas of theory-development and participation. Roundtable writers promoted evaluation as a way to keep interested supporters of CCIs informed, to generate feedback, to guide implementation, and to support social learning in the anti-poverty field (Kubisch et al., 1998, pp. 3-4). The concepts of theory-development and participation were also mentioned in the
evaluation reports of the Neighborhood and Family Initiative. NFI is an example of a CCI. Although evaluation literature helps readers to understand CCI evaluation, little research has been conducted to explore the reporting of CCI evaluation or to address ways in which CCI evaluation reports can contribute to evaluation literature. The purpose of this case study was to explore how evaluators reported a CCI evaluation and how evaluation itself was discussed in that reporting. Throughout the study, I utilized questions to focus my inquiry. These questions encompassed an overall inquiry into the evaluation language used in the evaluation reporting. Additional questions focused my attention on the concepts present in the reporting, on the changes in these concepts over time, on the learning and knowledge contribution of understanding these reported concepts, and on the educational potential of evaluation reporting. I have responded to the first of these questions through my Chapter Four presentation of findings, wherein I identified evaluation dimensions, lessons, and change constructs, all of which emerged from my analysis of NFI reports. In this chapter, Chapter Five, I discuss the broader question of what the study of NFI reports means in the context of evaluation-relevant literature and in relation to ideas such as learning, knowledge development, and the educational potential of evaluation reporting.

Overview of the Study Process and Findings

To address my research questions, I utilized qualitative analysis. I drew upon the text of NFI evaluation reports as the primary data that I used with a variety of types of questions, techniques, and iterations of analysis. In my literature review, I identified CCI characteristics as holism, engagement, intensity, and informed action. In Chapter Four, I
presented my findings of the NFI evaluation reporting with attention to Maxwell’s (1996) descriptive, interpretive, and theoretical concerns. I first provided a background of the NFI evaluation reports as the reports were situated within a broader knowledge community. I then focused my analysis upon the NFI reporting. I described dimensions of NFI reporting, evaluation lessons as documented by the NFI evaluators, and constructs that emerged from my theoretical concerns of change. In addition to providing the results of the analysis, I also presented nine highlighted findings. I list these findings here as an overview, with parenthetical indication of the related discussion areas that I address in this chapter.



The reporting of NFI, an initiative that was an example of a comprehensive community initiative, was conceptually distinct from the reporting about community development processes and social programming. In their reporting, Chapin Hall evaluators discussed development and programming but evidenced the reporting of community coalition action. (Community organization building versus coalition formation).



The concept of a “comprehensive lens,” as addressed in the NFI evaluation reporting, clarified little with respect to understanding the interconnections between various aspects of NFI and change. (Comprehensiveness as a lens for change).

As evidenced in NFI reporting, evaluators addressed communication challenges by taking on more limited evaluative functions rather than by meeting the Chapin Hall evaluators’ stated need for less compartmentalization of roles. This limiting was manifested in the evaluation approach’s mirroring of linear portrayals of change. (Audience).



As documented in NFI reports, local evaluation implementation followed Ford Foundation funding mandates. However, NFI reporting reveals that influence was related, not only to the hierarchy of institutional funding structures, but also to a hierarchy of linear time in the funding of the initiative. National evaluators received dedicated and longitudinal initiative funding throughout the initiative, but local evaluators were initially subject to each local collaborative’s determination of evaluation need at various points in time. (Institutional distancing).



As an issue, sustainability was more often reported at the end of NFI reporting than at the beginning. The meaning of necessary resources was increasingly reported as distinctly monetary. (Institutional distancing).



In the NFI reporting, local evaluation was not fully addressed as part of sustainability despite claims made by Chapin Hall evaluators that local evaluation could be used to leverage resources for continued local development. However,
national evaluators did leverage NFI data into their own professional journal articles and organizational reports. (Institutional distancing).



Evaluation reporting -- and therefore the decisions about the type of language, acknowledged research method, and style and focus of reports -- was conducted not only in the context of scholarly ideas but also in the context of discipline and field-based norms. (Learning, knowledge development, and education).



As evidenced in reports, the programmatic structures of NFI appeared to be decentralizing while the evaluative structures of NFI solidified in national evaluative authorship and the persistence of conceptual categories. The result was a predominantly centralized evaluative reporting. (Learning, knowledge development, and education).



To the extent that the NFI case involves a loosely linked knowledge community, the analysis of the case indicates that the CCI evaluation community that surrounded NFI, came together based on similar concepts: They did not always share similar definitions of, or reported approaches to, those concepts. (Learning, knowledge development, and education).

In the following discussion, I draw upon these findings, as elaborated upon in Chapter Four, and discuss their meaning in relation to existing evaluation literature.

Discussion of Findings

Community Organization Building vs. Coalition Formation

As described by Chapin Hall evaluators, the local NFI collaboratives had a capacity for change because committees and rules shifted in relation to the local opportunities and needs and because strategies were identified by the collaboratives. The Chapin Hall evaluators documented that, as Ford Foundation funding continued, the local collaboratives continued to change and became more diverse and less like organizations of the same funded program. However, at the end of the ten-year Ford Foundation funding, three of the collaboratives had incorporated as traditional nonprofit organizations. The one collaborative that remained unincorporated the longest dissolved at the end of the original foundation funding. Chapin Hall evaluators noted what many in community fields quietly acknowledge: the tendency of collaboratives to return to a comfortable status quo. In the case of NFI, this tendency meant the development of formal organizations with traditional board structures and bureaucratic tendencies of hierarchical control. Taken as one result of an initiative funded predominantly through a single source, this occurrence may be neither surprising nor interesting. Taken within the context of the original NFI-reported concerns that the creation of new organizations would put increased demand on already scarce nonprofit resources, interest in the result is more reasonable. The result raises questions about the distinction between development of community organizations and coalition formation.

The NFI concern for the need for community collaboratives rather than additional service organizations is contextualized within critiques of historic trends of community development corporations. In the 1980s, CDCs developed as money-making and service-provision organizations instead of the community policy advocacy groups that had emerged in the 1960s and 1970s (Clavel et al., 1997; Stoecker, 1997, 2003). The tensions between bricks and mortar development, service provision, and advocacy have been well documented within discussion about whether formalized CDCs can engage in effective coalition action. Stoecker (1997; 2003) argued that CDC-generated development may be at odds with community advocacy goals. Clavel, Pitt, and Yin (1997) argued that CDCs had the potential for maintaining advocacy but that this interest is often co-opted by larger financial interests that detract from local advocacy. A related critique points to the need to distinguish the intent of community initiative funding and to retain an awareness of a “dialectic” between organizing for development and for coalition activity (Stoecker, 2003). Chavis (2001) offers: A community organization, at its best, consolidates members’ resources so that the organization can achieve its goals. Community coalitions, in contrast, must disperse resources to enhance the capacity of participating institutions in order to achieve their common goals. (p. 310) According to Chavis (2001), the success of coalitions has been in their ability to mobilize and focus resources. Himmelman (2001) also emphasized that the ideas of collaboration involve participants having a “willingness to enhance the capacity of another for mutual benefit and a common purpose” (p. 278). As reported, NFI was different from the funding attempts that created CDCs in the 1980s. NFI fund managers raised concerns about the strain on existing resources. NFI
reports included ideas of community collaboratives that would mobilize and direct resources toward community desires rather than toward the creation of new independent organizations. To the extent that the development of coalitions, not organizations, was a goal of NFI, the NFI reported results did not describe or evidence success. The Ford Foundation NFI charter began with language of resource mobilization and, thus, language suitable to coalition building. However, the introduction of CCC as an intermediary helping to interpret the charter moved the initiative language to the development of collaborative plans as itself an outcome. The evaluation language, as mediated by Chapin Hall as an intermediary, also included resource mobilization language. However, the two-tiered evaluation structure of NFI, which kept process reporting separated from outcome reporting, maintained a false separation between the development programming and the organizing potential within a complex social initiative. Existing evaluation literature often addresses one of two approaches: development programming or organizing. Evaluation literature rarely addresses the challenges involved in distinguishing between the two, understanding the dialectic, or addressing shifts over the course of a long-term initiative. For example, Patton’s (1994; 1997b) developmental evaluation, along with other stakeholder approaches to evaluating social programming (Brandon, 1998; Fine et al., 1998; Rossi, 1999), addresses activities as they occur in one arena. Approaches such as Fetterman’s (1996; 2004) empowerment evaluation, and various forms of participatory evaluation (Cousins, 1996; Cousins & Earl, 1992; MacNeil, 2000; Mertens, 1999, 2002), frame evaluation as a deeper questioning of social change. Theory-of-change approaches provide the initial attempts at linking process understandings, outcome understandings, and context understandings (Connell et
al., 1995; Fulbright-Anderson et al., 1998; C. H. Weiss, 1995, 2004; J. A. Weiss, 2000). NFI’s two-tiered, national/local approach served to distinguish process and outcomes and perhaps structured participation in relation to questions of social change. However, none of these evaluation approaches addresses the complex dynamics of coalition-related initiatives, such as NFI; the decision-making processes involved in utilizing the appropriate evaluative approaches over the course of an initiative; or the interaction between evaluation, participants, and organizational and social contexts. Whereas the existing evaluative approaches are embedded with principles of community collaborative building and ideas of social change, complex coalition initiatives may have their own concepts of change: These concepts need to be incorporated into evaluative literature and initiative evaluation designs for comprehensiveness.

Comprehensiveness as a Lens for Change

Comprehensiveness is an elusive term. Researchers and evaluators have provided various characteristics of the word in relation to the initiatives it defines. The openness of the term comprehensiveness may indeed be its strength because it leaves room for multiple interpretations to emerge over the course of an initiative. However, when coupled with potentially ambiguous terms -- like community, development, and change -- the layering of ambiguity may hamper attempts at understanding the work of CCIs and their approaches to change. As Chapin Hall evaluators reported, the term comprehensive was not useful in guiding implementation, so they relegated the idea to use as a lens to
understand the initiative. The Chapin Hall evaluators revealed their own preference for a notion of integration, further indicating a perceived limitation of the term comprehensive. As with other vague concepts related to holism, understanding the term comprehensive only occurs with the help of a dialectic tension that posits a notion of parts. Although NFI reports retained the term comprehensive from beginning to end, multiple dialectics emerged within the NFI evaluation reports. Comprehensiveness came to be defined by whatever issues of fragmentation appeared to have been perplexing the evaluators at the moment. Examples from the NFI reports include the notions of categorical funding streams, sectors, diverse categories of community need, multiple development opportunities, targeted services, and types of strategies. The term comprehensiveness was used everywhere in the NFI reports, but only defined as it occurred in relation to shifting concerns. As is the tendency with any perceived void, within NFI reporting, the vagueness of the term comprehensive gave way to the certainty of the categorical terminology that took hold to fill the void. Analysis revealed that there was a persistence of three categories attributed to comprehensiveness: social, physical, and economic. Distinguishing between the Ford Foundation charter language and the language as it occurred throughout the intervention of training and evaluation intermediaries leads to a questioning of the derivation of these categories. Although the Chapin Hall evaluators based these categories in existing theory, the use of these categories in the Ford Foundation charter to NFI was not explained. Instead, throughout the reporting of the collaboratives, comprehensiveness appeared to have become co-opted by the language of intermediary-envisioned categories.

Comprehensiveness, as displayed in the NFI reporting, was therefore not enough to guide evaluation. The term held its power in its ambiguity, which, in a complex initiative, left a void of meaning to be filled in by participants. In NFI, this void appeared to be filled by intermediaries imposing a theory-driven categorization. This categorization persisted over the reporting of the initiative even though local evaluation constructions of the concept of comprehensive did not match that categorization. Despite the possibilities of the Ford Foundation charter to open up language, the related charges solidified categories. The evaluator categories thus restricted the reporting of new understandings of comprehensiveness. As demonstrated in the NFI reporting, in the presence of the possibility opened by the foundation language, categories took hold and persisted. This persistence effectively thwarted any chance of a creative vision for change that comprehensiveness as a lens might have provided. Even though these categories persisted in the literature, as well as in NFI reports, there is no indication that these categories are essential or grounded in community-building principles rather than in the structures of current sectoral, field-based, or categorical funding streams. Although CCI evaluation literature implicitly and explicitly addresses the concept of comprehensiveness through categories, what is missing is attention to the place of the concept of audience in relation to CCI evaluation.

Audience in Evaluation

A distinguishing characteristic of evaluation is the responsibility that evaluators have for providing information to stakeholders and other audiences with an interest in the
findings (Torres, 1996, p. 65). When emphasis is given to collaborative ideals, the concept of evaluation often becomes partnered with the notions of information-sharing with stakeholders (Cousins & Earl, 1992; Patton, 1994, 2004). Evaluation theorists supporting concepts of involvement of participants who are not organizational staff have also addressed the issue of stakeholder participation. The idea has come to mean different things in the context of theorist support for various approaches to evaluation. For Brandon (1998), stakeholder participation is about confirming interpretations in order to strengthen validity and to ensure equity in input. According to Brandon, this emphasis does not exclude stakeholders from various phases of the evaluation process, but does emphasize their role in the validating of evaluation findings. For Patton (1997b; 2004), participation is about fostering use both of information and of the evaluative process for development purposes. For Fetterman (2004), stakeholder participation is used in the evaluation process to encourage participant voice. Involvement of participants can also be based in efforts to break down the resistance perceived by individuals who might feel judged by evaluation processes (Frederick et al., 2002, p. 13). For Carol Weiss (1972; 2004), stakeholder participation is focused on the learning that occurs through collective theory-development within politicized environments. Although not always discussed in these evaluation approaches involving participation, there is an element of risk. In coalitions, as is the case in NFI, the organizational boundaries and related evaluative boundaries that in other initiatives might provide clues to acceptable information-sharing lose their meaning. Without the protection of clearly delineated boundaries for informational sharing, the publicness of the concept of audiences overtakes the safety of utopian directives about democratization
of information. To manage the risk, Torres (1996) has tried to address questions of reporting of information by categorizing types of individuals by their appropriate level of access to that information. Authors have also addressed notions of communication and audiences, trying to differentiate processes of information-sharing (Innes, 1995; Preskill, 2004; Preskill & Torres, 2000). Literature, such as the Aspen Roundtable’s (Connell et al., 1995; Fulbright-Anderson et al., 1998), alludes to the difficulties in information-sharing and suggests keeping information safely close to the concept of theory. For example, Gambone (1998) asserts that data has little meaning without a connection to theory. However, in addressing separately the concepts of data, stakeholders, communication, and participation in theory, little has been understood about the ways in which a focus on information-sharing in coalitions blurs the distinctions between these concepts. Analysis of the reflections of the Chapin Hall evaluators provides insight into the concept of risk as it comes to be understood within a coalitional endeavor. The Chapin Hall evaluators documented the resistance of local collaboratives to collecting and sharing data with national initiative members. Unlike a bounded arena of organizational members, within NFI as a coalition, there was involvement from various types of stakeholders from various organizations. As reported, membership also represented various sectors, professions, institutions, and socio-economic positions. Although there was a multiplicity of notions of stakeholders as reported in NFI, NFI reports point to one widespread, albeit implicit, treatment of the issues of risk in information sharing. This treatment is in the patterning of processes.

In the NFI reports, as in broader evaluation literature, processes of community building, collaborative formation, and even learning were often represented either in stages of a process or in a dichotomizing of horizontal and vertical relationships -- both portraying linear conceptions of development. The latter of these leads to conversations of top-down versus bottom-up influence, which is also a linear portrayal. However, in NFI reporting, the lack of linearity and the complexity of structures were openly admitted. Nevertheless, linearity seems to have been used to provide a sense of conceptual control over ideas of stakeholder participation in information sharing. The specific examples of linear portrayals of development are many. The CCC model for development is one example of a linear model. The Chapin Hall model for assessment, although circular, is also linear in the portrayal of a direct progression from assessment to change. This linear tendency is also prevalent in broader evaluation literature. Guzman and Feria (2002) place evaluation in relation to both a hierarchy of concentric circles and a hierarchy of institutional authority. Also in the same volume, researchers provide a depiction of a singular feedback loop for understanding processes of empowerment evaluation (Tang et al., 2002). Even in placing evaluation within a more complex political context, Segerholm provides a horizontal and vertical representation of evaluation (Segerholm, 2003). Finally, Chen (2004) and various Aspen Roundtable researchers (Connell et al., 1995; Fulbright-Anderson et al., 1998), discussing theory-development approaches, also posit essentially horizontal/vertical progressions of distinct stages. Each of these portrayals -- top-down versus bottom-up, stages, and horizontal and vertical relationships -- lends itself to some version of linearity.

In relation to evaluation in coalitional activities, the tendency toward linearity lends itself to a pulling apart of programmatic concepts (development, resources, and participation) as well as an isolating of evaluative concepts (data, outcome, communication, context). NFI reports point explicitly to complexity and evidence the misrepresentation of these processes. For example, NFI reports provide evidence that resources and participation are not separate from concepts of development but rather influence ideas of development. As this analysis shows, the perceived separation influences evaluation and notions of information sharing, as evaluation representations come to mirror the linear representations of the structures and processes of development. In this way, data is separated from interpretation, which is separated from communication, each fitting nicely into a stage of a linear structure or process. The danger in this tendency is twofold. The separating out of evaluative concepts serves either to increase the risk involved in non-linear informational processes or to force initiative participants into a safe, yet erroneous, belief that the flow of information can occur in the predictable ways mapped by theorists. The latter of these outcomes appeared to occur in NFI as, at the admission of the Chapin Hall evaluators, evaluation as a two-tiered structure failed. Failure occurred when the local participants responded to perceived risk by exerting control over information, thereby preventing the national evaluators’ access to local data. The importance placed on understanding concepts of stakeholders is confirmed in the NFI reporting as questions were raised about who was participating in evaluation and in what ways. However, the study of NFI reporting also suggests that a more nuanced understanding of the concept of stakeholders in evaluation may be needed. This can
occur as the idea of risk converts questions of information sharing and participation to ideas of audience, a concept that has an embedded idea of information interpretation. In complex initiatives such as NFI, interpretations are not controllable. Because of advances in communication in a technologically advanced society, local residents have the potential power to share their interpretations with people around the globe. Therefore, local voice can no longer be expected to remain local. Local issues, as interpreted by residents, are indeed now very public global concerns. Existing evaluation literature has yet to adequately address the idea of information-sharing and interpretive control within complex initiatives. Neither has existing evaluation literature addressed the complexities involved with distinguishing types of participation or approaches to interpretation of information. One way to begin to address these issues is to examine linear models of evaluation and create models that will support understanding of audiences as they participate in complex initiatives. Also needed is an understanding of the distancing forces that occur in initiatives like NFI.

Understanding Institutional Distance

Analysis of the reporting of the two-tiered structure of NFI evaluation directed my attention to issues of institutionalization. Literature related to concepts of participation and community helped to shed light on the issues. For example, Arnstein’s (1969) ladder of participation -- moving from manipulation through to citizen control -- is just as relevant to NFI as it was to initiatives of its time. Chapin Hall evaluators documented issues of empowerment related to collaborative membership and to points of resistance.
They exhibited, throughout their reporting, the tensions of developing a collaborative voice within a nationally funded initiative structure. However, Chapin Hall evaluators also reported facing the reality that citizen control may be manifested in the refusal, by the local collaboratives, to collect data and to communicate information to national evaluators. In the case of the NFI evaluation, the same issue that Chapin Hall evaluators viewed as a limitation might have exhibited exactly the empowerment intended in Arnstein’s ideas of citizen participation and control. Intermediaries originally maintained control of both the interpretation of the Ford Foundation charter and the conceptualization of the structure of evaluation. That the local collaboratives took on the responsibility of hiring their own intermediaries was one more sign that citizen empowerment might have been evolving in directions that did not benefit the intermediaries. At the same time that programmatic decentralization of decision-making was indicating local empowerment, the limitations in the public reporting of the local collaboratives indicated that the local collaboratives had not reached the level of empowerment required to evaluate and speak publicly on their own behalf. The extensiveness of the Chapin Hall evaluation in reporting about local reality is an additional indication of the limited evaluative empowerment of the local collaboratives. The differential funding of the national versus local evaluation is another indication of disparity in evaluative empowerment. Although national evaluators received dedicated funding over the course of the initiative, the local evaluators were initially dependent on local collaborative perceptions of the value of evaluation. The collaboratives sporadically allocated funding for evaluation. The Ford Foundation began providing dedicated funding only after observing that local evaluation was not occurring.
Chapin Hall evaluators noted that this dedicated funding came too late in the initiative to support a strong local evaluation component. The differential evaluation support suggests that evaluative power is influenced not only by institutional structures but also by the relationship of funding to time. Questions of time and funding, as set within the structures of NFI, indicate that the terrain of CCI evaluation is far more complex than can be encompassed in the various typologies, such as the simplified horizontal and vertical structures presented by Warren (1978). Warren’s portrayals of community action within a context of horizontal and vertical patterns, and his classification of community acts as episodic, involving beginnings and endings, are too simplistic to help in understanding the NFI evaluation. Perpetuated in the contemporary tendency toward horizontal and vertical mapping of development and evaluation, Warren’s work offers little to the understanding of complex power structures as they influence evaluation. In NFI, evaluation did not exist in a simple hierarchy but rather took place in a parallel relation to programmatic development. However, Warren’s (1973) earlier conceptions of truth, love, and social change actually provide greater assistance in understanding the challenges of evaluating initiatives that involve complexities of power as manifested in the funding differentials over time. According to Warren, truth is based in the adherence to a notion that there are moral values, with the believer positing their values as inherently better than those in opposition. As a principle of social change, this is a call for a hierarchical order. Love, as described by Warren, is an “appreciative” rather than “affective” term and is related to respect of diverse ideas and the valuing of human beings. For Warren, the adherence to these ideas is the difference between asking people to jump through the hoops of a
predetermined purposive change and allowing change to occur in a natural process. Following Warren’s argument, truth is a potentially distancing concept, with love being a unifying one. Scherer’s (1972) work in relation to love and concepts of community adds another notion of unity, with human beings accepted as whole beings rather than as players of the rigid roles typical in institutional structures. Scherer (1972) relates community to the idea of love, meaning that each person is accepted as a “complete unity,” able to hold onto all of one’s roles at the same time (p. 97). Within NFI, the lack of consistent funding for local evaluators is one indication that roles and sustainability were distanced from individuals. Sustainability was most often reported at the end of the initiative even though local evaluators had not been consistently funded. Sustainability also became increasingly understood as monetary, rather than as the collaboration of human effort. This tendency solidified the institutionalizing of the initiative and therefore distanced the notion of love from the local evaluators. Placed in the context of NFI and the two-tiered evaluation structure that separated national and local activities, the question of the distance between truth and love illuminates key issues related to the conceptualizing of initiatives within institutional structures. These issues include independence, communication, and data leveraging.

Independence

The Chapin Hall evaluators repeatedly distanced their work from that of the local evaluators and claimed that the local evaluations were not based on independent information. The Chapin Hall evaluators also reported that they tried to connect to, and
provide information in various ways for, the collaboratives. This claim might have led to the perception that the distance between national and local evaluation had diminished. However, in the absence of a strict hierarchy of institutional structure of intermediaries, the two-tiered structure established for the NFI evaluation, resulted in continual separation. The structure served to enable the Chapin Hall evaluators to frame their public representation with respect to an ideology of disciplinary concern, rather than situating their work within a love for the specific local collaboratives and a concern for their needs and requests. In the context of evaluation, this distancing is disturbing because, as Schwandt (2002) notes, evaluation is “fundamentally local.” By local I mean engaged, native, concrete, indigenous, lived, or performed as opposed to abstract, transcendent, disengaged, or somehow removed from the erratic, contentious, uncertain, ambiguous, and generally untidy character of life itself. All judgments of the merit, worth, or significance of human action are undertaken within specific jurisdictions and circumstances where these judgments both reflect and depend upon the thinking (including socioeconomic, political, and moral values) and doing of the specific parties involved at the distinct time and place in question. There may indeed be broader or more global societal values (such as equity, justice, fairness, and so on) but these are interpreted and adjudicated in particular ways in particular circumstances where some group of people is attempting to decide whether they are doing the right thing and doing it well. (p. 17) The two-tiered structure that enabled the Chapin Hall evaluators to claim independence, as an establishment of truth, also served to distance the Chapin Hall evaluators from a loving relationship to the local sites. Marris and Rein (1967) documented similar issues related to the detachment of knowledge that occurred in an earlier Ford Foundation initiative stating: In the political struggle to determine whose interests should dominate, the detached pursuit of knowledge and the validation of techniques became confused and confusing, irrelevant to the immediate conflict. As we saw in the development of several projects, communities could be led into ‘neurotic’
solutions, where the balance of power came to rest in an organization that could not function, but served to disguise the unresolved issues. Only as an agency became partisan, and chose between its possible roles, could it recover its coherence. (pp. 229-230)

Even within the distancing concept of independence, exercised within a parallel yet vertical authority structure, Chapin Hall evaluators admitted to close horizontal communication with other national organizations. This admission indicates that communication is not always structured in accordance with either funding structures or claims of independence and truth.

Communication

Scherer's (1972) work encourages communication in support of love. As the reporting of NFI shows, the national evaluators did not connect with local evaluation and, therefore, communication was limited. The Chapin Hall evaluation adhered to the disciplinary norms of reporting on traditional issues of governance, despite the existence of issues relevant to, or informational needs of, local collaboratives. The two-tiered structure and the lack of vertical communication across the structure may have helped to press national evaluators into truth as a normative reaction, rather than into loving connection. This press would appear to be an institutionalizing force within a seemingly decentralizing initiative action. Chapin Hall evaluators praised the lack of top-down Ford Foundation directives as an attempt to support increasing decentralization and community control. However, Chapin Hall evaluators reported that the local collaboratives repeatedly asked for greater
clarity and guidelines. That the Ford Foundation adhered to this non-directive approach, even through changes in program management, is beneficial to understanding the nature of intermediaries. Analysis of NFI reports indicates that, in the void of funder-imposed direction, the NFI intermediaries co-opted the collaboratives' desire for directives. Intermediaries achieved this through the professionalized language of planning and evaluation. Whether done in an effort at truth or a loving provision of assistance, this likely served as an elusive yet deterministic force competing with collaborative empowerment. The lack of connection between the national funders and the local work resulted in a disguising of the institutionalizing forces that were solidified through the structures of intermediary authority and perpetuated in horizontal communication patterns. This horizontal communication occurred despite the appearance of parallel and independent systems and kept control within the relationship of national rather than local organizations. Even when the local collaboratives succeeded in removing the Center for Community Change and Chapin Hall from intermediary authority in collaborative work -- seemingly decentralized choices about training and technical assistance -- the removal was followed by a Ford Foundation-appointed communication intermediary. Structural centralization was thus replaced by communicative centralization masquerading as local choice. According to NFI reports, Chapin Hall continued to be compensated for the public voice of the initiative until the end of Ford Foundation funding, advantaging the national evaluators despite their shifting or collaborative-requested roles.

Data leveraging

Another way in which evaluation privileged Chapin Hall evaluators was with respect to the concept of data leveraging. Although Chapin Hall evaluators repeatedly reported that local collaboratives should engage in evaluation and use evaluation to leverage additional resources, the reporting about local collaboratives showed no indication of this leveraging. According to Chapin Hall reports, the local collaboratives did not consistently allocate funds to evaluation and did not consistently release public reports. However, analysis of the literature produced by Chapin Hall evaluators provided evidence that the Chapin Hall evaluators did leverage the NFI data investment, advancing articles related to their disciplinary interests. The national evaluators of NFI enjoyed dedicated funding over the life of the initiative. This dedicated funding gave them the longevity to collect data and to leverage that data into professional profit, in the form of journal articles and reports. The local evaluators changed over time, were funded to a lesser degree, and were at the mercy of the local collaboratives' perceptions about evaluation worth. The local evaluators, therefore, did not enjoy the same possibilities of professionalism -- image of independence, communication, and data leveraging -- as did the national evaluators. The reported distribution of greater funds into the national evaluation, combined with the Chapin Hall perception that they needed to compensate for the data not provided by the local evaluators, can also be understood as a disproportionate compensation for independent truth over the participatory stance of love that is possible within the interaction of local evaluators with collaboratives. It would not be surprising
if the related learning, knowledge development, and educational value were also disproportionately or inconsistently distributed throughout NFI.

Learning, Knowledge Development, and the Educational Potential of CCI Evaluation Reporting

The Aspen Roundtable has supported notions of theory-of-change evaluation to enhance the learning and knowledge development of CCIs. In NFI reports, there was limited use of the term "theory-of-change." This was surprising, given the connections between NFI and the Roundtable. For example, NFI national evaluators were connected organizationally to the Roundtable because the director of Chapin Hall served as co-chair of the Roundtable and was also a member of the Roundtable evaluation committee. The Ford Foundation supported both NFI and the Aspen Roundtable and maintained membership on the Roundtable through the early 1990s. NFI evaluators had Ford Foundation-supported Roundtable publications available as early as 1995. The ideas of the members of the Aspen Roundtable were published and disseminated through reports, articles, and the website. However, NFI did not start out as a CCI, but rather came to be called a CCI by evaluators. Without explicit reference to the Aspen Roundtable
literature, the NFI national evaluators claimed to be building theory and doing this in a participatory way. These are the basic ideals of the theory-of-change approach, as discussed by the Aspen Roundtable. Nevertheless, the NFI evaluators initially called their approach “ethnographic,” further indicating that, despite overlaps in membership
and report availability, the ideas of the Aspen Roundtable had not, at the start of NFI, either reached or been embraced by the Chapin Hall evaluators. The NFI reporting itself indicates that the NFI evaluation was not conducted using the language of theory-of-change evaluation. Rather, evaluation was conducted within the customs of disciplinary and field-based norms that directed attention to community mapping, demographics, and governance structures. The Chapin Hall reports focused on local governance structures and the changes in those structures over the course of the initiative. National evaluators documented membership, perceptions of involvement, and the changing structures of local collaboratives. According to Chapin Hall reporting, and evidenced by the limited publicly released local evaluation reports, the attempts at alternative locally-defined evaluative approaches (e.g. participatory action, learning community, cultural, interdisciplinary teams) did not result in continued evaluation, or in consistent or extensive public reporting. Not surprisingly, the NFI local evaluation approach that most closely resembled the national evaluation emphasis resulted in the most extensive formal reporting. The NFI reporting therefore indicates that the language, and related approach, of the national evaluation does not seem to have benefited from either the language of the larger evaluation coalition or from the local alternative evaluative attempts. Existing evaluation literature, such as the Aspen Roundtable writings, often advocates a singular perspective, with evaluation theorists forming camps around evaluative ideas. However, the evaluation literature does not take into account the ways that the disciplinary and field-based norms of these camps mediate evaluation approaches or how these mediating dynamics influence the learning and knowledge development within
complex initiatives such as NFI. A study of reports cannot determine the actual learning of participants or informal knowledge development. Yet, to the extent that national evaluators supported the learning in NFI, it is reasonable to expect that participant learning was also guided, to some extent, by the evaluation-sanctioned questioning. Although there were similarities between the NFI national evaluations and the Aspen Roundtable ideas, in reporting about their understanding of the challenges of evaluation and their lessons learned in practice, the Chapin Hall evaluators did not adopt the language of theory-of-change until the last pages of their final reports. Even if they had adopted the language of theory-of-change evaluation, the limitations around reporting would probably have persisted, given the lack of attention by the Aspen Roundtable to developing a language of reporting or learning about public voice. The limited public release of local evaluation reports is itself an indication that the local collaboratives did not embrace evaluation reporting, perhaps not having had the opportunity to learn about knowledge development and reporting. Understanding learning and knowledge development is complicated by various ideas about what an initiative is to demonstrate and about how to demonstrate initiative learning. For many evaluators, evaluation approaches to learning have not evolved into ideas of public reporting, but have remained concerned with involvement of participants only within the private processes of evaluation. Whether addressed in utilization-focused evaluation, empowerment evaluation, theory-of-change, or constructivist evaluation, or by the NFI reports themselves, the focus of learning is often centered on the notion of data as utilized within evaluation processes. As Lincoln and Guba (2004) state about "fourth generation" evaluation:

The constant interaction around data is what makes this model hermeneutic. Such interaction creates new knowledge, and permits old or taken-for-granted knowledge to be elaborated, refined and tested. The dialectic of this evaluation model is the focus on carefully bringing to the fore the conflict inherent in value pluralism. Unlike more conventional models of evaluation, constructivist, fourth generation evaluation assumes that social life is rife with value pluralism and therefore, conflict. A critical part of the evaluation effort within this model involves getting at core values of participants and stakeholders, so that when decisions are made, the value commitments that those decisions represent are clear, negotiable, and negotiated between and among stakeholders. (p. 235)

As is demonstrated in the NFI evaluation reports, the approach to learning is not always clear amongst evaluators, let alone shared by multiple participants, funders, or institutional and professional staff. In addition, the relationships between data and reporting, and the learning that this relationship entails, are not always emphasized. Even in constructivist approaches, where value pluralism and interpretation are acknowledged, deep understandings of interpretation and evaluation decision-making may not be discussed or made transparent in reports, such as NFI's. However, learning about interpretation in complex initiatives is facilitated by the NFI reports as evidence. The descriptions of the two-tiered approach -- with local collaborative members in a position of receiving intermediary help in interpreting the interests of a higher tier of organizations -- indicate that NFI evaluative learning might have remained separate. Analyzing the reports revealed that intermediaries were utilized to facilitate and mediate interpretation. The tiered structure may not have had a negative implication for the functioning of local planning and perhaps even supported a sense of decentralization. However, the framing of evaluation as a two-tiered approach may have limited the evaluative learning of the local participants. The two-tiered structure provided a separation between the reporting of process and outcome, limiting reporting
about the connections between the two. It is reasonable to expect that this separation, as controlled by the language of the Chapin Hall evaluators, constrained the learning of the local sites by restricting the local learning process to organizational rather than public reporting. It is in the public reporting that the notions of coalition building and leveraging evaluation take place. In this way, the local sites were limited in their development potential, not having been given the experience, in NFI, to publicly communicate the value of their work within a larger decentralized coalition. Given the separation and dominance in reporting, in the presence of decentralization, the NFI evaluation approach actually became more solidly centralized in authorship and concepts. This centralizing tendency, within programmatic decentralization, is an indication of the limitations of the educational potential of NFI evaluative reporting, with education referring to the revealing of the learning of participants. However, as shown in my analysis of NFI reports, the educative potential of reports to elucidate concepts of dimensions, lessons, and change in reporting as knowledge development is considerable. Learning about reporting emerged through analysis of those reports as situated within a longitudinal effort and within a context of the ideas presented by evaluators. To this point, I have been discussing the findings of this study and their relation to evaluative literature and understanding about learning, knowledge development, and the educational potential of reports. Given the evidence, as in NFI, of the strength of language structures in controlling ideas, the lack of attention to language in CCI evaluation literature is a crucial issue. In the next section, I turn to a discussion of this study, as a whole, and its meaning to evaluative approaches.

Study Meaning to Evaluation Approaches

In the literature review, I presented categories of evaluation to inform the understanding of evaluation reporting in community initiatives. I discussed traditional approaches for utilizing measurement to support organizational decision-makers. I addressed approaches focusing on social programming and I discussed evaluation efforts for social change. Traditional evaluation is geared toward organizations, as they exist within institutional structures and top-down meaning making. Because of the structure of NFI as a complex initiative, with social programming and social change goals built into the initiative, a study of NFI reports cannot contribute to understandings of traditional evaluation. However, this study does contribute to understandings of evaluation for social program development and social change and has pointed to special issues relevant to complex initiatives. Key to evaluation for social program development, evaluation for social change, and evaluation within complex initiatives are the sometimes-complicated relationships of evaluators to participants and to the goals of the work being conducted. Whether invited into an organization, working side-by-side with stakeholders, or being part of a larger coalition, evaluators face multiple decisions that influence the evaluation. These decisions are not made with pure adherence to specific approaches to evaluation. Rather, as evidenced in the NFI reporting, evaluation decisions may be made with attention to disciplinary and field-based norms. Decisions are also enacted through multiple responses to shifting and changing funding structures, initiative principles, organizational needs and opportunities, and the very issues of social change that the initiative might be
trying to address (e.g. racism, poverty). Some decisions change throughout an initiative and others become solidified in evaluative structures and language. This study supports that the notions about evaluative conditions, espoused by Tharp and Gallimore (1982) as being supportive of organizational development (e.g. evaluator authority, stability of funding, consistent values and goals, etc.), become complicated in complex initiatives. That evaluation does not usually take place in ideal evaluative conditions is widely accepted in the literature related to CCIs (Baum, 2001; Connell et al., 1995; Kubisch et al., 2002). However, in complex initiatives such as NFI, the lack of these evaluative attributes may be not only usual but also desirable. As this study suggests, the strength of the consistently funded national evaluation provided Chapin Hall with an intermediary interpretational authority. This authority may have detracted from the coalitional opportunities of change that were opened by the notion of comprehensiveness. Approaches to evaluation for social programming and social change provide answers, or at least guiding questions, that situate evaluators within understandings of evaluation and change, the purpose of evaluative work, the nature of data interpretation, and the acceptable roles of evaluators. For example, Patton's (1994; 1997b; 2004) developmental evaluation places evaluators within existing organizations and frames evaluators as guiders of questioning, and assistors to data interpretation as it relates to programming decisions. Stakeholder approaches utilize evaluators to support technical decisions, and to bring outside interpretations into organizations so as to influence programming (Christie & Alkin, 2003; Huberman, 1995; Nichols, 2002). Fetterman's empowerment evaluation and forms of democratic or participatory evaluation frame
evaluators' actions in possible contention with dominant structures. These approaches also support processes for using data interpretation to strengthen the expression of diverse views (Cousins, 1996; Cousins & Earl, 1992; D. Fetterman, 1996; D. M. Fetterman, 2004; Garaway, 1995; Huberman, 1995; Mathison, 2000). This study suggests that, within NFI, the purpose of evaluative work, the nature of data interpretation, and the roles of evaluators were neither consistent nor clear throughout the initiative.

Purpose of evaluative work

Literature has included discussion of the ways in which evaluators construct a problem (Sawicki & Flynn, 1996), and of the sociopolitical context for the language used in the construction of both problems and evaluation (Madison, 2000). However, less has been written about the construction of change. Even within theory-of-change literature (Connell et al., 1995; Fulbright-Anderson et al., 1998), the use of language is not included in deep understandings of evaluation purpose or understandings of implicit strategies of intermediary control that may be exerted by evaluators. This study brings the use and creation of language specifically into contact with efforts of change, and provides encouragement for evaluators to examine their reasons or strategies for working with initiative participants in coalitional activities. The study leads to awareness of the need, within self-reflection and communication, for evaluators to focus on how they interact with concepts of change and how they utilize language to influence change, or even to co-opt possibility. The study thus raises questions about the structuring of evaluation purpose within complexity and possibility.

The case of NFI revealed not only an explicit structure of two-tiered evaluation but also aspects of an implicit structure based on evaluator resistance to ideas of evaluation. The study shows that various aspects of an initiative’s evaluation structure can contribute to institutional distancing. In the absence of a visible centralized authority, parallel streams and ideas of independence may provide an authority structure: The perpetuation of this authority may become the implicit evaluative purpose. Within existing approaches to evaluation, independence is addressed in specific ways. In developmental evaluation, an evaluator brings independence with her by entering someone else’s organization and drawing upon evaluative questioning of social programming. In empowerment evaluation, independence is acquired when participants engage in an evaluative process. However, as shown in this study, the concept of independence can also involve an idea of the relationship of interpretive structures to ideas of authority within a larger socio-political context.

Data interpretation

The study shows that, in NFI, the structure of evaluation was multi-faceted and included reported approaches to and changes in communication, information sharing, participation, and reporting responsibilities. As reported, one aspect of the evaluation structure that remained consistent was the dedicated funding for evaluation. As reported by the Chapin Hall evaluators, dedicated evaluation funding was one factor in the quality and depth of evaluation. This study shows that dedicated funding provided a temporal element of hierarchy over the course of the initiative, placing the national evaluators in a
better position than local evaluators to influence the processes of interpretation and also to leverage data into professional gain. The influence of funding, as it relates to the interactions between time, data interpretation, and leveraging, is not an area presently discussed in evaluation literature. Neither is the impact of the relation between time and interpretation discussed in relation to theory-development. Theory-development work has come to involve various notions of learning and relationships to knowledge construction (Hasci, 2000; Preskill et al., 2003; Rogers et al., 2000; C. H. Weiss, 1995, 1998, 2004). As revealed in the NFI evaluation, the evaluative ideas related to theory, social construction of knowledge, and learning were not always incorporated into the evaluation. Existing literature about evaluation focuses on ideas of data, and where participation is concerned, on the idea of information-sharing (D. M. Fetterman, 2004; Fulbright-Anderson et al., 1998; Patton, 2004). This study supports that data and information sharing are key to understanding and framing evaluative approaches. This study also suggests that, in complex and ambiguous contexts, understanding data as separate and distinct from other aspects of evaluation contributes to a false sense of interpretive control. To consider data separately -- without ideas of context, outcomes, and communication -- leads to increased initiative risk as the boundaries around stakeholders become ambiguous. The study therefore supports a shift to a notion of audience -- a notion that can encompass ideas of context, communication, data, and outcomes together. This shift requires that evaluators understand their roles, not only in relation to participants, but also in relation to the processes of evaluation as evaluation might occur throughout various aspects of a complex initiative.

Evaluator roles

Evaluation theorists have questioned evaluator roles within authority structures (Henry & Mark, 2003; House & Howe, 2000), and in relation to those without authority (MacNeil, 2000). As revealed in this study, tensions occurred throughout NFI. There were divergent views on the type of relationship and level of participation appropriate for evaluators working within a tiered structure. These tensions confirm the need for discussions about evaluator roles, in relation to concepts of larger social structures (Mertens, 1999, 2002), in relation to participant learning (Cousins, 1996; Cousins & Earl, 1992; Preskill & Torres, 2000; Rallis & Rossman, 2000; Shulha & Cousins, 1997; Springer & Phillips, 1994), and in relation to communication of interests. Although the study reveals that the Aspen Roundtable work may not have initially, or deeply, influenced the NFI evaluation, the Aspen Roundtable ideas of theory-of-change evaluation do place evaluators in close relation to participants (Connell et al., 1995; Fulbright-Anderson et al., 1998; C. H. Weiss, 1995). Within the CCI literature, Prudence Brown focused specifically on the various roles that evaluators might take within theory-of-change work (Brown, 1995, 1998). Authors have also expanded the discussion to include complex notions of identity and complicated frameworks for situating concepts of role within evaluative work (Mertens, 2002; K. E. Ryan & Schwandt, 2002; Schwandt, 2002). However, complicated depictions of evaluator positioning, and endless typologies, are limited in their use in addressing the specific decisions through which evaluator roles emerge as situated within initiative designs for change. When addressing a complex and ongoing initiative, evaluators are left to navigate
change: At the same time they are engaged in the efforts of change within which their roles emerge. As this study suggests, the dynamics of change may result in differing evaluator roles over the course of an initiative, or even within the various components of an initiative (e.g. local collaborative programming, national coalition building, and organization creation). Understanding these roles as they interact with evaluation purpose and data interpretation requires questioning the linearity that is pervasive in the evaluative literature.

Interim Conclusions

As discussed in this study, the linearity that has been utilized to address stages of planned development has also been used to guide evaluation. Evaluative approaches that mirror linear programmatic development may be useful for evaluators who are addressing micro-aspects of a complex initiative (e.g. development of a single program to address a single community issue). However, continuing this trend creates risks for evaluation within coalitions. Separating out aspects of evaluation, such as data collection, outcomes, context, and communication, confuses the work of evaluation as the work becomes dispersed throughout a decentralized structure. The separation may also limit the learning of coalition participants who require gestalts to function within ambiguity, complexity, and change. Approaches to evaluation for social programming and social change involve evaluators making decisions about their evaluative roles and processes. However, the structures within which the programming and change occur also influence the possibilities of evaluator participation. This study confirms the need to focus
attention on the various evaluation structures as they might influence evaluator roles, data interpretation, and the purpose of evaluation. As shown in the NFI reporting, evaluators in complex initiatives may also be brought in anywhere within a process of evaluation. In addition, both the structures (e.g. management, funding) and processes may change throughout a long-term initiative. For evaluators, the possibility for influence and change opens a door to their participation, not only in observing an initiative, but in visioning as well. In NFI, the evaluators noted the use of the term comprehensiveness as a lens. However, analysis showed that the Chapin Hall evaluators also took part in passively co-opting the idea of comprehensiveness through their persistent use of categories that were based in disciplinary and field-based theoretical grounding. In evaluation for social programming, and even for social change, organizational boundaries or focus areas for change provide parameters for action and for evaluation. Given, as Patton (1997b; 2004) notes, that there is a tendency for evaluators to make all the decisions, it is not surprising that evaluators might adjust for the ambiguity and uncertainty in the parameters of complex initiatives. The term comprehensive, if understood as a concept of possible change or vision rather than lens, could be experienced as ambiguous and uncertain. Given this ambiguity, the study confirms that a self-reflexive stance is necessary on the part of evaluators (Innes, 1995; Innes & Booher, 1999a; Lincoln, 1994; Lincoln & Guba, 2004), that evaluators may want to give attention to communication (Innes, 1995; Innes & Booher, 1999a), and that the ways in which evaluators interact in the processes of learning with others are crucial (Garaway, 1995).

The study also supports that, in shifting to a concept of audiences when dealing with the interpretational issues and evaluator roles, there is a need to differentiate between the learning, knowledge development, and the related educational potential of an initiative. A focus on evaluative reporting brings all of these together. However, existing evaluative literature addresses the concept of reporting as a concluding activity, seemingly isolated from the rest of the evaluation process (Morris et al., 1987; Torres, 1996). Evaluation literature sometimes treats reporting in relation to notions of audiences (Preskill & Torres, 2000; Stronach et al., 2002), but even then fails to adequately address the ideas of interpretation, as related to reporting for audiences as they interact within complex initiatives. This study suggests that reporting is integral to approaches to evaluation and that the treatment of reporting may itself be a sign of initiative success. Without access to reporting, participants miss out on a key element of learning within coalitions. In addition, the lack of decentralized reporting may actually serve to deter coalitional activity as a possibility, therefore keeping initiatives centralized. Reporting is still an under-explored aspect of evaluation that may contribute to understanding evaluation in complex initiatives for change. My focus on reporting has served as a central concern, bringing to the fore aspects of evaluation that continue to be important and areas that need further exploration. Just as Schwandt (2002) raises questions about who evaluators are in their evaluative practice, the issues of this study prompt evaluators to reconsider themselves. Considering evaluator interpretational and reporting responsibilities may be a way to distinguish and embrace the learning, informal and formal knowledge development, and educational potential of evaluation practice.

Reflection on Limitations of Studying the Reporting of a Changing Initiative

A number of challenges emerged as I sought to study initiative reports. In using documents, I was restricted in case completeness, as I had access only to those documents that were produced and were publicly available as evidence. As designed, the study did not prompt my involvement in the case or my direct access to those who were involved. These restrictions were intentionally aligned with my public emphasis. The benefit was that, along with other readers, I was myself restricted to a public perspective from which to view NFI documents. This restriction kept me grounded in a reading of the reports, as they were publicly available, despite my own professional experience in evaluation. This limitation also had an unexpected but interesting result. The framing of the study in this way helped me to re-define my own experience within the field of community evaluation, allowing me to experience the study from a different vantage than I had when I myself conducted evaluations. As I documented the ways in which the Chapin Hall evaluators confessed the limitations of their study -- the lack of cooperation, the lack of resources, the unmet expectations of the local data contribution -- I too remembered facing similar issues. I remembered being frustrated at the lack of data, miscommunication, requests that I participate more or less, and at the building of trust only to have it shaken with decisions beyond my control. I remembered being tested in my approaches to interracial relationships, and the tactics utilized by participants, at all levels, when they believed that my thoughts might shed too bright a light on their livelihood. However, for the study, I tried to approach my analysis thinking first how I, as a public taxpayer or employee
accountable to the public, might question the use of philanthropic funds. Therefore, the study allowed me to reconsider my own past experiences in a variety of ways that the experience of conducting and facilitating social program and initiative evaluation had not. Whereas my experiences in conducting evaluation tended to form into gestalts, my study of NFI evaluation reporting took the form of distinct perspectives, including views through the identification of dimensions, lessons, change, and relevant evaluation literature. In terms of documenting the study processes that led to these multiple views, the conventions of representing work in a textual form limited my ability to demonstrate the visual diagramming and interpretive layering that were the heart of the study. In the future, the potential of electronic representation might provide new avenues for sharing the data and analysis from various interpretive layers. Unfortunately, there has been little discussion about exploring visual portrayal of qualitative analysis. Innovative approaches to showing visualizations of the analysis might allow me to better demonstrate the connections of researcher memos to understandings of data, and to provide for a fuller grasp of the multiple connections and decisions that comprise the interpretation of a case. The selection of the case was a purposeful decision, made at the beginning of this study. However, in the future, I might select a case whose evaluators more definitively held to a notion of the specific evaluation approach of a larger national coalition. For the Aspen Roundtable, this would not have been possible, in that initiatives such as NFI, which began in the 1990s, would not have had the benefit of the Aspen Roundtable writings that were developing concurrently. My utilization of NFI supports that any case selection must be made carefully, and approached with attention to analytic concerns, the context
within which the study and those concerns take place, and the issues of establishing credibility. Within the approach to analysis of this particular case, I addressed the concerns for credibility of the study, in part, through a process that allowed me to approach the study topic from various perspectives on the same data. In Chapter Four, I provided evidence of those multiple views as I outlined reports, examined them for key aspects, addressed topical issues, and provided analysis of the dimensions, lessons, and change constructs embedded in the reports. As in any study, much of that process remains hidden in the hours of systematic study and contemplation, as I took notes, worked with the data, and utilized a variety of drawings and writings to help me understand the data. It is regrettably impossible to bring anyone along for the duration of that process. However, I have tried to provide glimpses of the process and to assist the reader in identifying standards and questioning that may be directed toward my methodology and final writing. A study limitation that I found even more troubling than hidden processes was my own reluctance to interpret and categorize from the textual data. It was surprising to me that, in working with documents, I felt immense pressure to ensure that I did not deviate from the categorizations of the authors themselves. I also found myself restricting myself for a longer period of time than I had ever done while analyzing interview or observation data. This was evidence of the solidifying effect of language, especially as it occurs in written evaluation reports. Reflecting on this tendency confirmed that it is necessary to analyze text for its role in constructing reality and that the solidifying of meaning is not always beneficial to all participants. The literature on data analysis did
not prepare me for the stagnation that would occur as I dwelt for months in trying to find a way to release meaning from the structure of the formal documents without pulling the text away from the evaluators' intentions. Instructions within qualitative guides prompted me to list ideas and identify themes or recurrences. However, no methodologists provided me with a process for documenting change in the text in a way that would prevent me from losing the connections between the central concepts and the changes in meaning. It was only in interacting with the data that I came upon a visual diagramming approach (as described in Chapter Three) that enabled me to trace configurations of ideas as they changed, rather than as they recurred. This diagramming, in combination with description of the documents as whole documents, allowed me to keep the text in context at the same time that I examined change.

Thinking about the idea of examining, I am at a loss to find the author who described qualitative research as a process of moving around a statue. In the context of understanding meaning, this idea of movement would lead to a concept of meaning as multi-dimensional. I am not sure how to describe a multi-dimensional approach to meaning, although I know that I experienced it in the layering of my interpretive process. It involved utilizing various conceptual levels and types of questioning in relation to the same text. This approach lends to case study a deeper understanding of the differences between triangulation approaches that are based in the case and possible triangulation approaches that are based in the action of the researcher, as she moves around a study by using interpretive views. This concept of multiple perspectives also holds some understanding for current trends for focusing studies within areas of action, rather than within a discipline. Doing so leaves open for exploration the possibility of
interdisciplinary evaluation, as a multiple perspective approach to analysis. According to Chapin Hall evaluators, the interdisciplinary approach was a possibility explored locally, but without success. I suspect that the lack of success was in part due to the same lack that I experienced in existing research approaches for understanding the meaning of a multi-dimensional evaluation.

Another multi-dimensional concern of this study is the use of terminology by qualitative researchers. In Chapter Three, I explained that each of the authors whose work I utilized for qualitative terminology provided a different framing of the levels of conceptualization and the associated terms. My approach to generating a notion of a change construct demonstrates my coming to terminology from within the needs of the study, rather than through an established methodology. This approach was necessary, not as change for the sake of change, but because the existing research terminology did not adequately meet my desire to understand dimensions, lessons, and change over time. Using existing terminology would have posed a serious credibility issue for the study, in that there would have been a preconditioned mismatch between the nature of the case, as an example of change, and the approach to the study.

In addition to new terminology, enduring research concepts were also of concern. To the extent that portions of my findings appear to be purely descriptive, the reader should critique concepts of validity and credibility. It is partially through my choice of description, and the NFI text revealed within it, that I lead the reader to understandings of pertinent issues. Just as the written word of the NFI evaluation reports led me to perceive concrete meanings, my own descriptions may also lead the reader to a notion of fixity, rather than fluidity, or to an unquestioned immersion into rhetoric, rather than meaning. I ask the reader to question my text, hopefully coming to question the various aspects of my findings and the possible relationships that exist between the categories that emerged in the study. In other words, although I have outlined the specific approaches I have taken, the validity of this study's text is at best incomplete without the engagement of the reader. I therefore admit that the credibility of the study is uncertain in the temporal space of the reading of the study, but may prove stronger in days to come, as the ideas reach beyond these pages and
into actual initiative ideas. Within this context of limitations, I proceed with a discussion of possible contributions of this study to policy, theory-building, and evaluation practice and end with a conclusion about possible new directions in CCI evaluation research and engagement.
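
Before proceeding, it may help to make the configuration-tracing idea concrete. The sketch below is a minimal, hypothetical illustration only -- not the procedure used in this study, which relied on visual diagramming of text units rather than computation. It models each report's cluster of ideas as a set of pairings and reads change from how those pairings shift between sequential reports, rather than from how often a theme recurs; all idea labels and data structures are invented for the example.

```python
from itertools import combinations

# Hypothetical data: ideas that appeared clustered together in each
# successive report (labels invented for illustration only).
reports = [
    {"comprehensiveness", "governance", "planning"},        # report 1
    {"comprehensiveness", "governance", "sustainability"},  # report 2
    {"comprehensiveness", "sustainability", "leveraging"},  # report 3
]

def configuration(ideas):
    """Represent a cluster of ideas as the set of its pairings."""
    return set(combinations(sorted(ideas), 2))

def trace_change(reports):
    """Compare each report's configuration with its predecessor's."""
    for earlier, later in zip(reports, reports[1:]):
        before, after = configuration(earlier), configuration(later)
        yield {
            "retained": before & after,   # pairings that persisted
            "dissolved": before - after,  # pairings that fell away
            "emergent": after - before,   # pairings newly formed
        }

for step, change in enumerate(trace_change(reports), start=1):
    print(f"Reports {step} -> {step + 1}:")
    for kind, pairs in change.items():
        print(f"  {kind}: {sorted(pairs) or '(none)'}")
```

The point of the sketch is the contrast it encodes: a theme-recurrence analysis would count how often an idea such as "comprehensiveness" appears, while the configuration view shows that the company the idea keeps changes from report to report.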

Contributions of Study to Policymaking, Theory-Development Within Initiatives, and Evaluation Language Practice

The contributions of this study span policymaking, theory-development as it might occur in community initiatives, and evaluation practice for complex initiatives. The study points to areas in which evaluators should be cautious of supporting, or even creating, a distance between communities and policymakers. Evaluative distance may serve to thwart the effectiveness of policymaking initiatives. For example, even if framed in a notion of programmatic decentralization, tiered evaluative structures, and the evaluators that perpetuate them, can serve to distance local constituents from policy processes. Even in processes of policymaking that are intended to provide concepts open to definition by local communities, intermediaries may co-opt policy rhetoric. By doing so, they may limit the change potential inherent in the concepts. The reverse may also hold possibility: Exploration of the way in which policy language, and compensation for language, is distributed may serve as an indication to policymakers of the effectiveness of fund distribution and learning. In order to meet this possibility, policymakers and evaluators may need to pay particular attention to the processes of theory building as these processes occur in demonstrations. Theory building provides one avenue for linking policy funding to learning about the ways in which local communities can leverage funds into knowledge
development, and into formal reporting for policy influence. However, participation in knowledge development processes also risks becoming limited by evaluative structures that control rather than develop local voice. Within decentralizing initiatives, institutional forces, although disguised as independent evaluation, may serve to centralize information interpretation just as lip service is being paid to stakeholder participation and information-sharing. In these cases, divisions, such as national versus local control of theory building, may perpetuate inequity in funding distribution and knowledge development. The concept of theory-of-change evaluation is not enough to address the challenges of thinking about theory building in complicated contexts. Approaches, responsibilities, resources, and consistency are all issues that prove difficult in even the highest-funded initiatives. More advanced methodologies and more funding directed toward advanced measures, even when situated within a theory-of-change approach, will not necessarily provide for deeper understanding of evaluative decision-making. Nor will they provide the transitional or bridging language necessary to bring relationships of CCI concepts together in meaningful and action-supporting efforts. These challenges are areas to which this study might contribute as the study draws attention to the language of reporting. The attention to evaluative language, as reporting practice, moves the study's contributions to the importance of distinctions between community organization building, coalition formation, and initiative action. Although sometimes co-existent, the three arenas may have differing principles not often understood or distinguished. The concepts may also be specific to arenas of various policy communities. As Cabatoff (2000)
suggests, evaluation requires attention to the translation of findings into "policy language" appropriate to the policy community in question. The evaluative concepts most suitable for diverse purposes may need to be deciphered in evaluative language practice in order for approaches to community evaluation to be maximized. Although NFI evaluation reports provided lessons about governance structures and evaluation challenges, the analysis of NFI reports has provided additional concepts to support understanding of initiative evaluation. In presenting CCI characteristics, dimensions, lessons, and change constructs, the study invites questioning of how these concepts relate to each other and to the structures of initiatives. The study also prompts continued inquiry into how these concepts can be brought together to inform evaluative language in practical relation to decision-making in complexity.

Language in Reporting: Implications for Future Research

I have analyzed an example of CCI evaluation reporting and have drawn from this analysis some ideas about contributions to policymaking, theory-building, and evaluation practice. My study has been limited by the approach chosen, the data available, my own preparation and experience, and the situatedness of my own thinking within a social and historical context. These same aspects have also supported the study. This study would not be complete without a suggestion of ways in which research about, and for, CCI evaluation might be improved through the study's findings. After all, the study does lead to some understandings about seeing things.

Amidst a picture of an evaluation that did not proceed as planned, in an initiative that did not end as planned, within a planning process that was not implemented as planned, perhaps it can be learned that the reporting of a CCI is doomed to a perceived failure, given the ambiguity of the term comprehensive. Maybe it is learned that a funded initiative can never become comprehensive community development until it loses its evaluation focus on being about comprehensive community development. The mismatch that exists between evaluative approaches and the explicit or implicit purposes of an initiative should not surprise observers. Asking for formal reports to lend themselves to CCI learning and education may be asking too much in the context of coalitional diversity and the competing interests that exist in contentious policy arenas and initiatives themselves. In the contention, evaluation is left with a definitional void. A contribution of this study may be the idea that groups look to a language to provide stability and continuity in the presence of ambiguity and contention. A term such as comprehensiveness, used as a lens, is destined to lose its value in the search for a constant. In discomfort, the void of comprehensiveness, which offers possibilities of learning and change, can quite easily be co-opted by intermediary attempts at certainty and self-sustainability. Fortunately, the need for certainty may just as easily be filled through concepts explicitly related to learning and knowledge development -- e.g. data, outcomes, communication, and context. It has been documented that the language created in the process of evaluation can indeed have influence. Language can serve to change the nature of relationships between evaluators and participants toward a more dialogic interaction (Rallis & Rossman, 2000). As Schwandt (2002) explains:

Reframing evaluation as practical hermeneutics restores a focus on our efforts to reach evaluative understanding in everyday life. It urges us to attend to the lebenswelt -- to the practical and communal life of persons, to dialogue and language. "The conversation that we are," to borrow a phrase from Gadamer, is about the meaning of speech and action, and meanings are expressed in language. That language is not private but shared, and hence meaning is not subjective, but intersubjective. Moreover, the significance of our language use does not reside solely in its capacity to designate, discover, refer, or depict actual states of affairs. Rather, language is used to carry out or perform actions and to disclose how things are present to us as we deal with them. This is the historical, cultural, and linguistic context of our practices and our shared being -- it can never be fully objectified or grounded (Guignon, 1991). We both start and end our efforts to make sense of things in our best grasp, our best account, of ourselves as agents in the world. (p. 79)

It is reasonable to posit that evaluation language may also serve in the construction of an initiative. If, as NFI evaluators claimed, comprehensiveness is to be a lens for community initiatives, then there is a need for conceptual tools that help to link various disciplinary discourses around the notions of comprehensiveness. Using evaluation to attend to comprehensiveness may therefore require approaching case understandings through ideas of shared evaluation practice rather than from the canons of existing knowledge factions. The tendency in the NFI evaluation to exert control within the evaluative language of national and local separation prompted me to consider issues of decentralization as it pertains to evaluation. Through the study, I came to believe that, despite the discussions of evaluation "use" and ideas of theory-of-change evaluation, concepts of evaluation and decentralization were not addressed adequately within NFI or within the writings of the Aspen Roundtable. The inability of NFI sites to maintain a local coalition was perhaps due to the inability of Chapin Hall evaluators to effectively develop a language for reporting. This inability, in turn, may have been due to the Aspen
Roundtable's own inability to effectively construct or distribute an evaluation phenomenon whose language of change would have supported understandings of reporting. Of course, change is not always a popular or comfortable concept, especially when it refers to alternatives to existing power structures. Evaluators deal with this lack of popularity in a variety of ways. Some choose not to talk about change. Others choose to involve participants in learning about change, but not in the reporting of change. Others, as is the case with the NFI reports, reveal change or the lack of change, in ways that they may not even recognize or discuss. In the case of NFI, the initiative moved from a centralized structure of three national organizations to an increasingly decentralized structure of multiple organizations providing services and counsel to the local collaboratives. NFI evaluators resisted efforts to engage with the notion of change in their approaches. From early in their NFI reports, the Chapin Hall evaluators noted their skepticism about the ethnographic approach they claimed to be taking. They repeatedly suggested that a network analysis would offer a more formal approach to evaluating the work of the local collaboratives. They repeated this preference throughout the NFI reports as they documented the challenges with the approach they were taking. In the final analysis of the NFI reports, the Chapin Hall evaluators noted the ways that the evaluation had not worked. However, evaluation as a two-tiered design may have served exactly the interests of the national evaluators -- the dependence of local collaboratives upon national evaluation intermediaries for public voice -- and the maintenance of the status quo.

In addition to awareness of this professionalized dependence, supported through language, the study has led me to a main conclusion: that evaluation, as it has become increasingly professionalized as a field, has also become divided. The division is not only into various approaches and camps, but also into groupings of evaluative ideas, as situated within various categorical streams of understanding. Health evaluators come together to discuss health funding and also to debate evaluation approaches; housing evaluators do the same in reference to housing and evaluation ideas; youth development evaluators come together; economic experts come together; and so on. All speak within their own power structures, with that talk manifesting in disciplinary, field-based, and institutionalized communication. This tendency leads one to wonder who is speaking with communities and to policymakers. Perhaps it is unreasonable to expect this to be otherwise. The realities of bureaucratic funding streams, if unquestioned, dictate just this tendency. However, in this increased division, there may also be unifying possibilities. If researchers look for the concepts that cross boundaries, they may find avenues to increased understanding. Discussions about concepts like community, urban studies, and evaluation itself hold the potential to bring together ideas that have become fragmented. For example, shifting the conversation from evaluation as a practice to evaluation as a bridging language may provide opportunities to bring together lessons learned in various types of initiatives whose foci have been categorized by different funding streams. The study has shown that, although ideas of development may have differed, the ideas of theory-development and participation served as loose linkages between the NFI evaluators and the Aspen Roundtable, with organizations mediating the differences
between the two. This CCI evaluation grouping provided just one avenue to understanding the ways in which the language of reporting identified and perpetuated a knowledge community. I have chosen this particular grouping because of my interest in ideas of comprehensiveness, community, change, and urban neighborhoods experiencing the symptoms of poverty. I also was interested in engaged approaches of theory building and the connections between learning, knowledge development, and education around community initiatives for coalitional development. I leave it to others to take up the exploration of groupings that center on their concerns, and to explore the language and associated knowledge communities that bridge their interests. I admit that this conclusion is not a comfortable place to leave a study, as it brings readers to places of uncertainty represented in the question of "what next?" This question can only be answered through the future interpretations of this work. Nevertheless, I will be mindful of the discomfort and leave the reader with some thoughts about the directions I hope to take this work. Evaluation literature is in need of a language of reporting. Research on CCI reporting is an under-explored but fertile arena. As demonstrated in this study, reports themselves are important artifacts of CCI evaluation, useful to the understanding of key concepts of initiative evaluation. The development of a language of CCI evaluation reporting has been limited in the literature about CCI evaluation. Yet, developing a language of reporting is a necessary step in the understanding and improvement of CCI evaluation. A language of reporting, if clearly distinguished through funding charters, charges, and evaluative structuring, may help to support the notion of, and change in, efforts to decentralize knowledge development.

In addition, the research on measurement of coalition outcomes is still relatively limited (Berkowitz, 2001). This research may also benefit from a deeper understanding of evaluation language, as an integral part of coalition success. As evidenced in the shifting language in the NFI evaluation reports, a study of language, as it is dispersed through coalitions, may be an indication of the potential success of a coalition. In relation to comprehensive coalitions, the ability of a coalition to quickly distribute changing policy language according to investment opportunity (either by categorically based funds or more creative funding approaches such as comprehensive funding) may also indicate the functioning of that coalition. Although the Aspen Roundtable literature can be utilized by various strands of interest (e.g. health, youth development, education), there is little discussion within the evaluation literature or the coalition literature about examining the success of comprehensive approaches in relation to their ability to distribute language throughout a coalition. This focus would bring the issue of language to the forefront of concepts of change as those concepts relate to community development and the social and political dynamics surrounding the work of decentralized community structures. This focus could also influence the capacity building ideas of community coalitions (Chaskin, 1999, 2001; Foster-Fishman et al., 2001; Wolff, 2001b), offering a deeper understanding of the use of language in the valuing of, and sharing of value within, community coalitions. A focus on language value within coalition outcome and CCI evaluation literature would also bring, within ideas of evaluation, a deeper understanding of the social construction of meaning as it relates to influence within complex contexts. This focus encourages an emphasis on reporting as an integral issue within discussions of valuing. This inclusion serves to shift the discussions of evaluation,

279

from those of evaluation use, focused mainly on process utilization of evaluation methods, to a discussion of language as itself value-laden, value enhancing, and therefore valuable. Combining the study of reports with decision-making processes in evaluation is one way that researchers can help to develop a deeper understanding of evaluation language. Another way would be to combine the study of funding structures in relation to evaluation participation in reporting. Meta-analyses across reports may also be useful in identifying commonalities and differences in variously structured initiatives and types of reporting. Finally, a more targeted case analysis of Ford Foundation funding (because of its longevity, innovativeness, and historic funding for evaluation) across time and in relation to evaluation reporting, funding structure, socio-political context and broader research reporting would be enlightening. Such a study would provide a deeply contextualized understanding of evaluative reporting in historical contexts. A study of this focus would also enhance the discussion of the functioning of bureaucratic structures in distancing constituents from policymaking, and would shed light on the limitations of constructions of funding, policy-making, and change as questions of top-down versus bottom-up processes. To fully achieve these benefits of these types of studies, another element of research is necessary – the development of a framework to help in bringing together understandings and practices of policymaking, initiative theory building, and evaluative practice. Schlager (1999) tells that frameworks: bound inquiry and direct attention of the analyst to critical features of the social and physical landscape. Frameworks provide a foundation for inquiry by specifying classes of variable and general relationships among them, that is, how the general classes of variable and general relationships among them, that is, how

280

the general classes of variable loosely fit together into a coherent structure. (p. 234). A framework must address the claims that theory-of-change approaches are too linear in their characterization of neighborhood revitalization efforts (Fraser et al., 2002), by depicting CCI evaluation as multiply located within a decentralized arena of voice. A framework should depict evaluation as integrated within a system contextualized within, rather than separated from, the broader ideas of development that enter initiatives by way of financial and human investments. In this way, a framework can draw attention to the notion of contextualization related to political arenas for evaluation process (Segerholm, 2003), as contextualization is aligned with notions of constructivist learning within research processes that are linked to social and political arenas (Greene, 1997, 2000). A framework can help to de-center the ideas of communication to mirror those within networks as dispersed rather than flowing from the “center to periphery” (Springer & Phillips, 1994, p. 19), or as might be expected, from the funder directly to the grantee. Finally, a framework can encourage the “self-consciousness” required of evaluators (Lincoln & Guba, 2004) by emphasizing that, in reporting, evaluation must be considered as holistic with the distinctions between outcomes, data, communication, and context, less important than the messages that these together support. The call to develop another framework is a risky one, given the documentation of various frameworks (often depicting graphically), highlighting horizontal relationships, bottom-up and top-down structures, staged processes of development, and the horizontal and vertical structures of political contexts. However, in calling for a framework, I am calling for a contribution to the understandings of policy processes that goes beyond traditionally represented graphics to move toward a depiction of the ongoing learning processes and knowledge

281

development that occur in demonstration initiatives and the multiple concepts that comprise the reporting. With this study, I have shared the questioning and processes that I have engaged in while exploring my curiosity around reporting in CCI evaluation. I found that reporting of initiatives may describe community-building processes but the reports themselves, as artifacts, contribute to the understandings of coalitional reporting. In positing a version of the background of NFI, and the dimensions, lessons, and change areas of reports, I have provided the content for framing future research about CCI evaluation reporting. I have described possible next steps in research on reporting, and have indicated that a language of change is necessary to support community initiatives as they transition toward their goals of decentralization. I have left the reader with some specific avenues for further research, and in doing so, encourage a continued line of inquiry that takes into account the importance of the products of evaluative investment as representations of the meaning of initiatives. I encourage a development of a framework for conveying an evaluative language necessary to support CCIs.


Appendix A
Selection Process for National Initiative

CCI information as retrieved from the Aspen Institute website of 2001 and Aspen Roundtable 1998 membership.
Key: $ = roundtable funding; O = membership on the roundtable; X = membership on the evaluation committee; E = CCI evaluation reports available.
Each entry lists: CCI sponsor / representative | initiative / program officer | locations / participants | policy / research firm (evaluator) | evaluation reports available.

Ford Foundation ($, O) | Neighborhood and Family Initiative (Robert Curvin, O) | Milwaukee, Detroit, Hartford, Memphis | Chapin Hall Center for Children | E: Moving toward Implementation; Challenge of Sustainability; Toward a Model; Findings from a Survey of Residents; Entering the Final Phase

New York Community Trust | Agenda for Children Tomorrow | New York City

United Way / Carter Center | Atlanta Project | Atlanta | Emory University | E

Boston Foundation (Rockefeller Foundation) | Boston Community Building Network | Boston

Enterprise Foundation | Community Building in Partnership | Baltimore (Sandtown-Winchester) | Conservation Company | E: Interim Evaluation

Local Initiatives Support Group | Community Building Initiative | Chicago, Detroit, Indianapolis, Kansas City, Los Angeles, Miami, New York City, Philadelphia, Phoenix, St. Paul, Washington

Surdna Foundation (Anita Miller, Program Director; O, $) | Comprehensive Community Revitalization Program | South Bronx | Organization and Management Group (OMG) | E: First Annual Assessment Report; Final Assessment Report

HUD | Empowerment Zones | Multiple sites, including Baltimore, Detroit, Philadelphia/Camden, Cleveland, New York City, Little Rock, Phoenix, Denver, Bridgeport, New Haven, Washington DC, St. Paul, Providence, Milwaukee, Boston | Rockefeller Institute of Government; Price Waterhouse | E: Comprehensive Report; Community Plan for Strategic Change; New Paths to Opportunity; Online Tracking

Annie E. Casey Foundation (Ralph Smith, VP; $, O) | Jobs Initiative | Denver, New Orleans, St. Louis, Philadelphia, Seattle, Milwaukee | Abt Associates | E: Year One Cross Site Report

Edna McConnell Clark Foundation (Michael Bailin, President; O) | Neighborhood Partners Initiative | Central Harlem, South Bronx | New School for Social Research | E: The Startup

Pew Charitable Trusts ($) | Neighborhood Preservation Initiative | Boston, Cleveland, Indianapolis, Kansas City, Memphis, Milwaukee, Philadelphia, St. Paul, San Francisco | Urban Institute | E: Report on Initial Implementation

New York Community Trust | Neighborhood Strategies Project | South Bronx, Brooklyn, Manhattan | Rockefeller Institute | E: Report on Planning Period

Annie E. Casey Foundation ($) | Neighborhood Transformation / Family Development | Atlanta, Baltimore, Boston, Camden, Denver, Des Moines, Detroit, Hartford, Indianapolis, Louisville, Miami, Milwaukee, New Orleans, Oakland, Philadelphia, Providence, San Antonio, San Diego, Savannah, Seattle, St. Louis, Washington | Chapin Hall Center for Children (Harold Richman, Director; Prudence Brown; O, X) | E: Research Statement and Protocol


Annie E. Casey Foundation ($) | New Futures Initiative | Savannah, Little Rock, Dayton, Bridgeport | Center for the Study of Social Policy | E: Building New Futures; Dayton's New Future Initiative; Little Rock New Futures; Pittsburgh New Futures; Savannah New Futures

Annie E. Casey Foundation ($) | Rebuilding Communities Initiative | Roxbury, Denver, Detroit, Philadelphia, Washington DC | OMG | E: Phase I Progress Report; Planning Phase Assessment

Roundtable funders and members listed without initiatives: Robert Wood Johnson Foundation (Ruby Hearn, Senior VP; $, O); Foundation for Child Development ($); Rockefeller Foundation (Angela Blackwell, Senior VP; $, O).

Additional roundtable funders ($) and members (O): MacArthur Foundation (Ralph Hamilton, Director of Florida Philanthropy; Susan Lloyd, Director of Building Community Capacity); Mott Foundation (James Litzenberg, Program Officer); Kellogg Foundation (Geraldine Brookings, VP Programs); Spencer Foundation (Patricia Graham, President); HHS; James Irvine Foundation (Craig Howard, Program Officer); DOE (Terry Peterson, Counselor to the Secretary); Cleveland Foundation.

Cleveland Foundation | Cleveland Community Building Initiative | Cleveland | Center for Urban Poverty and Social Change | E: Implementing a Theory of Change – A Case Study

Cleveland Community Building Initiative (continued): Ronald Register, Director (O, X); evaluator: Center for Urban Poverty and Social Change (Claudia Coulton); E: Baseline Progress Report.

Roundtable members (O) and evaluation committee members (X) listed without initiatives: Harvard (Lisbeth Schorr, Julius Richmond, Carol Weiss, Heather Weiss); American Enterprise Institute (Douglas Besharov); National Center for Children in Poverty (Barbara Blum); Rheedlen Centers for Families (Geoffrey Canada); Georgetown University Law Center (Peter Edelman).

Roundtable members (O), continued: Stanford (John Gardner); Center for Collaboration for Children (Sid Gardner); City of Indianapolis (Mayor Goldsmith); School District of Philadelphia (Superintendent Hornbeck); Chatham-Savannah Youth Futures Authority (Otis Johnson); Mathtech (William Morrill); UNC (Michael Stegman); Public/Private Ventures (Gary Walker); Center of Children in Poverty (Lawrence Aber).

Evaluation committee members (X): MIT (Philip Clay, Langley Keyes); Institute for Research and Reform in Education (James Connell); Manpower Demonstration Research Corporation (Robert Granger); Swarthmore College (Robinson Hollister Jr.); UC Berkeley (Joyce Lashof); Vera Institute of Justice (Mercer Sullivan).

Appendix B
Narrative Criteria

In composing this narrative as a dissertation, I utilized resources to support the quality of the work. These resources focused my attention alternately on general language, social science, case study, and qualitative research as a tradition. For general writing style and rules of grammar, I utilized The elements of style by William Strunk Jr. and E. B. White (2000) and A pocket style manual by Diana Hacker (1993). To review my writing in terms of social science, I referred to Howard S. Becker's Writing for social scientists: How to start and finish your thesis, book, or article (1986). For reviewing my work within the quality ideas of case study research, I turned to Robert Yin's Case study research: Design and methods (1994). I utilized Yin's basic quality statements for determining the quality of a case study report. These statements included:

The case must be significant.
The case must be complete.
The case must consider alternative perspectives.
The case must display sufficient evidence.
The case study must be composed in an engaging manner.

I also drew more extensively from Robert Stake's checklist in The art of case study research (1995), which included the following questions:

1. Is this report easy to read?


2. Does it fit together, each sentence contributing to the whole?
3. Does this report have a conceptual structure?
4. Are its issues developed in a serious and scholarly way?
5. Is the case adequately defined?
6. Is there a sense of story to the presentation?
7. Is the reader provided some vicarious experience?
8. Have quotations been used effectively?
9. Are headings, figures, artifacts, appendixes, and indexes effectively used?
10. Was it edited well, then again with last-minute polish?
11. Has the writer made sound assertions, neither over- nor under-interpreting?
12. Has adequate attention been paid to various contexts?
13. Were sufficient raw data presented?
14. Were data sources well chosen and in sufficient number?
15. Do observations and interpretations appear to have been triangulated?
16. Is the role and point of view of the researcher nicely apparent?
17. Is the nature of the intended audience apparent?
18. Is empathy shown for all sides?
19. Are personal intentions examined?
20. Does it appear individuals were put at risk?

In order to review my work with attention to the community of qualitative researchers and to the critiques of qualitative research, I utilized Potter's An analysis of thinking and research about qualitative methods (1996). Potter poses the following categories and critiques to consider:


Problems in positioning the research
Mischaracterizing methodologies used
Non-illumination of axioms (researcher assumptions)
Misleading assumptions (by writing in a manner that claims one truth)
Problems with informing the reader about evidence selection
Sampling
Balanced or focused evidence (being clear about which one)
Primary and secondary sources
Clarity in presenting methods
Illuminating analytic procedures
Conceptual leverage (high-level focus on concepts / low-level focus on description)
Generalizing (leveraging of conclusions)
Contextualization (comparing subject to elements outside itself)
Self-reflexivity
Writing (attention to goals and readers)
Making a case for quality (making a conscious case for the quality of qualitative research)
Correspondence between theory and practice
Correspondence of qualitative prescriptions and practices (desire high)
Correspondence between qualitative and quantitative approaches (desire low in key areas)


Appendix C
Description Information Matrix

The matrix below is organized by descriptive question; under each question, entries are listed by report (1992; 1993; 1995 & 1997; 1999; 2000 & summary; Cosmos; Michigan 1993; Michigan 1994; Milwaukee 1998).

What:
1992: community development initiative
1993: Initiative was to operate through a prescribed structure for comprehensive and integrated neighborhood planning and development.
1995 & 1997: community development initiative
1999: is at a critical juncture: it has entered the final phase of Ford Foundation funding. After this period, NFI as a national demonstration will be over.
2000 & summary: One of the earliest of what have come to be known as comprehensive community initiatives (CCIs), NFI was eventually to become a 10-year effort. It was also a demonstration project.
Cosmos: This document is part of a larger final report of the Common Data Collection efforts undertaken by COSMOS Corporation for The Ford Foundation under Grant No. 960-0128.
Michigan 1993: The Neighborhood and Family Initiative is a program of the Community Foundation for Southeastern Michigan. This multiple-year project is a "comprehensive, neighborhood approach."
Michigan 1994: This report is based on activities of NFI between May 1, 1993 and March 31, 1994.
Milwaukee 1998: The Neighborhood and Family Initiative (NFI) is a community development program

Where:
1992: in four cities (Detroit, Hartford, Memphis, Milwaukee)
1993: local level
1995 & 1997: four cities (Detroit, Hartford, Memphis, and Milwaukee)
1999: in Detroit, Hartford, Memphis, and Milwaukee
2000 & summary: a single neighborhood in each of four cities
Cosmos: This report focuses on local level data indicators in four neighborhoods and their surrounding areas, located in Detroit, Michigan; Hartford, Connecticut; Memphis, Tennessee; and Milwaukee, Wisconsin.
Michigan 1993: lower Woodward Corridor of Detroit, Michigan
Milwaukee 1998: in four cities (Detroit, Hartford, Memphis, and Milwaukee)

By whom:
1992: sponsored by the Ford Foundation and launched through the agency of community foundations
1993: The Ford Foundation launched the Neighborhood and Family Initiative.
1995 & 1997: sponsored by the Ford Foundation and launched through the agency of community foundations
1999: Initiative has been funded for almost ten years; their sponsoring community foundations
2000 & summary: the Ford Foundation launched the Neighborhood and Family Initiative
Michigan 1993: funded by The Ford Foundation
Milwaukee 1998: sponsored by the Ford Foundation. It was launched in 1990 through agencies of community foundations.

Who:
1992: community foundations and representatives of neighborhood interests and potential internal and external resources. These representatives are neighborhood residents or from public and private organizations.
1993: involves neighborhood residents (who can best identify neighborhood needs) with key individuals in the public and private sectors, individuals who themselves or through their professional and community organizations and affiliations have the resources to plan and implement programs to strengthen distressed communities and the families who live in them
1995 & 1997: relevant institutions and actors in both public and private sectors
1999: collaboratives
2000 & summary: relevant organizations and actors, in both the public and private sectors
Cosmos: The COSMOS team is indebted to many organizations in these areas, including the police departments and the school districts, who provided the local data used in the report.
Michigan 1993: The project is guided by an 18-person committee, the Collaborative, the majority of whose members either live or work in the target area.
Michigan 1994: The consultants worked with the NFI Collaborative, its various committees, and foundation staff throughout the program year.
Milwaukee 1998: community foundation

What for:
1992: seeks to revitalize and empower whole communities and the individuals and families who live in them. The Foundation has submitted, for local exploration and implementation, a general statement of philosophy and conceptual concerns, to be tested by demonstration in four different sites, upon which action under NFI is to be based.
1993: a determined investigation, an exploration of the possibilities and challenges of broad-based relationship building and cross-sectoral collaboration
1995 & 1997: It attempts to create the circumstances under which a working model for neighborhood-based, integrated development can be generated.
1999: What will be left, what will have been accomplished, and what will continue to develop in the wake of NFI as a formal initiative are the questions that participants are grappling with and attempting, through their actions, to answer.
2000 & summary: sought to strengthen a single neighborhood in each of four cities and improve the quality of life of the families who live in them; designed to explore the usefulness and viability of a set of principles and a general approach to community development, and to provide lessons for policy makers and practitioners engaged in similar work in the field
Michigan 1993: to help improve life for families and individuals and reduce neighborhood deterioration in the "lower Woodward Corridor" of Detroit, Michigan. The mission of the project as determined by the Collaborative is "To improve the quality of life of those who reside and work in the lower Woodward Corridor."
Michigan 1994: implementing the evaluation design
Milwaukee 1998: Its intent was to create the circumstances under which a working model for neighborhood-based, integrated development would be generated.

How:
1992: a collaborative governance structure to link the community foundations and representatives. The collaboratives are conceived of as the generative body for planning, monitoring, and coordinating the implementation of action under NFI.
1993: collaboration among relevant institutions and actors from the public and private sectors in developing and exploiting resources for community development
1995 & 1997: Action under the Initiative is set within a particular operational structure that links the four sites and provides the basic organizational outline for each of them, and is to be guided by adherence to two central principles.
1999: the collaboratives in Detroit, Hartford, and Memphis are relatively young organizations, having recently become incorporated entities independent of their sponsoring community foundations; these collaboratives-turned-organizations are struggling to move beyond the start-up stage of organizational development. The Milwaukee collaborative, which remains unincorporated and continues to work closely with the community foundation and under its umbrella, is trying to address similar issues for different reasons.
2000 & summary: planning and implementation guided by principles
Michigan 1993: "The effort will help develop an ideal community where people are employed and where a mix of cultures and people of all income levels and ages live among fine institutions. Improvements will be incremental but should be both real and recognized by the residents and those who work in the lower Woodward Corridor."
Michigan 1994: From the start of the program year, all aspects of the NFI program were guided by a list of outcome measures approved the previous year. These outcome measures were introduced regularly into Collaborative planning and decision making. These outcomes are the foundation of this report.
Milwaukee 1998: Action under the Initiative was set within a particular operational structure that linked the four sites and provided the basic organizational outline for each of them.

By what approach:
1992: invest in the support and development of local leadership, and by integrating development strategies to address physical, social, and economic needs and opportunities within the targeted neighborhoods
1993: to develop and support local leadership, and by integrating development strategies to address physical, social, and economic needs within targeted neighborhoods, the Initiative seeks to revitalize individual communities and to draw broad lessons about strategic planning, neighborhood development, and the possibilities for collaborative action. Through the process of exploring the interrelationships among the neighborhood's social, physical, and economic issues, the Initiative hopes individual strategies will coalesce into a strategic whole, the elements of which can work together to foster synergistic, sustainable change.
1995 & 1997: utilizing available resources and seeking out new ones inside the neighborhood and throughout the larger community; weaving individual strategies into a strategic whole
1999: identify and fill an organizational niche, maintain organizational survival and continue programmatic efforts, and seek a niche and rationale for continued existence
2000 & summary: In NFI, both goals and principles were broadly stated. Particular outcome expectations and the appropriate measures of them were left unspecified by the Ford Foundation, and the "theory of change" that linked the principles, through initiative action, to expected outcomes was undefined. It was the role of local actors to identify the particular outcomes to be generated and to determine the appropriate strategic approach for accomplishing them, based on an assessment of local needs and priorities and on the opportunities and constraints provided by the local environment.
Michigan 1993: A three-year implementation grant of 1 million dollars was awarded in 1991.

On what principles:
1992: neighborhood-focused comprehensive development and active participation of residents and stakeholders in planning and implementation
1993: citizen participation and institutional collaboration; that neighborhood planning can best be done by neighborhood residents in collaboration with other people invested in and involved with the community; the most effective development strategies will take advantage of the essential interrelatedness of social, physical, and economic development, which have historically represented separate spheres of action
1995 & 1997: concerns both institutional collaboration and citizen participation
2000 & summary: comprehensive change: neighborhood development strategies need to explore and make use of the interrelationships among the social, physical, and economic needs and opportunities within and beyond the target neighborhood; organizational collaboration and citizen participation
Milwaukee 1998: The basic organizational outline is guided by adherence to two central principles of institutional collaboration and citizen participation, and comprehensive strategic planning; neighborhood development strategies need to explore and make use of the interrelations among the social, physical, and economic needs and opportunities within and beyond the target neighborhood.

Appendix D
Description Information Text

Information about "what": In the 1992, 1995, and 1997 Chapin Hall reports, the framing of the initiative is focused on the concept of community development. In the 1993 Chapin Hall report, planning is highlighted, with development incorporated and the qualifiers of comprehensive, integrated, and neighborhood. In the 1999 Chapin Hall report, the idea of the initiative as a demonstration project is prominent. In the final 2000 reports, the ideas of comprehensiveness and demonstration are brought to the fore. In the Cosmos report, data is the emphasis. For Michigan 1993, the project is a comprehensive one of the community foundation, and for Michigan 1994 the initiative would seem to be solely about activities. In the Milwaukee 1998 report, community development as a program is the focus.

Information about “where”: Across the initiative, descriptive statements about where the initiative was to take place included cities, the individual neighborhoods within cities, and a more general “local level” designation.

Information about "by whom": In the 1992, 1995, and 1997 Chapin Hall reports, the initiative is sponsored by the Ford Foundation and launched through community foundations. In the Chapin Hall 1993 and 2000 reports, and in the Milwaukee 1998 report, the Ford Foundation has launched the initiative. In the Michigan 1993 report, the initiative is funded by the Ford Foundation, and in the 1999 Chapin Hall report, the initiative has been funded without reference to by whom, but with sponsorship attributed to the community foundations.

Information about "who": In the 1992 Chapin Hall report, the focus is on individuals and organizations from the public and private sectors that serve as representatives of resources and interests. In the 1993 Chapin Hall report, residents with the ability to identify neighborhood need are to be linked to public and private sector individuals with resources. In the 1995 and 1997 reports, institutions and actors of the public and private sectors become the focus of involvement, and in the 1999 Chapin Hall report, the collaboratives take center stage. By the final 2000 report and summary, organizations and actors from the public and private sector are the focus; the concepts of representatives, residents, institutions, and collaboratives are not present in this description. The Cosmos indicator report emphasizes the organizations outside the initiative that are able to supply numerical data, such as the police departments and schools. The Michigan reports emphasize the collaborative and its committees and foundation staff. The Milwaukee report mentions the community foundations as the participants.

Information about "what for": In the 1992 Chapin Hall report, the initiative is enacted to "revitalize and empower whole communities and the individuals and families who live in them." The initiative serves as a demonstration, with an effort at local exploration and implementation of a philosophy. In the 1993 report, the emphasis is on the need of the communities identified as distressed, and the goal is to strengthen them and the families who live in them. The exploration takes on a focus on the specific issues of relationship building and cross-sectoral collaboration. In the 1995 and 1997 reports, the initiative is described as an effort to create circumstances for generating a model that focuses on neighborhood-based integrated development. In the Chapin Hall 1999 report, the initiative becomes a questioning of the future, with action posed as the process for answering questions. In the 2000 Chapin Hall report, the initiative is about strengthening single neighborhoods and improving quality of life for the families that live in them. It was about exploring the usefulness and viability of the principles of a general approach to community development and providing lessons. The Cosmos introduction does not elaborate on the purposes or use of the data. For the Michigan reports, the initiative is, in 1993, about improving life for families and individuals and reducing neighborhood deterioration and, in 1994, about implementing an evaluation design. In the Milwaukee report, the initiative is about creating the circumstances for generating a model.

Information about "how": In the 1992 Chapin Hall report, the initiative is to operate through a collaborative governance structure to link participants in planning and implementation. In the 1993 report, collaboration around developing and exploiting resources for community development is the approach. In 1995 and 1997, action is contextualized within an operational structure, and the emphasis is on adherence to principles. In the 1999 report, the initiative is questioning and addressing issues of the future. In the 2000 reports, the approach is focused on planning and implementation guided by principles. The Cosmos report does not contribute a description of the work of the initiative. The Michigan 1993 report focuses on incremental change toward an ideal community, and the Michigan 1994 report focuses on incorporating outcome measures into decision-making. The 1998 Milwaukee report focuses on operational structure.

Information about "by what approach": The how of the initiative is further identified in the reports by an elaboration of the underlying approach. In the 1992 report, investing in the support and development of local leadership is highlighted, as is integrating development strategies. In 1993, the process of exploring interrelationships is added to the approach. In 1995 and 1997, the Chapin Hall reports emphasize resource identification and use and the weaving together of strategies. The 1999 report focuses on identifying and filling an organizational niche as the approach to initiative success. In the 2000 reports, the approach is one of utilizing a "theory of change" to link principles with actions and outcomes that were decided upon by the local actors. The 1993 Michigan report simply speaks to the amount of money invested in implementation; the Cosmos and remaining local evaluation reports do not describe an approach in detail.

Information about "principles": Some of the report descriptors include reference to the specific principles upon which the initiative was to be based. In 1992 these include neighborhood-focused comprehensive development and active participation of residents and stakeholders in planning and implementation. The 1993, 1995, and 1997 reports emphasize citizen participation and institutional collaboration, collaboration in planning, and the interrelatedness of spheres of action. The 2000 reports also note a comprehensive approach and the interrelationships of social, physical, and economic needs and opportunities, along with collaboration and citizen participation. The 1998 Milwaukee report echoes the principles of citizen participation and comprehensiveness, this time in strategic planning. The Cosmos and remaining local evaluation reports do not highlight specific principles in the overview descriptors.


Appendix E
Evaluation Overview Information Matrix

The matrix below is organized by report; under each report, entries are given for the project note, evaluation purpose, evaluation or report focus, and evaluation process.

Chapin Hall 1992
Project note: As a demonstration project, NFI should not be viewed as a controlled experiment that seeks to test particular strategies in order to achieve particular, objectively measurable outcomes. It is, rather, an attempt to design a process through which to structure action, and to demonstrate and learn from a general approach within a specific governing structure and according to some basic conceptual ground rules. NFI seeks to provide and explore mechanisms to leverage resources and representation and to build within targeted neighborhoods a greater capacity to assess their needs and opportunities, and to devise workable methods to address them.
Evaluation purpose: analyzes the theoretical foundations of the Neighborhood and Family Initiative and describes the empirical circumstances under which it is being tested. It provides building blocks for the construction of a coherent theory of development, which some thought was missing from the Community Development Program formulated by the Ford Foundation in the early 1970s.
Evaluation or report focus: First, it examines the beliefs that inform the conceptual foundations of the Initiative. Briefly, this examination includes a conceptual investigation of the chosen context of the Initiative (the neighborhood), the nature of "community," and the practical implications of neighborhood definition for structuring social action. The paper also explores the notions of collaboration and participation as they pertain to NFI, and examines the idea of "integrated" neighborhood development. Second, the paper describes the overall governance structure provided by NFI and its local variants in each site. It will discuss the structure's relationship to the conceptual bases that formed it, and consider how it may affect actions taken under the Initiative. Finally, the paper will highlight issues likely to prove important in understanding the value and impact of NFI, and will try to develop some realistic expectations of outcomes.

Chapin Hall 1993
Project note: the Initiative is the work of numerous individuals, and this report has likely failed to evoke adequately their work, the work that drives the Initiative. In our attempt to be clear about the elements at work, and due to our developing understanding of the relevant dynamics, we have not told their story with narrative detail and impact. We do hope, however, that the report will provide a useful description and discussion of the process underway and the implications of the Initiative's structure, without oversimplifying its nuanced complexity.
Evaluation purpose: This report draws from these data to review the history of the Initiative, from the development of the collaboratives through October 1992, in an attempt to understand the impact and implications of the central principles and the governing structure of the Neighborhood and Family Initiative.
Evaluation or report focus: it offers a reading of how the experience of the collaboratives reflects on NFI's guiding principles and draws from the particular experiences of the participant sites evidence of general trends and lessons. It attempts to "take the temperature" of the Initiative, to compare that reading to the broad set of issues that the Ford Foundation set out to investigate, and to forecast some possible concerns and some possible responses.
Evaluation process: The second year, which ended in October 1992, entailed the development of a database program for the organization and analysis of qualitative data; collection and analysis of site-produced documentation; and collection of process data through field research, including site visits to each participant city and target neighborhood, observing collaborative and collaborative-sponsored meetings and events, and conducting extended qualitative interviews with a panel of respondents from each site (including virtually all collaborative members, community foundation participants, and project coordinators).

Chapin Hall 1995
Project note: To explore the validity of these principles and the assumptions that underlie them, the Neighborhood and Family Initiative was given a similar form in each of four participant sites.
Evaluation purpose: attempts to build an understanding of a complicated process still in progress, and place this developing understanding within an analytic framework useful to the Ford Foundation and a broader audience of funders and policymakers. The evaluation has three central purposes: (1) to refine, through conceptual exploration, Ford's model of comprehensive, participatory community development; (2) to document the process of implementation and evaluate the significance of the developing model; and (3) to investigate the implications of what is learned and explore the ways in which the Initiative can inform similar endeavors.
Evaluation or report focus: This report covers a significant amount of territory and attempts to synthesize the experiences and lessons of an extremely rich and complex Initiative. The role of the national evaluation of the Neighborhood and Family Initiative is threefold. First, it critically examines, in the hopes of developing practical, operational lessons, the usefulness and viability of the central principles that drive the Initiative, in order to shed some light on the soundness of their underlying assumptions. Second, it seeks to document and analyze the processes through which ideas are interpreted and moved to action across sites, and the organizational structures put in place to embody and act on the principles toward the realization of Initiative goals. From these activities, it hopes finally to glean from the particular experiences of the participants within and across sites some general trends, tensions, and lessons about the intent, structure, and conduct of NFI, and explore their implications for guiding neighborhood development.
Evaluation process: Our descriptions and conclusions are drawn from the analysis of site-produced documentation and through field research that includes site visits to each participant city and attendance at cross-site events; the observation of neighborhoods, collaborative meetings, and collaborative-sponsored meetings and events; extended qualitative interviews with a panel of respondents from each site, including virtually all collaborative members, community foundation participants, and project coordinators as well as some knowledgeable nonparticipants; and interviews and conversations with other participants in the Initiative, including Ford Foundation staff, consultants, and technical assistance providers. Data is coded and entered into a database program based on a qualitative scheme derived deductively from our central research questions and inductively from our observations and the analytic categories suggested by participants' perceptions on the issues and dynamics at work within their experience of the Initiative. Our investigation is thus guided by a set of central research concerns, and our understandings and interpretations are built from those of the participants; the themes we discuss for the most part represent those that emerged in our conversations with them.

Chapin Hall 1997
Project note: To explore the validity of these principles and the assumptions that underlie them, the Neighborhood and Family Initiative was given a similar form in each of four participant sites. At each local site, a community foundation was charged with identifying a target neighborhood, hiring a staff director, and creating a neighborhood collaborative.
Evaluation purpose: The evaluation has three central purposes: (1) to refine, through conceptual exploration, Ford's model of comprehensive, participatory community development; (2) to document the process of implementation and evaluate the significance of the developing model; and (3) to investigate the implications of what is learned and explore the ways in which the Initiative can inform similar endeavors. Because our intent is to derive general lessons from the particular experiences of each site and the participants engaged in the Initiative, we document the unfolding of the Initiative at a particular level of abstraction, focusing to a large extent on issues of structure, organization, programmatic approach, and collective process. Our understanding of these issues, however, is built from the concrete experience and subjective interpretations of a collection of individuals who have dedicated significant amounts of time, energy, and commitment to a complicated, ambiguous, and often frustrating process. It is the efforts of these people, primarily operating as volunteers, that form the foundation of action under the Initiative and that provide the source of knowledge about its challenges and successes.
Evaluation or report focus: The role of the national evaluation of the Neighborhood and Family Initiative is threefold. First, it critically examines, in the hopes of developing practical, operational lessons, the usefulness and viability of the central principles that drive the Initiative, attempting to shed light on the soundness of their underlying assumptions. Second, it seeks to document and analyze the processes through which ideas are interpreted and moved to action across sites, and the organizational structures put in place to embody and act on the principles toward the realization of Initiative goals. From these activities, it hopes finally to glean from the particular experiences of the participants within and across sites general trends, tensions, and lessons about the intent, structure, and conduct of NFI, and to explore their implications for guiding neighborhood development.
Evaluation process: Our descriptions and conclusions are drawn from the analysis of site-produced documentation and through field research that includes site visits to each participant city and attendance at cross-site events; the observation of neighborhoods, collaborative meetings, and collaborative-sponsored meetings and events; extended qualitative interviews with a panel of respondents from each site, including virtually all collaborative members, community foundation participants, and project staff as well as knowledgeable nonparticipants; and interviews and conversations with other participants in the Initiative, including Ford Foundation staff, consultants, and technical assistance providers. In addition, periodic telephone interviews are conducted with a smaller set of key informants to keep us up to date on events, issues, and developments at each site. Data are coded and entered into a database program based on a qualitative scheme derived deductively from our central research questions and inductively from our observations and interviews. Our investigation is thus guided by a set of central research concerns, and our understandings and interpretations are built from those of the participants; the themes we discuss for the most part represent those that emerged in our conversations with them.

Chapin Hall 1999
Project note: The Neighborhood and Family Initiative (NFI) is in its final phase as a centrally funded, four-site initiative.
Evaluation or report focus: The report focuses on issues related to the collaboratives' attempts to engage significant resident participation and support community "empowerment" while developing not-for-profit organizations with strong, competent staff and boards. It also focuses on trends in programmatic activity and some of the collaboratives' responses to the increased pressure to leverage resources to replace and supplement the Ford Foundation grants. Finally, the report looks at the broader institutional support structure: the roles of the community foundation and the Ford Foundation, key issues encountered in the provision of technical assistance, and the challenges of evaluation and understanding initiative impact.
Evaluation process: This report draws on interviews, documentation, and the direct observation of meetings and events to examine the central developments that occurred between November 1996 and December 1998 and have implications for the future of NFI.

Chapin Hall 2000
Project note: One of the earliest of what have come to be known as comprehensive community initiatives (CCIs), NFI was eventually to become a 10-year effort that sought to strengthen a single neighborhood in each of four cities and to improve the quality of life for the families who live in them. It was also a demonstration project, designed to explore the usefulness and viability of a set of principles and a general approach to community development, and to provide lessons for policy makers and practitioners engaged in similar work in the field.
Evaluation purpose: (1) to refine, through conceptual exploration, Ford's model of comprehensive, participatory community development; (2) to document the process of implementation and evaluate the significance of the developing model; and (3) to investigate the implications of what is learned and explore the ways in which the initiative can inform similar projects.
Evaluation or report focus: This report provides an update on the activities of the initiative since November 1996 and distills the lessons learned by NFI over much of its implementation through June 2000, placing these lessons within the context of what has been learned by other comprehensive community initiatives (CCIs).

Chapin Hall 2000 Summary
Evaluation purpose: The analysis provided here is based on findings from the implementation study over the 10 years of the initiative, and provides the most comprehensive overview and pointed distillation of how NFI worked, from its earliest goals and intentions to its actual achievements, long-term influence, and role in the life of its neighborhoods.
Evaluation or report focus: This report provides a summary of findings and distills a set of lessons learned by NFI in the course of its implementation.

Cosmos 2000
Project note: This document is part of a larger final report of the Common Data Collection efforts undertaken by COSMOS Corporation for The Ford Foundation under Grant No. 960-0128. The entire final report was submitted at the end of the second phase of the grant, which ran from May 1997 to September 2000.
Evaluation or report focus: This report focuses on local level data indicators in four Neighborhood and Family Initiative (NFI) neighborhoods and their surrounding areas. The four neighborhoods are located in: Detroit, Michigan; Hartford, Connecticut; Memphis, Tennessee; and Milwaukee, Wisconsin. The COSMOS team is indebted to many organizations in these areas, including the police departments and the school districts, who provided the local data used in the report. The indicators cover the topics of business development, unemployment, real estate and housing, public education, crime, and traffic accidents. All are intended to capture some aspect of the social and economic development of the neighborhoods, which was the main focus of NFI. Similarly, wherever possible the data were collected for the period spanning from 1990 to 1999, the interval during which NFI was in place.

Michigan 1993
Project note: An integral part of NFI is a national and local evaluation.
Evaluation purpose: The local evaluation of the Detroit project is the subject of this report and is authored by two consultants hired by the Community Foundation of Southeastern Michigan and the Collaborative. Although the consultants agreed to assume responsibility for the bulk of the evaluation process, it was seen as a joint effort of the consultants, NFI committees, NFI grant recipients, and the Community Foundation staff. NFI committees agreed to be active in the design, development, and monitoring of the evaluation process, to identify focus group members, and to review draft evaluation reports. NFI grant recipients will be expected to submit reports on their activities and administer participant evaluation forms. The Community Foundation staff agreed to monitor and coordinate the collection of project progress reports and participant evaluation forms and to host focus group meetings.
Evaluation or report focus: Objectives outlined in an August 1992 Executive Summary description of NFI were to serve as a guide by which outcomes would be monitored. Nonetheless, it was assumed that the outcomes would be reviewed and modified by the NFI Collaborative members. Although the evaluation would primarily consider outcomes and not process, the evaluation was seen as formative, meaning that the findings of the evaluation would be used to reshape the project. Information collected as part of the local evaluation was seen as guiding the activities of NFI as well as assessing the impact of the NFI in the lower Woodward Corridor. Finally, it was understood that the local evaluation would complement but not duplicate the national evaluation being conducted by Chapin Hall that focuses primarily on NFI process.
Evaluation process: The authors of this report met with NFI Collaborative members and the staff of the Community Foundation of Southeastern Michigan several times during the Fall of 1992 into 1993 to clarify the type of evaluation that was appropriate for this project. On February 1, 1993 they were engaged as consultants to conduct the local evaluation. It was agreed that the focus of the evaluation would be on the outcomes of NFI activities and programs. It was agreed that the evaluation would be based on four sources of data: 1. reports by the recipients of NFI grants, 2. evaluation forms filled out by participants in NFI-supported projects, 3. comments by participants in "focus groups" from each funded project, and 4. written materials prepared by NFI and other sources. The consultants agreed to develop the data collection format and instructions for evaluation administration, design and facilitate focus groups, summarize all data collected, and prepare annual evaluation reports. Specifically, the consultants agreed to: 1. consult with Foundation staff and the Collaborative committees to determine outcomes that will be monitored, 2. develop a reporting system for all NFI funded projects to monitor outcomes, 3. develop participant evaluation forms, 4. design and conduct focus groups, 5. review reports from NFI projects, 6. tabulate and summarize data from participant evaluations, 7. review written materials produced by NFI and others, 8. meet regularly with NFI committees and Community Foundation staff and prepare annual evaluation reports, and 9. provide specific administrative support for the evaluation process.

Michigan 1994
Project note: This report is based on activities of NFI between May 1, 1993 and March 31, 1994. The consultants worked with the NFI Collaborative, its various committees, and foundation staff throughout the program year. Through the Summer and Fall, we met and reached agreement on a process for implementing the evaluation design that was developed the previous year.
Evaluation purpose: From the start of the program year, all aspects of the NFI program were guided by a list of outcome measures approved the previous year. These outcome measures were introduced regularly into Collaborative planning and decision making. These outcomes are the foundation of this report.
Evaluation process: All of the data used in the preparation of this report was collected between January and March of 1994. The report is based on a review of NFI documents, questionnaires administered to various NFI participants, focus groups, and direct observation of the program by the consultants. A draft of this report was shared with NFI staff and Collaborative members. The final report expresses the views and judgments of the authors.

Milwaukee 1998
Project note: In 1996, the Milwaukee Foundation contracted with the Planning Council for Health and Human Services for the services of two local evaluators to assess the Neighborhood and Family Initiative (NFI) project outcomes covering the period of July 1, 1996 to June 30, 1998. The local evaluators were Johnnie Johnson and Cheryl Seabrook Ajirotutu, Ph.D. Cheryl Ajirotutu, Ph.D., discontinued her involvement after the data collection phase was completed to return to her academic work at the University of Wisconsin-Milwaukee.
Evaluation purpose: The goals of the NFI were threefold: (1) to compile program data for use in assessing the degree to which the projects are meeting their stated goals, objectives, and performance criteria; (2) to identify mitigating circumstances that either helped or hindered the success of meeting stated goals, objectives, and performance levels; and (3) to explore the implications for guiding the Collaborative's neighborhood development and capacity building efforts over the next five years. The evaluation of the HNFI Project is significant because this study is the only known instance where an attempt to implement a grassroots community development and capacity building model has been accompanied by an extensive, in-depth documentation and evaluation at both the national and local levels.
Evaluation or report focus: Program Overviews (one national, one local) provide the origin and background of the Initiative and offer the reader a context in which to place the programs and activities evaluated herein. The Evaluation Process is then explained step by step, including details of data collection methods, activities, outcomes, and recommendations. This section also contains a listing of program objectives and activities related to economic development, employment, and housing. In addition to indicating the responsibilities of program participants, this section provides an idea of the programs offered, organizational partners involved, and the performance of these entities during the reporting period. The Summary of Recommendations recaps the principal suggestions and the reasoning behind them. The Conclusion offers the evaluator's general interpretation of the findings, with candid opinions on the sustainability of current programs. These opinions include caveats relative to the Initiative's mission.

Appendix F
Selection of Information Search Locations

Database searches
General catalogue search for books and government documents
Alternative Press Index
Public Affairs Information Service (PAIS)
Social Science Abstracts
Sociological Abstracts
Political Science Abstracts
ERIC Clearinghouse
HUD Clearinghouse

Publisher sites
Sage Publications
Prentice Hall
Jossey-Bass

Nonprofit and philanthropic websites
Foundation Center
Council on Foundations
Association for Research on Nonprofit Organizations and Voluntary Action
Evaluators Clearinghouse
American Evaluation Association
Independent Sector

Nonprofit think tank websites
Aspen Institute
Urban Institute
Center for the Study of Social Policy
Institute for Policy Research
Community Development Research Center
Chapin Hall Center for Children and Families
Brookings Institution
Harvard Family Research Project

Foundation Center's 2003 top 25 foundations based on total giving
Bill and Melinda Gates Foundation
Lilly Endowment
Ford Foundation
David and Lucile Packard Foundation
Robert Wood Johnson Foundation
Annenberg Foundation
Starr Foundation
Pew Charitable Trusts
W.K. Kellogg Foundation
Theodore and Vada Stanley Foundation
Andrew W. Mellon Foundation
Bristol-Myers Squibb Patient Assistance Foundation
John D. and Catherine T. MacArthur Foundation
Annie E. Casey Foundation
California Endowment
Rockefeller Foundation
Robert W. Woodruff Foundation
Open Society Institute
New York Community Trust
Kresge Foundation
Duke Endowment
William and Flora Hewlett Foundation
Ford Motor Company Fund
Charles Stewart Mott Foundation
Donald W. Reynolds Foundation

Journals

For Community and Urban Studies
American Journal of Community Psychology
Journal of Community Psychology
Journal of the American Planning Association
Journal of Applied Behavioral Science
Sociological Practice
Urban Affairs Quarterly
Urban Affairs Review
American Sociological Review
Journal of Planning Education and Research
Social Science Review
Journal of Urban Affairs
Nonprofit and Voluntary Sector Quarterly
Social Science Journal
Journal of Social Issues
National Civic Review
Qualitative Inquiry
International Journal of Qualitative Studies

For Education
Educational Researcher
Review of Educational Research
Review of Research in Education
American Educational Research Journal

For Evaluation
New Directions for Evaluation
American Journal of Evaluation
Evaluation and Program Planning
Evaluation Review
Educational Evaluation and Policy Analysis
Journal of Policy Analysis and Management
Studies in Educational Evaluation

REFERENCES

Alkin, M. C. (Ed.). (2004). Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage Publications.
Arnstein, S. R. (1969). A ladder of citizen participation. Journal of the American Institute of Planners, 35(4), 216-224.
Audi, R. (Ed.). (1999). The Cambridge dictionary of philosophy (2nd ed.). New York: Cambridge University Press.
Baum, H. S. (1994). Community and consensus: Reality and fantasy in planning. Journal of Planning Education and Research, 13, 251-262.
Baum, H. S. (1997). The organization of hope: Communities planning themselves. New York: State University of New York.
Baum, H. S. (2001). How should we evaluate community initiatives? Journal of the American Planning Association, 67(2), 147-158.
Berk, R. A., & Rossi, P. H. (1999). Thinking about program evaluation. Thousand Oaks, CA: Sage Publications.
Berkowitz, B. (2001). Studying outcomes of community-based coalitions. American Journal of Community Psychology, 29(2), 213-227.
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and noncollaborative evaluations. American Journal of Evaluation, 19(3), 325-337.
Brown, P. (1995). The role of the evaluator in comprehensive community initiatives. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 201-225). Washington, DC: The Aspen Institute.
Brown, P. (1996). Comprehensive neighborhood-based initiatives. Cityscape: A Journal of Policy Development and Research, 2(2), 161-176.

Brown, P. (1998). Shaping the evaluator's role in a theory of change evaluation. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 110-112). Washington, DC: The Aspen Institute.
Brown, P., & Garg, S. (1997). Foundations and comprehensive community initiatives: The challenges of partnership. Chicago: Chapin Hall Center for Children.
Cabatoff, K. (2000). Translating evaluation findings into policy language. In R. K. Hopson (Ed.), How and why language matters in evaluation (Vol. 86, pp. 43-54). San Francisco: Jossey-Bass.
Caracelli, V. J., & Preskill, H. (2000). The expanding scope of evaluation use. In J. C. Greene (Ed.), New Directions for Evaluation (Vol. 88). San Francisco: Jossey-Bass.
Center for the Study of Social Policy. (1996, October). Systems change at the neighborhood level: Creating better futures for children, youth, and families. Washington, DC.
Chapin Hall Center for Children. (2001). Chapin Hall projects and publications 2001-2002. Chicago.
Chapin Hall Center for Children. (2002). Retrieved September 19, 2002, from http://www.chapin.uchicago.edu/AboutCH/measuring.html
Chaskin, R. (1992). The Ford Foundation's Neighborhood and Family Initiative: Toward a model of comprehensive neighborhood-based development. Chicago: Chapin Hall Center for Children.
Chaskin, R. (1993). The Ford Foundation's Neighborhood and Family Initiative: Building collaboration, an interim report. Chicago: Chapin Hall Center for Children.
Chaskin, R. (1997). Perspectives on neighborhood and community: A review of the literature. Social Service Review.

Chaskin, R. (1999). Defining community capacity: A framework and implications from a comprehensive community initiative. Chicago: Chapin Hall Center for Children.
Chaskin, R. (2000). Lessons learned from the implementation of the Neighborhood and Family Initiative: A summary of findings. Chicago: Chapin Hall Center for Children.
Chaskin, R. (2001). Building community capacity: A definitional framework and case studies from a comprehensive community initiative. Urban Affairs Review, 36(3), 291-323.
Chaskin, R. (2003). Fostering neighborhood democracy: Legitimacy and accountability within loosely coupled systems. Nonprofit and Voluntary Sector Quarterly, 32(2), 161-189.
Chaskin, R., & Abunimah, A. (1997). A view from the city: Local government perspectives on neighborhood-based governance in community-building initiatives (Discussion paper). Chicago: Chapin Hall Center for Children.
Chaskin, R., & Brown, P. (1996). Theories of neighborhood change. In R. Stone (Ed.), Core issues in comprehensive community-building initiatives (pp. 1-6). Chicago: Chapin Hall Center for Children.
Chaskin, R., Chipenda-Danoshka, S., & Joseph, M. (1997). The Ford Foundation's Neighborhood and Family Initiative: The challenge of sustainability. Chicago: Chapin Hall Center for Children.
Chaskin, R., Chipenda-Danoshka, S., & Richards, C. J. (1999). The Neighborhood and Family Initiative: Entering the final phase. Chicago: Chapin Hall Center for Children.
Chaskin, R., Chipenda-Danoshka, S., & Toler, A. K. (2000). Moving beyond the Neighborhood and Family Initiative: The final phase and lessons learned. Chicago: Chapin Hall Center for Children.
Chaskin, R., & Garg, S. (1996). Neighborhood governance. In R. Stone (Ed.), Core issues in comprehensive community-building initiatives (pp. 41-47). Chicago: Chapin Hall Center for Children.

Chaskin, R., & Garg, S. (1997). The issue of governance in neighborhood-based initiatives. Urban Affairs Review, 32(5), 631-661.
Chaskin, R., & Joseph, M. (1995). The Neighborhood and Family Initiative: Moving toward implementation. Chicago: Chapin Hall Center for Children.
Chaskin, R., Joseph, M., & Chipenda-Danoshka, S. (1997). Implementing comprehensive community development: Possibilities and limitations. Social Work, 42(5), 435-444.
Chaskin, R., & Peters, C. (2000). Decision making and action at the neighborhood level: An exploration of mechanisms and processes (Discussion paper). Chicago: Chapin Hall Center for Children.
Chavis, D. M. (2001). The paradoxes and promise of community coalitions. American Journal of Community Psychology, 29(2), 309-320.
Chelimsky, E. (1994). Evaluation: Where we are. Evaluation Practice, 15(3), 339-345.
Chelimsky, E. (1995). The political environment of evaluation and what it means for the development of the field. Evaluation Practice, 16(3), 215-225.
Chen, H. T. (2004). The roots of theory-driven evaluation: Current views and origins. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 132-152). Thousand Oaks, CA: Sage Publications.
Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. New Directions for Evaluation, 97, 7-35.
Christie, C. A., & Alkin, M. C. (2003). The user-oriented evaluator's role in formulating a program theory: Using a theory-driven approach. American Journal of Evaluation, 24(3), 373-385.
Clavel, P., Pitt, J., & Yin, J. (1997). The community option in urban policy. Urban Affairs Review, 32(4), 435-458.

Connell, J. P., & Aber, J. L. (1995). How do urban communities affect youth? Using social science research to inform the design and evaluation of comprehensive community initiatives. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 93-125). Washington, DC: The Aspen Institute.
Connell, J. P., & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 15-44). Washington, DC: The Aspen Institute.
Connell, J. P., Kubisch, A. C., Schorr, L. B., & Weiss, C. H. (Eds.). (1995). New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1). Washington, DC: The Aspen Institute.
Connor, J. A. (2003). Community support organizations: Enabling citizen democracy to sustain comprehensive community impact. National Civic Review, 92(2), 113-129.
COSMOS Corporation. (2000). Common data collected for the Ford Foundation's Neighborhood and Family Initiative: Neighborhood indicators. Bethesda, MD.
Coulton, C. (1995a). Using community-level indicators of children's well-being in comprehensive community initiatives. Cleveland: Center for Urban Poverty and Social Change.
Coulton, C. (1995b). Using community-level indicators of children's well-being in comprehensive community initiatives. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 173-199). Washington, DC: The Aspen Institute.
Coulton, C., & Hollister, R. (1998). Measuring comprehensive community initiative outcomes using data available for small areas. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 165-220). Washington, DC: The Aspen Institute.
Cousins, J. B. (1996). Consequences of researcher involvement in participatory evaluation. Studies in Educational Evaluation, 22(1), 3-27.

Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418.
Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage Publications.
Donaldson, S. I., & Gooler, L. E. (2003). Theory-driven evaluation in action: Lessons from a $20 million statewide work and health initiative. Evaluation and Program Planning, 26, 355-366.
Edelman, I. (2000). Evaluation and community-based initiatives. Social Policy, Winter, 13-23.
Evaluation toolkit: Overview. (2003). Retrieved October 26, 2003, from www.wkkf.org/programming/overview
Fetterman, D. (1996). Empowerment evaluation: An introduction to theory and practice. In A. Wandersman (Ed.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3-46). Thousand Oaks, CA: Sage Publications.
Fetterman, D. (1997). Empowerment evaluation: A response to Patton and Scriven. Evaluation Practice, 18(3), 253-266.
Fetterman, D. M. (2004). Branching out or standing on a limb: Looking to our roots for insight. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 304-318). Thousand Oaks, CA: Sage Publications.
Fine, A. H., Thayer, C. E., & Coghlan, A. (1998). Program evaluation practice in the nonprofit sector. Washington, DC: Aspen Institute Nonprofit Research Fund & Robert Wood Johnson Foundation.
Finkelstein, B., & Croninger, R. G. (1997). Listening to communities: A perspective on the design and evaluation of human services delivery. Paper presented at the Association for Public Policy Analysis and Management Annual Research Conference, Washington, DC.
Fischler, R. (2000). Communicative planning theory: A Foucauldian assessment. Journal of Planning Education and Research, 19, 358-368.

Ford Foundation. (2002). Retrieved September 19, 2002, from http://www.fordfound.org/
Ford Foundation. (n.d.). Works in progress: A status report on the Neighborhood and Family Initiative. New York.
Foster-Fishman, P. G., Berkowitz, S. L., Lounsbury, D. W., Jacobson, S., & Allen, N. A. (2001). Building collaborative capacity in community coalitions: A review and integrative framework. American Journal of Community Psychology, 29(2), 241-261.
Fraser, J. C., Kick, E. L., & Williams, P. J. (2002). Neighborhood revitalization and the practice of evaluation in the United States: Developing a margin research perspective. City and Community, 1(2), 223-244.
Fraser, J. C., Lepofsky, J., Kick, E. L., & Williams, J. P. (2003). The construction of the local and the limits of contemporary community building in the United States. Urban Affairs Review, 38(3), 417-445.
Frederick, K. A., Carman, J. G., & Birkland, T. A. (2002). Program evaluation in a challenging authorizing environment: Intergovernmental and interorganizational factors. In M. D. Whitsett (Ed.), New Directions for Evaluation. San Francisco: Jossey-Bass.
Fulbright-Anderson, K., Kubisch, A. C., & Connell, J. P. (Eds.). (1998). New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2). Washington, DC: The Aspen Institute.
Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.). New York: Pearson Education, Inc.
Gambone, M. A. (1998). Challenges of measurement in community change initiatives. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 149-163). Washington, DC: The Aspen Institute.
Garaway, G. B. (1995). Participatory evaluation. Studies in Educational Evaluation, 21, 85-102.

Glesne, C., & Peshkin, A. (1992). Becoming qualitative researchers: An introduction. White Plains, NY: Longman.
Goodman, R., & Wandersman, A. (1996). An ecological assessment of community-based interventions for prevention and health promotion: Approaches to measuring community coalitions. American Journal of Community Psychology, 24(1), 33-61.
Granger, R. C. (1998). Establishing causality in evaluations of comprehensive community initiatives. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 221-246). Washington, DC: The Aspen Institute.
Grant, L. M., & Coppard, L. C. (1993). Neighborhood and Family Initiative local evaluation: May 1993. Community Foundation for Southeastern Michigan.
Grant, L. M., & Coppard, L. C. (1994). Neighborhood and Family Initiative local evaluation: May 1994. Community Foundation for Southeastern Michigan.
Greene, J. C. (1997). Evaluation as advocacy. Evaluation Practice, 18(1), 25-35.
Greene, J. C. (2000). Understanding social programs through evaluation. In Y. S. Lincoln (Ed.), Handbook of qualitative research (2nd ed., pp. 981-999). Thousand Oaks, CA: Sage Publications.
Guzman, B. L., & Feria, A. (2002). Community-based organizations and state initiatives: The negotiation process of program evaluation. In M. D. Whitsett (Ed.), New Directions for Evaluation (pp. 57-72). San Francisco: Jossey-Bass.
Hacsi, T. A. (2000). Using program theory to replicate successful programs. In T. Huebner (Ed.), Program theory in evaluation: Challenges and opportunities (pp. 71-78). San Francisco: Jossey-Bass.
Hall, P. D. (2003). A solution is a product in search of a problem: A history of foundations and evaluation research. Cambridge, MA: Harvard University, Kennedy School of Government.

Hattrup McNelis, R., & Bickel, W. E. (1996). Building formal knowledge bases: Understanding evaluation use in the foundation community. Evaluation Practice, 17(1), 19-41.
Hawks, J. (1997). For a good cause? How charitable institutions become powerful economic bullies. Toronto, Canada: Carol Publishing Group.
Hebert, S., & Anderson, A. (1998). Applying a theory of change approach to two national, multisite comprehensive community initiatives: Practitioner reflections. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 123-148). Washington, DC: The Aspen Institute.
Henry, G., & Mark, M. M. (2003). Beyond use: Understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314.
Himmelman, A. T. (2001). On coalitions and the transformation of power relations: Collaborative betterment and collaborative empowerment. American Journal of Community Psychology, 29(2), 277-284.
Hollister, R. G., & Hill, J. (1995). Problems in the evaluation of community-wide initiatives. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 127-172). Washington, DC: The Aspen Institute.
House, E. R. (2004). Intellectual history in evaluation. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 218-224). Thousand Oaks, CA: Sage Publications.
House, E. R., & Howe, K. R. (2000). Deliberative democratic evaluation. In L. DeStefano (Ed.), Evaluation as a democratic process: Promoting inclusion, dialogue, and deliberation (Vol. 85, pp. 3-12). San Francisco: Jossey-Bass.
Huberman, M. (1995). The many modes of participatory evaluation. In L. M. Earl (Ed.), Participatory evaluation in education (pp. 103-111). Washington, DC: Falmer Press.
Huebner, T. A. (2000). Theory-based evaluation: Gaining a shared understanding between school staff and evaluators. In T. Huebner (Ed.), Program theory in evaluation: Challenges and opportunities (pp. 79-89). San Francisco: Jossey-Bass.
Hula, R. C., Jackson, C. Y., & Orr, M. (1997). Urban politics, governing nonprofits, and community revitalization. Urban Affairs Review, 32(4), 459-489.
Innes, J. E. (1995). Planning theory's emerging paradigm: Communicative action and interactive practice. Journal of Planning Education and Research, 14(3), 183-189.
Innes, J. E., & Booher, D. E. (1999a). Consensus building and complex adaptive systems: A framework for evaluating collaborative planning. Journal of the American Planning Association, 65(4), 412-423.
Innes, J. E., & Booher, D. E. (1999b). Consensus building as role playing and bricolage: Toward a theory of collaborative planning. Journal of the American Planning Association, 65(1), 9-26.
Johnson, J. (1998). The Milwaukee Harambee Neighborhood and Family Initiative: Outcomes-based evaluation report covering the period July 1, 1996 - June 30, 1998. Planning Council for Health and Human Services.
Kaye, G. (2001). Grassroots involvement. American Journal of Community Psychology, 29(2), 269-275.
Kingsley, T. G., McNeely, J. B., & Gibson, J. O. (n.d.). Community building coming of age (Monograph). Washington, DC: The Urban Institute.
Kretzmann, J. P., & McKnight, J. L. (1993). Building communities from the inside out: A path toward finding and mobilizing a community's assets. Chicago: ACTA Publications.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Newbury Park, CA: Sage Publications.
Kubisch, A. C., Auspos, P., Brown, P., Chaskin, R., Fulbright-Anderson, K., & Hamilton, R. (Eds.). (2002). Voices from the field II: Reflections on comprehensive community change. Washington, DC: The Aspen Institute.

Kubisch, A. C., Fulbright-Anderson, K., & Connell, J. P. (1998). Evaluating community initiatives: A progress report. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 1-13). Washington, DC: The Aspen Institute.
Kubisch, A. C., Weiss, C. H., Schorr, L. B., & Connell, J. P. (1995). Introduction. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 1-21). Washington, DC: The Aspen Institute.
Lincoln, Y. S. (1991). The arts and sciences of program evaluation. Evaluation Practice, 12(1), 1-7.
Lincoln, Y. S. (1994). Tracks toward a postmodern politics of evaluation. Evaluation Practice, 15(3), 299-309.
Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions, and emerging confluences. In Y. S. Lincoln (Ed.), Handbook of qualitative research (2nd ed., pp. 163-188). Thousand Oaks, CA: Sage Publications.
Lincoln, Y. S., & Guba, E. G. (2004). The roots of fourth generation evaluation: Theoretical and methodological origins. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 225-241). Thousand Oaks, CA: Sage Publications.
MacNeil, C. (2000). Surfacing the realpolitik: Democratic evaluation in an antidemocratic climate. In L. DeStefano (Ed.), Evaluation as a democratic process: Promoting inclusion, dialogue, and deliberation (pp. 51-62). San Francisco: Jossey-Bass.
Madison, A. M. (2000). Language in defining social problems and in evaluating social programs. In R. K. Hopson (Ed.), How and why language matters in evaluation (Vol. 86, pp. 17-28). San Francisco: Jossey-Bass.
Mark, M. M., Henry, G., & Julnes, G. (1999). Toward an integrative framework for evaluation practice. American Journal of Evaluation, 20(2), 177-198.
Marris, P., & Rein, M. (1967). Dilemmas of social reform: Poverty and community action in the United States. New York: Atherton Press.

Marshall, C., & Rossman, G. B. (1999). Designing qualitative research (3rd ed.). Thousand Oaks, CA: Sage Publications.
Mathison, S. (2000). Deliberation, evaluation and democracy. In L. DeStefano (Ed.), Evaluation as a democratic process: Promoting inclusion, dialogue, and deliberation (pp. 85-89). San Francisco: Jossey-Bass.
Mattingly, D. J., Prislin, R., McKenzie, T., Rodriguez, J., & Kayzar, B. (2002). Evaluating evaluation: The case of parent involvement programs. Review of Educational Research, 72(4), 549-576.
Maxwell, J. A. (1992). Understanding validity in qualitative research. Harvard Educational Review, 62(3), 279-297.
Maxwell, J. A. (1996). Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage Publications.
Maxwell, J. A. (1998). Designing a qualitative study. In D. J. Rog (Ed.), Handbook of applied social research methods (pp. 69-100). Thousand Oaks, CA: Sage Publications.
Medoff, P., & Sklar, H. (1994). Streets of hope: The fall and rise of an urban neighborhood. Boston: South End Press.
Merriam, S. B. (2001). Qualitative research and case study applications in education. San Francisco: Jossey-Bass.
Mertens, D. (1999). Inclusive evaluation: Implications of transformative theory for evaluation. American Journal of Evaluation, 20(1), 1-14.
Mertens, D. (2002). The evaluator's role in the transformative context. In T. A. Schwandt (Ed.), Exploring evaluator role and identity (pp. 103-117). Greenwich, CT: Information Age Publishing.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publications.

Milligan, S., Coulton, C., York, P., & Register, R. (1998). Implementing a theory of change evaluation in the Cleveland community-building initiative: A case study. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 45-85). Washington, DC: The Aspen Institute.
Morris, L. L., Fitz-Gibbon, C. T., & Freeman, M. E. (1987). How to communicate evaluation findings. Newbury Park, CA: Sage Publications.
Murphy-Berman, V., Schnoes, C. J., & Chambers, J. M. (2000). An early stage evaluation model for assessing the effectiveness of comprehensive community initiatives: Three case studies in Nebraska. Evaluation and Program Planning, 23, 157-163.
Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage Publications.
Nichols, L. (2002). Participatory program planning: Including program participants and evaluators. Evaluation and Program Planning, 25, 1-14.
O'Connor, A. (1995). Evaluating comprehensive community initiatives: A view from history. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 23-63). Washington, DC: The Aspen Institute.
Owen, J. M., & Lambert, F. C. (1998). Evaluation and the information needs of organizational leaders. American Journal of Evaluation, 19(3), 355-365.
Patton, M. Q. (1990). The challenge of being a profession. Evaluation Practice, 11(1), 45-51.
Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice, 15(3), 311-319.
Patton, M. Q. (1997a). Toward distinguishing empowerment evaluation and placing it in a larger context. Evaluation Practice, 18(2), 147-163.
Patton, M. Q. (1997b). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage Publications.

Patton, M. Q. (2004). The roots of utilization-focused evaluation. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 276-292). Thousand Oaks, CA: Sage Publications.
Peirce, N. R., & Steinbach, C. F. (1987). Corrective capitalism: The rise of America's community development corporations. New York: The Ford Foundation.
Perspective on partnerships. (1996). New York: Ford Foundation.
Petersen, D. M. (2002). The potential of social capital measures in the evaluation of comprehensive community-based health initiatives. American Journal of Evaluation, 23(1), 55-64.
Philliber, S. (1998). The virtue of specificity in theory of change evaluation. In J. P. Connell (Ed.), New approaches to evaluating community initiatives: Theory, measurement, and analysis (Vol. 2, pp. 87-100). Washington, DC: The Aspen Institute.
Poplin, D. E. (1972). Communities: A survey of theories and methods. New York: Macmillan Company.
Potter, W. J. (1996). An analysis of thinking and research about qualitative methods. Mahwah, NJ: Lawrence Erlbaum Associates.
Preskill, H. (2004). The transformational power of evaluation. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 343-355). Thousand Oaks, CA: Sage Publications.
Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. In H. Preskill (Ed.), New Directions for Evaluation (Vol. 88). San Francisco: Jossey-Bass.
Preskill, H., Zuckerman, B., & Matthews, B. (2003). An exploratory study of process use: Findings and implications for future research. American Journal of Evaluation, 24(4), 423-442.
Rallis, S. F., & Rossman, G. B. (2000). Dialogue for learning: Evaluator as critical friend. In R. K. Hopson (Ed.), How and why language matters in evaluation (Vol. 86, pp. 81-92). San Francisco: Jossey-Bass.

Richardson, L. (2000). Writing: A method of inquiry. In Y. S. Lincoln (Ed.), Handbook of qualitative research (2nd ed., pp. 923-948). Thousand Oaks, CA: Sage Publications.
Rogers, P. J., Petrosino, A., Huebner, T., & Hacsi, T. (2000). Program theory evaluation: Practice, promise, and problems. In T. A. Huebner (Ed.), Program theory in evaluation: Challenges and opportunities (pp. 5-13). San Francisco: Jossey-Bass.
Rossi, P. H. (1999). Evaluating community development programs: Problems and prospects. In W. T. Dickens (Ed.), Urban problems and community development. Washington, DC: Brookings Institution Press.
Rossman, G. B., & Rallis, S. F. (2000). Critical inquiry and use as action. In V. J. Caracelli (Ed.), New Directions for Evaluation (Vol. 88). San Francisco: Jossey-Bass.
Roundtable on Comprehensive Community Initiatives. (1997). Voices from the field: Learning from the work of comprehensive community initiatives. Washington, DC: The Aspen Institute.
Roundtable on Comprehensive Community Initiatives. (2002). Retrieved September 19, 2002, from http://www.commbuild.org/html_pages/ccilist.htm
Ryan, G. W., & Bernard, H. R. (2000). Data management and analysis methods. In Y. S. Lincoln (Ed.), Handbook of qualitative research (2nd ed., pp. 769-802). Thousand Oaks, CA: Sage Publications.
Ryan, K. E., & Schwandt, T. A. (Eds.). (2002). Exploring evaluator role and identity. Greenwich, CT: Information Age Publishing.
Sawicki, D. S., & Flynn, P. (1996). Neighborhood indicators: A review of the literature and an assessment of conceptual and methodological issues. Journal of the American Planning Association, 62(2), 165-183.
Scherer, J. (1972). Contemporary community: Sociological illusion or reality? London: Tavistock Publications.

Schlager, E. (1999). A comparison of frameworks, theories, and models of policy processes. In P. A. Sabatier (Ed.), Theories of the policy process. Boulder, CO: Westview Press.
Schnoes, C. J., Murphy-Berman, V., & Chambers, J. M. (2000). Empowerment evaluation applied: Experiences, analysis, and recommendations from a case study. American Journal of Evaluation, 21(1), 53-64.
Schorr, L. B., Farrow, F., Hornbeck, D., & Watson, S. (1994). The case for shifting to results-based accountability. Washington, DC: Center for the Study of Social Policy.
Schram, T. H. (2003). Conceptualizing qualitative inquiry: Mindwork for fieldwork in education and the social sciences. Upper Saddle River, NJ: Merrill Prentice Hall.
Schulz, A. J., Israel, B. A., & Lantz, P. (2003). Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Evaluation and Program Planning, 26, 249-262.
Schwandt, T. A. (1992). Better living through evaluation? Images of progress shaping evaluation practice. Evaluation Practice, 13(2), 135-144.
Schwandt, T. A. (2000). Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism. In Y. S. Lincoln (Ed.), Handbook of qualitative research (2nd ed., pp. 189-213). Thousand Oaks, CA: Sage Publications.
Schwandt, T. A. (2002). Evaluation practice reconsidered. New York: Peter Lang.
Scriven, M. (1994). Evaluation as a discipline. Studies in Educational Evaluation, 20, 147-166.
Scriven, M. (1997). Empowerment evaluation examined. Evaluation Practice, 18(2), 165-175.
Sechrest, L. (1994). Program evaluation: Oh what it seemed to be. Evaluation Practice, 15(3), 359-365.

Segerholm, C. (2003). Researching evaluation in national (state) politics and administration: A critical approach. American Journal of Evaluation, 24(3), 353-372.
Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research, and practice since 1986. Evaluation Practice, 18(3), 195-208.
Smith, M. F. (1994). Evaluation: Review of the past, preview of the future. Evaluation Practice, 15(3), 215-177.
Springer, J. F., & Phillips, J. L. (1994). Policy learning and evaluation design: Lessons from the community partnership demonstration program. Journal of Community Psychology, 117-139.
Spruill, N., Kenney, C., & Kaplan, L. (2001). Community development and systems thinking: Theory and practice. National Civic Review, 90(1), 105-117.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage Publications.
Stake, R. E. (2000). Case studies. In Y. S. Lincoln (Ed.), Handbook of qualitative research (2nd ed., pp. 435-454). Thousand Oaks, CA: Sage Publications.
Stoecker, R. (1997). The CDC model of urban redevelopment: A critique and an alternative. Journal of Urban Affairs, 19(1), 1-22.
Stoecker, R. (2003). Understanding the development-organizing dialectic. Journal of Urban Affairs, 25(4), 493-512.
Stone, R. (1994). Comprehensive community-building strategies: Issues and opportunities for learning. Chicago: Rockefeller Foundation.
Stone, R. (Ed.). (1996). Core issues in comprehensive community-building initiatives. Chicago: Chapin Hall Center for Children.

Stone, R., & Butler, B. (2000). Core issues in comprehensive community-building initiatives: Exploring power and race. Chicago: Chapin Hall Center for Children.
Stone, R., Dwyer, L., & Sethi, G. (1996). Exploring visions of "built" community. In R. Stone (Ed.), Core issues in comprehensive community-building initiatives (pp. 16-22). Chicago: Chapin Hall Center for Children.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage Publications.
Stronach, I., Halsall, R., & Hustler, D. (2002). Future imperfect: Evaluation in dystopian times. In T. A. Schwandt (Ed.), Exploring evaluator role and identity (pp. 167-192). Greenwich, CT: Information Age Publishing.
Stufflebeam, D. (1994). Empowerment evaluation, objectivist evaluation, and evaluation standards: Where the future of evaluation should not go and where it needs to go. Evaluation Practice, 15(3), 321-338.
Tang, H., Cowling, D. W., Koumjian, K., Roeseler, A., Lloyd, J., & Rogers, T. (2002). Building local program evaluation capacity toward a comprehensive evaluation. In M. D. Whitsett (Ed.), New Directions for Evaluation (pp. 39-56). San Francisco: Jossey-Bass.
Temkin, K., & Rohe, W. (1996). Neighborhood change and urban policy. Journal of Planning Education and Research, 15, 159-170.
Temkin, K., & Rohe, W. (1998). Social capital and neighborhood stability: An empirical investigation. Housing Policy Debate, 9(1), 61-88.
Tharp, R., & Gallimore, R. (1982). Inquiry process in program development. Journal of Community Psychology, 10, 103-118.
Thomas, F. A. (1991, April 23). Fulfilling America's promise. Paper presented at the Chicago Council on Urban Affairs, New York.
Torres, R. T. (1996). Evaluation strategies for communicating and reporting. Thousand Oaks, CA: Sage Publications.

Torres, R. T., Preskill, H. S., & Piontek, M. E. (1996). Evaluation strategies for communicating and reporting. Thousand Oaks, CA: Sage Publications.
Treno, A. J., & Holder, H. D. (1997). Evaluating efforts to reduce community-level problems through structural rather than individual change: A multicomponent community trial to prevent alcohol-involved problems. Evaluation Review, 21(2), 133-139.
Twelvetrees, A. C. (1996). Organizing for neighborhood development: A comparative study of community based development organizations (2nd ed.). Brookfield, VT: Avebury.
Warren, R. L. (1973). Truth, love and social change. Chicago: Rand McNally & Company.
Warren, R. L. (1978). The community in America (3rd ed.). New York: University Press of America.
Weiss, C. H. (1972). Evaluation research. Englewood Cliffs, NJ: Prentice-Hall.
Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In C. H. Weiss (Ed.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (Vol. 1, pp. 65-92). Washington, DC: The Aspen Institute.
Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501-524.
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-33.
Weiss, C. H. (2004). Rooting for evaluation: A cliff notes version of my work. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 153-268). Thousand Oaks, CA: Sage Publications.
Weiss, H., Coffman, J., & Bohan-Baker, M. (2002). Evaluation's role in supporting initiative sustainability. Cambridge, MA: Harvard University Graduate School.

Weiss, J. A. (2000). From research to social improvement: Understanding theories of intervention. Nonprofit and Voluntary Sector Quarterly, 29(1), 81-110.
White, J. A., & Wehlage, G. (1995). Community collaboration: If it is such a good idea, why is it so hard to do? Educational Evaluation and Policy Analysis, 17(1), 23-38.
Wilder, M. G., & Rubin, B. (1996). Rhetoric versus reality. Journal of the American Planning Association, 62(4), 473-491.
Wolff, T. (2001a). The future of community coalition building. American Journal of Community Psychology, 29(2), 263-268.
Wolff, T. (2001b). A practitioner's guide to successful coalitions. American Journal of Community Psychology, 29(2), 173-191.
Yin, R. K. (1994). Case study research. Thousand Oaks, CA: Sage Publications.
Yin, R. K. (1998). The abridged version of case study research. In D. J. Rog (Ed.), Handbook of applied social research methods (pp. 229-259). Thousand Oaks, CA: Sage Publications.
