Good program evaluation shows how well you are using your resources. Are the activities working? If so, wonderful! Now you know for sure. Do they need to be modified? Have you been duplicating efforts without realizing it? Are your activities ineffective? This is also valuable information to have, so that you can use your financial resources more efficiently and identify where to make improvements. Program evaluation not only reveals how effectively you are using your resources, it can also be a way to showcase your successful activities or components to stakeholders.
You already track information for funders and other stakeholders. Consider what you can do with the information you already collect during normal program operations. Program evaluation does not necessarily require a significant time commitment. Although effort will be needed, conducting a thorough and useful evaluation with the time you are able to dedicate is time well spent.
As a result of doing an evaluation, you might also identify time savings that can be applied in other priority areas.
Sources of evidence in an evaluation may be people, documents, or observations.
More than one source may be used to gather evidence for each indicator. In fact, selecting multiple sources provides an opportunity to include different perspectives about the program and enhances the evaluation's credibility. For instance, an inside perspective may be reflected by internal documents and comments from staff or program managers, whereas clients and those who do not support the program may provide different, but equally relevant, perspectives.
Mixing these and other perspectives provides a more comprehensive view of the program or intervention. The criteria used to select sources should be clearly stated so that users and other stakeholders can interpret the evidence accurately and assess whether it may be biased.
In addition, some sources provide information in narrative form (for example, a person's experience when taking part in the program), while others provide numerical information (for example, how many people were involved in the program).
The integration of qualitative and quantitative information can yield evidence that is more complete and more useful, thus meeting the needs and expectations of a wider range of stakeholders. Quality refers to the appropriateness and integrity of information gathered in an evaluation. High-quality data are reliable and informative, and they are easier to collect if the indicators have been well defined. Other factors that affect quality may include instrument design, data collection procedures, training of those involved in data collection, source selection, coding, data management, and routine error checking.
Obtaining quality data will entail tradeoffs. Because all data have limitations, the intent of a practical evaluation is to strive for a level of quality that meets the stakeholders' threshold for credibility. Quantity refers to the amount of evidence gathered in an evaluation. It is necessary to estimate in advance the amount of information that will be required and to establish criteria to decide when to stop collecting data - to know when enough is enough.
Quantity affects the level of confidence or precision users can have - how sure we are that what we've learned is true. It also partly determines whether the evaluation will be able to detect effects. All evidence collected should have a clear, anticipated use.
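To make the link between quantity and precision concrete, here is a minimal sketch using the standard formula for the sample size needed to estimate a proportion. It is purely illustrative and is not part of the framework; the function name, confidence level, and margins of error are hypothetical choices, and a real evaluation would also account for design effects and non-response.

```python
import math

def required_sample_size(margin_of_error: float,
                         confidence_z: float = 1.96,       # z-score for ~95% confidence
                         expected_proportion: float = 0.5  # most conservative assumption
                         ) -> int:
    """Rough sample size needed to estimate a proportion within a margin of error."""
    p = expected_proportion
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Halving the margin of error roughly quadruples the amount of evidence needed.
print(required_sample_size(0.10))  # about 97 responses for +/- 10 percentage points
print(required_sample_size(0.05))  # about 385 responses for +/- 5 percentage points
```

A rough calculation like this helps stakeholders decide in advance how much evidence is enough for the level of certainty they need.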
By logistics, we mean the methods, timing, and physical infrastructure for gathering and handling evidence. People and organizations also have cultural preferences that dictate acceptable ways of asking questions and collecting information, including who would be perceived as an appropriate person to ask the questions. For example, some participants may be unwilling to discuss their behavior with a stranger, whereas others are more at ease with someone they don't know. Therefore, the techniques for gathering evidence in an evaluation must be in keeping with the cultural norms of the community.
Data collection procedures should also ensure that confidentiality is protected. The process of justifying conclusions recognizes that evidence in an evaluation does not necessarily speak for itself. Evidence must be carefully considered from a number of different stakeholders' perspectives to reach conclusions that are well-substantiated and justified.
Conclusions become justified when they are linked to the evidence gathered and judged against agreed-upon values set by the stakeholders. Stakeholders must agree that conclusions are justified in order to use the evaluation results with confidence. Standards reflect the values held by stakeholders about the program. They provide the basis to make program judgments. The use of explicit standards for judgment is fundamental to sound evaluation.
In practice, when stakeholders articulate and negotiate their values, these become the standards to judge whether a given program's performance will, for instance, be considered "successful," "adequate," or "unsuccessful." Analysis and synthesis are methods to discover and summarize an evaluation's findings. They are designed to detect patterns in evidence, either by isolating important findings (analysis) or by combining different sources of information to reach a larger understanding (synthesis).
Mixed method evaluations require the separate analysis of each evidence element, as well as a synthesis of all sources to examine patterns that emerge. Deciphering facts from a given body of evidence involves deciding how to organize, classify, compare, and display information. These decisions are guided by the questions being asked, the types of data available, and especially by input from stakeholders and primary intended users.
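As a minimal, hypothetical sketch of the difference between analysis and synthesis, the example below first examines a quantitative source and a coded qualitative source separately, then combines the separate findings into one summary. The indicator, theme codes, and numbers are invented for illustration only.

```python
from collections import Counter
from statistics import mean

# Analysis: isolate findings within each source separately.
monthly_attendance = [42, 55, 61, 58, 70, 75]             # quantitative evidence (counts)
interview_themes = ["transport barrier", "liked staff",   # qualitative evidence (coded)
                    "transport barrier", "scheduling conflict",
                    "liked staff", "transport barrier"]

attendance_change = monthly_attendance[-1] - monthly_attendance[0]
theme_counts = Counter(interview_themes)

# Synthesis: combine the separate findings so stakeholders can interpret them together.
summary = {
    "average_monthly_attendance": round(mean(monthly_attendance), 1),
    "attendance_change_over_period": attendance_change,
    "most_reported_theme": theme_counts.most_common(1)[0],
}
print(summary)
```

How the pieces are organized, classified, and displayed should still be guided by the evaluation questions and by input from stakeholders, as described above.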
Interpretation is the effort to figure out what the findings mean. Uncovering facts about a program's performance is not, by itself, enough to draw conclusions. The facts must be interpreted to understand their practical significance.
In short, interpretations draw on information and perspectives that stakeholders bring to the evaluation. They can be strengthened through active participation or interaction with the data and preliminary explanations of what happened.
Judgments are statements about the merit, worth, or significance of the program. They are formed by comparing the findings and their interpretations against one or more selected standards.
Because multiple standards can be applied to a given program, stakeholders may reach different or even conflicting judgments.
For instance, a program that increases the services available may be judged positively by program managers, based on standards of improved performance over time. Community members, however, may feel that despite improvements, a minimum threshold of access to services has still not been reached. Their judgment, based on standards of social equity, would therefore be negative. Conflicting claims about a program's quality, value, or importance often indicate that stakeholders are using different standards or values in making judgments.
This type of disagreement can be a catalyst to clarify values and to negotiate the appropriate basis or bases on which the program should be judged.
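The point about conflicting judgments can be illustrated with a small sketch in which one hypothetical finding is compared against two different stakeholder standards; the stakeholders, thresholds, and values are invented for illustration.

```python
def judge(value: float, threshold: float) -> str:
    """Compare a finding against a standard and return a judgment."""
    return "successful" if value >= threshold else "unsuccessful"

services_reached_pct = 65.0  # finding: 65% of eligible clients were reached this year

# Different stakeholders bring different standards to the same finding.
standards = {
    "program managers (improvement standard: reach at least 60%)": 60.0,
    "community members (equity standard: reach at least 80%)": 80.0,
}

for stakeholder, threshold in standards.items():
    print(f"{stakeholder}: {judge(services_reached_pct, threshold)}")
# The same evidence is judged "successful" by one group and "unsuccessful" by the
# other, signaling a need to negotiate which standards should apply.
```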
Recommendations are actions to consider as a result of the evaluation. Forming recommendations requires information beyond just what is necessary to form judgments. For example, knowing that a program is able to increase the services available to battered women doesn't necessarily translate into a recommendation to continue the effort, particularly when there are competing priorities or other effective alternatives. Thus, recommendations about what to do with a given intervention go beyond judgments about a specific program's effectiveness.
If recommendations aren't supported by enough evidence, or if they aren't in keeping with stakeholders' values, they can undermine an evaluation's credibility. By contrast, an evaluation can be strengthened by recommendations that anticipate and react to what users will want to know. Justifying conclusions in an evaluation is a process that involves different possible steps. For instance, conclusions could be strengthened by searching for alternative explanations to the ones you have chosen, and then showing why they are unsupported by the evidence.
When there are different but equally well-supported conclusions, each could be presented with a summary of its strengths and weaknesses. Techniques to analyze, synthesize, and interpret findings might be agreed upon before data collection begins. It is naive to assume that lessons learned in an evaluation will necessarily be used in decision making and subsequent action. Deliberate effort on the part of evaluators is needed to ensure that the evaluation findings will be used appropriately.
Preparing for their use involves strategic thinking and continued vigilance in looking for opportunities to communicate and influence. Both of these should begin in the earliest stages of the process and continue throughout the evaluation. Design refers to how the evaluation's questions, methods, and overall processes are constructed.
As discussed in the third step of this framework (focusing the evaluation design), the evaluation should be organized from the start to achieve specific agreed-upon uses. Having a clear purpose that is focused on the use of what is learned helps those who will carry out the evaluation to know who will do what with the findings. Furthermore, the process of creating a clear design will highlight ways that stakeholders, through their many contributions, can improve the evaluation and facilitate the use of the results.
Preparation refers to the steps taken to get ready for the future uses of the evaluation findings. The ability to translate new knowledge into appropriate action is a skill that can be strengthened through practice.
In fact, building this skill can itself be a useful benefit of the evaluation. It is possible to prepare stakeholders for future use of the results by discussing how potential findings might affect decision making. For example, primary intended users and other stakeholders could be given a set of hypothetical results and asked what decisions or actions they would make on the basis of this new knowledge.
If they indicate that the evidence presented is incomplete or irrelevant and that no action would be taken, then this is an early warning sign that the planned evaluation should be modified. Preparing for use also gives stakeholders more time to explore both positive and negative implications of potential results and to identify different options for program improvement.
Feedback is the communication that occurs among everyone involved in the evaluation. Giving and receiving feedback creates an atmosphere of trust among stakeholders; it keeps an evaluation on track by keeping everyone informed about how the evaluation is proceeding.
Primary intended users and other stakeholders have a right to comment on evaluation decisions. From a standpoint of ensuring use, stakeholder feedback is a necessary part of every step in the evaluation. Obtaining valuable feedback can be encouraged by holding discussions during each step of the evaluation and routinely sharing interim findings, provisional interpretations, and draft reports.
Follow-up refers to the support that many users need during the evaluation and after they receive evaluation findings. Because of the amount of effort required, reaching justified conclusions in an evaluation can seem like an end in itself. It is not. Active follow-up may be necessary to remind users of the intended uses of what has been learned. Follow-up may also be required to stop lessons learned from becoming lost or ignored in the process of making complex or political decisions. To guard against such oversight, it may be helpful to have someone involved in the evaluation serve as an advocate for the evaluation's findings during the decision-making phase.
Facilitating the use of evaluation findings also carries with it the responsibility to prevent misuse. Evaluation results are always bounded by the context in which the evaluation was conducted.
Some stakeholders, however, may be tempted to take results out of context or to use them for purposes other than those for which they were developed. For instance, over-generalizing the results from a single case study to make decisions that affect all sites in a national program is a misuse of a case study evaluation.
Similarly, program opponents may misuse results by overemphasizing negative findings without giving proper credit for what has worked. Active follow-up can help to prevent these and other forms of misuse by ensuring that evidence is only applied to the questions that were the central focus of the evaluation.
Dissemination is the process of communicating the procedures or the lessons learned from an evaluation to relevant audiences in a timely, unbiased, and consistent fashion. Like other elements of the evaluation, the reporting strategy should be discussed in advance with intended users and other stakeholders. Planning effective communications also requires considering the timing, style, tone, message source, vehicle, and format of information products.
Regardless of how communications are constructed, the goal for dissemination is to achieve full disclosure and impartial reporting. Along with the uses for evaluation findings, there are also uses that flow from the very process of evaluating. These "process uses" should be encouraged. The people who take part in an evaluation can experience profound changes in beliefs and behavior.
For instance, an evaluation challenges staff members to act differently in their work and to question the assumptions that connect program activities with intended effects. Evaluation also prompts staff to clarify their understanding of the goals of the program. This greater clarity, in turn, helps staff members to better function as a team focused on a common end. In short, immersion in the logic, reasoning, and values of evaluation can have very positive effects, such as basing decisions on systematic judgments instead of on unfounded assumptions.
There are standards to assess whether all of the parts of an evaluation are well-designed and working to their greatest potential. These standards, designed to assess evaluations of educational programs, are also relevant for programs and interventions related to community health and development.
The program evaluation standards make it practical to conduct sound and fair evaluations. They offer well-supported principles to follow when faced with having to make tradeoffs or compromises. Attending to the standards can guard against an imbalanced evaluation, such as one that is accurate and feasible, but isn't very useful or sensitive to the context.
Another example of an imbalanced evaluation is one that would be genuinely useful, but is impossible to carry out. The following standards can be applied while developing an evaluation design and throughout the course of its implementation.
Remember, the standards are written as guiding principles, not as rigid rules to be followed in all situations. The feasibility standards ensure that the evaluation makes sense - that the steps that are planned are both viable and pragmatic.
The propriety standards ensure that the evaluation is an ethical one, conducted with regard for the rights and interests of those involved.
There are eight propriety standards in all. There is ever-increasing agreement on the worth of evaluation; in fact, evaluation is often required by funders and other constituents. So, community health and development professionals can no longer question whether or not to evaluate their programs.
Instead, the appropriate questions are about how best to evaluate and how to use what is learned. The framework for program evaluation helps answer these questions by guiding users to select evaluation strategies that are useful, feasible, proper, and accurate. Using this framework requires skill in program evaluation: in most cases there are multiple stakeholders to consider, the political context may be divisive, steps don't always follow a logical order, and limited resources may make it difficult to take a preferred course of action.
An evaluator's challenge is to devise an optimal strategy, given the conditions she is working under. An optimal strategy is one that accomplishes each step in the framework in a way that takes into account the program context and is able to meet or exceed the relevant standards.
This framework also makes it possible to respond to common concerns about program evaluation. For instance, many evaluations are not undertaken because they are seen as being too expensive. The cost of an evaluation, however, is relative; it depends upon the question being asked and the level of certainty desired for the answer. A simple, low-cost evaluation can deliver information valuable for understanding and improvement.
Rather than discounting evaluations as a time-consuming sideline, the framework encourages evaluations that are timed strategically to provide necessary feedback. This makes it possible to link evaluation closely with everyday practice.
Another concern centers on the perceived technical demands of designing and conducting an evaluation. However, the practical approach endorsed by this framework focuses on questions that can improve the program. Finally, the prospect of evaluation troubles many staff members because they perceive evaluation methods as punishing ("They just want to show what we're doing wrong") or as dismissing their own knowledge of the program ("We're the ones who know what's going on"). Used well, evaluation is a powerful strategy for distinguishing programs and interventions that make a difference from those that don't.
It is a driving force for developing and adapting sound strategies, improving existing programs, and demonstrating the results of investments in time and other resources. It also helps determine if what is being done is worth the cost. This recommended framework for program evaluation is both a synthesis of existing best practices and a set of standards for further improvement. It supports a practical approach to evaluation based on steps and standards that can be applied in almost any setting.
Because the framework is purposefully general, it provides a stable guide to design and conduct a wide range of evaluation efforts in a variety of specific program areas. The framework can be used as a template to create useful evaluation plans that contribute to understanding and improvement. The Magenta Book - Guidance for Evaluation provides additional information on the requirements for good evaluation, along with some straightforward steps to make a good evaluation of an intervention more feasible.
Are You Ready to Evaluate your Coalition? The American Evaluation Association Guiding Principles for Evaluators helps guide evaluators in their professional practice. CDC Evaluation Resources provides a list of resources for evaluation, as well as links to professional associations and journals. Chapter Community Interventions in the "Introduction to Community Psychology" explains professionally-led versus grassroots interventions, what it means for a community intervention to be effective, why a community needs to be ready for an intervention, and the steps to implementing community interventions.
The Comprehensive Cancer Control Branch Program Evaluation Toolkit is designed to help grantees plan and implement evaluations of their NCCCP-funded programs; it provides general guidance on evaluation principles and techniques, as well as practical templates and tools. In addition to information on designing an evaluation plan, this book also provides worksheets as a step-by-step guide. Evaluating Your Community-Based Program is a handbook designed by the American Academy of Pediatrics covering a variety of topics related to evaluation.
The U.S. Government Accountability Office offers copious information regarding program evaluations. Practical Evaluation for Public Managers is a guide to evaluation written by the U.S. Department of Health and Human Services.
Penn State Program Evaluation offers information on collecting different forms of data and how to measure different community markers. The Program Manager's Guide to Evaluation is a handbook provided by the Administration for Children and Families with detailed answers to nine big questions regarding program evaluation.
Program Planning and Evaluation is a website created by the University of Arizona. It provides links to information on several topics including methods, funding, types of evaluation, and reporting impacts.
This guide includes practical information on quantitative and qualitative methodologies in evaluations. The W.K. Kellogg Foundation Evaluation Handbook provides a framework for thinking about evaluation as a relevant and useful program tool. It was originally written for program directors with direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation-funded projects.
A program sets performance measures as a series of goals to meet over time. Program evaluations assess whether the program is meeting those performance measures, but they also look at why it is or is not meeting them. For example, imagine you bought a new car that is supposed to get 30 miles per gallon, but you notice that you are only getting 20 miles per gallon. That's a performance measurement: you looked at whether your car was performing where it should be.
So what do you do next? You would take it to a mechanic. The mechanic's analysis and recommendations are the program evaluation in this analogy, because the mechanic diagnoses why the car is not performing as well as it should.
You need performance measures to know whether your program or car is performing where it should be, and you do a program evaluation or go to the mechanic to find out the reason why it is not meeting those expectations.
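To restate the distinction in program terms rather than car terms, here is a minimal sketch; the measures, targets, and values are hypothetical. Performance measurement reports whether each target is being met, and program evaluation then investigates why.

```python
# Hypothetical performance measures: target versus actual values.
performance_measures = {
    "clients_served_per_month": {"target": 120, "actual": 80},
    "follow_up_rate_pct":       {"target": 75,  "actual": 78},
}

for name, measure in performance_measures.items():
    status = "meeting target" if measure["actual"] >= measure["target"] else "below target"
    print(f"{name}: actual {measure['actual']} vs target {measure['target']} -> {status}")

# Performance measurement stops here ("we are 40 clients short each month").
# A program evaluation, like the mechanic, would go on to diagnose why -- for
# example, by examining referrals, staffing, or outreach activities.
```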