An Organizational Guide to Building Health Services Research Capacity
Step 5: Evaluating the Infrastructure Support Initiative
Purpose of Evaluation
Evaluation lets you see where your program stands in relation to the goals you've set. It can help you see if you are where you expected to be and doing the things you expected to be doing at a given point in your program. It is important, therefore, to plan for the evaluation at the outset of your project or initiative. Evaluation results can be used to:
- Support both short-term and long-term planning.
- Improve strategies and activities.
- Justify allocation of human, physical, and financial resources.
- Provide visibility for your program or research center to the community, to funding agencies (in proposals and grants), and to your organization's leadership.
- Guide the development of best practices in your field.
Planning for Evaluation
It is impossible to evaluate every aspect of your program or initiative. Consider what is most important to get out of the evaluation and how you will use the results. You might consider going back to your goals or your funded proposal to help guide your decision.
Develop a logic model to help organize your evaluation (Figure 7). A logic model provides a visual representation of the relationships between the resources your program invests, the activities that take place, and the outcomes that result. It can be thought of as a roadmap for your program and for your evaluation.
Figure 7. Basic logic model framework
Types of Evaluation
A process-based or formative evaluation examines how a program works, enabling the detection and correction of problems and inefficiencies. This type of evaluation looks at questions such as:
- How did administrators, faculty, staff, and students become aware of the resources available through the research center?
- What made it easier to implement capacity-building activities?
- What has made it harder to implement capacity-building activities?
- What improvements or changes do administrators, faculty, staff, students, and/or advisory board members recommend?
An outcomes-based or summative evaluation helps you answer questions about the effects your program has had. Outcomes can be thought of as the benefits from the activities your program provides. This information tells how successful your project has been with respect to the attainment of desired outcomes and goals. First, you will want to identify the major outcomes for your program. There are several ways you can do this. For example:
- It may be helpful to think about what activities you are able to implement or are planning to do and then define your outcomes from that perspective, asking yourself “What do I hope to achieve from this activity?”
- Look at your organization's or department's mission, vision statement, or strategic plan to identify your major outcomes: What is it that your organization aims to do?
- Refer to your funded proposal for the program or initiative and examine the outcomes you proposed there.
It is not always feasible to evaluate every major outcome you have identified. You may need to prioritize and choose a smaller number of outcomes to examine. To help prioritize which outcomes to evaluate, select those that are: specific, observable, measurable, realistic given your program activities, and meaningful to your stakeholders.
Creating an Evaluation Plan
As you did for the Assessment (Step 1), create an evaluation plan. The plan should address the following questions:
What information will be collected?
Develop a list of variables. The evaluation items in the Appendix may help you develop and select variables and questions. Your variables list should include information on potential sources of the data. In deciding how to collect information, balance the quality of information with the cost-effectiveness of data collection. Methods for collecting data might include surveys, archival data collection, Federal or public databases, discussions with staff, and advisory group feedback. Use the variables list to develop data collection tools such as forms, discussion guides, and surveys.
When will data collection occur?
Evaluation can be a resource-intensive activity. Consider how frequently you will conduct a formal evaluation. Perhaps you will decide to collect some data more frequently than others. For example, you may decide to collect publication and presentation data from staff on an annual basis but collect information on policies every 3 years, since policies are more static. You will probably want to collect information early in your initiative and then on an ongoing basis to look at changes. Formative data are valuable and can inform what program modifications may be necessary. Determine your schedule for collecting data based on how you plan to use the information.
Who will do the evaluation?
Consider what resources you have to conduct the evaluation and who will conduct it.
- An independent evaluator frees up staff time, reduces potential conflicts of interest, and brings evaluation expertise to the analysis of program data.
- Internal staff members are familiar with the organization and the research center, but they may not be familiar with evaluation techniques or they may lack the perspective that an independent evaluator brings.
- An advisory group may be able to provide useful program review, feedback, and guidance, particularly if you are unable to conduct a formal evaluation.
Key Points
- Begin planning for the evaluation at the outset of your initiative. Allot the time, costs, and labor needed to conduct an evaluation. To focus the evaluation, develop a logic model, revisit your goals, or review your funding proposal.
- Develop an evaluation plan and variables list to guide your data collection efforts.
- Share the results of the evaluation widely to promote your work and show how the feedback you got was meaningful.
Page originally created October 2011