Over the last two weeks we have discussed evaluating the purposes and mechanics of a citizen engagement process. This week we conclude this brief series by offering a few thoughts on evaluating outcomes – changes in policy, actions, or resource allocation that were influenced by the engagement process. Evaluating the purpose and mechanics of a given engagement process is important for understanding how best to operate a process; evaluating outcomes helps you measure the overall value of the process in relation to the actual delivery of services and the efficiency and efficacy of governing. Evaluating and reporting outcomes will also build trust in the process and assure the public that their input is being used. This in turn can lead to increased public involvement in future processes.
As with the other forms of evaluation, it is best to keep future evaluation in mind from the start. If you can identify or develop data sets that will be relevant before, during, and after a given engagement process, you will have different data points to compare at the end of the process and will be able to more effectively demonstrate where changes have occurred and why.
Some questions you may consider asking early in a process, when developing an evaluation plan for outcomes, are: What data is currently collected within the community that might serve as a baseline? What additional data might be useful, and can a baseline be obtained through an initial survey? What types of issues have been arising in our community, both with regard to specific policies and with the way those policies are made? What existing processes or efforts could be informed by the information obtained during the engagement process, and how will we connect with those? If policy or action recommendations are to be made during the process, what is the likely time period for implementation, and thus for evaluation?

During the process you might gather, through surveys and evaluations, information that answers the following questions: What subjects are citizens most focused on? What interests, concerns, or values are being expressed? What information is being relied on? What information is missing or misunderstood? What kinds of changes are being proposed or recommendations made? What time frames are being discussed for implementation? These kinds of questions will help you fine-tune your evaluation plan.

After the process you can review the data and ask: What have we learned, and how can it help us make better decisions? What ongoing efforts could be informed by the information obtained or included in an implementation plan? If there are barriers (including lack of resources) to implementing certain changes or recommendations made by citizens, what are they, and how might we address them? How can we track and report progress, and over what time frame should that occur? Asking and answering these types of questions can lead to further dialogue, education, and reduced conflict over decisions made.
When evaluating the outcomes of your engagement process, you can use a wide array of survey tools and local data. If you choose to focus on quantitative data, you may consider looking to an organization like the Baltimore Neighborhood Indicators Alliance for examples of measurement. Depending on the issue, you could also choose to focus on simple, easily conveyed indicators like money saved or spent, or changes in energy use. For other issues, qualitative data may be used. For example, Columbia, Missouri’s vision tracking report, which we helped to develop, simply indicates for each specific, identified goal whether the goal has been completed, progress is being made, further action or approval is needed, or the goal is no longer being pursued.
A common complaint citizens make about engagement efforts is that their recommendations just “sat on the shelf”. Members of your community want to see that their input is used. This is why tracking and reporting how substantive decisions and actions are affected by that input is the type of evaluation the public is most likely to care about. Dissemination and discussion of such evaluations can and should occur over a defined time period, since implementation often takes place over many years.