Category Archives: Measuring Outcomes

Accountability and Time Frames

One factor that has severely eroded our communal problem-solving capacity is news and electoral cycles that are shorter than the long-lived consequences of various actions and policies.  The approach of reporting on political issues as if they were short-term sporting events (who's winning, who's losing? who landed the latest blow? will it slow the opponent down?) further obscures the complexities of the issues at hand.  We rarely look back at choices, both made and not made, and compare how they turned out.  And we rarely ask which questions went unasked that might have led to a different result.

The Indiana Public Utilities Commission, in an order last year addressing a request for emergency rate relief, quoted from "The Black Swan: The Impact of the Highly Improbable" by Nassim Nicholas Taleb (Random House, 2007):  "We concentrate on things we already know and time and time again fail to take into consideration what we don't know."  The Commission rejected claims that the "emergency" triggering the request for rate relief was a "black swan" event, defined as an event that is unpredictable, carries a massive impact, and compels us after the fact to concoct an explanation that makes it appear more predictable than it actually was.

More often we are the victim not of a truly unpredictable event but of unwarranted optimism at the outset, the failure to identify or analyze alternative (and likely) scenarios, and the failure to consider the interactive consequences among related issues.  This failure to plan for the long term in favor of a short-term benefit (such as keeping taxes or rates lower than they might otherwise be) is reflected in our deteriorating highways, bridges, utility infrastructure, and school systems.  Another example is the failure to weigh the long-term costs of imprisonment against short-term appeals to "law and order", which produced a rapid rise in prison populations through the incarceration of low-risk, non-violent offenders and burdened state budgets to the extent that many states are now quietly looking for less costly alternatives.  Yet another example from recent years is the optimistic re-allocation of state funds for ethanol production, which unexpectedly raised animal feed prices, and ultimately food prices, as demand for corn surged.

Lurching from crisis to crisis does not enhance our communal life.  As we design new structures for public dialogue, we can also expand our evaluation processes to focus on what we have learned and how we can ask better questions so as to make better choices.  As we look back at decisions made, we might ask: What data was missing, and why? How can better data be obtained in the future?  What interactive effects were observed among issues, and how can those be better anticipated?  Are there better, less costly ways to make progress toward a desired end?  Would the issues look dramatically different if we were looking at a 5-, 10-, or 20-year time frame?  Which frame best fits the need we are trying to meet?   This approach might also help us move beyond the winner-loser mentality of our current politics, and focus on who is helping us think in sustainable ways, and who is not.

Accountability in Action

A key part of accountability is reporting back to constituents on the effects of various programs and policies in a consistent and understandable form. Several local governments have put in place dashboards or other reports that allow citizens to easily track progress toward certain goals.  Here are some examples:

Albemarle County, Virginia

Minneapolis, Minnesota

Westminster, Colorado

The best progress reports are those that (i) are aligned with clearly identified citizen priorities, (ii) help citizens understand cause and effect, and (iii) allow for ongoing discussion of new options and actions.  See, for example, the indicators of the degree of government influence on the Albemarle County website, the explanations of each measure in the Minneapolis reports, or the trend lines and comparative data in the Westminster reports.
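
The data behind such a dashboard can be quite simple.  Here is a minimal Python sketch of one way the underlying records might be structured; the Indicator type, its field names, and all of the numbers are hypothetical, not drawn from any of the sites above.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One dashboard measure tied to a citizen priority (hypothetical)."""
    name: str      # what is being measured
    priority: str  # the citizen-identified priority it supports
    unit: str      # unit of measure, so readers can interpret the numbers
    target: float  # the level the community is aiming for
    trend: dict    # year -> observed value

def report_line(ind: Indicator) -> str:
    """Render one indicator as a plain-text trend line compared to its target."""
    years = sorted(ind.trend)
    trend = ", ".join(f"{y}: {ind.trend[y]}{ind.unit}" for y in years)
    latest = ind.trend[years[-1]]
    # Assumes "higher is better"; a real dashboard would record direction too.
    status = "at or above target" if latest >= ind.target else "below target"
    return f"{ind.name} ({ind.priority}): {trend} [target {ind.target}{ind.unit}, {status}]"

# Invented example values.
transit = Indicator(
    name="Residents within 1/4 mile of a transit stop",
    priority="Accessible transportation",
    unit="%",
    target=60.0,
    trend={2009: 48.0, 2010: 52.0, 2011: 55.0},
)
print(report_line(transit))
```

Even a structure this small captures the three qualities above: each measure is tied to a stated priority, the trend line supports cause-and-effect discussion, and the target invites conversation about next steps.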

What Are You Evaluating? Part 3

Over the last two weeks we have discussed the evaluation of the purposes and mechanics of a citizen engagement process.  This week we conclude this brief series by offering a few thoughts on evaluating outcomes: changes in policy, actions, or resource allocation that were influenced by the engagement process.  Evaluating the purpose and mechanics of a given engagement process is important for understanding how best to operate it; evaluating outcomes helps you measure the overall value of the process in relation to the actual delivery of services and the efficiency and efficacy of governing.  Evaluating and reporting outcomes will also build trust in the process and assure the public that their input is being used.  This in turn can lead to increased public involvement in future processes.

As with the other forms of evaluation, it is best to keep future evaluation in mind from the start.  If you can identify or develop data sets that will be relevant before, during, and after a given engagement process, you will have different data points to compare at the end and will be able to demonstrate more effectively where changes have occurred and why.
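
To make the before/during/after comparison concrete, here is a minimal Python sketch; the measure names and all values are invented for illustration.

```python
# Hypothetical measures, collected three times around an engagement process.
baseline  = {"attended_a_meeting_pct": 12, "trust_in_city_govt_pct": 34}
midpoint  = {"attended_a_meeting_pct": 18, "trust_in_city_govt_pct": 37}
follow_up = {"attended_a_meeting_pct": 25, "trust_in_city_govt_pct": 41}

for measure in baseline:
    change = follow_up[measure] - baseline[measure]
    # Showing all three points makes it easier to argue where change occurred and why.
    print(f"{measure}: {baseline[measure]} -> {midpoint[measure]} -> "
          f"{follow_up[measure]} (net change {change:+d})")
```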

Some questions you may consider asking early in a process, when developing an evaluation plan for outcomes, are:

  • What data is currently collected within the community that might serve as a baseline?
  • What additional data might be useful, and can a baseline be obtained through an initial survey?
  • What types of issues have been arising in our community, both with regard to specific policies and with the way those policies are made?
  • What existing processes or efforts could be informed by the information obtained during the engagement process, and how will we connect with those?
  • If policy or action recommendations are to be made during the process, what is the likely time period for implementation, and thus for evaluation?

During the process you might gather, through surveys and evaluations, information that answers the following questions:

  • What subjects are citizens most focused on?
  • What interests, concerns, or values are being expressed?
  • What information is being relied on?  What information is missing or misunderstood?
  • What kinds of changes are being proposed or recommendations made?
  • What time frames are being discussed for implementation?

These kinds of questions will help you fine-tune your evaluation plan.  After the process you can review the data and ask:

  • What have we learned, and how can it help us make better decisions?
  • What ongoing efforts could be informed by the information obtained or included in an implementation plan?
  • If there are barriers (including lack of resources) to implementing certain changes or recommendations made by citizens, what are they, and how might we address them?
  • How can we track and report progress, and over what time frame should that occur?

Asking and answering these types of questions can lead to further dialogue, education, and reduced conflict over decisions made.

When evaluating the outcomes of your engagement process, you can use a wide array of survey tools and local data.   If you choose to focus on quantitative data, consider looking at an organization like the Baltimore Neighborhood Indicators Alliance for examples of measurement.  Depending on the issue, you could also choose to focus on simple, easily conveyed indicators like money saved or spent, or changes in energy use.  For other issues, qualitative data may be used.  For example, Columbia, Missouri's vision tracking report, which we helped to develop, simply indicates for each specific, identified goal whether the goal has been completed, progress is being made, further action or approval is needed, or the goal is no longer being pursued.
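
A tracking report of that kind can rest on a very simple structure.  The Python sketch below is hypothetical: the goals are invented, and the four status values are simply borrowed from the categories just described; it is not how Columbia's report is actually implemented.

```python
from enum import Enum

class GoalStatus(Enum):
    COMPLETED = "Completed"
    IN_PROGRESS = "Progress is being made"
    NEEDS_ACTION = "Further action or approval is needed"
    NOT_PURSUED = "No longer being pursued"

# Invented goals; a real report would draw these from the adopted vision.
goals = {
    "Expand downtown bike lanes": GoalStatus.IN_PROGRESS,
    "Open a neighborhood resource center": GoalStatus.NEEDS_ACTION,
    "Adopt a biennial citizen survey": GoalStatus.COMPLETED,
}

for goal, status in goals.items():
    print(f"{goal}: {status.value}")
```

The design choice here is deliberate simplicity: a fixed, small set of plainly worded statuses is easy for citizens to read at a glance, even when no quantitative indicator exists for a goal.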

A common complaint citizens make about engagement efforts is that their recommendations just "sat on the shelf".  Members of your community want to see that their input is used.  This is why tracking and reporting how substantive decisions and actions were affected by that input is the type of evaluation the public is most likely to be interested in.  Dissemination and discussion of such evaluations can and should occur over a defined time period, since implementation often occurs over many years.

What Are You Evaluating? Part 2

Last week we discussed evaluating the purpose of a citizen engagement process.  This week we continue our series of posts on evaluating citizen engagement processes by offering a few suggestions for evaluating the mechanics of such a process.

Once you know your purpose, you can move on to the mechanics of an engagement process.  Questions you might ask as you set up the process include: What type of process would best engage our audience? What kind of process can we implement in a responsible and sustainable manner? What types of resources (including information, volunteers, rooms, equipment, food, etc.) will we need? What types of funds or in-kind donations are available? What kinds of outreach are needed?  What level of participation are we hoping for?  What type of training or orientation will be needed for the process to be productive? Setting baseline (high and low) targets in these areas and developing related checklists for implementation will help with recruitment, volunteer training, and ongoing evaluation.

After you have initiated a process you can ask: Are we on track? Is our process operating as planned?  Were our original assumptions and projections correct, or do we need to adjust to changing circumstances?  In evaluating an ongoing process you have to be willing to make changes in the setup and in how the dialogue is being facilitated.  You may also need to reassess your expectations and slow the process down, allowing more time for discussion, or break it into stages.  In a process focused on building citizen engagement, for instance, if citizens are in fact engaged, too strong a push to "complete" the process and call it finished, rather than allowing extra time, can create dissatisfaction, disengagement, and distrust.

After an engagement process you can evaluate what worked and what might be improved, and document lessons learned.  Questions you might ask here are: Did things flow smoothly?  Were resources available when needed?  Was the process operated cost-effectively?  Were issues sequenced effectively?  Were good connections made between the various stages of the process?  Were some channels of communication and outreach more productive than others, and if so, which ones? Did this vary by community?  How satisfied was the public with the opportunities provided for input, and did they feel their input was heard and valued? These evaluations can be shared with internal audiences for future planning, and making them public can also build trust with your constituents and demonstrate that you respect and want to encourage ongoing engagement.

Use Evaluation To Improve Planning and Build Trust

As we have previously discussed in our series of posts on Structuring Engagement, conducting a successful engagement process requires plans tailored to fit your community.  A specific plan for evaluation, designed at the outset of the process, will help you understand, verify, and improve the outcomes of engagement processes within your community.  Developing such a plan has several other benefits as well.  For instance, setting up a plan for evaluation will help you clearly identify and communicate goals for the process.  Identifying potential outcomes and how progress will be evaluated and reported also assures the public that the process is not just 'window dressing' but a serious and legitimate effort at public engagement.  By discussing the methods for measuring outcomes, you can better frame public communication at the outset, throughout the project, and at the end.

Here are a few questions you can consider to help you design an evaluation plan:

  • What is the purpose of our engagement process, both substantively and procedurally?
  • Who are we trying to engage and why?
  • Who are the audiences for our evaluation results?  Are their interests similar or different?
  • Given our overall purposes and our audiences, what is our purpose in providing evaluations of the process?
  • What sorts of information and reporting formats will help both us and our audiences better understand the outcomes of the engagement process?
  • What types of information are readily available and what additional information will need to be collected?
  • How can this additional information be most easily and most cost effectively collected?
  • How (and how often) will this information be collected, analyzed, and reported? (Note that the answer may differ depending on the type of information being collected and the purpose of its collection.)
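
One way to keep the answers to these questions from getting lost is to write them down as a single, structured plan.  Below is a minimal, hypothetical Python sketch of such a plan; every field name and value is a placeholder, not a prescribed template.

```python
# Hypothetical evaluation plan; all fields and values are placeholders.
evaluation_plan = {
    "purpose": "Assess whether the budget forum changed spending priorities",
    "audiences": ["city council", "participants", "general public"],
    "existing_data": ["annual citizen survey", "budget line items"],
    "new_data_to_collect": ["pre- and post-forum participant survey"],
    "collection_methods": ["online survey", "paper forms at the forum"],
    "reporting": {"format": "one-page progress summary", "frequency": "quarterly"},
}

# A written plan also makes gaps visible: flag any question left unanswered.
for field, answer in evaluation_plan.items():
    if not answer:
        print(f"Evaluation plan is missing an answer for: {field}")
```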

Note that the results of your evaluation do not need to be statistically significant to be useful.  In fact, trying to turn your evaluations into statistical data collection projects may distract you from learning about the more qualitative accomplishments made during engagement processes and from building a rich dialogue.  Instead, think of evaluation as a way to gather information that can inform, without controlling, outcomes.  Ongoing gathering and sharing of the information collected, with an invitation for further feedback, can improve both communication and analysis, and help you and your community identify and make progress toward common goals.

Measuring the Outcomes of Engagement

In October 2010 John Gaventa and Gregory Barrett published an Institute of Development Studies (IDS) report titled "So What Difference Does it Make? Mapping the Outcomes of Citizen Engagement", which assessed the outcomes of 100 different citizen engagement projects across 20 countries.  From these projects they identified 800 different outcomes, which they then grouped into four general categories: construction of citizenship, practices of citizen participation, responsive and accountable states, and inclusive and cohesive societies.  Each of these categories includes both positive and negative possible outcomes.  We have included Gaventa and Barrett's summary chart (p. 25) below.

While this report is a qualitative assessment, it is aligned with the literature on expected outcomes and confirms that those outcomes are occurring.  The report also provides insight into how those outcomes might be tracked and measured, by supplying a well-defined set of categories for evaluating outcomes.

As we have previously discussed, one way of strengthening engagement processes over time is to demonstrate to your community that their engagement makes a positive difference in the life of the community.   The IDS report indicates that the citizen engagement projects assessed had positive outcomes more often than negative ones.  Negative outcomes were in many instances tied to flawed structures for engagement.  The report's framework provides a starting point for imagining how we can measure the outcomes of engagement in our own communities and either demonstrate progress or identify needed changes.   When starting an engagement process, it is worth thinking about some of the categories in the report and how you might use them to measure outcomes in a way that helps you assess and communicate where change is needed and where progress is being made.
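
To see how the report's categories could serve as a local measurement scheme, here is a minimal Python sketch that tallies outcomes by category and by whether they were positive or negative.  The category names follow Gaventa and Barrett; the outcome records themselves are invented for illustration.

```python
from collections import Counter

# The four outcome categories from Gaventa and Barrett's report.
CATEGORIES = [
    "construction of citizenship",
    "practices of citizen participation",
    "responsive and accountable states",
    "inclusive and cohesive societies",
]

# Each observed outcome recorded as (category, sign).  Records are invented.
observed = [
    ("practices of citizen participation", "positive"),
    ("responsive and accountable states", "positive"),
    ("responsive and accountable states", "negative"),
    ("construction of citizenship", "positive"),
]

tally = Counter(observed)
for category in CATEGORIES:
    pos = tally[(category, "positive")]
    neg = tally[(category, "negative")]
    print(f"{category}: {pos} positive, {neg} negative")
```

A tally like this will not settle whether a process succeeded, but it gives you a consistent, report-aligned way to communicate where progress is being made and where the structure of engagement may need repair.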