Outcomes First: Best Practices and Metrics for Public Participation

Recently, the White House launched another in a series of public participation activities around the US National Action Plan for Open Government. This time, they're focusing on developing best practices and metrics for public participation. This is an intriguing challenge, and there is certainly no shortage of expertise and research on the subject. While the definition of open government embodied in the Open Government Directive is focused on transparency, participation, and collaboration, I submit that "participation" is a means, not an end – the end goal should be "substantive contributions." What counts as a substantive contribution varies with the type of participatory activity, which makes it difficult to compare dissimilar activities. It is possible, however, to measure across similar activities, and there are some rules to apply in the design of a participatory activity that drive success.

Comparing Participation Activities

The White House seeks advice on comparing public participation activities across a wide variety of programs, agencies, budgets, and the like. Here it is useful to look to the International Association for Public Participation (IAP2) Spectrum of Public Participation, because participation activities should be compared on the basis of their desired goals. The IAP2 Spectrum defines five overarching archetypes:

  • Inform – to provide the public with information that assists them in understanding a problem, alternatives, and/or solutions
  • Consult – to obtain public feedback on analysis, alternatives, or decisions
  • Involve – to work directly with the public throughout a process, ensuring concerns and aspirations are understood and considered
  • Collaborate – to partner with the public in each aspect of a decision, including the development of alternatives and preferred solution
  • Empower – to place the final decision-making in the hands of the public

It should be easy to see how much of the work of the Federal government already fits within this spectrum – for example:

  • Federal Advisory Committees are a hybrid form of public participation – the membership is made up of members of the public (some are even at-large representatives), and the committees are empowered to make recommendations to the government. Non-members may submit comments and view the committees' work in the open, which gives them a consultative role
  • Rulemaking is an involvement form of public participation, but it requires a good deal of informing before members of the general public can truly engage on the issue
  • The National Environmental Policy Act provides for a wide variety of involvement opportunities for the general public, as well.

There are many more, but each of these is so different from the others that comparing across them makes little sense. Instead, it is useful to benchmark and understand the variation within groups of similar activities.

A further complication when you’re comparing participation activities is that the subject matter and affected population will drive the volume of participation. Take, for example, a rulemaking that sets fuel efficiency standards for cars – such an activity will receive a lot more attention than a rulemaking that impacts where people can operate air tours.

Minimum Standards of Good Participation

The minimum standard of good participation is simple: everyone who should have participated was sufficiently, and substantively, represented. It is very rare that an agency has no sense of the parties that would be interested in one of its participation activities. Whether it's an online dialogue about a strategic plan, a policy issue, a rulemaking, or some notice (a public meeting, a non-regulatory activity, etc.), an agency should be able to estimate who they would like to see involved, and they should be able to tell whether they've achieved sufficient representation.

As an example, earlier this year, the Department of Transportation (DOT) held a national dialogue on the subject of women in blue-collar transportation careers. One of the goals for this dialogue was to involve stakeholders from a wide array of disciplines, and another goal was to ensure that the dialogue extended outside of Washington, DC. DOT generated a map of participation to test whether they achieved those goals – you should check it out!
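
To make that kind of check concrete, here is a minimal Python sketch of a representation test. The participant records, state codes, and stakeholder groups are hypothetical placeholders (not DOT's actual data); an agency would pull these from its own dialogue platform.

```python
# Hedged sketch: checking whether a dialogue reached the people it set out to reach.
# All data below is invented for illustration.
from collections import Counter

# Hypothetical participant records: home state and self-identified stakeholder group
participants = [
    {"state": "DC", "group": "industry"},
    {"state": "VA", "group": "labor"},
    {"state": "CA", "group": "advocacy"},
    {"state": "TX", "group": "education"},
    {"state": "WA", "group": "government"},
]

# Goals modeled on the DOT example: reach beyond the Washington, DC area
# and hear from a defined set of stakeholder disciplines.
target_groups = {"industry", "labor", "advocacy", "education", "government"}
dc_area = {"DC", "MD", "VA"}

by_state = Counter(p["state"] for p in participants)
by_group = Counter(p["group"] for p in participants)

outside_dc = sum(n for state, n in by_state.items() if state not in dc_area)
missing_groups = target_groups - set(by_group)

print(f"Participants by state: {dict(by_state)}")
print(f"Share outside the DC area: {outside_dc / len(participants):.0%}")
print(f"Target groups not yet represented: {sorted(missing_groups) or 'none'}")
```

A map like DOT's is simply a visualization of the same tallies; the point is that the representation goals are stated up front and then checked against actual participation.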

But, going back to my original statement, one must understand what constitutes "good" participation. Depending on the medium, participation can range from something as simple as a vote (elections are, after all, a public participation activity) to actively working through a series of problems in a charrette. "Good" participation is really substantive contribution – and measures of substance should be tied to the desired outcomes of a given activity:

  • When conducting an online participation activity, it is useful to measure both visitors and participants. While an agency may be hosting a consultative activity, "lurkers" could be benefitting by becoming more informed, which is certainly an acceptable outcome even if there isn't a substantive contribution; informing activities produce their substantive contributions outside the dialogue itself
  • Evaluating the quality of participation is also key. If the activity's desired outcome is to uncover and debate differences among stakeholders, then one should measure the amount of discussion as well as the number of original ideas presented. It is also possible to assess whether participant contributions were on topic and whether they addressed the questions at hand (a sketch of these measures follows this list)
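
One way to operationalize these measures is sketched below in Python. The visitor count, contribution records, and on-topic judgments are hypothetical placeholders; in practice they would come from the dialogue platform's analytics and a review of the contributions.

```python
# Hedged sketch: simple reach and quality measures for one online dialogue.
# All figures and field names are invented for illustration.

unique_visitors = 1200              # people who viewed the dialogue ("lurkers" included)
participants = 85                   # distinct people who actually contributed

contributions = [                   # posts submitted during the dialogue
    {"is_reply": False, "on_topic": True},   # an original idea
    {"is_reply": True,  "on_topic": True},   # discussion of that idea
    {"is_reply": True,  "on_topic": False},  # an off-topic comment
    {"is_reply": False, "on_topic": True},   # another original idea
]

participation_rate = participants / unique_visitors
original_ideas = sum(1 for c in contributions if not c["is_reply"])
discussion_posts = sum(1 for c in contributions if c["is_reply"])
on_topic_share = sum(c["on_topic"] for c in contributions) / len(contributions)

print(f"Participation rate: {participation_rate:.1%} of visitors contributed")
print(f"Original ideas: {original_ideas}; discussion posts: {discussion_posts}")
print(f"On-topic share of contributions: {on_topic_share:.0%}")
```

None of these numbers measures "good" participation on its own; they only become meaningful against the desired outcomes of the specific activity.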

In short, “good” participation is an elusive concept, and the techniques applied to measure substantive contribution are (again) defined by the type of activity being conducted. “Good” participation is most definitely not defined by the volume of participants. The Institute for Local Government has developed a worksheet that helps assess the effectiveness of public engagement activities, which may be useful for measuring “good” public participation. Also, check out the Canadian Community for Dialogue and Deliberation (@C2D2ca) article on evaluating public participation.

Increasing Diversity of Representation

It starts with notice. Dave Meslin explains this better than I ever could in this TED talk. Pay special attention to example #1. Official notice alone is neither sufficient nor effective notice. Agencies have to get out of the habit of publishing their notice in the Federal Register and hoping for the best. We're not going to broaden the base by relying on one channel.

From there, it comes down to designing an effective public participation activity, and there is a growing body of research on how to design these activities well.

The bottom line here is that increasing diversity of representation is not going to be solved with technology alone. Deliberate planning around who to reach, how best to reach them and draw them into the activity, and the design of that activity all come into play.

Informing Participation

Too often, we assume that people arrive at a participation activity with all the facts, ready to engage the moment they enter the space (whether in person or online). The truth of the matter is that this is often far from the case. Objective presentation of the context and the issues at hand is critically important. There are at least two rules to apply here:

  • Orient participants to the process they're about to begin. If you're increasing the diversity of participation and engaging non-traditional stakeholders, it's highly likely that they don't know much about the overarching process. Set their expectations and educate them. Sometimes even traditional stakeholders require orientation to new participatory activities: I have seen agencies move their participatory activities online, only to see their traditional stakeholders engage by copying and pasting what would normally have been submitted as a form letter into an online forum, making it difficult to engage in a discussion around their contribution
  • Provide tips to participants. If your participatory process is somewhat unstructured (such as a dialogue), let them know what constitutes “useful” feedback. In DOT’s Regulation Room experiment (see this paper from Cornell), participants frequently wondered where they could vote, but a rulemaking comment period isn’t a place where voting occurs. You have to structure the space to create the participation you desire.

Conclusion

The questions addressed in this post are interrelated – agencies need to establish their desired outcomes from a participatory activity, which drives the measure of “good” participation, and “good” participation is both diverse and informed. Deliberate planning, actual notice, and orientation to the medium are all critical success factors for an outstanding participatory activity.

Disclaimer: DOT is a client