
Open Civic Data and the Human Services — Looking Beyond Today’s Flea Market

Governments are throwing open their gates and releasing large volumes of data for public use. Open civic data is the latest revolution, as Paul Wormeli recently called it on the Stewards of Change blog. But the revolution is nowhere near complete. Summing up the movement’s current moment, he concludes:

Any time there is such a tsunami of innovation… there are shortcomings and issues with achieving a more rational methodology… Observers have noted that open data offerings have taken hold so quickly that there are no useful ways to compare the data sets from city to city, or to gain any national insight from the publications of individual sites. In effect, there are no standards on which cities might find a common ground to publish data sets that could be aggregated. There are certainly advantages to building such standards for comparison purposes, but the stage of our current development is exciting just as it is unfolding…      

That’s an apt statement of the open data movement’s major limitation.

Yes, this is a moment of extraordinary innovation. And it’s a lot of fun. Browsing the federal U.S. DATA.GOV portal is a bit like wandering through a really good flea market, discovering charming—and in many cases useful—items that you would never have thought to look for.

A limitation of flea markets, though: much of the stuff that you need for daily living isn’t sold there. Another limitation: the stuff that you do find is usually idiosyncratic and difficult to match. That’s why more people furnish their homes from Ikea than from flea markets.

My conclusion: Open civic data will only live up to its potential if the range of data offerings expands and the data becomes more standardized.

So how exactly could that happen? Or, to make the question more manageable: How could that happen for open data in the human services?

Hmmm. Actually, that points to a different question that would be a better starting point: What kinds of open data are relevant to the human services anyway?

Well, what do human service organizations do? People often describe it as a linear sequence of steps. Sometimes it’s even sketched out as a logic model. First there are problems and needs. Next, services are offered. Then services are provided. Finally, there are outcomes.

(Side note for systems thinking mavens: Yes, I know that’s a wildly oversimplified view. But for the present purpose, it will do the trick.)

So what kinds of open civic data would correspond to these steps?

Needs / problems. This is the richest vein of existing open data so far. There have long been public statistics on poverty from the census and on prevalence of crime from the police. Now education and health statistics are more easily accessible too. There are also lots of relevant data on quality-of-life issues reported to 311 centers (e.g. rat sightings, housing complaints). The list goes on and on. Some of these data sets do have standard formats. For some others, conversations are happening about possible standards. In short, work is in progress.

Services offered. This one is messy. Plenty of organizations create databases of records to direct people toward the service programs in their communities. Unfortunately, that means multiple overlapping stores of data. And most of them haven’t been opened up yet. Standards exist but so far haven’t been widely adopted. This has been called the Community Resource Directory Problem, and there are energetic discussions among information and referral providers about how to resolve it. (Stay tuned for updates.)
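
To make the problem concrete, here’s a toy sketch in Python. Nothing in it comes from any actual directory or published standard; the field names, the taxonomy code and the “common” schema are all invented. It just shows how two overlapping directories might describe the same food pantry differently, and what mapping them onto a shared format involves.

```python
# Illustrative only: the field names and the "common" schema below are
# hypothetical, not drawn from any particular resource-directory standard.

# The same food pantry as it might appear in two different referral databases.
directory_a_record = {
    "AgencyName": "Eastside Community Food Pantry",
    "SvcCategory": "FOOD/Pantry",
    "Phone": "(555) 010-2030",
    "Hours": "M-F 9-5",
}

directory_b_record = {
    "provider": "Eastside Comm. Food Pantry",
    "taxonomy_code": "XX-1234.5678",   # made-up taxonomy code
    "telephone": "5550102030",
    "open_hours": "Mon-Fri 09:00-17:00",
}

def normalize_a(rec):
    """Map directory A's fields onto a shared schema."""
    return {
        "organization_name": rec["AgencyName"],
        "service_type": rec["SvcCategory"],
        "phone": rec["Phone"],
        "hours": rec["Hours"],
    }

def normalize_b(rec):
    """Map directory B's fields onto the same shared schema."""
    return {
        "organization_name": rec["provider"],
        "service_type": rec["taxonomy_code"],
        "phone": rec["telephone"],
        "hours": rec["open_hours"],
    }

if __name__ == "__main__":
    for normalized in (normalize_a(directory_a_record), normalize_b(directory_b_record)):
        print(normalized)
```

Notice that even after the field names are reconciled, the values still disagree. Shared content conventions matter as much as shared field names.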

Services provided and outcomes achieved. This is the human service sector’s critical gap in open data. How much work is being done with whom by what organizations to address what problems in what ways… and what are the results? If you wander around DATA.GOV looking for this kind of information, you won’t find much.

Client-level records are confidential, of course, so data on outputs and outcomes could only be provided in the aggregate. So let’s imagine aggregated data in standardized formats, sliced and diced in a lot of useful ways—by geographic areas and demographic characteristics and time periods, for example. Let’s imagine that such data were so commonly available that program planners and community advocates could easily overlay a city map with reliable measures on clients served and numbers of services provided and results achieved. What would the impact be? Currently, that kind of exploration is expensive and laborious; researchers first must negotiate access, then typically spend enormous effort cleaning the data. But what if it were quick and cheap to do? There’s already an animated public conversation about how to improve the effectiveness and efficiency of human service programs. Wide availability of well-structured open data about outputs and outcomes would make the conversation broader, faster and better informed.
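
Here’s a minimal sketch of what that aggregation step could look like. The client records, field names and suppression threshold are all invented for illustration; the point is only that confidential case-level data can be rolled up into a table that is safe to publish openly, broken out by geography, demographics and time period.

```python
# A toy sketch: roll confidential client-level records up into an aggregate
# table suitable for open publication, with small cells suppressed.
# All field names and values are hypothetical.
import pandas as pd

client_records = pd.DataFrame([
    {"neighborhood": "Riverside", "age_group": "18-24", "quarter": "2014Q1",
     "services_received": 3, "goal_achieved": True},
    {"neighborhood": "Riverside", "age_group": "18-24", "quarter": "2014Q1",
     "services_received": 1, "goal_achieved": False},
    {"neighborhood": "Hillcrest", "age_group": "25-44", "quarter": "2014Q1",
     "services_received": 5, "goal_achieved": True},
])

aggregate = (
    client_records
    .groupby(["neighborhood", "age_group", "quarter"])
    .agg(clients_served=("services_received", "size"),
         services_provided=("services_received", "sum"),
         goals_achieved=("goal_achieved", "sum"))
    .reset_index()
)

# Suppress cells too small to publish safely. The threshold here is
# unrealistically low, just so the toy data shows suppression at work.
MINIMUM_CELL_SIZE = 2
publishable = aggregate[aggregate["clients_served"] >= MINIMUM_CELL_SIZE]

print(publishable)
```

A real publisher would use a higher suppression threshold and far richer dimensions, but the basic move (aggregate, suppress small cells, publish) stays the same.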

So what’s standing in the way? Of course there are a lot of financial, technical, political and inter-organizational issues to resolve. But there’s an even more basic barrier: isolated specialist conversations.  There are at least three major efforts to improve human service data, but they’re not talking to each other enough.

One effort has to do with performance measurement. A whole profession has grown up to help public and nonprofit human service providers measure what they’re doing. Recently there’s been more focus on the need for comparability, and on the fact that service providers suffer when funders don’t coordinate their reporting requirements with each other. There’s a growing aspiration to specify common measures. But high-level conversations about performance measurement are often a bit naive about the practicalities of collecting data and the role of information technology. Influential books about performance measurement treat technology mostly as a means of storing and delivering measures that someone else—mysteriously and providentially—has already compiled. There’s not much discussion of the barriers to producing performance measures or how to overcome them. And the question of how widely measures should be published is rarely raised.

Another effort promotes interoperability. The National Information Exchange Model (NIEM) now makes it much easier to set up channels for information systems to talk with each other. But the largest part of that effort has focused on streamlining business processes; take a tour of the interoperability packages that agencies have built and you’ll find that only a handful of them are designed to transport performance measures.

And another—the Open Civic Data movement—is mostly concerned with transparency and the advantages of public access to government data.

So far, these three efforts haven’t coalesced. Have there been any projects that specify common measures for a large set of comparable human service programs and deploy interoperability standards to collect the data and warehouse the measures together in a place that’s openly available? Not yet, as far as I know. Bringing these separate strands together would be a major step toward wiring the human service sector to become a responsive whole.
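
If the strands did come together, the plumbing would not have to be exotic. Here’s a hypothetical sketch of the end-to-end flow: a commonly specified measure, a shared exchange format that each provider’s system emits, and a single openly published dataset. None of the names or formats below come from NIEM or any other real standard; they are placeholders for whatever an interdisciplinary conversation might settle on.

```python
# A minimal sketch of the missing end-to-end pipeline: a commonly defined
# measure, a shared exchange format submitted by providers, and a single
# openly publishable result. All names, fields and values are hypothetical.
import json

# 1. A common measure, specified once for all comparable programs.
COMMON_MEASURE = {
    "measure_id": "job_placements_per_quarter",
    "definition": "Count of clients placed in unsubsidized employment",
}

# 2. Each provider's system emits the measure in the same exchange format.
provider_submissions = [
    json.dumps({"provider_id": "org-001", "measure_id": "job_placements_per_quarter",
                "geography": "Riverside", "period": "2014Q1", "value": 42}),
    json.dumps({"provider_id": "org-002", "measure_id": "job_placements_per_quarter",
                "geography": "Hillcrest", "period": "2014Q1", "value": 17}),
]

# 3. A shared warehouse collects the submissions and publishes them openly.
open_dataset = [json.loads(msg) for msg in provider_submissions]

total = sum(row["value"] for row in open_dataset
            if row["measure_id"] == COMMON_MEASURE["measure_id"])
print(f"{COMMON_MEASURE['measure_id']} citywide, 2014Q1: {total}")
```

The hard part isn’t the code; it’s agreeing on the measure definitions and getting organizations to adopt them.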

So what’s the solution? For starters: An interdisciplinary conversation about creating an open ecosystem of high quality standardized measures.

—Derek Coursen

P.S. Next week in California there will be an Open Health and Human Services Datafest. It’s the first open data conference I’ve heard of that has the words human services in its title. A milestone!

If you found this post useful, please pass it on! (And subscribe to the blog, if you haven’t already.)

