The Human Service Sector’s Four Biggest Information Management Problems

What are the human service sector’s biggest information management problems? The question might seem odd. How can we say what the biggest problems are if we don’t talk about the goals of managing information first? And since different stakeholders have different goals, isn’t the answer really that it depends on where you sit?

Not necessarily. Some kinds of problems are systemic. They can get in the way of everyone’s goals. They may be so common that people mistake them for natural features of the landscape. They may have no clear owner. All the parties may be doing their jobs perfectly well, but the collective result is a mess.

Focusing on that kind of situation, I’m going to stick my neck out and make an argument for what the four biggest problems are. (The order below is not meant to rank them by importance, though. I’ll begin with the ones that have already gotten the most attention and proceed to the ones that have so far received less discussion.)

1. Isolated Agency Silos

No surprise here. Silos have become universally unpopular. The phrase “connecting silos” now gets over four million hits on Google, while “breaking down silos” gets half a million. It’s a wonder there are any free-standing silos left.

But the human service sector is still the land of silos. The problem has been recognized for decades. Various local and state agencies provide services of different kinds, there’s a lot of overlap between the clients of different agencies, and the clients’ needs are interrelated. Sharing data on clients would allow agencies to understand and meet people’s needs more completely. It could streamline enrollment and save everyone paperwork, time and money. But are agencies able to do that? Up until very recently, the answer was usually no. Established practices, culture and sometimes laws (or at least legal advice) would create organizational walls through which data could not pass. Meanwhile, technical barriers usually made it too expensive to integrate data across different information systems anyway.

That has begun to change, though, since the National Information Exchange Model (NIEM) launched its Human Services Domain two years ago. NIEM is a technology toolkit sponsored by the federal government that allows agencies to set up communication channels between their information systems. Many jurisdictions are already using NIEM to help child welfare agencies and juvenile courts coordinate their work more closely. New York City’s HHS-Connect enables caseworkers from different city agencies to access each other’s case data about the clients that they have in common. There are lots of good early success stories.

The technology exists. Will it be used? Unlike some federally mandated data standards of the past, NIEM is a flexible toolkit at the disposal of locally controlled efforts. It can actually save money in comparison to previous technological options. For localities that want to share data, the way forward is open. What remains to be seen is how far and how quickly the movement to integrate data will spread, and how agencies will negotiate the delicate legal and ethical issues around client privacy.
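To make the data-sharing idea concrete, here is a minimal sketch of what receiving a NIEM-style payload might look like on the consuming side. The nc: namespace and the person-name elements follow NIEM Core naming conventions, but the root element, the exch: namespace, the ReferralID field and the sample data are all hypothetical; in a real exchange, the participating agencies define the schema together.

```python
# A minimal, illustrative sketch of reading a NIEM-style exchange payload.
# The nc: elements follow NIEM Core naming conventions; the exch: namespace,
# root element and sample data are hypothetical stand-ins, since a real
# exchange uses a schema negotiated by the agencies involved.
import xml.etree.ElementTree as ET

SAMPLE_PAYLOAD = """<exch:ClientReferral
    xmlns:exch="http://example.org/hypothetical-exchange/1.0"
    xmlns:nc="http://niem.gov/niem/niem-core/2.0">
  <exch:ReferralID>A-10422</exch:ReferralID>
  <nc:Person>
    <nc:PersonName>
      <nc:PersonGivenName>Jane</nc:PersonGivenName>
      <nc:PersonSurName>Doe</nc:PersonSurName>
    </nc:PersonName>
  </nc:Person>
</exch:ClientReferral>"""

NS = {
    "exch": "http://example.org/hypothetical-exchange/1.0",
    "nc": "http://niem.gov/niem/niem-core/2.0",
}

root = ET.fromstring(SAMPLE_PAYLOAD)
referral_id = root.findtext("exch:ReferralID", namespaces=NS)
given = root.findtext("nc:Person/nc:PersonName/nc:PersonGivenName", namespaces=NS)
surname = root.findtext("nc:Person/nc:PersonName/nc:PersonSurName", namespaces=NS)
print(f"Referral {referral_id} received for {given} {surname}")
```

The point is not the particular syntax but the shared vocabulary: because both agencies agree on what nc:Person and its children mean, each side can map the payload onto its own internal data model.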

2. Unsuccessful Information System Projects

Today all but the smallest human service organizations depend on information systems. Software enforces operational rules, collects data and produces information for managers. But many organizations cannot find a turnkey solution that fits their needs. They are forced to embark on some kind of software project: they may have a commercial off-the-shelf (COTS) system significantly customized, or they may build a new system from scratch. Either way, it’s risky—software projects frequently fail entirely or in part. When that happens, the organization is left with depleted resources, demoralized stakeholders, and no way to achieve whatever goals had been riding on the system’s success.

That’s not unique to the human services. In 1995 the Standish Group famously reported that 31% of information system projects in all industries were cancelled before ever being completed. In 2006 the same firm estimated that only 35% of projects were successful; 19% were failures and the remaining 46% were challenged. (That’s a nice way of saying that they had budget or timeline overruns, or the scope had to be reduced.) Research and innovation to understand and prevent this kind of mess have been ongoing for decades.

Since there are software project messes everywhere, we could leap to the conclusion that the messes in the human service sector must result from the same causes as the messes elsewhere. (The usual laundry list includes lack of user involvement, lack of executive support, lack of clear requirements, and half a dozen more.) But that’s the kind of easy answer that makes me suspicious. Those broad categories are no doubt useful for some purposes. But they don’t home in on the peculiar challenges that human service organizations face. Here are three that deserve special attention: the pace of change, the need for performance data, and the sector’s peculiar terminology.

Human service policies and practices change quickly. As a recent white paper from the American Public Human Services Association (APHSA) and Microsoft emphasized, no one who is building or buying human service software should assume that the environment will remain stable for long. And no one wants to be stuck with a system that can only meet yesterday’s needs. A key success factor is therefore flexibility. To achieve that, the software industry is currently emphasizing new approaches to software architecture. But architecture is only part of the solution. As I’ve written elsewhere, other factors need to be taken into account too. How procurement documents are written, how project management roles are defined, and how analysts think about requirements will all impact how flexible the resulting system is.

The role of performance data is another complicating factor. A for-profit enterprise can always look to its bottom line to see whether or not it is succeeding. A human service organization cannot; it must collect and analyze data on a slew of largely non-monetary aspects of its work. One major purpose of a human service information system is to provide that data. But another is to manage the operational workflow. And too often, those purposes collide. Stakeholder groups with very different priorities compete for support from and control of the same information system. This tension is so central to the experience of human service organizations that it needs to be addressed as a phenomenon in its own right; generic prescriptions lifted from generic project management methodologies are not enough.

And one more peculiar challenge: human service terminology. Many of the core concepts that the sector uses to organize its work—even such basic terms as program, client and case—are vague and ambiguous, can be interpreted differently in different contexts, and have fuzzy, arbitrary or unstable boundaries. It’s hard to build solid software on such a weak foundation.
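To see why that foundation matters for software, consider a deliberately simplified, hypothetical sketch (all names and structures below are invented for illustration). Two agencies both store “cases,” but the word refers to different things, so their records cannot be counted or merged in any straightforward way.

```python
# A hypothetical illustration of the terminology problem: both agencies
# store "cases," but the word denotes different things, so the records
# cannot be compared, counted or merged directly.
from dataclasses import dataclass, field

@dataclass
class AgencyACase:
    """At Agency A, a 'case' is one person enrolled in one program."""
    client_id: str
    program: str

@dataclass
class AgencyBCase:
    """At Agency B, a 'case' is a household, possibly spanning programs."""
    household_id: str
    member_ids: list = field(default_factory=list)
    programs: list = field(default_factory=list)

a_cases = [AgencyACase("p1", "food"), AgencyACase("p2", "food")]
b_cases = [AgencyBCase("h1", member_ids=["p1", "p2"], programs=["food", "housing"])]

# Adding the counts yields 3 "cases" -- but how many clients? How many programs?
print(len(a_cases) + len(b_cases))
```

I am beginning to hear an awareness of this problem bubbling up in various places around the sector. Stay tuned for further discussion in later posts.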

3. Barriers to Producing Performance Measures

In the previous section I painted a broad-strokes picture of software project messes and suggested that the need for performance data is one complicating factor. Now I’m going to turn that inside out by shifting the view to the production of performance measures as a general problem area. (From that perspective, of course, one of the sore spots has to do with information systems.)

The drive to measure the impact of human service programs has been growing for decades. From the 1960s on, program evaluators developed sophisticated statistical and qualitative methods. Then in the 1990s, funders and public administration schools wanted to create tighter feedback loops that could more directly inform decision-making. They promoted performance measurement systems based on simple descriptive statistics about inputs, outputs, quality, efficiency and outcomes. This is the form of measurement that most human service organizations are most closely involved with today.

There’s now a profession of consultants who lead stakeholders through the process of thinking about their goals and theory of change. They typically produce a wish list of the measures that would help the organization monitor and improve its performance. But in the next stage, when the wish list is presented to operational staff and data administrators, it usually turns out that only a minority of the desired statistics is feasible to produce. Most are neither supported by available data nor obtainable through easy changes in data collection practices.

When I first encountered this problem in several projects, I was surprised. So for a couple of years I carried out my own (blissfully unscientific) survey. Whenever I met anyone else who had led performance measurement projects, I would ask the person how many of the measures on their wish lists had turned out to be feasible. The answer I got was invariably somewhere between 20% and 40%.

What’s going on here? It turns out that there are a lot of practical barriers to producing performance measures. (Elsewhere I’ve presented a formal way of categorizing all of them.) Some barriers are easy to understand. Staff may simply not record data that don’t seem relevant to providing services. An outcome measure that looks at a client’s situation a year after discharge will necessarily require costly follow-up, since the data would not normally be collected for any other purpose. And if the only way to produce a measure is to merge data from two different systems, that will add extra cost too. (If the systems belong to different agencies, then there may be inter-organizational barriers to overcome as well.)
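The cost of the two-system merge deserves a concrete picture. As a hypothetical sketch (the field names and matching rule are invented): when systems lack a shared client identifier, the merge becomes a record-linkage problem, and any rule for matching on imperfect attributes like names will produce both false matches and misses that someone has to review.

```python
# A hypothetical sketch of why cross-system merges add cost: with no shared
# client ID, records must be linked on imperfect attributes, and any such
# rule produces false matches and misses that need human review.
def normalize(name: str) -> str:
    """Strip punctuation and case so that O'Brien and Obrien compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalpha())

def same_client(rec_a: dict, rec_b: dict) -> bool:
    """Crude matching rule: normalized last name plus exact date of birth."""
    return (normalize(rec_a["last_name"]) == normalize(rec_b["last_name"])
            and rec_a["dob"] == rec_b["dob"])

system_a = [{"last_name": "O'Brien", "dob": "1980-03-14", "services": 12}]
system_b = [{"last_name": "Obrien", "dob": "1980-03-14", "outcome": "employed"}]

linked = [(a, b) for a in system_a for b in system_b if same_client(a, b)]
print(linked)  # one tentative match -- still only probable, not certain
```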

But people often encounter a far more frustrating situation. Data on a topic of interest is indeed already collected—but in a way that makes it useless for producing the desired measures. Often the problem is in the way the data is structured. In other words, a lot of human service information systems are very poorly aligned with the needs of performance measurement.
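Here is a hypothetical example of structure defeating measurement (the field names and statuses are invented). Suppose a system stores only each client’s current housing status, overwriting the previous value. The data on housing is “collected,” yet the question “how many clients moved from shelter to permanent housing this year?” is unanswerable. A system that keeps a dated status history can answer it with a trivial query.

```python
# A hypothetical contrast between two ways of structuring the same fact.
from datetime import date

# System 1: one row per client, updated in place -- history is destroyed.
system1 = {"client42": {"housing_status": "permanent"}}

# System 2: one row per status change -- the same fact, kept over time.
system2 = [
    {"client": "client42", "status": "shelter", "effective": date(2012, 1, 5)},
    {"client": "client42", "status": "permanent", "effective": date(2012, 9, 20)},
]

# "How many clients reached permanent housing in 2012?" is trivial here...
movers = {row["client"] for row in system2
          if row["status"] == "permanent" and row["effective"].year == 2012}
print(len(movers))  # 1 -- computable from System 2, impossible from System 1
```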

What to do? As a first step, the whole range of barriers needs to be discussed more openly. Resources on performance measurement give excellent guidance on how to choose measures. But then there is almost dead silence as the wish list is thrown over a wall to the unfortunates who are tasked with implementing it. For whatever reasons, the performance measurement profession has simply not focused much on the practical challenges.

But the problems remain, and awareness of them seems to be growing. Last year the Center for Effective Philanthropy (CEP) carried out a survey of nonprofit organizations, including many in the human service field. About 80% of leaders believe they should use performance measures to demonstrate effectiveness. The same percentage already uses data to improve performance. But over 70% wanted more discussion with their funders about how to develop the skills of staff to collect and interpret data. One pointed out that funders should not require grantees to measure outcomes as though it could be done without cost. Leaving aside the dynamics around grant-making, the CEP report reflects a performance measurement profession that needs to more intentionally address issues of feasibility and cost.

And overcoming the barriers that have to do with information systems will call for a genuinely new kind of conversation. Measurers of performance and builders of software so far do not have much of a common language. Above I asserted that human service information systems often cannot produce the data needed for measures. Some software designers might retort: “A performance measure is just a kind of report, and a report is just a kind of system requirement. Tell us what the requirements are and we’ll tell you how we can meet them.” That’s the classic way that engineering is taught. Unfortunately, it only works if the requirements are well understood in advance and relatively stable thereafter. But human service performance measures are frequently put in place or undergo change long after information systems have been built. And the effort to interpret existing data usually leads stakeholders to new questions and a desire for more data. Systems are doomed to disappoint if they can only answer the questions that stakeholders articulated in the initial requirements, or if they cannot easily be modified to cope with change. What is needed, I suggest, is a fundamentally new approach to what a human service information system is. That’s one of the topics that this blog will explore.

4. Uncoordinated Demands on Service Providers

As we’ve seen, producing performance measures at all is surprisingly difficult and costly. How much worse, then, is the situation of nonprofit service providers that must report different data to an array of different funders.

The human services are a vast collaboration between government and nonprofit organizations. A labyrinth of funding streams flows from federal, state and local governments down to the nonprofits that provide direct services. Private philanthropy kicks in another chunk of money to address needs that government doesn’t and to experiment with new ways of solving problems.

Funders need data on how their grantees are performing. Some ask the grantees to define their own measures. More commonly, the funder tells the grantee what statistics to report. Some go even further and mandate that the grantee use a specific information system to collect and transmit granular data. With that approach, a large grant portfolio can amass and analyze the same data from hundreds of organizations about thousands of clients and millions of services.

The problem is that nonprofits with multiple funders must juggle all of those funders’ separate demands. Different funders ask for statistics that are similar but still have significant differences. At best this means extra work manipulating spreadsheets; often, though, the grantee must actually set up a separate mechanism to collect different data. And when funders mandate the use of different information systems, that creates silos: the grantee must do repeated data entry into multiple systems and will be unable to analyze clients’ experience across the various programs. In these ways, uncoordinated data requirements are a drain on limited resources, and they often prevent nonprofits from acquiring the kind of information management capacity that they need to fulfill their missions.
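A small, hypothetical illustration of the “similar but still different” problem (the definitions and numbers are invented): two funders both ask for “clients served,” but Funder A counts anyone with at least one contact while Funder B counts only clients with three or more sessions. The same raw data yields two different official numbers that staff must produce, track and explain.

```python
# A hypothetical illustration of similar-but-different funder demands:
# both funders ask for "clients served," but define it differently.
sessions_per_client = {"ana": 1, "ben": 4, "carla": 2, "dev": 5}

# Funder A: anyone with at least one contact counts as served.
funder_a_count = sum(1 for n in sessions_per_client.values() if n >= 1)

# Funder B: only clients with three or more sessions count as served.
funder_b_count = sum(1 for n in sessions_per_client.values() if n >= 3)

print(funder_a_count, funder_b_count)  # 4 and 2 -- from the very same data
```

And this is the easy case, where the underlying data already exists in one system; when each funder mandates its own system, even this small reconciliation multiplies.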

What’s the solution? For the past few years people have been raising the idea of some kind of coordination among funders. In a recent blog posting at Markets for Good, Laura Quinn suggests that nonprofits would benefit from a rationalized set of metrics and indicators to report on, standardized as much as possible by sector, with a standard way to provide them to those who need them. A 2009 report from FSG highlighted several experiments in building platforms for sharing measurement data from multiple organizations. One of the models it discusses, comparative performance systems, involves prescribing identical indicators and data collection methods across organizations. This idea is gaining traction, but it also leads to questions—both technical and political—about the feasibility and limitations of various approaches to standardizing data. And those, too, will be among the topics that this blog will explore.

—Derek Coursen

If you found this post useful, please pass it on! (And subscribe to the blog, if you haven’t already.)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


3 Comments on “The Human Service Sector’s Four Biggest Information Management Problems”

  1. Thanks for making the effort to pull together a lot of disparate theories and perspectives into a more accessible framework. Many of these barriers go beyond organizations and agencies and point out the need for changes in broader and even more unmanageable public perceptions.

    I think that the emerging form of philanthropy currently known as social venture philanthropy, “philanthrocapitalism”, social impact investment, etc. adds a new element of complexity to the mix. This often involves bridging the gaps between business and nonprofit cultures, and can add yet another set of measurements to the already overburdened nonprofits.
    Bonnie Osinski

  2. Kevin Gaines says:

I read this, and almost every problem identified here is a current issue for me and my colleagues. As you and the readers know, the feds are aware of these issues and are doing their due diligence, if not to resolve them, then at least to provide states, localities and nonprofits the latitude to develop solutions.

    The question is whether human services decision makers, wherever they sit, will exercise sufficient patience and diplomacy to develop and manage governance structures that help us manage our expectations of each other.

  3. Great start. Your readers should know that the Stewards of Change symposium discussed these and other issues.

    This blog is a great way to keep those concerns current.


Your Thoughts?