
Open Civic Data and the Human Services — Looking Beyond Today’s Flea Market

Governments are throwing open their gates and releasing large volumes of data for public use. Open civic data is the latest revolution, as Paul Wormeli recently called it on the Stewards of Change blog. But the revolution is nowhere near complete. Summing up the movement’s current moment, he concludes:

Any time there is such a tsunami of innovation… there are shortcomings and issues with achieving a more rational methodology… Observers have noted that open data offerings have taken hold so quickly that there are no useful ways to compare the data sets from city to city, or to gain any national insight from the publications of individual sites. In effect, there are no standards on which cities might find a common ground to publish data sets that could be aggregated. There are certainly advantages to building such standards for comparison purposes, but the stage of our current development is exciting just as it is unfolding…      

That’s an apt statement of the open data movement’s major limitation.

Yes, this is a moment of extraordinary innovation. And it’s a lot of fun. Browsing the federal U.S. DATA.GOV portal is a bit like wandering through a really good flea market discovering charming—and in many cases useful—items that you would never have thought of.

A limitation of flea markets, though: much of the stuff that you need for daily living isn’t sold there. Another limitation: the stuff that you do find is usually idiosyncratic and difficult to match. That’s why more people furnish their homes from Ikea than from flea markets.

My conclusion: Open civic data will only live up to its potential if the range of data offerings expands and the data becomes more standardized.

So how exactly could that happen? Or, to make the question more manageable: How could that happen for open data in the human services?

Hmmm. Actually, that points to a different question that would be a better starting point: What kinds of open data are relevant to the human services anyway?

Well, what do human service organizations do? People often describe it as a linear sequence of steps. Sometimes it’s even sketched out as a logic model. First there are problems and needs. Next, services are offered. Then services are provided. Finally, there are outcomes.

(Side note for systems thinking mavens: Yes, I know that’s a wildly oversimplified view. But for the present purpose, it will do the trick.)

So what kinds of open civic data would correspond to these steps?

Needs / problems. This is the richest vein of existing open data so far. There have long been public statistics on poverty from the census and on prevalence of crime from the police. Now education and health statistics are more easily accessible too. There are lots of data on quality of life issues reported to 311 centers (e.g. sightings of rats, housing complaints) that are relevant too. The list goes on and on. Some of these data sets do have standard formats. For some others, conversations are happening about possible standards. In short, work is in progress.

Services offered. This one is messy. Plenty of organizations create databases of records to direct people toward the service programs in their communities. Unfortunately, that means multiple overlapping stores of data. And most of them haven’t been opened up yet. Standards exist but so far haven’t been widely adopted. This has been called the Community Resource Directory Problem, and there are energetic discussions among information and referral providers about how to resolve it. (Stay tuned for updates.)

Services provided and outcomes achieved. This is the human service sector’s critical gap in open data. How much work is being done with whom by what organizations to address what problems in what ways… and what are the results? If you wander around DATA.GOV looking for this kind of information, you won’t find much.

Client-level records are confidential, of course, so data on outputs and outcomes could only be provided in the aggregate. So let’s imagine aggregated data in standardized formats, sliced and diced in a lot of useful ways—by geographic areas and demographic characteristics and time periods, for example. Let’s imagine that such data were so commonly available that program planners and community advocates could easily overlay a city map with reliable measures on clients served and numbers of services provided and results achieved. What would the impact be? Currently, that kind of exploration is expensive and laborious; researchers first must negotiate access, then typically spend enormous effort cleaning the data. But what if it were quick and cheap to do? There’s already an animated public conversation about how to improve the effectiveness and efficiency of human service programs. Wide availability of well-structured open data about outputs and outcomes would make the conversation broader, faster and better informed.
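What might that aggregation step look like? Here is a minimal sketch in Python; the field names, districts and the suppression threshold are all invented for illustration, not drawn from any real system, but the pattern (count client-level rows into cells, then withhold cells too small to publish safely) is a common disclosure-avoidance practice:

```python
from collections import Counter

# Hypothetical client-level service records (field names are illustrative,
# not drawn from any real system).
records = [
    {"district": "D1", "quarter": "2013Q3", "outcome": "housed"},
    {"district": "D1", "quarter": "2013Q3", "outcome": "housed"},
    {"district": "D1", "quarter": "2013Q3", "outcome": "not_housed"},
    {"district": "D2", "quarter": "2013Q3", "outcome": "housed"},
]

SUPPRESSION_THRESHOLD = 3  # suppress cells too small to publish safely

def aggregate(records):
    """Roll confidential client-level rows up into publishable cells."""
    cells = Counter((r["district"], r["quarter"]) for r in records)
    published = {}
    for key, n in cells.items():
        # Small cells risk re-identification, so release no count at all.
        published[key] = n if n >= SUPPRESSION_THRESHOLD else None
    return published

open_data = aggregate(records)
```

A real pipeline would add demographic and time-period dimensions, but the privacy logic stays the same: only the aggregated cells, never the client-level rows, would go out as open data.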

So what’s standing in the way? Of course there are a lot of financial, technical, political and inter-organizational issues to resolve. But there’s an even more basic barrier: isolated specialist conversations. There are at least three major efforts to improve human service data, but they’re not talking to each other enough.

One effort has to do with performance measurement. A whole profession has grown up to help public and nonprofit human service providers measure what they’re doing. Recently there’s been more focus on the need for comparability and the fact that when funders don’t coordinate their requirements with each other, service providers suffer. There’s a growing aspiration to specify common measures. But high-level conversations about performance measurement can often be a bit naive about the practicalities of collecting data and the role of information technology. Influential books about performance measurement treat technology mostly as a means of storing and delivering measures that someone else—mysteriously and providentially—has already compiled. There’s not much discussion of the barriers to producing performance measures or how to overcome them. And the question of how widely measures should be published is rarely raised.

Another effort promotes interoperability. The National Information Exchange Model (NIEM) now makes it much easier to set up channels for information systems to talk with each other. But the largest part of that effort has focused on streamlining business processes; take a tour of the interoperability packages that agencies have built and you’ll find that only a handful of them are designed to transport performance measures.

And another—the Open Civic Data movement—is mostly concerned with transparency and the advantages of public access to government data.

So far, these three efforts haven’t coalesced. Have there been any projects that specify common measures for a large set of comparable human service programs and deploy interoperability standards to collect the data and warehouse the measures together in a place that’s openly available? Not yet, as far as I know. Bringing these separate strands together would be a major step toward wiring the human service sector to become a responsive whole.

So what’s the solution? For starters: An interdisciplinary conversation about creating an open ecosystem of high quality standardized measures.

—Derek Coursen

P.S. Next week in California there will be an Open Health and Human Services Datafest. It’s the first open data conference I’ve heard of that has the words human services in its title. A milestone!

Did you find this post useful? If so then please subscribe to the blog via email or RSS feed using the widgets on the right. There will be more discussion of these and related topics in the coming months.

Futurism: Rewiring the Human Service Sector to Become a Responsive Whole

Over the last decade or so, the human service sector in the U.S. has started down the road toward an extraordinary transformation.

Here the doubtful or desperate reader may retort: Yes, it’s started down the road toward losing its funding. Fair enough; austerity is everywhere and it’s not clear whether, when or how that might change.

But looking toward a farther horizon, a more hopeful transformation becomes visible too. There’s nothing dramatic about it. Progress is happening in quiet fits and starts. It’s coming together without a guiding hand. And the drivers are separate movements that only occasionally notice each other’s existence.

What’s happening might be imagined as a wiring project: autonomous organizations are becoming wired together by shared information. But that image, borrowed from electronics, isn’t enough. When information is shared quickly enough and intensively enough, coordinated behavior can emerge in an almost biological way. Eventually, the human service sector could begin to behave almost as though it were a single organism. It could become routinely able to respond quickly, and with collective intelligence, to significant events—whether at the level of an individual client’s crisis or an emerging social problem.

That may seem fanciful. It certainly crosses into the realm of futuristic speculation. But at least three major trends can be observed today that are leading the sector in that direction.

The first is the rise of data interoperability. A few years ago the National Information Exchange Model (NIEM) began to develop standards so that human service agencies can easily build automated exchanges of data between their information systems. As a result, it’s now cheaper and more feasible to wire together, say, a family court and a county child welfare agency. A child’s service plan is instantly transmitted to the court, and the court’s approval (or otherwise) is transmitted back to the child welfare agency. The NIEM tools can be deployed wherever organizations agree that they have a business process in common that they want to streamline electronically.
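The mechanics of such an exchange can be pictured with a toy example. This is not real NIEM; the element names below are invented, and an actual exchange would use the published NIEM schemas and a documented exchange package. It simply shows the shape of a machine-readable court response flowing back to the child welfare agency:

```python
import xml.etree.ElementTree as ET

# Build a small XML exchange message. The element names here are invented
# for illustration; a real NIEM exchange would use elements drawn from the
# published NIEM schemas.
def build_approval_message(case_id, approved):
    root = ET.Element("CourtResponse")
    ET.SubElement(root, "CaseID").text = case_id
    ET.SubElement(root, "PlanApproved").text = "true" if approved else "false"
    return ET.tostring(root, encoding="unicode")

# The receiving agency's system parses the message back into usable fields.
def parse_approval_message(xml_text):
    root = ET.fromstring(xml_text)
    return {
        "case_id": root.findtext("CaseID"),
        "approved": root.findtext("PlanApproved") == "true",
    }

msg = build_approval_message("FC-2013-0042", True)
result = parse_approval_message(msg)
```

Because both sides agree on the message structure in advance, neither system needs to know anything about the other's internal database design; that separation is what makes the exchanges cheap to set up.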

How will data interoperability change the way agencies do their work? Most obviously, it makes communication faster. But the impact isn’t merely to speed up the work that agencies have always done. More agile communication tends to improve the quality of decision-making. And as multiple agencies become accustomed to communicating more quickly and in more depth, agency leaders are more likely to discover ways of reengineering the way they work together. That will lead, also, to subtle shifts in how people think about the borders of their agencies’ work. Formerly isolated silos will be connected—not only through shared data but also in the minds of the people who operate them.

The second trend is the push toward common performance measures. Since the 1990s, public and nonprofit programs have been under pressure to report measures that will show how well they’re fulfilling their missions. Now there’s a growing notion that there ought to be, for each specific type of program, a common set of measures. Already, funding agencies that pay multiple programs to do the same kind of work usually require all of them to report similar data. The next step will be to coordinate decisions about measures across multiple funders. There are plenty of efforts to pinpoint what the measures should be. (The Urban Institute’s Outcome Indicators Project, for example, suggests measures to be used by transitional housing, employment training and prisoner re-entry programs, among others.) Eventually, that kind of coordination will lead to online platforms that allow stakeholders to compare the work and results of multiple organizations and programs. That’s already a reality in the world of arts and cultural organizations, where common measures are collected and distributed by the Pew Cultural Data Project. For the human services, it’s happening more slowly. Right now it’s easy to look up estimates of homelessness or outcomes in child welfare cases at the level of states or cities, and other federal programs offer similar reports. As the thirst for information increases, it’s only a matter of time before platforms emerge that will offer more finely grained indicators.

As human service programs become empirically comparable, that fact will invisibly wire them together. The connection will happen through the awareness of all the people who will be able to look at a panorama of programs in comparison with each other. When executives can easily compare other programs’ numbers to their own, that will influence their decision-making. Ready access to common measures will guide funders and new program planners. Statistics on outputs, outcomes, costs and quality will no longer languish in obscure internal reports—they’ll be out in the world affecting the actions of a broad range of human service stakeholders.

And the third trend is open civic data. It’s the idea that data collected and maintained by the government is of potential use to civil society; if it doesn’t compromise individuals’ privacy or national security, then it ought to be open to the public. As more and more public data becomes available, common human service performance measures will get mashed up with economic, ecological, public health and criminal justice statistics. Those mash-ups will give stakeholders a more textured understanding of the interplay of factors that impact human service efforts. They’ll help program managers to fine-tune their interventions. And they’ll allow planners to identify emerging needs. Of course, researchers have always sought out this kind of information. Right now, though, that’s laborious and costly. As the ecosystem of available data grows, it will become easier, cheaper and therefore more commonplace.

These three trends come from very different origins. The idea of common performance measures was born out of the frustration of funders who felt that they were often flying blind. The National Information Exchange Model arose out of justice agencies’ need to exchange information. And the open civic data movement is related, by ethos, to the ideas of open source software and open copyright licenses. Each one is mostly talked about within its own circle of constituents, and the areas of overlap are only beginning to be explored. All of them, though, are pushing the human service sector toward coalescing into a more responsive whole.

And one more thing they have in common: none of these trends is really about technology innovation per se. This is a different kind of innovation. It’s the creation of new kinds of conversation in the human service sector: a conversation about common uses of data; a conversation about collective choices for structuring data; and—the hardest part—a conversation about defining the common meaning behind the data.

—Derek Coursen


Free as in Puppies: Taking on the Community Resource Directory Problem

Last week Code for America (CfA) released Beyond Transparency: Open Data and the Future of Civic Innovation, an anthology of essays about open civic data. The book aims to examine what is needed to build an ecosystem in which open data can become the raw materials to drive more effective decision-making and efficient service delivery, spur economic activity, and empower citizens to take an active role in improving their own communities.

An ecosystem of open data? How might this brave new thing intersect with human service organizations? That’s mostly beyond the scope of CfA’s current book. One chapter, though—“Toward a Community Data Commons” by Greg Bloom—takes a very serious stab at resolving a perennial headache of information and referral efforts.

Bloom is trying to solve what he calls the community resource directory problem. Various players—government agencies, libraries, human service organizations—develop lists of community resources, i.e. programs to which people can be referred. Originally the lists were paper binders. Later they became databases. Almost always, each directory belongs to the agency that created and maintains it. That’s a problem: what the community needs isn’t a bunch of directories, it’s a single unified directory that would allow one-stop shopping. And the overlap among directories is inefficient too: each one has to separately update its information about the same programs.

One solution might be to somehow make directory data free and open to all. Bloom describes one experiment in that direction. It’s instructive because of the way it failed. Open211, which a CfA team built in 2011 in the Bay Area, scraped together directory data from a lot of sources and made it available via the web or mobile phone—and it also allowed users at large to submit data. As Bloom tells it: This last part was key: Open 211 not only enabled users to create and improve their community’s resource directory data themselves, it was counting on them to do so. But the crowd didn’t come. This, it turns out, is precisely what administrators of 211 programs had predicted. In fact, the most effective 211s hire a team of researchers who spend their time calling agencies to solicit and verify their information.

Exactly right. Reading this, I vividly remembered a project I did years ago: updating the Queens Library’s entire directory of immigrant-serving agencies. It took well over a hundred hours of tedious phone work. (There was time on the pavement too… for example, an afternoon wandering around Astoria looking for a Franciscan priest—I did not have his address or last name—who was rumored to be helpful to Portuguese-speaking immigrants. I never found him.) And then each piece of data had to be fashioned and arranged to fit into a consistent format.

That’s what goes into maintaining a high quality community resource directory. It will not just happen. It cannot be crowd-sourced. And this harsh fact—the labor cost of carefully curated information products—can be hard to reconcile with the aspirations of the open civic data movement.

The lesson: it’s certainly possible to collect community resource information and then set it free… but it will be, as Bloom says, free as in puppies. (This wry expression comes from the open source software movement, where people make the distinction between free as in beer—meaning something that can be used gratis—and free as in speech—meaning the liberty to creatively modify. Someone noticed that freely modifiable software might also require significant extra labor to maintain—like puppies offered for free that need to be fed, trained, and cleaned up after.)

But then Bloom takes the problem toward an interesting possible solution. He invokes the idea of a commons—not the libertarian commons of the proverbial tragedy but rather an associational commons that would include the shared resource (in this case, the data) and a formal set of social relationships around it. He suggests that a community data co-op might be an effective organizational framework for producing the common data pool and facilitating its use.

It’s an intriguing idea. It acknowledges the necessary complexity and cost of maintaining a directory. It might be able to leverage communitarian impulses among nonprofits. And if successful, it could be a far more efficient way of working than the current situation of multiple independent and overlapping directories. Of course, it would face all the usual practical difficulties that cooperatives do; but there’s no reason those should be insurmountable.

This framework could solve a lot of current problems in information and referral. But how might it eventually fit into some larger imagined ecosystem of open data?

Bloom offers a vision for how such a unified directory could be widely used by social workers, librarians, clients, community planners, and emergency managers. That seems entirely feasible, because all those players would need the same kind of information: which programs offer what services when and where.

But Bloom also takes the vision a couple of steps further, imagining a journalist seeing this same data while researching city contracts; and how the directory might be combined with other shared knowledge bases—about the money flowing into and out of these services, about the personal and social outcomes produced by the services, etc.—to be understood and applied by different people in different ways throughout this ecosystem as a collective wisdom about the “State of Our Community.”

This, I think, is where Bloom’s vision will face strong headwinds. I don’t mean political or inter-organizational resistance (though those might crop up too). The problem is that across the broad domain of human service data, very few sub-domains have achieved much clarity or uniformity. Community resource records happen to be one of the more advanced areas. For a couple of decades there’s been a library system standard (MARC21) for storing these records, and now AIRS offers an XSD data standard. So the way is fairly clear toward creating large standardized databases of community resources. Those could then be meshed with, say, the databases of 990 forms that U.S. nonprofits submit to the Internal Revenue Service. The problem is that outside of these few relatively clean sub-domains, human service data gets very murky and chaotic indeed.
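A minimal sketch of what such meshing might look like, with made-up records and field names (the one real-world assumption is that a nonprofit's employer identification number could serve as the join key between the two data sets):

```python
# Hypothetical records: a standardized community resource entry and a
# financial record derived from an IRS Form 990 filing. The field names
# are illustrative; real AIRS-conformant and 990 data are far richer.
directory = [
    {"ein": "13-0000000", "agency": "Harbor House", "service": "shelter"},
    {"ein": "13-1111111", "agency": "Open Door", "service": "job training"},
]
form_990 = [
    {"ein": "13-0000000", "total_revenue": 1_200_000},
]

def mesh(directory, form_990):
    """Join directory records to 990 financials on EIN, keeping unmatched rows."""
    fin = {r["ein"]: r for r in form_990}
    merged = []
    for rec in directory:
        row = dict(rec)
        row["total_revenue"] = fin.get(rec["ein"], {}).get("total_revenue")
        merged.append(row)
    return merged

meshed = mesh(directory, form_990)
```

The join itself is trivial; as the following paragraphs argue, the hard part is that most of the other data one would want to attach has no such clean key or standard structure.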

A project to mesh community resource records with data on contracts, funding sources and public expenditures, for example, would immediately run into the problem that the linkage would need to be made through the concept of a program. Yet that core term is used in very different ways. Sometimes it means a discrete set of resources, practices and goals; sometimes it implies a separate organizational structure; and sometimes it seems to be a mere synonym for a funding stream. The use of that term would need to be tightened, or it would need to be replaced by some clearer concept. But even then, people trying to mesh community resource data with fiscal administrative data would find that the latter are equally unruly. A city’s contract with a nonprofit, for example, may fund one program or many; and the original source of the contract’s funding may be one or many government funding streams from the federal, state or city level. There is no uniform pattern for how these arrangements must be made, nor are there well-developed data standards.

Meshing community resource records with programmatic statistics such as outputs and outcomes would be equally fraught. While there’s a movement toward standardizing some performance measures, concrete results on the ground have been slow in coming. Even if complications from the politics around performance measurement were miraculously eliminated, there would still be the issue of murky and chaotic data that don’t easily support performance measures.

So what’s the solution?

In a nutshell: for the human service sector’s work to become a significant part of the ecosystem of open civic data downstream, the sector will have to embark on a new kind of conversation about the way data is organized upstream. This will necessarily be a longer-term conversation. It will have to involve a more diverse set of stakeholders than are usually assembled at the same table. It will have to ask (and answer) unfamiliar questions, such as: how can information system designers create good interrogable models of public service work rather than merely meeting stated user requirements? It will have to take a hard look at sub-domains that have often not been modeled very well. (Funny-looking taxonomies are an important red flag for identifying those.)

Eventually, that kind of conversation can lead the sector toward far more coherent and holistic ways of organizing its data. The downstream benefits: more successful information system projects, more efficient production of performance measures, and more meaningful data for open civic uses.

—Derek Coursen


The Diagnostic Value of Funny-Looking Taxonomies

Taxonomies are everywhere in information management, yet they are hardly ever formally acknowledged and managed.

So begins a very entertaining article published last year by Malcolm Chisholm. It’s entitled “The Celestial Emporium of Benevolent Knowledge” after a famous short story in which Jorge Luis Borges presented a (fictional) ancient Chinese taxonomy of animals. The deliberately absurd chaos of Borges’ taxonomy serves as a jumping off point for Chisholm to outline the (non-fictional) ancient Western art of taxonomy: the logical and constant division of a genus into the mutually exclusive and jointly exhaustive species that compose it. The very existence of this practice is now almost newsworthy because, as Chisholm notes, traditional logic is hardly taught in the West anymore. (Once upon a time that statement itself would have seemed as unlikely as one of Borges’ fantasies; but no, it is true.)

How do taxonomies show up in software? The most obvious way is in lookup tables. Those tables determine the categories from which the user is allowed to choose.
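A minimal sketch of that arrangement, with invented table and column names: the lookup table embodies the taxonomy, and a foreign key constraint forces every stored value to come from it.

```python
import sqlite3

# The lookup table holds the taxonomy; a foreign key keeps the main
# table's values inside it. Table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE referral_status (code TEXT PRIMARY KEY)")
conn.executemany(
    "INSERT INTO referral_status VALUES (?)",
    [("Client Received Service",), ("Client Refused Service",)],
)
conn.execute(
    """CREATE TABLE referral (
           id INTEGER PRIMARY KEY,
           status TEXT REFERENCES referral_status(code)
       )"""
)

# Allowed: the value appears in the taxonomy.
conn.execute("INSERT INTO referral (status) VALUES (?)",
             ("Client Received Service",))
try:
    # Rejected: the foreign key blocks values outside the taxonomy.
    conn.execute("INSERT INTO referral (status) VALUES (?)",
                 ("Some Invented Status",))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

In other words, whatever categories sit in the lookup table silently become the organization's official theory of the subject; that's why their structure deserves scrutiny.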

I’ve seen strangely constructed taxonomies create costly problems in a lot of information systems, and it continues to puzzle me that there’s relatively little discussion of taxonomy among software designers; so I was glad to see Chisholm’s article come out in a popular newsletter.

Why so little discussion? Perhaps it’s because people don’t want to simply bemoan a problem. Fair enough. But in this case, the problem actually points to its own solution.

In fact, the taxonomies already embedded in lookup tables can have enormous diagnostic value for information system designers. They can be a very fast track to understanding past problems and eliciting current requirements.

Several reasons why:

1 – They’re often an important (albeit perhaps ill-organized) embodiment of the way the organization thinks about its work.

2 – Their virtues and vices are easy to discuss with the stakeholders who are used to them.

3 – They’re a gateway to understanding and critiquing larger architectural decisions, since a lookup table is the domain of a particular attribute within a particular entity within a conceptual data model.

4 – Their very nature carries with it the expectation of a rigorous internally consistent structure. That doesn’t mean that the ancient tradition of classical logic need be considered sacrosanct. But when we see a list that is clearly not being guided by those rules, it should at least lead us to ask: Why not, and what price is being paid for it?

This is an almost archaeological use of taxonomies—but the purpose isn’t merely to dig up the past, it’s to better understand the present and to better design the future.

The trick is to keep an eye out for things that look funny, then ask how they got that way, then ask how they’re working out right now and what might need to be different. (After all, taxonomies from yesterday determine how data analysts can slice and dice today.)

Here’s an example, a lookup table in a human service information system. The software tracks referrals, meaning the attempts by one human service program to help people receive services from another human service program. Referrals are a basic part of what such organizations do; yet data about referrals is often of rather poor quality.

This lookup table lists the permissible statuses of a referral:

  • Client Received Service
  • Client Refused Service
  • Client On Waiting List
  • Service Not Available
  • Referral Inappropriate
  • Appointment Pending
  • Client No Show For Appointment
  • Pending—Client In Hospital
  • Pending—Client Too Ill
  • Pending—Letter/Info Sent
  • Pending—Needs Home Visit
  • Pending—Scheduling Conflict
  • Pending—Unable To Contact
  • Pending—Requires Reassessment
  • Pending—Needs Spanish-Speaking Staff
  • Lost to Follow-up

Having suggested that this may be a funny-looking taxonomy, I now risk being accused of suffering from some strange Aristotelian snobbery. After all, what’s wrong with it?

Well, in real life the process of a human service referral generally involves several stages and decision points, each of which involves various parties and possibilities. There is an initial outreach by one service provider to another, (often) the making of an appointment for the client, (often) an assessment by the second service provider, the offer (or not) of services, the acceptance (or not) by the client, and then the actual provision (or not) of the services.

Meanwhile, the original service provider follows up with the second one to find out what happened. This list looks like a grab bag of responses that the second service provider might give at any point along the way. But it lacks any explicit representation of the expected stages. So perhaps, strictly speaking, it’s really several taxonomies—belonging to several stages of a referral—that have all been joined together willy-nilly in a table. The problem is that without explicitly representing each stage, there’s no way to analyze whether (and how aptly) the values cover all possible situations. Furthermore, without that context, there are few cues about the exact meaning of some of the statuses. (What exactly is pending? Is it the service itself or some prerequisite stage?) It also points to the question: is it really enough for the organization to record only the current status without capturing any information about how the process unfolds?

So this taxonomy has certain limitations. They’re not necessarily horrible or laughable, but they do lead to useful questions about the original design decisions and about how well the resulting artifact is serving the organization’s needs. It looks as though the original decision was simply not to tease out what goes on in the referral process very precisely, but instead to stuff everything into an ill-defined status field. (If so, it’s probably an example of merely meeting stated requirements instead of creating a good interrogable model.) Then the already loose taxonomy may have been further changed as administrators added new values when users requested them. The next question should be: How well has this approach worked out for the data analysts downstream? And if the answer is not very well then how might this area be better modeled in the future?
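To make the alternative concrete, here is one possible remodeling, sketched with invented stage names and outcomes (an illustration, not a recommendation of any standard): each referral carries a history of stage-specific events rather than a single overloaded status field.

```python
# Invented stages of a referral, in their expected order. Real stage
# names would come from analyzing the organization's actual process.
STAGES = ["outreach", "appointment", "assessment", "service_offer",
          "service_provision"]

class Referral:
    def __init__(self):
        self.events = []  # ordered history of (stage, outcome) pairs

    def record(self, stage, outcome):
        """Record what happened at a given stage of the referral."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.events.append((stage, outcome))

    def current_stage(self):
        return self.events[-1][0] if self.events else None

r = Referral()
r.record("outreach", "contacted")
r.record("appointment", "scheduled")
r.record("appointment", "client_no_show")
```

With the stages explicit, an analyst can ask where referrals stall and how often each stage succeeds; the flat status list cannot answer either question.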

That’s the value of paying attention to funny-looking taxonomies.

And beyond the individual organization, they might even be helpful for understanding how well (or poorly) the work of an entire sector is currently being modeled. Right now the human service sector is fitfully advancing toward various ways of standardizing its data: common performance measures, common data exchange standards, and others. Taxonomies are a necessary part of that mix.

Do you have any favorite funny-looking human service taxonomies? If so please share using the Comments form below!

—Derek Coursen


REVEALED: The Secret Desire of Information System Stakeholders

A lot of projects to deploy human service information systems end up, sadly, as failures or only half-successes. Why?

Here’s one major reason: stakeholders have a secret desire—actually a secret expectation—that many project managers don’t understand. (In fact, it’s usually so secret that the stakeholders themselves don’t even know they have it.)

A typical scenario:

Imagine there’s a software system being designed to meet some set of specifications. You call a meeting of everyone involved in the project. You ask them a deliberately naïve question: What IS this information system anyway?

After they finish giving you the hairy eyeball, they all reply something like: This system will streamline {list of major business processes} and it has {list of handy dandy features}. In other words, everyone in the room talks about it the same way they’d talk about a Swiss Army knife: It’s a tool that contains a bunch of smaller tools.

That’s a red flag that things are probably going to turn out badly.

Fast forward to the same room a couple of months later. New stakeholders have entered the scene. They’ll need data to do their jobs. They weren’t around when the specs were written. But they’re delighted to hear that the system will collect data on a whole lot of areas they’re interested in: client demographics, history and risk factors, service plans, services provided, outcomes. So they sit down and make a list of the data they want. And then the project manager has to tell them: Sorry, we’ll only be able to give you maybe a bit more than half of what you’re asking for.

The new stakeholders are aghast. You mean the system won’t give us information on {a, b, c}? The developers are called in, and they explain: The {such-and-such interfaces} will capture data on that subject area, but in a different way. See how the specs for {reports x, y, z} were written? And if the new stakeholders hint that the design is inadequate and perhaps even shortsighted, then the developers too go away frustrated. What, did they expect us to be clairvoyant? They weren’t around to tell us what they needed!

What’s going on? The real problem is a gap between two ways of looking at what the database is supposed to be. The (unspoken) position of one large part of the software development profession is: the database is just the substructure that supports a bunch of tools that we build to do specific things that were stated in requirements.

But the stakeholders have a different (unstated) expectation. They imagine that the database is intended to be an interrogable model of the organization’s work in its environment.

On its face, that idea might seem ridiculous. For how could a database be a model of a whole human service organization? Wouldn’t it have to be as rich and extensive as the organization itself, like Borges’ map which was as large as the Empire it described?

But no, fortunately the stakeholders are more reasonable than that. What they expect—subliminally, of course—is simply that each subject area captured in the database ought to be a rich enough, precise enough, coherent enough model of their organization’s reality so that the data will be able to answer a lot of reasonable permutations of questions about the organization’s work. (And the stakeholders shouldn’t have to articulate every possible permutation to the designers. It’s the designers’ job to make sure that a decent model gets built.)
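To make “interrogable” a bit more concrete, here’s a minimal sketch using SQLite. The schema, table name, and rows are entirely illustrative; the point is that when a subject area is stored as atomic facts rather than as columns shaped around one pre-specified report, many permutations of questions can be answered without redesigning anything.

```python
import sqlite3

# A hypothetical subject area stored as atomic facts: one row per
# service event, rather than fields tailored to a single report.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE service_event (
    client_id TEXT, program TEXT, service TEXT, event_date TEXT)""")
rows = [
    ("c1", "housing", "intake",    "2013-01-05"),
    ("c1", "housing", "case_mgmt", "2013-02-01"),
    ("c2", "housing", "intake",    "2013-01-20"),
    ("c2", "jobs",    "training",  "2013-03-02"),
]
con.executemany("INSERT INTO service_event VALUES (?,?,?,?)", rows)

# Questions nobody articulated at design time can still be answered,
# because the model captures the facts, not just one report's shape:
q1 = con.execute("SELECT COUNT(DISTINCT client_id) FROM service_event "
                 "WHERE program = 'housing'").fetchone()[0]
print(q1)  # prints: 2  (distinct clients served by housing)

q2 = con.execute("SELECT client_id, COUNT(*) FROM service_event "
                 "GROUP BY client_id").fetchall()
print(q2)  # services per client, derived on demand
```

The Swiss Army knife version of this would instead bake {reports x, y, z} into dedicated columns, and every unanticipated question would mean new development work.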

And in fact, that is a reasonable enough expectation. The proof is in the successful projects (too few, alas!) that do anticipate a lot of their stakeholders’ needs.

There’s a typical scenario with those too:

Stakeholders walk nervously into the meeting with their list of new (or unexpectedly changed) requirements. The project manager and the developers frown, hold their breath… listen… and then exhale, relaxing as they realize that the requirements will not be difficult to meet after all… because the model they built was close enough to reality. Their model has passed yet another test.

But how do they do it? How do some designers build a good interrogable model of the business in its environment—while others only manage to build a Swiss Army knife?

It’s a matter of what they’re looking at and how they’re looking.

Much more about that in future posts.

© Copyright 2013 Derek Coursen


Taking a Systems Stance Toward the Problems of Human Service Information

There’s more and more talk about using systems thinking to solve problems. But many people find it hard to put a finger on exactly what that might mean.

One reason: systems thinking can feel very alien. It’s inherently abstract. It tells observers to set aside their usual ways of looking at a situation. It introduces an unfamiliar set of concepts. It invokes principles that apply to systems in general. But what kinds of principles? And what is a system anyway? Most people aren’t used to thinking in those terms.

Another reason: systems thinking is a very broad tradition. A few dozen well-known thinkers developed systems approaches over the last century. They share some common ground, but they also go in very different directions. On its own, the phrase systems thinking merely points to the broad tradition; it doesn’t specify which problems could be addressed with which tools.

These are real stumbling blocks to talking with people about the value of systems thinking. But just for fun, let’s ratchet up the difficulty a bit more.

What about the role of systems thinking in managing human service information? Uh oh. People already talk about human service systems in a non-technological sense. And people also refer to software as information systems.

All that existing language makes it hard to pose a question like: How can systems thinking lead toward building more successful information systems for human service systems? Beneath the hypnotic repetition of the word system, with all its vague associations, the listener’s mind soon feels like it’s turning into oatmeal.

It’s still an important question, though.

Resources for an Elevator Pitch

What to do? For starters, it would help if there were a quick and easy way to make systems thinking accessible to people who aren’t already fascinated with it. An elevator pitch.

(Then, after systems thinking has been distinguished from information systems, perhaps the conversation can turn to the relationship between them.)

One way to start, with only fifteen seconds: A system is just a bunch of parts that make up a whole. Systems thinking looks at how the parts work together, what they might be trying to do, how they can change over time, and how they relate to other stuff outside, which might be changing too. (That’s my attempt to boil down one of the classic articles in the tradition, written by Russell Ackoff.)

If the elevator is moving slowly in a tall building—or gets stuck—then there may be time to go a bit deeper. Here’s another resource: a very readable and entertaining article by Mike Metcalfe. He proposes that systems thinking is really a critical stance. Systems thinking means choosing to pay attention to transformation (inputs and outputs), connectivity among elements, purpose (what a system is intended to do), synthesis (disparate elements working together) and the boundary between what is included in the system and what is not.

Rethinking the Boundaries of Human Service Systems

That last item—concern with boundaries—is the area where systems thinking has so far made its biggest impact on the human service sector. It’s taken various forms. Social workers have long used the ecosystems perspective to holistically understand the situation of their clients. At the institutional level, there’s a growing desire to coordinate service delivery among multiple agencies: a decade ago people began developing resources for planning service integration, and today the idea has morphed into a whole business model built around horizontal integration that U.S. states are being encouraged to adopt.

And of course, integrating services depends on integrating information. That too has taken different forms. There’s now a technology toolkit, the National Information Exchange Model, that helps agencies exchange data between separate information systems. Some jurisdictions are developing common client indexes that link all their information systems so that all agencies can share a unified view of each human service client. Companies are marketing enterprise-wide human service software platforms to government. And designers have moved toward building databases that more accurately reflect the human service ecosystem.
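To give a flavor of what a common client index does under the hood, here’s a minimal sketch of comparing one record from each of two agencies’ systems. The field names and threshold are purely illustrative; real probabilistic matchers score many more fields and handle far messier data.

```python
from difflib import SequenceMatcher

# Illustrative fuzzy string comparison; production matchers use more
# specialized name-comparison methods.
def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    # Require an exact date of birth, then compare names fuzzily.
    # A real client index would weigh many fields probabilistically
    # instead of using one hard rule.
    if rec_a["dob"] != rec_b["dob"]:
        return False
    return similarity(rec_a["name"], rec_b["name"]) >= threshold

housing  = {"name": "Jon Smith",  "dob": "1975-04-02"}
veterans = {"name": "John Smith", "dob": "1975-04-02"}
print(match(housing, veterans))  # prints: True (same DOB, near-identical names)
```

Scaled up across whole agency databases, with scoring in place of a single threshold, this is the kind of linkage that lets separate systems share a unified view of each client.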

But rethinking boundaries is only the beginning of what systems thinking could do for the human service sector. Other possibilities haven’t been explored as much yet. Here’s one:

Designing Data That Can Serve Multiple Feedback Loops

The common business jargon of asking someone for feedback and closing the loop originally comes from the systems thinking tradition. It’s based on the fact that organizations collect data from the environment and feed it back into their own operations. (Living organisms and some machines—such as thermostats—do that too.)

Human service organizations use a lot of different kinds of feedback loops. They’re usually treated, though, as if they were entirely different activities.

A caseworker records data about the services she’s provided and the progress that the client has made, and then looks back at those records when she needs to revise the client’s service plan. (That’s a feedback loop—a very small one.) Before meeting with caseworkers, their supervisor runs a report to see how many of each worker’s clients received the recommended set of services that month. (Another feedback loop, slightly larger.) An executive creates a new strategic plan designed to increase the proportion of clients who graduate from the program and achieve good outcomes a year later. (A much bigger feedback loop.) Evaluators analyze the data from a whole cohort of funded programs and make judgments about whether the model is effective for its cost. (A really big feedback loop!)

(Note for systems thinking mavens only: this may remind you of the Viable System Model by which Stafford Beer described different levels of cybernetic processing in organizations. There’s a good study of a homeless shelter by Dale Fitch using that framework. But this post is headed in a different direction: toward software development methodology.)

These activities get labeled as casework and supervision and performance measurement and evaluation. And of course, they are all those different things. What they have in common, though, is that they’re all feedback loops.

[Skeptical Questioner: Well, so what? How does calling them all feedback loops help anything?]

I’m glad you asked.

Calling them all feedback loops is useful because the human service sector has a big problem with data. Organizations typically spend enormous resources collecting data and building software to support particular feedback loops—only to discover that the data, for one reason or another, don’t support other feedback loops that are equally important. (Think of software systems that support caseworkers well but performance measurement poorly. Or vice versa.)
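The point can be sketched in a few lines: one table of service events (all names hypothetical) feeding feedback loops at three different scales. When the underlying facts are recorded atomically enough, each loop is just a different aggregation of the same data.

```python
from collections import Counter

# A hypothetical shared store of atomic service events.
events = [
    {"worker": "ana", "client": "c1", "service": "counseling",  "completed": True},
    {"worker": "ana", "client": "c2", "service": "counseling",  "completed": False},
    {"worker": "ben", "client": "c3", "service": "housing_aid", "completed": True},
    {"worker": "ben", "client": "c4", "service": "housing_aid", "completed": True},
]

# Casework loop: one client's history, for revising a service plan.
def client_history(client_id: str) -> list:
    return [e for e in events if e["client"] == client_id]

# Supervisory loop: completed services per worker, for monthly review.
def per_worker() -> dict:
    counts = Counter()
    for e in events:
        if e["completed"]:
            counts[e["worker"]] += 1
    return dict(counts)

# Performance-measurement loop: overall completion rate for the program.
def completion_rate() -> float:
    return sum(e["completed"] for e in events) / len(events)

print(per_worker())       # prints: {'ana': 1, 'ben': 2}
print(completion_rate())  # prints: 0.75
```

When a system instead stores only what one loop needs (say, a caseworker’s free-text notes, or only pre-aggregated monthly counts), the other loops simply can’t be fed from it.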

In this blog’s first post I outlined what I think are the sector’s four biggest information management problems: isolated agency silos, unsuccessful information system projects, barriers to producing performance measures, and uncoordinated demands on service providers. The last two of those could be summarized as: data that can’t support multiple feedback loops. And that problem, in turn, often contributes to the failure of information system projects.

But if these are problems of feedback loops, then the systems thinking tradition—which has devoted a lot of effort to studying such matters—can offer resources that will help. (They may challenge some of the conventional wisdom of the software development profession too.) Much more on that in a future post.



The King’s Homeless Veterans and the Magic Box (A Fairy Tale)

Chapter 1

Once upon a time a King said to his court:

“A soldier who served in our Kingdom’s wars should not sleep in the street.

Let us eliminate veteran homelessness from all our realm.”

And all of the court, and all of his people—including his Right Hand Men who extol military service and his Left Hand Men who wail about the injustice of poverty—joined hands and shouted “Yes! Yes!”

So they went out into the world to carry out their ambitious plan. And behold, there was a Viceroy of Houses who knew the secrets of is-homeless. But alas, it was a different viceroy, the Viceroy of Veterans, who knew the secrets of is-a-veteran.

So the King appointed a mediator to get the two viceroys to share their secrets.

“If you merge your precious secrets together, we can figure out who is a homeless veteran!” the mediator suggested bravely.

And brazenly added:

“If we share data really fast, we can send veterans directly to all of the new homes that both Viceroys have established. And we can help the King figure out if he is reaching his goal. Otherwise how would he know?”

“By our troth!” the Viceroys shouted. “Let it be so!” And they issued a joint declaration.

Chapter 2

But not everyone was so eager to implement the plan.

There were many landed interests among the minor lords, knights and vassals.

The jealous privacy lords said: “The decrees of the old King are still in force. They are for the people’s own good. SHARING IS NOT ALLOWED!”

The timid knights of security said: “I must first inspect the ramparts around the other Viceroy’s castles and interview all of their soldiers before revealing our secrets. For traitors and scoundrels lurk everywhere. And each worker must possess a master token and provide the secret codes.”

The miserly masters of the coin, the procurement officers, said: “How much does it cost? Who shall pay? Are we to share our treasure as well as our secrets?”

The legions of faceless vassals asked: “Have all of the required scrolls and parchments been fully rendered? Have they been duly copied by the scribes, reviewed by masters of the chronicles, and disseminated by the messengers? Do they have the proper seals and counter-seals?”

The beleaguered master builders chanted in chorus, “It must wait until after the next three release cycles. And these cycles are long, indeed. Yea, each cycle is longer than the seasons, with more phases than the moon.”

And they all put many traps in the mediator’s way.

Meanwhile the homeless veterans groaned, oblivious to the machinations within the court.

Chapter 3

The mediator became discouraged. He had entered the King’s service to help such needy souls. But the King’s other servants had countervailing instructions.

Yet he could not turn back—not if he ever wanted future wages in the Kingdom. He too, he realized, was a faceless court servant bearing a contract that he must conclude.

So he labored on. Guided by his heart, but prodded by the copper coins, he navigated the pitfalls.

He and his hired minions began digging the narrowest of secure tunnels between the Viceroys’ two lands. The tunnel was too small for a person, but could carry a whisper, which could bear a secret.

And soon, one day—not too far away they say—

(It cannot begin before the leaves begin to fall, but must be complete before the first flowers bloom, the King demands!)

—when the secret whispers from both fiefdoms meet, the mediator will guide them to the magic box, the legendary probabilistic identity matching algorithm.

The box will be hidden behind multiple Walls of Fire. The mediator shall construct a hidden Portal that allows the whispers to pass through the flames. The whispers will be encoded with a secret cipher and placed inside an envelope. It shall be formed to resemble the bars of soap that only the nobles use. On the other side of the Portal, the soap shall be washed away and the cleansed secrets will be placed in the magic box.

And in that magic box the two distant whispers will join to become a voice. Each whisper will learn that it was just a faint echo of a larger whole. And each voice will say: “I am a person! I am one of the homeless veterans of whom you spoke! Help me!” And the voices will multiply, and shout in unison: “Help us! Help us!”

And if the King has not been pushed off a fiscal cliff, or sequestered in a dungeon by the rival houses, he will help them.

And they will find homes. And education. And jobs. And balm for their wounds and comfort for their souls. And all of the beneficent goodness they deserve for having volunteered for the King’s wars. And the wars of the previous Kings as well.

And in that day, seemingly so far away, the itinerant mediator will move on to other missions.

But on his way, he will reach into his satchel for one of his hard-earned coins, and toss it to the first homeless veteran he meets. For he knows that all of the power of the magic box cannot instantly conjure the soup to fill the poor man’s cup before that night’s hard rest.

The end.

Brian Sokol

Note:  This fairy tale is inspired by a real project. Details may become another post in the future. The true story is still at the beginning of Chapter 3.


