
The Varieties of Holistic Human Service Information


The word holistic comes up a lot when people talk about the human services. It’s an ideal: to be (or become) holistic. But what do people mean? And what does holism have to do with managing information?

People are striving toward something. It needs to be outlined more sharply. Otherwise holism will be but a buzzword that befogs the mind. If we think we want holism—whatever that turns out to be—we should start by looking for clarity.

The Discipline of Searching for Wholes

Let’s step back and look at where this concept came from. Holism first began kicking around the public mind under that particular name almost a hundred years ago.¹ It’s one facet of what’s come to be called systems thinking. (In fact, system and whole are synonyms.)

Holism is partly about how you see. Breaking a thing down into its constituent parts is one way to understand something—but not the only way. In fact, that can be a terribly limiting approach. Instead of narrowing your focus, why not expand it instead? By zooming out and bringing more into the frame, an observer can perceive phenomena that only appear in the whole, not in the parts. (This approach has been called expansionism in contrast to reductionism.)

So how do you recognize a whole when you see one? To paraphrase one of the great systems theorists: A whole is an objective thing, but it’s also a subjective thing because the observer chooses which parts to include—yet it’s still objective because the observer has to prove that the parts really are connected!² (This can be a helpful angle on an old debate: whether what people perceive is determined by their own preconceptions or is actually out there.)

In other words, a whole (or system) is both construed and discovered. So when people say that they want to be holistic, they usually mean that they aspire to expand their range of vision. They want to include more parts and, in that way, perceive a whole that is larger than what they could see before.

The everyday act of asking a new question is often an attempt to see whether a system exists. For example, suppose someone asks: Should we offer our program to people who have experience x? This frequently comes up in successful programs that are looking to expand to new client populations. It can also be asked in less successful programs that are reconsidering their theories of change. Either way, it’s a question about a possible system. It asks whether or not there is a relationship between the clients’ experience x—whatever that is—and some aspect of the program. The questioner is trying to bring both of these things, which perhaps have not been considered together until now, into the same frame, to see whether they are in fact parts within a whole.

That’s why the path of holism inevitably means seeking more information. If you’re going to discover a new whole that includes new parts, you need data to figure out exactly how the parts are related to each other.

But of course, most answers are not final, and no whole that an observer has construed and discovered can ever be considered complete. And for that reason, holism is necessarily an ongoing aspiration. It implies a disciplined commitment to continue questioning. It is an endless pursuit of a broader and richer understanding. It is a radically open approach.

Competing Wholes and the Data Problem

And that’s where things start getting complicated for the human services. Why? Because there are so many different stakeholders, and a lot of them are asking questions as they try to expand their range of vision. That’s a good thing. But it also means that there’s a slew of different—and at times competing—attempts at being holistic.

After all, the sector is an enormous collaboration. There are governments at different levels, philanthropic foundations and donors, the organizations providing services, managers and front-line staff and fiscal officers and evaluators, the beneficiaries of the services, policymakers and advocates… the list goes on. Each stakeholder is an observer that construes the human service system in his or her own particular way. And that’s perfectly normal. If an architect, a mechanical engineer and a social psychologist all look at the same house, they will construe three very different systems.³

But for practical matters of managing information, it poses a problem: how are all these different stakeholders—with their particular views of the system and their particular aspirations to see it more holistically—going to get the data they need?

Let’s imagine a big table. Let’s say it belongs to a human service agency, or perhaps to a large group of human service and related agencies. (It may not even be entirely clear who owns the table or who has a seat there.) In the middle is the information management stuff. It’s a vast swathe of territory where decisions have to be made: what data to collect; what questions to answer; what staff or consultant positions with what qualifications will be paid for by whom to do what tasks; and what information systems should be designed in what way to do what for whom.

Uh oh. Whose idea of the human service system is all this information management stuff going to serve?⁴ Whose idea of relevant data will it capture? Whose work processes will it support? Whose questions will it answer?

Three Patterns of Holistic Information

To get an idea of the range of what all these stakeholders want, it helps to think about different patterns of information. It turns out that most information is organized according to the two most central concepts in the human services: client and program.

Traditional human service information was embodied in the old cardboard client chart. It represented one client’s experience in one program. That was pretty simple. Information gets more interesting—and more difficult to acquire—when the number of clients or programs increases.

A more holistic pattern of information is the individual client at the intersection of multiple programs. For front-line workers to address a client’s unique situation, they need to understand the client’s environment. (That’s a basic tenet of the ecosystems perspective that has informed social work for decades.) Programs elsewhere that the client participates in are a critically important part of that environment. It’s a safe bet that in any human service setting, at some point stakeholders will wish they could bring together data from multiple programs and use it to coordinate work with each individual client.

Another pattern is the aggregate of clients within a single program. This is the information most used by managers and executives and funders and researchers. Sometimes it’s organized under the rubric of performance measurement and sometimes evaluation. It’s rarely thought of in connection with the idea of holism—but it should be. Many aspects of program effectiveness, efficiency and quality can only be clearly understood in the context of the whole aggregate pool of clients. In every human service program there will be a constant stream of stakeholder requests for aggregate information.

And the third pattern is the aggregate of clients at the intersection of multiple programs. This is the information that shows fragmented public policy in action, as efforts to address different social problems interact with each other. Understanding of the relationship between mental health and homelessness, or how the foster care system feeds into the juvenile justice system, has advanced because people have meshed data sets from multiple programs. In the past those analyses have usually been slow, laborious and therefore expensive.

Most of the more difficult information management stuff in the middle of the table will fall in one or another of these buckets.
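In miniature, the three patterns can be pictured as different queries over the same pool of enrollment records. The sketch below is purely illustrative—the record shape and field names are invented, not any real agency's data model:

```python
from collections import defaultdict

# Hypothetical enrollment records pooled from two separate programs.
enrollments = [
    {"client_id": 1, "program": "housing", "outcome": "stable"},
    {"client_id": 1, "program": "job_training", "outcome": "employed"},
    {"client_id": 2, "program": "housing", "outcome": "pending"},
]

def client_view(records, client_id):
    """Pattern 1: one client at the intersection of multiple programs."""
    return [r for r in records if r["client_id"] == client_id]

def program_aggregate(records, program):
    """Pattern 2: the aggregate of clients within a single program."""
    matched = [r for r in records if r["program"] == program]
    return {"program": program,
            "clients": len({r["client_id"] for r in matched})}

def cross_program_count(records):
    """Pattern 3: the aggregate of clients across multiple programs --
    here, simply counting clients enrolled in more than one program."""
    programs_by_client = defaultdict(set)
    for r in records:
        programs_by_client[r["client_id"]].add(r["program"])
    return sum(1 for progs in programs_by_client.values() if len(progs) > 1)
```

The point of the sketch is that all three patterns presuppose the same underlying pool of records; the hard part in practice is assembling that pool across organizational boundaries, not writing the queries.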

Acknowledging Each Other’s Existence

But it may seem strange to suggest the image of a large table at all. Although the human service sector is an enormous collaboration, it’s also a constellation of very separate entities, each pursuing its own functional agenda. For decades the usual practice was for each stakeholder group autonomously to build the information management tools that it needed. Information projects often proceeded without even acknowledging the existence of other stakeholders that collect or use data on the same clients—or even the same services. So far there has been barely any common table to speak of!

Fortunately, it’s now dawning on a lot of people that this fragmented approach doesn’t work very well. No stakeholder group can perform its role without depending on data from others—at least not efficiently, and often not effectively either. The sector is rife with tragedies of the data commons in various forms: agency silos that do not talk to each other, information systems that cannot produce good data for analysis, and funders that do not coordinate among each other regarding the data that they require of their grantees.

So the image of a shared table points to something that is slowly coming into being. As stakeholders recognize the need for common data, decision-making about collecting and managing the data will necessarily become a more collective process. There will be a new conversation on a new basis: The human services form one ecosystem. We need to create a coherent ecosystem of data. Learning how to organize that kind of collaboration—which is itself another form of holism—will be the sector’s main challenge in the coming decades.

One big part of the challenge is cultural and inter-organizational. Working in a child welfare program involves a different knowledge base than working with homeless populations. The worldview of front-line workers is different from that of evaluators, and both are different from executives or fiscal officers. Government agencies have different concerns depending on whether they are at the federal, state or local level—and nonprofits have yet another perspective. Crossing boundaries takes effort. It will demand that the different stakeholder groups each learn to enter, at least somewhat, into the worldview of the others. New forums to build and institutionalize collaboration will be needed.

But while increasing collaboration is necessary, it’s not sufficient. Even when everyone sits down at the same table, the issue remains: Whose idea of the human service system is all this information management stuff going to serve?  That’s not just a problem of culture and inter-organizational politics and power. It’s also a problem of the methodologies that people use to manage information. To move forward, the sector is going to have to throw out some of today’s conventional wisdom.

Open-Ended Inquiry as an Ethos for Data Design

The two main activities around human service information are building software and analyzing (after acquiring) data. They’re directly related in one obvious way: software collects data and a great deal of data collection (though not all) relies on software. Beyond that, though, on the surface they seem to be two entirely different things. One is the province of technologists using programming languages and software development methodologies to create digital tools that run on electronic gizmos. The other involves people with various backgrounds—typically social science, management, social work, public health or finance—asking and answering questions about what’s going on in the human service system because they want to impact it in some way.

But building software and analyzing data have something important in common: they’re both complex social activities. They necessarily embody some guiding beliefs and ideals—an ethos—about how to do the work. So a basic question needs to be asked: To what degree does the organizing ethos of these activities take into account the need to facilitate open-ended inquiry?

In software development, much of the ethos is based on concepts from project management: scope, time, cost, stakeholder buy-in and communication. That’s because it makes business sense to organize software development into projects, and those factors have to be managed to achieve success. But the idea of project is a very closed-ended way of framing the situation. It establishes some requirements and sets a boundary around them, the requirements become the measure of success, and that which falls within the boundary is the main driver of the system’s design. Traditional waterfall methodologies try to firmly establish the boundary at an early stage. Incremental and iterative (also known as agile) methodologies accept that the boundary will change during the project, and they figure out how to go with the flow. But in both camps, the closed-ended paradigm remains: project requirements drive design.

Projects to collect and analyze data have a similar structure. In performance measurement, the sequence often starts with a logic model and proceeds to a list of measures. Social research projects start with hypotheses to be tested. Evaluators draw up lists of questions they plan to explore. These are the boundaries around the goals, which then lead to the specifications for data to be collected. Again, project requirements drive data design.

So in both activities—software development and data analysis—the reigning ethos assumes a closed-ended approach. And that’s the problem. An information project can be successful in meeting its goals on time and within cost, yet fail in larger—more holistic—senses. The most common symptom of this is when an organization builds up a central store of client and service and outcome data and then sadly finds that the structure of the data was so narrowly and woodenly determined by stated goals that it cannot adequately answer many reasonable questions in the same subject area. (Well, but that wasn’t in the requirements!)

Is there any alternative? As an experiment, you might try going to a software developer or someone who organizes data collection and saying: I would like you to design me a way of collecting data that will facilitate as much open-ended inquiry as possible. Chances are that they’ll tell you that you’re asking for something meaningless or impossible. After all, if you don’t clearly state your goals in advance, how can anyone design data for you?

That’s fair enough in one sense: there must certainly be requirements in order for builders to understand what exactly they need to build. But there is a way of arriving at requirements that actually does facilitate open-ended inquiry. It’s quite simple: first think of the data as a model of what’s in the human service system and its environment, not as an artifact that is designed to serve a particular function or answer a particular question. This is an uncommon approach, but not a new one. In fact, it’s been explored by software development theorists for a while under the name of domain modeling.⁵ And there are published case studies of successful information projects in the human services that have been carried out in this way.⁶

Domain modeling is powerful because it allows a group of stakeholders from very different backgrounds—evaluators and front-line service providers, interacting government agencies, funders and grantees—to define, together, the universe of data that can meet the largest set of common needs. Paradoxically, it has that power because it avoids talking about anyone’s particular needs directly. Instead, domain modeling focuses on defining and understanding what is most permanent and least tied to ephemeral requirements. And when everyone shares a common understanding of how to represent the whole human service domain, each stakeholder will have a much better chance of getting the data they need to pursue their own requirements.
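To make the idea concrete, here is a minimal, hypothetical domain model. The entities mirror what exists in the human service domain—clients, programs, enrollments—rather than the layout of any one report or screen, so a new question becomes a new query over the same structures instead of a new data structure:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Client:
    client_id: int
    name: str

@dataclass
class Program:
    program_id: int
    name: str

@dataclass
class Enrollment:
    """A durable fact of the domain: this client was in this program."""
    client: Client
    program: Program
    start: date
    end: Optional[date] = None  # None while the enrollment is active

def active_enrollments(enrollments, as_of):
    """One of many possible queries; the model itself presupposes none."""
    return [e for e in enrollments
            if e.start <= as_of and (e.end is None or e.end >= as_of)]
```

The design choice worth noticing: `Enrollment` records what happened, not what any particular funder asked to be measured. Measures for a specific report can always be derived from it later.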

Conclusion: One of the most important factors that will determine the human service sector’s future success or failure in using information well is whether or not the various stakeholder groups can come together to develop a comprehensive domain model.

—Derek Coursen



¹ J. C. Smuts, Holism and Evolution (1926).

² This paraphrase combines aspects of Point 3 and Point 6 in R. Ackoff, ‘Towards a System of Systems Concepts’ (1971).

³ Ibid.

⁴ The notion of the [information] system that serves vs. the [organizational] system that is served is explored in P. Checkland and S. Holwell, Information, Systems and Information Systems (1998).

⁵ See, e.g., R. Offen, ‘Domain Understanding is the Key to Successful System Development’ (2002) and P. Oldfield, ‘Domain Modeling’ (2002).

⁶ D. Coursen, ‘Why Clarity and Holism Matter for Managing Human Service Information’ (2012).

If you found this post useful, please pass it on! (And subscribe to the blog, if you haven’t already.)

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Open Civic Data and the Human Services — Looking Beyond Today’s Flea Market

Governments are throwing open their gates and releasing large volumes of data for public use. Open civic data is the latest revolution, as Paul Wormeli recently called it on the Stewards of Change blog. But the revolution is nowhere near complete. Summing up the movement’s current moment, he concludes:

Any time there is such a tsunami of innovation… there are shortcomings and issues with achieving a more rational methodology… Observers have noted that open data offerings have taken hold so quickly that there are no useful ways to compare the data sets from city to city, or to gain any national insight from the publications of individual sites. In effect, there are no standards on which cities might find a common ground to publish data sets that could be aggregated. There are certainly advantages to building such standards for comparison purposes, but the stage of our current development is exciting just as it is unfolding…      

That’s an apt statement of the open data movement’s major limitation.

Yes, this is a moment of extraordinary innovation. And it’s a lot of fun. Browsing the federal U.S. DATA.GOV portal is a bit like wandering through a really good flea market, discovering charming—and in many cases useful—items that you would never have thought of.

A limitation of flea markets, though: much of the stuff that you need for daily living isn’t sold there. Another limitation: the stuff that you do find is usually idiosyncratic and difficult to match. That’s why more people furnish their homes from Ikea than from flea markets.

My conclusion: Open civic data will only live up to its potential if the range of data offerings expands and the data becomes more standardized.

So how exactly could that happen? Or, to make the question more manageable: How could that happen for open data in the human services?

Hmmm. Actually, that points to a different question that would be a better starting point: What kinds of open data are relevant to the human services anyway?

Well, what do human service organizations do? People often describe it as a linear sequence of steps. Sometimes it’s even sketched out as a logic model. First there are problems and needs. Next, services are offered. Then services are provided. Finally, there are outcomes.

(Side note for systems thinking mavens: Yes, I know that’s a wildly oversimplified view. But for the present purpose, it will do the trick.)

So what kinds of open civic data would correspond to these steps?

Needs / problems. This is the richest vein of existing open data so far. There have long been public statistics on poverty from the census and on prevalence of crime from the police. Now education and health statistics are more easily accessible too. There are lots of data on quality of life issues reported to 311 centers (e.g. sightings of rats, housing complaints) that are relevant too. The list goes on and on. Some of these data sets do have standard formats. For some others, conversations are happening about possible standards. In short, work is in progress.

Services offered. This one is messy. Plenty of organizations create databases of records to direct people toward the service programs in their communities. Unfortunately, that means multiple overlapping stores of data. And most of them haven’t been opened up yet. Standards exist but so far haven’t been widely adopted. This has been called the Community Resource Directory Problem, and there are energetic discussions among information and referral providers about how to resolve it. (Stay tuned for updates.)

Services provided and outcomes achieved. This is the human service sector’s critical gap in open data. How much work is being done with whom by what organizations to address what problems in what ways… and what are the results? If you wander around DATA.GOV looking for this kind of information, you won’t find much.

Client-level records are confidential, of course, so data on outputs and outcomes could only be provided in the aggregate. So let’s imagine aggregated data in standardized formats, sliced and diced in a lot of useful ways—by geographic areas and demographic characteristics and time periods, for example. Let’s imagine that such data were so commonly available that program planners and community advocates could easily overlay a city map with reliable measures on clients served and numbers of services provided and results achieved. What would the impact be? Currently, that kind of exploration is expensive and laborious; researchers first must negotiate access, then typically spend enormous effort cleaning the data. But what if it were quick and cheap to do? There’s already an animated public conversation about how to improve the effectiveness and efficiency of human service programs. Wide availability of well-structured open data about outputs and outcomes would make the conversation broader, faster and better informed.
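One standard practice behind such aggregates is small-cell suppression: counts below a threshold are withheld so that no individual can be singled out. The sketch below is a simplified illustration—the record shape is invented, and the threshold of five is a common convention rather than a regulatory standard:

```python
from collections import Counter

def open_aggregate(records, min_cell=5):
    """Turn confidential client-level records into publishable counts
    by (area, demographic group), suppressing any cell whose count
    falls below min_cell. Real disclosure-avoidance regimes add more
    safeguards (complementary suppression, rounding); this shows only
    the basic idea."""
    counts = Counter((r["area"], r["group"]) for r in records)
    return {cell: n for cell, n in counts.items() if n >= min_cell}
```

For example, a cell with six clients would be published while a cell with two would silently disappear—one reason published aggregates never quite sum to the underlying totals.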

So what’s standing in the way? Of course there are a lot of financial, technical, political and inter-organizational issues to resolve. But there’s an even more basic barrier: isolated specialist conversations.  There are at least three major efforts to improve human service data, but they’re not talking to each other enough.

One effort has to do with performance measurement. A whole profession has grown up to help public and nonprofit human service providers measure what they’re doing. Recently there’s been more focus on the need for comparability and the fact that when funders don’t coordinate their requirements with each other, service providers suffer. There’s a growing aspiration to specify common measures. But high-level conversations about performance measurement can often be a bit naive about the practicalities of collecting data and the role of information technology. Influential books about performance measurement treat technology mostly as a means of storing and delivering measures that someone else—mysteriously and providentially—has already compiled. There’s not much discussion of the barriers to producing performance measures or how to overcome them. And the question of how widely measures should be published is rarely raised.

Another effort promotes interoperability. The National Information Exchange Model (NIEM) now makes it much easier to set up channels for information systems to talk with each other. But the largest part of that effort has focused on streamlining business processes; take a tour of the interoperability packages that agencies have built and you’ll find that only a handful of them are designed to transport performance measures.

And another—the Open Civic Data movement—is mostly concerned with transparency and the advantages of public access to government data.

So far, these three efforts haven’t coalesced. Have there been any projects that specify common measures for a large set of comparable human service programs and deploy interoperability standards to collect the data and warehouse the measures together in a place that’s openly available? Not yet, as far as I know. Bringing these separate strands together would be a major step toward wiring the human service sector to become a responsive whole.

So what’s the solution? For starters: An interdisciplinary conversation about creating an open ecosystem of high quality standardized measures.

—Derek Coursen

P.S. Next week in California there will be an Open Health and Human Services Datafest. It’s the first open data conference I’ve heard of that has the words human services in its title. A milestone!

Did you find this post useful? If so then please subscribe to the blog via email or RSS feed using the widgets on the right. There will be more discussion of these and related topics in the coming months.

Futurism: Rewiring the Human Service Sector to Become a Responsive Whole


Over the last decade or so, the human service sector in the U.S. has started down the road toward an extraordinary transformation.

Here the doubtful or desperate reader may retort: Yes, it’s started down the road toward losing its funding. Fair enough; austerity is everywhere and it’s not clear whether, when or how that might change.

But looking toward a farther horizon, a more hopeful transformation becomes visible too. There’s nothing dramatic about it. Progress is happening in quiet fits and starts. It’s coming together without a guiding hand. And the drivers are separate movements that only occasionally notice each other’s existence.

What’s happening might be imagined as a wiring project: autonomous organizations are becoming wired together by shared information. But that image, borrowed from electronics, isn’t enough. When information is shared quickly enough and intensively enough, coordinated behavior can emerge in an almost biological way. Eventually, the human service sector could begin to behave almost as though it were a single organism. It could become routinely able to respond quickly, and with collective intelligence, to significant events—whether at the level of an individual client’s crisis or an emerging social problem.

That may seem fanciful. It certainly crosses into the realm of futuristic speculation. But at least three major trends can be observed today that are leading the sector in that direction.

The first is the rise of data interoperability. A few years ago the National Information Exchange Model (NIEM) began to develop standards so that human service agencies can easily build automated exchanges of data between their information systems. As a result, it’s now cheaper and more feasible to wire together, say, a family court and a county child welfare agency. A child’s service plan is instantly transmitted to the court, and the court’s approval (or otherwise) is transmitted back to the child welfare agency. The NIEM tools can be deployed wherever organizations agree that they have a business process in common that they want to streamline electronically.
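The family court exchange can be sketched as a pair of structured messages. Real NIEM exchanges use standardized, namespaced XML schemas negotiated between the partner agencies; the element names below are invented purely for illustration:

```python
import xml.etree.ElementTree as ET

def build_service_plan_message(case_id, child_id, goal):
    """Child welfare agency -> court: a toy service-plan submission.
    (Illustrative element names, not actual NIEM components.)"""
    root = ET.Element("ServicePlanSubmission")
    ET.SubElement(root, "CaseID").text = case_id
    ET.SubElement(root, "ChildID").text = child_id
    ET.SubElement(root, "PlanGoal").text = goal
    return ET.tostring(root, encoding="unicode")

def parse_court_response(xml_message):
    """Court -> child welfare agency: was the plan approved?"""
    root = ET.fromstring(xml_message)
    return root.findtext("Approved") == "true"
```

The value of a shared schema is that both sides can generate and validate such messages automatically, instead of re-keying data from faxes and phone calls.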

How will data interoperability change the way agencies do their work? Most obviously, it makes communication faster. But the impact isn’t merely to speed up the work that agencies have always done. More agile communication tends to improve the quality of decision-making. And as multiple agencies become accustomed to communicating more quickly and in more depth, agency leaders are more likely to discover ways of reengineering the way they work together. That will lead, also, to subtle shifts in how people think about the borders of their agencies’ work. Formerly isolated silos will be connected—not only through shared data but also in the minds of the people who operate them.

The second trend is the push toward common performance measures. Since the 1990s, public and nonprofit programs have been under pressure to report measures that will show how well they’re fulfilling their missions. Now there’s a growing notion that there ought to be, for each specific type of program, a common set of measures. Already, funding agencies that pay multiple programs to do the same kind of work usually require all of them to report similar data. The next step will be to coordinate decisions about measures across multiple funders. There are plenty of efforts to pinpoint what the measures should be. (The Urban Institute’s Outcome Indicators Project, for example, suggests measures to be used by transitional housing, employment training and prisoner re-entry programs, among others.) Eventually, that kind of coordination will lead to online platforms that allow stakeholders to compare the work and results of multiple organizations and programs. That’s already a reality in the world of arts and cultural organizations, where common measures are collected and distributed by the Pew Cultural Data Project. For the human services, it’s happening more slowly. Right now it’s easy to look up estimates of homelessness, or outcomes in child welfare cases at the level of states or cities, and other federal programs offer similar reports. As the thirst for information increases, it’s only a matter of time before platforms emerge that will offer more finely grained indicators.

As human service programs become empirically comparable, that fact will invisibly wire them together. The connection will happen through the awareness of all the people who will be able to look at a panorama of programs in comparison with each other. When executives can easily compare other programs’ numbers to their own, that will influence their decision-making. Ready access to common measures will guide funders and new program planners. Statistics on outputs, outcomes, costs and quality will no longer languish in obscure internal reports—they’ll be out in the world affecting the actions of a broad range of human service stakeholders.

And the third trend is open civic data. It’s the idea that data collected and maintained by the government is of potential use to civil society; if it doesn’t compromise individuals’ privacy or national security, then it ought to be open to the public. As more and more public data becomes available, common human service performance measures will get mashed up with economic, ecological, public health and criminal justice statistics. Those mash-ups will give stakeholders a more textured understanding of the interplay of factors that impact human service efforts. They’ll help program managers to fine-tune their interventions. And they’ll allow planners to identify emerging needs. Of course, researchers have always sought out this kind of information. Right now, though, that’s laborious and costly. As the ecosystem of available data grows, it will become easier, cheaper and therefore more commonplace.

These three trends come from very different origins. The idea of common performance measures was born out of the frustration of funders who felt that they were often flying blind. The National Information Exchange Model arose out of justice agencies’ need to exchange information. And the open civic data movement is related, by ethos, to the ideas of open source software and open copyright licenses. Each one is mostly talked about within its own circle of constituents, and the areas of overlap are only beginning to be explored. All of them, though, are pushing the human service sector toward coalescing into a more responsive whole.

And one more thing they have in common: none of these trends is really about technology innovation per se. This is a different kind of innovation. It’s the creation of new kinds of conversation in the human service sector: a conversation about common uses of data; a conversation about collective choices for structuring data; and—the hardest part—a conversation about defining the common meaning behind the data.

—Derek Coursen

Did you find this post useful? If so then please subscribe to the blog via email or RSS feed using the widgets on the right. There will be more discussion of these and related topics in the coming months.

Free as in Puppies: Taking on the Community Resource Directory Problem

Last week Code for America (CfA) released Beyond Transparency: Open Data and the Future of Civic Innovation, an anthology of essays about open civic data. The book aims to examine what is needed to build an ecosystem in which open data can become the raw materials to drive more effective decision-making and efficient service delivery, spur economic activity, and empower citizens to take an active role in improving their own communities.

An ecosystem of open data? How might this brave new thing intersect with human service organizations? That’s mostly beyond the scope of CfA’s current book. One chapter, though—“Toward a Community Data Commons” by Greg Bloom—takes a very serious stab at resolving a perennial headache of information and referral efforts.

Bloom is trying to solve what he calls the community resource directory problem. Various players—government agencies, libraries, human service organizations—develop lists of community resources, i.e. programs to which people can be referred. Originally the lists were paper binders. Later they became databases. Almost always, each directory belongs to the agency that created and maintains it. That’s a problem: what the community needs isn’t a bunch of directories, it’s a single unified directory that would allow one-stop shopping. And the overlap among directories is inefficient too: each one has to separately update its information about the same programs.

One solution might be to somehow make directory data free and open to all. Bloom describes one experiment in that direction. It’s instructive because of the way it failed. Open211, which a CfA team built in 2011 in the Bay Area, scraped together directory data from a lot of sources and made it available via the web or mobile phone—and it also allowed the users at large to submit data. As Bloom tells it: This last part was key: Open 211 not only enabled users to create and improve their community’s resource directory data themselves, it was counting on them to do so. But the crowd didn’t come. This, it turns out, is precisely what administrators of 211 programs had predicted. In fact, the most effective 211s hire a team of researchers who spend their time calling agencies to solicit and verify their information.

Exactly right. Reading this, I vividly remembered a project I did years ago: updating the Queens Library’s entire directory of immigrant-serving agencies. It took well over a hundred hours of tedious phone work. (There was time on the pavement too… for example, an afternoon wandering around Astoria looking for a Franciscan priest—I did not have his address or last name—who was rumored to be helpful to Portuguese-speaking immigrants. I never found him.) And then each piece of data had to be fashioned and arranged to fit into a consistent format.

That’s what goes into maintaining a high quality community resource directory. It will not just happen. It cannot be crowd-sourced. And this harsh fact—the labor cost of carefully curated information products—can be hard to reconcile with the aspirations of the open civic data movement.

The lesson: it’s certainly possible to collect community resource information and then set it free… but it will be, as Bloom says, free as in puppies. (This wry expression comes from the open source software movement, where people make the distinction between free as in beer—meaning something that can be used gratis—and free as in speech—meaning the liberty to creatively modify. Someone noticed that freely modifiable software might also require significant extra labor to maintain—like puppies offered for free that need to be fed, trained, and cleaned up after.)

But then Bloom takes the problem toward an interesting possible solution. He invokes the idea of a commons—not the libertarian commons of the proverbial tragedy but rather an associational commons that would include the shared resource (in this case, the data) and a formal set of social relationships around it. He suggests that a community data co-op might be an effective organizational framework for producing the common data pool and facilitating its use.

It’s an intriguing idea. It acknowledges the necessary complexity and cost of maintaining a directory. It might be able to leverage communitarian impulses among nonprofits. And if successful, it could be a far more efficient way of working than the current situation of multiple independent and overlapping directories. Of course, it would face all the usual practical difficulties that cooperatives do; but there’s no reason those should be insurmountable.

This framework could solve a lot of current problems in information and referral. But how might it eventually fit into some larger imagined ecosystem of open data?

Bloom offers a vision for how such a unified directory could be widely used by social workers, librarians, clients, community planners, and emergency managers. That seems entirely feasible, because all those players would need the same kind of information: which programs offer what services when and where.

But Bloom also takes the vision a couple of steps further, imagining a journalist seeing this same data while researching city contracts; and how the directory might be combined with other shared knowledge bases—about the money flowing into and out of these services, about the personal and social outcomes produced by the services, etc.—to be understood and applied by different people in different ways throughout this ecosystem as a collective wisdom about the “State of Our Community.”

This, I think, is where Bloom’s vision will face strong headwinds. I don’t mean political or inter-organizational resistance (though those might crop up too). The problem is that across the broad domain of human service data, very few sub-domains have achieved much clarity or uniformity. Community resource records happen to be one of the more advanced areas. For a couple of decades there’s been a library system standard (MARC21) for storing these records, and now AIRS offers an XSD data standard. So the way is fairly clear toward creating large standardized databases of community resources. Those could then be meshed with, say, the databases of 990 forms that U.S. nonprofits submit to the Internal Revenue Service. The problem is that outside of these few (relatively) clean small sub-domains, human service data gets very murky and chaotic indeed.
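To make that possibility concrete, here’s a minimal sketch of what meshing standardized directory records with 990 data might look like once both datasets share a clean key such as an EIN. (The field names are purely illustrative—they’re not drawn from the actual MARC21 or AIRS schemas—and the hard part in practice is that directories rarely capture EINs reliably.)

```python
# Hypothetical sketch: joining community resource records with IRS 990
# filing data on a shared EIN. All field names are illustrative.

# Simplified directory records, as might be exported from a directory database
directory = [
    {"agency": "Northside Family Center", "ein": "11-1111111",
     "services": ["case management", "food pantry"]},
    {"agency": "Harbor Legal Aid", "ein": "22-2222222",
     "services": ["immigration legal services"]},
]

# Simplified financials, as might be extracted from public 990 filings
filings_990 = {
    "11-1111111": {"total_revenue": 1_250_000, "total_expenses": 1_180_000},
    "22-2222222": {"total_revenue": 640_000, "total_expenses": 655_000},
}

def mesh(directory, filings):
    """Combine each directory record with its matching 990 data, if any."""
    for rec in directory:
        fin = filings.get(rec["ein"], {})
        yield {**rec, **fin}

for row in mesh(directory, filings_990):
    print(row["agency"], row.get("total_revenue"))
```

The join itself is trivial once a common key exists; the real work lies upstream, in getting both sub-domains clean enough to share one.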

A project to mesh community resource records with data on contracts, funding sources and public expenditures, for example, would immediately run into the problem that the linkage would need to be made through the concept of a program. Yet that core term is used in very different ways. Sometimes it means a discrete set of resources, practices and goals; sometimes it implies a separate organizational structure; and sometimes it seems to be a mere synonym for a funding stream. The use of that term would need to be tightened, or it would need to be replaced by some clearer concept. But even then, people trying to mesh community resource data with fiscal administrative data would find that the latter are equally unruly. A city’s contract with a nonprofit, for example, may fund one program or many; and the original source of the contract’s funding may be one or many government funding streams from the federal, state or city level. There is no uniform pattern for how these arrangements must be made, nor are there well-developed data standards.

Meshing community resource records with programmatic statistics such as outputs and outcomes would be equally fraught. While there’s a movement toward standardizing some performance measures, concrete results on the ground have been slow in coming. Even if complications from the politics around performance measurement were miraculously eliminated, there would still be the issue of murky and chaotic data that don’t easily support performance measures.

So what’s the solution?

In a nutshell: for the human service sector’s work to become a significant part of the ecosystem of open civic data downstream, the sector will have to embark on a new kind of conversation about the way data is organized upstream. This will necessarily be a longer-term conversation. It will have to involve a more diverse set of stakeholders than are usually assembled at the same table. It will have to ask (and answer) unfamiliar questions, such as: how can information system designers create good interrogable models of public service work rather than merely meeting stated user requirements? It will have to take a hard look at sub-domains that have often not been modeled very well. (Funny-looking taxonomies are an important red flag for identifying those.)

Eventually, that kind of conversation can lead the sector toward far more coherent and holistic ways of organizing its data. The downstream benefits: more successful information system projects, more efficient production of performance measures, and more meaningful data for open civic uses.

—Derek Coursen


The Diagnostic Value of Funny-Looking Taxonomies

Taxonomies are everywhere in information management, yet they are hardly ever formally acknowledged and managed.

So begins a very entertaining article published last year by Malcolm Chisholm. It’s entitled “The Celestial Emporium of Benevolent Knowledge” after a famous short story in which Jorge Luis Borges presented a (fictional) ancient Chinese taxonomy of animals. The deliberately absurd chaos of Borges’ taxonomy serves as a jumping off point for Chisholm to outline the (non-fictional) ancient Western art of taxonomy: the logical and constant division of a genus into the mutually exclusive and jointly exhaustive species that compose it. The very existence of this practice is now almost newsworthy because, as Chisholm notes, traditional logic is hardly taught in the West anymore. (Once upon a time that statement itself would have seemed as unlikely as one of Borges’ fantasies; but no, it is true.)

How do taxonomies show up in software? The most obvious way is in lookup tables. Those tables determine the categories from which the user is allowed to choose.
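For readers who haven’t peered under the hood: here’s a toy sketch of how a lookup table enforces a taxonomy in a relational database. (The table and category names are invented for illustration.) The foreign key constraint is what makes the lookup table’s list the only categories anyone can record.

```python
# Toy illustration of a taxonomy living in a lookup table, using SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.execute("""CREATE TABLE ServiceCategory (
    category_id INTEGER PRIMARY KEY,
    label TEXT NOT NULL UNIQUE)""")
conn.execute("""CREATE TABLE ServiceEvent (
    event_id INTEGER PRIMARY KEY,
    client TEXT,
    category_id INTEGER NOT NULL REFERENCES ServiceCategory(category_id))""")

conn.executemany("INSERT INTO ServiceCategory (label) VALUES (?)",
                 [("Counseling",), ("Job Training",), ("Housing Assistance",)])

# Allowed: category 2 ("Job Training") exists in the lookup table.
conn.execute("INSERT INTO ServiceEvent (client, category_id) VALUES (?, ?)",
             ("A. Example", 2))

# Rejected: category 99 is not part of the taxonomy.
try:
    conn.execute("INSERT INTO ServiceEvent (client, category_id) VALUES (?, ?)",
                 ("B. Example", 99))
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Whatever categories sit in that table—sensible or funny-looking—are the categories the organization’s data will be sliced by forever after.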

I’ve seen strangely constructed taxonomies create costly problems in a lot of information systems, and it continues to puzzle me that there’s relatively little discussion of taxonomy among software designers; so I was glad to see Chisholm’s article come out in a popular newsletter.

Why so little discussion? Perhaps it’s because people don’t want to simply bemoan a problem. Fair enough. But in this case, the problem actually points to its own solution.

In fact, the taxonomies already embedded in lookup tables can have enormous diagnostic value for information system designers. They can be a very fast track to understanding past problems and eliciting current requirements.

Several reasons why:

1 – They’re often an important (albeit perhaps ill-organized) embodiment of the way the organization thinks about its work.

2 – Their virtues and vices are easy to discuss with the stakeholders who are used to them.

3 – They’re a gateway to understanding and critiquing larger architectural decisions, since a lookup table is the domain of a particular attribute within a particular entity within a conceptual data model.

4 – Their very nature carries with it the expectation of a rigorous, internally consistent structure. That doesn’t mean that the ancient tradition of classical logic need be considered sacrosanct. But when we see a list that is clearly not being guided by those rules, it should at least lead us to ask: Why not, and what price is being paid for it?

This is an almost archaeological use of taxonomies—but the purpose isn’t merely to dig up the past, it’s to better understand the present and to better design the future.

The trick is to keep an eye out for things that look funny, then ask how they got that way, then ask how they’re working out right now and what might need to be different. (After all, taxonomies from yesterday determine how data analysts can slice and dice today.)

Here’s an example, a lookup table in a human service information system. The software tracks referrals, meaning the attempts by one human service program to help people receive services from another human service program. Referrals are a basic part of what such organizations do; yet data about referrals is often of rather poor quality.

This lookup table lists the permissible statuses of a referral:

  • Client Received Service
  • Client Refused Service
  • Client On Waiting List
  • Service Not Available
  • Referral Inappropriate
  • Appointment Pending
  • Client No Show For Appointment
  • Pending—Client In Hospital
  • Pending—Client Too Ill
  • Pending—Letter/Info Sent
  • Pending—Needs Home Visit
  • Pending—Scheduling Conflict
  • Pending—Unable To Contact
  • Pending—Requires Reassessment
  • Pending—Needs Spanish-Speaking Staff
  • Lost to Follow-up

Having suggested that this may be a funny-looking taxonomy, I now risk being accused of suffering from some strange Aristotelian snobbery. After all, what’s wrong with it?

Well, in real life the process of a human service referral generally involves several stages and decision points, each of which involves various parties and possibilities. There is an initial outreach by one service provider to another, (often) the making of an appointment for the client, (often) an assessment by the second service provider, the offer (or not) of services, the acceptance (or not) by the client, and then the actual provision (or not) of the services.

Meanwhile, the original service provider follows up with the second one to find out what happened. This list looks like a grab bag of responses that the second service provider might give at any point along the way. But it lacks any explicit representation of the expected stages. So perhaps, strictly speaking, it’s really several taxonomies—belonging to several stages of a referral—that have all been joined together willy-nilly in a table. The problem is that without explicitly representing each stage, there’s no way to analyze whether (and how aptly) the values cover all possible situations. Furthermore, without that context, there are few cues about the exact meaning of some of the statuses. (What exactly is pending? Is it the service itself or some prerequisite stage?) It also points to the question: is it really enough for the organization to record only the current status without capturing any information about how the process unfolds?

So this taxonomy has certain limitations. They’re not necessarily horrible or laughable, but they do lead to useful questions about the original design decisions and about how well the resulting artifact is serving the organization’s needs. It looks as though the original decision was simply not to tease out what goes on in the referral process very precisely, but instead to stuff everything into an ill-defined status field. (If so, it’s probably an example of merely meeting stated requirements instead of creating a good interrogable model.) Then the already loose taxonomy may have been further changed as administrators added new values when users requested them. The next question should be: How well has this approach worked out for the data analysts downstream? And if the answer is not very well then how might this area be better modeled in the future?
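As one purely hypothetical sketch of what better modeling might look like: decompose the overloaded status field into per-stage taxonomies, and record a history of stage/outcome events rather than a single current status. (The stage and outcome names here are invented, not taken from any real system.)

```python
# Hypothetical re-modeling of the referral process: each stage gets its
# own small, checkable taxonomy of outcomes, and events are accumulated
# rather than overwritten. All names are illustrative.
REFERRAL_STAGES = {
    "outreach":        {"contacted", "unable to contact"},
    "appointment":     {"scheduled", "kept", "no-show", "cancelled"},
    "assessment":      {"completed", "needs reassessment"},
    "service_offer":   {"offered", "not available", "inappropriate", "waitlisted"},
    "client_decision": {"accepted", "refused"},
    "provision":       {"received", "not received"},
}

def record_event(history, stage, outcome):
    """Append a stage/outcome event, enforcing the per-stage taxonomy."""
    if outcome not in REFERRAL_STAGES[stage]:
        raise ValueError(f"{outcome!r} is not a valid outcome for {stage!r}")
    history.append((stage, outcome))

history = []
record_event(history, "outreach", "contacted")
record_event(history, "appointment", "no-show")
```

Because each stage’s value list is explicit, an analyst can now ask whether it covers all possible situations at that stage—and because events are kept rather than overwritten, the data can show how the process unfolded, not just what its latest status happens to be.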

That’s the value of paying attention to funny-looking taxonomies.

And beyond the individual organization, they might even be helpful for understanding how well (or poorly) the work of an entire sector is currently being modeled. Right now the human service sector is fitfully advancing toward various ways of standardizing its data: common performance measures, common data exchange standards, and others. Taxonomies are a necessary part of that mix.

Do you have any favorite funny-looking human service taxonomies? If so please share using the Comments form below!

—Derek Coursen


REVEALED: The Secret Desire of Information System Stakeholders

A lot of projects to deploy human service information systems end up, sadly, as failures or only half-successes. Why?

Here’s one major reason: stakeholders have a secret desire—actually a secret expectation—that many project managers don’t understand. (In fact, it’s usually so secret that the stakeholders themselves don’t even know they have it.)

A typical scenario:

Imagine there’s a software system being designed to meet some set of specifications. You call a meeting of everyone involved in the project. You ask them a deliberately naïve question: What IS this information system anyway?

After they finish giving you the hairy eyeball, they all reply something like: This system will streamline {list of major business processes} and it has {list of handy dandy features}. In other words, everyone in the room talks about it the same way they’d talk about a Swiss Army knife: It’s a tool that contains a bunch of smaller tools.

That’s a red flag that things are probably going to turn out badly.

Fast forward to the same room a couple of months later. New stakeholders have entered the scene. They’ll need data to do their jobs. They weren’t around when the specs were written. But they’re delighted to hear that the system will collect data on a whole lot of areas they’re interested in: client demographics, history and risk factors, service plans, services provided, outcomes. So they sit down and make a list of the data they want. And then the project manager has to tell them: Sorry, we’ll only be able to give you maybe a bit more than half of what you’re asking for.

The new stakeholders are aghast. You mean the system won’t give us information on {a, b, c}? The developers are called in, and they explain: The {such-and-such interfaces} will capture data on that subject area, but in a different way. See how the specs for {reports x, y, z} were written? And if the new stakeholders hint that the design is inadequate and perhaps even shortsighted, then the developers too go away frustrated. What, did they expect us to be clairvoyant? They weren’t around to tell us what they needed!

What’s going on? The real problem is a gap between two ways of looking at what the database is supposed to be. The (unspoken) position of one large part of the software development profession is: the database is just the substructure that supports a bunch of tools that we build to do specific things that were stated in requirements.

But the stakeholders have a different (unstated) expectation. They imagine that the database is intended to be an interrogable model of the organization’s work in its environment.

On its face, that idea might seem ridiculous. For how could a database be a model of a whole human service organization? Wouldn’t it have to be as rich and extensive as the organization itself, like Borges’ map which was as large as the Empire it described?

But no, fortunately the stakeholders are more reasonable than that. What they expect—subliminally, of course—is simply that each subject area captured in the database ought to be a rich enough, precise enough, coherent enough model of their organization’s reality so that the data will be able to answer a lot of reasonable permutations of questions about the organization’s work. (And the stakeholders shouldn’t have to articulate every possible permutation to the designers. It’s the designers’ job to make sure that a decent model gets built.)

And in fact, that is a reasonable enough expectation. The proof is in the successful projects (too few, alas!) that do anticipate a lot of their stakeholders’ needs.

There’s a typical scenario with those too:

Stakeholders walk nervously into the meeting with their list of new (or unexpectedly changed) requirements. The project manager and the developers frown, hold their breath… listen… and then exhale, relaxing as they realize that the requirements will not be difficult to meet after all… because the model they built was close enough to reality. Their model has passed yet another test.

But how do they do it? How do some designers build a good interrogable model of the business in its environment—while others only manage to build a Swiss Army knife?

It’s a matter of what they’re looking at and how they’re looking.

Much more about that in future posts.

© Copyright 2013 Derek Coursen


Taking a Systems Stance Toward the Problems of Human Service Information

There’s more and more talk about using systems thinking to solve problems. But many people find it hard to put a finger on exactly what that might mean.

One reason: systems thinking can feel very alien. It’s inherently abstract. It tells observers to set aside their usual ways of looking at a situation. It introduces an unfamiliar set of concepts. It invokes principles that apply to systems in general. But what kinds of principles? And what is a system anyway? Most people aren’t used to thinking in those terms.

Another reason: systems thinking is a very broad tradition. A few dozen well-known thinkers developed systems approaches over the last century. They share some common ground, but they also go in very different directions. On its own, the phrase systems thinking merely points to the broad tradition, it doesn’t specify which problems could be addressed with which tools.

These are real stumbling blocks to talking with people about the value of systems thinking. But just for fun, let’s ratchet up the difficulty a bit more.

What about the role of systems thinking in managing human service information? Uh oh. People already talk about human service systems in a non-technological sense. And people also refer to software as information systems.

All that existing language makes it hard to pose a question like: How can systems thinking lead toward building more successful information systems for human service systems? Beneath the hypnotic repetition of the word system, with all its vague associations, the listener’s mind soon feels like it’s turning into oatmeal.

It’s still an important question, though.

Resources for an Elevator Pitch

What to do? For starters, it would help if there were a quick and easy way to make systems thinking accessible to people who aren’t already fascinated with it. An elevator pitch.

(Then, after systems thinking has been distinguished from information systems, perhaps the conversation can turn to the relationship between them.)

One way to start, with only fifteen seconds: A system is just a bunch of parts that make up a whole. Systems thinking looks at how the parts work together, what they might be trying to do, how they can change over time, and how they relate to other stuff outside, which might be changing too. (That’s my attempt to boil down one of the classic articles in the tradition, written by Russell Ackoff.)

If the elevator is moving slowly in a tall building—or gets stuck—then there may be time to go a bit deeper. Here’s another resource: a very readable and entertaining article by Mike Metcalfe. He proposes that systems thinking is really a critical stance. Systems thinking means choosing to pay attention to transformation (inputs and outputs), connectivity among elements, purpose (what a system is intended to do), synthesis (disparate elements working together) and the boundary between what is included in the system and what is not.

Rethinking the Boundaries of Human Service Systems

That last item—concern with boundaries—is the area where systems thinking has so far made its biggest impact on the human service sector. It’s taken various forms. Social workers have long used the ecosystems perspective to holistically understand the situation of their clients. At the institutional level, there’s a growing desire to coordinate service delivery among multiple agencies: a decade ago people began developing resources for planning service integration and today the idea has morphed into a whole business model designed around horizontal integration that U.S. states are being encouraged to adopt.

And of course, integrating services depends on integrating information. That too has taken different forms. There’s now a technology toolkit, the National Information Exchange Model, that helps agencies exchange data between separate information systems. Some jurisdictions are developing common client indexes that link all their information systems so that all agencies can share a unified view of each human service client. Companies are marketing enterprise-wide human service software platforms to government. And designers have moved toward building databases that more accurately reflect the human service ecosystem.

But rethinking boundaries is only the beginning of what systems thinking could do for the human service sector. Other possibilities haven’t been explored as much yet. Here’s one:

Designing Data That Can Serve Multiple Feedback Loops

The common business jargon of asking someone for feedback and closing the loop originally comes from the systems thinking tradition. It’s based on the fact that organizations collect data from the environment and feed it back into their own operations. (Living organisms and some machines—such as thermostats—do that too.)

Human service organizations use a lot of different kinds of feedback loops. They’re usually treated, though, as if they were entirely different activities.

A caseworker records data about the services she’s provided and the progress that the client has made, and then looks back at those records when she needs to revise the client’s service plan. (That’s a feedback loop—a very small one.) Before meeting with caseworkers, their supervisor runs a report to see how many of each worker’s clients received the recommended set of services that month. (Another feedback loop, slightly larger.) An executive creates a new strategic plan designed to increase the proportion of clients who graduate from the program and achieve good outcomes a year later. (A much bigger feedback loop.) Evaluators analyze the data from a whole cohort of funded programs and make judgments about whether the model is effective for its cost. (A really big feedback loop!)

(Note for systems thinking mavens only: this may remind you of the Viable System Model by which Stafford Beer described different levels of cybernetic processing in organizations. There’s a good study of a homeless shelter by Dale Fitch using that framework. But this post is headed in a different direction: toward software development methodology.)

These activities get labeled as casework and supervision and performance measurement and evaluation. And of course, they are all those different things. What they have in common, though, is that they’re all feedback loops.

[Skeptical Questioner: Well, so what? How does calling them all feedback loops help anything?]

I’m glad you asked.

Calling them all feedback loops is useful because the human service sector has a big problem with data. Organizations typically spend enormous resources collecting data and building software to support particular feedback loops—only to discover that the data, for one reason or another, don’t support other feedback loops that are equally important. (Think of software systems that support caseworkers well but performance measurement poorly. Or vice versa.)
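A toy sketch of the design principle at stake: if service events are recorded at a sufficiently fine grain, the same table can feed every loop—the caseworker’s client view, the supervisor’s monthly counts, and the evaluator’s cohort statistics. (Field names and values here are invented for illustration.)

```python
# One fine-grained event table serving feedback loops of different sizes.
# All field names and values are illustrative.
from collections import Counter

service_events = [
    {"client": "C1", "worker": "W1", "service": "counseling",   "month": "2013-09"},
    {"client": "C1", "worker": "W1", "service": "job training", "month": "2013-09"},
    {"client": "C2", "worker": "W1", "service": "counseling",   "month": "2013-09"},
    {"client": "C3", "worker": "W2", "service": "counseling",   "month": "2013-10"},
]

# Small loop: one client's service history, for revising a service plan.
c1_history = [e["service"] for e in service_events if e["client"] == "C1"]

# Larger loop: clients served per worker this month, for supervision.
per_worker = Counter(e["worker"] for e in service_events
                     if e["month"] == "2013-09")

# Largest loop: total service volume across the cohort, for evaluation.
total_events = len(service_events)

print(c1_history, dict(per_worker), total_events)
```

The trouble usually starts when a system is designed around only one of these aggregations—collecting, say, just the supervisor’s monthly counts—so the finer-grained questions can never be answered afterward.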

In this blog’s first post I outlined what I think are the sector’s four biggest information management problems: isolated agency silos, unsuccessful information system projects, barriers to producing performance measures, and uncoordinated demands on service providers. The last two of those could be summarized as: data that can’t support multiple feedback loops. And that problem, in turn, often contributes to the failure of information system projects.

But if these are problems of feedback loops, then the systems thinking tradition—which has devoted a lot of effort to studying such matters—can offer resources that will help. (They may challenge some of the conventional wisdom of the software development profession too.) Much more on that in a future post.

© Copyright 2013 Derek Coursen


