
The Varieties of Holistic Human Service Information


The word holistic comes up a lot when people talk about the human services. It’s an ideal: to be (or become) holistic. But what do people mean? And what does holism have to do with managing information?

People are striving toward something. It needs to be outlined more sharply. Otherwise holism will be just a buzzword that befogs the mind. If we think we want holism—whatever that turns out to be—we should start by looking for clarity.

The Discipline of Searching for Wholes

Let’s step back and look at where this concept came from. Holism first began kicking around the public mind under that particular name almost a hundred years ago.¹ It’s one facet of what’s come to be called systems thinking. (In fact, system and whole are synonyms.)

Holism is partly about how you see. Breaking a thing down into its constituent parts is one way to understand it, but not the only way. In fact, it can be a terribly limiting approach. Instead of narrowing your focus, why not expand it? By zooming out and bringing more into the frame, an observer can perceive phenomena that appear only in the whole, not in the parts. (This approach has been called expansionism, in contrast to reductionism.)

So how do you recognize a whole when you see one? To paraphrase one of the great systems theorists: A whole is an objective thing, but it’s also a subjective thing because the observer chooses which parts to include—yet it’s still objective because the observer has to prove that the parts really are connected!² (This can be a helpful angle on an old debate: whether what people perceive is determined by their own preconceptions or is actually out there.)

In other words, a whole (or system) is both construed and discovered. So when people say that they want to be holistic, they usually mean that they aspire to expand their range of vision. They want to include more parts and, in that way, perceive a whole that is larger than what they could see before.

The everyday act of asking a new question is often an attempt to see whether a system exists. For example, suppose someone asks: Should we offer our program to people who have experience x? This frequently comes up in successful programs that are looking to expand to new client populations. It can also be asked in less successful programs that are reconsidering their theories of change. Either way, it’s a question about a possible system. It asks whether or not there is a relationship between the clients’ experience x—whatever that is—and some aspect of the program. The questioner is trying to bring both of these things, which perhaps have not been considered together until now, into the same frame, to see whether they are in fact parts within a whole.

That’s why the path of holism inevitably means seeking more information. If you’re going to discover a new whole that includes new parts, you need data to figure out exactly how the parts are related to each other.

But of course, most answers are not final, and no whole that an observer has construed and discovered can ever be considered complete. And for that reason, holism is necessarily an ongoing aspiration. It implies a disciplined commitment to continue questioning. It is an endless pursuit of a broader and richer understanding. It is a radically open approach.

Competing Wholes and the Data Problem

And that’s where things start getting complicated for the human services. Why? Because there are so many different stakeholders, and a lot of them are asking questions as they try to expand their range of vision. That’s a good thing. But it also means that there’s a slew of different—and at times competing—attempts at being holistic.

After all, the sector is an enormous collaboration. There are governments at different levels, philanthropic foundations and donors, the organizations providing services, managers and front-line staff and fiscal officers and evaluators, the beneficiaries of the services, policymakers and advocates… the list goes on. Each stakeholder is an observer that construes the human service system in his or her own particular way. And that’s perfectly normal. If an architect, a mechanical engineer and a social psychologist all look at the same house, they will construe three very different systems.³

But for practical matters of managing information, it poses a problem: how are all these different stakeholders—with their particular views of the system and their particular aspirations to see it more holistically—going to get the data they need?

Let’s imagine a big table. Let’s say it belongs to a human service agency, or perhaps to a large group of human service and related agencies. (It may not even be entirely clear who owns the table or who has a seat there.) In the middle is the information management stuff. It’s a vast swathe of territory where decisions have to be made: what data to collect; what questions to answer; what staff or consultant positions with what qualifications will be paid for by whom to do what tasks; and what information systems should be designed in what way to do what for whom.

Uh oh. Whose idea of the human service system is all this information management stuff going to serve?⁴ Whose idea of relevant data will it capture? Whose work processes will it support? Whose questions will it answer?

Three Patterns of Holistic Information

To get an idea of the range of what all these stakeholders want, it helps to think about different patterns of information. It turns out that most information is organized according to the two most central concepts in the human services: client and program.

Traditional human service information was embodied in the old cardboard client chart. It represented one client’s experience in one program. That was pretty simple. Information gets more interesting—and more difficult to acquire—when the number of clients or programs increases.

A more holistic pattern of information is the individual client at the intersection of multiple programs. For front-line workers to address a client’s unique situation, they need to understand the client’s environment. (That’s a basic tenet of the ecosystems perspective that has informed social work for decades.) Other programs in which the client participates are a critically important part of that environment. It’s a safe bet that in any human service setting, at some point stakeholders will wish they could bring together data from multiple programs and use it to coordinate work with each individual client.

Another pattern is the aggregate of clients within a single program. This is the information most used by managers and executives and funders and researchers. Sometimes it’s organized under the rubric of performance measurement and sometimes evaluation. It’s rarely thought of in connection with the idea of holism—but it should be. Many aspects of program effectiveness, efficiency and quality can only be clearly understood in the context of the whole aggregate pool of clients. In every human service program there will be a constant stream of stakeholder requests for aggregate information.

And the third pattern is the aggregate of clients at the intersection of multiple programs. This is the information that shows fragmented public policy in action, as efforts to address different social problems interact with each other. Understanding of the relationship between mental health and homelessness, or how the foster care system feeds into the juvenile justice system, has advanced because people have meshed data sets from multiple programs. In the past those analyses have usually been slow, laborious and therefore expensive.
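The three patterns can be sketched as different groupings of a single set of enrollment records. In this hypothetical illustration (the client IDs, program names and data are invented for the example, not drawn from any real system), each pattern is just a different way of slicing the same (client, program) pairs:

```python
from collections import defaultdict

# Hypothetical enrollment records: (client_id, program) pairs.
enrollments = [
    ("c1", "housing"), ("c1", "mental_health"),
    ("c2", "housing"),
    ("c3", "mental_health"), ("c3", "job_training"),
]

# Pattern 1: one client across multiple programs (care-coordination view).
def programs_for_client(records, client_id):
    return sorted({p for c, p in records if c == client_id})

# Pattern 2: the aggregate of clients within one program (performance view).
def clients_in_program(records, program):
    return sorted({c for c, p in records if p == program})

# Pattern 3: the aggregate of clients across program intersections (policy view).
def cross_program_counts(records):
    by_client = defaultdict(set)
    for c, p in records:
        by_client[c].add(p)
    counts = defaultdict(int)
    for progs in by_client.values():
        counts[tuple(sorted(progs))] += 1
    return dict(counts)
```

The point of the sketch is that all three patterns presuppose the same underlying data; what differs is which axis (client or program) is held fixed and which is aggregated.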

Most of the more difficult information management stuff in the middle of the table will fall in one or another of these buckets.

Acknowledging Each Other’s Existence

But it may seem strange to suggest the image of a large table at all. Although the human service sector is an enormous collaboration, it’s also a constellation of very separate entities, each pursuing its own functional agenda. For decades the usual practice was for each stakeholder group autonomously to build the information management tools that it needed. Information projects often proceeded without even acknowledging the existence of other stakeholders that collect or use data on the same clients—or even the same services. So far there has been barely any common table to speak of!

Fortunately, it’s now dawning on a lot of people that this fragmented approach doesn’t work very well. No stakeholder group can perform its role without depending on data from others—at least not efficiently, and often not effectively either. The sector is rife with tragedies of the data commons in various forms: agency silos that do not talk to each other, information systems that cannot produce good data for analysis, and funders that do not coordinate among each other regarding the data that they require of their grantees.

So the image of a shared table points to something that is slowly coming into being. As stakeholders recognize the need for common data, decision-making about collecting and managing the data will necessarily become a more collective process. There will be a new conversation on a new basis: The human services form one ecosystem. We need to create a coherent ecosystem of data. Learning how to organize that kind of collaboration—which is itself another form of holism—will be the sector’s main challenge in the coming decades.

One big part of the challenge is cultural and inter-organizational. Working in a child welfare program involves a different knowledge base than working with homeless populations. The worldview of front-line workers is different from that of evaluators, and both are different from executives or fiscal officers. Government agencies have different concerns depending on whether they are at the federal, state or local level—and nonprofits have yet another perspective. Crossing boundaries takes effort. It will demand that the different stakeholder groups each learn to enter, at least somewhat, into the worldview of the others. New forums to build and institutionalize collaboration will be needed.

But while increasing collaboration is necessary, it’s not sufficient. Even when everyone sits down at the same table, the issue remains: Whose idea of the human service system is all this information management stuff going to serve?  That’s not just a problem of culture and inter-organizational politics and power. It’s also a problem of the methodologies that people use to manage information. To move forward, the sector is going to have to throw out some of today’s conventional wisdom.

Open-Ended Inquiry as an Ethos for Data Design

The two main activities around human service information are building software and analyzing (after acquiring) data. They’re directly related in one obvious way: software collects data and a great deal of data collection (though not all) relies on software. Beyond that, though, on the surface they seem to be two entirely different things. One is the province of technologists using programming languages and software development methodologies to create digital tools that run on electronic gizmos. The other involves people with various backgrounds—typically social science, management, social work, public health or finance—asking and answering questions about what’s going on in the human service system because they want to impact it in some way.

But building software and analyzing data have something important in common: they’re both complex social activities. They necessarily embody some guiding beliefs and ideals—an ethos—about how to do the work. So a basic question needs to be asked: To what degree does the organizing ethos of these activities take into account the need to facilitate open-ended inquiry?

In software development, much of the ethos is based on concepts from project management: scope, time, cost, stakeholder buy-in and communication. That’s because it makes business sense to organize software development into projects, and those factors have to be managed to achieve success. But the idea of project is a very closed-ended way of framing the situation. It establishes some requirements and sets a boundary around them, the requirements become the measure of success, and that which falls within the boundary is the main driver of the system’s design. Traditional waterfall methodologies try to firmly establish the boundary at an early stage. Incremental and iterative (also known as agile) methodologies accept that the boundary will change during the project, and they figure out how to go with the flow. But in both camps, the closed-ended paradigm remains: project requirements drive design.

Projects to collect and analyze data have a similar structure. In performance measurement, the sequence often starts with a logic model and proceeds to a list of measures. Social research projects start with hypotheses to be tested. Evaluators draw up lists of questions they plan to explore. These are the boundaries around the goals, which then lead to the specifications for data to be collected. Again, project requirements drive data design.

So in both activities—software development and data analysis—the reigning ethos assumes a closed-ended approach. And that’s the problem. An information project can be successful in meeting its goals on time and within cost, yet fail in larger—more holistic—senses. The most common symptom of this is when an organization builds up a central store of client and service and outcome data and then sadly finds that the structure of the data was so narrowly and woodenly determined by stated goals that it cannot adequately answer many reasonable questions in the same subject area. (Well, but that wasn’t in the requirements!)

Is there any alternative? As an experiment, you might try going to a software developer or someone who organizes data collection and saying: I would like you to design me a way of collecting data that will facilitate as much open-ended inquiry as possible. Chances are that they’ll tell you that you’re asking for something meaningless or impossible. After all, if you don’t clearly state your goals in advance, how can anyone design data for you?

That’s fair enough in one sense: there must certainly be requirements in order for builders to understand what exactly they need to build. But there is a way of arriving at requirements that actually does facilitate open-ended inquiry. It’s quite simple: first think of the data as a model of what’s in the human service system and its environment, not as an artifact that is designed to serve a particular function or answer a particular question. This is an uncommon approach, but not a new one. In fact, it’s been explored by software development theorists for a while under the name of domain modeling.⁵ And there are published case studies of successful information projects in the human services that have been carried out in this way.⁶

Domain modeling is powerful because it allows a group of stakeholders from very different backgrounds—evaluators and front-line service providers, interacting government agencies, funders and grantees—to define, together, the universe of data that can meet the largest set of common needs. Paradoxically, it has that power because it avoids talking about anyone’s particular needs directly. Instead, domain modeling focuses on defining and understanding what is most permanent and least tied to ephemeral requirements. And when everyone shares a common understanding of how to represent the whole human service domain, each stakeholder will have a much better chance of getting the data they need to pursue their own requirements.
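As a concrete sketch of the idea, consider modeling the domain’s most permanent facts, such as who was enrolled in what program and when, rather than any one stakeholder’s report. The entity names and attributes below are illustrative assumptions for this example, not a published human-services standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# A minimal domain-model sketch: entities represent what exists in the
# domain, independent of any particular report or question.

@dataclass(frozen=True)
class Person:
    person_id: str
    birth_date: date

@dataclass(frozen=True)
class Program:
    program_id: str
    name: str

@dataclass(frozen=True)
class Enrollment:
    person_id: str
    program_id: str
    start: date
    end: Optional[date] = None  # None means still enrolled

# Because Enrollment models the underlying reality (who was in what
# program, when), stakeholders with different questions can all derive
# their own answers from the same data.

def active_on(enrollments, day):
    """An evaluator's question: who was enrolled anywhere on a given day?"""
    return sorted({e.person_id for e in enrollments
                   if e.start <= day and (e.end is None or e.end >= day)})

def ever_co_enrolled(enrollments, prog_a, prog_b):
    """A policy question: which people appear in both programs?"""
    in_a = {e.person_id for e in enrollments if e.program_id == prog_a}
    in_b = {e.person_id for e in enrollments if e.program_id == prog_b}
    return sorted(in_a & in_b)
```

Neither function’s question had to be anticipated when Enrollment was defined; both are answerable because the model represents what is most permanent in the domain rather than a fixed list of requirements.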

Conclusion: One of the most important factors that will determine the human service sector’s future success or failure in using information well is whether or not the various stakeholder groups can come together to develop a comprehensive domain model.

—Derek Coursen

 

NOTES

¹ J. C. Smuts Holism and Evolution (1926).

² This paraphrase combines aspects of Point 3 and Point 6 in R. Ackoff ‘Towards a System of Systems Concepts’ (1971).

³ Ibid.

⁴ The notion of the [information] system that serves vs. the [organizational] system that is served is explored in P. Checkland and S. Holwell Information, Systems and Information Systems (1998).

⁵ See e.g. R. Offen ‘Domain Understanding is the Key to Successful System Development’ (2002) and P. Oldfield ‘Domain Modeling’ (2002).

⁶ D. Coursen ‘Why Clarity and Holism Matter for Managing Human Service Information’ (2012).

If you found this post useful, please pass it on! (And subscribe to the blog, if you haven’t already.)

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
