Teaching Case

Building an Organizational Culture to Support Evidence-Informed Practice: A Teaching Case

From 1984 to 2004 I served as the CEO of a strategic planning and evaluation firm specializing in reform, evaluation, and funding of public and nonprofit social service systems that made important contributions in the fields of juvenile justice, homelessness, perinatal substance abuse, community policing, and mental health treatment. By 2004, I had become disheartened with the outcomes of our efforts to persuade people to make change. So I decided to start looking for a job as executive director of a nonprofit human service organization at which I could be responsible for actually making change rather than just advocating for change. At about that time, the CEO of FSA—with which my firm had been doing organizational consulting—left his position on very short notice. The board president asked if I would be willing to act as interim CEO. It seemed ideal to me to take the position on an interim basis, to see if I liked it, as I could always help them find a permanent director and go back to consulting.

While the services were decent, FSA was an agency that was near death financially, having lost money in 13 of the last 14 years. Successive CEOs had convinced the board that losing money was inevitable given the nature of agency contracts. Each year, as the agency continued to lose money, the board borrowed against assets to make up the deficit. Slowly, 120 years of accumulated assets had passed into the hands of lenders, who often charged very high interest rates since it was obvious that FSA’s future viability was uncertain. When I arrived—in September 2004—the board had recently passed the annual budget with a $350,000 deficit. It was planning to sell the (heavily mortgaged) agency headquarters to give the agency enough cash to survive one more year. So my first year was spent putting the fiscal house in order, reorganizing and pruning the top-heavy administration, and pulling FSA back from the brink. I loved the challenge and the work with staff, and I particularly valued working with programs serving the most in-need individuals and families in San Francisco.

Beyond the fiscal challenges

One of the first things I discovered was a rather weak set of middle managers. Many had been promoted into management because they were skilled clinicians; trained as therapists and not as managers, their management style was focused only on supporting members of their staff, whether they were effective or not. There were no metrics or standards for managing caseloads, measuring productivity, or determining whether we were succeeding in helping our clients. There was no way to separate effective from ineffective staff or to distinguish hard-working employees from those who were not working a full day.

Although FSA has been around since 1889, the modern agency had been assembled in the 1970s and 1980s by a dynamic executive director named Ira Okun. Ira had a burning passion to do everything that could be done for the poor. In pursuing that passion, he had built FSA into the largest outpatient service provider in San Francisco. The downside was the lack of an underlying service logic or focus. While the agency provided a vast range of services to all ages, genders, and ethnicities, there was nothing like a continuum of care. Our service programs were located throughout San Francisco and when staff came to the administrative offices, they acted like they were visiting a foreign country rather than the agency that employed them. Even though there were many different kinds of services across the age spectrum, no program ever referred a client to another program. In spite of these challenges, most of the programs were actually pretty good, but the agency had no way of assessing or improving service quality.

Working toward evidence-based practice

I came to FSA as a reformer with the goal of putting our programs on the cutting edge of social service innovation. In the mid-1980s, I got involved in the Children’s System of Care (CSOC) movement. The goal of the CSOC movement was to unite on the local level the four major government agencies working with high-risk children: children’s mental health services, child welfare services, juvenile probation, and schools. CSOCs, once organized, would have common outcome objectives, integrated services, and shared resources to achieve the goals of keeping seriously emotionally disturbed kids “out of trouble, in school, and in the community.” When I became involved, there was only one CSOC in one California county and just six others in the whole country. During the 20 years that I worked with most of the larger CSOCs in California, Washington, and Oregon (and at federal-level agencies), CSOCs became the dominant service model throughout the United States, successfully keeping children in the community. For example, within 10 years of creating a CSOC, the children’s wing at Napa State Hospital (which formerly had a long waiting list and year-long delays in admission) was closed for lack of clients. With the right incentives, localities had learned how to keep children with their families instead of locking them up in large institutions. The CSOC movement, starting with a great idea and a few model programs, brought changes in services for emotionally disturbed children that seemed impossible at the time the movement began. Over the course of my career, I was involved with several other movements that transformed services for particular populations. I believe that the marriage of creative ideas and operational discipline can change the world, and I wanted FSA to be a place where new ideas were emerging on a continual basis.

In the five years before I came to FSA, I had been working with the National Institute of Mental Health in its efforts to bring evidence-based mental health practices (EBPs) to the community. EBPs are rigorous psychosocial methodologies for treating serious mental illness. Alone or in combination with medication, EBPs can address most cases of serious mental illness. However, attempts to migrate these treatments out of the university and into community-based settings had faced major challenges. The majority of seriously mentally ill individuals continued to receive unstructured psychotherapy and haphazard medication in community-based settings. This gap between what was possible and what was actually happening was very troubling to me. I kept thinking of the whole generation of people lost unnecessarily to severe mental illness, and I wanted that situation to change.

So I came to FSA thinking here was a chance to implement evidence-based treatment at the community level. I assumed that this would be enthusiastically embraced by staff and clients. I discovered first that no one had heard of EBP, and when it was explained to them, they wanted nothing to do with it. They did not want the structure that EBP required. They did not want to provide disease-specific treatment. They were offended that I seemed to be saying that the approach they had been practicing for decades was inadequate. I quickly discovered that therapists whom I liked and deeply respected detested me. Many people quit. In The Structure of Scientific Revolutions, Thomas Kuhn observes that most paradigm shifts happen not because practitioners of the old paradigm are converted to the new paradigm, but because practitioners of the old paradigm retire or die, and are replaced by practitioners of the new paradigm. It was deeply troubling to me to be so disliked, but I was determined to push this change through: Indeed, I thought that I had a moral obligation to push it through. Most of those people who originally opposed EBP and who are still at FSA now support the change.

Over the next three years, there were several failed attempts to implement EBPs. However, over time, an expanding group of staff learned how to establish the infrastructure, develop the trainings, and provide the clinical supervision necessary to implement EBPs. FSA is a happier place now (supported by annual staff survey results) and staff are proud of their effective work. In the beginning, however, staff viewed nearly every new initiative as hopeless and the agency as doomed. As they saw it, their only responsibility was to soldier on in their jobs while they waited for the inevitable pink slip. In contrast, I was saying that we could be the best in the country by demonstrating evidence-based practice. Most of the staff seemed to be thinking that “the agency director is nuts for introducing evidence-based practice to an underfunded and overloaded staff dealing with increasingly complex and non-compliant clients, for implying that what we have been doing for the past twenty years is no longer good enough, and he is not even a mental health clinician.”

In light of this resistance, my first failed strategy was to try to build a consensus about EBPs. I established a joint management and staff team to plan out the evidence-based practices we wanted to implement and the path we would take to train people in evidence-based practices. I soon realized that the implicit goal of the committee was to prevent me from implementing any evidence-based practices, although one or two program directors were actually excited by the idea. Finally, I hit on the idea of creating the “Felton Institute.” Kitty Felton was FSA’s founding director and a real pioneer of social services in the early 1900s. I appointed as director of the institute one of the program managers who was most enthusiastic about EBP. She got a new title but had to continue in her program director job, since we didn’t have any money to fund her position. I discovered that the process of naming things gives them a reality and permanence. It seemed silly and grandiose to name an entity that didn’t exist. However, once we had an “institute,” people began to accept that we were going to do EBPs one way or the other. The terms of the conversation began to change.

Electronic health records

The second thing I did turned out to have much greater ramifications than I anticipated. This was to develop an electronic health record system to replace our paper records. When I came to FSA, everything was documented on paper only: vast amounts of paper securely locked in file cabinets. One of the mantras my consulting firm would repeat to clients was “you can’t change what you can’t measure.” By that standard, FSA couldn’t change anything since it couldn’t measure anything. I wanted to change everything, so we began to plan to automate all our client charts. With the help of a software developer, we built an agency-wide electronic health record system using Salesforce as a data platform. Salesforce wound up donating 285 licenses and some cash for development in exchange for using FSA in their marketing.

Initially there was a lot of fear. We had staff members in their 50s and 60s who had never turned on a computer. We had an 80-year-old psychiatrist who would handwrite his progress notes and give them to a 70-year-old woman who would type them on an IBM Selectric typewriter; that was her full-time job. Ironically, none of the other big agencies were automated either; it was a sad commentary on the state of nonprofits in the heart of Silicon Valley. We achieved user adoption through a number of strategies. First, we made the system look exactly like the paper charts. People didn’t have to learn new business rules at the same time they were learning to use a computer. Second, we gave people a lot of support. If we had to teach them to type, we did that. If they needed 40 hours of one-to-one training (one person did), we gave them that. In exchange, we ruled out “I am too old, I can’t do it, you never trained me, and it doesn’t do what I need” as possible excuses for not using it.

The way we rolled out the electronic health record system also stimulated user acceptance. We started with our best-managed program and then rolled it out to successive programs, one or two programs per month. Our general rule was “We’ll give you all the support you need and all the training you need. We’ll set up your charts for you on the computer so all you have to do is start using it on the first day. Then, after 30 days, nothing you do on paper will count anymore.” Actually, we had to give some people 60 or 90 days, but in general terms everybody in a program got automated simultaneously. No excuses, no keeping your case notes on post-its for a month. Once a couple of programs were automated, we realized that staff were spending about 50% less time on paperwork, giving them 25% more face-to-face time with clients. Once people heard about how much easier paperwork was with the automated system, there was a clamor from the other programs to get automated as soon as possible. Finally, although it seems silly, it was important that we gave it a name: CIRCE (Client Integrated Record of Care). Calling it CIRCE gave it a unified existence. People think of it—like the Felton Institute—as an inevitable and unchangeable part of FSA.

We provide excellent support to the clinicians, so that they have come to see CIRCE as a tool that makes their work life better. Each program selects a representative to the CIRCE Advisory Board, which meets monthly. The advisory board brings in suggestions for enhancements, with one or two enhancements being selected for implementation each month. The Salesforce platform is such a rapid development environment that the selected enhancements are usually rolled out to the staff within two weeks, giving CIRCE a lot of credibility. With our records automated and our staff online, we went from having no information to having a vast amount of information that took us a while to figure out how to use for operational management.

One of our consultants pointed out that FSA, like every organization whose product is words (law firms, schools, social service agencies, consulting firms), had difficulty communicating between the programs. Each program thought it was doing a great job, that its staff was overworked, and that other programs were not doing so well and were dragging down the agency. That is why we set up CIRCE, so that every program could see every other program’s performance metrics. Then we set up a monthly operations committee meeting at which each program reported on its metrics; if they did not meet their goals, they shared their analysis to explain why they had not met them and what they were going to do to meet them next month. The process had a number of results, including (a) training our middle managers how to manage with a focus on outcomes, (b) making it very clear which programs were strong (most) and which were weak (two), and (c) helping program staff see that most of the other programs were working as hard as they were, so that program managers began to help each other to address ongoing problems.

Although each program has its own performance metrics, the focus in this teaching case is on the metrics for the mental health programs, since those targeted the area of our agency that was most vulnerable. The metrics in mental health are based on county-developed standards for contracted services. For example, each staff person must document 1,055 hours of billable client service per year; about 88 hours per month. A small number of staff members were simply not working hard and many left the agency within six months. Many more staff members were working hard to serve the clients but were not documenting their hours. As a result, we were not getting paid for the work they were doing, and we were essentially paying them to work for free for the county. Over a six-month period, by focusing on this metric, our revenue per FTE went up by about 20%. This meant that we could earn our contracts with fewer staff, give staff raises, and still have funding left over for our new initiatives.
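To make the arithmetic behind this metric concrete, here is a minimal sketch of how such a productivity measure might be computed. The function and variable names are illustrative assumptions, not part of CIRCE's actual Salesforce implementation, which the case does not describe at the code level.

```python
# Illustrative sketch of the billable-hours productivity metric described
# above. Names and sample figures are hypothetical, not drawn from CIRCE.

ANNUAL_TARGET_HOURS = 1055                       # county standard per clinician
MONTHLY_TARGET_HOURS = ANNUAL_TARGET_HOURS / 12  # roughly 88 hours per month


def monthly_productivity(documented_hours: float) -> float:
    """Fraction of the monthly billable-hours target actually documented."""
    return documented_hours / MONTHLY_TARGET_HOURS


def revenue_per_fte(total_billed_revenue: float, fte_count: float) -> float:
    """Revenue earned per full-time-equivalent staff member."""
    return total_billed_revenue / fte_count


# A clinician who serves clients but documents only 70 hours sits at about
# 80% of target; the undocumented remainder is work the agency gives away.
print(round(monthly_productivity(70), 2))  # → 0.8
```

The point the metric makes visible is the gap between work performed and work documented: only the documented fraction generates contract revenue.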

The second metric we came up with was called Chart Health. During my first two years we experienced an audit when a staff member who was fired for fraud claimed to the county that everyone in her program was committing fraud. The resulting audit found that the only fraud was committed by the person who reported us, but the inadequate charting that was discovered in the audit resulted in a loss to the agency of about $400,000. That almost sent us over the edge one more time. So we set up CIRCE to track what we call the “five pillars of chart health,” including (1) timely assessments with annual updates, (2) timely care planning within two months of assessment, (3) documentation of client consent, (4) documentation of HIPAA consent, and (5) treatment authorization for extended hours of service. If a chart had all five pillars correct, it was invulnerable to an audit. We set the standard so that no more than 5% of the charts could be out of compliance in even one of those five pillars and no chart could be out of compliance for more than 30 days (when we started, about 40% of the charts in the agency were out of compliance). Rather than come down hard on the managers for chart health (especially since we were doing so many other things at once) we started giving gold stars at the Operations Committee meeting for programs meeting their chart health target. Soon getting the gold star became a priority for program staff.
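The five-pillar rule lends itself to a simple compliance check. The following is a minimal sketch under assumed data structures; the field names, the `Chart` record, and the function names are hypothetical stand-ins for whatever CIRCE tracks internally, which the case does not specify.

```python
# Illustrative compliance check for the "five pillars of chart health."
# Field names and record shapes are hypothetical, modeled on the rules above.
from dataclasses import dataclass

PILLARS = (
    "timely_assessment",       # assessment on file, with annual update
    "timely_care_plan",        # care plan within two months of assessment
    "client_consent",          # documented client consent
    "hipaa_consent",           # documented HIPAA consent
    "treatment_authorization", # authorization for extended hours of service
)


@dataclass
class Chart:
    pillars: dict                     # pillar name -> bool (in compliance?)
    days_out_of_compliance: int = 0


def chart_is_healthy(chart: Chart) -> bool:
    """A chart is audit-proof only if all five pillars are satisfied."""
    return all(chart.pillars.get(p, False) for p in PILLARS)


def program_meets_standard(charts: list) -> bool:
    """No more than 5% of charts out of compliance, none for over 30 days."""
    unhealthy = [c for c in charts if not chart_is_healthy(c)]
    if any(c.days_out_of_compliance > 30 for c in unhealthy):
        return False
    return len(unhealthy) / len(charts) <= 0.05
```

Note that the standard is two-sided: a program can fail either on the overall 5% rate or on a single chart left uncorrected past 30 days, which is what forces managers to act on stragglers rather than just the average.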

Our final operational metric established that no more than three working days could elapse between the time a client is seen and the time the progress note is recorded. This change, after almost two years, brought us back to what I wanted to do the day we started: bring in evidence-based practices for all our programs.

Here is where the university-developed EBPs clashed with the realities of nonprofit work. Staff members had caseloads of 40–50 severely mentally ill clients. These clients usually had more than one severe mental illness; half were substance abusers; more than a quarter had chronic physical or cognitive conditions. Staff members were constantly trying to prevent clients from being evicted from their housing, from being arrested, from being denied food stamps, and/or from going hungry. The first thing I wanted to do was to reduce caseloads to 30 severely mentally ill clients. This was a challenge because the county said we had to take anybody who came in the door or was referred to us. Our staff would make great efforts to reach out, working really hard to maintain client contact, with very little time left for therapeutic work with their clients. I wanted to reduce caseloads and increase salaries while increasingly holding staff accountable for their service contract obligations.

I realized that we needed to do two things before we could successfully implement evidence-based treatments. First, rather than just continuously responding to clients’ self-generated crises, the case management model of service needed to become more systematized. In essence, we needed to move from a deficit-focused to a strengths-focused case management model. Because of the lack of research evidence nationally on validated case management regimens, we developed our own, based on the principles of motivational interviewing and strengths-based case management. We had, by this time, begun to make significant academic connections with UCSF, UCB, and other Bay Area universities. Working with many nationally known experts, we were able to design a comprehensive approach to case management—called Motivational Care Management (MCM). All of our staff are now required to be certified in MCM within their first two years at the agency. All our program directors and clinical supervisors are required to be certified as MCM clinical supervisors. Once a staff person is certified in MCM, they get a significant raise. MCM helped our staff to learn how to set boundaries so that they did not devote their whole work day to addressing the crises in their clients’ lives. This new approach to case management helped staff clear space to actually have time to provide therapy for clients.

Second, we undertook the development of outcome measurement tools that reflected the realities of our clients rather than the controlled treatment environments in which most evidence-based treatments have been validated. Our clients arrive with multiple challenges and we did not have the luxury of using hour-long diagnostic tools for each condition. However, we needed accurate and comprehensive diagnoses, since EBPs are diagnosis-specific. So we needed diagnostic tools that quickly provided rigorous diagnoses and could be completed in less than two hours, but there was no such tool. The diagnostic tool needed to measure the functional severity of the mental illness and the client’s feelings about their lives within the context of the mental health recovery model. At this point, we began working with Dr. Patricia Arean, faculty member in the UCSF Psychiatry Department and a national figure in mental health treatment for the elderly. We developed a proposal for a five-year grant from the National Institute of Mental Health to underwrite the Felton Institute and fund our EBP implementation. This grant was funded; a central initiative of the NIMH project was to create these diagnostic and measurement tools. After almost two years of work, we developed and piloted a suite of assessment tools that now allow us to provide outcome-guided treatment. Each of these assessment tools was vetted and modified by a committee of clients to ensure that they captured information that clients felt was essential. The set of tools includes two components:

  • The Diagnostic Tree. The tree is administered by the therapist. It begins with a series of trigger questions that probe for the possibility of the 10 most common mental illnesses and substance abuse. If clients answer the trigger questions in a positive way, then there are follow-up questions that dig deeper with respect to a particular illness.

  • The Outcome Toolkit. The toolkit consists of a series of questions that are answered by the clients every month. In our pilot trials, it takes clients about a half hour to answer these questions. Elements of the toolkit are: (a) Disease Severity Scale; (b) World Health Organization Quality of Life Scale; and (c) Community Living Skills Scale.
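The Diagnostic Tree's trigger-then-follow-up flow is essentially a shallow decision tree: one screening question per condition, with deeper questions asked only on a positive answer. The sketch below illustrates that structure only; the conditions and questions are invented placeholders, since the actual instrument's items are not reproduced in this case.

```python
# Illustrative structure of the Diagnostic Tree described above: a trigger
# question per condition, with follow-ups asked only on a positive answer.
# Conditions and question wording here are hypothetical placeholders.

DIAGNOSTIC_TREE = {
    "major_depression": {
        "trigger": "In the past month, have you often felt down or hopeless?",
        "follow_ups": [
            "Have you lost interest in activities you used to enjoy?",
            "How has your sleep been over the past two weeks?",
        ],
    },
    "substance_abuse": {
        "trigger": "Has alcohol or drug use caused problems in your life?",
        "follow_ups": [
            "How often do you use in a typical week?",
        ],
    },
}


def administer(answers: dict) -> dict:
    """Given trigger-question answers (question -> bool), return the
    follow-up questions the therapist should ask, keyed by condition."""
    to_probe = {}
    for condition, node in DIAGNOSTIC_TREE.items():
        if answers.get(node["trigger"]) is True:
            to_probe[condition] = node["follow_ups"]
    return to_probe
```

The branching is what keeps the instrument under the two-hour ceiling the case describes: clients who screen negative on a condition never see that condition's deeper questions.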

Using Federal stimulus funding, we created a computer kiosk for clients to self-administer the scales using a touch screen, with availability in Spanish, Chinese, and English. The scales are also available in audio versions for anyone who is not sufficiently literate to use a written version. The clients also get to see how they are doing and the kiosk encourages them to enter any issues they want to talk about with the therapist. The therapist and the client get the same sets of results. It took about four years to get to the point of training staff on the assessment tools and implementing a range of evidence-based treatments.

Finally, we designed a comprehensive curriculum organized by the Felton Institute that each staff person is required to complete within three years of coming to FSA. The Path of Learning is tailored to the type of program and the age range of the target population. To pass each class, staff are required to master the elements of the EBP (which usually takes 2–3 months) and then to practice the EBP for at least six months under a trained clinical supervisor. The clinical supervisor will listen to sessions taped with the permission of the client, monitoring the extent to which the staff person has mastered the techniques of the treatment or social service. Once certified in an EBP, staff members receive a salary increase. Once they complete the entire Path of Learning, they are moved up to the highest pay grade in their field. As part of our commitment to recovery, we have a Path of Learning for all staff: clerical staff, peer case aides, outreach workers, child-care providers, and receptionists, in addition to case managers and therapists.

The Felton Institute now has a fund of intellectual property that is starting to find a market. We are selling our Motivational Care Management training broadly, and a number of organizations are looking to adopt our Outcome Kiosks. Many grants and donations are being received specifically for the institute, which now has a training and research budget of over $1 million annually.

Not all staff appreciated this new approach to professional development and approximately 20% to 30% left the agency. Obviously I needed and received the support of the Board of Directors, as I expected some staff to complain to board members. Actually the board members were far more supportive than I expected. In the beginning, it was just me pushing for training and research along with the support of one other person, and now the whole agency buys in. It helped to have several senior staff members either retire or resign, because the younger staff members were far more receptive to the changes related to evidence-informed practice. As we now hire new staff, we describe our evidence-based treatment programs and promise to train them on the different models.

Lessons learned

I was not a good manager when I became the executive director. I came in and I thought I was going to inspire everybody and they were going to work effectively. Most were not inspired, and those who were did not know how to translate inspiration into achievement. I slowly came to realize that substantial training and coaching were necessary to develop an effective management team. In addition, given the early conflicts, it became clear that we needed to set standards of discourse: staff could disagree with one another, but it had to be done in a respectful and constructive way. We also adapted the military’s model of “after action reviews” to learn from each other how to do a better job of implementing and monitoring new programs. We believe that our evidence-based service model is better for clients. We want to see our service model offered in other localities, either by FSA providing the services directly or by training other nonprofits to use our service model. Our major question is “How do we share this model with other communities in a way that is best for people in need?”

Discussion questions

  1. Bennett describes both “failed” and successful strategies that he undertook at FSA to implement EBPs across the agency’s programs. One unsuccessful strategy involved creating a joint management and staff team. Based on your experience, could he have approached the creation of the team in a way that might have anticipated and addressed the resistance he encountered? What team-building strategies might he have employed?

  2. Consider the staff supports that were provided during the transition to the Electronic Health Record system. Given the substantial level of resources involved, would this level of effort be feasible in your organization or organizations that you have studied? Can you identify less resource-intensive approaches that would also be effective?

  3. Bennett refers to the “clash” between EBPs developed and tested in carefully structured environments and the complex demands of providing services in community settings. Have you been involved in efforts to implement EBPs? What were the biggest challenges encountered, and how were they resolved? If the challenges were not resolved, what were the consequences?
