Plan-Do-Study-Act in Practice
My friends Lilia Pino, Jen Nabong, and Mary Dolansky will share a story this week at the Institute for Healthcare Improvement’s Annual Forum. These friends have embraced the spirit and practice of Plan-Do-Study-Act. I’m grateful for project lessons I’ve learned with them in my role as an Improvement Advisor with the IHI.
Lilia and Jen work at MinuteClinic at CVS; Mary is a nursing professor at Case Western Reserve University. All three have been core members of a larger team supported by a three-year grant from the John A. Hartford Foundation that ends next spring. The foundation has funded development and deployment of age-friendly care at MinuteClinic since 2018 in cooperation with Case Western.
What are my friends trying to accomplish?
A larger team at MinuteClinic has defined age-friendly care. For all visits by patients older than 65 years, except vaccination and ‘Express Lane’ visits, providers should assess, act on and document specific elements of the 4Ms: What Matters, Medications, Mobility and Mentation. For example, the Mentation M involves administering the PHQ-2 for depression and the Mini-Cog for cognitive impairment. Providers interpret results and offer appropriate follow-up actions.
At MinuteClinic, there are more than three thousand providers and tens of thousands of visits each month. A lot of opportunity to deliver age-friendly care!
The story at the Forum focuses on a Plan-Do-Study-Act cycle done between December 2021 and March 2022. The cycle asked 24 nurse practitioners to modify their work, with the aim to improve the fraction of eligible visits with all 4Ms assessed, acted on and documented. Two earlier cycles in 2021 had helped the team build a schedule of weekly communications to focus on each M individually and then as a bundle. The team also learned to use a corporate report that scores every visit on M performance: no M’s documented or all 4Ms documented or any of the other fourteen possible combinations. For each provider, the team created simple weekly feedback of M scores.
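The corporate report described above scores each visit on which of the 4Ms were documented, one of sixteen possible combinations. As a minimal sketch of that kind of scoring, here is a hypothetical Python version; the visit records and field names are invented for illustration, not taken from the actual MinuteClinic report:

```python
from collections import Counter

MS = ("What Matters", "Medications", "Mobility", "Mentation")

# Hypothetical visit records: the set of Ms documented at each eligible visit.
visits = [
    {"What Matters", "Medications", "Mobility", "Mentation"},  # all 4Ms
    {"Medications"},                                           # one M only
    set(),                                                     # no Ms
    {"What Matters", "Medications", "Mobility", "Mentation"},  # all 4Ms
]

def score(visit):
    """Score a visit as a yes/no flag per M -> one of the 16 combinations."""
    return tuple(m in visit for m in MS)

# Tally the combinations and compute the complete-4Ms rate for feedback.
combo_counts = Counter(score(v) for v in visits)
all_4ms_rate = combo_counts[(True,) * 4] / len(visits)
print(f"Complete 4Ms visits: {all_4ms_rate:.0%}")
```

Grouping visits by provider and week would turn this tally into the simple weekly feedback the team created.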
The control chart at the top of this post shows the headline for the story at the IHI Forum.
The chart is a classic example of a picture that shows improvement: the baseline period to the left of the shaded test period is relatively low; during the active test period and after, the performance is better. As a group, providers have sustained the gain over the past several months.
Plot details
Each grey dot is the performance for one provider in one week. This “performance dot” is the percentage of eligible visits with all 4Ms for a week. The black dots are the percentage of eligible visits with all 4Ms, summing over all provider visits for a week. The purple shaded area is the six-week period of our focused test. Individual providers in the test group had between 0 and 10 eligible visits per week, with a median number of three visits.
The red-dashed lines are control limits using p-chart calculations; the limits vary with the total number of eligible visits each week.
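The p-chart calculation behind those varying limits can be sketched briefly. This is a generic three-sigma p-chart, assuming made-up weekly counts; it is not the team's actual data or software:

```python
import math

# Hypothetical weekly totals: eligible visits (n) and visits with all 4Ms.
weeks = [
    {"n": 120, "all_4ms": 6},
    {"n": 95,  "all_4ms": 4},
    {"n": 140, "all_4ms": 21},
]

# Center line: overall proportion of complete-4Ms visits across all weeks.
p_bar = sum(w["all_4ms"] for w in weeks) / sum(w["n"] for w in weeks)

def p_chart_limits(n, p=p_bar):
    """Three-sigma p-chart limits for a week with n eligible visits."""
    sigma = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - 3 * sigma), min(1.0, p + 3 * sigma)

for w in weeks:
    lcl, ucl = p_chart_limits(w["n"])
    print(f"n={w['n']:>3}  LCL={lcl:.3f}  UCL={ucl:.3f}")
```

Because the weekly visit count `n` sits in the denominator of the sigma term, weeks with fewer eligible visits get wider limits, which is why the red-dashed lines in the chart move up and down.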
Project Lessons
PDSA practice: the power of several linked cycles
The team embraced learning by doing. They have sustained their inquiry over multiple cycles since May 2021.
The first test with six providers in May 2021 had little impact on the rate of visits with all 4Ms. The test revealed that increasing complete 4Ms visits, despite management wishes and substantial resources, might require specific messaging about each M.
In July and August 2021, the team tested weekly communication ideas with 15 providers. This second test again had only a modest impact. The team realized a need to focus attention on the details of documentation. Jen flagged the importance of chart review by the provider to highlight specific gaps in documentation.
With increasing confidence in useful changes, the team pushed for the third PDSA cycle to test their beliefs. The team aimed to improve complete 4Ms performance by 30% over baseline for a set of providers. The test achieved the improvement aim during the period of the test; performance has slipped about six per cent since March but has remained stable in a control chart sense.
The team designed a fourth test cycle in August 2022. The cycle will test whether local managers can apply a package of 4Ms materials and coaching advice. This is the only way to spread the coaching intervention to many locations and providers, as Lilia and Jen do not have the capacity to coach more than a few dozen providers themselves. In October and November 2022, the team tested the logistics of the coach package with one local leader and one provider. The intervention increased complete 4Ms performance from 0% before the intervention to more than 60% during the intervention. A matched control provider did not improve at all.
The team plans to run the August 2022 design in Q1 2023.
A Management Model
If you look carefully at the plot, you can see more grey dots above zero per cent after the start of the test period—a sign of improvement. You also can see a blur of grey dots at zero after the test period, which shows there are many visits that still have no Ms done.
Joseph Juran’s model of self-control reminded the team that managers have responsibility to provide a foundation for excellent performance via three conditions. First, providers must know what, why and how to assess, act on and document 4Ms care. Second, providers must know whether their current performance matches desired performance. Finally, providers need to know how to adjust current performance when it falls short of the target.
MinuteClinic has built many tools and training modules to address Juran’s first condition. In my experience, many organizations focus attention on the first condition but do not invest enough effort to deliver the second and third conditions.
The team’s third and fourth PDSA cycles address Juran’s second and third conditions. The team has addressed the second condition by building weekly reports and coaching message templates to deliver feedback on missing elements of documentation, visit by visit. The team has addressed the third condition in two ways. Retrospectively, providers learn to review charts from previous visits to repair errors in documentation. Prospectively, providers are asked to practice their own PDSA cycles: reflect on missing Ms and aim to follow the work standard on the next eligible visit, with invitation to check with their coach for help.
Causal Thinking
Like most improvement stories presented at the IHI Forum, the third PDSA cycle is a non-randomized experiment. The team selected the participants, carried out the intervention and summarized the impact. Could other factors have caused the improvement we observed?
In April 2022, the team looked at two other groups of providers to check. One group comprised providers who could have been chosen for the test but were not; the other comprised providers from the same clinics as the test providers. The two post hoc control groups show a small improvement in 4Ms performance starting in February 2022. However, the test group’s improvement is much larger. The team believes the tested changes contribute to at least a 20% improvement in complete 4Ms visits, relative to baseline.
The fourth cycle has been designed as a randomized experiment. A dozen regions will have a regional leader act as a coach. The team built a method to identify two providers within each region who are similar in 4Ms experience, performance and work attitude. The plan is to randomly assign one provider to work with the coach for less than 30 minutes a week for a few weeks, with the other provider acting as control, region by region. This design builds in the control group from the start of the project. This “randomized block” design makes causal inference a bit simpler. I look forward to my friends’ next report!
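The randomized block assignment can be sketched in a few lines. This Python snippet uses invented region and provider names, purely to illustrate the logic of assigning one member of each matched pair to coaching:

```python
import random

# Hypothetical matched pairs: within each region, two similar providers.
pairs = {
    "Region A": ("Provider 1", "Provider 2"),
    "Region B": ("Provider 3", "Provider 4"),
    "Region C": ("Provider 5", "Provider 6"),
}

def assign_blocks(pairs, seed=42):
    """Within each region (block), randomly pick one provider for coaching
    and assign the other to control."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    assignments = {}
    for region, (a, b) in pairs.items():
        coached, control = (a, b) if rng.random() < 0.5 else (b, a)
        assignments[region] = {"coached": coached, "control": control}
    return assignments

for region, arms in assign_blocks(pairs).items():
    print(region, arms)
```

Because randomization happens inside each matched pair, every region contributes both a coached provider and a comparable control, which is what simplifies the causal comparison.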