Track and market outcome results
January 28, 2015
Private practitioners are continually challenged to come up with new ways of attracting clients. Rather than learning trendy new treatments to lure new customers, imagine being able to measure and market the quality of your services more effectively.
Few clinicians have ventured into this powerful way to position a private practice: focusing on improving and reporting one’s clinical outcomes. This may sound audacious, but some agencies and insurance companies are already working on it, and it is likely that within a generation it will simply be expected that therapists be this accountable.
Some therapists have tried an alternative to marketing their own outcomes: They emphasize their use of empirically supported techniques – or, to coin a phrase, other people’s outcomes. The day is coming when psychologists will consider it unethical to use such marketing to establish the effectiveness of their services, because (1) the evidence is already strong that empirically supported techniques are not differentially effective from treatment-as-usual when studies are methodologically rigorous and (2) individual therapists vary widely in their effectiveness.
Measuring one’s own results is more empirically sound and more effective, and it sidesteps the quagmire of online ratings.
The ethical impetus is unmistakable: Regardless of the treatment techniques used, therapists who continually measure client-reported well-being (and therapeutic alliance) obtain better outcomes and a lower dropout rate than clinicians who do not formally collect and use this feedback.
Outcomes tracking has become commonplace in hospitals where budgets and personnel make it easier to implement and use the results to improve patient outcomes. And, while most psychologists are trained in the basic methods required to track their own results, they may lack information about how to do so quickly, validly and inexpensively.
10 rules for getting started
1. Use validated client-rated measures of general well-being and treatment alliance that are cheap, ultra-brief and easy to score. Longer measures provide more clinical information but are unlikely to be used session-by-session. Don’t skip the alliance measure: Most therapists are overconfident about their ability to detect and respond to early breaches that lead to dropout.
2. Join a community of excellence to learn about the research and how to use outcome and alliance data to clients’ best advantage – www.centerforclinicalexcellence.com is a large one (4,000 members), and it’s free.
3. Try it now, with every new intake. A little experience with this sort of feedback does much more to convince most providers than all the academic or empirical arguments in favor of it.
4. Don’t get carried away by predictive models that alert clinicians to therapy that is going off track. Software-based alerts can be useful and correct in general, but also “dumb” on an individual level. However, learning the predictors of poor results (e.g., poor alliances that don’t improve quickly or lack of early improvement in well-being) can be invaluable for improving therapy quality.
5. Outcome and alliance measurement are pan-theoretical (though they frequently influence treatment). They can be integrated into psychoanalysis, group therapy, couples therapy, play therapy or short-term therapy.
6. Implement the process with extreme rigor or don’t do it at all. If you don’t buy the power and importance of this kind of client collaboration and do it with total integrity, others won’t buy it either. You get one shot to do this right: Use standards of measurement and reporting that are beyond reproach.
7. Expect a range of responses from clients about measuring clinical change, from extreme gratitude to skepticism. Few clients will refuse to track how things are going if you convey that you believe it is likely to improve therapy.
8. Referrers and clients often care more about the fact that we want the feedback than about what our results actually are. It communicates a lot when a therapist wants the truth more than self-satisfaction.
9. Understand the statistical nuances, but keep it simple. If your reporting is too complex, no one will understand your outcomes. The two simplest concepts for describing treatment effectiveness are derived from pre/post effect size and clinically significant change. Learn about them.
10. Wait until you have at least 30 completed cases before publishing results (to ensure some stability in your outcomes). Keep your data current and keep up with the evolving research on psychotherapy outcomes.
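The two statistics named in rule 9 can be computed in a few lines. The sketch below uses hypothetical pre/post scores on a well-being measure (higher = better), and the reliability coefficient of 0.85 is an assumption – substitute the published reliability of whatever instrument you actually use. The reliable change index here follows the common Jacobson-Truax formulation of clinically significant change.

```python
# Sketch of rule 9's two statistics with hypothetical data.
from statistics import mean, stdev

pre  = [12, 15, 9, 14, 11, 16, 10, 13]   # intake scores (hypothetical)
post = [19, 22, 14, 21, 15, 24, 12, 20]  # final-session scores (hypothetical)

# 1) Pre/post effect size (Cohen's d): mean change divided by the
#    standard deviation of the pre-treatment scores.
d = (mean(post) - mean(pre)) / stdev(pre)

# 2) Clinically significant change (Jacobson-Truax): a client has
#    reliably improved if their change exceeds the Reliable Change
#    Index threshold at the 95% confidence level.
reliability = 0.85                          # assumed instrument reliability
se_measure = stdev(pre) * (1 - reliability) ** 0.5
rci_threshold = 1.96 * se_measure * 2 ** 0.5

reliably_improved = sum(1 for a, b in zip(pre, post) if b - a > rci_threshold)

print(f"Pre/post effect size d = {d:.2f}")
print(f"Reliable change threshold = {rci_threshold:.2f} points")
print(f"{reliably_improved} of {len(pre)} clients reliably improved")
```

A simple report built on these two numbers – an average effect size plus the proportion of clients showing reliable improvement – is exactly the kind of plain-language summary that referrers can actually understand.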
More from The National Psychologist: subscribe today!