Sunday, February 07, 2016

There is no Holy Grail, just small chalices

Given the stakes to society and the persistent growth in health care delivery costs throughout the developed nations, there is an understandable desire to achieve the “breakthrough” technological solutions that would substantially disrupt the diagnostic and treatment practices and patterns that have evolved over the decades. Well-intentioned and intelligent people with thoughtful ideas are focused on ways to achieve these solutions. Investors, seeing the large (and growing) percentage of each nation’s GDP that is devoted to health care, likewise hunger for the opportunity to grab even a small portion of that wealth.

As I noted in a blog post last year, an area that consumes tremendous energy is the search for the Holy Grail of decision support products that would mine health care “big data.” People are looking for algorithms that could help doctors, in real time, analyze the condition of patients and put in place more efficient and efficacious diagnostic regimens and treatment modalities. I explained in that blog post why these efforts will fail. Let me summarize:

1 -- The data that is collected is not reliable enough to draw connections between patient characteristics, clinical decisions, and outcomes. It is unreliable for two reasons. First, much of the data collected and/or coded in hospitals and physician practices is recorded poorly or in a format that is not clinically accurate. Second, it is likely to be characterized by such wide standard deviations as to make it unsuitable for predictive purposes. (A small numerical sketch of that second point follows this list.)

2 -- It is unlikely that the algorithms that are designed to produce work rules will be trusted by doctors. In part, this is due to the standard deviation problem noted above; that is, the models will not be sufficiently rigorous in their predictive capacity. Maybe more important, there is a general lack of trust on the part of doctors with regard to using formulaic approaches in their practices. While doctors are the victims of many kinds of cognitive errors (diagnostic anchoring, confirmation bias, and the like), they are often not trained to reflect on and catch these biases. They are trained instead to trust their own judgment and take personal responsibility for their patients. Only a small minority of doctors would be able to overcome those biases and that training to use big-data-driven decision support tools, even if such tools were able to overcome the statistical difficulties mentioned above.

3 -- The process for selling such systems into the hospital market is complex and almost infinitely slow. The sales cycle will kill off all but the most highly capitalized firms. Even excellent products will often wither and die on the vine.
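To make the standard deviation point in item 1 concrete, here is a minimal sketch using made-up numbers, not drawn from any real clinical dataset. Two hypothetical treatments differ in their average outcome, yet the spread within each group is so wide that the "better" treatment only beats the "worse" one a little more than half the time for any individual patient, which is roughly the situation a predictive algorithm built on such data would face.

```python
# Toy illustration (invented numbers, not real clinical data): two treatments
# whose *average* outcomes differ, but whose patient-to-patient spread is wide.
# A model built on such data can rank treatments on average while still being
# a poor predictor for any individual patient.
import random

random.seed(0)

N = 100_000
# Hypothetical outcome scores (higher is better); means differ by 5 points,
# but the standard deviation within each arm is 20 points.
treatment_a = [random.gauss(mu=55, sigma=20) for _ in range(N)]
treatment_b = [random.gauss(mu=50, sigma=20) for _ in range(N)]

mean_a = sum(treatment_a) / N
mean_b = sum(treatment_b) / N

# How often does a randomly chosen patient on A actually do better than one on B?
wins = sum(a > b for a, b in zip(treatment_a, treatment_b))

print(f"mean A = {mean_a:.1f}, mean B = {mean_b:.1f}")                 # ~55 vs ~50
print(f"P(individual on A beats individual on B) = {wins / N:.2f}")    # ~0.57
```

With these invented numbers the means differ clearly, yet a patient on the "better" arm outperforms a patient on the other arm only about 57 percent of the time; averages that look decisive at the population level can be nearly useless as individual predictions.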

Does this suggest that there is no potential for disruptive technologies that can improve health care delivery at a reduced cost? No, but it suggests that there is no Holy Grail, only a group of smaller, potentially jewel-encrusted chalices. Targeted innovation is the way to go. Think small, think focused, and think about how to achieve quick results that benefit the doctor, the patient, and the hospital.

Wait, did I just put the doctor first on that list? The Ptolemaic health care system has the doctor at the center of the solar system, and it will be that way for a long time to come. Unless your product helps the doctor feel that they are doing a better job and fits into their workflow, it’s not worth pursuing.

I’ll provide an example that originated in Melbourne, produced by a firm called Global Kinetics. The approach is described in this article. A Parkinson’s patient wears a simple device on their wrist for a week or two. The accelerometer in the device records the extent of the patient’s movement disorder, which is then correlated with the drug doses they have taken. (The “watch” also, by the way, reminds the patient to take the drug at the specified times, leading to a higher level of adherence and providing a higher level of precision to the experiment.) The report is transmitted to a standard hand-held device, using a patient code that is fully privacy protected.
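To illustrate the kind of correlation such a report might express, here is a purely hypothetical sketch; it is not Global Kinetics’ actual algorithm, and the scores, timestamps, and dose log are invented. The idea is simply to bucket a wrist-sensor movement score by the time elapsed since the last logged dose.

```python
# Hypothetical sketch only: NOT Global Kinetics' algorithm. It relates an
# invented wrist-worn movement score to the time elapsed since the last
# recorded dose, the kind of pattern a clinician's report could summarize.
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical hourly movement-disorder scores from the wrist sensor
# (higher = more severe symptoms), keyed by timestamp.
scores = {
    datetime(2016, 2, 1, 8):  4.1,
    datetime(2016, 2, 1, 9):  2.0,
    datetime(2016, 2, 1, 10): 2.4,
    datetime(2016, 2, 1, 11): 3.5,
    datetime(2016, 2, 1, 12): 4.8,
    datetime(2016, 2, 1, 13): 2.1,
    datetime(2016, 2, 1, 14): 2.6,
}

# Times at which the watch reminded the patient and a dose was logged.
doses = [datetime(2016, 2, 1, 8, 30), datetime(2016, 2, 1, 12, 30)]

# Bucket each score by whole hours since the most recent dose.
by_hours_since_dose = defaultdict(list)
for t, score in scores.items():
    prior = [d for d in doses if d <= t]
    if not prior:
        continue  # no dose yet; skip this reading
    hours = int((t - max(prior)) / timedelta(hours=1))
    by_hours_since_dose[hours].append(score)

# A crude summary: average severity as a function of time since the last dose.
for hours in sorted(by_hours_since_dose):
    print(f"{hours}h after dose: mean score {mean(by_hours_since_dose[hours]):.1f}")
```

In this toy data the score climbs as the hours since the last dose accumulate, the sort of “wearing off” pattern a neurologist could use to adjust dose size or timing.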

The technology and the reports produced by this approach do not substitute for the judgment of the neurologist. Rather, that judgment, previously based on trial and error, is enhanced by a real-time, patient-specific experiment. The process can be repeated as often as the doctor deems necessary: more often for a patient suffering a rapid deterioration from the disease, and less often for a more stable patient.

The device is not bought by the hospital, so it bypasses the highly competitive capital budgeting process. Instead, the product is provided as part of a service offering: the test result delivered to the doctor. The fee for each report is well within the normal operating budget of the neurology department, requiring no special allocation of funds. In short, acceptance simply requires a decision by the doctors themselves.

I offer this as a perfect example of a jeweled chalice. Simple hardware and software technologies; easily incorporated into the doctors' workflow; enhancing their ability to exercise professional judgment; and offered in a sales process that does not create competition among hospital factions and is consistent with normal budget processes. It is by this path that technology can disrupt health care—one carefully designed step at a time.

2 comments:

Lucien Engelen said...

Paul, briefly, as per your request:
1. This is true for professionals' data, but don't forget the stream of patient-generated data that is coming up.
2. Once it has been proven that (some of those) algorithms DO work and are even better than the doctors, they WILL have to trust them.
3. Agreed, although there is a whole new channel coming up via crowdfunding (e.g., apps).


In general: the patient will own more of his own data (producing it himself), granting access to the professional by subscription. Big, secure platforms (like the HealthSuite Digital Platform from Philips) will take over much of the now-incompetent data gathering and make sense of it, or else expose the faulty data.

gr Lucien

Anonymous said...

Re Lucien's comment #2:
I'm not sure that it's true to say that doctors will realize proven algorithms work better and will thus trust them. After all, there's very strong evidence for such things as central line protocols and use of VAP bundles, yet adoption of these kinds of evidence-based practices is far from universal. Why would adoption of algorithms be different?

m.e.