Problems We Need To Address Before Healthcare AI Becomes A Thing

Posted on September 7, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

Just about everybody who’s anybody in health IT is paying close attention to the emergence of healthcare AI, and the hype cycle is in full swing. It’d be easier to tell you what proposals I haven’t seen for healthcare AI use than those I have.

Of course, just because a technology is hot and people are going crazy over it doesn’t mean they’re wrong about its potential. Enthusiasm doesn’t equal irrational exuberance. That being said, it doesn’t hurt to check in on the realities of healthcare AI adoption. Here are some issues I see surfacing over and over again.

The black box

It’s hard to dispute that healthcare AI can make good “decisions” when presented with the right data in the right volume. In fact, it can make them at lightning speed, taking into account details that might not have seemed important to human eyes. And on a high level, that’s exactly what it’s supposed to do.

The problem, though, is that this process may end up bypassing physicians. As things stand, healthcare AI technology is seldom designed to show how it reached its conclusions, and those conclusions may rest on completely unexpected factors. If clinical teams want to know how the artificial intelligence engine drew a conclusion, they may have to ask their IT department to dig into the system and find out. Such a lack of transparency won’t work over the long term.
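To make that concrete, here’s a minimal sketch (my own illustration, not any vendor’s actual system) of the kind of transparency clinicians would want at a bare minimum: a model that can at least report which inputs drove its behavior. The feature names and data below are made up, and a real deployment would need per-patient explanations alongside each prediction, not just a global ranking.

```python
# Minimal sketch (illustrative only): surfacing which inputs drove a
# model's behavior so clinicians aren't left with a black box.
# Feature names and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["age", "bmi", "systolic_bp", "hba1c"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
# Synthetic outcome driven mostly by hba1c and systolic_bp
y = (X[:, 3] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank the inputs by global importance; a real system would pair this
# with per-patient explanations (e.g. SHAP values) for each prediction.
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

Even a simple report like this would let a clinical team sanity-check a model without filing a ticket with the IT department.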

Workflow

Many healthcare organizations have tweaked their EHR workflow into near-perfect shape over time. Clinicians are largely satisfied with their work patterns, patient throughput is reasonable, and documentation processes seem to be in good shape. Does it make sense to throw an AI monkey wrench into the mix? The answer definitely isn’t an unqualified yes.

In some situations, it may make sense for a provider to run a limited test of AI technology aimed at solving a specific problem, such as assisting radiologists with breast cancer scan interpretations. Taking this approach may create less workflow disruption. However, even a smaller test may call for a big investment of time and effort: there aren’t many best practices available yet for optimizing AI implementations, so workflow adjustments might not get enough attention. This is no small concern.

Data

Before an AI can do anything, it needs to chew on a lot of relevant clinical data. In theory, this shouldn’t be an issue, as most organizations have all of the digital data they need. If you need millions of care datapoints or several thousand images, they’re likely to be available. The thing is, they may not be as usable as one might hope.

While healthcare providers may have an embarrassment of data on hand, much of it is difficult to filter and mine. For example, while researchers and some isolated providers are using natural language processing to dig up useful information, critics point out that until more healthcare information is indexed and tagged, there’s only so much NLP can do. It may take a new generation of data processing and indexing technology before we have the right data to feed an AI.
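As a rough illustration of what that indexing and tagging step might look like, here’s a minimal sketch that matches free-text notes against a small term dictionary so they can be filtered later. Everything here is invented: the terms, codes, and notes are hypothetical, and real clinical NLP pipelines would handle negation, abbreviations, and standard vocabularies such as UMLS.

```python
# Minimal sketch (illustrative only): tagging free-text notes against a
# small term dictionary so they can be indexed and filtered later.
# Terms, codes, and notes are all made up.
import re
from collections import defaultdict

TERMS = {"hypertension": "I10", "diabetes": "E11", "asthma": "J45"}  # hypothetical codes

notes = [
    (1, "Pt with long-standing hypertension, diabetes well controlled."),
    (2, "Asthma exacerbation resolved; no evidence of hypertension."),
]

index = defaultdict(list)  # tag -> list of note ids
for note_id, text in notes:
    for term, code in TERMS.items():
        # Naive keyword match; note 2 shows the limitation, since
        # "no evidence of hypertension" still matches. Real pipelines
        # need negation detection to avoid exactly this false positive.
        if re.search(rf"\b{term}\b", text, re.IGNORECASE):
            index[f"{term} ({code})"].append(note_id)

for tag, ids in index.items():
    print(tag, "->", ids)
```

Crude as this is, it shows the point: once notes carry tags, they can be indexed and mined, and until something plays that role at scale, even good NLP has little to grab onto.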

These are just a few of the practical issues likely to arise as providers begin to use AI technologies; I’m sure you could name many others. While I have little doubt we can work our way through such issues, they aren’t trivial, and it could take a while before we have standardized approaches in place for addressing them. In the meantime, it’s probably a good idea to experiment with AI projects and prepare for the day when the technology becomes more practical.