
Problems We Need To Address Before Healthcare AI Becomes A Thing

Posted on September 7, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth.

Just about everybody who’s anybody in health IT is paying close attention to the emergence of healthcare AI, and the hype cycle is in full swing. It’d be easier to tell you what proposals I haven’t seen for healthcare AI use than those I have.

Of course, just because a technology is hot and people are going crazy over it doesn’t mean they’re wrong about its potential. Enthusiasm doesn’t equal irrational exuberance. That being said, it doesn’t hurt to check in on the realities of healthcare AI adoption. Here are some issues I’m seeing surface over and over again.

The black box

It’s hard to dispute that healthcare AI can make good “decisions” when presented with the right data in the right volume. In fact, it can make them at lightning speed, taking details into account which might not have seemed important to human eyes. And on a high level, that’s exactly what it’s supposed to do.

The problem, though, is that this process may end up bypassing physicians. As things stand, healthcare AI technology is seldom designed to show how it reached its conclusions, and those conclusions may rest on completely unexpected factors. If clinical teams want to know how the artificial intelligence engine drew a conclusion, they may have to ask their IT department to dig into the system and find out. Such a lack of transparency won’t work over the long term.
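To make the transparency point concrete, here's a minimal sketch of what a clinician-facing alternative could look like: a risk flag that records which factors drove it, so nobody has to ask IT to dig into the system afterward. The thresholds and factor names below are invented for illustration, not drawn from any real clinical model.

```python
# A toy "explainable" risk flag: the conclusion ships with the reasons.
# All thresholds and factor names are illustrative assumptions.

def flag_patient(vitals):
    """Return (flagged, reasons) for a dict of patient measurements."""
    reasons = []
    if vitals.get("systolic_bp", 0) > 180:
        reasons.append("systolic_bp > 180")
    if vitals.get("hba1c", 0) > 9.0:
        reasons.append("hba1c > 9.0")
    if vitals.get("recent_admissions", 0) >= 2:
        reasons.append("recent_admissions >= 2")
    return (len(reasons) > 0, reasons)

flagged, why = flag_patient(
    {"systolic_bp": 190, "hba1c": 7.1, "recent_admissions": 3}
)
```

A real model is far more complex than a rule list, of course, but the design principle is the same: whatever the engine concludes, the influencing factors should surface alongside the conclusion rather than stay buried in the system.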

Workflow disruption

Many healthcare organizations have tweaked their EHR workflow into near-perfect shape over time. Clinicians are largely satisfied with work patterns and patient throughput is reasonable. Documentation processes seem to be in shape. Does it make sense to throw an AI monkeywrench into the mix? The answer definitely isn’t an unqualified yes.

In some situations, it may make sense for a provider to run a limited test of AI technology aimed at solving a specific problem, such as assisting radiologists with breast cancer scan interpretations. Taking this approach may create less workflow disruption. However, even a smaller test may call for a big investment of time and effort. There aren’t many best practices available yet for optimizing AI implementations, so workflow adjustments might not get enough attention. This is no small concern.

Data usability

Before an AI can do anything, it needs to chew on a lot of relevant clinical data. In theory, this shouldn’t be an issue, as most organizations have all of the digital data they need. If you need millions of care datapoints or several thousand images, they’re likely to be available. The thing is, they may not be as usable as one might hope.

While healthcare providers may have an embarrassment of data on hand, much of it is difficult to filter and mine. For example, while researchers and some isolated providers are using natural language processing to dig up useful information, critics point out that until more healthcare info is indexed and tagged there’s only so much it can do. It may take a new generation of processing and indexing technology to prepare this information before we have the right data to feed an AI.
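The indexing point is easy to illustrate: once notes carry tags, even a trivial inverted index makes them minable, which is exactly what raw free text resists. The note IDs and tag names below are made up for the sketch.

```python
# Why tagging matters: tagged notes become queryable with a simple
# inverted index (tag -> set of note IDs). All data here is invented.
from collections import defaultdict

notes = {
    "note_1": ["diabetes", "neuropathy"],
    "note_2": ["hypertension"],
    "note_3": ["diabetes", "hypertension"],
}

index = defaultdict(set)
for note_id, tags in notes.items():
    for tag in tags:
        index[tag].add(note_id)

# Every note mentioning diabetes is now one lookup away:
diabetic_notes = sorted(index["diabetes"])
```

Until the text is tagged in the first place (by humans or by NLP), no amount of indexing machinery helps, which is the bottleneck the critics are pointing at.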

These are just a few practical issues likely to arise as providers begin to use AI technologies; I’m sure there are many others you might be able to name. While I have little doubt we can work our way through such issues, they aren’t trivial, and it could take a while before we have standardized approaches in place for addressing them. In the meantime, it’s probably a good idea to experiment with AI projects and prepare for the day when it becomes more practical.

Using NLP with Machine Learning for Predictive Analytics in Healthcare

Posted on December 12, 2016 | Written By

John Lynn is the Founder of the blog network, which currently consists of 10 blogs containing over 8,000 articles, with John having written over 4,000 of the articles himself. These EMR and healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career health IT job board and blog. John is highly involved in social media, and in addition to his blogs can also be found on Twitter (@techguy and @ehrandhit) and LinkedIn.

There are a lot of elements involved in doing predictive analytics in healthcare effectively. In most cases I’ve seen, organizations working on predictive analytics do some, but not all, of what’s needed to make predictive analytics as effective as possible. This was highlighted to me when I recently talked with Frank Stearns, Executive Vice President at HBI Solutions, at the Digital Health Conference in NYC.

Here’s a great overview of the HBI Solutions approach to patient risk scores:


This process will look familiar to most people in the predictive analytics space: take all the patient data you can find, feed it into a machine learning engine and output a patient risk score. One of the biggest trends here is the real-time nature of the process. I also love the way the patient risk score includes the attributes that influenced a patient’s score. Both of these are incredibly important when trying to make this data actionable.
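The shape of that pipeline can be sketched in a few lines: features in, a risk score out, with the per-attribute contributions returned alongside it so the score is actionable. The weights, bias, and feature names below are invented for illustration; HBI Solutions' actual model is their own.

```python
# Sketch of a risk-scoring step that returns the score AND the attributes
# that drove it. Weights and features are illustrative assumptions only.
import math

WEIGHTS = {"age_over_65": 1.2, "copd": 0.9, "er_visits_last_year": 0.6}
BIAS = -2.0

def risk_score(features):
    """Logistic risk score plus per-attribute contributions, largest first."""
    contributions = {k: WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS}
    z = BIAS + sum(contributions.values())
    score = 1 / (1 + math.exp(-z))  # squash into (0, 1)
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, ranked

score, drivers = risk_score(
    {"age_over_65": 1, "copd": 1, "er_visits_last_year": 2}
)
```

The real-time aspect falls out naturally: the same function simply re-runs whenever a new lab result or encounter lands, so the score and its drivers stay current.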

However, the thing that stood out for me in HBI Solutions’ approach is the inclusion of natural language processing (NLP) in their analysis of the unstructured patient data. I’d seen NLP being used in EHR software before, but I think the application of NLP is even more powerful in predictive analytics.

In the EHR world, you have to be absolutely precise. If you’re not precise with the way you code a visit, you won’t get paid. If you’re not precise with how the diagnosis is entered into the EHR, that can have long-term consequences. This has posed a real challenge for NLP, since NLP is not 100% accurate. It’s gotten astoundingly good, but it still has shortcomings that require human review when it’s used in an EHR.

The same isn’t true when applying NLP to unstructured data for predictive analytics. Predictive analytics by its very nature incorporates some modicum of variation and error. It’s understood that a prediction could be wrong; it’s an indication of risk, not a definitive finding. Certainly a failure in NLP’s recognition of certain data could throw off a prediction. That’s unfortunate, but predictive analytics aren’t relied on the way documentation in an EHR is relied upon. So it’s not nearly as big of a deal.

Plus, the value of applying NLP to pull out the nuggets of information that exist in the unstructured narrative sections of healthcare data is well worth the small risk of the NLP being incorrect. As Frank Stearns from HBI Solutions pointed out to me, the unstructured data is often where the really valuable information about a patient’s risk exists.

I’d be interested in having HBI Solutions do a study of the whole list of findings that are often available in the unstructured data that weren’t available otherwise. However, it’s not hard to imagine a doctor documenting patient observations in the unstructured EHR narrative that they didn’t want to include as a formal diagnosis. Not the least of these are behavioral health observations that the doctor saw, observed, and documented but didn’t want to fully diagnose. NLP can pull these out of the narrative and include them in the patient’s risk score.
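As a crude stand-in for what such NLP does, here's a sketch that scans free-text narrative for behavioral-health mentions and emits binary features a risk model could consume. Real clinical NLP pipelines are far more sophisticated; the phrase list and feature names here are invented, and the imprecision of this approach is exactly the kind of error predictive analytics can tolerate.

```python
# Toy narrative feature extraction: pattern-match free text into 0/1
# features for a risk model. Phrases and feature names are invented.
import re

PATTERNS = {
    "anxiety_mention": re.compile(r"\banxi(?:ous|ety)\b", re.IGNORECASE),
    "insomnia_mention": re.compile(r"\b(?:insomnia|trouble sleeping)\b",
                                   re.IGNORECASE),
}

def narrative_features(note_text):
    """Return {feature: 0/1} for each pattern found in the note."""
    return {name: int(bool(p.search(note_text)))
            for name, p in PATTERNS.items()}

features = narrative_features(
    "Patient reports trouble sleeping and seems anxious; "
    "no formal diagnosis made."
)
```

Notice that the note never states a diagnosis, which is the point: the observation lives only in the narrative, and only something like NLP can surface it for the risk score.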

Given this perspective, it’s hard to imagine we’ll ever be able to get away from using NLP or related technology to pull out the valuable insights in the unstructured data. Plus, it’s easy to see how predictive analytics that don’t use NLP are going to be deficient when trying to use machine learning to analyze patients. What’s amazing is that HBI Solutions has been applying machine learning to healthcare for five years. That’s a long time in this space, and it explains why they’ve implemented advanced capabilities like NLP in their predictive analytics solutions.