
The Distributed Hospital On The Horizon

Posted on February 24, 2017 | Written By

Anne Zieger is a veteran healthcare editor and analyst with 25 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

If you’re reading this blog, you already know that distributed, connected devices and networks are the future of healthcare.  Connected monitoring devices are growing more mature by the day, network architectures are becoming amazingly fluid, and with the growth of the IoT, we’re adding huge numbers of smart devices to an already-diverse array of endpoints.  While we may not know what all of this will look like when it’s fully mature, we’ve already made amazing progress in connecting care.

But how will these trends play out? One nice look at where all this is headed comes from Jeroen Tas, chief innovation and strategy officer at Philips. In a recent article, Tas describes a world in which even major brick-and-mortar players like hospitals go almost completely virtual.  Certainly, there are other takes out there on this subject, but I really like how Tas explains things.

He starts with the assertion that the hospital of the future “is not a physical location with waiting rooms, beds and labs.” Instead, a hospital will become an abstract network overlay connecting nodes. It’s worth noting that this isn’t just a concept. As an example, Tas points to the Mercy Virtual Care Center, a $54 million “hospital without beds” dedicated to telehealth and connected care.  The Center, which has over 300 employees, cares for patients at home and in beds across 38 hospitals in seven states.

While the virtual hospital may not rely on a single, central campus, physical care locations will still matter – they’ll just be distributed differently. According to Tas, the connected health network will work best if care is provided as needed through retail-type outlets near where people live, specialist hubs, inpatient facilities and outpatient clinics. Yes, of course, we already have all of these things in place, but in the new connected world, they’ll all be on a single network.

Ultimately, even if brick-and-mortar hospitals never disappear, virtual care should make it possible to cut down dramatically on hospital admissions, he suggests.  For example, Tas notes that Philips partner Banner Health has slashed hospital admissions almost 50% by using telehealth and advanced analytics for patients with multiple chronic conditions. (We’ve also reported on a related pilot by Partners HealthCare’s Brigham and Women’s Hospital, the “Home Hospital,” which sends patients home with remote monitoring devices as an alternative to admissions.)

Of course, the broad connected care outline Tas offers can only take us so far. It’s all well and good to have a vision, but there are still some major problems we’ll have to solve before connected care becomes practical as a backbone for healthcare delivery.

After all, to cite one major challenge, community-wide connected health won’t be very practical until interoperable data sharing becomes easier – and we really don’t know when that will happen. Also, until big data analytics tools are widely accessible (rather than the province of the biggest, best-funded institutions) it will be hard for providers to manage the data generated by millions of virtual care endpoints.

Still, if Tas’s piece is any indication, consensus is building on what next-gen care networks can and should be, and there are certainly plenty of ways to lay the groundwork for the future. Even small-scale, preliminary connected health efforts seem to be fostering meaningful changes in how care is delivered. And there’s little doubt that over time, connected health will turn many brick-and-mortar care models on their heads, becoming a large – or even dominant – part of care delivery.

Getting there may be tricky, but if providers keep working at connected care, it should offer an immense payoff.

Is Your Current Analytics Infrastructure Keeping You From Success in Healthcare Analytics?

Posted on February 17, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

The following is a paid blog post sponsored by Intel.

Healthcare analytics is all the talk in healthcare right now.  It’s really no surprise since many have invested millions and even billions of dollars in digitizing their health data.  Now they want to extract value from that data.  No doubt, the promise of healthcare analytics is powerful.  I like to break this promise out into two categories: Patient Analysis and Patient Influence.

Patient Analysis

On the one side of healthcare analytics is analyzing your patient population to pull reports on patients who need extra attention.  In some cases, these patients are the most at-risk portions of your population with easy-to-identify disease states.  In other cases, they’re the most expensive portion of your population.  Both of these are extremely powerful analytics as your healthcare organization works to improve patient care and lower costs.
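
To make this concrete, here’s a minimal sketch in Python (with pandas) of the kind of queries behind such reports. Everything here (the table, the column names, the code set) is hypothetical, not drawn from any particular system:

```python
import pandas as pd

# Hypothetical per-patient summary; every name here is illustrative only.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "annual_cost": [1200, 54000, 300, 18000, 92000],
    "diagnoses": [["I10"], ["E11", "I50"], [], ["J44"], ["E11", "N18"]],
})

# Easy-to-identify chronic conditions (ICD-10: diabetes, heart failure, CKD).
HIGH_RISK_CODES = {"E11", "I50", "N18"}

# Report 1: the most expensive 20% of the population.
cost_cutoff = patients["annual_cost"].quantile(0.80)
most_expensive = patients[patients["annual_cost"] >= cost_cutoff]

# Report 2: patients carrying any of the high-risk diagnoses.
at_risk = patients[patients["diagnoses"].apply(
    lambda codes: bool(HIGH_RISK_CODES.intersection(codes)))]

print(most_expensive["patient_id"].tolist())  # [5]
print(at_risk["patient_id"].tolist())         # [2, 5]
```

Real population health reporting layers risk adjustment and clinical nuance on top, but these two cuts, highest cost and identifiable disease states, are where most organizations start.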

An even higher level of patient analysis is using healthcare analytics to identify patients who don’t seem to be at risk, but whose health is in danger.  These predictive analytics are much more difficult to create because by their very nature they’re imperfect.  However, this is where the next generation of patient analysis is going very quickly.

Patient Influence

On the other side of healthcare analytics is using patient data to influence patients.  Patient influence analytics can tell you simple things like which communication modality a patient prefers.  On an individual level, this tells you whether to send an email, send a text, or make a phone call; on the macro level, it can drive the types of technologies you buy and the content you create.

Higher level patient influence analytics take it one step further as they analyze a patient’s unique preferences and what influences the patient’s healthcare decision making.  This often includes pulling in outside consumer data that helps you understand and build a relationship with the patient.  This analytic might tell you that the patient is a huge sports fan and which team is their favorite.  It might also tell you that this person has a type A personality.  Together these analytics can inform you of the most appropriate ways to interact with and influence the patient.

What’s Holding Healthcare Analytics Back?

Both of these healthcare analytics approaches have tremendous promise, but many such efforts are being held back by the healthcare organization’s current analytics infrastructure.

The first problem many organizations have is where they are storing their data.  I’d describe their data as being stored in virtual prisons.  We need to unlock this data and free it so that it can be used in healthcare analytics.  If you can’t get at the data within your own organization, how can you even start talking about all the health data being stored outside your organization’s four walls?  Plus, we need to invest in the right storage that can support the growth of this data.  If you don’t solve these data access and storage pieces, you’ll miss out on a lot of the benefits of healthcare analytics.

Second, do you trust your data?  Most hospital CIOs I talk to usually respond, “Mostly.”  If you can’t trust your data, you can’t trust your analytics.  A fundamental building block of successful analytics is building trust in your data.  This starts by implementing effective workflows that capture the data properly on the front end.
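
A practical first step toward that trust is profiling the data you already have. Here’s a minimal sketch in Python (with pandas) of the kind of sanity checks that surface untrustworthy records; the file and field names are hypothetical:

```python
import pandas as pd

# Hypothetical vitals extract; the file and column names are illustrative.
vitals = pd.read_csv("vitals_extract.csv")

checks = {
    "missing patient IDs": vitals["patient_id"].isna().sum(),
    "exact duplicate rows": vitals.duplicated().sum(),
    "implausible heart rates": ((vitals["heart_rate"] < 20) |
                                (vitals["heart_rate"] > 300)).sum(),
    "timestamps in the future": (pd.to_datetime(vitals["recorded_at"]) >
                                 pd.Timestamp.now()).sum(),
}

for name, count in checks.items():
    print(f"{name}: {count}")
```

Each nonzero count points back to a front-end workflow that needs fixing, which is exactly where trust gets built.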

Next, do you have the processing power required to process all these analytics and data?  Healthcare analytics in many healthcare organizations reminds me of the old days when graphic designers and video producers would have to wait hours for graphics programs to load or videos to render.  Eventually we learned not to skimp on processing power for those tasks.  We need to learn the same lesson with healthcare analytics.  Certainly the cloud makes this easier, but far too often we underfund the processing power needed for these projects.

Finally, all the processing power in the world won’t help if you don’t have your most important piece of analytics infrastructure: people.  No doubt, finding experienced people in healthcare data analytics is a challenge.  It is the hardest thing to do on this list since it is very competitive and very expensive.  The good news is that if you solve the other problems above, then you become an attractive place for these experts to work.

In your search for a healthcare analytics expert, you can likely find a data expert.  You can find a clinical expert.  You can find an EHR expert.  Someone who can work across all three is the Holy Grail and nearly impossible to find.  This is why in most organizations healthcare analytics is a team sport.  As you build your infrastructure of healthcare analytics people, make sure they are solid team players.

It’s time we start getting more value out of our EHR and health IT systems.  Analytics is one of those tools that will get us there.  Just be sure that your current infrastructure isn’t holding you back from achieving those goals.

If this topic interests you and you’ll be at HIMSS 2017, join us at the Intel Health Booth #2661 on Tuesday, 2/21 from 2:00-2:45 PM where we’ll be holding a special meetup to discuss Getting Ready for Precision Health.  This meetup will also be available virtually via Periscope on the @IntelHealth Twitter account.

An Approach For Privacy-Protecting Big Data

Posted on February 6, 2017 | Written by Anne Zieger

There’s little doubt that the healthcare industry is zeroing in on some important discoveries as providers and researchers mine collections of clinical and research data. Big data does come with some risks, however, with some observers fearing that aggregated and shared information may breach patient privacy. But at least one study suggests that patients can be protected without interrupting data collection.

In what its authors call a first, a new study appearing in the Journal of the American Medical Informatics Association has demonstrated that protecting the privacy of patients can be done without too much fuss, even when the patient data is pulled into big data stores used for research.

According to the study, a single patient anonymization algorithm can offer a standard level of privacy protection across multiple institutions, even when they are sharing clinical data back and forth. Researchers say that larger clinical datasets can protect patient anonymity without generalizing or suppressing data in a manner which would undermine its use.

To conduct the study, researchers set a privacy adversary out to beat the system. This adversary, who had collected patient diagnoses from a single unspecified clinic visit, was asked to match them to a record in a de-identified research dataset known to include the patient. To conduct the study, researchers used data from Vanderbilt University Medical Center, Northwestern Memorial Hospital in Chicago and Marshfield Clinic.

The researchers knew that according to prior studies, the more data associated with each de-identified record, and the more complex and diverse the patient’s problems, the more likely it was that their information would stick out from the crowd. And that would typically force managers to generalize or suppress data to protect patient anonymity.

In this case, the team hoped to find out how much generalization and suppression would be necessary to protect identities found within the three institutions’ data, and afterward, whether the protected data would ultimately be of any use to future researchers.

The team processed relatively small datasets from each institution representing patients in a multi-site genotype-disease association study; larger datasets to represent patients in the three institutions’ banks of de-identified DNA samples; and large sets which stood in for each institution’s EMR population.

Using the algorithm they developed, the team found that most of the data’s value was preserved despite the occasional need for generalization and suppression. In the small study datasets, an average of 12.8% of diagnosis codes needed generalization; the medium-sized biobank models saw only 4% of codes needing generalization; and among the large databases representing EMR populations, only 0.4% needed generalization and no codes required suppression.
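
The paper’s actual algorithm is more involved, but the core generalization-and-suppression idea is easy to illustrate. In this toy Python sketch, a patient whose diagnosis profile is shared by fewer than k records is first generalized to coarser codes, and suppressed only if it still stands out; the ICD-9-style codes are made up for the example:

```python
from collections import Counter

def generalize(code):
    """Drop the sub-classification, e.g. ICD-9-style '250.13' -> '250'."""
    return code.split(".")[0]

def profile(record):
    return tuple(sorted(record))

def anonymize(records, k=2):
    """Toy k-anonymity pass: generalize rare diagnosis profiles,
    then suppress any profile that still fails to blend in."""
    counts = Counter(profile(r) for r in records)
    generalized = [r if counts[profile(r)] >= k
                   else sorted({generalize(c) for c in r})
                   for r in records]

    counts = Counter(profile(r) for r in generalized)
    return [r if counts[profile(r)] >= k else []   # [] = fully suppressed
            for r in generalized]

# Three unique profiles collapse into one shared, coarser profile.
records = [["250.13", "401.9"], ["250.00", "401.9"], ["250.20", "401.9"]]
print(anonymize(records))  # [['250', '401'], ['250', '401'], ['250', '401']]
```

The study’s encouraging result, in effect, is that with a smarter algorithm and large enough datasets, very few records ever need this treatment.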

More work like this is clearly needed as the demand for large-scale clinical, genomic and transactional datasets grows. But in the meantime, this seems to be good news for budding big data research efforts.

UCSF Partners With Intel On Deep Learning Analytics For Health

Posted on January 30, 2017 | Written by Anne Zieger

UC San Francisco’s Center for Digital Health Innovation has agreed to work with Intel to deploy and validate a deep learning analytics platform. The new platform is designed to help clinicians make better treatment decisions, predict patient outcomes and respond quickly in acute situations.

The Center’s existing projects include CareWeb, a team-based collaborative care platform built on Salesforce.com social and mobile communications tech; Tidepool, which is building infrastructure for next-gen smart diabetes management apps; Health eHeart, a clinical trials platform using social media, mobile and real-time sensors to change heart disease treatment; and Trinity, which offers “precision team care” by integrating patient data with evidence and multi-disciplinary data.

These projects seem to be a good fit with Intel’s healthcare efforts, which are aimed at helping providers succeed at distributed care communication across desktop and mobile platforms.

As the two note in their joint press release, creating a deep learning platform for healthcare is extremely challenging, given that the relevant data is complex and stored in multiple incompatible systems. Intel and UCSF say the next-generation platform will address these issues, allowing them to integrate not only data collected during clinical care but also inputs from genomic sequencing, monitors, sensors and wearables.

Supporting all of this activity obviously calls for a lot of computing power. The partners will run deep learning use cases in a distributed fashion on a CPU-based cluster designed to crunch through very large datasets handily. Intel is rolling out the computing environment on its Xeon processor-based platform, which supports data management and the algorithm development lifecycle.

As the deployment moves forward, Intel leaders plan to study how deep learning analytics and machine-driven workflows can optimize clinical care and patient outcomes, and leverage what they learn when they create new platforms for the healthcare industry. Both partners believe that this model will scale for future use case needs, such as larger convolutional neural network models, artificial networks patterned after living organisms and very large multidimensional datasets.

Once implemented, the platform will allow users to conduct advanced analytics on all of this disparate data, using machine learning and deep learning algorithms. And if all performs as expected, clinicians should be able to draw on these advanced capabilities on the fly.

This looks like a productive collaboration. If nothing else, it appears that in this case the technology platform UCSF and Intel are developing may be productized and made available to other providers, which could be very valuable. After all, while individual health systems (such as Geisinger) have the resources to kick off big data analytics projects on their own, it’s possible a standardized platform could make such technology available to smaller players. Let’s see how this goes.

Searching for Disruptive Healthcare Innovation in 2017

Posted on January 17, 2017 | Written By

Colin Hung is the co-founder of the #hcldr (healthcare leadership) tweetchat, one of the most popular and active healthcare social media communities on Twitter. Colin is a true believer in #HealthIT, social media and empowered patients. Colin speaks, tweets and blogs regularly about healthcare, technology, marketing and leadership. He currently leads the marketing efforts for @PatientPrompt, a Stericycle product. Colin’s Twitter handle is: @Colin_Hung

Disruptive Innovation has been the brass ring for technology companies ever since Clayton Christensen popularized the term in his seminal book The Innovator’s Dilemma in 1997. According to Christensen, disruptive innovation is:

“A process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.”

Disruption is more likely to occur, therefore, when you have a well-established market with large, slow-moving incumbents who are focused on incremental improvements rather than truly innovative offerings. By this definition, healthcare has been ripe for disruption for a number of years. But where is the Airbnb/Uber/Google of healthcare?

On a recent #hcldr tweetchat we asked what disruptive healthcare technologies might emerge in 2017. By far the most popular response was Artificial Intelligence (AI) and Machine Learning.

Personally, I’m really excited about the potential of AI applied to diagnostics and decision support. There is just no way a single person can stay up to speed on all the latest clinical research while simultaneously remembering every symptom/diagnosis from the past. I believe that one day we will all be using AI assistance to guide our care – as commonly as we use GPS today to help navigate unknown roads.

Some #hcldr participants, however, were skeptical of AI.

While I don’t think @IBMWatson is on the same trajectory as Theranos, there is merit to being wary of “over-hype” when it comes to new technologies. When a shining star like Theranos falls, it can set an entire industry back and stifle innovation in an area that may warrant investment. Can you imagine seeking funding for a technology that uses small amounts of blood to detect diseases right now? Too much hype can prematurely kill innovation.

Other potentially disruptive technologies that were raised during the chat included: #telehealth, #wearables, patient generated health data (#PGHD), combining #HealthIT with consumer services, and #patientengagement.

The funniest and perhaps most thoughtful tweet came from @YinkaVidal, who warned us that innovations have a window of usefulness. What was once ground-breaking can be rendered junk by the next generation.

What do you believe will be the disruptive healthcare technology to emerge in 2017?

A Look At Geisinger’s Big Data Efforts

Posted on December 28, 2016 | Written by Anne Zieger

This week I got a look at a story in a recent issue of Harvard Business Review describing Geisinger Health System’s recent big data initiatives. The ambitious project is designed not only to track and analyze patient outcomes, but also to visualize healthcare data across cohorts of patients and networks of providers, and even to correlate genomic sequences with clinical care. Particularly given that Geisinger has stayed on the cutting edge of HIT for many years, I think it’s worth a look.

As the article’s authors note, Geisinger rolled out a full-featured EMR in 1996, well ahead of most of its peers. Yet like many other health systems, Geisinger has struggled to aggregate and make use of data, particularly because the legacy analytics systems still in place can’t accommodate the growing flood of new data types emerging today.

Last year, Geisinger decided to create a new infrastructure which could bring this data together. It implemented a Unified Data Architecture (UDA), allowing it to integrate big data into its existing data analytics and management.  According to the article, Geisinger’s UDA rollout is the largest practical application of point-of-care big data in the industry. Of particular note, Geisinger is crunching not only enterprise healthcare data (including HIE inputs, clinical departmental systems and patient satisfaction surveys) and consumer health tools (like smartphone apps) but even grocery store and loyalty program info.

Though all of its data hasn’t yet been moved to the UDA, Geisinger has already seen some big data successes, including:

* “Close the Loop” program:  Using natural language processing, the UDA analyzes clinical and diagnostic imaging reports, including free text. Sometimes it detects problems unrelated to the initial issue (such as injuries from a car crash) which can themselves cause serious harm. The program has already saved patient lives.

* Early sepsis detection/treatment: Geisinger uses the UDA to bring all sepsis-patient information into one place as patients travel through the hospital. The system alerts providers to real-time physiologic data in patients with life-threatening septic shock, as well as tracking when antibiotics are prescribed and administered. Ninety percent of providers who use this tool consistently adhere to sepsis treatment protocols, as opposed to 40% of those who don’t. (A minimal sketch of this kind of rule-based alert appears after this list.)

* Surgery costs/outcomes: The Geisinger UDA tracks and integrates surgical supply-chain data, plus clinical data by surgery type and provider, which offers a comprehensive view of performance by provider and surgery type.  In addition to offering performance insight, this approach has also helped generate insights about supply use patterns which allow the health system to negotiate better vendor deals.
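
The article doesn’t spell out Geisinger’s alerting logic, so the Python sketch below only illustrates the pattern: a rule-based screen over a patient’s latest physiologic readings, here using the standard SIRS screening criteria (two or more abnormal values suggests possible sepsis). The field names and the notify hook are hypothetical:

```python
def sirs_flags(vitals):
    """Evaluate the four standard SIRS screening criteria."""
    return {
        "temperature": vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0,
        "heart_rate": vitals["heart_rate"] > 90,
        "resp_rate": vitals["resp_rate"] > 20,
        "wbc": vitals["wbc_k_per_ul"] > 12.0 or vitals["wbc_k_per_ul"] < 4.0,
    }

def maybe_alert(patient_id, vitals, notify):
    """Fire an alert when two or more SIRS criteria are met."""
    flags = sirs_flags(vitals)
    if sum(flags.values()) >= 2:
        met = ", ".join(name for name, hit in flags.items() if hit)
        notify(f"Patient {patient_id}: possible sepsis ({met})")

# Example: three criteria met, so the alert fires.
maybe_alert("A123", {"temp_c": 38.6, "heart_rate": 112,
                     "resp_rate": 18, "wbc_k_per_ul": 13.5}, print)
```

A real deployment would run continuously against streaming data and, as the article notes, also track the antibiotic ordering and administration timeline.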

To me, one of the most interesting things about this story is that while Geisinger is at a relatively early stage of its big data efforts, it has already managed to generate meaningful benefits. My guess is that its early successes are more due to smart planning – which includes worthwhile goals from day one of the rollout – than the technology per se. Regardless, let’s hope other hospital big data projects fare so well. (Meanwhile, for a look at another interesting hospital big data project, check out this story.)

ACO-Affiliated Hospitals May Be Ahead On Strategic Health IT Use

Posted on December 26, 2016 | Written by Anne Zieger

Over the past several years I’ve been struck by how seldom ACOs seem to achieve the objectives they’re built to meet – particularly cost savings and quality improvement goals – even when the organizations involved are pretty sophisticated.

For example, the results generated by the Medicare Shared Savings Program and Pioneer ACO Model have been inconsistent at best, with just 31% of participants getting a savings bonus for 2015, despite the fact that the “Pioneers” were chosen for their savvy and willingness to take on risk.

Some observers suggested this would change as hospitals and ACOs found better health IT solutions, but I’ve always been somewhat skeptical about this. I’m not a fan of the results we got when capitation was the rage, and to me current models have always looked like tarted-up capitation, the fundamental flaws of which can’t be fixed by technology.

All that being said, a new journal article suggests that I may be wrong about the hopelessness of trying to engineer a workable value-based solution with health IT. The study, which was published in the American Journal of Managed Care, has concluded that if nothing else, ACO incentives are pushing hospitals to make more strategic HIT investments than they may have before.

To conduct the study, which compared health IT adoption in hospitals participating in ACOs with hospitals that weren’t ACO-affiliated, the authors gathered data from 2013 and 2014 surveys by the American Hospital Association. They focused on hospitals’ adherence to Stage 1 and Stage 2 Meaningful Use criteria, patient engagement-oriented health IT use and HIE participation.

When they compared 393 ACO hospitals and 810 non-ACO hospitals, the researchers found that a larger percentage of ACO hospitals were capable of meeting MU Stage 1 and Stage 2. They also noted that nearly 40% of ACO hospitals had patient engagement tech in place, as compared with 15.2% of non-ACO hospitals. Meanwhile, 49% of ACO hospitals were involved with HIEs, compared with 30.1% of non-ACO hospitals.

Bottom line, the authors concluded that ACO-based incentives are proving to be more effective than Meaningful Use at getting hospitals to adopt new and arguably more effective technologies. Fancy that! (Finding and implementing those solutions is still a huge challenge for ACOs, but that’s a story for another day.)

Of course, the authors seem to take it as a given that patient engagement tech and HIEs are strategic for more or less any hospital, an assumption they don’t do much to justify. Also, they don’t address how hospitals in and out of ACOs are pursuing population health or big data strategies, which seems like a big omission. This weakens their argument somewhat in my view. But the data is worth a look nonetheless.

I’m quite happy to see some evidence that ACO models can push hospitals to make good health IT investment decisions. After all, it’d be a bummer if hospitals had spent all of that time and money building them out for nothing.

Paris Hospitals Use Big Data To Predict Admissions

Posted on December 19, 2016 | Written by Anne Zieger

Here’s a fascinating story from Paris (or par-ee, if you’re a Francophile), courtesy of Forbes. The article details how a group of top hospitals there are running a trial of big data and machine learning tech designed to predict admission rates. The predictive model, which is being tested at four of the hospitals which make up the Assistance Publique-Hôpitaux de Paris (AP-HP), is designed to forecast admission rates as much as 15 days in advance.

The four hospitals participating in the project have pulled together a massive trove of data from both internal and external sources, including 10 years’ worth of hospital admission records. The goal is to forecast admissions by the day and even by the hour for the four facilities participating in the test.

According to Forbes contributor Bernard Marr, the project involves using time series analysis techniques which can detect patterns in the data useful for predicting admission rates at different times.  The hospitals are also using machine learning to determine which algorithms are likely to make good predictions from old hospital data.
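
AP-HP’s actual models aren’t public, so the Python sketch below (using scikit-learn) only illustrates the general recipe: turn the admissions history into calendar and lag features, then let a learner pick up the recurring patterns. Only lags of 15 days or more are used, so every feature is still known 15 days ahead of the date being predicted; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical daily admission counts, one row per day.
history = pd.read_csv("admissions.csv", parse_dates=["date"], index_col="date")

df = pd.DataFrame({"admissions": history["admissions"]})
df["day_of_week"] = df.index.dayofweek
df["month"] = df.index.month
for lag in (15, 21, 28, 365):       # every lag is known >= 15 days ahead
    df[f"lag_{lag}"] = df["admissions"].shift(lag)
df = df.dropna()

X, y = df.drop(columns="admissions"), df["admissions"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:-15], y[:-15])         # hold out the final 15 days

forecast = model.predict(X[-15:])   # predicted admissions, 15 days ahead
```

A production system would add holiday effects, external feeds and proper backtesting, but the shape of the problem is the same.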

The system the hospitals are using is built on the open source Trusted Analytics Platform. According to Marr, the partners felt that the platform offered a particularly strong capacity for ingesting and crunching large amounts of data. They also built on TAP because it was geared towards open, collaborative development environments.

The pilot system is accessible via a browser-based interface, designed to be simple enough that data science novices like doctors, nurses and hospital administration staff could use the tool to forecast visit and admission rates. Armed with this knowledge, hospital leaders can then pull in extra staffers when increased levels of traffic are expected.

Being able to work in a distributed environment will be key if AP-HP decides to roll the pilot out to all of its 44 hospitals, so developers built with that in mind. To be prepared for a future which might call for adding a great deal of storage and processing power, they designed a distributed, cloud-based system.

“There are many analytical solutions for these type of problems, [but] none of them have been implemented in a distributed fashion,” said Kyle Ambert, an Intel data scientist and TAP contributor who spoke with Marr. “Because we’re interested in scalability, we wanted to make sure we could implement these well-understood algorithms in such a way that they work over distributed systems.”

To make this happen, however, Ambert and the development team had to build their own tools, an effort which resulted in the first contribution to an open-source framework of code designed to carry out analysis over a scalable, distributed framework, one which is already being deployed in other healthcare environments, Marr reports.

My feeling is that there’s no reason American hospitals can’t experiment with this approach. In fact, maybe they already are. Readers, are you aware of any US facilities which are doing something similar? (Or are most still focused on “skinny” data?)

Easing The Transition To Big Data

Posted on December 16, 2016 | Written by Anne Zieger

Tapping the capabilities of big data has become increasingly important for healthcare organizations in recent years. But as HIT expert Adheet Gogate notes, the transition is not an easy one, forcing these organizations to migrate from legacy data management systems to new systems designed specifically for use with new types of data.

Gogate, who serves as vice president of consulting at CitiusTech, rightly points out that even when hospitals and health systems spend big bucks on new technology, they may not see any concrete benefits. But if they move through the big data rollout process correctly, their efforts are more likely to bear fruit, he suggests. And he offers four steps organizations can take to ease this transition. They include:

  • Have the right mindset:  Historically, many healthcare leaders came up through the business in environments where retrieving patient data was difficult and prone to delays, so their expectations may be low. But if they hope to lead successful big data efforts, they need to embrace the new data-rich environment, understand big data’s potential and ask insightful questions. This will help to create a data-oriented culture in their organization, Gogate writes.
  • Learn from other industries: Bear in mind that other industries have already grappled with big data models, and that many have seen significant successes already. Healthcare leaders should learn from these industries, which include civil aviation, retail and logistics, and consider adopting their approaches. In some cases, they might want to consider bringing an executive from one of these industries on board at a leadership level, Gogate suggests.
  • Employ the skills of data scientists: To tame the floods of data coming into their organization, healthcare leaders should actively recruit data scientists, whose job it is to translate business questions into the methods, approaches and processes for developing analytics that will answer them.  Once they hire such scientists, leaders should be sure that they have the active support of frontline staffers and operations leaders to make sure the analyses they provide are useful to the team, Gogate recommends.
  • Think like a startup: It helps when leaders adopt an entrepreneurial mindset toward big data rollouts. These efforts should be led by senior leaders who are comfortable in this space, who let key teams operate like a startup within the enterprise at first, and who invest in building critical mass in data science. Then, assign a group of core team members and frontline managers to areas where analytics capabilities are most needed. Rotate these teams across the organization to wherever business problems reside, and let them generate valuable improvement insights. Over time, these insights will help the whole organization improve its big data capabilities, Gogate says.

Of course, taking an agile, entrepreneurial approach to big data will only work if it has widespread support, from the C-suite on down. Also, healthcare organizations will face some concrete barriers in building out big data capabilities, such as recruiting the right data scientists and identifying and paying for the right next-gen technology. Other issues include falling reimbursements and the need to personalize care, according to healthcare CIO David Chou.

But assuming these other challenges are met, embracing big data with a willing-to-learn attitude is more likely to work than treating it as just another development project. And the more you learn, the more successful you’ll be in the future.

Using NLP with Machine Learning for Predictive Analytics in Healthcare

Posted on December 12, 2016 | Written by John Lynn

There are a lot of elements involved in doing predictive analytics in healthcare effectively. In most cases I’ve seen, organizations working on predictive analytics do some but not all that’s needed to really make predictive analytics as effective as possible. This was highlighted to me when I recently talked with Frank Stearns, Executive Vice President from HBI Solutions at the Digital Health Conference in NYC.

Here’s a great overview of the HBI Solutions approach to patient risk scores:

[Figure: HBI Solutions healthcare predictive analytics model]

This process will look familiar to most people in the predictive analytics space. You take all the patient data you can find, put it into a machine learning engine and output a patient risk score. One of the biggest trends here is the real-time nature of the process. Plus, I love the way the risk score includes the attributes that influenced a patient’s score. Both of these are incredibly important when trying to make this data actionable.
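
As an illustration of that pattern (and not HBI Solutions’ actual engine), here’s a minimal Python sketch in which a logistic model emits both a risk score and the attributes that pushed it up; with a linear model, each feature’s contribution is simply its coefficient times its value. The features and training data are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per patient.
feature_names = ["age", "num_admissions", "hba1c", "smoker"]
X_train = np.array([[45, 0, 5.6, 0], [70, 3, 8.9, 1],
                    [62, 1, 7.1, 0], [55, 4, 9.5, 1]])
y_train = np.array([0, 1, 0, 1])    # 1 = adverse event within a year

model = LogisticRegression().fit(X_train, y_train)

def risk_with_drivers(x, top_n=2):
    """Return the risk score plus the features that pushed it up most."""
    score = model.predict_proba([x])[0, 1]
    contributions = model.coef_[0] * np.asarray(x)
    drivers = sorted(zip(feature_names, contributions),
                     key=lambda pair: -pair[1])
    return score, drivers[:top_n]

print(risk_with_drivers([68, 2, 9.0, 1]))
```

Real engines use richer models and far more data, but returning the score together with its drivers is the principle that makes the output actionable.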

However, the thing that stood out for me in HBI Solutions’ approach is the inclusion of natural language processing (NLP) in their analysis of the unstructured patient data. I’d seen NLP being used in EHR software before, but I think the implementation of NLP is even more powerful in doing predictive analytics.

In the EHR world, you have to be absolutely precise. If you’re not precise with the way you code a visit, you won’t get paid. If you’re not precise with how the diagnosis is entered into the EHR, that can have long-term consequences. This has posed a real challenge for NLP, since NLP is not 100% accurate. It’s gotten astoundingly good, but it still has shortcomings that require human review when it’s used in an EHR.

The same isn’t true when applying NLP to unstructured data for predictive analytics. Predictive analytics by its very nature incorporates some modicum of variation and error. It’s understood that a prediction could be wrong; it’s an indication of risk, not a statement of fact. Certainly a failure in NLP’s recognition of certain data could throw off a prediction. That’s unfortunate, but predictive analytics aren’t relied upon the way documentation in an EHR is, so it’s not nearly as big a deal.

Plus, the value received from applying NLP to pull out the nuggets of information that exist in the unstructured narrative sections of healthcare data is well worth the small risk of the NLP being incorrect. As Frank Stearns from HBI Solutions pointed out to me, the unstructured data is often where the really valuable information about a patient’s risk lives.

I’d be interested in having HBI Solutions do a study of the whole list of findings that are often available in the unstructured data but weren’t available otherwise. It’s not hard to imagine a doctor documenting patient observations in the unstructured EHR narrative that they didn’t want to include as a formal diagnosis. Not the least of these are behavioral health observations that the doctor observed and documented but didn’t want to fully diagnose. NLP can pull these out of the narrative and include them in the patient’s risk score.
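
Production clinical NLP is far more sophisticated, but a toy Python sketch shows the idea: scan the narrative for observation terms, skip negated mentions, and emit features a risk model would otherwise never see. The term list and the negation check here are simplistic placeholders:

```python
import re

# Hypothetical mapping from narrative phrases to risk-model features.
OBSERVATION_TERMS = {
    "anxious": "possible_anxiety",
    "depressed mood": "possible_depression",
    "confused": "possible_cognitive_issue",
}
# Crude negation filter: a negation word shortly before the mention.
NEGATION = re.compile(r"\b(no|denies|without|not)\b[^.]{0,40}$")

def narrative_features(note):
    """Extract candidate risk features from free-text narrative."""
    features = set()
    lowered = note.lower()
    for term, feature in OBSERVATION_TERMS.items():
        for match in re.finditer(re.escape(term), lowered):
            if not NEGATION.search(lowered[:match.start()]):
                features.add(feature)
    return features

note = ("Patient appears anxious about diagnosis. Denies depressed mood. "
        "Alert, not confused.")
print(narrative_features(note))  # {'possible_anxiety'}
```

A real pipeline would use trained models for negation, uncertainty and context, but even this crude version shows why narrative text is such a rich source of risk signals.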

Given this perspective, it’s hard to imagine we’ll ever be able to get away from using NLP or related technology to pull out the valuable insights in unstructured data. Plus, it’s easy to see how predictive analytics that don’t use NLP will be deficient when trying to use machine learning to analyze patients. What’s amazing is that HBI Solutions has been applying machine learning to healthcare for five years. That’s a long time, and it also explains why they’ve implemented advanced techniques like NLP in their predictive analytics solutions.