
An Approach For Privacy-Protecting Big Data

Posted on February 6, 2017 | Written By

Anne Zieger is a veteran healthcare editor and analyst with 25 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

There’s little doubt that the healthcare industry is zeroing in on some important discoveries as providers and researchers mine collections of clinical and research data. Big data does come with risks, though, with some observers fearing that aggregated and shared information may breach patient privacy. At least one study, however, suggests that patients can be protected without interrupting data collection.

In what it calls a first, a new study appearing in the Journal of the American Medical Informatics Association has demonstrated that protecting the privacy of patients can be done without too much fuss, even when the patient data is pulled into big data stores used for research.

According to the study, a single patient anonymization algorithm can offer a standard level of privacy protection across multiple institutions, even when they are sharing clinical data back and forth. Researchers say that larger clinical datasets can protect patient anonymity without generalizing or suppressing data in a manner which would undermine its use.

To conduct the study, researchers set a privacy adversary out to beat the system. This adversary, who had collected patient diagnoses from a single unspecified clinic visit, was asked to match them to a record in a de-identified research dataset known to include the patient. To conduct the study, researchers used data from Vanderbilt University Medical Center, Northwestern Memorial Hospital in Chicago and Marshfield Clinic.

The researchers knew that according to prior studies, the more data associated with each de-identified record, and the more complex and diverse the patient’s problems, the more likely it was that their information would stick out from the crowd. And that would typically force managers to generalize or suppress data to protect patient anonymity.

In this case, the team hoped to find out how much generalization and suppression would be necessary to protect identities found within the three institutions’ data, and afterward, whether the protected data would ultimately be of any use to future researchers.

The team processed relatively small datasets from each institution representing patients in a multi-site genotype-disease association study; larger datasets representing patients in the three institutions’ banks of de-identified DNA samples; and large sets which stood in for each institution’s EMR population.

Using the algorithm they developed, the team found that most of the data’s value was preserved despite the occasional need for generalization and suppression. On average, 12.8% of diagnosis codes needed generalization; the medium-sized biobank models saw only 4% of codes needing generalization; and among the large databases representing EMR populations, only 0.4% needed generalization and no codes required suppression.
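
The paper’s actual algorithm is more sophisticated than anything that fits in a blog post, but the core idea (generalizing rare diagnosis codes up the coding hierarchy until enough patients share each code, and suppressing the stragglers) can be sketched in a few lines. Here’s a minimal illustration; the k threshold, the ICD-9-style rollup and the data layout are my own assumptions, not the study’s:

```python
from collections import Counter

K = 5  # illustrative privacy threshold: keep a code only if >= K patients carry it

def generalize(code: str) -> str:
    """Roll an ICD-9-style code up to its 3-character category ('250.01' -> '250')."""
    return code.split(".")[0]

def anonymize(records: list[set[str]], k: int = K) -> list[set[str]]:
    """Generalize rare diagnosis codes; suppress any that remain rare."""
    counts = Counter(code for rec in records for code in rec)
    rolled = [{code if counts[code] >= k else generalize(code) for code in rec}
              for rec in records]
    # Second pass: codes still carried by fewer than k patients get suppressed.
    counts2 = Counter(code for rec in rolled for code in rec)
    return [{code for code in rec if counts2[code] >= k} for rec in rolled]

# Tiny demo: each '250.xx' code is unique, so any one of them would identify
# a patient, but all three roll up to the shared category '250' and survive.
demo = [{"250.01", "401.9"}, {"250.02", "401.9"}, {"250.03", "401.9"}]
print(anonymize(demo, k=3))
```

Note how generalization rescues data that naive suppression would have destroyed, which is exactly the trade-off the study quantifies.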

More work like this is clearly needed as the demand for large-scale clinical, genomic and transactional datasets grows. But in the meantime, this seems to be good news for budding big data research efforts.

UCSF Partners With Intel On Deep Learning Analytics For Health

Posted on January 30, 2017 | Written by Anne Zieger


UC San Francisco’s Center for Digital Health Innovation has agreed to work with Intel to deploy and validate a deep learning analytics platform. The new platform is designed to help clinicians make better treatment decisions, predict patient outcomes and respond quickly in acute situations.

The Center’s existing projects include CareWeb, a team-based collaborative care platform built on Salesforce.com social and mobile communications tech; Tidepool, which is building infrastructure for next-gen smart diabetes management apps; Health eHeart, a clinical trials platform using social media, mobile and real-time sensors to change heart disease treatment; and Trinity, which offers “precision team care” by integrating patient data with evidence and multi-disciplinary data.

These projects seem to be a good fit with Intel’s healthcare efforts, which are aimed at helping providers succeed at distributed care communication across desktop and mobile platforms.

As the two note in their joint press release, creating a deep learning platform for healthcare is extremely challenging, given that the relevant data is complex and stored in multiple incompatible systems. Intel and UCSF say the next-generation platform will address these issues, allowing them to integrate not only data collected during clinical care but also inputs from genomic sequencing, monitors, sensors and wearables.

Supporting all of this activity obviously calls for a lot of computing power. The partners will run deep learning use cases in a distributed fashion on a CPU-based cluster designed to crunch through very large datasets handily. Intel is rolling out the computing environment on its Xeon processor-based platform, which supports data management and the algorithm development lifecycle.

As the deployment moves forward, Intel leaders plan to study how deep learning analytics and machine-driven workflows can optimize clinical care and patient outcomes, and leverage what they learn when they create new platforms for the healthcare industry. Both partners believe that this model will scale for future use case needs, such as larger convolutional neural network models, artificial neural networks patterned after living organisms, and very large multidimensional datasets.

Once implemented, the platform will allow users to conduct advanced analytics on all of this disparate data, using machine learning and deep learning algorithms. And if all performs as expected, clinicians should be able to draw on these advanced capabilities on the fly.

This looks like a productive collaboration. If nothing else, it appears that in this case the technology platform UCSF and Intel are developing may be productized and made available to other providers, which could be very valuable. After all, while individual health systems (such as Geisinger) have the resources to kick off big data analytics projects on their own, it’s possible a standardized platform could make such technology available to smaller players. Let’s see how this goes.

Some Projections For 2017 Hospital IT Spending

Posted on January 4, 2017 | Written by Anne Zieger


A couple of months ago, HIMSS released some statistics from its survey on US hospitals’ plans for IT investment over the next 12 months. The results contain a couple of data points that I found particularly interesting:

  • While I had expected the most common type of planned spending to be focused on population health or related solutions, HIMSS found that pharmacy was the most active category. In fact, 51% of hospitals were planning to invest in at least one pharmacy technology, largely to improve tracking of medication dispensing in additional patient care environments. Researchers also found that 6% of hospitals were planning to add carousels or packagers in their pharmacies.
  • Eight percent of hospitals said that they plan to invest in EMR components, which I hadn’t anticipated (though it makes sense in retrospect). HIMSS reported that 14% of hospitals at Stage 1-4 of its Electronic Medical Record Adoption Model are investing in pharmacy tech for closed-loop med administration, and 17% in auto-ID tech. Four percent of Stage 6 hospitals plan to support or expand information exchange capabilities. Meanwhile, 60% of Stage 7 hospitals are investing in hardware infrastructure “for the post-EMR world.”

Other data from the HIMSS report included news of new analytics and telecom plans:

  • Researchers say that recent mergers and acquisitions are triggering new investments around telephony. They found that 12% of hospitals with inpatient revenues between $25 million and $125 million – and 6% of hospitals with more than $500 million in inpatient revenues — are investing in VOIP and telemedicine. FWIW, I’m not sure how mergers and acquisitions would trigger telemedicine rollouts, as they’re already well underway at many hospitals — maybe these deals foster new thinking and innovation?
  • As readers know, hospitals are increasingly spending on analytics solutions to improve care and make use of big data. However (and this surprised me), only 8% of hospitals reported plans to buy at least one analytics technology. My guess is that this number is small because a) hospitals may not have collected their big data assets in easily-analyzed form yet and b) they’re still hoping to make better use of their legacy analytics tools.

Looking at these stats as a whole, I get the sense that the hospitals surveyed are expecting to play catch-up and shore up their infrastructure next year, rather than sink big dollars into future-looking solutions.

Without a doubt, hospital leaders will soon invest in game-changing technologies such as cutting-edge patient engagement and population health platforms to prepare for the shift to value-based care. It’s inevitable.

But in the meantime it probably makes sense for them to focus on internal cost drivers like pharmacy departments, whose average annual inpatient drug spending shot up by more than 23% between 2013 and 2015. Without stanching that kind of bleeding, hospitals are unlikely to get as much value as they’d like from big-idea investments in the future.

Paris Hospitals Use Big Data To Predict Admissions

Posted on December 19, 2016 | Written by Anne Zieger


Here’s a fascinating story in from Paris (or par-ee, if you’re a Francophile), courtesy of Forbes. The article details how a group of top hospitals there is running a trial of big data and machine learning tech designed to predict admission rates. The predictive model, which is being tested at four of the hospitals which make up the Assistance Publique-Hôpitaux de Paris (AP-HP), is designed to forecast admissions as much as 15 days in advance.

The four hospitals participating in the project have pulled together a massive trove of data from both internal and external sources, including 10 years’ worth of hospital admission records. The goal is to forecast admissions by the day and even by the hour for the four facilities participating in the test.

According to Forbes contributor Bernard Marr, the project involves using time series analysis techniques which can detect patterns in the data useful for predicting admission rates at different times.  The hospitals are also using machine learning to determine which algorithms are likely to make good predictions from old hospital data.
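
Marr doesn’t spell out the models, but the flavor of the approach, mining years of admission counts for recurring weekday patterns and projecting them forward up to 15 days, is easy to sketch. Below is a minimal seasonal-naive baseline in pandas; the file and column names are assumptions, and AP-HP’s production system obviously uses far richer models and external data:

```python
import pandas as pd

# Assumed schema: one row per day with a raw admissions count
history = (pd.read_csv("admissions.csv", parse_dates=["date"])
             .set_index("date")["admissions"])

# Forecast each of the next 15 days from the mean of the same weekday
# over the trailing eight weeks.
future = pd.date_range(history.index[-1] + pd.Timedelta(days=1), periods=15)
forecast = pd.Series(
    {day: history[history.index.dayofweek == day.dayofweek].tail(8).mean()
     for day in future})

print(forecast.round(1))
```

Even a baseline this crude captures the weekday rhythm of admissions; the machine learning layer the hospitals describe is essentially a search for algorithms that beat it.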

The system the hospitals are using is built on the open source Trusted Analytics Platform. According to Marr, the partners felt that the platform offered a particularly strong capacity for ingesting and crunching large amounts of data. They also built on TAP because it was geared towards open, collaborative development environments.

The pilot system is accessible via a browser-based interface, designed to be simple enough that data science novices like doctors, nurses and hospital administration staff could use the tool to forecast visit and admission rates. Armed with this knowledge, hospital leaders can then pull in extra staffers when increased levels of traffic are expected.

Being able to work in a distributed environment will be key if AP-HP decides to roll the pilot out to all of its 44 hospitals, so developers built with that in mind. To be prepared for the future, which might call for adding a great deal of storage and processing power, they designed a distributed, cloud-based system.

“There are many analytical solutions for these type of problems, [but] none of them have been implemented in a distributed fashion,” said Kyle Ambert, an Intel data scientist and TAP contributor who spoke with Marr. “Because we’re interested in scalability, we wanted to make sure we could implement these well-understood algorithms in such a way that they work over distributed systems.”

To make this happen, however, Ambert and the development team had to build their own tools, an effort which resulted in the first contribution to an open-source framework of code designed to carry out analysis over a scalable, distributed framework, one which is already being deployed in other healthcare environments, Marr reports.

My feeling is that there’s no reason American hospitals can’t experiment with this approach. In fact, maybe they already are. Readers, are you aware of any US facilities which are doing something similar? (Or are most still focused on “skinny” data?)

Easing The Transition To Big Data

Posted on December 16, 2016 | Written by Anne Zieger


Tapping the capabilities of big data has become increasingly important for healthcare organizations in recent years. But as HIT expert Adheet Gogate notes, the transition is not an easy one, forcing these organizations to migrate from legacy data management systems to new systems designed specifically for use with new types of data.

Gogate, who serves as vice president of consulting at Citius Tech, rightly points out that even when hospitals and health systems spend big bucks on new technology, they may not see any concrete benefits. But if they move through the big data rollout process correctly, their efforts are more likely to bear fruit, he suggests. And he offers four steps organizations can take to ease this transition. They include:

  • Have the right mindset:  Historically, many healthcare leaders came up through the business in environments where retrieving patient data was difficult and prone to delays, so their expectations may be low. But if they hope to lead successful big data efforts, they need to embrace the new data-rich environment, understand big data’s potential and ask insightful questions. This will help to create a data-oriented culture in their organization, Gogate writes.
  • Learn from other industries: Bear in mind that other industries have already grappled with big data models, and that many have seen significant successes already. Healthcare leaders should learn from these industries, which include civil aviation, retail and logistics, and consider adopting their approaches. In some cases, they might want to consider bringing an executive from one of these industries on board at a leadership level, Gogate suggests.
  • Employ the skills of data scientists: To tame the floods of data coming into their organization, healthcare leaders should actively recruit data scientists, whose job is to translate business questions into the methods, approaches and processes for developing analytics that answer them. Once they hire such scientists, leaders should be sure that they have the active support of frontline staffers and operations leaders, to make sure the analyses they provide are useful to the team, Gogate recommends.
  • Think like a startup: It helps when leaders adopt an entrepreneurial mindset toward big data rollouts. These efforts should be led by senior leaders comfortable with this space, who treat the effort as an enterprise of its own at first and invest in building critical mass in data science. Then, assign a group of core team members and frontline managers to areas where analytics capabilities are most needed. Rotate these teams across the organization to wherever business problems reside, and let them generate valuable improvement insights. Over time, these insights will help the whole organization improve its big data capabilities, Gogate says.

Of course, taking an agile, entrepreneurial approach to big data will only work if it has widespread support, from the C-suite on down. Also, healthcare organizations will face some concrete barriers in building out big data capabilities, such as recruiting the right data scientists and identifying and paying for the right next-gen technology. Other issues include falling reimbursements and the need to personalize care, according to healthcare CIO David Chou.

But assuming these other challenges are met, embracing big data with a willing-to-learn attitude is more likely to work than treating it as just another development project. And the more you learn, the more successful you’ll be in the future.

Using NLP with Machine Learning for Predictive Analytics in Healthcare

Posted on December 12, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network, which currently consists of 10 blogs containing over 8,000 articles, more than 4,000 of which John has written himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit, and on LinkedIn.

There are a lot of elements involved in doing predictive analytics in healthcare effectively. In most cases I’ve seen, organizations working on predictive analytics do some, but not all, of what’s needed to make it as effective as possible. This was highlighted for me when I recently talked with Frank Stearns, Executive Vice President at HBI Solutions, at the Digital Health Conference in NYC.

Here’s a great overview of the HBI Solutions approach to patient risk scores:

[Image: HBI Solutions healthcare predictive analytics model]

This process will look familiar to most people in the predictive analytics space. You take all the patient data you can find, put it into a machine learning engine and output a patient risk score. One of the biggest trends here is the real-time nature of the process. Plus, I also love the way the patient risk score includes the attributes that influenced it. Both of these are incredibly important when trying to make this data actionable.
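
HBI’s engine is proprietary, so treat the following only as a toy illustration of the general pattern: train a model on historical outcomes, then return both a probability and the attributes pushing it up or down. The features, the data and the choice of logistic regression are all my assumptions, not HBI’s method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "hba1c", "prior_admissions", "smoker"]  # assumed features

# Toy training data: one row per patient, plus whether the outcome occurred
X = np.array([[72, 9.1, 3, 1], [34, 5.2, 0, 0], [58, 7.8, 1, 1], [45, 6.0, 0, 0]])
y = np.array([1, 0, 1, 0])  # e.g. admission within 12 months

model = LogisticRegression().fit(X, y)

def risk_score(patient):
    """Return outcome probability plus each feature's (unnormalized) contribution."""
    prob = model.predict_proba([patient])[0, 1]
    drivers = sorted(zip(FEATURES, model.coef_[0] * np.asarray(patient)),
                     key=lambda kv: abs(kv[1]), reverse=True)
    return prob, drivers

score, drivers = risk_score([65, 8.4, 2, 1])
print(f"risk={score:.2f}", drivers[:2])  # the score plus its top two drivers
```

The “drivers” here are just raw log-odds terms, but they illustrate how a risk score can carry its influencing attributes along with it.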

However, the thing that stood out for me in HBI Solutions’ approach is the inclusion of natural language processing (NLP) in their analysis of the unstructured patient data. I’d seen NLP being used in EHR software before, but I think the implementation of NLP is even more powerful in doing predictive analytics.

In the EHR world, you have to be absolutely precise. If you’re not precise with the way you code a visit, you won’t get paid. If you’re not precise with how the diagnosis is entered into the EHR, that can have long-term consequences. This has posed a real challenge for NLP, since NLP is not 100% accurate. It’s gotten astoundingly good, but it still has shortcomings that require human review when it’s used in an EHR.

The same isn’t true when applying NLP to unstructured data for predictive analytics. Predictive analytics by its very nature incorporates some modicum of variation and error. It’s understood that a prediction could be wrong; it’s an indication of risk. Certainly a failure in NLP’s recognition of certain data could throw off a prediction. That’s unfortunate, but predictive analytics aren’t relied on the way documentation in an EHR is relied upon. So it’s not nearly as big of a deal.

Plus, the value gained from applying NLP to pull out the nuggets of information that exist in the unstructured narrative sections of healthcare data is well worth that small risk of the NLP being incorrect. As Frank Stearns from HBI Solutions pointed out to me, the unstructured data is often where the really valuable data about a patient’s risk exists.

I’d be interested in having HBI Solutions do a study of the findings that are often available in the unstructured data but weren’t available otherwise. It’s not hard to imagine a doctor documenting patient observations in the unstructured EHR narrative that they didn’t want to include as a formal diagnosis. Not the least of these are behavioral health observations that the doctor saw and documented but didn’t want to fully diagnose. NLP can pull these out of the narrative and include them in the patient’s risk score.
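
For a concrete (if drastically simplified) sense of how such observations might be pulled out of the narrative, here’s a rule-based sketch with crude negation handling; the term list and the three-token negation window are assumptions, and real clinical NLP is far more sophisticated:

```python
import re

# Assumed lexicon: behavioral-health mentions a note might contain
# without a corresponding formal diagnosis.
TERMS = ["anxious", "depressed", "insomnia", "alcohol use"]
NEGATIONS = {"no", "denies", "without", "not"}

def extract_mentions(note: str) -> set[str]:
    """Return lexicon terms that appear in the note and aren't negated nearby."""
    text = " ".join(re.findall(r"[a-z']+", note.lower()))
    found = set()
    for term in TERMS:
        for match in re.finditer(re.escape(term), text):
            left_context = text[:match.start()].split()[-3:]  # 3 tokens of context
            if not NEGATIONS & set(left_context):
                found.add(term)
    return found

note = "Pt appears anxious and reports insomnia; denies alcohol use."
print(extract_mentions(note))  # -> {'anxious', 'insomnia'}
```

Each extracted mention could then feed the risk model as just another feature, which is the essence of what combining NLP with machine learning buys you.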

Given this perspective, it’s hard to imagine we’ll ever be able to get away from using NLP or related technology to pull out the valuable insights in unstructured data. Plus, it’s easy to see how predictive analytics that don’t use NLP are going to be deficient when trying to use machine learning to analyze patients. What’s amazing is that HBI Solutions has been applying machine learning to healthcare for 5 years. That’s a long time, and it explains why they’ve implemented advanced capabilities like NLP in their predictive analytics solutions.

Steps Hospitals Should Consider When Migrating EMR Data

Posted on November 2, 2016 | Written by Anne Zieger


When your organization decides to convert to a new EMR, the problems it faces extend beyond having to put the right technical architecture in place. Deciding which data to migrate and how much data to migrate from the previous EMR poses additional challenges, and they’re not trivial.

On the one hand, moving over all of your data is expensive (and probably not necessary). On the other, if you migrate too little data, clinicians won’t have an adequate patient history to work from, and what’s more, may not be in compliance with legal requirements.

But there are methods for determining how to make the transition successfully. HCI Group Data Technical Lead Mustafa Raja argues that there are three key factors hospitals should consider when planning to migrate legacy EMR data into a new system:

  • Decide which data you will archive and which you will migrate. While many organizations fall back on moving six months of acute care data and a year’s worth of ambulatory data, Raja recommends looking deeper. Specifically, while ambulatory transitions may just include medications the patients are on and diagnostic codes in the past year, acute care data encompasses many different data types, including allergies, medications, orders, labs and radiology reports. So deciding what should transition isn’t a one-size-fits-all decision. Once you’ve made the decision as to what data will be transitioned, see that whatever archival storage system you decide upon is easily accessible and not too costly, Raja suggests. You’ll want to have the data available, in part, to respond to security audits.
  • Consider how complex the data is before you choose it for transition to the new EMR. Bear in mind that data types will vary, and that storage methods within the new system may vary from the old. If you are migrating from a nonstandard legacy system to an EMR with data standards in place — which is often the case — you’ll need to decide whether you are willing to go through the standardization process to make the old data available. If not, bear in mind that the nonstandard data won’t be easily accessible or usable, which can generate headaches.
  • Be prepared for the effect of changes in clinical rules and workflow. When upgrading from your legacy system, you’ll probably find that some of its functionality doesn’t work well with the new system, as the new system’s better-optimized workflows won’t always be compatible with the old system’s, Raja notes. What kind of problems will you encounter? Raja offers the example of a legacy system which includes non-required fields in one of its forms, transitioning to a system that DOES require the fields. Since the data for the newly-required fields doesn’t exist, how do you handle the problem? (One common pattern is sketched below.)
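
Raja doesn’t prescribe a fix for that last example, but one common pattern is to backfill the newly required fields with an agreed-upon sentinel value and flag each touched record for human follow-up. A minimal sketch, with the field names and sentinel being assumptions:

```python
REQUIRED_FIELDS = {"allergy_status", "primary_provider"}  # assumed new-system rules
SENTINEL = "UNKNOWN-MIGRATED"  # placeholder value agreed upon with the clinical team

def prepare_record(legacy: dict) -> dict:
    """Map a legacy record onto the new schema, flagging backfilled fields."""
    record = dict(legacy)
    record["needs_review"] = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            record[field] = SENTINEL              # satisfies the new system's constraint
            record["needs_review"].append(field)  # queued for human follow-up
    return record

print(prepare_record({"patient_id": "A123", "primary_provider": "Dr. Smith"}))
# -> allergy_status backfilled with the sentinel and flagged for review
```

The key design choice is that the migration never silently invents clinical facts; anything it fills in is visibly marked as unknown.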

Of course, your plans for data migration will be governed by many other considerations, including the speed at which you have to transition, the purposes to which you plan to put your new EMR, your budget, staffing levels and more. But these guidelines should offer a useful look at how to begin thinking about the data migration process.

Are Your Health Data Efforts a Foundation for the Future?

Posted on June 10, 2016 | Written by John Lynn


I was recently talking with Jonathan Sheldon from Oracle, and I was inspired by the idea that today’s data projects could be the essential foundation for the future healthcare analytics and care that will form what we now call Precision Medicine. Chew on that idea for a minute. There’s a lot of power in the idea of building blocks that open up new avenues for innovation.

How many healthcare ideas have been shot down because “that’s impossible”? Lots of them. Why are so many of these things “impossible”? They’re impossible because there are usually 10-15 things that need to be accomplished to be able to make the impossible possible.

Take healthcare analytics as an example. I once worked with a clinician to do a study on obesity in our patient population. As we started to put together the study it required us to pull all of the charts for patients whose BMI was over a certain level. Since we were on an EHR, I ran the report and the clinician researching the study easily had a list of every patient that met her criteria. Imagine trying to do that study before EHR. Someone would have had to manually go through thousands of paper charts to identify which ones met the criteria. No doubt that study would have been met with the complaint “That’s impossible.” (Remember that too expensive or time consuming is considered impossible for most organizations.)
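
Once the vitals exist as discrete data, that once-impossible chart pull collapses into a few lines. A minimal sketch in pandas, with the file and column names assumed:

```python
import pandas as pd

patients = pd.read_csv("patients.csv")   # assumed columns: patient_id, bmi, ...

cohort = patients[patients["bmi"] > 30]  # the study's inclusion criterion
cohort.to_csv("obesity_study_cohort.csv", index=False)
print(f"{len(cohort)} patients meet the study criteria")
```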

What I just described was a super simple study. Now take that same concept and apply it beyond studies into things like real time analytics displayed to the provider at the point of care. How do you do that in a paper chart world? That’s right. You don’t even think about it because it’s impossible.

Sometimes we have to take a step back and imagine the building blocks that will be necessary for future innovation. Clean, trusted data is a good foundational building block for that innovation. The future of healthcare is going to be built on the back of health data. Your ability to trust your data is going to be an essential step to ensuring your organization can do the “impossible”.

The Current State Of “Big Data” In Healthcare – Health Care CXO Scene

Posted on November 2, 2015 | Written By

David Chou is the Vice President / Chief Information & Digital Officer for Children’s Mercy Kansas City. Children’s Mercy is the only free-standing children’s hospital between St. Louis and Denver and provides comprehensive care for patients from birth to 21. They are consistently ranked among the leading children’s hospitals in the nation and were the first hospital in Missouri or Kansas to earn the prestigious Magnet designation for excellence in patient care from the American Nurses Credentialing Center.

Prior to Children’s Mercy, David held the CIO position at the University of Mississippi Medical Center, the state’s only academic health science center. David also served as senior director of IT operations at Cleveland Clinic Abu Dhabi and CIO at AHMC Healthcare in California. His work has been recognized by several publications, and he has been interviewed by a number of media outlets. David is also one of the most mentioned CIOs on social media, and is an active member of both CHIME and HIMSS. Subscribe to David’s latest CXO Scene posts here and follow him on Twitter and Facebook.

Editor’s Note: A big welcome to David Chou, the newest member of the Healthcare Scene family of bloggers. David has a great background as a hospital CIO and will bring a wealth of knowledge to Hospital EMR and EHR readers. We’re calling David’s series of blog posts the Healthcare CXO Scene. You can receive the CXO Scene blogs by email as well. Welcome David!

Healthcare is finally evolving towards utilizing data in our decision-making. The landscape has changed dramatically with the adoption of Electronic Medical Records across the nation. Healthcare used to be a predominantly paper-based vertical, and there are still lots of areas where it is dominated by paper. The fax is also still alive as a communication channel, but the industry has transformed dramatically in the last few years.

According to the Office Of The National Coordinator, in 2013 nearly six in ten (59%) hospitals had adopted at least a Basic EHR system. This represents an increase of 34% from 2012 to 2013 and a five-fold increase since 2008. I am sure that percentage is even higher in 2015 in our journey towards an electronic world.

The workflow for clinician and physician documentation does take a little longer now that they have to type instead of write their notes, but the advantages of having discrete data elements to run analytics will transform the decision-making of every organization. If you Google the definition of “big data,” the consensus definition is the wealth of structured, semi-structured and unstructured data that has the potential to be mined for information.

Unfortunately the healthcare vertical is still playing catch-up, and the majority of organizations still only have Electronic Medical Record (EMR) data being used for decision-making. The healthcare vertical used to be similar to the airline industry, where the key to success was keeping the hospital beds occupied, much as the airline industry wanted to keep every seat on the airplane filled. The new model of care is figuring out a mechanism to keep patients out of hospital beds and focus on keeping them healthy through preventative measures. We have to do all of this while figuring out the right financial model to be profitable.

As we move down the journey of transitioning from a fee-for-service payment model to a value-based payment model, it is critical for every organization to transform its business processes. Analytics will be key in making that change. Now let’s focus on the two key challenges that will force healthcare providers to focus on data to drive decisions impacting their operations internally and externally.

Challenge #1: Healthcare reimbursements from Medicare and Medicaid have declined year after year

This has a huge financial impact on health care, since Medicare expenditures have been growing as the baby boomer population ages. There has also been a steady increase in Medicaid expenditures, so the trend of lower reimbursements for taking care of a growing population is what lies ahead for us in health care. Effective, quality delivery of care while reducing waste will be the main driver of success in the future.

Healthcare providers must understand the cost of delivering care down to the unit level. You will be surprised by the variation in cost for various procedures. The cost of the same procedure can vary by as much as 15-25% based on the products used, so one of the key elements of cost containment is standardization. As we transition to a value-based payment model there will also be value-based contracts structured around a shared-savings model. The contractual terms will vary, but the general theme will be to incentivize providers to reduce the cost of providing quality care to a population by offering them a percentage of the net savings. We are seeing this trend in the Medicare shared savings program, and leveraging data analytics will be the key driving tool for this to be successful.
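
To make the shared-savings mechanics concrete, here’s a tiny worked example; all the numbers and the 50/50 split are hypothetical, and real contracts layer on quality gates, risk corridors and more:

```python
# Hypothetical shared-savings arithmetic
benchmark_spend = 100_000_000  # payer's expected spend for the attributed population
actual_spend    =  94_000_000  # what care actually cost
quality_met     = True         # savings are shared only if quality targets are hit
provider_share  = 0.50         # contractual share of net savings

savings = benchmark_spend - actual_spend
bonus = savings * provider_share if quality_met and savings > 0 else 0
print(f"net savings: ${savings:,}; provider bonus: ${bonus:,.0f}")
```

Analytics earns its keep in the first two numbers: without unit-level cost data, a provider can’t even tell whether it is above or below the benchmark until the contract year is over.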

Challenge #2: The Move Towards Personalized Care

Consumers/patients have different expectations now. We are living in an on-demand personalized world where every industry vertical is moving towards a predictive environment including healthcare. The ideal scenario would be to consume data from the social platforms, wearables/sensors, mobile, public data, and other sources so that we can really understand in real time the current state of the consumer/patient.

Let’s assume the scenario of a digital consumer who is a diabetic patient prescribed a low-calorie diet. The patient wears a Fitbit and also has a smartphone app that tracks his heart rate. The heart rate is a bit higher than normal and the patient feels a little bit off. The wearable and mobile app are integrated with a central monitoring system at the hospital, and an alarm triggers a clinician, who checks the patient’s profile and history and takes the proactive measure of making a video call to the patient.

The patient answers the video call and they have a video interaction where the clinician can see the facial color of the patient and asks a few questions. Fortunately the patient finished an intense workout about an hour ago, so things are fine with the irregular heart rate at the moment, and the video interaction also alleviates any anxiety for the patient. It is about 7pm, so the patient decides to get something to eat; he is craving a burger, so he pulls into the drive-through. The patient has GPS turned on on his smartphone and also posts on Facebook that he is at a fast food chain’s drive-through. This data element is picked up by the hospital’s CRM app, and an automated text is sent to the patient reminding him of the low-calorie diet and making a few recommendations from the menu. The patient can now make an informed decision, and instead of ordering a burger he orders a grilled chicken sandwich.
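
Strip away the storytelling and the scenario boils down to two event-driven rules: an abnormal-vitals alert and a location-triggered nudge. Here’s a minimal sketch of that rule layer, where the thresholds, event fields and actions are all my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # "heart_rate" or "checkin"
    value: float = 0.0 # beats per minute, for heart_rate events
    place: str = ""    # venue category, for checkin events

def route(event: Event, baseline_hr: float, low_cal_diet: bool) -> str:
    """Map a wearable or location event to a proactive outreach action."""
    if event.kind == "heart_rate" and event.value > 1.2 * baseline_hr:
        return "alert clinician: review profile and offer a video call"
    if event.kind == "checkin" and low_cal_diet and "fast food" in event.place:
        return "send text: low-calorie suggestions from this menu"
    return "no action"

print(route(Event("heart_rate", value=96), baseline_hr=70, low_cal_diet=True))
print(route(Event("checkin", place="fast food drive-through"), 70, True))
```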

The technology that I have described is already in place, and it is similar to the retail sector, where when you walk into the store they already know your behavior. There is a trigger to create an action which hopefully equates to a sale.

Healthcare must move towards this culture of living in an on-demand world where we can predict or persuade a behavior by the patient. The challenge that I see is that the majority of healthcare providers are still focused on their internal operations leveraging EMR data, and we have not focused on the digital consumer yet. There is a lot of great work being put together by enterprise vendors and healthcare providers, but as we move down the journey of managing population health we can really learn from other verticals and how they leverage big data technology to improve consumer/patient engagement. All of this will ultimately lead to a healthier population.

If you’d like to receive future health care C-Level executive posts by David in your inbox, you can subscribe to future Health Care CXO Scene posts here.

Key Big Data Challenges Providers Must Face

Posted on July 17, 2015 | Written by Anne Zieger


Everybody likes to talk about the promise of big data, but managing it is another story. Taming big data will take new strategies and new IT skills, neither of which is a no-brainer, according to new research by the BPI Network.

While BPI Network has identified seven big data pain points, I’d argue that they boil down to just a few key issues:

  • Data storage and management: While providers may prefer to host their massive data stores in-house, this approach is beginning to wear out, at least as the only strategy in town. Over time, hospitals have begun moving to cloud-based solutions, at least in hybrid models offloading some of their data. As they cautiously explore outsourcing some of their data management and storage, meanwhile, they have to make sure that they have security locked down well enough to comply with HIPAA and repel hackers.

  • Staffing: Health IT leaders may need to look for a new breed of IT hire, as the skills associated with running datacenters have shifted to the application level rather than the data transmission and security levels. And this has changed hiring patterns in many IT shops. When BPI queried IT leaders, 41% said they’d be looking for application development pros, compared with 24% seeking security skills. Ultimately, health IT departments will need staffers with a different mindset than those who maintained datasets over the long term, as these days providers need IT teams that solve emerging problems.

  • Data and application availability: Health IT execs may finally be comfortable moving at least some of their data into the cloud, probably because they’ve come to believe that their cloud vendor offers good enough security to meet regulatory requirements. But that’s only a part of what they need to consider. Whether their data is based in the cloud or in a data center, health IT departments need to be sure they can offer high data availability, even if a datacenter is destroyed. What’s more, they also need to offer very high availability to EMRs and other clinical data-wrangling apps, something that gets even more complicated if the app is hosted in the cloud.

Now, the reality is that these problems aren’t big issues for every provider just yet. In fact, according to an analysis by KPMG, only 10% of providers are currently using big data to its fullest potential. The 271 healthcare professionals surveyed by KPMG said that there were several major barriers to leveraging big data in their organization, including having unstandardized data in silos (37%), lacking the right technology infrastructure (17%) and failing to have data and analytics experts on board (15%).  Perhaps due to these roadblocks, a full 21% of healthcare respondents had no data analytics initiatives in place yet, though they were at the planning stages.

Still, it’s good to look at the obstacles health IT departments will face when they do take on more advanced data management and analytics efforts. After all, while ensuring high data and app availability, stocking the IT department with the right skillsets and implementing a wise data management strategy aren’t trivial, they’re doable for CIOs that plan ahead. And it’s not as if health leaders have a choice. Going from maintaining an enterprise data warehouse to leveraging health data analytics may be challenging, but it’s critical to make it happen.