
Open Source Tool Offers “Synthetic” Patients For Hospital Big Data Projects

Posted on September 13, 2017 | Written By

Anne Zieger is a veteran healthcare editor and analyst with 25 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

As readers will know, using big data in healthcare comes with a host of security and privacy problems, many of which are thorny.

For one thing, the more patient data you accumulate, the bigger the disaster when and if the database is hacked. Another important concern is that if you decide to share the data, there’s always the chance that your partner will use it inappropriately, violating the terms of whatever consent to disclose you had in mind. Then, there’s the issue of working with incomplete or corrupted data which, if extensive enough, can interfere with your analysis or even lead to inaccurate results.

But now, there may be a realistic alternative, one which allows you to experiment with big data models without taking all of these risks. A unique software project is underway which gives healthcare organizations a chance to scope out big data projects without using real patient data.

The software, Synthea, is an open source synthetic patient generator that models the medical history of synthetic patients. It seems to have been built by The MITRE Corporation, a not-for-profit research and development organization sponsored by the U.S. federal government. (This page offers a list of other open source projects in which MITRE is or has been involved.)

Synthea is built on a Generic Module Framework which allows it to model varied diseases and conditions that play a role in the medical history of these patients. The Synthea modules create synthetic patients using not only clinical data, but also real-world statistics collected by agencies like the CDC and NIH. MITRE kicked off the project using models based on the top ten reasons patients see primary care physicians and the top ten conditions that shorten years of life.

Its makers were so thorough that each patient’s medical experiences are simulated independently from their “birth” to the present day. Each profile includes a full medical history: medication lists, allergies, physician encounters and social determinants of health. The data can be shared in C-CDA, HL7 FHIR, CSV and other formats.
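To make the idea concrete, here is a minimal sketch of the general technique (in Python, with entirely invented fields and prevalence figures — this is not Synthea’s actual module format or schema): sample a synthetic patient from population-level statistics, then export the cohort as CSV, one of the formats mentioned above.

```python
import csv
import io
import random

# Invented prevalence figures standing in for real CDC/NIH statistics.
CONDITION_PREVALENCE = {
    "hypertension": 0.30,
    "type_2_diabetes": 0.10,
    "asthma": 0.08,
}

def generate_patient(rng: random.Random) -> dict:
    """Sample one synthetic patient; no real person's data is involved."""
    conditions = [c for c, p in CONDITION_PREVALENCE.items() if rng.random() < p]
    return {
        "id": f"synthetic-{rng.randrange(10**6):06d}",
        "age": rng.randint(0, 90),
        "conditions": ";".join(conditions),
    }

def export_csv(patients: list[dict]) -> str:
    """Flatten the cohort into CSV text, header row first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "age", "conditions"])
    writer.writeheader()
    writer.writerows(patients)
    return buf.getvalue()

rng = random.Random(42)  # seeded so runs are reproducible
cohort = [generate_patient(rng) for _ in range(5)]
print(export_csv(cohort))
```

The real system layers full disease-progression modules on top of this kind of sampling, but the privacy property is the same: every record is generated, so none can leak a real patient’s history.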

On its site, MITRE says its intent in creating Synthea is to provide “high-quality, synthetic, realistic but not real patient data and associated health records covering every aspect of healthcare.” As MITRE notes, having a batch of synthetic patient data on hand can be pretty, well, handy in evaluating new treatment models, care management systems, clinical support tools and more. It’s also a convenient way to predict the impact of public health decisions quickly.

This is such a good idea that I’m surprised nobody else has done something comparable. (Well, at least as far as I know no one has.) Not only that, it’s great to see the software being made available freely via the open source distribution model.

Of course, in the final analysis, healthcare organizations want to work with their own data, not synthetic substitutes. But at least in some cases, Synthea may offer hospitals and health systems a nice head start.

Hospital EMR Adoption Divide Widening, With Critical Access Hospitals Lagging

Posted on September 8, 2017 | Written by Anne Zieger

I don’t know about you, but I was a bit skeptical when HIMSS Analytics rolled out its EMRAM (Electronic Medical Record Adoption Model) research program. As some of you doubtless know, EMRAM breaks EMR adoption into eight stages, from Stage 0 (no health IT ancillaries installed) to Stage 7 (complete EMR installed, with data analytics on board).

From its launch onward, I’ve been skeptical about EMRAM’s value, in part because I’ve never been sure that hospital EMR adoption could be packaged neatly into the EMRAM stages. Perhaps the research model is constructed well, but the presumption that a multivariate process of health IT adoption can be tracked this way is a bit iffy in my opinion.

On the other hand, I like the way the following study breaks things out. New research published in the Journal of the American Medical Informatics Association looks at broader measures of hospital EHR adoption, as well as their level of performance in two key categories.

The study’s main goal was to assess the divide between hospitals using their EHRs in an advanced fashion and those that were not. One of the key steps in the process was crunching the numbers to identify hospital characteristics associated with high adoption on each of the advanced-use criteria.

To conduct the research, the authors dug into 2008 to 2015 American Hospital Association Information Technology Supplement survey data. Using the data, the researchers measured “basic” and “comprehensive” EHR adoption among hospitals. (The ONC has created definitions for both basic and comprehensive adoption.)

Next, the research team used new supplement questions to evaluate advanced use of EHRs. As part of this process, they also used EHR data to evaluate performance management and patient engagement functions.

When all was said and done, they drew the following conclusions:

  • 80.5% of hospitals had adopted a basic EHR system, up 5.3% from 2014
  • 37.5% of hospitals had adopted at least 8 (of 10) EHR data sets useful for performance measurement
  • 41.7% of hospitals had adopted at least 8 (of 10) EHR functions related to patient engagement

One thing that stood out among all the data was that critical access hospitals were less likely to have adopted at least 8 of the performance measurement functions and at least 8 of the patient engagement functions. (Notably, HIMSS Analytics research from 2015 had already found that rural hospitals had begun to close this gap.)

“A digital divide appears to be emerging [among hospitals], with critical-access hospitals in particular lagging behind,” the article says. “This is concerning, because EHR-enabled performance measurement and patient engagement are key contributors to improving hospital performance.”

While the results don’t surprise me – and probably won’t surprise you either – it’s a shame to be reminded that critical access hospitals are trailing other facilities. As we all know, they’re always behind the eight ball financially, often understaffed and overloaded.

Given their challenges, it’s predictable that critical access hospitals would continue to lag behind on the health IT adoption curve. Unfortunately, this deprives them of feedback which could improve care and perhaps offer a welcome boost to their efficiency as well. It’s a shame the way the poor always get poorer.

Did EMRs Help Hospitals Hit By Hurricane Harvey?

Posted on September 5, 2017 | Written by Anne Zieger

On August 25, 2005, Hurricane Katrina made landfall. Over the next few days, it devastated communities from Florida to Texas, generating massive storm surges and triggering levee failures that drowned cities like New Orleans. It was the costliest natural disaster in the history of the United States.

At the time, virtually all healthcare providers used paper medical records, many of which were destroyed by flooding. According to an AHIMA article, the flood waters destroyed roughly 400,000 paper records, a catastrophic loss by any standard.

The situation wasn’t nearly as dire at facilities like Tulane University Hospital and Clinic, though. The New Orleans-based organization had implemented an EMR before the storm hit. In the trying weeks afterward, physicians there had access to medical records, while many other hospitals struggled to gather patient information for months or even years after Katrina.

Now, we’re facing the aftermath of Hurricane Harvey, which has all but submerged the city of Houston. Days after the storm’s peak, which dumped a record 51.88 inches of rain on Texas, roughly a third of the Houston area was covered in water, and Texas officials estimated that close to 49,000 homes had suffered flood damage.

During the worst of the storm, some 20 Houston hospitals transferred some or all of their patients to facilities outside of the area as water rose in their basements or levees seemed ready to burst. In its immediate aftermath, many of the area’s 110 facilities shut down outpatient services and canceled elective surgeries.

But despite the challenges they faced, the majority of Houston-area hospitals remained open for business. One reason for their ability to function: unlike the hospitals battered by Katrina, they have EMRs in place. The area didn’t see any major power outages, and the systems seem to have stayed online.

It’s hard to say whether New Orleans would’ve fared better if the city’s hospitals had already implemented EMRs. Houston hospitals were apparently better prepared for hurricane flooding, having put a host of storm fortifications in place after Tropical Storm Allison wreaked massive damage sixteen years ago.

That being said, it seems likely that the EMRs have helped hospitals keep the doors open and keep caring for patients. If nothing else, they gave facilities a giant head start over New Orleans hospitals post-disaster, which in some cases had virtually nothing to go on when delivering care.

Of course, digital data offers some significant advantages over paper records, including but not limited to the ability to back up records to off-site facilities well outside a given disaster zone. But organizing patient data in an EMR arguably offers additional benefits, not the least of which is the ability to access existing workflows and protocols. Few tools are better suited to capturing, sharing and preserving care records in the midst of a catastrophic event like Harvey.

Over the next few decades, some observers predict that care will become massively decentralized, with remote nurses, telemedicine and connected health doing much of the heavy lifting day-to-day. If that comes to pass, and health IT intelligence is distributed across mobile devices instead, the EMR of today may be far less important to healthcare organizations hoping to rebound after a disaster. But until then, it’s safe to say that it’s a good thing Houston’s hospitals don’t rely on paper records anymore.

Rush Sues Patient Monitoring Vendor, Says System Didn’t Work

Posted on August 25, 2017 | Written by Anne Zieger

Rush University Medical Center has filed suit against one of its health IT vendors, claiming that its patient monitoring system didn’t work as promised and may have put patients in danger.

According to a story in the Chicago Tribune, Rush spent $18 million installing the Infinity Acute Monitoring Solution system from Telford, PA-based Draeger Inc. between 2012 and early 2016.  The Infinity system included bedside monitors, larger data aggregating monitors at central nursing stations, battery-powered portable monitors and M300 wireless patient-worn monitors.

However, despite years of attempts to fix the system, its patient alarms remained unreliable and inaccurate, Rush contends in the filing, which accuses Draeger of breach of contract, unjust enrichment and fraud.

In the suit, the 664-bed hospital and academic medical center says that the system was dogged by many issues which could have had an impact on patient safety. For example, it says, the portable monitors stopped collecting data when moved to wireless networks and sometimes stole IP addresses from bedside monitors, knocking the bedside monitor offline and leaving the patient unmonitored.

In addition, the system allegedly sent out false alarms for heart arrhythmia patients with pacemakers, distracting clinicians from performing their jobs, and failed to monitor apnea until 2015, according to the complaint. Even then, the system wasn’t monitoring some sets of apnea patients accurately, it said. Near the end, the system erased some patient records as well, it contends.

Not only that, Draeger didn’t deliver everything it was supposed to provide, the suit alleges, including wired-to-wireless monitoring and monitoring for desaturation of neonatal patients’ blood oxygen.

As if that weren’t enough, Draeger didn’t respond effectively when Rush executives told it about the problems it was having, according to the suit. “Rather than effectively remediating these problems, Draeger largely, and inaccurately, blamed them on Rush,” it contends.

While Draeger provided a software upgrade for the system, it was extremely difficult to implement, didn’t fix the original issues and created new problems, the suit says.

According to Rush, the Draeger system was supposed to last 10 years. However, because of technical problems it observed, the medical center replaced the system after only five years, spending $30 million on the new software, it says.

Rush is asking the court to make Draeger repay the $18 million it spent on the system, along with punitive damages and legal fees.

It’s hard to predict the outcome of such a case, particularly given that the system’s performance must have depended in part on how Rush managed the implementation. Plus, we’re only seeing the allegations Rush makes in the suit, not Draeger’s perspective, which could be very different and offer other details. Regardless, it seems likely these proceedings will be watched closely in the industry. Whether or not it is at fault, no vendor can afford to get a reputation for endangering patient safety, and no hospital can afford to buy from one that does.

A New Hospital Risk-Adjustment Model

Posted on August 23, 2017 | Written by Anne Zieger

Virtually all of the risk adjustment models with which I’m familiar are based on retrospective data. This data clearly has some predictive benefits – maybe it’s too cliché to say the past is prologue – and is already in our hands.

To look at just one example of what existing data archives can do, we need go no further than the pages of this blog. Late last year, I shared the story of a group of French hospitals which are working to predict admission rates as much as 15 days in advance by mining a store of historical data. Not surprisingly, the group’s key data includes 10 years’ worth of admission records.

The thing is, using historical data may not be as helpful when you’re trying to develop risk-adjustment models. After all, among other problems, the metrics by which we evaluate care shift over time, and our understanding of disease states changes as well, so using such models to improve care and outcomes has its limitations.

I’ve been thinking about these issues since John shared some information on a risk-adjustment tool which leverages relevant patient care data collected almost in real time.

The Midas Hospital Risk Adjustment Model, which is created specifically for single organizations, samples anywhere from 20 to 600 metrics, which can include data on mortality, hospital-acquired complications, unplanned readmission, lengths of stay and charges. It’s built using the Midas Health Analytics Platform, which comes from a group within healthcare services company Conduent. The platform captures data across hospital functional areas and aggregates it for use in care management.

The Midas team chooses what metrics to include using its in-house tools, which include a data warehouse populated with records on more than 100 million claims as well as data from more than 800 hospitals.

What makes the Midas model special, Conduent says, is that it incorporates a near-time feed of health data from hospital information systems. One of the key advantages to doing so is that rather than basing its analysis on ICD-9 data, which was in use until relatively recently, it can leverage clinically-detailed ICD-10 data, the company says.

The result of this process is a model which is far more capable of isolating small but meaningful differences between individual patients, Conduent says. Then, using this model, hospitals can risk-adjust clinical and financial outcomes data by provider for hospitalized patients and, hopefully, have a better basis for making future decisions.
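As an illustration of the general idea (a textbook sketch, not Conduent’s actual model — every coefficient below is invented), risk adjustment typically scores each admission’s expected outcome from a statistical model, then compares observed to expected outcomes at the provider level:

```python
import math

# Invented coefficients for a toy logistic mortality model.
INTERCEPT = -4.0
COEF_AGE = 0.04          # per year of age
COEF_COMORBIDITY = 0.5   # per comorbidity on the problem list

def expected_mortality(age: int, comorbidities: int) -> float:
    """Predicted probability of death for one admission."""
    logit = INTERCEPT + COEF_AGE * age + COEF_COMORBIDITY * comorbidities
    return 1.0 / (1.0 + math.exp(-logit))

# (age, comorbidity count, died?) for a toy panel of admissions at one provider.
admissions = [(55, 1, False), (80, 3, True), (70, 2, False), (90, 4, True)]

observed = sum(died for _, _, died in admissions)
expected = sum(expected_mortality(age, cx) for age, cx, _ in admissions)
oe_ratio = observed / expected  # > 1.0 suggests worse-than-expected outcomes

print(f"O/E ratio: {oe_ratio:.2f}")
```

The richer and fresher the inputs (ICD-10 detail, near-real-time feeds), the finer the distinctions the expected-value model can draw between patients — which is precisely the advantage Conduent claims for its approach.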

This approach sounds desirable (though I don’t know if it’s actually new). We probably need to move in the direction of using fresh data when analyzing care trends. I suspect few hospitals or health systems would have the resources to take this on today, but it’s something to consider.

Still, I’d want to know two things before digging into Midas further. First, while the idea sounds good, is there evidence to suggest that collecting recent data offers superior clinical results? And in that vein, how much of an improvement does it offer relative to analysis of historical data? Until we know these things, it’s hard to tell what we’ve got here.

Is Allscripts An Also-Ran In The Hospital EMR Business?

Posted on August 18, 2017 | Written by Anne Zieger

It all began with a question, as many classic tales do. Someone writing for the HIStalk.com website – I think it was its ever-anonymous, eponymous leader, Mr. HIStalk – asked readers to answer the question: “Who will benefit most from the proposed acquisition of McKesson EIS by Allscripts?”

The survey results were themselves worth a read:

  • Approximately 29% voted for “McKesson customers”
  • About 27% voted for “Allscripts customers”
  • 8.4% voted for “McKesson shareholders”
  • Roughly 23% voted for “Allscripts shareholders”
  • About 13% voted for “Allscripts competitors”

Two things about these responses interested me. One is that almost a third of respondents seem to think McKesson customers will benefit most from the company being acquired by Allscripts. The other is that a not-inconsiderable 13% of the site’s well-informed readers think the deal will help Allscripts’ competitors. If these readers are right, perhaps Allscripts should rethink the deal.

I was even more engaged by the analysis that followed, in which the writer took a close look at the dynamics of the hospital EMR market and commented on how Allscripts fits in. The results weren’t surprising, but again, if I were running Allscripts I’d take the following discussion seriously.

After working with data supplied by Blain Newton, EVP of HIMSS Analytics, the writer drew some firm conclusions. Here are some of the observations he shared:

  • While McKesson has twice as many hospitals as Allscripts, most of those hospitals have fewer than 150 beds, which means the acquisition may offer less benefit, he suggests.
  • In addition to having only 3% of hospitals overall, Allscripts controls only 6% of the 250+ bed hospital market, which probably doesn’t position it for success. In contrast, he notes, Epic controls 20% of this market and Meditech 19%.
  • His sense is that while hospitals typically want a full suite of products when they work with Epic, Cerner or Meditech, Allscripts customers may be more prone to buying just a few key systems.
  • Ultimately, he argues, Cerner, Epic and Meditech have a commanding lead in this market, for reasons which include that the three are well ahead when it comes to the overall number of hospitals served.
  • Given his premise, he believes that Epic is at the top of the pyramid, as it has almost double the number of hospitals with 500+ beds that Cerner does.

To cap off his analysis, Mr. HISTalk concludes that market forces make it unlikely that a dark horse will squeeze out one of the top hospital EMR vendors: “Everybody else is eating their dust and likely to lose business due to hospital consolidation and a shift toward the most successful vendors as much as all of us who – for our own reasons – wish that weren’t the case.”

It would take a separate analysis to predict whether the top three hospital EMR vendors are likely to win out over each other, but Epic seems to hold the most cards. Last year, I wrote a piece suggesting that Cerner was edging up on Epic, but I’m not sure whether or not my logic still holds. Epic may indeed be King of the (HIT) Universe for the foreseeable future.

E-Patient Update: When EMRs Make A Bad Process Worse

Posted on August 14, 2017 | Written by Anne Zieger

Last week, I wrote an item reflecting on a video interview John did with career CIO Drex DeFord. During the video, which focused on patient engagement and care coordination, DeFord argued that it’s best to make sure your processes are as efficient as they can get before you institutionalize them with big technology investments.

As I noted in the piece, it’d be nice if hospitals did the work of paring down processes to perfection before they embed those processes in their overall EMR workflow, but as far as I know this seldom happens.

Unfortunately, I’ve just gotten a taste of what can go wrong under these circumstances. During the rollout of its enterprise EMR, a health system with an otherwise impeccable reputation dropped the ball in a way which may have harmed my brother permanently.

An unusual day

My brother Joey, who’s in his late 40s, has Down’s Syndrome. He’s had a rocky health history, including heart problems that go with the condition and some others of his own. He lives with my parents in the suburbs of a large northeastern city about an hour by air from my home.

Not long ago, when I was staying with them, my brother had a very serious medical problem. One morning, I walked into the living room to find him wavering in and out of consciousness, and it became clear that he was in trouble. I woke my parents and called 911. As it turned out, his heart was starting and stopping which, unless perhaps you’re an emergency physician, was even scarier to watch than you might think.

Even for a sister who’d watched her younger brother go through countless health troubles, this was a pretty scary day. Sadly, the really upsetting stuff happened at the hospital.

Common sense notions

When we got Joey to the ED at this Fancy Northeastern Hospital, the staff couldn’t have been more helpful and considerate. (The nurses even took Joe’s outrageous flirting in stride.)  Within an hour or two, the clinical team had recommended implanting him with a pacemaker. But things went downhill from there.

Because he arrived on Friday afternoon, staff prepared for the implantation right away, as the procedure apparently wasn’t available Saturday and Sunday and he needed help immediately. (The lack of weekend coverage strikes me as ludicrous, but it’s a topic for another column.)

As part of the prep, staff let my mother know that the procedure was typically done without general anesthesia. At the time, my mother made clear that while Joey was calm now, he might very well get too anxious to proceed without being knocked out. She thought the hospital team understood and were planning accordingly.

Apparently, though, the common-sense notion that some people freak out and need to be medicated during this kind of procedure never entered their minds, didn’t fit with their processes, or both. Even my brother’s obvious impairment doesn’t seem to have raised any red flags.

“I don’t have his records!”

I wasn’t there for the rest of the story, but my mother filled me in later. When Joey arrived in the procedure room, staff had no idea that he might need special accommodations and canceled the implantation when he wouldn’t hold still. Mom tells me one doctor yelled: “But I don’t have his records!” Because the procedure didn’t go down that day, he didn’t get his implant until Monday.

This kind of fumbling isn’t appropriate under any circumstances, but it’s even worse when it’s predictable. Apparently, my brother had the misfortune to show up on the first day of the hospital’s EMR go-live, and clinicians were sweating it. Not only were they overtaxed and rushing, they were struggling to keep up with the information flow.

Of course, I understand that going live on an EMR can be stressful and difficult. But in this case, and probably many others, things wouldn’t have fallen apart if the underlying processes had worked in the first place, prior to the implementation. Shouldn’t they have had protocols in place for road bumps like skittish patients or missing chart information even before the EMR was switched on?

Not the same

Within days of getting Joey back home, my mom saw that things were not the same with him. He no longer pulls his soda can from the fridge or dresses himself independently. He won’t even go to the bathroom on his own anymore. My mother tells me that there’s the old Joe (sweet and funny) and the new Joe (often combative and confused).  Within weeks of the pacemaker implantation, he had a seizure.

Neither my parents nor I know whether the delay in getting the pacemaker put in led to his loss of functioning. We’re aware that the episode he had at home prior to treatment could’ve led to injuries that affect his functioning today.  We also know that adults with Down’s Syndrome slip into dementia at a far younger age than is typical for people without the condition. But these new deficits only seemed to set in after he came home.

My mother still simmers over the weekend he spent without much-needed care, seemingly due to a procedural roadblock that just about anyone could’ve anticipated. She thinks about the time spent between Friday and Monday, during which she assumes his heart was struggling to work. “His heart was starting and stopping, Anne,” she said. “Starting and stopping. All because they couldn’t get it right the first time.”

Is It Time To Put FHIR-Based Development Front And Center?

Posted on August 9, 2017 | Written by Anne Zieger

I like to look at questions other people in the #HIT world wonder about, and see whether I have a different way of looking at the subject, or something to contribute to the discussion. This time I was provoked by one asked by Chad Johnson (@OchoTex), editor of HealthStandards.com and senior marketing manager with Corepoint Health.

In a recent HealthStandards.com article, Chad asks: “What do CIOs need to know about the future of data exchange?” I thought it was an interesting question; after all, everyone in HIT, including CIOs, would like to know the answer!

In his discussion, Chad argues that #FHIR could create significant change in healthcare infrastructure. He notes that if vendors like Cerner or Epic publish a capabilities-based API, providers’ technical, clinical and workflow teams will be able to develop custom solutions that connect to those systems.

As he rightfully points out, today IT departments have to invest a lot of time doing rework. Without an interface like FHIR in place, IT staffers need to develop workflows for one application at a time, rather than creating them once and moving on. That’s just nuts. It’s hard to argue that if FHIR APIs offer uniform data access, everyone wins.
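The promise is roughly this: once every vendor serves the same resource shapes, one piece of client code works against all of them. As a hypothetical sketch in Python (the JSON below imitates a FHIR R4 Patient resource; no particular vendor endpoint is assumed):

```python
import json

# What a FHIR Patient resource looks like, regardless of which EMR served it.
raw = json.dumps({
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
})

def display_name(patient: dict) -> str:
    """Build 'Given Family' from the first name entry -- vendor-agnostic."""
    name = patient["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")]).strip()

patient = json.loads(raw)
assert patient["resourceType"] == "Patient"  # same check for any FHIR server
print(display_name(patient))  # Peter James Chalmers
```

Written once, a helper like `display_name` needs no per-vendor rework — which is exactly the duplicated effort Chad says IT departments could shed.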

Far be it from me to argue with a good man like @OchoTex. He makes a good point about FHIR, one which can’t be emphasized enough – that FHIR has the potential to make vendor-specific workflow rewrites a thing of the past. Without a doubt, healthcare CIOs need to keep that in mind.

As for me, I have a couple of responses to bring to the table, and some additional questions of my own.

Since I’m an HIT trend analyst rather than an actual tech pro, I can’t say whether FHIR APIs can or can’t do what Chad is describing, though I have little doubt that Chad is right about their potential uses.

Still, I’d point out that since none other than FHIR project director Grahame Grieve has cautioned us about its current limitations, we probably want to temper our enthusiasm a bit. (I know I’ve made this point a few times here, perhaps ad nauseam, but I still think it bears repeating.)

So, given that FHIR hasn’t reached its full potential, it may be that health IT leaders should invest added time on solving other important interoperability problems.

One example that leaps to mind immediately is solving patient matching problems. This is a big deal: after all, if you can’t match patient records accurately across providers, it’s likely to lead to wrong-patient medical errors.

In fact, according to a study released by AHIMA last year, 72 percent of HIM professionals who responded work on mitigating possible patient record duplicates every week. I have no reason to think things have gotten better since. We must find an approach that will scale if we want interoperable data to be worth using.
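To see why duplicates pile up in the first place, consider a toy example: the “same” patient typically arrives from different providers with small variations in every field. A crude deterministic match key (all names and values below are invented) can catch some of these, though production systems generally rely on probabilistic scoring across many fields rather than anything this simple.

```python
# Toy illustration of cross-provider patient matching: two records for the
# same (fictional) person, entered differently by two providers.

def normalize(record: dict) -> tuple:
    """Crude deterministic match key: strip punctuation from the last name,
    keep only the first 3 letters of the first name, and the date of birth."""
    last = record["last"].strip().lower().replace("-", "").replace("'", "")
    first = record["first"].strip().lower()[:3]
    return (last, first, record["dob"])

provider_a = {"first": "Katherine", "last": "O'Brien", "dob": "1985-03-12"}
provider_b = {"first": "Kat", "last": "OBrien", "dob": "1985-03-12"}

# The raw records differ, but the normalized keys line up:
print(normalize(provider_a) == normalize(provider_b))  # True
```

Even this sketch shows the tension: normalize too little and duplicates slip through; normalize too much and two different patients collide, which is exactly the wrong-patient error we’re trying to avoid.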

And patient data matching is just one item on a long list of health data interoperability concerns. I’m sure you’re aware of other pressing problems which could undercut the value of sharing patient records. The question is, are we going to address those problems before we begin full-scale health data exchange? Or does it make more sense to pave the road to data exchange and smooth out the bumps later?

Hospital CIOs Still Think Outcomes Improvement Is The Best Use Of EMR Data

Posted on August 4, 2017 I Written By

Anne Zieger is veteran healthcare editor and analyst with 25 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

Sure, there might be a lot of ways to leverage the data found within EMRs, but outcomes improvement is still king. That’s one of the standout conclusions from a recently released survey of CHIME CIOs, sponsored by the trade group and industry vendor LeanTaaS, which asked hospital CIOs five questions about how EMR data use affects operating margins and revenue.

I don’t know about you, but I wasn’t surprised to read that 24% of respondents felt that improving clinical outcomes was the most effective use of their EMR data. Hey, why else would their organizations have spent so much money on EMRs in the first place? (OK, that’s probably a better question than I’ve made it out to be.)

Ten percent of respondents said that increasing operational efficiencies was the best use of EMR data, an idea which is worth exploring further, but the study didn’t offer a whole lot of additional detail on their thought process. Meanwhile, 6% said that lowering readmissions was the most effective use of EMR data, and 2% felt that its highest use was reducing unnecessary admissions. (FWIW, the press release covering the survey suggested that the growth in value-based payment should’ve pushed the “reducing readmissions” number higher, but I think that’s oversimplifying things.)

In addition to looking at EMR data benefits, the study looked at other factors that had an impact on revenue and margins. For example, respondents said that reducing labor costs (35%) and boosting OR and ED efficiency (27%) would best improve operating margins, followed by 24% who favored optimizing inpatient revenue by increasing access. I think you’d see similar responses from others in the hospital C-suite. After all, it’s hard to argue that labor costs are a big deal.

Meanwhile, 52% of the CIOs said that optimizing equipment use was the best approach for building revenue, followed by optimizing OR use (40%). Forty-five percent of responding CIOs said that OR-related call strategies had the best chance of improving operating margins.

That being said, the CIOs don’t exactly feel free to effect changes on any of these fronts, though their reasons varied.

Fifty-four percent of respondents said that budget limitations were the biggest constraint they faced in launching new initiatives, and 33% said the biggest obstacle was a lack of support resources. This was followed by 17% who said that new initiatives were being eclipsed by higher-priority projects, 17% who said they lacked buy-in from management, and 10% who said they lacked the infrastructure to pursue new projects.

Are any of these constraints unfamiliar to you, readers? Probably not. Wouldn’t it be nice if we at least solved these predictable problems and could move on to different stumbling blocks?

Hospital Execs Underestimate QPP Impact

Posted on July 7, 2017 I Written By

Anne Zieger is veteran healthcare editor and analyst with 25 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

A new survey by Nuance Communications suggests that hospital finance leaders aren’t prepared to meet the demands of MACRA’s Merit-Based Incentive Payment System (MIPS), and may not understand the extent to which MIPS could impact their bottom line. Worse, survey results suggest that many of those who were convinced they knew what was involved in meeting program demands were dead wrong.

The survey found that many hospital finance leaders weren’t aware that if they don’t participate in the MIPS Quality Payment Program (QPP), they could see a 4% reduction in Medicare reimbursements by 2019.

Not only that, those who were aware of the program didn’t have a great grasp of the details. More than 75% of respondents who claimed to be somewhat or very confident in their understanding of QPP got the 4% at-risk number wrong. Meanwhile, 60% of respondents either underestimated the percentage of revenue at risk or simply did not know what the number was.

In addition, a significant number of respondents weren’t aware of key QPP reporting requirements. For example, just 35% of finance respondents who felt confident they understood QPP requirements actually knew that they had to submit 90 days of quality data to participate. Meanwhile, 50% either underestimated or did not know how many days of data they needed to provide.

On a broader level, as Nuance noted, the issue is that hospitals aren’t ready to meet QPP demands even if they do know what’s at stake. Too many aren’t prepared to capture complete clinical documentation, develop business processes to support this data capture and raise provider awareness of these issues. In other words, not only are finance leaders unaware of some key QPP requirements, they may not have the infrastructure to meet them.

This is a big deal. Not only will their organizations lose money if they don’t meet QPP requirements, they’ll also miss out on the 5% positive Medicare payment adjustment available to those who do play by the rules.
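To put the stakes in concrete terms, here’s some back-of-the-envelope math. The annual Medicare revenue figure is purely hypothetical; the 4% downside and 5% upside are the adjustments the survey describes.

```python
# Rough sketch of the swing between QPP non-participation and full credit.
# The revenue figure is a made-up example, not a benchmark.

def qpp_adjustment(medicare_revenue: float, rate: float) -> float:
    """Dollar impact of a Medicare payment adjustment at the given rate."""
    return medicare_revenue * rate

annual_medicare_revenue = 50_000_000  # hypothetical hospital figure

penalty = qpp_adjustment(annual_medicare_revenue, -0.04)  # sit out entirely
bonus = qpp_adjustment(annual_medicare_revenue, 0.05)     # positive adjustment

print(penalty, bonus)  # -2000000.0 2500000.0
```

On those (invented) numbers, the gap between doing nothing and doing well is $4.5 million a year, which is a lot of money to leave on the table out of inattention.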

Lest the respondents sound careless, let’s do a reality check here. Without a doubt, the transition into the world of MIPS isn’t a simple one. Hospitals and medical practices will have to meet deadlines and present quality data in new ways. That would be a hassle in any event, but it’s particularly difficult given how many other quality data reporting requirements they must meet.

That being said, I’d argue that even if they’ve gotten a slow start, hospitals have enough time to meet the basic requirements of QPP compliance. For example, turning over 90 days of quality data by March of next year shouldn’t be a gigantic stretch compared to, say, submitting a year’s worth of data under advanced Meaningful Use models. Not to mention the Pick Your Pace option of reporting just one measure, which avoids all penalties.

Clearly, having the right health IT tools will be important to this process. (Not surprisingly, Nuance is pitching its own reporting tools as part of the mix.) But I’m struck by the notion that organizations can’t live on technology alone in this case. As with many problems in healthcare, tech solutions aren’t worth much if the business doesn’t have the right processes in place. Let’s see if finance executives know at least that much.