
Hospital EMR Adoption Divide Widening, With Critical Access Hospitals Lagging

Posted on September 8, 2017 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth.

I don’t know about you, but I was a bit skeptical when HIMSS Analytics rolled out its EMRAM (Electronic Medical Record Adoption Model) research program. As some of you doubtless know, EMRAM breaks EMR adoption into eight stages, from Stage 0 (no health IT ancillaries installed) to Stage 7 (complete EMR installed, with data analytics on board).

From its launch onward, I’ve been skeptical about EMRAM’s value, in part because I’ve never been sure that hospital EMR adoption could be packaged neatly into the EMRAM stages. Perhaps the research model is constructed well, but the presumption that a multivariate process of health IT adoption can be tracked this way is a bit iffy in my opinion.

On the other hand, I like the way the following study breaks things out. New research published in the Journal of the American Medical Informatics Association looks at broader measures of hospital EHR adoption, as well as their level of performance in two key categories.

The study’s main goal was to assess the divide between hospitals using their EHRs in an advanced fashion and those that were not. A key step in the process was to identify the hospital characteristics associated with high adoption on each of the advanced-use criteria.

To conduct the research, the authors dug into 2008–2015 American Hospital Association Information Technology Supplement survey data. Using the data, the researchers measured “basic” and “comprehensive” EHR adoption among hospitals. (The ONC has created definitions for both basic and comprehensive adoption.)

Next, the research team used new supplement questions to evaluate advanced use of EHRs. As part of this process, they also used EHR data to evaluate performance management and patient engagement functions.

When all was said and done, they drew the following conclusions:

  • 80.5% of hospitals had adopted a basic EHR system, up 5.3% from 2014
  • 37.5% of hospitals had adopted at least 8 (of 10) EHR data sets useful for performance measurement
  • 41.7% of hospitals had adopted at least 8 (of 10) EHR functions related to patient engagement
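The “at least 8 of 10” figures above are simple threshold counts over per-hospital survey responses. As a minimal sketch of how such a share might be computed, here's a toy example in Python; the field names and sample records are invented for illustration and are not the actual AHA supplement schema:

```python
# Hypothetical per-hospital survey records: each maps ten
# performance-measurement functions to whether the hospital adopted them.
# Names and data are illustrative only, not the real AHA IT Supplement.
hospitals = [
    {"id": "H1", "critical_access": False,
     "functions": {f"pm_{i}": True for i in range(10)}},        # all 10 adopted
    {"id": "H2", "critical_access": True,
     "functions": {f"pm_{i}": (i < 5) for i in range(10)}},     # only 5 adopted
    {"id": "H3", "critical_access": False,
     "functions": {f"pm_{i}": (i < 8) for i in range(10)}},     # exactly 8
]

def share_meeting_threshold(records, threshold=8):
    """Fraction of hospitals that adopted at least `threshold` functions."""
    meeting = sum(
        1 for r in records
        if sum(r["functions"].values()) >= threshold
    )
    return meeting / len(records)

print(share_meeting_threshold(hospitals))  # 2 of 3 hospitals clear the bar
```

The same counting logic, grouped by a flag like `critical_access`, is what lets a study compare adoption rates across hospital types.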

One thing that stood out among all the data was that critical access hospitals were less likely to have adopted at least 8 of the performance measurement functions and at least 8 of the patient engagement functions. (Notably, HIMSS Analytics research from 2015 had already found that rural hospitals had begun to close this gap.)

“A digital divide appears to be emerging [among hospitals], with critical-access hospitals in particular lagging behind,” the article says. “This is concerning, because EHR-enabled performance measurement and patient engagement are key contributors to improving hospital performance.”

While the results don’t surprise me – and probably won’t surprise you either – it’s a shame to be reminded that critical access hospitals are trailing other facilities. As we all know, they’re always behind the eight ball financially, often understaffed and overloaded.

Given their challenges, it’s predictable that critical access hospitals would continue to lag behind on the health IT adoption curve. Unfortunately, this deprives them of feedback that could improve care and perhaps offer a welcome boost to their efficiency as well. It’s a shame the way the poor always get poorer.

Stages, Rankings, and Other Vanity Metrics

Posted on November 18, 2013 | Written By

John Lynn is the Founder of the blog network, which currently consists of 10 blogs containing over 8,000 articles, with John having written over 4,000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading Health IT career job board and blog. John is highly involved in social media, and in addition to his blogs he can also be found on Twitter (@techguy and @ehrandhit) and LinkedIn.

It seems like we’re always getting bombarded with the latest and greatest list of hospitals and EHR vendors being ranked, classified or sorted into the various levels of IT adoption. The most famous are probably the HIMSS stages, KLAS rankings, and Most Wired Hospitals. While I’m like most of you and can’t resist glancing at them, every time I do I wonder what value those rankings and classifications really have when it comes to Health IT adoption.

In the startup world there’s a term that’s very popular called vanity metrics. I believe it was first made popular by Eric Ries in this post. The idea is simple. Organizations (and the press that cover them) love to publish big numbers for an organization, but do those metrics really have any meaning?

When I look at the various stages and ranking systems out there in healthcare IT, I wonder if they’re all just vanity metrics. The press loves to put a number on something or to classify an organization versus another one. However, does the stage or ranking really say anything about what really matters to a healthcare organization?

I haven’t done any specific research on things like quality of care or financial health across these stages and rankings. Maybe organizations that rank higher, or have achieved a higher stage, actually do provide better care and have better financials. Though no doubt that research would also have to examine whether the rankings cause those results or merely correlate with them. Still, I wonder if these rankings and classifications are really just vanity metrics.

I wonder if there are other metrics we could use to evaluate a healthcare organization. I think the results of such metrics would find every institution wanting in some areas and excelling in others. Stages and rankings don’t take this into account. However, I believe it’s the reality at every institution.

This actually reminds me of Farzad Mostashari’s comments about Healthcare’s Inability to “Step on a Scale” Today. As Farzad asserts, healthcare can’t “step on a scale” today and know how it’s doing. This is partially because the “scales” we’re using today aren’t measuring the right metrics. It’s like the scale is telling us that we’re 5’9″ and so we’re concluding we’re overweight. Although I expect that many might argue that the scale is blank and we’re concluding whatever we want to conclude.

I’d love to hear what metrics you think a healthcare organization should be measuring. Let’s hear your thoughts in the comments.