
Will Chatbots Be Embedded In Health IT Infrastructure Within Five Years?

Posted on December 10, 2018 | Written by Anne Zieger

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

Brace yourself: The chatbots are coming. In fact, healthcare chatbots could become an important part of healthcare organizations’ IT infrastructure, according to research released by a market analyst firm. I have my doubts but do read on and see what you think.

Juniper Research is predicting that AI-powered chatbots will become the initial point of contact with healthcare providers for many consumers. As far as I know, this approach is not widespread in the US at present, though many vendors are developing tools that providers could deploy, and we’ve seen some success from companies like SimplifiMed as well as big tech companies like Microsoft that are enabling chatbots.

However, Juniper sees things changing rapidly over the next five years. It predicts that the number of chatbot interactions will shoot up at an average annual growth rate of 167%, from an estimated 21 million per year in 2018 to 2.8 billion per year in 2023. By that point, healthcare will represent 10% of all chatbot interactions across major verticals, Juniper says.
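
For readers who want to check the arithmetic, the growth rate follows directly from those two endpoints; a quick Python sanity check reproduces the figure:

```python
# Implied compound annual growth rate from the forecast's endpoints:
# 21 million interactions in 2018 -> 2.8 billion in 2023 (5 years).
start, end, years = 21e6, 2.8e9, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # ~166%, matching the ~167% Juniper cites
```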

According to the market research firm, there are a number of reasons chatbot use in healthcare will grow so rapidly, including consumers’ growing comfort with discussing their care via chatbot. Juniper also expects to see healthcare providers routinely use chatbots for customer experience management, though again, I’ve seen little evidence that this is happening just yet.

The massive growth in patient-chatbot interactions will also be fueled by a rise in the sophistication of conversational AI platforms, a leap so dramatic that consumers will handle a growing percentage of their healthcare business entirely via chatbot, the firm says. This, in turn, will free up medical staff time, saving countries’ healthcare systems around $3.7 billion by 2023.  This would prove to be a relatively modest savings for the giant US healthcare system, but it could be quite meaningful for a smaller country.

As healthcare organizations adopt chatbot platforms, their chief goal will be to see that information collected by chatbots is transferred to EHRs and other important applications, the report says. To make this happen, these organizations will have to make sure to integrate chatbot platforms with both clinical and line-of-business applications. (Vendors like PatientSphere already offer independent platforms designed to address such issues.)
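
The report doesn’t spell out what that hand-off would look like, but in practice it would likely mean pushing the chatbot’s structured intake answers into the EHR through a FHIR REST API. Here’s a minimal sketch along those lines; the endpoint URL and patient ID are hypothetical, and QuestionnaireResponse is just one plausible choice of resource:

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR R4 endpoint

def push_intake_to_ehr(patient_id: str, question: str, answer: str):
    """Record one chatbot intake answer as a FHIR QuestionnaireResponse."""
    resource = {
        "resourceType": "QuestionnaireResponse",
        "status": "completed",
        "subject": {"reference": f"Patient/{patient_id}"},
        "item": [{
            "linkId": "1",
            "text": question,
            "answer": [{"valueString": answer}],
        }],
    }
    resp = requests.post(
        f"{FHIR_BASE}/QuestionnaireResponse",
        json=resource,
        headers={"Content-Type": "application/fhir+json"},
    )
    resp.raise_for_status()
    return resp.json()

# Example: push_intake_to_ehr("example", "Any chest pain today?", "No")
```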

All very interesting, no? Definitely. I share Juniper’s optimistic view of the chatbot’s role in healthcare delivery and customer service and have little doubt that even today’s relatively primitive bots are capable of handling many routine transactions.

That being said, I’m thinking it will be more like 10 years before chatbots are used widely by providers. If what I’ve seen is any indication, it will probably take that long before conversational AI can truly hold a conversation. If we hope to use AI-based chatbots routinely at the front end of important processes, they’ll just have to be smarter.

Next Steps In Making Healthcare AI Practical

Posted on November 30, 2018 | Written by Anne Zieger

In recent times, AI has joined blockchain on the list of technologies that just sort of crept into the health IT toolkit.

After all, blockchain was born out of the development of bitcoin, and not so long ago few imagined it was good for anything else. I doubt its creators ever contemplated using it for secure medical data exchange, though the notion seems obvious in retrospect.

And until fairly recently, artificial intelligence was largely a plaything for advanced computing researchers. I’m sure some AI researchers gave thought to cyborg doctors that could diagnose patients while beating them at chess and serving them lunch, but few practical applications existed.

Today, blockchain is at the core of countless health IT initiatives, many by vendors but an increasing number by providers as well. Healthcare AI projects, for their part, seem likely to represent the next wave of “new stuff” adoption. It’s at the stage blockchain was a year or two ago.

Before AI becomes more widely adopted in healthcare circles, though, the industry needs to tackle some practical issues with AI, and the list of “to-dos” keeps expanding. Only a few months ago, I wrote an item citing a few obstacles to healthcare AI deployment, which included:

  • The need to make sure clinicians understand how the AI draws its conclusions
  • Integrating AI applications with existing clinical workflow
  • Selecting, cleaning and normalizing healthcare data used to “train” the AI (see the sketch below)
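
That third item is the least glamorous and usually the most time-consuming. A small, purely illustrative sketch of the sort of cleaning and normalization involved, using made-up lab values:

```python
import pandas as pd

# Toy stand-in for messy multi-site lab data; real training data would
# come from EHR extracts and be far larger and messier than this.
df = pd.DataFrame({
    "glucose": [95, None, 310, 88, 102],            # mg/dL, one value missing
    "glucose_mmol": [None, 6.1, None, None, None],  # one site reports mmol/L
})

# Normalize units: convert mmol/L readings to mg/dL (1 mmol/L ~ 18 mg/dL)
df["glucose"] = df["glucose"].fillna(df["glucose_mmol"] * 18)

# Impute remaining gaps with the median, then standardize for training
df["glucose"] = df["glucose"].fillna(df["glucose"].median())
df["glucose_z"] = (df["glucose"] - df["glucose"].mean()) / df["glucose"].std()
print(df)
```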

Since then, other tough challenges to the use of healthcare AI have emerged as healthcare leaders think things over, such as:

Agreeing on best practices

Sure, hospitals would be interested in rolling out machine learning if they could, say, decrease the length of hospital stays for pneumonia and save millions. The thing is, how would they get going? At present, there’s no real playbook as to how these kinds of applications should be conceptualized, developed and maintained. Until healthcare leaders reach a consensus position on how healthcare AI projects should generally work, such projects may be too risky and/or prohibitively expensive for providers to consider.

Identifying use cases

As an editor, I see a few interesting healthcare AI case studies trickle into my email inbox every week, which keeps me intrigued. The thing is, if I were a healthcare CIO this probably wouldn’t be enough information to help me decide whether it’s time to take up the healthcare AI torch. Until we’ve identified some solid use cases for healthcare AI, almost anything providers do with it is likely to be highly experimental. Yes, there are some organizations that can afford to research new tech but many just don’t have the staff or resources to invest. Until some well-documented standard use cases for healthcare AI emerge, they’re likely to hang back.

The healthcare AI discussion is clearly at a relatively early stage, and more obstacles are likely to surface as providers grapple with the technology. In the meantime, tackling the ones above is certainly challenge enough.

Less Than Half of Healthcare Users Trust Critical Organizational Data

Posted on November 29, 2018 | Written by Anne Zieger

If you’re a healthcare CIO, you have to hope that your users trust the data in your systems and feel they can leverage it to do their jobs better. However, some of your colleagues don’t seem to be so sure. A new study has concluded that less than half of users in responding healthcare organizations have a high degree of trust in their clinical, operational or financial data.

The study, which was conducted by Dimensional Insight, surveyed 85 chief information officers and other senior health IT leaders. It asked these leaders how they rated trust in the data leveraged by their various user communities, what percentage of their user population they felt was self-service-oriented and making data-driven decisions, and whether they planned to increase or decrease their investments in data trust and self-service analytics.

When rating data trust on a 10-point scale, just 40% of respondents rated trust in their financial data at eight or above; clinical data fared no better at 40%, and operational data trailed at 36%.

Perhaps, then, it follows that healthcare organizations responding to the survey had low levels of self-service data use. Clinical data users had a particularly low rate of self-service use, while financial users seemed fairly likely to be accessing and using data independently.

Given these low levels of trust and self-service data usage, it’s not surprising to find out that 76% of respondents said they plan to increase their investment in improving clinical data trust, 77% in operational data trust and 70% in financial data trust.

Also, 78% said they plan to increase their spending on self-service analytics for clinical data and 73% expect to spend more on self-service analytics for operational data. Meanwhile, while 68% plan to increase spending on financial self-service analytics, 2% actually planned to decrease the spending in this area, suggesting that this category is perhaps a bit healthier.

In summing up, the report included recommendations on creating more trust in organizational data from George Dealy, Dimensional Insight’s vice president of healthcare applications. Dealy’s suggestions include making sure that subject matter experts help to design systems providing information critical to their decision-making process, especially when it comes to clinicians. He also points out that health IT leaders could benefit from keeping key users aware of what data exists and making it easy for them to access it.

Unfortunately, there are still far too many data silos protected by jealous guardians in one department or another. While subject matter experts can design the ideal data-sharing platform for their needs, there are still a lot of control issues to address before everyone gets what they need. In other words, increasing trust is well and good, but the real task is seeing to it that the data is rich and robust when users get it.

AI May Be Less Skilled At Analyzing Images From Outside Organizations

Posted on November 26, 2018 | Written by Anne Zieger

Using AI technologies to analyze medical images is looking more and more promising by the day. However, new research suggests that when AI tools have to cope with images from multiple health systems, they have a harder time than when they stick to just one.

According to a new study published in PLOS Medicine, interest is growing in analyzing medical images using convolutional neural networks, a class of deep neural networks often dedicated to this purpose. To date, CNNs have made progress in analyzing X-rays to diagnose disease, but it’s not clear whether CNNs trained on X-rays from one hospital or system will work just as well in other hospitals and health systems.

To look into this issue, the authors trained pneumonia screening CNNs on 158,323 chest X-rays, including 112,120 X-rays from the NIH Clinical Center, 42,396 X-rays from Mount Sinai Hospital and 3,807 images from the Indiana University Network for Patient Care.

In their analysis, the researchers examined the effect of pooling data from sites with different prevalences of pneumonia. One of their key findings was that when two training data sites had the same pneumonia prevalence, the CNNs performed consistently, but when a 10-fold difference in pneumonia rates was introduced between sites, their performance diverged. In that instance, the CNN performed better on internal data than on data supplied by an external organization.
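
At its core, the comparison comes down to scoring one trained model separately on each site’s held-out images and comparing the results. A minimal sketch of that evaluation step, with random placeholder arrays standing in for the real pneumonia labels and CNN output probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder labels and predictions; in the study these would be
# pneumonia labels and CNN probabilities for each system's held-out X-rays.
internal_y = rng.integers(0, 2, 500)
internal_p = np.clip(internal_y * 0.4 + rng.random(500) * 0.6, 0, 1)

external_y = rng.integers(0, 2, 500)
external_p = rng.random(500)  # weaker signal on outside data

print("internal AUC:", roc_auc_score(internal_y, internal_p))
print("external AUC:", roc_auc_score(external_y, external_p))
```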

The research team found that in 3 out of 5 natural comparisons, the CNNs’ performance on chest X-rays from outside hospitals was significantly lower than on held-out X-rays from the original hospital system. That points to potential problems when health systems try to use AI on imaging from partner organizations, which is not great news given the benefits AI-supported diagnosis might offer across, say, an ACO.

On the other hand, it’s worth noting that the CNNs were able to determine which organization originally created the images at an extremely high rate of accuracy, and to calibrate their diagnostic predictions accordingly. In other words, it sounds as though, over time, CNNs might be able to adjust to different sets of data on the fly. (The researchers didn’t dig into how this might affect computing performance.)
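
That origin-detection finding is easy to believe once you consider how much site-specific signal hides in mundane image statistics: scanner models, exposure settings, even text burned into the film. A toy illustration, assuming (purely for demonstration) that one site’s images run slightly brighter than the other’s:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Fake "images": site A is slightly brighter on average than site B,
# mimicking scanner and protocol differences between hospital systems.
site_a = rng.normal(0.52, 0.05, (300, 64 * 64))
site_b = rng.normal(0.48, 0.05, (300, 64 * 64))

X = np.vstack([site_a, site_b])
y = np.array([0] * 300 + [1] * 300)  # 0 = site A, 1 = site B

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("site-detection accuracy:", clf.score(X, y))  # near 1.0
```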

Of course, it’s possible that we’ll develop a method for normalizing imaging data that works in the age of AI, in which case the need to adjust for different data attributes may disappear. However, we’re at the very early stages of training AIs for image sharing, so it’s anyone’s guess what form that normalization will take.

Interoperability Problems Undercut Conclusions of CHIME Most Wired Survey

Posted on November 13, 2018 | Written by Anne Zieger

Most of you have probably already seen the topline results from CHIME’s  “Healthcare’s Most Wired: National Trends 2018” study, which was released last month.

Some of the more interesting numbers coming out of the survey, at least for me, included the following:

  • Just 60% of responding physicians could access a hospital network’s virtual patient visit technology from outside its network, which kinda defeats the purpose of decentralizing care delivery.
  • The number of clinical alerts sent from a surveillance system integrated with an EHR topped out at 58% (alerts to critical care units), with 35% of respondents reporting that they had no surveillance system in place. This seems like quite a lost opportunity.
  • Virtually all (94%) participating organizations said that their organization’s EHR could consume discrete data, and 64% said they could incorporate CCDs and CCRs from physician-office EHRs as discrete data. (A sketch of what that involves follows this list.)
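
For those who haven’t looked inside one, “incorporating a CCD as discrete data” means parsing HL7 CDA XML and mapping its coded entries into the receiving EHR’s own structures. A minimal sketch of the very first step, listing the sections in a CCD file (the filename is hypothetical):

```python
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}  # namespace used by CDA/CCD documents

def list_ccd_sections(path: str):
    """Print the LOINC code and title of each section in a CCD document."""
    tree = ET.parse(path)
    for section in tree.iter("{urn:hl7-org:v3}section"):
        code = section.find("hl7:code", NS)
        title = section.find("hl7:title", NS)
        print(
            code.get("code") if code is not None else "?",
            title.text if title is not None else "(untitled)",
        )

# list_ccd_sections("sample_ccd.xml")  # hypothetical file name
```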

What really stands out for me, though, is that if CHIME’s overall analysis is correct, many aspects of our data analytics and patient engagement progress still hang in the balance.

Perhaps by design, the hospital industry comes out looking like it’s doing well in most of the technology strategy areas the survey asks about, while some important areas of weakness are left out.

Specifically, in the introduction to its survey report, the group lists “integration and interoperability” as one of two groups of foundational technologies that must be in place before population health management/value-based care,  patient engagement and telehealth programs can proceed.

If that’s true, and it probably is, it throws up a red flag, which is probably why the report glossed over the fact that overall interoperability between hospitals is still very much in question. (If nothing else, it’s high time the hospitals adjust their interoperability expectations.) While it did cite numbers regarding what can be done with CCDs, it didn’t address the much bigger problems the industry faces in sharing data more fluidly.

Look, I don’t mean to be too literal here. Even if CHIME didn’t say so specifically, hospitals and health systems can make some progress on population health, patient engagement, and telehealth strategies even if they’re forced to stick to using their own internal data. Failing to establish fluid health data sharing between facility A and facility B may lead to less-than-ideal results, but it doesn’t stop either of them from marching towards goals like PHM or value-based care individually.

On the other hand, there certainly is an extent to which a lack of interoperability drags down the quality of our results. Perhaps the data sets we have are good enough even if they’re incomplete, but I think we’ve already got a pretty good sense that no amount of CCD exchange will get the results we ultimately hope to see. In other words, I’m suggesting that we take the CHIME survey’s data points in context.

MRI Installation Slip Disables Hospital iOS Devices

Posted on November 9, 2018 | Written by Anne Zieger

The following is the story of an MRI installation which took a surprising turn. According to a recent post on Reddit which has since gone viral in the IT press, a problem with the installation managed to shut down and completely disable every iOS-based device in the facility.

A few weeks ago, Erik Wooldridge of Morris Hospital in Illinois, a perplexed member of the r/sysadmin subreddit, posted the following:

This is probably the most bizarre issue I’ve had in my career in IT. One of our multi-practice facilities is having a new MRI installed and apparently something went wrong when testing the new machine. We received a call near the end of the day from the campus that none of their cell phones work after testing [the] MRI… After going out there we discovered that this issue only impacted iOS devices. iPads, iPhones, and Apple Watches were all completely disabled.

According to Wooldridge, the outage affected about 40 users. Many of the affected devices were completely dead. Others that could power on seemed to have issues with the cellular radio, though their Wi-Fi connections continued to work. Over time, the affected devices began to recover, but one iPhone had severe service issues after the incident, and while some of the Apple Watches remained on, their touchscreens still weren’t working several days later.

At first, Wooldridge and his colleagues feared that the outage could be due to an electromagnetic pulse, a terrifying possibility which could’ve meant very bad things for the data center. Fortunately, that didn’t turn out to be the problem.

Later, the vendor, GE, told Wooldridge and his colleagues that the problem was a leak of the liquid helium used to cool the MRI’s superconducting magnets. The GE engineers turned out to be right that the leak was the source of the problems, but they couldn’t explain why Android devices were untouched by the phenomenon.

Eventually, a blogger named Kyle Wiens at iFixit seems to have found an explanation for why iOS devices were hit so hard by the helium leak: the tiny MEMS timing oscillators in newer iPhones can stop working when helium atoms seep into their sealed packaging. Apparently, even Apple admits that exposing iPhones to evaporating liquefied gases such as helium can take them offline.

While no one’s suggesting that liquefied helium is good for any type of microelectronic device, the bottom line seems to be that iOS devices are more sensitive to this effect than Android devices. Let’s hope most readers never need to test this out for themselves.

Many Providers Lack Dedicated Budget For Connected Medical Device Security

Posted on November 5, 2018 | Written by Anne Zieger

A new vendor survey has concluded that while most providers haven’t dedicated much of their budget specifically to managing and securing connected devices, most are convinced they have the situation under control. Rightly or wrongly, this seems to be part of a larger picture in which support for connected health devices hasn’t matured as much as the rest of the IT infrastructure.

The survey, which was conducted by Zingbox, developer of a healthcare Internet of Things analytics platform, collected responses from about 200 healthcare IT professionals and 200 clinical/biomedical engineers in the U.S., weighting results to US census levels for age, gender, region, and income.

According to Zingbox researchers, 87% of healthcare IT professionals responding to the survey said they were confident that their connected medical devices were protected from cyberattacks, and 79% said that their organization had real-time information on which of these devices might be vulnerable to cyberattacks.

Also, 69% said they believe that the security solutions they use to secure laptops and servers are capable of securing their connected medical devices as well. Not surprisingly, the vendor’s report argued that this may not be the case, given that medical devices aren’t designed to support on-device security tools like anti-virus software, and that blocking ports or protocols via gateways can lead to problems that include device malfunction.

When asked whether their organizations had a budget allocated specifically to securing connected medical devices, 53% said yes, and that the amount was sufficient, while 41% said no, that they didn’t have dollars allocated to the problem or hadn’t set aside enough dollars. (I’d be interested to know how they decided whether their device security was adequate; given the relative youth of this category their standards might be worth a look.)

Meanwhile, roughly 85% of clinical/biomedical engineers said they were confident they had an accurate inventory of connected medical devices in their network, with 64% of respondents noting that such device inventories were completed manually. Thirty-four percent said they did a manual room-to-room audit to get this job done, and about 30% said they did static asset management.

To determine which devices were in use, 55% of respondents said they did so manually, while 38% said they used an automated solution. Of those clinical/biomedical engineers doing manual checks, 28% walk over to the device location to check in person, and 27% find out by contacting someone.
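
Given how labor-intensive those manual checks sound, even a crude automated sweep would be a step up. As a rough sketch of what a first pass might look like, assuming a hand-maintained list of device IPs and that “online” can be approximated by a reachable TCP port:

```python
import csv
import socket
from datetime import datetime, timezone

# Hypothetical inventory: device name, IP address, and a port it listens on.
DEVICES = [
    ("infusion-pump-3", "10.0.12.41", 443),
    ("mri-console-1", "10.0.12.87", 104),  # 104 = DICOM
]

def is_online(ip: str, port: int, timeout: float = 2.0) -> bool:
    """Approximate 'online' as a successful TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

with open("device_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["device", "ip", "port", "online", "checked_at"])
    for name, ip, port in DEVICES:
        writer.writerow([name, ip, port, is_online(ip, port),
                         datetime.now(timezone.utc).isoformat()])
```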

To keep these devices online, 73% of these engineers said they conducted maintenance on a fixed schedule, including 29% that followed manufacturer recommendations, 27% adhering to internal schedules and 17% taking a cue from reseller recommendations.

Hospitals Sharing More Patient Data Than Ever, But Is It Having An Impact On Patient Care?

Posted on November 1, 2018 | Written by Anne Zieger

Brace yourself for more happy talk and positive interoperability spin, folks. Even if hospitals aren’t exchanging as much health data as they might have hoped, they are sharing more patient health data than ever before, according to a new report from the ONC.

The ONC, which recently analyzed 2017 data from the American Hospital Association’s Information Technology Supplement Survey, concluded that 93% of non-federal acute care hospitals have upgraded to the 2015 Edition Health IT Certification Criteria or plan to upgrade. These criteria include new technical capabilities that support health data interoperability.

Today, most hospitals (88%) can send patient summary of care records electronically, and receive them from outside sources (74%), ONC’s analysis concluded. In addition, last year the volume of hospitals reporting that they could query and integrate patient health data significantly increased.

Not only that, the volume of hospitals engaged in four key interoperability activities (electronically sending, receiving, finding and integrating health data) climbed 41% over 2016. On the downside, however, only four in 10 hospitals reported being able to find patient health information, send, receive and integrate patient summary of care records from outside sources into their data.

According to ONC, hospitals that work across these four key interoperability domains tend to be more sophisticated than their peers who don’t.

In fact, in 2017, 83% of hospitals able to send, receive, find and integrate outside health information also had electronic health information available at the point of care. That level is 20% higher than among hospitals engaging in just three of the domains, and a whopping seven times higher than among hospitals that engage in none.

Without a doubt, on its face this is good news. What’s not to like? Hospitals seem to be stepping up the interoperability game, and this can only be good for patients over time.

On the other hand, it’s hard for me to measure just how important it is in the near term. Yes, it seems like hospitals are getting more nimble, more motivated and more organized when it comes to data sharing, but it’s not clear what impact this may be having on patient care processes and outcomes.

Over time, most interoperability measures I’ve seen have focused on the receipt and transmission of patient health data far more than on the integration of that data into EHRs. I’d argue that it’s time to move beyond measuring the back-and-forth of data and put more emphasis on how often physicians actually use that data in their work.

There’s certainly a compelling case to be made that health data interoperability matters. I’ve never disputed that. But I think it’s time we measured success a bit more stringently. In other words, if ONC can’t yet define the clinical benefits of health data exchange clearly, in terms that matter to physicians, it’s time it made that happen.

Hospitals Taking Next-Gen EHR Development Seriously

Posted on October 22, 2018 | Written by Anne Zieger

Physicians have never been terribly happy with EHRs, most of which have done little to meet the lofty clinical goals set forth by healthcare leaders. Despite the fact that EHRs have been a fact of life in medicine for nearly a decade, health IT leaders don’t seem to have figured out how to build a significantly better one — or even what “better” means.

While there has been the occasional project leveraging big data from EHRs to improve care processes, little has been done that makes it simple for physicians to benefit from these insights on a day-to-day basis. Not only that, while EHRs may have become more usable over time, they still don’t present patient data in an intuitive manner.

However, hospital leaders may be developing a more focused idea of how a next-gen EHR should work, at least if recent efforts by Stanford Medicine and Penn Medicine are any indication.

For example, Stanford has developed a next-gen EHR model which it argues could be rolled out within the next 10 years. The idea behind the model is that clinicians and other healthcare professionals would simply take care of patients, with information flowing automatically to all relevant parties, including payers, hospitals, physicians and patients. Its vision seems far less superficial than much of the EHR innovation happy talk we’ve seen in the past.

For example, in this model, an automated physician’s assistant would “listen” to interactions between doctors and patients and analyze what was said. The assistant would then record all relevant information in the physical exam section of the chart, sorting it based on what was said in the room and what verbal cues clinicians provided.
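
Stanford hasn’t published the mechanics of that assistant, but you can picture its final step as routing transcribed utterances into the right sections of the chart. Here’s a deliberately toy sketch of that routing step; the keyword rules are stand-ins for real clinical NLP models, and the transcript is assumed to arrive from an upstream speech-to-text stage:

```python
# Toy router: assign transcribed utterances to chart sections by keyword.
# Real systems would use trained clinical NLP models, not rules like these.
SECTION_KEYWORDS = {
    "history": ["since last week", "started", "worse", "feels"],
    "exam": ["auscultation", "tender", "palpation", "blood pressure"],
    "plan": ["prescribe", "follow up", "order", "refer"],
}

def route_utterance(utterance: str) -> str:
    text = utterance.lower()
    for section, keywords in SECTION_KEYWORDS.items():
        if any(k in text for k in keywords):
            return section
    return "unsorted"

transcript = [
    "The cough started three days ago and got worse at night.",
    "Blood pressure is 128 over 82, lungs clear on auscultation.",
    "I'll order a chest X-ray and we'll follow up on Friday.",
]
for line in transcript:
    print(f"[{route_utterance(line)}] {line}")
```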

Another initiative comes from Penn Medicine, where leaders are working to transform EHRs into more streamlined, interactive tools which make clinical work easier and drive best outcomes. Again, while many hospitals and health centers have talked a good game on this front, Penn seems to be particularly serious about making EHRs valuable. “We are approaching this endeavor as if it were building a new clinical facility, laboratory or training program,” said University of Pennsylvania Health System CEO Ralph Muller in a prepared statement.

Penn hasn’t gone into many specifics as to what its next-gen EHR would look like, but in its recent statement, it provided a few hints. These included the suggestion that they should allow doctors to “subscribe” to patients’ clinical information to get real-time updates when action is required, something along the lines of what social media networks already do with feeds and notifications.
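
Penn hasn’t described the plumbing behind that “subscribe” idea, but FHIR already offers a primitive that fits it: the Subscription resource, which asks a server to notify an endpoint whenever resources matching a query change. A minimal sketch, with the server URL, patient ID and webhook endpoint all hypothetical:

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR R4 server

# Ask the server to POST a notification whenever a new lab result
# (Observation) appears for this patient -- the "subscribe" model.
subscription = {
    "resourceType": "Subscription",
    "status": "requested",
    "reason": "Notify attending physician of new lab results",
    "criteria": "Observation?patient=Patient/example&category=laboratory",
    "channel": {
        "type": "rest-hook",
        "endpoint": "https://notify.example.org/hook",  # hypothetical webhook
        "payload": "application/fhir+json",
    },
}

resp = requests.post(f"{FHIR_BASE}/Subscription", json=subscription,
                     headers={"Content-Type": "application/fhir+json"})
print(resp.status_code)
```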

Of course, there’s a huge gap between visions and practical EHR limitations. And there are obviously a lot of ways in which the same general goals can be met. For example, another way to talk about the same issues comes from HIT superstar Dr. John Halamka, chief information officer of Beth Israel Deaconess Medical Center and CIO and dean for technology at Harvard Medical School.

In a blog post looking at the shift to EHR 2.0, Halamka argues for the development of a new Care Management Medical Record which enrolls patients in protocols based on conditions then ensures that they get recommended services. He also argues that EHRs should be seen as flexible platforms upon which entrepreneurs can create add-on functionality, something like apps that rest on top of mobile operating systems.

My gut feeling is that, all told, we are seeing real progress here, and that, particularly given the emergence of more mature AI tools, a more flexible EHR demanding far less physician involvement will come together. However, it’s worth noting that the Stanford researchers are looking at a 10-year timeline. To me, it seems unlikely that things will move along any faster than that.

Hospitals Stumble When Asked To Share Medical Records With Patients

Posted on October 19, 2018 | Written by Anne Zieger

By this point, few would dispute that patients are unlikely to be engaged with their medical care if they don’t have free, unfettered access to their medical records. Unfortunately, research continues to suggest that providers are struggling to meet this goal, and from my point of view, shows signs that they don’t take the process all that seriously.

Most recently, a new study found that not only may hospitals be failing to meet state and federal rules on sharing medical records with patients, they may not even be communicating their own policies consistently. As a patient with complex medical needs, I found this troubling, though sadly, not surprising given my past experiences.

The study, which appeared in JAMA Network Open, looked at the way in which 83 US hospitals handled medical record requests by patients. The research team conducted the requests between August 1 and December 7, 2017, tracking what medical information was made requestable, what formats of release were available, costs to receive the information and request processing times. Researchers reviewed hospital processes using medical record release authorization forms and telephone calls with medical records departments.

After analyzing their data, the researchers concluded that at least some hospitals weren’t complying with regulations on medical record request processing times. Of the 81 hospitals that provided mean record-release times, seven reported ranges extending beyond their state’s requirements even before applying the single 30-day extension granted by HIPAA.

In addition, they found that patients got different information about the records request process when they filled out forms than when they communicated directly with medical records departments. For example, just 53% of hospitals gave patients the option to request the entire medical record on their request forms, while when contacted by phone, all of the hospitals said they were able to release an entire medical record to patients.

Perhaps offering some insight into why patient portals aren’t as muscular as they could be, just 25% of hospital medical records departments said via phone that they were able to release records to online patient portals, and less than half (40%) shared this detail on their forms.

Another issue highlighted by the study was that the hospitals studied seemed vague about the costs patients faced in receiving records. Apparently, 22% of hospitals disclosed that they would charge patients for such records but did not specify the cost, and 43% didn’t mention that there would be a fee at all.

Having inadvertently walked into a cost buzzsaw once or twice in my pre-HIT days, I can’t stress enough how disheartening unexpected records fees can be for patients. After all, in some cases patients won’t get the care they need if they don’t have up-to-date records, and until we have a truly universal interoperability scheme in place, patients are on the hook to make this happen.

Getting the records could be pricey. All but one of the hospitals were able to quote the cost of receiving records on paper, at prices that began at zero but ran as high as $541.50 for a 200-page record. On the digital side, 59% of the hospitals stated a cost of release above the federally recommended $6.50 flat fee for electronically maintained records.

As the study authors note, it would be helpful if federal regulators kept their eye on issues related to patient medical record access, which is more costly, confusing and time-consuming than it might appear at first glance. In the meantime, hospitals might consider doing a self-audit to see whether they’re offering patients consistent information about the process when we ask for badly-needed medical data.