
Will Chatbots Be Embedded In Health IT Infrastructure Within Five Years?

Posted on December 10, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

Brace yourself: The chatbots are coming. In fact, healthcare chatbots could become an important part of healthcare organizations’ IT infrastructure, according to research released by a market analyst firm. I have my doubts but do read on and see what you think.

Juniper Research is predicting that AI-powered chatbots will become the initial point of contact with healthcare providers for many consumers. As far as I know, this approach is not widespread in the US at present, though many vendors are developing tools that providers could deploy, and we’ve seen some success from companies like SimplifiMed as well as big tech companies like Microsoft that are enabling chatbots.

However, Juniper sees things changing rapidly over the next five years. It predicts that the number of chatbot interactions will shoot up at an average annual growth rate of 167%, from an estimated 21 million per year in 2018 to 2.8 billion per year in 2023. By that point, healthcare will represent 10% of all chatbot interactions across major verticals, Juniper says.
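For readers who want to sanity-check the firm’s numbers, the implied compound annual growth rate can be verified in a few lines:

```python
# Verify the implied compound annual growth rate (CAGR) of chatbot
# interactions: from an estimated 21 million/year in 2018 to
# 2.8 billion/year in 2023, i.e. over five years.
start, end, years = 21_000_000, 2_800_000_000, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 166%, in line with the reported 167%
```

The math checks out, give or take a rounding point.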

According to the market research firm, there are a number of reasons chatbot use in healthcare will grow so rapidly, including consumers’ growing comfort with using chatbots to discuss their care. Juniper also expects to see healthcare providers routinely use chatbots for customer experience management, though again, I’ve seen little evidence that this is happening just yet.

The massive growth in patient-chatbot interactions will also be fueled by a rise in the sophistication of conversational AI platforms, a leap so dramatic that consumers will handle a growing percentage of their healthcare business entirely via chatbot, the firm says. This, in turn, will free up medical staff time, saving countries’ healthcare systems around $3.7 billion by 2023.  This would prove to be a relatively modest savings for the giant US healthcare system, but it could be quite meaningful for a smaller country.

As healthcare organizations adopt chatbot platforms, their chief goal will be to see that information collected by chatbots is transferred to EHRs and other important applications, the report says. To make this happen, these organizations will have to make sure to integrate chatbot platforms with both clinical and line-of-business applications. (Vendors like PatientSphere already offer independent platforms designed to address such issues.)

All very interesting, no? Definitely. I share Juniper’s optimistic view of the chatbot’s role in healthcare delivery and customer service, and have little doubt that even today’s relatively primitive bots are capable of handling many routine transactions.

That being said, I’m thinking it will be more like 10 years before chatbots are used widely by providers. If what I’ve seen is any indication, it will probably take that long before conversational AI can truly hold a conversation. If we hope to use AI-based chatbots routinely at the front end of important processes, they’ll just have to be smarter.

Next Steps In Making Healthcare AI Practical

Posted on November 30, 2018 | Written By


In recent times, AI has joined blockchain on the list of technologies that just sort of crept into the health IT toolkit.

After all, blockchain was borne out of the development of bitcoin, and not so long ago the idea that it was good for anything else wasn’t out there. I doubt its creators ever contemplated using it for secure medical data exchange, though the notion seems obvious in retrospect.

And until fairly recently, artificial intelligence was largely a plaything for advanced computing researchers. I’m sure some AI researchers gave thought to cyborg doctors that could diagnose patients while beating them at chess and serving them lunch, but few practical applications existed.

Today, blockchain is at the core of countless health IT initiatives, many by vendors but an increasing number by providers as well. Healthcare AI projects, for their part, seem likely to represent the next wave of “new stuff” adoption. It’s at the stage blockchain was a year or two ago.

Before AI becomes more widely adopted in healthcare circles, though, the industry needs to tackle some practical issues with AI, and the list of “to-dos” keeps expanding. Only a few months ago, I wrote an item citing a few obstacles to healthcare AI deployment, which included:

  • The need to make sure clinicians understand how the AI draws its conclusions
  • Integrating AI applications with existing clinical workflow
  • Selecting, cleaning and normalizing healthcare data used to “train” the AI

Since then, other tough challenges to the use of healthcare AI have emerged as healthcare leaders think things over, such as:

Agreeing on best practices

Sure, hospitals would be interested in rolling out machine learning if they could, say, decrease the length of hospital stays for pneumonia and save millions. The thing is, how would they get going? At present, there’s no real playbook as to how these kinds of applications should be conceptualized, developed and maintained. Until healthcare leaders reach a consensus position on how healthcare AI projects should generally work, such projects may be too risky and/or prohibitively expensive for providers to consider.

Identifying use cases

As an editor, I see a few interesting healthcare AI case studies trickle into my email inbox every week, which keeps me intrigued. The thing is, if I were a healthcare CIO this probably wouldn’t be enough information to help me decide whether it’s time to take up the healthcare AI torch. Until we’ve identified some solid use cases for healthcare AI, almost anything providers do with it is likely to be highly experimental. Yes, there are some organizations that can afford to research new tech but many just don’t have the staff or resources to invest. Until some well-documented standard use cases for healthcare AI emerge, they’re likely to hang back.

The healthcare AI discussion is clearly at a relatively early stage, and more obstacles are likely to show up as providers grapple with the technology. In the meantime, getting these handled is certainly enough of a challenge.

AI May Be Less Skilled At Analyzing Images From Outside Organizations

Posted on November 26, 2018 | Written By


Using AI technologies to analyze medical images is looking more and more promising by the day. However, new research suggests that when AI tools have to cope with images from multiple health systems, they have a harder time than when they stick to just one.

According to a new study published in PLOS Medicine, interest is growing in analyzing medical images using convolutional neural networks, a class of deep neural networks often dedicated to this purpose. To date, CNNs have made progress in analyzing X-rays to diagnose disease, but it’s not clear whether CNNs trained on X-rays from one hospital or system will work just as well in other hospitals and health systems.

To look into this issue, the authors trained pneumonia screening CNNs on 158,323 chest X-rays, including 112,120 X-rays from the NIH Clinical Center, 42,396 X-rays from Mount Sinai Hospital and 3,807 images from the Indiana University Network for Patient Care.

In their analysis, the researchers examined the effect of pooling data from sites with a different prevalence of pneumonia. One of their key findings was that when two training data sites had the same pneumonia prevalence, the CNNs performed consistently, but when a 10-fold difference in pneumonia rates was introduced between sites, their performance diverged. In that instance, the CNN performed better on internal data than on data supplied by an external organization.
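The divergence the researchers saw is consistent with a well-known statistical issue: a classifier’s predicted probabilities embed the disease prevalence (prior) of its training data. The standard odds-ratio correction illustrates how much a 10-fold prevalence shift moves a prediction. This is a simplified sketch of the general phenomenon, not the study’s method, and the prevalence numbers below are made up for illustration:

```python
# Illustration of why training-site prevalence matters: a classifier's
# predicted probability embeds the prior from its training data. The
# standard odds-ratio correction re-weights a prediction for a new
# site's prevalence. (A simplified sketch, not the study's method.)
def adjust_for_prevalence(p, train_prev, target_prev):
    """Rescale probability p from a site with train_prev to target_prev."""
    odds = (p / (1 - p)) * ((target_prev / (1 - target_prev)) /
                            (train_prev / (1 - train_prev)))
    return odds / (1 + odds)

# A model trained where 30% of X-rays show pneumonia outputs p = 0.5;
# at a site with 3% prevalence (a 10-fold difference), the corrected
# probability is far lower.
p = adjust_for_prevalence(0.5, train_prev=0.30, target_prev=0.03)
print(f"{p:.2f}")  # about 0.07
```

In other words, an uncorrected model carries its home hospital’s base rates with it wherever it goes.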

The research team found that in 3 out of 5 natural comparisons, the CNNs’ performance on chest X-rays from outside hospitals was significantly lower than on held-out X-rays from the original hospital system. This may point to future problems when health systems try to use AI for imaging on partners’ data. This is not great to learn given the benefits AI-supported diagnosis might offer across, say, an ACO.

On the other hand, it’s worth noting that the CNNs were able to determine which organization originally created the images at an extremely high rate of accuracy and to calibrate their diagnostic predictions accordingly. In other words, it sounds as though over time, CNNs might be able to adjust to different sets of data on the fly. (The researchers didn’t dig into how this might affect their computing performance.)

Of course, it’s possible that we’ll develop a method for normalizing imaging data that works in the age of AI, in which case adjusting for different data attributes may not be necessary. However, we’re at the very early stages of training AIs for image sharing, so it’s anyone’s guess as to what form that normalization will take.

Hospitals Taking Next-Gen EHR Development Seriously

Posted on October 22, 2018 | Written By


Physicians have never been terribly happy with EHRs, most of which have done little to meet the lofty clinical goals set forth by healthcare leaders. Despite the fact that EHRs have been a fact of life in medicine for nearly a decade, health IT leaders don’t seem to have figured out how to build a significantly better one — or even what “better” means.

While there has been the occasional project leveraging big data from EHRs to improve care processes, little has been done that makes it simple for physicians to benefit from these insights on a day-to-day basis. Not only that, while EHRs may have become more usable over time, they still don’t present patient data in an intuitive manner.

However, hospital leaders may be developing a more focused idea of how a next-gen EHR should work, at least if recent efforts by Stanford Medicine and Penn Medicine are any indication.

For example, Stanford has developed a next-gen EHR model which it argues could be rolled out within the next 10 years. The idea behind the model is that clinicians and other healthcare professionals would simply take care of patients, with information flowing automatically to all relevant parties, including payers, hospitals, physicians and patients. Its vision seems far less superficial than much of the EHR innovation happy talk we’ve seen in the past.

For example, in this model, an automated physician’s assistant would “listen” to interactions between doctors and patients and analyze what was said. The assistant would then record all relevant information in the physical exam section of the chart, sorting it based on what was said in the room and what verbal cues clinicians provided.

Another initiative comes from Penn Medicine, where leaders are working to transform EHRs into more streamlined, interactive tools which make clinical work easier and drive best outcomes. Again, while many hospitals and health centers have talked a good game on this front, Penn seems to be particularly serious about making EHRs valuable. “We are approaching this endeavor as if it were building a new clinical facility, laboratory or training program,” said University of Pennsylvania Health System CEO Ralph Muller in a prepared statement.

Penn hasn’t gone into many specifics as to what its next-gen EHR would look like, but in its recent statement, it provided a few hints. These included the suggestion that they should allow doctors to “subscribe” to patients’ clinical information to get real-time updates when action is required, something along the lines of what social media networks already do with feeds and notifications.
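The “subscribe” idea maps naturally onto a classic publish/subscribe pattern. Here is a minimal sketch of how such a feed might behave; all names and the notification rule are hypothetical illustrations, not Penn Medicine’s actual design:

```python
# Minimal publish/subscribe sketch of the "subscribe to a patient" idea:
# clinicians register interest in a patient's chart and are notified
# when an update is flagged as requiring action. All names here are
# hypothetical, not any vendor's or health system's actual design.
from collections import defaultdict

class ChartFeed:
    def __init__(self):
        self._subscribers = defaultdict(list)  # patient_id -> list of callbacks

    def subscribe(self, patient_id, callback):
        self._subscribers[patient_id].append(callback)

    def publish(self, patient_id, update, action_required=False):
        # Only push real-time notifications when action is required,
        # mirroring the "updates when action is required" idea above.
        if action_required:
            for notify in self._subscribers[patient_id]:
                notify(update)

feed = ChartFeed()
alerts = []
feed.subscribe("pt-001", alerts.append)
feed.publish("pt-001", "Routine vitals recorded")                        # no alert
feed.publish("pt-001", "Critical lab result posted", action_required=True)
print(alerts)  # ['Critical lab result posted']
```

The design choice worth noting is the filter on `action_required`: a raw feed of every chart event would recreate the alert fatigue problem EHRs already have.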

Of course, there’s a huge gap between visions and practical EHR limitations. And there are obviously many ways in which the same general goals can be met. For example, another way to talk about the same issues comes from HIT superstar Dr. John Halamka, chief information officer of Beth Israel Deaconess Medical Center and CIO and dean for technology at Harvard Medical School.

In a blog post looking at the shift to EHR 2.0, Halamka argues for the development of a new Care Management Medical Record which enrolls patients in protocols based on conditions then ensures that they get recommended services. He also argues that EHRs should be seen as flexible platforms upon which entrepreneurs can create add-on functionality, something like apps that rest on top of mobile operating systems.

My gut feeling is that, all told, we are seeing real progress here, and that particularly given the emergence of more mature AI tools, a more flexible EHR demanding far less physician involvement will come together. However, it’s worth noting that the Stanford researchers are looking at a 10-year timeline. To me, it seems unlikely that things will move along any faster than that.

Healthcare AI Maturity Index

Posted on October 16, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network, which currently consists of 10 blogs containing over 8,000 articles, with John having written over 4,000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading Health IT career board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter at @techguy and @ehrandhit and on LinkedIn.

Everywhere you turn in healthcare you’re seeing AI. I know some people would argue with how many companies define AI. In fact, there’s no doubt that the AI label has started to be applied to everything from simple analytics to machine learning to neural networks to true artificial intelligence. I don’t personally get worked up over the definitions of various terms, since I think all of these things can and will benefit healthcare. Regardless of definition, what’s clear is that this broad conception of AI is going to have a big impact on healthcare.

In a recent tweet from David Chou, he shared a really interesting look at AI adoption in healthcare as compared with other industries. The healthcare AI maturity index also looks at where healthcare’s AI trajectory is headed in the next 3 years. Check out the details in the chart below:

There are a couple of things that concern me about this chart. First, it shows that healthcare is behind other industries when it comes to AI adoption. That’s not too surprising, since healthcare is usually pretty risk averse with new technology. “First do no harm” is an important part of healthcare culture that scares many away from technology like AI. The only question is whether this culture will prevent helpful new AI technologies from coming to healthcare.

Many people would look at the chart and see that it projects a lot of growth in healthcare AI investment. That’s a good thing, but it also fits a common pattern in healthcare: lots of anticipation and hope that never fully materializes. Will we see the same in healthcare AI?

What’s been your experience with AI in healthcare? Where do you see AI having the most impact right now? What companies are doing AI work that’s going to impact your hospital or health system? Share your thoughts in the comments or on social media with @healthcarescene.

AI Project Set To Offer Hospital $20 Million In Savings Over Three Years

Posted on October 4, 2018 | Written By


While they have great potential, healthcare AI technologies are still at the exploration stage in most healthcare organizations. However, here and there AI is already making a concrete difference for hospitals, and the following is one example.

According to an article in Internet Health Management, one community hospital located in St. Augustine, Florida expects to save $20 million over the next three years thanks to its AI investments.

Not long ago, 335-bed Flagler Hospital kicked off a $75,000 pilot project dedicated to improving the treatment of pneumonia, sepsis and other high mortality conditions, building on AI tools from vendor Ayasdi Inc.

Michael Sanders, a physician who serves as chief medical informatics officer for the hospital, told the publication that the idea was to “let the data guide us.” “Our ability to rapidly construct clinical pathways based on our own data and measure adherence by our staff to those standards provides us with the opportunity to deliver better care at a lower cost to our patients,” Sanders told IHM.

The pilot, which took place over just nine weeks, reviewed records dating back five years. Flagler’s IT team used Ayasdi’s tools to analyze data held in the hospital’s Allscripts EHR, including patient records, billing, and administrative data. Analysts looked at data on patterns of care, lengths of stay and patient outcomes, including the types of medications doctors were prescribing and when they were ordering CT scans.

After analyzing the data, Sanders and his colleagues used the AI tools to build guidelines into the Allscripts EHR, which Sanders hoped would make it easy for physicians to use them.

The project generated some impressive results. For example, the publication reported, pathways for pneumonia treatment resulted in $1,336 in administrative savings for a typical hospital stay and cut down lengths of stay by two days. All told, the new approach cut administrative costs for pneumonia treatment by $800,000.
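Those two figures let us back into the approximate scale of the pilot. This is my own back-of-the-envelope arithmetic, not a number reported in the article:

```python
# Back-of-the-envelope check on the reported figures: $1,336 saved per
# typical pneumonia stay and roughly $800,000 in total administrative
# savings imply on the order of 600 stays. (My own arithmetic, not a
# figure reported by the article.)
savings_per_stay = 1_336
total_savings = 800_000

implied_stays = total_savings / savings_per_stay
print(f"Implied pneumonia stays: {implied_stays:.0f}")  # about 599
```

That is a plausible volume for a 335-bed hospital analyzing five years of records, which suggests the reported numbers hang together.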

Now, Flagler plans to create pathways to improve care for sepsis, substance abuse, heart attacks, and other heart conditions, gastrointestinal disorders and chronic conditions such as diabetes.

Given the success of the project, the hospital expects to expand the scope of its future efforts. At the outset of the project, Sanders had expected to use AI tools to take on 12 conditions, but given the initial success with rolling out AI-based pathways, he now plans to take on one condition each month, with an eye on meeting a goal of generating $20 million in savings over the next few years, he told IHM.

Flagler is not the first, nor will it be the last, hospital to streamline care using AI. For another example, check out the efforts underway at Montefiore Health, which seems to be transforming its entire data infrastructure to support AI-based analytics efforts.

Three Hot Healthcare AI Categories

Posted on September 26, 2018 | Written By


The way people talk about AI, one might be forgiven for thinking that it can achieve magical results overnight. Unfortunately, the reality is that it’s much easier to talk about AI applications than to execute them.

However, there are a few categories of AI development that seem to be emerging as possible game-changers for the healthcare business. Here are three that have caught my eye.

Radiology: In theory, we’ve been able to analyze digital radiology images for quite some time. The emergence of AI tools has supercharged the discussion, though. The growing list of vendors competing for business in this nascent market is real.

Examples include Aidence, whose Veye Chest automates analysis and reporting of pulmonary nodules; Aidoc, which finds acute abnormalities in imaging and adds them to the radiologist’s worklist; and CuraCloud, which helps with medical imaging analysis and NLP for medical data, among others. (For a more comprehensive list, check out this Medium article.)

I’d be remiss if I didn’t also mention a partnership between Facebook and the NYU School of Medicine focused on speeding up MRI imaging dramatically.

Vendors and industry talking heads have been assuring radiologists that such tools will reduce their workload while leaving diagnostic and clinical decisions in their hands. So far, it seems like they’re telling the truth.

Physician documentation: The notion of using AI to speed up the physician documentation process is very hot right now, and for good reason. The advent of EHRs has added new documentation work to physicians’ already-full plate, and some are burning out. Luckily, new AI applications may be able to de-escalate this crisis.

For example, consider applications like NoteSwift’s Samantha, an EHR virtual assistant which structures transcription content and inputs it directly into the EHR. There’s also Robin, which “listens” to discussions in clinic rooms and drafts clinical documentation using conversational speech recognition. After review, Robin also submits final documentation directly to an EMR.

Other emerging companies offering AI-driven documentation apps include Sophris Health, Saykara, and Suki, all of which offer some type of virtual assistant or medical scribe function. Big players like Nuance and MModal are working in this space as well. If you want to find more vendors – and there’s a ton emerging out there – just Google the terms “virtual physician assistant” or “AI medical scribe.” You’ll be swamped with possibilities.

My takeaway here is that we’re getting steadily closer to a day in which physicians simply approve documentation, click a button and populate the EHR automatically. It’s an exciting possibility.

Medical chatbots: This category is perhaps a little less mature than the previous two, but a lot is going on here. While most deployments are experimental, it’s beginning to look like chatbots will be able to do everything from triage to care management, individual patient screenings and patient education. Microsoft recently highlighted how companies can easily create healthcare chatbots on Microsoft Azure. That should open up a variety of use cases.
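To make the category concrete, here is a toy sketch of the simplest tier of chatbot behavior: mapping patient phrasing to a triage suggestion. Real products use conversational AI rather than keyword rules, and every rule and reply below is invented for illustration:

```python
# A toy rule-based triage bot illustrating the simplest tier of what
# chatbot platforms do: map patient phrasing to a triage suggestion.
# Real products use conversational AI, not keyword rules; all rules
# and replies here are invented for illustration.
TRIAGE_RULES = [
    ({"chest pain", "shortness of breath"}, "Call emergency services now."),
    ({"fever", "cough"}, "Book a same-day appointment."),
    ({"refill", "prescription"}, "Routing you to the pharmacy team."),
]
FALLBACK = "I'm not sure -- connecting you with a nurse."

def triage(message):
    text = message.lower()
    for keywords, reply in TRIAGE_RULES:
        if any(k in text for k in keywords):  # any keyword triggers the rule
            return reply
    return FALLBACK

print(triage("I have a bad cough and a fever"))  # Book a same-day appointment.
print(triage("my knee hurts"))                   # falls back to a human
```

Note the fallback to a human: even in a sketch, the safe default for an uncertain medical bot is escalation, not a guess.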

The hottest category in medical chatbots seems to be preliminary diagnosis. Examples include Sensely, whose virtual medical assistant avatar uses AI to suggest diagnoses based on patient symptoms, along with competitors like Babylon Health, another chatbot offering patient screenings and tentative diagnoses, and Ada, whose smartphone app offers similar options.

Other medical chatbots are virtual clinicians, such as Florence, which reminds patients to take their medication and tracks key patient health metrics like body weight and mood, while still others focus on specific medical issues. This category includes Cancer Chatbot, a resource for cancer patients, caregivers, friends and family, and Safedrugbot, which helps doctors who need data about the use of drugs during breastfeeding.

While many of these apps are in beta or still sorting out their role, they’re becoming more capable by the day and should soon be able to provide patients with meaningful medical advice. They may also be capable of helping ACOs and health systems manage entire populations by digging into patient records, digesting patient histories and using this data to monitor conditions and send specialized care reminders.

This list is far from comprehensive. These are just a few categories of AI-driven healthcare applications poised to foster big changes in healthcare – especially the nature of the health IT infrastructure. There’s a great deal more to learn about what works. Still, we’re just steps away from seeing AI-based technologies hit the industry hard. In the meantime, it might be smart to consider taking some of these for a test run.

Montefiore Health Makes Big AI Play

Posted on September 24, 2018 | Written By


I’ve been doing a lot of research on healthcare AI applications lately. Not surprisingly, while people find the abstract issues involved to be intriguing, most would prefer to hear news of real-life projects, so I’ve been on the lookout for good examples.

One interesting case study, which appeared recently in Health IT Analytics, comes from Montefiore Health System, which has been building up its AI capabilities. Over the past three years, it has created an AI framework leveraging a data lake, infrastructure upgrades and predictive analytics algorithms. The AI is focused on addressing expensive, dangerous health issues, HIA reports.

“We have created a system that harvests every piece of data that we can possibly find, from our own EMRs and devices to patient-generated data to socio-economic data from the community,” said Parsa Mirhaji, MD, PhD, director of the Center for Health Data Innovations at Montefiore and the Albert Einstein College of Medicine, who spoke with the publication.

Back in 2015, Mirhaji kicked off a project bringing semantic data lake technology to his organization. The first pilot using the technology was designed to find patients at risk of death or intubation within 48 hours. Now, clinicians can also see red flags for admitted patients with increased risk of mortality 3 to 5 days in advance.

In 2017, the health system also rolled out advanced sepsis detection tools and a respiratory failure detection algorithm called APPROVE, which identifies patients at a raised risk of prolonged ventilation up to 48 hours before onset, HIA reported.

The net result of these efforts was dubbed PALM, the Patient-centered Analytical Learning Machine. PALM “represents a very new way of interacting with data in healthcare,” Mirhaji told HIA.

What makes PALM special is that it speeds up the process of collecting, curating, cleaning and accessing the metadata which must be prepared before the data can be used to train AI models. In most cases, the process of collecting data for AI use is largely manual, but PALM automates it, Mirhaji told the publication.

This is because the data lake and its graph repositories can find relationships between individual data elements on the fly. This automation lets Montefiore cut way down on the labor needed to get these results. Mirhaji noted that ordinarily, it would take a team of data analysts, database administrators and designers to achieve this result.

PALM also benefits from a souped-up hardware architecture, which Montefiore created with help from Intel and other technology partners. The improved architecture includes the capacity for more system memory and processing power.

The final step in optimizing the PALM system was to integrate it into the health system’s clinical workflow. This seems to have been the hardest step. “I will say right away that I don’t think we have completely solved the problem of integrating analytics seamlessly into the workflow,” Mirhaji admitted to HIA.

Problems We Need To Address Before Healthcare AI Becomes A Thing

Posted on September 7, 2018 | Written By


Just about everybody who’s anybody in health IT is paying close attention to the emergence of healthcare AI, and the hype cycle is in full swing. It’d be easier to tell you what proposals I haven’t seen for healthcare AI use than those I have.

Of course, just because a technology is hot and people are going crazy over it doesn’t mean they’re wrong about its potential. Enthusiasm doesn’t equal irrational exuberance. That being said, it doesn’t hurt to check in on the realities of healthcare AI adoption. Here are some issues I see surfacing over and over again.

The black box

It’s hard to dispute that healthcare AI can make good “decisions” when presented with the right data in the right volume. In fact, it can make them at lightning speed, taking details into account which might not have seemed important to human eyes. And on a high level, that’s exactly what it’s supposed to do.

The problem with this, though, is that this process may end up bypassing physicians. As things stand, healthcare AI technology is seldom designed to show how it reached its conclusions, and it may be due to completely unexpected factors. If clinical teams want to know how the artificial intelligence engine drew a conclusion, they may have to ask their IT department to dig into the system and find out. Such a lack of transparency won’t work over the long term.
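One partial answer to the black box is to surface per-feature contributions alongside each prediction. For a simple linear or logistic model the breakdown is exact; for deep models, approximation methods play a similar role. The weights and feature names below are made up purely for illustration:

```python
# Sketch of prediction transparency: decompose a risk score into
# per-feature contributions a clinician can inspect. Exact for a
# logistic model; deep models need approximations (e.g. SHAP-style
# methods). All weights and feature names are invented examples.
import math

WEIGHTS = {"age_over_65": 0.8, "wbc_elevated": 1.1, "on_antibiotics": -0.6}
BIAS = -1.5

def explain(features):
    # contribution of each feature = weight * value
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-score))  # logistic link
    return risk, contributions

risk, contribs = explain({"age_over_65": 1, "wbc_elevated": 1, "on_antibiotics": 0})
print(f"risk={risk:.2f}")
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.1f}")  # largest drivers first
```

Showing clinicians the ranked drivers of a prediction, rather than just the score, is exactly the kind of transparency the systems described above tend to lack.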

Workflow

Many healthcare organizations have tweaked their EHR workflow into near-perfect shape over time. Clinicians are largely satisfied with work patterns and patient throughput is reasonable. Documentation processes seem to be in shape. Does it make sense to throw an AI monkeywrench into the mix? The answer definitely isn’t an unqualified yes.

In some situations, it may make sense for a provider to run a limited test of AI technology aimed at solving a specific problem, such as assisting radiologists with breast cancer scan interpretations. Taking this approach may create less workflow disruption. However, even a smaller test may call for a big investment of time and effort, as there aren’t exactly a ton of best practices available yet for optimizing AI implementations, so workflow adjustments might not get enough attention. This is no small concern.

Data

Before an AI can do anything, it needs to chew on a lot of relevant clinical data. In theory, this shouldn’t be an issue, as most organizations have all of the digital data they need. If you need millions of care datapoints or several thousand images, they’re likely to be available. The thing is, they may not be as usable as one might hope.

While healthcare providers may have an embarrassment of data on hand, much of it is difficult to filter and mine. For example, while researchers and some isolated providers are using natural language processing to dig up useful information, critics point out that until more healthcare info is indexed and tagged there’s only so much it can do. It may take a new generation of data processing and indexing technology before we have the right data to feed an AI.
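To make the indexing-and-tagging point concrete, here’s a deliberately minimal sketch of what turning free-text notes into something searchable looks like. The vocabulary and notes are invented examples, not a real clinical terminology like SNOMED or UMLS:

```python
# Minimal sketch of the tagging step critics say is missing: indexing
# free-text notes against a small concept vocabulary so they become mineable.
import re
from collections import defaultdict

# Invented two-concept vocabulary, purely for illustration
VOCAB = {
    "hypertension": ["hypertension", "high blood pressure", "htn"],
    "diabetes": ["diabetes", "t2dm", "elevated a1c"],
}

def tag_note(note: str) -> set[str]:
    """Return the concept tags whose synonyms appear in the note."""
    text = note.lower()
    return {concept for concept, synonyms in VOCAB.items()
            if any(re.search(r"\b" + re.escape(s) + r"\b", text)
                   for s in synonyms)}

def build_index(notes: dict[str, str]) -> dict[str, set[str]]:
    """Invert note tags into a concept -> note-id index."""
    index = defaultdict(set)
    for note_id, note in notes.items():
        for concept in tag_note(note):
            index[concept].add(note_id)
    return index

notes = {
    "n1": "Patient reports high blood pressure, denies chest pain.",
    "n2": "Elevated A1c consistent with T2DM; HTN well controlled.",
}
print(build_index(notes))
```

Even this toy version shows why the work is hard: real notes use abbreviations, misspellings, and negations (“denies chest pain”) that simple keyword matching can’t handle, which is exactly where heavier NLP and better-curated data come in.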

These are just a few practical issues likely to arise as providers begin to use AI technologies; I’m sure there are many others you might be able to name. While I have little doubt we can work our way through such issues, they aren’t trivial, and it could take a while before we have standardized approaches in place for addressing them. In the meantime, it’s probably a good idea to experiment with AI projects and prepare for the day when it becomes more practical.

Facebook Partners With Hospital On AI-based MRI Project

Posted on August 23, 2018 | Written By Anne Zieger


I’ve got to say I’m intrigued by the latest from Facebook, a company which has recently been outed as making questionable choices about data privacy. Despite the kerfuffle, or perhaps because of it, Facebook is investing in some face-saving data projects.

Most recently, Facebook has announced that it will collaborate with the NYU School of Medicine to see if it’s possible to speed up MRI scans.  The partners hope to make MRI scans 10 times faster using AI technology.

The NYU professors, who are part of the Center for Advanced Imaging Innovation and Research, will be working with the Facebook Artificial Intelligence Research group. Facebook won’t be bringing any of its data to the table, but NYU will share its imaging dataset, which consists of 10,000 clinical cases and roughly 3 million images of the knee, brain and liver. All of the imaging data will be anonymized.

In taking up this effort, the researchers are addressing a tough problem. As things stand, MRI scanners work by gathering raw numerical data and turning that data into cross-sectional images of internal body structures. As with any other computing platform, crunching those numbers takes time, and the larger the dataset to be gathered, the longer the scan takes.
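The raw-numbers-to-image step described above can be sketched in a few lines. MRI scanners sample the frequency domain (so-called k-space), and the image is recovered with an inverse Fourier transform. This is a toy illustration with an invented phantom; real reconstruction involves coil combination and far more processing:

```python
# Toy sketch: MRI scanners record frequency-domain samples ("k-space"),
# and reconstruction turns those raw numbers back into an image.
import numpy as np

# Pretend "anatomy": a simple 2-D square phantom
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# What the scanner effectively records: frequency-domain samples
kspace = np.fft.fft2(image)

# Reconstruction: inverse FFT back to image space
recon = np.abs(np.fft.ifft2(kspace))

print(np.allclose(recon, image))  # → True
```

The key point for scan time is that every row of k-space must be physically acquired, so the more of it you collect, the longer the patient spends in the scanner.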

Unfortunately, long scan times can have clinical consequences. While some patients can cope with being in the scanner for extended periods, children, those with claustrophobia and others for whom lying down is painful might have trouble finishing the scanning session.

But if MRI scanning times can be minimized, more patients might be candidates for such scans. Not only that, physicians may be able to use MRI scans in place of X-ray and CT scans, both of which generate potentially harmful ionizing radiation.

Researchers hope to speed up the scanning process by modifying it using AI. They believe it may be possible to capture less data, speeding up the process substantially, while preserving or even enhancing the rich content gathered by an MRI machine. To do this, they will train artificial neural networks to recognize the underlying structure of the images and fill in visual information left out of the faster scanning process.
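Here’s a hedged sketch of why “capturing less data” is the hard part: if you keep only a fraction of k-space and zero-fill the rest, the naive reconstruction degrades badly. This shows only the degradation, using the same kind of invented phantom as above; the researchers’ actual contribution would be training a network to restore the missing information:

```python
# Sketch of an undersampled scan: keep every 4th k-space row (a "4x faster"
# acquisition), zero-fill the rest, and compare naive reconstructions.
import numpy as np

image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0
kspace = np.fft.fft2(image)

# Undersampling mask: retain one row in four
mask = np.zeros_like(kspace)
mask[::4, :] = 1.0

full = np.abs(np.fft.ifft2(kspace))          # fully sampled reconstruction
under = np.abs(np.fft.ifft2(kspace * mask))  # zero-filled undersampled recon

err_full = np.abs(full - image).max()
err_under = np.abs(under - image).max()
print(err_full, err_under)  # full recon is exact; undersampled one is not
```

The undersampled reconstruction shows the aliasing artifacts that come from skipping k-space rows, and that gap between `under` and `image` is precisely what the neural networks in this project would be trained to close.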

The NYU research team admits that meeting its goal will be very difficult. These neural networks would have to generate absolutely accurate images, and it’s not yet clear whether that’s achievable. However, if the researchers can reconstruct high-value images in a new way, their work could have an impact on medicine as a whole.