Hospital EMR and EHR Newsletter

Health Orgs Were In Talks To Collect SDOH Data From Facebook

Posted on April 9, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

These days, virtually everyone in healthcare has concluded that integrating social determinants of health data with existing patient health information can improve care outcomes. However, identifying and collecting useful, appropriately formatted SDOH information can be a very difficult task. After all, in most cases it’s not just lying around somewhere ripe for picking.

Recently, however, Facebook began making the rounds with a proposal that might address the problem. While the research initiative has been put on hold in light of recent controversy over Facebook’s privacy practices, my guess is that the healthcare players involved will be eager to resume talks if the social media giant manages to calm the waters.

According to CNBC, Facebook was talking to healthcare organizations such as Stanford Medical School and the American College of Cardiology, along with several other hospitals, about signing a data-sharing agreement. Under the terms of the agreement, the healthcare organizations would share anonymized patient data, which Facebook planned to match up with user data from its platform.

Facebook’s proposal will sound familiar to readers of this site. It suggested combining what a health system knows about its patients, such as their age, medication list and hospital admission history, with Facebook-available data such as the user’s marital status, primary language and level of community involvement.

The idea would then be to study, with an initial focus on cardiovascular health, whether this combined data could improve patient care, something its prospective partners seem to think possible. The CNBC story included a gushing statement from American College of Cardiology interim CEO Cathleen Gates suggesting that such data sharing could create revolutionary results. According to Gates, the ACC believes that mixing anonymized Facebook data with anonymized ACC data could help greatly in furthering scientific research on how social media can help in preventing and treating heart disease.

As the business site notes, the data would not include personally identifiable information. That being said, Facebook proposed to use hashing to match individuals existing in both data sets. If the project were to have gone forward, Facebook might’ve shared data on roughly 87 million users.
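Facebook hasn't published technical details of its proposed matching, but hashing a shared identifier so that records can be joined on the digest rather than on the raw identifier is a standard technique. Here's a minimal sketch; the email address, field names, and records are all hypothetical:

```python
import hashlib

def match_key(identifier: str) -> str:
    """Derive a deterministic join key without exposing the identifier."""
    normalized = identifier.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Each party hashes its own copy of the shared identifier locally.
hospital_records = {match_key("pat@example.com"): {"admissions": 3}}
platform_records = {match_key("pat@example.com"): {"marital_status": "married"}}

# Records that share a digest are presumed to describe the same person.
merged = {
    key: {**hospital_records[key], **platform_records[key]}
    for key in hospital_records.keys() & platform_records.keys()
}
```

Note that plain hashing like this is vulnerable to dictionary attacks on low-entropy identifiers, which is one reason such arrangements still raise privacy questions.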

Looked at one way, this arrangement could raise serious privacy questions. After all, healthcare organizations should certainly exercise caution when exchanging even anonymized data with any outside organization, and with questions still lingering about how willing Facebook is to lock data down, projects like this become even riskier.

Still, under the right circumstances, Facebook could prove to be an all but ideal source of comprehensive, digitized SDOH data. While now, arguably, might not be the time to move ahead, hospitals should keep this kind of possibility in mind.

Translating from Research to Bedside

Posted on April 2, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’m increasingly interested in how we bridge the gap between research and practice in healthcare. No doubt my increased interest comes from the need to prove the value of data and technology in healthcare.

Remember that when we first started introducing EHR software into healthcare, the main goals were around billing and possibly efficiency. The former has been a success in many respects and the latter has been a pretty big failure. However, the focus was never initially on how to improve care, and the focus on billing has actually had a negative impact on care in ways that most people didn’t expect.

Now we’re seeing healthcare organizations trying to shift EHR models so that they do work to improve care. This has proven to be a challenge and it’s no doubt why many healthcare organizations are going beyond the EHR to make population health happen.

The other problem with moving into the clinical improvement space is that the bar is much higher. No one minds too much if you take risks in billing. That’s why most AI (Artificial Intelligence) is starting there as well. However, when you start dealing with the clinical aspects of healthcare, you have to take a much different approach, one that requires proper research of proposed ideas and methods.

Therein lies the challenge for much of the healthcare IT innovation. There’s a large gap between researchers and the bedside. This was highlighted really well by a researcher who described the challenge of translating research into medicine:

Speaker 3: The current models are not translational. We need more innovation and check out my cool data that does not address the topic.

The moderator was clearly the speaker’s past mentor as extra time was spent introducing this investigator’s novel interpretation of the topic. The introduction slide simply said NO in bold letters and the speaker launched into a TedX style talk on how these models are not translational and it is a waste of time for the Department of Defense or NIH to fund multi-team consortium to develop new relevant models. Remember, it was a panel discussion. This speaker left the panel and walked into the crowd spouting off about how translational research as it is defined would not prove useful and innovation was required to develop new therapies. In addition, replicative studies or lack of replication was moot because one can’t trust how other scientists conduct their science. As an example of innovation, studies demonstrating the effective integration of neuronal progenitor cells into the brain of a mouse model of epilepsy were shared. These studies were not done in a traumatic brain injury model, but a different model entirely. Innovative and published in a well-regarded journal, yes; translational, not likely and only time and additional studies will determine; relevant to the topic, no. Supporters of this young investigator probably called this display brave. There were no answers to be found here, only self-promotion. The presentation was not designed for discussion amongst peers, but was strategically delivered to help the investigator’s career trajectory. The song and dance number did not reflect a dedication to developing new therapies for people following a traumatic brain injury.

A successful Investigator’s Workshop speaker will address the topic using scientific data, but most importantly capture a story for the audience. Ideally, bullet points from learned experience or on which the speaker would like feedback will be shared and will foster discussion amongst the moderator, panelists, and audience members. It is an opportunity for the scientist to improve their approach as well as inform the audience.

This was an important insight to remember as we consider how to incorporate research into healthcare IT. The motivations of researchers are often not aligned with translating their research into practice. Researchers’ focus is often on career promotion, grant dollars, and publications. That’s a real disconnect from what most health IT vendors and healthcare organizations want to achieve.

Health Leaders Go Beyond EHRs To Tackle Value-Based Care

Posted on March 30, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

In the broadest sense, EHRs were built to manage patient populations — but largely one patient at a time. As a result, it’s little wonder that they aren’t offering much support for value-based care as is, as a recent report from Sage Growth Partners suggests.

Sage spoke with 100 healthcare executives to find out what they saw as their value-based care capabilities and obstacles. Participants included leaders from a wide range of entities, including an ACO, several large physician practices and a midsize integrated delivery network.

The overall sense Sage seems to have gotten from its research was that while value-based care contracts are beginning to pay off, health execs are finding it difficult to support these contracts using the EHRs they have in place. While their EHRs can produce quality reports, most don’t offer data aggregation and analytics, risk stratification, care coordination or tools to foster patient and clinician engagement, the report notes.

To get the capabilities they need for value-based contracting, health organizations are layering population health management solutions on top of their EHRs. Though these additional PHM tools may not be fully mature, health executives told Sage that they’re already seeing a return on such investments.

This is not necessarily because these organizations aren’t comfortable with their existing EHR. The Sage study found that 65% of respondents were somewhat or highly unlikely to replace their EHR in the next three years.

However, roughly half of the 70% of providers who have had EHRs for at least three years also have third-party PHM tools in place. Also, 64% of providers said that EHRs haven’t delivered many important value-based contracting tools.

Meanwhile, 60% to 75% of respondents are seeking value-based care solutions outside their EHR platform. And they are liking the results. Forty-six percent of the roughly three-quarters of respondents who were seeing ROI with value-based care felt that their third-party PHM solution was essential to their success.

Despite their concerns, healthcare organizations may not feel compelled to invest in value-based care tools immediately. Right now, just 5% of respondents said that value-based care accounted for over 50% of their revenues, while 62% said that such contracts represented just 0 to 10% of their revenues. Arguably, while the growth in value-based contracting is continuing apace, it may not be at a tipping point just yet.

Still, traditional EHR vendors may need to do a better job of supporting value-based contracting (not that they’re not trying). The situation may change, but in the near term, health executives are going elsewhere when they look at building their value-based contracting capabilities. It’s hard to predict how this will turn out, but if I were an enterprise EHR vendor, I’d take competition with population health management specialist vendors very seriously.

Are We Going About Population Health The Wrong Way?

Posted on March 29, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

For most of us, the essence of population health management is focusing on patients who have already experienced serious adverse health events. But what if that doesn’t work? At least one writer suggests that though it may seem counterintuitive, the best way to reduce needless admissions and other costly problems is to focus on patients identified by predictive health data rather than “gut feelings” or chasing frequent flyers.

Shantanu Phatakwala, managing director of research and development for Evolent Health, argues that focusing on particularly sick patients won’t reduce costs nearly as much as hospital leaders expect, as their assumptions don’t withstand statistical scrutiny.

Today, physicians and care management teams typically target patients with a standard set of characteristics, including recent acute events, signs of health instability such as recent inpatient admissions, and chronic conditions such as diabetes, COPD and heart disease. These metrics come from a treatment mindset rather than a predictive one, according to Phatakwala.

This approach may make sense intellectually, but in reality, it may not have the desired effect. “The reality is that patients who have already had major acute events tend to stabilize, and their future utilization is not as high,” he writes. Meanwhile, health leaders are missing the chance to prevent serious illness in an almost completely different cohort of patients.
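The gap between the two mindsets is easy to illustrate. In this invented example (the patients, admission counts, and risk scores are all hypothetical), ranking patients by past utilization and ranking them by a predictive risk score produce opposite priorities:

```python
# Hypothetical patients; IDs, admission counts, and risk scores are invented.
patients = [
    {"id": 1, "past_admits": 4, "predicted_risk": 0.10},  # acute event, now stabilized
    {"id": 2, "past_admits": 0, "predicted_risk": 0.62},  # no history, but rising risk
    {"id": 3, "past_admits": 1, "predicted_risk": 0.35},
]

# Treatment mindset: chase the frequent flyers.
by_history = sorted(patients, key=lambda p: p["past_admits"], reverse=True)

# Predictive mindset: target patients whose future utilization looks likely.
by_model = sorted(patients, key=lambda p: p["predicted_risk"], reverse=True)

# The heaviest past utilizer ends up as the model's lowest priority here.
```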

To illustrate his point, he tells the story of a commercial entity managing 19,000 lives which began a population health management project. In the beginning, health leaders worked with the data science team, which identified 353 people whose behavior suggested that they were headed for trouble.

The entity then focused its efforts on 253 of the targeted cohort for short-term personal attention, including both personal goals (such as walking their daughter down the aisle at her wedding later that year) and health goals (such as losing 25 pounds). Care managers and nurses helped them develop plans to achieve these goals through self-management.

Meanwhile, the care team overrode data analytics recommendations regarding the remaining 100 patients and did not offer them specialized care interventions during the six-month program.  Lo and behold, care for the patients who didn’t get enrolled in health management programs cost 75% more than for patients who were targeted, at a total cost of $1.4 million. Whew!

None of this is to suggest that intuition is useless. However, this case illustrates the need for trusting data over intuition in some situations. As Phatakwala notes, this can call for a leap of faith, as on the surface it makes more sense to focus on patients who are already sick. But until clinicians feel comfortable working with predictive analytics data, health systems may never achieve the population health management results they seek, he contends. And he seems to have a good point.

Putting into Practice Today’s Innovative Technologies that Enable Healthcare Disruption

Posted on March 28, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

As we went around the #HIMSS18 annual conference in Las Vegas, we were in search of practical innovations that hospitals and health systems could implement today. We found that in spades when we visited the Lenovo Health booth and had a chance to sit down with experts from Lenovo Health, Wyatt Yelverton and Andy Nieto.

Today’s healthcare demands that organizations look for innovations and efficiencies that will help them thrive in a value-based healthcare world. In the following video interview with Wyatt Yelverton and Andy Nieto from Lenovo Health, I talk with them about a wide variety of subjects and technologies, including AR/VR, telehealth, and smart assistants. Along with seeing the technology, we talk about how health IT professionals can get buy-in for these technologies and the impact these technologies will have on their organization.

If you’re interested in some of these practical IT innovations, you’ll enjoy this interview with two Lenovo Health experts.

What are you doing in your organization around these technologies? Are you using AR/VR, telemedicine, or smart assistants? What have you done to get buy-in from your organization to implement these technologies? If you haven’t implemented them, what’s holding you back? We look forward to hearing your thoughts on social media and in the comments.

Disclosure: Lenovo Health is a sponsor of Healthcare Scene.

Study Offers EHR-Based Approach To Predicting Post-Hospital Opioid Use

Posted on March 27, 2018 | Written By

Sunny is a serial entrepreneur on a mission to improve quality of care through data science. Sunny’s last venture, docBeat, a healthcare care coordination platform, was successfully acquired by Vocera Communications. Sunny has an impressive track record of strategy, business development, innovation and execution in the healthcare, casino entertainment, retail and gaming verticals. Sunny has been the Co-Chair of the Las Vegas Chapter of the Akshaya Patra Foundation (www.foodforeducation.org) since 2010.

With opioid abuse a raging epidemic in the United States, hospitals are looking for effective ways to track and manage opioid treatment effectively. In an effort to move in this direction, a group of researchers has developed a model which predicts the likelihood of future chronic opioid use based on hospital EHR data.

The study, which appears in the Journal of General Internal Medicine, notes that while opioids are frequently prescribed in hospitals, there has been little research on predicting which patients will progress to chronic opioid therapy (COT) after they are discharged. (The researchers defined COT as when patients were given a 90-day supply of opioids with less than a 30-day gap in supply over a 180-day period or receipt of greater than 10 opioid prescriptions during the past year.)

To address this problem, researchers set out to create a statistical model which could predict which hospitalized patients not previously on COT would end up on it. Their approach involved doing a retrospective analysis of EHR data from 2008 to 2014 drawn from records of patients hospitalized in an urban safety-net hospital.

The researchers analyzed a wide array of variables in their analysis, including medical and mental health diagnoses, substance and tobacco use, chronic or acute pain, surgery during hospitalization, having received opioid or non-opioid analgesics or benzodiazepines during the past year, leaving the hospital with opioid prescriptions and milligrams of morphine equivalents prescribed during their hospital stay.

After conducting the analysis, the researchers found that their model correctly flagged 79% of the patients who went on to COT, and correctly identified 78% of those who did not.
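The study's actual model and coefficients aren't reproduced here, but a predictor of this general shape, a logistic score over EHR-derived features like those listed above, can be sketched as follows. Every feature name and weight below is purely illustrative:

```python
import math

# Illustrative weights only; these are NOT the published model's coefficients.
WEIGHTS = {
    "intercept": -2.0,
    "opioid_rx_past_year": 0.9,
    "benzo_rx_past_year": 0.4,
    "surgery_this_stay": 0.3,
    "morphine_mg_equiv_per_100": 0.5,
    "discharged_with_opioid_rx": 1.1,
}

def cot_risk(features: dict) -> float:
    """Probability estimate that a patient progresses to chronic opioid therapy."""
    z = WEIGHTS["intercept"] + sum(
        weight * features.get(name, 0)
        for name, weight in WEIGHTS.items()
        if name != "intercept"
    )
    return 1 / (1 + math.exp(-z))  # logistic function

patient = {
    "opioid_rx_past_year": 1,
    "discharged_with_opioid_rx": 1,
    "morphine_mg_equiv_per_100": 2.4,
}
risk = cot_risk(patient)  # a value between 0 and 1
```

A real implementation would fit these weights to historical EHR data and validate them on a held-out cohort before any clinical use.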

Being able to predict which patients will end up on COT after discharge could prove to be a very effective tool. As the authors note, using EHR data to create such a predictive model could offer many benefits, particularly the ability to identify patients at high risk for future chronic opioid use.

As the study notes, if clinicians have this information, they can offer early patient education on pain management strategies and where possible, wean them off of opioids before discharging them. They’ll also be more likely to consider incorporating alternative pain therapies into their discharge planning.

While this data is exciting and provides great opportunities, we need to be careful how we use this information. Handled incorrectly, it could harm the roughly one in five patients the model misclassifies. It’s always important to remember that identifying those at risk is only the first challenge. The second challenge is what you do with that data to help those at risk while not damaging those who are misidentified as at risk.

One issue the study doesn’t address is whether data on social determinants of health could improve their predictions. Incorporating both SDOH and patient-generated data might lend further insight into their post-discharge living conditions and solidify discharge planning. However, it’s evident that this model offers a useful approach on its own.

Hospitals Centralizing Telemedicine, But EMR Integration Is Still Tough

Posted on March 26, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

Over the past few years, large healthcare providers have begun to offer their patients telemedicine options. In the past, they offered these services on an ad-hoc basis, but that seems to be changing. A new survey suggests that hospitals and health systems have begun to move their telemedicine service lines to a central office rather than letting individual departments decide how to deliver virtual care.

The survey, which was conducted by REACH Health, polled more than 400 healthcare executives, physicians and nurses as well as other healthcare professionals. REACH, which offers enterprise telemedicine systems, has been conducting research on the telemedicine business for several years.

Forty-eight percent of respondents to the REACH Health 2018 Telemedicine Industry Benchmark Survey reported that they coordinated telemedicine services at the enterprise level, up from 39% last year. Meanwhile, 26% said that individual departments handled their own telemedicine services, down from 36% in 2017.

The providers that are taking an enterprise approach seem to have a good reason for doing so. When it analyzed the survey data, REACH concluded that organizations offering telemedicine at the enterprise level were 30% more likely to be highly successful. (Not that the company would draw any other conclusion, of course, but it does seem logical that coordinating telehealth would be more efficient.)

The survey also found that telemedicine programs provided by both behavioral health organizations and clinics have expanded rapidly over the last few years. Back in 2015, REACH found that many behavioral health providers and clinics were at the planning stages or new to delivering telemedicine, but according to the 2018 results, many now have active telemedicine programs in place, with clinic services expanding 37% and behavioral health 40%.

While healthcare organizations may be managing telemedicine centrally, their EMRs don’t seem adequate to the job. First, most survey respondents noted that their telemedicine platform wasn’t integrated with the EMR. Meanwhile, nearly half said they were documenting patient visits in the EMR after remote consultations had ended. In addition, more than one-third of respondents said that their EMR doesn’t allow them to analyze telemedicine-specific metrics adequately.

Whether REACH’s solution solves the problem or not, I’m pretty sure they’re right that integrating telemedicine services data with an EMR remains difficult.

In fact, it seems obvious to me that while hospitals are still tweaking their programs for maximum impact, and getting paid for such services is still an issue, telemedicine won’t become a completely mature service line until collecting related data and integrating it with off-line patient care information is easy and efficient.

 

Coding Accuracy: Study Reveals Differences Between Domestic and Offshore Coding – HIM Scene

Posted on March 23, 2018 | Written By

The following is a guest blog post by Bill Wagner, CHPS, Chief Operating Officer, KIWI-TEK.

In January 2015 the ICD-10-preparation frenzy was at its peak. Healthcare provider organizations were scrambling to find coding support during the implementation and transition phases of the quickly approaching ICD-10 implementation deadline. KIWI-TEK was one of those outsourced coding companies being asked to supply experienced, qualified coders.

KIWI-TEK was valiantly trying to keep up with the burgeoning client requests for coding support. And although they had been actively recruiting for months, their coding bench was empty. For the first time in company history, KIWI-TEK decided to augment their team with additional coding resources by contracting with several offshore coding services.

By April 2016, the crunch for additional coding support was all but over. However, the appeal of lower coding costs continues to drive many healthcare executives to contract with international outsourced coding services. Interest in offshore coding remains to this day, as evidenced by Partners Healthcare’s recent decision to outsource medical record coding to India.

But what about coding accuracy? This question remains and HIM Directors deserve a data-driven answer.

The Study

Until now, the only information available for providers to compare outsourced domestic coding quality with offshore coding performance was anecdotal. Specific quality data had not been produced or shared. Amidst rampant questions and red flags, KIWI-TEK partnered with six hospitals and health systems to answer the coding industry’s toughest question: “Who delivers higher coding accuracy, domestic or offshore outsourced coding services?” (Be sure to check out the full study results for a more detailed answer to this question).

Each of the six participants had experience with both domestic and offshore coders for at least one year. The same onboarding, auditing, and training procedures were applied equally to all.

Code Accuracy Lower with Offshore

Across all six organizations, code accuracy was lower for the outsourced offshore coding service versus the domestic coding companies by an average of 6.5 percentage points.

Poor coding quality also increased payer denials, and additional management time was required to onboard, train and audit the outsourced offshore coders.

And What About the Cost?

The final results showed that, despite what seems to be a much lower hourly rate for offshore coders, the total cost is much higher when all factors are taken into consideration. These factors include:

Auditing – Offshore coders required an average of 6 more hours per coder per month of auditing due to poor accuracy results.

Denied claims – Offshore coders averaged 10 more denied claims on Inpatient and Same Day Surgery encounters per week than domestic coders. Reworking of denied claims on these patient types takes 40 minutes for each claim.
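A back-of-the-envelope calculation shows how these factors can erase an hourly-rate advantage. The extra audit hours, denial counts, and 40-minute rework time come from the study above; the hourly rates and the per-coder denial assumption are illustrative guesses, not study figures:

```python
# Assumed hourly rates (NOT from the study; for illustration only).
AUDIT_RATE = 60.0    # dollars per auditor hour
REWORK_RATE = 45.0   # dollars per denial-rework hour
WEEKS_PER_MONTH = 4.33

def monthly_overhead(extra_audit_hours: float, extra_denials_per_week: float) -> float:
    """Extra monthly cost per coder from added auditing and denial rework."""
    audit_cost = extra_audit_hours * AUDIT_RATE
    # Each denial takes 40 minutes (2/3 hour) to rework.
    rework_hours = extra_denials_per_week * WEEKS_PER_MONTH * (40 / 60)
    return audit_cost + rework_hours * REWORK_RATE

# Study figures: 6 extra audit hours per coder per month, 10 extra denials per week.
offshore_overhead = monthly_overhead(6, 10)
domestic_overhead = monthly_overhead(0, 0)
# Any hourly-rate savings must exceed this overhead before offshore coding is cheaper.
```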

The Final Answer

Yes, there is a difference. Offshore coding is less accurate, and in the long-term, may also be more expensive.

To read detailed findings of the study, download the KIWI-TEK White Paper entitled “Is Offshore Coding Really Saving You Money”.

If you’d like to receive future HIM posts in your inbox, you can subscribe to future HIM Scene posts here.  KIWI-TEK is a proud sponsor of Healthcare Scene.

Understanding Cloud EMPI with Shaz Ahmad from NextGate

Posted on March 21, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Readers of this blog have no need for me to explain the importance of an effective EMPI (Enterprise Master Patient Index) in their organization. Ensuring the right identity of your patients in disparate systems is essential to effectively running a healthcare organization from both a financial and a patient safety perspective.
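For anyone who wants to picture the core operation, an EMPI at its simplest normalizes demographic fields into a match key and groups records from disparate systems under one enterprise identity; production systems typically add probabilistic scoring on top of exact matching like this. A deliberately simplified sketch with invented records:

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    source: str  # originating system
    first: str
    last: str
    dob: str     # YYYY-MM-DD
    ssn4: str    # last four of SSN

def empi_key(r: PatientRecord) -> tuple:
    """Normalize demographics into a deterministic match key."""
    return (r.first.strip().lower(), r.last.strip().lower(), r.dob, r.ssn4)

# Two records for the same person, entered differently in two systems.
records = [
    PatientRecord("lab", "Ann", "Lee", "1960-02-01", "1234"),
    PatientRecord("ehr", " ann", "LEE", "1960-02-01", "1234"),
]

index = {}
for rec in records:
    index.setdefault(empi_key(rec), []).append(rec)

# Both records collapse to a single enterprise identity.
```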

While every healthcare organization knows they need EMPI, many aren’t as familiar with the new cloud EMPI options that are available on the market today. In order to shed some light on cloud EMPI, I sat down with Shaz Ahmad, VP Cloud Operations and Delivery at NextGate at HIMSS 2018 to look at the advantages and disadvantages of moving to the cloud for your EMPI. Plus, we dive into topics like the cost of cloud EMPI and security concerns some might have with a cloud EMPI solution.

If you’re looking at moving your EMPI to the cloud or wondering if you should, take a minute to watch this interview to learn more about what it means to move your EMPI to the cloud.

What’s your organization’s approach to EMPI? Are you already using cloud EMPI? Are you considering a move to the cloud? What’s keeping you from moving there? We look forward to hearing your thoughts and perspectives in the comments.

EMPI is so important in healthcare and I really like how cloud EMPI can solve a challenging problem in a simple, cost effective way for many healthcare organizations and healthcare IT vendors.

Note: NextGate is a sponsor of Healthcare Scene.

Mayo Clinic Creating Souped-Up Extension Of MyChart

Posted on March 19, 2018 | Written By

Anne Zieger is a veteran healthcare branding and communications expert with more than 25 years of industry experience. Her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also worked extensively with healthcare and health IT organizations, including several Fortune 500 companies. She can be reached at @ziegerhealth or www.ziegerhealthcare.com.

As you probably know, MyChart is Epic’s patient portal. As portals go, it’s serviceable, but it’s a pretty basic tool. I’ve used it, and I’ve been underwhelmed by what its standard offering can do.

Apparently, though, it has more potential than I thought. Mayo Clinic is working with Epic to offer a souped-up version of MyChart that offers a wide range of additional services to patients.

The new version integrates Epic’s MyChart Virtual Care – a telemedicine tool – with the standard MyChart mobile app and portal. In doing so, it’s following in the footsteps of many other health systems, including Henry Ford Health System, Allegheny Health Network and Lakeland Health.

However, Mayo is going well beyond telemedicine. In addition to offering access to standard data such as test results, it’s going to use MyChart to deliver care plans and patient-facing content. The care plans will integrate physician-vetted health information and patient education content.

The care plans, which also bring Mayo care teams into the mix, provide step-by-step directions and support. This support includes decision guidance, which can include pre-visit, mid-treatment and post-visit planning.

The app can also send care notifications and, based on data provided by patients and connected devices, adapt the care plan dynamically. The care plan engine includes special content for conditions like asthma, type II diabetes, chronic obstructive pulmonary disease, congestive heart failure, orthopedic surgery and hip/knee joint replacement.

Not surprisingly, Mayo seems to be targeting high-risk patients in the hopes that the new tools can help them improve their chronic disease self-management. As with many other standard interventions related to population health, the idea here is to catch patients with small problems before the problems blossom into issues requiring an emergency department visit or hospitalization.

This whole thing looks pretty neat. I do have a few questions, though. How does the care team work with the MyChart interface, and how does that affect its workflow? What type of data, specifically, triggers changes in the care plan, and does the data also include historical information from Mayo’s EMR? Does Mayo use AI technology to support care plan adaptations? Does the portal allow clinicians to track a patient’s progress, or is Mayo assuming that if patients get high-quality educational materials and a personalized care plan, the results will just come?

Regardless, it’s good to see a health system taking a more aggressive approach than simply presenting patient health data via a portal and hoping that this information will motivate the patient to better manage their health. This seems like a much more sophisticated option.