
Artificial intelligence (AI) has the potential to affect healthcare in profound ways, propelling better data-driven decisions on diagnoses, treatments, and population health management. Reports indicate that AI could help the U.S. health system realize $150 billion in savings by 2026. Beyond cost savings, AI is poised to support care teams, strengthen evidence-based decision-making, and shift care delivery toward a more proactive approach. Yet despite this promise, healthcare's adoption of AI continues to lag behind other industries.

At the 2022 ViVE Conference, healthcare industry leaders reflected on healthcare machine learning solutions, discussing both the barriers to AI adoption in healthcare and the ways clinicians and community health workers are leveraging AI to deliver better outcomes. Arundhati Parmar, VP and Editor-in-Chief at MedCity News, moderated a panel titled “Show Me an AI-Enabled Early Warning Sign,” which included the following health experts:

  • John Brownstein, Ph.D., Chief Innovation Officer at Boston Children’s Hospital and Professor at Harvard Medical School
  • Alissa Hsu Lynch, Global Lead, MedTech Strategy and Solutions at Google Cloud
  • Balaji Ramadoss, Ph.D., Founder and CEO at Edgility Inc.
  • Ines Vigil, M.D., M.P.H., General Manager and Senior Vice President of Provider Solutions at Clarify Health

AI is improving community health

Healthcare machine learning solutions have transformed public health and disease surveillance by scaling the collection, analysis, and interpretation of disparate data. During the early stages of the pandemic, understanding transmission patterns was essential for public health authorities to shape virus containment efforts and policies. The promise of AI delivered: the first international detection of COVID-19 in Wuhan, China, was flagged by HealthMap, an AI-powered disease surveillance system developed by Dr. Brownstein and researchers at Boston Children’s Hospital. During the panel, Dr. Brownstein explained how HealthMap’s disease surveillance system uses automated text processing algorithms to mine disparate data sources, such as online news and social media. It monitors emerging global health threats in real time, visualizing infectious disease patterns on an interactive geographic map.

“The work of my team at Boston Children’s has been building technology to support early warning of pandemics, for many years. We produced one of the earliest signs of COVID in Wuhan through the mining of news and social media. That alert came December 30th, 2019.” — John Brownstein, Ph.D., Chief Innovation Officer at Boston Children’s Hospital and Professor at Harvard Medical School
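HealthMap’s production pipeline is far more sophisticated and not fully public, but the core pattern Dr. Brownstein describes, scanning free-text sources for outbreak language and tying matches to places and dates, can be sketched in a few lines. Everything below (the keywords, the feed format, the escalation rule) is an illustrative assumption, not HealthMap’s actual implementation:

```python
import re
from collections import Counter

# Illustrative outbreak-related phrases; HealthMap's real taxonomy is far richer.
SIGNAL_TERMS = re.compile(
    r"\b(outbreak|pneumonia of unknown (cause|etiology)|cluster of cases|novel virus)\b",
    re.IGNORECASE,
)

# Toy stand-in for a news/social-media feed; a real system would ingest items
# continuously from RSS feeds, news APIs, and social platforms.
headlines = [
    ("2019-12-30", "Wuhan", "Health officials investigate pneumonia of unknown cause"),
    ("2019-12-30", "Wuhan", "Cluster of cases linked to local seafood market"),
    ("2019-12-31", "Paris", "Flu season arrives early, hospitals prepare"),
]

def flag_signals(feed):
    """Return (date, location) pairs whose text matches an outbreak pattern."""
    hits = Counter()
    for date, location, text in feed:
        if SIGNAL_TERMS.search(text):
            hits[(date, location)] += 1
    return hits

# Locations with repeated same-day matches would be escalated for human review
# and plotted on the interactive map described above.
for (date, location), count in flag_signals(headlines).items():
    print(f"{date} {location}: {count} signal(s)")
```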

AI technologies are also aiding decisions on vaccination strategy and distribution policies. According to Dr. Brownstein, AI has the potential to uncover novel insights into how communities can receive better access to care. He worked with Harvard, Ariadne Labs, and Brigham and Women’s Hospital to launch Vaccine Planner, an AI-powered tool that maps the location of vaccine deserts within the U.S. The tool draws on various public data sources, including the CDC’s Social Vulnerability Index and HealthLandscape’s pediatric practice locations, to identify geographic areas with low vaccination uptake and access. Dr. Brownstein stated that deep insight into barriers to vaccination is critical to helping public health officials develop data-driven interventions that address health inequities.
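The panel did not detail Vaccine Planner’s methodology, but the idea of flagging “vaccine deserts” from public data can be illustrated with a simple rule combining a vulnerability score, travel distance, and uptake. The thresholds and field names below are hypothetical, not the tool’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class CensusTract:
    name: str
    svi: float              # CDC Social Vulnerability Index, 0 (low) to 1 (high)
    miles_to_site: float    # distance to the nearest vaccination site
    uptake_rate: float      # share of residents vaccinated

# Illustrative thresholds; a real tool would weigh many more access factors.
def is_vaccine_desert(tract, max_miles=10.0, min_uptake=0.5, min_svi=0.75):
    too_far = tract.miles_to_site > max_miles
    underserved = tract.uptake_rate < min_uptake and tract.svi >= min_svi
    return too_far or underserved

tracts = [
    CensusTract("Tract A", svi=0.82, miles_to_site=14.0, uptake_rate=0.41),
    CensusTract("Tract B", svi=0.30, miles_to_site=2.5, uptake_rate=0.78),
]

for t in tracts:
    print(t.name, "vaccine desert" if is_vaccine_desert(t) else "adequate access")
```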

Machine learning is guiding clinicians through tough decisions

AI applications in healthcare also extend to health systems and clinicians, who are leveraging AI for data-driven insights into the complex patient care decisions they face daily. During the panel, Dr. Ramadoss gave examples of health systems applying machine learning and automation to guide decision-making. He described how health organizations are exploring machine learning as a fairer, more equitable way to decide when people receive care, using models that estimate inpatient wait times based on specific conditions, clinical factors, and social determinants. The Institute of Medicine includes timeliness, efficiency, and patient-centered care among its six domains of quality healthcare. However, wait times for healthcare services remain a common barrier to care, potentially leading to adverse patient outcomes, increased mortality risk, and lower care satisfaction. Moreover, a patient’s clinical profile and socioeconomic characteristics may influence providers’ decision-making and the speed at which the patient is seen.
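The panelists did not describe a specific wait-time model, so the following is only a minimal sketch of the kind of approach Dr. Ramadoss alludes to: a regression model trained on synthetic stand-ins for conditions, clinical factors, and determinants to estimate inpatient wait times. All features and the toy ground truth are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for the "conditions, factors, and determinants" a health
# system might use; real features would come from the EHR and SDOH data.
acuity = rng.integers(1, 6, n)          # 1 = most urgent triage level
age = rng.integers(18, 95, n)
sdoh_index = rng.random(n)              # composite social-needs score
occupancy = rng.random(n)               # current inpatient bed occupancy

# Toy ground truth: waits grow with bed occupancy and with lower urgency.
wait_minutes = 30 + 40 * acuity + 120 * occupancy + rng.normal(0, 15, n)

X = np.column_stack([acuity, age, sdoh_index, occupancy])
X_train, X_test, y_train, y_test = train_test_split(X, wait_minutes, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE (minutes):", round(mean_absolute_error(y_test, model.predict(X_test)), 1))
```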

“Artificial intelligence and machine learning can be very practical. At Clarify, we identify all the unique characteristics of individuals and social determinants, such that clinicians can be armed with this type of information on a daily basis.” — Ines Vigil, M.D., M.P.H., General Manager and Senior Vice President of Provider Solutions at Clarify Health

Similarly, Dr. Vigil explained how healthcare analytics is helping clinicians deliver precision medicine and patient-centered care. Providers have historically struggled to address variations in care and pinpoint unwarranted clinical variation due to limited data. Today, healthcare analytics software like Clarify Care is helping clinicians identify unwarranted clinical variation and improve care delivery by providing precise provider performance benchmarks. Horizon Healthcare Services, for example, was able to identify the potential for $285 million in savings and address discrete clinical behaviors, showcasing the capability of AI to drive better outcomes. Dr. Vigil noted that machine learning analytics are helping providers identify individuals at higher risk of readmission, giving clinicians evidence-driven confidence to change treatment approaches and care plans. Healthcare analytics that incorporate comprehensive social determinants of health are arming health systems with the insights needed to proactively support high-risk patient populations, mitigating the potential for disparities in care outcomes.
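Clarify’s models are proprietary, so as a rough illustration of readmission-risk scoring in general, the sketch below fits a logistic regression on synthetic clinical and social-determinant features and flags high-risk patients. All feature names, coefficients, and thresholds are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Hypothetical clinical and social-determinant features.
prior_admissions = rng.poisson(1.0, n)
chronic_conditions = rng.integers(0, 6, n)
housing_instability = rng.binomial(1, 0.15, n)

# Toy outcome: readmission odds rise with each feature.
logit = -3 + 0.6 * prior_admissions + 0.4 * chronic_conditions + 1.2 * housing_instability
readmitted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([prior_admissions, chronic_conditions, housing_instability])
X_tr, X_te, y_tr, y_te = train_test_split(X, readmitted, random_state=1)

model = LogisticRegression().fit(X_tr, y_tr)

# Patients above a risk threshold would be routed to proactive care management.
risk = model.predict_proba(X_te)[:, 1]
print(f"{(risk > 0.5).sum()} of {len(risk)} patients flagged as high readmission risk")
```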

Responsible use of AI in healthcare

Machine learning and AI have the potential to improve care delivery and health, but AI models don’t always deliver the intended outcome. Bias in algorithms is a critical concern that damages healthcare leaders’ and clinicians’ trust. According to Lynch, eliminating AI bias has been a crucial focus for Google Cloud, which has developed responsible AI principles and a machine learning governance process to evaluate all internal algorithms. She noted that when machine learning algorithms are trained on data that reflects historical, cognitive, and societal biases, they can worsen care delivery and deepen health disparities. In 2021, the FTC issued a warning against the adoption of biased healthcare algorithms, urging businesses to adopt transparent models and regularly test AI algorithms to ensure insights do not discriminate on the basis of race, gender assigned at birth, or other protected status.

“We absolutely recognize that AI algorithms and data sets can reflect, they can reinforce, or they can reduce unfair biases. And it’s really critical, therefore, to be very proactive in the design and the engineering of AI systems — to think about health equity all along the pathway.” — Alissa Hsu Lynch, Global Lead, MedTech Strategy and Solutions at Google Cloud
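The FTC’s call to “regularly test” algorithms can be made concrete with a subgroup audit. The sketch below runs a standard equal-opportunity check, comparing true-positive rates across patient groups and flagging the model when the gap exceeds a chosen tolerance. The tolerance and toy data are illustrative assumptions:

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def audit_by_group(y_true, y_pred, groups, max_gap=0.1):
    """Flag the model if the TPR gap across groups exceeds max_gap."""
    rates = {
        g: true_positive_rate(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap

# Toy data: the model catches far fewer true positives in group "B".
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

rates, gap, flagged = audit_by_group(y_true, y_pred, groups)
print(rates, f"gap={gap:.2f}", "FLAG FOR REVIEW" if flagged else "OK")
```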

The data and algorithms behind AI shouldn’t be a “black box.” Dr. Ramadoss warned that a future of proprietary, black-box models would threaten accountability and pose liability concerns, with wide-reaching implications for population health. AI is most helpful as a tool that supports decision-making rather than replaces it, and explainability is critical for actionable insights and the responsible use of AI. As Dr. Ramadoss put it, “if everybody creates their own microcosms of intelligence, do we really understand what biases we are introducing into the system?”

Dr. Vigil agreed that “black box” AI would slow adoption across the healthcare industry. In her words, “it is extremely hard to convince a clinician to do something different than they’re used to doing when you don’t have transparent information.” Because each decision a clinician makes has a long-lasting effect on a patient’s health, AI models must prioritize transparency and explainability. When clinicians use AI to support diagnostic and medical decisions, they need visibility into how models produce their outputs, including how large and diverse the underlying data sets are and how representative they are of the patient population being treated.

“Clinicians need to know what the biases are, and that starts with knowing how big a data set the algorithm was trained on. Was it 200, 2,000, or 200,000? Clinicians need to know that to be a responsible applicant of that information for their patients.” — Ines Vigil, M.D., M.P.H., General Manager and Senior Vice President of Provider Solutions at Clarify Health
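Dr. Vigil’s minimum bar, knowing how big and how representative the training set was, amounts to publishing a small “model card” alongside every model. A minimal sketch, assuming a simple scikit-learn classifier and hypothetical features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical cohort; in practice these figures would be documented facts
# about the real training population.
feature_names = ["age", "prior_admissions", "hba1c"]
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 1])))

model = LogisticRegression().fit(X, y)

def model_card(model, X, y, feature_names):
    """Print the minimum a clinician should see before acting on a prediction."""
    print(f"Training examples: {len(X)}")                 # Was it 200 or 200,000?
    print(f"Positive-class prevalence: {y.mean():.1%}")
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"  {name}: coefficient {coef:+.2f}")       # direction and strength

model_card(model, X, y, feature_names)
```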

The responsible use of AI in healthcare means ensuring that insights and algorithms provide solutions rather than deepen systemic inequities. How data is collected, ingested, and distilled into insights is critical to ensuring accountability, adoption, and clinical accuracy. For AI to support triage decisions, diagnostics, care delivery, and population health management, clinicians need visibility into the algorithms and data behind it. By eliminating the “black box,” AI/ML can be more readily adopted by clinicians and deliver on the promise of better care.