Health Tech

Senator Probes Google About ‘Premature Deployment of Unproven Technology’ In Healthcare Settings
A U.S. senator penned a letter to Google leadership on Tuesday expressing concerns about Med-PaLM 2 — the company’s generative AI tool for healthcare providers that is currently being used by Mayo Clinic and other health systems.

In the letter, Senator Mark Warner (D-Virginia) wrote that he is worried that the “premature deployment of unproven technology” could erode trust in the country’s medical institutions, exacerbate racial disparities in health outcomes, and increase the risk of diagnostic and care delivery errors. 

He addressed the letter to Sundar Pichai, CEO of Alphabet and its subsidiary Google.

Warner’s letter pointed out that over the past year, tech companies have been trying to capture market share by rushing to launch generative AI products — this frenzied pace of generative AI development came after OpenAI released its groundbreaking product ChatGPT last fall. Amid the generative AI boom, tech firms are willing to take bigger risks and deploy fledgling technology “in an effort to gain a first mover advantage,” Warner wrote.

“This race to establish market share is readily apparent and especially concerning in the healthcare industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in healthcare institutions in recent years and the sensitivity of health information,” he declared.

Ever since Google unveiled Med-PaLM 2 in April, a select group of the tech giant’s healthcare provider customers have been piloting the AI model. They are testing its ability to answer medical questions, summarize unstructured texts and organize health data.

Even though AI has been used across healthcare settings for years, large language models and other generative AI tools bring “complex new questions and risks” to the field, Warner wrote. 

He cited a report published last month in the Wall Street Journal in which Greg Corrado, a senior research director at Google who worked on Med-PaLM 2, said that he didn’t “feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Warner also pointed out that Google published research in May showing that Med-PaLM 2’s answers to medical questions contained more inaccurate and irrelevant information than responses written by physicians.

The letter requested that Google provide more clarity about its chatbot’s training, accuracy, ethical considerations and deployment in healthcare settings. 

Warner asked for more information on how Google is ensuring that the AI model can appropriately handle sensitive health information, as well as how the company is working to mitigate misdiagnosis risks. He also sought answers about whether patients are being informed about the tool’s use and the frequency with which Google will re-train the model.

Pichai isn’t the only tech CEO whom Warner has recently grilled in a public letter. In October, the senator wrote to Meta CEO Mark Zuckerberg to express his concerns about the company’s Meta Pixel tracking tool and to seek more information about its collection of health information without patient consent.

Photo: Justin Sullivan/Getty Images