Google has been forced to remove some of its artificial intelligence (AI) health summaries after a recent investigation found that these summaries were putting users at risk of harm by providing false and misleading information.
The AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, have been deemed "helpful" and "reliable" by Google. However, experts have raised concerns over the accuracy of these summaries, particularly when it comes to health-related queries.
In one instance, typing "what is the normal range for liver blood tests" into Google led to a summary that provided inaccurate information about crucial liver function tests. This could put people with serious liver disease at risk of not seeking further medical care, as they may incorrectly assume they have a normal test result.
Experts warn that these summaries can mislead seriously ill patients with incorrect information and deter them from attending follow-up healthcare appointments. The investigation also found that even slight variations of the original queries still triggered AI Overviews, raising concern that rephrased searches can surface the same misleading summaries.
Google has since removed AI Overviews for certain search terms related to liver health. However, experts say this is only a step in the right direction and more needs to be done to tackle the issue of AI Overviews providing inaccurate health information.
"This is excellent news, but it's only the first step," said Vanessa Hebditch, director of communications and policy at the British Liver Trust. "We're concerned that if people ask questions in a different way, they'll still get misleading AI Overviews."
The investigation has highlighted the need for Google to improve its quality control measures for health-related search results. Google said its internal team of clinicians had reviewed some of the information provided by AI Overviews and found that in many instances it was not inaccurate and drew on high-quality websites.
However, experts say that even if the information is from reputable sources, the presentation and interpretation of that information can still be misleading. As one expert noted, "A liver function test or LFT is a collection of different blood tests... But the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test."
Google's reliance on its AI tool has led to calls for greater transparency and accountability. As one expert noted, "AI Overviews appear above ranked results... When the topic is health, errors carry more weight."