Artificial Intelligence Bots Take on Health Insurers in Battle Over Care and Costs
In a bid to gain the upper hand, patients and doctors are deploying AI-powered tools to combat claims denials, prior authorizations, and soaring medical bills. Several companies and nonprofits have developed these technology-based solutions, which utilize machine learning algorithms to analyze vast amounts of data and provide personalized support to consumers.
One such company, Sheer Health, has launched an app that enables patients to connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays, and covered benefits. The AI-powered program uses both human and machine intelligence to answer patients' queries for free, with the option to pay a fee for extra support in challenging denied claims or dealing with out-of-network reimbursements.
Meanwhile, a nonprofit organization called Counterforce Health has designed an AI assistant to help patients appeal their denied health insurance claims. This service analyzes denial letters, policy documents, and outside medical research to draft customized appeal letters. The program's creators hope that these tools will empower patients to take control of their healthcare decisions.
Consumers are also turning to general-purpose AI chatbots such as Grok for health information and advice. According to a recent poll by the health care research nonprofit KFF, a quarter of adults under 30 have used an AI chatbot for this purpose. However, many of these users remain uncertain about the accuracy of the information provided.
State legislators are also taking action to regulate the use of AI in healthcare. Over a dozen states have passed laws governing the use of artificial intelligence in medical settings, with some banning insurance companies from using AI as the sole decision-maker in prior authorizations or medical necessity denials.
Dr. Arvind Venkat, a Pennsylvania state representative and emergency room physician, is championing legislation to regulate the use of AI in healthcare. He believes that while AI has the potential to improve care delivery and efficiency, it should not replace human oversight entirely.
Health care can often feel like "a black box," with complex algorithms shaping medical treatment and outcomes without transparency or accountability. Experts agree that human oversight is essential to ensure AI is used responsibly and in a way that prioritizes patients' needs.
One patient, Matthew Evins, turned to one of these tools after his insurer denied his back surgery just 48 hours before the procedure was scheduled. He sought help from Sheer Health, which identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.
While AI is not a silver bullet for healthcare challenges, its potential benefits should be recognized. However, as these technology-based solutions become more prevalent, it's essential to ensure that they are developed and deployed responsibly, with human oversight and accountability in place.