On February 6th, the US government issued a memo to Medicare insurers clarifying that AI cannot be used as the sole basis for denying claims. While machine-learning algorithms can assist in making coverage determinations, they cannot make those decisions on their own. The memo comes in response to lawsuits against health insurers such as UnitedHealthcare and Humana, which have been accused of using AI to wrongly deny coverage. Patients in those suits claim that the AI model nH Predict has a 90% error rate, highlighting a dangerous aspect of the technology that is receiving increased attention.
Several states, including New York and California, have also warned insurance companies to verify the fairness of their algorithms. The Centers for Medicare & Medicaid Services has expressed concern that algorithms could exacerbate discrimination and bias, and has urged insurers to ensure their models comply with anti-discrimination requirements.
The memo serves as a reminder that while AI can be a useful tool in decision-making, it should not be relied on alone for critical decisions such as those affecting health care coverage. Having experienced this firsthand, I believe patients must understand how these systems work and advocate for their rights if a claim is denied or mishandled by an insurance company or an AI system.