Lawmakers and regulators in Washington are facing the challenge of regulating artificial intelligence (AI) in healthcare, and there's concern within the AI industry that they might not get it right.
The impact of AI in healthcare is already significant, with the FDA having approved hundreds of AI products. These algorithms handle tasks ranging from patient scheduling and emergency-room staffing decisions to transcribing and summarizing clinical visits. AI is also beginning to assist radiologists in interpreting medical imaging such as MRIs and X-rays. Some physicians, like Bob Wachter of the University of California-San Francisco, even consult models such as GPT-4 on complex cases.
However, AI's dynamic nature poses a challenge for regulators: unlike a traditional drug, whose chemistry remains constant over time, an AI model can change as it is updated or retrained on new data. Even so, efforts are underway at both the government and industry levels to develop rules ensuring transparency and privacy. Congress is also showing interest, with the Senate Finance Committee holding hearings on AI in healthcare.
Lobbying activity has also increased, with organizations actively engaging in discussions about AI regulation. TechNet, a technology trade group, has launched a $25 million campaign, including TV ads, to educate the public about the benefits of AI.
Regulating AI in healthcare is complex due to the early stage of the technology's development. Challenges include ensuring that healthcare professionals understand and trust AI systems for clinical decision-making, as well as addressing potential biases in AI algorithms. For instance, AI systems trained on biased data may perpetuate disparities in healthcare, such as providing less care to people of color.
There's also the issue of AI systems generating inaccurate information, as demonstrated by an incident in which an AI-generated prior authorization letter for a "wacky" prescription was indistinguishable from a genuine one. Additionally, AI algorithms used to predict patient behavior may inadvertently reinforce biases, as seen in a test where people of color were more likely to be flagged as probable no-shows for clinical appointments.
To address these challenges, there is consensus among experts and FDA officials on the need for transparent algorithms monitored by humans over the long term. Policymakers are urged to invest in systems that track AI's performance and evolution. Despite the risks and challenges, the potential for AI to revolutionize healthcare remains promising, with ongoing advancements and innovations expected in the future.