U.S. States Began to Regulate Generative AI in Health Care

Artificial intelligence (AI) has been a cornerstone of innovation in healthcare for decades, streamlining processes like medical imaging analysis, patient record management, and disease prediction. Generative AI, a new class of AI capable of producing content such as text, images, and even decision recommendations, has emerged, offering transformative possibilities for patient care. Unlike traditional AI, which analyzes data to generate insights, generative AI can create entirely new outputs, including personalized treatment plans or virtual health consultations.

However, the integration of such cutting-edge technology raises significant ethical, legal, and practical concerns. These range from the accuracy of AI-generated recommendations to potential biases embedded in its algorithms. Furthermore, the opacity of generative AI models—often described as “black boxes”—makes it challenging to understand or explain their decisions. This has spurred U.S. states to introduce regulations to manage risks while harnessing the benefits of generative AI in healthcare.

The Role of Generative AI in Health Care

Generative AI holds transformative potential in healthcare. By leveraging machine learning techniques, these systems can assist in tasks ranging from synthesizing medical images for research to generating patient-specific care plans. For instance, an AI system can process a patient’s electronic health records (EHRs), identify potential health risks, and suggest preventive measures within minutes.

Generative AI is also showing promise in drug discovery. By analyzing vast datasets, AI models can simulate molecular interactions, accelerating the identification of promising drug candidates. Similarly, in diagnostics, AI tools can analyze symptoms and medical histories to offer preliminary diagnoses or flag unusual patterns that might indicate rare diseases.

However, the use of generative AI in clinical decision-making is fraught with challenges. While these systems can augment human expertise, they lack the nuanced contextual understanding that comes with years of medical training. For example, a generative AI might suggest an optimal treatment based on statistical probabilities but fail to account for a patient’s unique circumstances, preferences, or comorbidities. As a result, the integration of AI must involve close collaboration with healthcare professionals to ensure that human oversight remains central.

Emerging State Regulations

To address the complexities of generative AI in healthcare, several states have enacted regulations designed to ensure ethical deployment and patient protection. California has taken a leading role with legislation requiring healthcare providers to disclose the use of generative AI in patient communications. Under this law, disclaimers must be clear and prominently displayed, whether in written, verbal, or video interactions. This ensures patients are aware of when AI-generated insights are influencing their care and allows them to seek clarification from a human professional.

Utah’s approach aligns closely with California’s, focusing on mandatory disclosures whenever generative AI interacts with patients. However, Utah’s law expands the scope by covering non-scripted outputs generated by AI, which could include conversational interactions during telehealth consultations.

These state laws highlight key trends in generative AI regulation. Transparency and informed consent are becoming central tenets, ensuring that patients are aware of AI’s role in their care and can make educated decisions. As these regulations evolve, they may influence other states to adopt similar measures, potentially paving the way for nationwide standards.

Federal Initiatives and Oversight

While state governments are spearheading many regulatory efforts, the federal government is also taking steps to manage generative AI in healthcare. In October 2023, President Joe Biden issued an executive order emphasizing the importance of safe, ethical, and trustworthy AI development. This directive tasked the Department of Health and Human Services (HHS) with formulating policies to address the deployment of predictive and generative AI in healthcare delivery.

Medicare regulations have also begun incorporating AI-specific provisions. For example, the Centers for Medicare & Medicaid Services (CMS) introduced rules prohibiting insurers from relying solely on AI algorithms for coverage determinations. These guidelines ensure that human reviewers remain integral to the decision-making process, particularly in complex cases where clinical judgment is essential.

Federal oversight complements state initiatives by providing a broader framework for AI governance. By aligning these efforts, the U.S. can create a cohesive regulatory environment that balances innovation with patient safety.

Ethical Considerations in Generative AI

The ethical challenges surrounding generative AI are multifaceted, touching on issues of transparency, equity, and accountability. Transparency is critical: patients must know when AI is influencing their care. Without clear disclosures, there’s a risk that patients could unknowingly rely on AI-generated advice, potentially undermining trust in the healthcare system.

Equity is another pressing concern. A generative AI system is only as effective as the data it is trained on. If those datasets reflect historical biases—such as disparities in healthcare access—the system can perpetuate or even exacerbate inequities. For example, an AI trained primarily on data from affluent populations might underperform when treating patients from underserved communities.

Accountability is equally vital. While AI can assist in decision-making, the ultimate responsibility for patient care must rest with healthcare providers. Ensuring that physicians understand how AI tools function—and that they can intervene when necessary—is critical for maintaining ethical standards.

AI in Utilization Management and Prior Authorization

Generative AI is increasingly being used in utilization management (UM) and prior authorization (PA). These workflows, which determine insurance coverage for medical services, are often criticized as slow and cumbersome. AI has the potential to streamline them by automating routine tasks, such as verifying medical necessity or assessing compliance with insurance policies.

However, this efficiency comes with challenges. Illinois, for instance, has enacted legislation requiring UM programs to use evidence-based criteria and involve clinical peers in adverse determinations. Similarly, Colorado mandates that AI systems used in healthcare decision-making undergo impact assessments to identify and mitigate biases or inaccuracies.

These state-level regulations underscore the need for transparency and fairness in AI-assisted UM and PA processes. As these tools become more prevalent, ensuring they adhere to ethical and legal standards will be paramount.

Technological Challenges and Risks

Despite its potential, generative AI faces significant technological hurdles. One major issue is accuracy. AI systems rely on vast amounts of data to generate outputs, but errors in data collection or processing can lead to inaccurate or misleading recommendations. For instance, an AI tool might misinterpret symptoms, leading to incorrect diagnoses or inappropriate treatment plans.

Transparency is another challenge. Many generative AI models operate as “black boxes,” making it difficult to understand how they arrive at their conclusions. This opacity can erode trust among healthcare providers and patients, especially in critical care scenarios.

Data security is a further concern. Generative AI systems often process sensitive patient information, making them prime targets for cyberattacks. Ensuring robust data protection measures are in place is essential to safeguard patient privacy and maintain trust in AI-enabled healthcare solutions.

Stakeholder Responsibilities

The successful integration of generative AI in healthcare requires collaboration among stakeholders. Healthcare providers must implement AI tools responsibly and give staff adequate training to understand their capabilities and limitations. Insurers, meanwhile, must be transparent in their use of AI, ensuring that coverage decisions are fair and comprehensible.

Regulators play a crucial role in setting standards and enforcing compliance. By working closely with industry organizations and patient advocacy groups, regulators can develop guidelines that address the unique challenges of generative AI. This collaborative approach fosters innovation while protecting patient interests.

Future Directions and Trends

The regulation of generative AI is still in its infancy, but its trajectory suggests a future marked by greater oversight and standardization. As states continue to adopt AI-specific legislation, there is growing momentum for a unified regulatory framework at the federal level. This could include standardized disclosure requirements, guidelines for data quality, and mechanisms for addressing biases.

Patient expectations are also evolving. With the rise of telemedicine and digital health tools, patients increasingly expect personalized, efficient care. Generative AI can meet these demands, but only if deployed responsibly. Innovations such as explainable AI—which makes AI decision-making processes more transparent—could help build trust and ensure that AI complements, rather than replaces, human expertise.

Conclusion

Generative AI represents a significant leap forward in healthcare innovation, offering tools that can enhance diagnosis, treatment, and patient care. However, its integration into clinical settings requires careful regulation to mitigate risks and uphold ethical standards. By balancing innovation with patient safety, U.S. states and federal agencies are paving the way for a future where generative AI transforms healthcare for the better.

As stakeholders across the healthcare ecosystem work together, they must prioritize transparency, equity, and accountability to ensure that generative AI delivers on its promise without compromising the principles of ethical care.

Keeping up with the fast-changing rules around AI in healthcare can be overwhelming. That’s where Stevens Law Group comes in. We’re here to make it easier for you. Whether you’re a healthcare provider, an insurer, or a tech company, our team has the experience to help you stay compliant, avoid risks, and get the most out of AI in your work.

From understanding disclosure laws to putting ethical AI practices in place, we’ve got you covered. Reach out to Stevens Law Group today to set up a consultation and see how we can help you navigate the challenges and opportunities of AI.

Let’s make AI work for you—safely and responsibly.
