Now Is the Time to Prepare Best Practices to Use AI Responsibly to Its Full Potential
By Thomas Garza and Kaitlyn Rice
Research on AI is moving unpredictably fast. Though the use of AI holds great potential in the healthcare space to improve affordability, efficiency, and even outcomes, the stakes are inherently higher in use cases that involve patient care. This warrants appropriate guardrails internally, and shared best practices externally, to balance experimentation and innovation with patient safety and privacy protections.
As a healthcare technology company with a vision for how AI can make healthcare more affordable and accessible, Oscar has been forthcoming and transparent in sharing our learnings on AI, in part to inform policymaking. State and federal policymakers are just beginning to develop frameworks and approaches to regulating AI, so now is the time for health tech companies to be proactive about self-regulation and to engage with policymakers to shape the emerging AI regulatory landscape. To level set, let’s review what’s happened on the AI policy front to date:
NAIC MODEL BULLETIN. In December 2023, the National Association of Insurance Commissioners (NAIC), a national organization representing insurance regulators in all 50 states, D.C. and U.S. territories, adopted a model bulletin on the Use of AI Systems by Insurers. The bulletin applies a risk-based governance approach to AI regulation and expects health insurers using AI to create a written program for the use of AI Systems to mitigate the risks of non-compliance and consumer harm. Oscar provided feedback during both comment periods, which was incorporated into the final bulletin, and we support the model bulletin given its risk-based approach, focus on generative AI, and emphasis on transparency of AI models. As of this writing, 17 states and the District of Columbia have issued the bulletin.
WHITE HOUSE COMMITMENTS. In the fall of 2023, Oscar co-authored, and was one of more than 40 healthcare payers and providers to pledge to, a set of healthcare AI commitments with the White House, which serve to align industry self-governance. This effort is a strong example of public-private partnership to ensure the safe and innovative deployment of AI, and industry-wide adoption of these commitments sets a precedent for future federal action.
STATE LEGISLATION. During this year’s state legislative sessions, AI was a hot topic: bills have been introduced to regulate the use of AI in some form in almost every state. These include more innocuous proposals such as establishing AI Task Forces, requiring reasonable transparency, and prohibiting bias. However, bills introduced in New York, Oklahoma, and Pennsylvania specifically target the use of AI algorithms in health insurance utilization review and claims processes, particularly in light of ongoing public concerns and litigation over insurers’ use of AI in claims. A bill in Georgia, which died early in the session, proposed an outright ban on the use of AI in making healthcare decisions, including insurance coverage determinations. These sweeping proposals are concerning because they fail to consider the risk spectrum of AI, which, depending on how it is defined, can encompass widely accepted practices as simple as spell check in internet browsers. Even more concerning, these proposals would hinder the use of AI in even benign, administrative functions in healthcare determinations, which hold great potential for operational improvement and cost savings with minimal patient risk. Oscar therefore believes that our members will be best served by a more targeted, narrow focus on high-risk AI use cases, similar to the NAIC model bulletin, which allows healthcare companies like Oscar to continue to innovate responsibly.
FEDERAL ACTIVITIES. Though there has been significant interest in and discussion of AI policy broadly, there have been minimal tangible federal developments on AI regulation for the healthcare industry to date, though we expect both legislative and regulatory action soon. Most notable for industry so far is the recent ACA nondiscrimination final rule, which reiterated that nondiscrimination in health programs and activities applies to the use of AI, clinical algorithms, predictive analytics, and other tools. Additionally, President Biden issued an Executive Order (EO) last year that outlines key actions executive agencies must take with respect to government use of AI and the issuance of AI guidance. The recently published White House 180-Day Update on the EO indicates the completion of those agency actions; namely, the creation of the HHS AI Task Force, which has one year to develop a strategic plan that includes policies and frameworks – possibly including regulatory action – on the responsible deployment and use of AI and AI-enabled technologies. In Congress, both the House and Senate have formal workstreams and/or task forces critically studying AI, and the Senate recently published a bipartisan AI roadmap that is expected to inform legislation later this year.
What Can Health Tech Companies Do Now to Prepare for This Changing Landscape?
Oscar strongly believes that transparency is essential to build trust and accelerate the development of AI technologies that will optimize healthcare delivery. To other health tech companies navigating the changing AI regulatory landscape, we encourage a similar approach rooted in transparency. Oscar has pledged to, and continuously refers to, the White House AI principles to guide our work, including:
Adhering to a risk management framework, logging any use cases and applications of AI, accounting for potential harms, and taking steps to mitigate them;
Deploying trust mechanisms that inform users if content is AI-generated. Transparency of use cases to the members and patients you serve is essential to maintaining trust as technology changes; and
Publicly sharing many of our AI use cases on this blog, as well as our internal AI governance process, as an example to industry and policymakers of how healthcare companies can leverage generative AI in a responsible, safe, and secure way.
Finally, Oscar will stay engaged with federal and state policymakers and with like-minded innovators on these evolving topics. Working together, we can ensure appropriate governance of AI so that it can be used responsibly to its full potential.