
Eye on AI – Artificial Intelligence Regulation Within Health Data Analytics

June 12, 2023

Raul A. Tabora, Jr. - Bond, Schoeneck & King

With the onset of robotic technology and machine learning, which are starting to influence many areas of our lives, the Office of the National Coordinator for Health Information Technology (ONC) is now focusing on the need to regulate the mechanics of so-called “Artificial Intelligence” (AI), which seeks to transform the healthcare sector. This regulatory issuance is designed to provide some level of comfort to consumers and a framework of compliance for providers and technology companies involved in data analytics and support. On a deeper level, it is an initial wake-up call for all those involved in data exchange and data analytics, as well as those who seek better care outcomes from aggregated predictive modeling for health care interventions.

Here are some highlights of the Notice of Proposed Rulemaking (NPRM); this alert is the first of several planned installments intended to ensure that our clients are apprised of these developments. To be sure, the NPRM is hundreds of pages long and dense with detail; however, it begins to regulate machine learning technology (i.e., self-taught computers and software) and establishes a structure for future areas of AI use.

First, the ONC outlines prior federal memoranda and guidance regarding AI, dating back to 2020:

“In November of 2020, the Office of Management and Budget released a Memorandum for the Heads of Executive Departments and Agencies on Guidance for Regulation of Artificial Intelligence Applications, which directed that “[w]hen considering regulations or policies related to AI applications, agencies should continue to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property.” [70] This was followed by an executive order in December of 2020: E.O. 13960 Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.[71] The executive order stated: “The ongoing adoption and acceptance of AI will depend significantly on public trust. Agencies must therefore design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, [and] civil liberties[.]” (85 FR 78939).

In June of 2021, the Government Accountability Office (GAO) published Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities, which specifically outlined key principles and actions “[t]o help entities promote accountability and responsible use of AI systems.” This included outlining four principles for the framework, including governance, data, performance, and monitoring.[72]

In September of 2022, the Biden-Harris Administration published Principles for Enhancing Competition and Tech Platform Accountability, which included a principle related to stopping discriminatory algorithmic decision-making.[73] In October of 2022, the Biden-Harris Administration published a Blueprint for an AI Bill of Rights, which outlines five principles, informed by public input, that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles are safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.[74]

Finally, in February of 2023, E.O. 14091: Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government was issued (88 FR 10825–10833).[75] E.O. 14091 of Feb. 16, 2023, builds upon previous equity-related E.O.s, including E.O. 13985.[76] Section 1 of E.O. 14091 requires the Federal Government to “promote equity in science and root out bias in the design and use of new technologies, such as artificial intelligence.” Section 8, subsection (f) of E.O. 14091 requires agencies to consider opportunities to “prevent and remedy discrimination, including by protecting the public from algorithmic discrimination.”

Second, the proposed rule points to a growing body of “peer-reviewed” evidence regarding AI development:

A growing body of peer-reviewed evidence, technical and socio-technical expert analyses, and government activities and reports [77] focus on ensuring that the promise of AI and machine learning (ML) can equitably accelerate advancements in healthcare to improve the health and well-being of the American public. We are therefore proposing to incorporate new requirements into the ONC Health IT Certification Program for Health IT Modules that support AI and ML technology. These requirements align with the Federal Government's efforts to promote trustworthy AI and the Department's stated policies on advancing equity in the delivery of health and human services.[78]

We believe that the continued evolution of decision support software, especially as it relates to AI- and ML-driven predictive DSIs, necessitates new requirements for the Program's CDS criterion. These include proposed requirements for new sets of information that are necessary to guide decision-making based on recommendations (outputs) from predictive DSIs, such as an expanded set of “source attributes” and information related to how intervention risk is managed by developers of certified health IT with Health IT Modules that enable or interface with predictive DSIs. We believe that these new sets of information would provide appropriate information to help guide decisions at the time and place of care, consistent with 42 U.S.C. 300jj–11(b)(4).
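
To make the “source attributes” concept concrete, here is a minimal sketch, in Python, of the kind of plain-language metadata a predictive DSI might surface to users at the point of care. The field names and example values are illustrative assumptions only; the authoritative set of source attributes is defined in the proposed certification criterion at 45 CFR § 170.315(b)(11).

```python
# Illustrative sketch only: the field names below are assumptions chosen for
# readability, not the rule's authoritative list of source attributes
# (see proposed 45 CFR 170.315(b)(11)).
from dataclasses import dataclass


@dataclass
class SourceAttributes:
    """Plain-language metadata describing a predictive DSI's output."""
    intervention_name: str
    developer: str
    intended_use: str                  # clinical purpose of the intervention
    intended_patient_population: str   # who the model was developed for
    input_features: list[str]          # data elements the model consumes
    output_description: str            # what the recommendation represents
    development_data: str              # provenance of the training data
    fairness_assessment: str           # how bias and fairness were evaluated


# A hypothetical record for a fictional intervention:
example = SourceAttributes(
    intervention_name="Inpatient sepsis risk score",
    developer="Example Health IT Vendor (hypothetical)",
    intended_use="Early warning of sepsis for hospitalized adults",
    intended_patient_population="Admitted patients age 18 and older",
    input_features=["heart rate", "temperature", "lactate", "WBC count"],
    output_description="Estimated probability of sepsis within 6 hours",
    development_data="Retrospective EHR records, 2015-2020 (hypothetical)",
    fairness_assessment="Performance stratified by race, sex, and age",
)
```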

These passages lay the groundwork for regulation of systems classified as “algorithms” and “predictive models” of machine learning technology, otherwise known as “artificial intelligence.” The Proposed Rule then outlines definitive regulations for technology involving “decision support interventions” (DSIs), imposing certification standards and “attest” requirements along with risk analysis, risk mitigation, and risk governance:

Artificial Intelligence, Algorithms, and Predictive Models in Healthcare 
We consider AI to encompass a broad and varied set of technologies that generally incorporate algorithms or statistical models. Early examples of AI in healthcare, sometimes referred to as “expert systems,” were based on codified expert knowledge, logic models, and deterministic rules to recommend treatment for individuals, and systems of this type are widely used today to provide clinical decision support (CDS).[79] *** The current and potential applications of AI to healthcare are vast, ranging from interpretation of medical imaging; efficient allocation of scarce healthcare resources; improved diagnostic and prognostic accuracy; and reduced clinician burden and subsequent burnout.[83] *** We propose the certification criterion, “decision support interventions (DSI)” in § 170.315(b)(11). The DSI criterion is a revised certification criterion as it serves as both an iterative and replacement criterion for the “clinical decision support (CDS)” criterion in § 170.315(a)(9). We believe that the continued evolution of decision support software, especially as it relates to AI- and ML-driven predictive models, necessitates new requirements and a new name for the Program's CDS criterion. We propose to revise the name of the CDS criterion to “decision support interventions” to reflect the various and expanding forms of decision support that certified Health IT Modules enable or interface with.

***

We propose in § 170.315(b)(11)(vii) to require developers of certified health IT with Health IT Modules certified to § 170.315(b)(11) that enable or interface with predictive DSIs (i.e., developers that attest “Yes” in § 170.315(b)(11)(v)(A) for one or more modules) to employ or engage in and document information regarding their intervention risk management (IRM) practices. These practices are listed in proposed § 170.315(b)(11)(vii)(A)(1) through (3). We propose three categories of IRM practices, including “risk analysis,” in § 170.315(b)(11)(vii)(A)(1), “risk mitigation,” in § 170.315(b)(11)(vii)(A)(2), and “governance,” in § 170.315(b)(11)(vii)(A)(3) for each predictive DSI, as defined in § 170.102, they enable or interface with.
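
As a rough illustration of the proposed structure, the sketch below assumes a developer records its IRM documentation as a simple mapping keyed by the three proposed practice categories. The completeness check and example contents are hypothetical conventions, not anything the rule prescribes.

```python
# Illustrative sketch only: the three required practice categories come from
# proposed 45 CFR 170.315(b)(11)(vii)(A)(1)-(3); the mapping format and the
# completeness check are hypothetical conventions for illustration.

REQUIRED_IRM_PRACTICES = ("risk_analysis", "risk_mitigation", "governance")


def irm_documentation_complete(irm_docs: dict[str, str]) -> bool:
    """Return True if every required IRM practice category is documented
    (non-empty) for a predictive DSI before the developer attests "Yes"."""
    return all(irm_docs.get(p, "").strip() for p in REQUIRED_IRM_PRACTICES)


# Hypothetical documentation for one predictive DSI:
sepsis_score_irm = {
    "risk_analysis": "Risks to validity, reliability, fairness, and safety assessed.",
    "risk_mitigation": "Subgroup recalibration schedule; clinician override workflow.",
    "governance": "Model review board approves all data and model changes.",
}

assert irm_documentation_complete(sepsis_score_irm)
```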

Our health and long-term care practices will publish follow-up alerts as these changes affect clients. As an example of more in-depth coverage to come, the ONC has proposed to revise the requirement in 45 CFR § 170.315(g)(10)(vi) to specify that a Health IT Module's authorization server must be able to revoke, and must revoke, an authorized application's access at a patient's direction within 1 hour of the request. This will have far-reaching effects on care collaboration functions, which must develop a seamless system of compliance with community-based consent standards. It is, however, in keeping with public policy on patient choice with regard to health care records. As noted in the proposed rule: “Patients want to know if AI is being used in their care, and understand how and why it is being used in their care.[185] We understand an emerging trend is for health care providers to inform patients about the use of these technologies, including predictive DSIs, in making decisions about their care.[186] We support patients being informed about technologies that directly affect individuals or their health information and understand transparency can increase public trust and confidence in technology.”
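
On the revocation point specifically, the mechanics might resemble the following minimal sketch, which assumes an in-memory grant store on a SMART on FHIR-style (OAuth 2.0) authorization server of the kind certified to § 170.315(g)(10). All names and structures here are hypothetical illustrations of the proposed one-hour window, not a reference implementation.

```python
# Illustrative sketch only: a hypothetical in-memory model of patient-directed
# revocation under the proposed revision to 45 CFR 170.315(g)(10)(vi). Real
# (g)(10) modules use SMART on FHIR / OAuth 2.0 token infrastructure.
from datetime import datetime, timedelta, timezone

ONE_HOUR = timedelta(hours=1)  # the proposed outer bound for revocation

# Hypothetical grant store: app_id -> set of patient ids with active grants
active_grants: dict[str, set[str]] = {"wellness-app": {"patient-42"}}

# Queue of (patient_id, app_id, requested_at) revocation requests
revocation_queue: list[tuple[str, str, datetime]] = []


def request_revocation(patient_id: str, app_id: str) -> None:
    """Record a patient's direction to revoke an application's access."""
    revocation_queue.append((patient_id, app_id, datetime.now(timezone.utc)))


def process_revocations() -> None:
    """Revoke queued grants; under the proposal, each must complete within
    one hour of the patient's request (tokens would also be invalidated)."""
    now = datetime.now(timezone.utc)
    while revocation_queue:
        patient_id, app_id, requested_at = revocation_queue.pop(0)
        assert now - requested_at <= ONE_HOUR, "proposed 1-hour window missed"
        active_grants.get(app_id, set()).discard(patient_id)


request_revocation("patient-42", "wellness-app")
process_revocations()
assert "patient-42" not in active_grants["wellness-app"]
```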

We will continue to supplement our coverage of the ongoing regulation of AI as it becomes further embedded within governmental and health payor criteria. Should you have questions or be seeking assistance, please contact Raul A. Tabora, Jr., or any of the Bond attorneys with whom you regularly work.
