House AI Task Force Report

Overview

On December 17, 2024, the House Artificial Intelligence Task Force released a report containing “guiding principles, forward-looking recommendations, and policy proposals to ensure America continues to lead the world in responsible AI innovation.” The more than 250-page report contains 66 key findings and 85 recommendations. Below are summaries of the privacy- and health-related provisions. Other sections, including those on Research, Development & Standards; Intellectual Property; Federal Preemption; and Open & Closed Systems, may also be of interest.

Data Privacy

Key Findings

  • AI has the potential to exacerbate privacy harms.

  • Americans have limited recourse for many privacy harms.

  • Federal privacy laws could potentially augment state laws.

Recommendations

  • Explore mechanisms to promote access to data in privacy-enhanced ways.

  • Ensure privacy laws are generally applicable and technology-neutral.

“Data is the new oil,” proclaimed British mathematician Clive Humby in 2006, and it is truer in the AI era than ever. AI requires “vast amounts of data” for training purposes and has the potential to analyze large amounts of data as well. Both of these activities can put privacy at risk.

There is no comprehensive federal data privacy and security law that addresses the potential harms from AI, which is why 19 states have enacted their own privacy laws. This patchwork of laws can create confusion for consumers and patients and regulatory complexity for industry.

Of interest: the Task Force did not recommend a federal data privacy law.

Healthcare

Key Findings    

  • AI's use in healthcare can potentially reduce administrative burdens and speed up drug development and clinical diagnosis. 

  • The lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing.

Recommendations

  • Encourage the practices needed to ensure AI in healthcare is safe, transparent, and effective.

  • Maintain robust support for healthcare research related to AI.

  • Create incentives and guidance to encourage risk management of AI technologies in healthcare across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes.

  • Support the development of standards for liability related to AI issues.

  • Support appropriate payment mechanisms without stifling innovation. 

AI is currently used in health settings for administrative processes, as part of medical devices, and to aid in the exploration of new compounds for the treatment of medical conditions. The latter two applications fall under FDA’s purview.

AI Use in Drug Development

Using AI may decrease the time it takes to develop, approve, and market a new drug; this may be particularly helpful for rare diseases, where development costs can outweigh potential remuneration. The number of drugs in development that incorporate at least some AI has increased; FDA’s white paper discussed this in depth.[1]

AI Use in Diagnostics

AI is used in diagnostics such as digital pathology,[2] mammography,[3] and electrocardiograms.[4] These applications are FDA-regulated devices.

AI Use in Clinical Decision Making

AI has been used in clinical decision support (CDS);[5] however, concerns have been raised about some of its uses. Sepsis prediction, in particular, has been thorny.[6],[7] CDS can be used to address health disparities, but it can also perpetuate them. AI embedded in electronic health records (EHRs) is generally not subject to FDA review.

Coverage/Payment

Payment for AI-enabled technologies is complicated. CMS pays for only a few AI applications and does not pay for back-end technologies that improve office procedures. Many payors are using AI to review and deny claims,[8] a practice California now restricts.[9] CMS also adopted a final rule restricting the use of AI for denials by Medicare Advantage plans.[10]

AI-enabled medical devices may be paid for by insurers, including CMS, the same as other devices; payors do not generally distinguish between an AI-enabled device and its traditional counterpart,[11] particularly when the services are paid for through a bundled payment mechanism.[12] However, there are a number of codes for AI medical services, such as AI-enabled diabetic retinopathy screening (CPT 92229).[13]

AI Data

As discussed above, AI requires large data sets for training; these can be difficult to find and leverage, particularly given the diversity of the US population and issues with data interoperability. De-identifying protected health information (PHI) for privacy reasons may result in incomplete or biased data sets. Once data is turned over to a third party, such as an app developer, the patient may no longer have any control over or ownership of the data; for example, if the third party is acquired, the data passes to the new parent company.[14]

Concerns

  • Limitations in data sets can lead to bias.

  • The data required are usually protected health information[15] or other sensitive medical data.

  • Liability regarding the use of CDS remains an open question.

Of interest: With respect to all FDA regulated AI, the report recommends that “Congress should explore whether the current laws and regulations need to be enhanced to help the FDA’s post-market evaluation process ensure that AI technologies in healthcare are continually and sufficiently monitored for safety, efficacy, and reliability.”

Next Steps

There are no clear next steps associated with the report. Stakeholders should engage Task Force members and relevant Committee members regarding potential legislative solutions to the problems raised.

 


[1] Resources regarding FDA’s work on AI in drug development can be found here: https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/artificial-intelligence-drug-development

[2] See, for example, https://www.nature.com/articles/s41746-024-01106-8

[3] In a Swedish study, AI-enabled mammography was as accurate as standard readings and faster, available at https://www.thelancet.com/journals/lanonc/article/PIIS1470-2045(23)00298-X/abstract

[4] One model showed an improvement in diagnoses based on AI review of ECGs, available at https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00172-9/fulltext

[5] FDA’s CDS Guidance is available at https://www.fda.gov/media/109618/download.

[6] Bhargava, A., et al., FDA-Authorized AI/ML Tool for Sepsis Prediction: Development and Validation, NEJM AI (2024), available at https://ai.nejm.org/doi/full/10.1056/AIoa2400867

[7] See, for example, STAT News’s investigation of Epic’s sepsis algorithm at https://www.statnews.com/2022/10/24/epic-overhaul-of-a-flawed-algorithm/

[8] See, for example, STAT News’s article at https://www.statnews.com/2024/12/12/artificial-intelligence-appealing-health-insurance-denials/ and an overview of the use of AI in insurance denials at https://jamanetwork.com/journals/jama-health-forum/fullarticle/2816204.

[9] California’s Physicians Make Decisions Act (SB 1120), which requires that medical necessity determinations be made by a licensed physician rather than by AI alone, takes effect January 1, 2025.

[10] “MA organizations must ensure that they are making medical necessity determinations based on the circumstances of the specific individual, as outlined at § 422.101(c), as opposed to using an algorithm or software that doesn't account for an individual's circumstances.” See the full paragraph at https://www.federalregister.gov/d/2023-07115/p-789

[11] For a deep dive into CMS payment for software, more generally, see MedPAC’s Report to Congress on Medicare and the Healthcare Delivery System from June 2024 at https://www.medpac.gov/wp-content/uploads/2024/06/Jun24_Ch4_MedPAC_Report_To_Congress_SEC.pdf.

[12] Procedures may be classified into diagnosis-related groups (DRGs), which determine payment. See more at https://www.cms.gov/medicare/payment/prospective-payment-systems/acute-inpatient-pps/ms-drg-classifications-and-software

[13] “Seven Category III codes have been established for AI augmentative data analysis involved in electrocardiogram measurements (0902T and 0932T), medical chest imaging (0877T-0880T), and image-guided prostate biopsy (0898T).” American Medical Association, AMA releases CPT 2025 code set, Sep. 10, 2024, available at https://www.ama-assn.org/press-center/press-releases/ama-releases-cpt-2025-code-set

[14] The report cites the case of DeepMind’s acquisition by Google and the questions of data ownership it raised. However, Google prevailed in the litigation over alleged misuse of patient data. See Prismall v. Google UK (Case CA-2023-001263); the most recent decision, from December 2024, is here: https://www.judiciary.uk/wp-content/uploads/2024/12/Prismall-v-Google-UK-Ltd-Approved-judgment-11.12.24.pdf

[15] See HHS definition of PHI at https://www.hhs.gov/answers/hipaa/what-is-phi/index.html
