Journal Article Details
Frontiers in Computer Science
Informing the ethical review of human subjects research utilizing artificial intelligence
Computer Science
Gil Alterovitz [1], Christos Andreas Makridis [1], Michael Kim [1], Joshua Mueller [1], Rafael Fricks [1], Anthony Boese [1], Isabel J. Hildebrandt [2], Molly Klote [3], Don Workman [3]
[1] United States Department of Veterans Affairs, National Artificial Intelligence Institute, Washington, DC, United States; [2] United States Department of Veterans Affairs, VA Long Beach Healthcare System, Veterans Health Administration, Long Beach, CA, United States; [3] United States Department of Veterans Affairs, Veterans Health Administration, Washington, DC, United States
Keywords: artificial intelligence; ethics; human subjects; institutional review board; research and development; trustworthy AI
DOI: 10.3389/fcomp.2023.1235226
Received: 2023-06-05; Accepted: 2023-08-21; Published: 2023
Source: Frontiers
【 Abstract 】

Introduction: The rapid expansion of artificial intelligence (AI) has produced many opportunities, but also new risks that must be actively managed, particularly in the health care sector and in clinical practice, to avoid unintended health, economic, and social consequences.

Methods: Given that much of the research and development (R&D) involving human subjects is reviewed and rigorously monitored by institutional review boards (IRBs), we argue that supplemental questions added to the IRB process are an efficient risk mitigation technique available for immediate use. To facilitate this, we introduce AI supplemental questions that provide a feasible, low-disruption mechanism for IRBs to elicit the information necessary to inform the review of AI proposals. These questions are also relevant to the review of research using AI that is exempt from the requirement of IRB review. We pilot the questions within the Department of Veterans Affairs, the nation's largest integrated healthcare system, and demonstrate their efficacy in risk mitigation by providing vital information in a way accessible to non-AI subject matter experts responsible for reviewing IRB proposals. We provide these questions for other organizations to adapt to their needs and are further developing them into an AI IRB module with an extended application, review checklist, informed consent, and other informational materials.

Results: We find that the supplemental AI IRB module further streamlines and expedites the review of IRB projects. We also find that the module has a positive effect on reviewers' attitudes and on the ease of assessing the potential alignment and risks associated with proposed projects.

Discussion: As projects increasingly contain an AI component, streamlining their review and assessment is important to avoid placing too large a burden on IRBs in their review of submissions. In addition, establishing a minimum standard that submissions must adhere to will help ensure that all projects are at least aware of potential risks unique to AI and can discuss them with their local IRBs. Further work is needed to apply these concepts to other non-IRB pathways, such as quality improvement projects.
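The article itself specifies the supplemental questions and the AI IRB module; they are not reproduced in this record. As a purely illustrative sketch of how supplemental AI questions could be attached to an IRB submission as a machine-readable checklist, the following Python example uses hypothetical question identifiers and wording that are assumptions, not the questions published in the article:

```python
from dataclasses import dataclass, field

# Hypothetical supplemental AI questions for an IRB submission.
# Identifiers and wording are illustrative placeholders, not the
# questions published in the article.
SUPPLEMENTAL_AI_QUESTIONS = {
    "ai_component": "Does the project develop or deploy an AI model?",
    "data_provenance": "What data sources are used to train or validate the model?",
    "human_oversight": "How are model outputs reviewed before they affect participants?",
    "risk_mitigation": "What steps address bias, privacy, and safety risks unique to AI?",
}

@dataclass
class IRBSubmission:
    """A minimal IRB submission record with supplemental AI answers."""
    title: str
    answers: dict = field(default_factory=dict)

    def missing_answers(self):
        # Return question IDs the submitter has not yet answered, so a
        # reviewer can flag incomplete supplemental sections.
        return [q for q in SUPPLEMENTAL_AI_QUESTIONS if not self.answers.get(q)]

if __name__ == "__main__":
    submission = IRBSubmission(
        title="Pilot study using a predictive model on clinical notes",
        answers={"ai_component": "Yes, a supervised classifier is deployed."},
    )
    print("Unanswered supplemental AI questions:", submission.missing_answers())
```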

【 License 】

Unknown   
Copyright This work is authored by Christos Andreas Makridis, Anthony Boese, Rafael Fricks, Don Workman, Molly Klote, Joshua Mueller, Isabel J. Hildebrandt, Michael Kim and Gil Alterovitz on behalf of the U.S. Government and as regards Dr. Makridis, Dr. Boese, Dr. Fricks, Dr. Workman, Dr. Klote, Dr. Mueller, Dr. Hildebrandt, Dr. Kim, Dr. Alterovitz, and the U.S. Government, is not subject to copyright protection in the United States. Foreign and other copyrights may apply.

【 Preview 】
Attachment list
File | Size | Format
RO202310129381806ZK.pdf | 184KB | PDF