Chinese General Practice ›› 2023, Vol. 26 ›› Issue (19): 2423-2427. DOI: 10.12114/j.issn.1007-9572.2023.0007

• Research on Digital-Intelligent Healthcare and Informatization •


  • Author contributions: CHEN Long drafted and revised the manuscript; ZENG Kai, LI Sha, TAO Lu, LIANG Wei, and WANG Haocen contributed to manuscript revision; YANG Rumei was responsible for topic selection, revision, and quality control; all authors read and approved the final manuscript.
  • Funding:
    National Natural Science Foundation of China (72004098, 72204117); General Program of Philosophy and Social Science Research in Jiangsu Higher Education Institutions (2020SJA0302); Nanjing Medical University High-level Talent Introduction Project (NMUR2020006); Nanjing Medical University Graduate Quality Educational Resources Construction Project (2021F005); Priority Academic Program Development of Jiangsu Higher Education Institutions, "Nursing" (苏政办发〔2018〕87号); "14th Five-Year Plan" Higher Education Scientific Research Planning Project (苏高教会〔2021〕16号YB009); Nanjing Medical University Connotation Construction Special Fund for the Priority Discipline of Nursing

Causes and Countermeasures of Artificial Intelligence Algorithmic Bias and Health Inequity

CHEN Long1, ZENG Kai2, LI Sha1, TAO Lu1, LIANG Wei1, WANG Haocen3, YANG Rumei1,*

  1. School of Nursing, Nanjing Medical University, Nanjing 211166, China
    2. School of Nursing, Southern Medical University, Guangzhou 510515, China
    3. School of Nursing, Purdue University, Indiana 47907, USA
  • Received: 2023-01-05; Revised: 2023-03-23; Published: 2023-07-05; Online: 2023-03-30
  • Contact: YANG Rumei


Abstract:

With the development of information technology, artificial intelligence shows great potential for clinical diagnosis and treatment. Nevertheless, algorithmic bias in artificial intelligence systems can lead to problems such as the unequal distribution of healthcare resources, which seriously undermines patients' health equity. Algorithmic bias is a technical manifestation of human bias; its formation is closely tied to the development process of artificial intelligence, arising mainly at three stages: data collection, model training and optimization, and output application. Healthcare providers, as direct participants in patient care, should take corresponding measures to prevent algorithmic bias and thereby avoid the health equity issues it causes. It is important for healthcare providers to ensure that health data are authentic and unbiased, to optimize the fairness of artificial intelligence models, and to enhance the transparency of their output and application. In addition, healthcare providers need to consider how to address bias-related health inequities in clinical practice, so as to comprehensively safeguard patients' health equity. In this study, we review the causes of and coping strategies for algorithmic bias in healthcare, with the aim of improving healthcare providers' awareness of and ability to identify and address algorithmic bias, and of providing a reference for safeguarding patients' health equity in the information age.
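The abstract's call for healthcare providers to audit model outputs for group-level bias can be illustrated with a minimal sketch. All function names and toy data below are hypothetical and not from the article; the demographic-parity gap shown here is one common fairness metric among several:

```python
# Hypothetical audit sketch: compare a model's positive-prediction rates
# across two patient groups (demographic parity), using fabricated toy data.

def selection_rate(preds, groups, g):
    """Fraction of members of group g receiving a positive prediction (1)."""
    group_preds = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(group_preds) / len(group_preds)

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    g0, g1 = sorted(set(groups))
    return abs(selection_rate(preds, groups, g0) - selection_rate(preds, groups, g1))

# Toy data: 1 = model recommends extra care resources, 0 = it does not.
groups = ["A"] * 5 + ["B"] * 5
preds  = [1, 1, 1, 1, 0,   1, 0, 0, 0, 0]  # group A: 80% positive, group B: 20%

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap = {gap:.2f}")  # prints 0.60: a large gap, flag for review
```

A large gap does not by itself prove bias, but it is a cheap screening signal: in practice one would also examine error rates conditional on true outcomes (e.g., equalized odds) before concluding that the model allocates resources unfairly.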

Key words: Artificial intelligence, Algorithmic bias, Health equity, Human bias, Informatization