Abstract: Medical AI is gradually entering clinical application, with a broad and profound impact on medical decision-making. However, patients often remain in an unwitting, "dominated" position, which calls for a timely legal response specifying the doctor's informing obligation in the clinical application of medical AI. Examining the dimensions of the doctor, the medical AI, and the patient, the doctor's informing obligation should be triggered only when two conditions are met: first, the doctor fails to achieve "meaningful involvement", that is, the doctor neither possesses the corresponding supervisory competence nor takes appropriate supervisory measures; second, the auxiliary decision-making of the medical AI will have a significant impact on the patient, a question to be judged according to the specific scenario. To form a more concrete applicable standard, the trigger of the doctor's informing obligation should be connected with the classified management of medical AI: AI products not administered as medical devices, and medical AI administered as Class I devices, do not trigger the doctor's informing obligation; medical AI administered as Class II devices does not trigger the informing obligation in principle, although administrative and judicial institutions retain the necessary discretion; medical AI administered as Class III devices definitely triggers the doctor's informing obligation.
LI Run-sheng. On the Triggering Conditions and Connecting Mechanism of the Doctor's Informing Obligation in the Clinical Application of Medical AI[J]. Journal of Shenzhen University (Humanities & Social Sciences), 2023, 40(1): 92-100.