Medical AI could be ‘dangerous’ for poorer nations, WHO warns
The rapid growth of generative AI in health care has prompted the agency to set out guidelines for ethical use.
David Adam / 18 January 2024 / News / Nature
A technician uses an artificial-intelligence-based method to screen a sample for cervical cancer.
Credit: AFP via Getty
The introduction of health-care technologies based on artificial intelligence (AI) could be “dangerous” for people in lower-income countries, the World Health Organization (WHO) has warned.
The organization, which today issued a report describing new guidelines on large multi-modal models (LMMs), says it is essential that uses of the developing technology are not shaped only by technology companies and those in wealthy countries. If models aren’t trained on data from people in under-resourced places, those populations might be poorly served by the algorithms, the agency says.
“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” Alain Labrique, the WHO’s director for digital health and innovation, said at a media briefing today.
Overtaken by events
The WHO issued its first guidelines on AI in health care in 2021. But the organization was prompted to update them less than three years later by the rise in the power and availability of LMMs. Also called generative AI, these models, including the one that powers the popular ChatGPT chatbot, process and produce text, videos and images.
LMMs have been “adopted faster than any consumer application in history”, the WHO says. Health care is a popular target. Models can produce clinical notes, fill in forms and help doctors to diagnose and treat patients. Several companies and health-care providers are developing specific AI tools.
The WHO says its guidelines, issued as advice to member states, are intended to ensure that the explosive growth of LMMs promotes and protects public health, rather than undermining it. In the worst-case scenario, the organization warns of a global “race to the bottom”, in which companies seek to be the first to release applications, even if they don’t work and are unsafe. It even raises the prospect of “model collapse”, a disinformation cycle in which LMMs trained on inaccurate or false information pollute public sources of information, such as the Internet.
“Generative AI technologies have the potential to improve health care, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” said Jeremy Farrar, the WHO’s chief scientist.
Operation of these powerful tools must not be left to tech companies alone, the agency warns. “Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies,” said Labrique. And civil-society groups and people receiving health care must contribute to all stages of LMM development and deployment, including their oversight and regulation.
Crowding out academia
In its report, the WHO warns of the potential for “industrial capture” of LMM development, given the high cost of training, deploying and maintaining these programs. There is already compelling evidence that the largest companies are crowding out both universities and governments in AI research, the report says, with “unprecedented” numbers of doctoral students and faculty leaving academia for industry.
The guidelines recommend that independent third parties perform and publish mandatory post-release audits of LMMs that are deployed on a large scale. Such audits should assess how well a tool protects both data and human rights, the WHO adds.
It also suggests that software developers and programmers who work on LMMs that could be used in health care or scientific research should receive the same kinds of ethics training as medics. And it says governments could require developers to register early algorithms, to encourage the publication of negative results and prevent publication bias and hype.
doi: https://doi.org/10.1038/d41586-024-00161-1