《電子技術(shù)應(yīng)用》
Challenges and Governance Paths of Generative Artificial Intelligence for Personal Information Protection
Cyber Security and Data Governance
Wan Meixiu
Law School, Nanchang University
Abstract: Generative artificial intelligence technology, represented by ChatGPT, has brought disruptive change to virtually every industry, but it has also triggered personal information infringement crises such as personal information leakage, algorithmic bias, and the spread of false information. The traditional rights-based protection path overemphasizes personal information protection and thereby hinders the development of the AI industry, whereas the risk-prevention path better highlights the value of the reasonable use of personal information and is the superior value choice. Only by combining rights protection and risk prevention in joint governance, however, can a balance of interests be achieved and a long-term protection mechanism for personal information be established. With respect to personal information processing rules, a "weak consent" rule should replace the rigid and strict informed-consent rule; with respect to the purpose limitation principle, "risk limitation" should replace "purpose limitation"; with respect to the data minimization principle, "risk minimization" should replace "purpose minimization". On this basis, compliance supervision of generative AI data sources should be further strengthened, algorithmic transparency and explainability improved, and ethical norms for science and technology, together with the pursuit of tort liability, reinforced.
CLC classification: D913; TP399    Document code: A    DOI: 10.19358/j.issn.2097-1788.2024.04.009
Citation: Wan Meixiu. Challenges and governance paths of generative artificial intelligence for personal information protection[J]. Cyber Security and Data Governance, 2024, 43(4): 53-60.
Keywords: generative AI; ChatGPT; personal information protection; governance path

Introduction

Generative artificial intelligence, represented by ChatGPT, has set off the fourth global technological revolution and become a new engine of global economic growth [1]. However, as a new generation of AI technology, generative AI, even as it iterates rapidly and transforms relations of production, also brings numerous legal risks for personal information protection. The operation of generative AI rests on the personal information of vast numbers of users; every stage, from input through model training and model optimization to output, depends on the use of personal information. Against the background of large-scale data processing and an opaque algorithmic black box, generative AI gives rise to problems such as the unlawful collection of personal information, the generation of false and harmful information, and algorithmic bias and discrimination. Regulators worldwide have taken notice: the governments of the United States, France, Italy, Spain, Canada, and other countries have announced investigations into ChatGPT and issued corresponding regulatory rules. On July 10, 2023, the Cyberspace Administration of China and six other departments jointly issued the Interim Measures for the Administration of Generative Artificial Intelligence Services (the "Interim Measures"), which set out concrete measures to promote the development of generative AI technology and respond positively and forcefully to the need to both support and regulate it. It should be noted, however, that on personal information protection the Interim Measures merely invoke the relevant provisions of the Personal Information Protection Law in Articles 4, 7, 9, 11, and 19; they contain no dedicated provisions addressing the new problems of personal information infringement arising from the use of generative AI, and continued reliance on the Personal Information Protection Law faces numerous difficulties of application.


To read the full text of this article, please download:

http://theprogrammingfactory.com/resource/share/2000005969


Author information:

Wan Meixiu

(Law School, Nanchang University, Nanchang 330031, Jiangxi, China)



This content is original to the AET website; reproduction without authorization is prohibited.