《電子技術(shù)應(yīng)用》
Unveiling the "Algorithmic Veil": Reflections on Building an Algorithmic Interpretation Framework
Cybersecurity and Data Governance, Issue 10
Liu Ye
(School of Law, Tongji University, Shanghai 200092, China)
Abstract: How to address the explainability of algorithms is an important legal issue in algorithm governance. Constrained by the ever-widening "explanation gap" between algorithm users and their audiences, the current dilemma of algorithmic explanation runs through the entire process from algorithm operation and decision formation to application, and manifests in three aspects: the imbalance of data identification, the insufficiency of proof, and the generalization of damage results. Given the differing explanation needs across application scenarios, building an algorithmic interpretation framework through systematic thinking may offer a breakthrough for solving the explainability problem. Taking the object of explanation as the logical starting point, explanation methods can be divided into three modes: targeted notification, public disclosure, and administrative filing. Based on the concept of "contextual justice", these modes can be applied to fields such as healthcare, information recommendation, and finance, with the degree and standard of explainability differentiated by business and scenario, so as to achieve algorithmic explainability.
CLC number: D922.14
Document code: A
DOI:10.19358/j.issn.2097-1788.2023.10.012
Citation format: Liu Ye. Unveiling the "algorithmic veil": reflections on building an algorithmic interpretation framework[J]. Cybersecurity and Data Governance, 2023, 42(10): 72-78.
Unveiling the "algorithmic veil": reflections on building an algorithmic interpretation framework
Liu Ye
(School of Law, Tongji University, Shanghai 200092, China)
Abstract: How to solve the problem of algorithmic explainability is an important legal issue in algorithm governance, constrained by the expanding "explanation gap" between algorithm users and their audiences. At this stage, the dilemma of algorithmic explainability exists throughout algorithm operation, decision-making formation, and application, and is embodied in three aspects: the imbalance of data identification, the insufficiency of proof, and the generalization of damage results. Considering the differences in interpretation requirements, explanation methods, and criteria across application fields, building an algorithmic interpretation framework with the help of systematic thinking may become a breakthrough in solving the explainability problem. Taking the object of explanation as the logical starting point, explanation methods are divided into three modes: targeted notification, public disclosure, and administrative reporting. These modes can be applied to the medical, information recommendation, financial, and other fields based on the concept of "scene justice", with the degree and criteria of explainability differentiated for different businesses and scenarios, so as to realize algorithmic explainability.
Key words: algorithm governance; automated decision-making; explainability; criteria for explainability

0     Introduction

In the field of artificial intelligence, the premise for algorithms to operate continuously and generate decisions is that they are explainable. Explainability is a safety feature that makes algorithms trustworthy, linking automated algorithmic decision-making with legal regulation. In recent years, as the complexity of algorithmic models has increased substantially, the "explanation gap" between algorithm users and the ultimate recipients of decisions has widened accordingly. For something like algorithms, which people cannot yet fully understand and grasp, legislators must weigh carefully whether to impose legal requirements or arrangements, out of concern for the predictability and acceptability of legal norms. How to explain algorithms effectively has therefore become a key link in solving the governance problem.

How to better explain algorithms themselves and their automated decisions, and especially how to help users effectively understand the basic principles and operating mechanisms of algorithms, still needs further refinement. Against this background, this article takes "scenario-based analysis" as its entry point, sorts out the practical dilemmas of algorithmic explainability in algorithm governance, and considers optimization schemes for specific scenarios.




For the full text of this article, please download: http://theprogrammingfactory.com/resource/share/2000005742





This content is original to the AET website; reproduction without authorization is prohibited.