《电子技术应用》 (Application of Electronic Technique)
CLC number: TP391    Document code: A    DOI: 10.16157/j.issn.0258-7998.244868
Chinese citation format: 秦海涛, 线岩团, 相艳, 等. 基于伪触发词的并行预测篇章级事件抽取方法[J]. 电子技术应用, 2024, 50(4): 67-74.
English citation format: Qin Haitao, Xian Yantuan, Xiang Yan, et al. Parallel prediction of document-level event extraction method via pseudo trigger words[J]. Application of Electronic Technique, 2024, 50(4): 67-74.
Parallel prediction of document-level event extraction method via pseudo trigger words
Qin Haitao1,2, Xian Yantuan1,2, Xiang Yan1,2, Huang Yuxin1,2
1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology; 2. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology
Abstract: Document-level event extraction is generally divided into three subtasks: candidate entity recognition, event detection, and argument recognition, which are conventionally performed one after another in a cascade, leading to error propagation. In addition, most existing models predict the number of events only implicitly during decoding, and predict event arguments in a predefined event order and role order, so that events extracted earlier cannot take later-extracted events into account. To address these issues, this paper proposes a multi-task joint framework for parallel event extraction. First, a pre-trained language model is used as the encoder for document sentences; on this basis, the framework detects the event types present in the document, uses a structured self-attention mechanism to obtain pseudo-trigger-word features, and predicts the number of events of each type. The pseudo-trigger-word features then interact with candidate-argument features, and the arguments of every event are predicted in parallel, which greatly reduces model training time while achieving better performance than the baseline models. The final F1 score for event extraction is 78%, with F1 scores of 98.7% for the event type detection subtask, 90.1% for the event number prediction subtask, and 90.3% for the entity recognition subtask.
Key words: document-level event extraction; multi-task joint; pre-trained language model; structured self-attention mechanism; parallel prediction
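The structured self-attention pooling mentioned in the abstract, which condenses token encodings into a fixed number of pseudo-trigger vectors, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all names, weight matrices, and dimensions here are assumptions, and in the paper the token encodings would come from a pre-trained language model rather than random data.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_trigger_features(H, W1, W2):
    """Pool token encodings H (n_tokens, d) into r pseudo-trigger vectors.

    A = softmax(W2 @ tanh(W1 @ H^T))  -> (r, n_tokens) attention weights
    M = A @ H                         -> (r, d) pooled pseudo-trigger features
    """
    A = softmax(W2 @ np.tanh(W1 @ H.T), axis=-1)
    return A @ H, A

# Illustrative dimensions: 12 tokens, hidden size 16, attention size 8, 4 heads.
rng = np.random.default_rng(0)
H = rng.normal(size=(12, 16))    # stand-in for encoder (PLM) outputs
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(4, 8))
M, A = pseudo_trigger_features(H, W1, W2)
print(M.shape, A.shape)  # (4, 16) (4, 12)
```

Each of the r attention heads yields one pooled vector, so the number of heads bounds how many event instances per type such features can represent; downstream, these pooled vectors would interact with candidate-argument features for parallel argument prediction.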

Introduction

The Internet has grown rapidly in recent years, and online media generate vast amounts of information every day. Event extraction, a branch of information extraction, derives structured information from such unstructured text[1], helping people analyze and make decisions quickly and effectively. It is an important research task in natural language processing, with wide applications in intelligent question answering, information retrieval, automatic summarization, and recommendation.

By text granularity, event extraction can be divided into sentence-level event extraction[2-6] and document-level event extraction[7-18]. Sentence-level event extraction usually first identifies trigger words in a sentence[1-2] to detect event types, and then extracts the corresponding event arguments (elements). Li et al.[4] and Nguyen et al.[5] instead adopted joint models that capture the semantic relations between entities and events and identify events and entities simultaneously, improving the accuracy of event extraction. However, as the volume of text grows, trigger-word-based sentence-level approaches are no longer adequate, and because document-level information is more prevalent in everyday use, document-level event extraction has attracted increasing attention.


For the full text of this article, please download:

http://theprogrammingfactory.com/resource/share/2000005951


Author information:

Qin Haitao1,2, Xian Yantuan1,2, Xiang Yan1,2, Huang Yuxin1,2

(1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, Yunnan, China;

2. Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, Yunnan, China)



This content is original to the AET website. Reproduction without authorization is prohibited.