
Paper Reading Notes: Reasoning on Knowledge Graphs with Debate Dynamics (AAAI 2020)

This is a very interesting piece of work; it feels refreshing after reading it.

Paper title: Reasoning on Knowledge Graphs with Debate Dynamics

Published at AAAI 2020

Motivation

Many knowledge graph machine learning methods embed entities and relations, compute a confidence score for each triple, and learn the embeddings by maximizing the scores of positive triples. However, it is hard to explain which part of the input actually drives the final score. The model in this paper consists of three modules: two agents and a judge. For a query triple $q = (s_q, p_q, o_q)$, the two agents search for evidence chains arguing that the triple is true or false, respectively, and the judge aggregates all the evidence to produce the final result. (This may sound like a GAN, but reading on you will find it is not.)

Agent Module

  定義狀态States:   記 e t ( i ) e_t^{(i)} et(i)​為第i個agent在t時刻查詢的位置,則目前的狀态可記為: S t ( i ) = ( e t ( i ) , q ) ∈ S = E 2 × R × E S_{t}^{(i)}=\left(e_{t}^{(i)}, q\right) \in \mathcal{S} = \mathcal{E}^{2} \times \mathcal{R} \times \mathcal{E} St(i)​=(et(i)​,q)∈S=E2×R×E

Defining the actions: from state $S_t^{(i)} = \left(e_t^{(i)}, q\right)$, the set of admissible actions consists of all edges leaving $e_t^{(i)}$ (i.e., its neighborhood), denoted $\mathcal{A}_{S_t^{(i)}}$:

$$\mathcal{A}_{S_t^{(i)}} = \left\{(r, e) \in \mathcal{R} \times \mathcal{E} : S_t^{(i)} = \left(e_t^{(i)}, q\right) \wedge \left(e_t^{(i)}, r, e\right) \in \mathcal{KG}\right\}$$

Defining the transitions: if action $A_t^{(i)} = \left(r, e_{t+1}^{(i)}\right)$ is chosen in state $S_t^{(i)} = \left(e_t^{(i)}, q\right)$, the transition is:

$$\delta_t^{(i)}\left(S_t^{(i)}, A_t^{(i)}\right) := \left(e_{t+1}^{(i)}, q\right)$$
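To make the state/action/transition definitions concrete, here is a minimal environment sketch in Python. It assumes the KG is given as a set of $(s, r, o)$ triples and that both agents start the search at the subject entity $s_q$; the class and method names are illustrative, not the authors' code.

```python
from collections import defaultdict

class DebateEnvironment:
    def __init__(self, triples):
        # Adjacency index: entity -> list of outgoing (relation, neighbor) pairs.
        self.neighbors = defaultdict(list)
        for s, r, o in triples:
            self.neighbors[s].append((r, o))

    def reset(self, query):
        # A state is the pair (current entity, query); start at s_q (assumption).
        s_q, p_q, o_q = query
        self.query = query
        self.current = s_q
        return (self.current, self.query)

    def action_space(self):
        # A_{S_t}: all (r, e) such that (e_t, r, e) is a triple in the KG.
        return self.neighbors[self.current]

    def step(self, action):
        # Transition delta_t: move to the chosen neighbor; the query stays fixed.
        r, e_next = action
        self.current = e_next
        return (self.current, self.query)
```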

  将采取過的行動合并在一起得到曆史路徑: H t ( i ) = ( H t − 1 ( i ) , A t − 1 ( i ) ) H_{t}^{(i)}=\left(H_{t-1}^{(i)}, A_{t-1}^{(i)}\right) Ht(i)​=(Ht−1(i)​,At−1(i)​),其中 H 0 ( i ) = ( s q , p q , o q ) H_{0}^{(i)}=\left(s_{q}, p_{q}, o_{q}\right) H0(i)​=(sq​,pq​,oq​)

An LSTM encodes the information from the previous step: $\boldsymbol{h}_t^{(i)} = \mathrm{LSTM}^{(i)}\left(\left[\boldsymbol{a}_{t-1}^{(i)}, \boldsymbol{q}^{(i)}\right]\right)$

where $\boldsymbol{a}_{t-1}^{(i)} = \left[\boldsymbol{r}_{t-1}^{(i)}, \boldsymbol{e}_t^{(i)}\right] \in \mathbb{R}^{2d}$ and $\boldsymbol{q}^{(i)} = \left[\boldsymbol{e}_s^{(i)}, \boldsymbol{r}_p^{(i)}, \boldsymbol{e}_o^{(i)}\right] \in \mathbb{R}^{3d}$, so the LSTM input is the concatenation of five length-$d$ vectors (a $5d$-dimensional vector). It is worth noting that the two agents and the judge use different embeddings, i.e., every entity and relation has three separate embedding vectors.

The encoded history and the current action space are used to score each candidate action, which defines the policy for choosing the next action:

$$\boldsymbol{d}_t^{(i)} = \operatorname{softmax}\left(\boldsymbol{A}_t^{(i)}\left(\boldsymbol{W}_2^{(i)} \operatorname{ReLU}\left(\boldsymbol{W}_1^{(i)} \boldsymbol{h}_t^{(i)}\right)\right)\right)$$

The $k$-th component of $\boldsymbol{d}_t^{(i)}$ is the probability of choosing the $k$-th action in the current action space, and the next action is sampled from this distribution. The decision process is Markovian in the sense that the computation only involves the encoding from step $t-1$ and the action space at step $t$, not earlier raw information. The next action is then sampled as $A_t^{(i)} \sim \operatorname{Categorical}\left(\boldsymbol{d}_t^{(i)}\right)$.
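A rough PyTorch sketch of one agent's policy network following the equations above: the history is encoded step by step, a two-layer MLP maps $\boldsymbol{h}_t^{(i)}$ to a scoring vector, the score of each candidate action is its dot product with the stacked action embeddings $\boldsymbol{A}_t^{(i)}$, and the action is sampled from the resulting categorical distribution. The use of `LSTMCell`, the hidden size, and the variable names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AgentPolicy(nn.Module):
    def __init__(self, d: int, hidden: int):
        super().__init__()
        # Per-step input: a_{t-1} (2d) concatenated with the query embedding q (3d).
        self.lstm = nn.LSTMCell(5 * d, hidden)
        self.w1 = nn.Linear(hidden, hidden)
        self.w2 = nn.Linear(hidden, 2 * d)

    def forward(self, a_prev, q, state, action_embs):
        # a_prev:      (2d,) embedding of the previous action [r_{t-1}, e_t]
        # q:           (3d,) embedding of the query [e_s, r_p, e_o]
        # action_embs: (K, 2d) stacked embeddings of the K candidate actions
        h, c = self.lstm(torch.cat([a_prev, q]).unsqueeze(0), state)
        scores = action_embs @ self.w2(torch.relu(self.w1(h))).squeeze(0)
        probs = torch.softmax(scores, dim=0)            # d_t over the action space
        action = torch.distributions.Categorical(probs).sample()
        return action, probs, (h, c)
```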

Each agent samples $N$ evidence chains (arguments), each constrained to length $T$. The $n$-th argument sampled by agent $i$ is:

$$\tau_n^{(i)} := \left(A_{\tilde{n}(i,T)+1}, A_{\tilde{n}(i,T)+2}, \ldots, A_{\tilde{n}(i,T)+T}\right)$$

where the index offset is defined as:

$$\tilde{n}(i, T) := (2(n-1) + i - 1)\, T$$

All sampled arguments are collected as:

$$\tau := \left(\tau_1^{(1)}, \tau_1^{(2)}, \tau_2^{(1)}, \tau_2^{(2)}, \ldots, \tau_N^{(1)}, \tau_N^{(2)}\right)$$
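A quick sanity check of the indexing (a tiny illustrative script using the 1-based convention of the formulas above): with $T = 3$, the arguments of the two agents occupy alternating, non-overlapping blocks of time steps, matching the order in which $\tau$ is assembled.

```python
def n_tilde(n, i, T):
    # Offset of the n-th argument of agent i, as defined above.
    return (2 * (n - 1) + i - 1) * T

T = 3
for n in (1, 2):
    for i in (1, 2):
        start = n_tilde(n, i, T)
        print(f"tau_{n}^({i}) = actions {start + 1} .. {start + T}")
# tau_1^(1) = actions 1 .. 3
# tau_1^(2) = actions 4 .. 6
# tau_2^(1) = actions 7 .. 9
# tau_2^(2) = actions 10 .. 12
```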

Judge

The judge is essentially a binary classifier: it aggregates the evidence chains produced by the two agents into a final confidence score. Each argument is first encoded as:

$$\boldsymbol{y}_n^{(i)} = f\left(\left[\boldsymbol{\tau}_n^{(i)}, \boldsymbol{q}^J\right]\right)$$

where $\boldsymbol{q}^J$ is the judge's embedding of the query $q$: $\boldsymbol{q}^J = \left[\boldsymbol{r}_p^J, \boldsymbol{e}_o^J\right] \in \mathbb{R}^{2d}$.

The final score is predicted as:

$$t_\tau = \sigma\left(\boldsymbol{w}^\top \operatorname{ReLU}\left(\boldsymbol{W} \sum_{i=1}^{2} \sum_{n=1}^{N} \boldsymbol{y}_n^{(i)}\right)\right)$$

The judge is then trained with the cross-entropy objective:

$$\mathcal{L}_q = \phi(q) \log t_\tau + (1 - \phi(q)) \log\left(1 - t_\tau\right)$$
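A sketch of the judge under these definitions. The note does not specify the form of $f$, so a single feed-forward layer stands in for it here, and each argument $\boldsymbol{\tau}_n^{(i)}$ is assumed to have been flattened into a fixed-length vector; the hidden sizes and names are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Judge(nn.Module):
    def __init__(self, arg_dim: int, d: int, hidden: int):
        super().__init__()
        # f maps the concatenation [tau_n^(i), q^J] to a representation y_n^(i).
        self.f = nn.Sequential(nn.Linear(arg_dim + 2 * d, hidden), nn.ReLU())
        self.W = nn.Linear(hidden, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, arguments, q_judge):
        # arguments: list of (arg_dim,) tensors, one per sampled argument
        # q_judge:   (2d,) tensor, the judge's query embedding q^J = [r_p^J, e_o^J]
        ys = [self.f(torch.cat([arg, q_judge])) for arg in arguments]
        pooled = torch.stack(ys).sum(dim=0)       # sum over agents i and samples n
        t_tau = torch.sigmoid(self.w(torch.relu(self.W(pooled))))
        return t_tau                              # confidence that the query is true

# Cross-entropy training signal, with phi(q) = 1 for positive triples:
#   loss = -(phi * torch.log(t_tau) + (1 - phi) * torch.log(1 - t_tau))
```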

Reward

To reflect the opposing roles of the two agents, the score of each individual argument is computed as:

$$t_n^{(i)} = \boldsymbol{w}^\top \operatorname{ReLU}\left(\boldsymbol{W} f\left(\left[\boldsymbol{\tau}_n^{(i)}, \boldsymbol{q}^J\right]\right)\right)$$

and the reward function is defined as:

$$R_n^{(i)} = \begin{cases} t_n^{(i)} & \text{if } i = 1 \\ -t_n^{(i)} & \text{if } i = 2 \end{cases}$$

The cumulative reward of agent $i$ is:

$$G^{(i)} := \sum_{n=1}^{N} R_n^{(i)}$$
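In code, the reward assignment is just a sign flip on the judge's per-argument scores (a minimal sketch; the function name is made up):

```python
def debate_rewards(scores_agent1, scores_agent2):
    # scores_agent1/2: lists of t_n^(i), the judge's score of each argument.
    rewards1 = [t for t in scores_agent1]      # R_n^(1) =  t_n^(1), argues "true"
    rewards2 = [-t for t in scores_agent2]     # R_n^(2) = -t_n^(2), argues "false"
    return sum(rewards1), sum(rewards2)        # cumulative rewards G^(1), G^(2)
```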

The agents are trained with reinforcement learning by maximizing the expected cumulative reward:

$$\arg\max_{\theta^{(i)}} \; \mathbb{E}_{q \sim \mathcal{KG}^{+}}\, \mathbb{E}_{\tau_1^{(i)}, \tau_2^{(i)}, \ldots, \tau_N^{(i)} \sim \pi_{\theta^{(i)}}}\left[G^{(i)} \mid q\right]$$
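The note does not spell out the gradient estimator; a REINFORCE-style score-function update is one standard way to maximize this expectation, so the baseline-free sketch below is an assumption rather than the paper's exact training procedure.

```python
import torch

def reinforce_step(log_probs, returns, optimizer):
    # log_probs: log pi_theta(A_t | S_t) for every action agent i sampled
    # returns:   the reward R_n^(i) of the argument each action belongs to
    #            (treated as constants with respect to the agent's parameters)
    loss = -torch.stack([lp * g for lp, g in zip(log_probs, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```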

The whole model is trained in an alternating fashion: in each phase only the agents or only the judge is updated, with all parameters of the other module frozen.
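A small sketch of the freeze/unfreeze logic behind this alternating schedule; the helper names and the idea of switching by phase are assumptions, since the note only states that one module is trained while the other is frozen.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, flag: bool) -> None:
    # Freeze or unfreeze every parameter of a module.
    for p in module.parameters():
        p.requires_grad = flag

def configure_phase(judge: nn.Module, agents: list, train_agents: bool) -> None:
    # Train either the two agents or the judge, never both at once.
    set_trainable(judge, not train_agents)
    for agent in agents:
        set_trainable(agent, train_agents)
```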

