
Machine Learning: Soft-Margin SVM

Everything below reflects my personal understanding; if you spot an error, please point it out.

Table of Contents

  • The Problem with Hard-Margin SVM
  • Soft-Margin SVM
      • What is the soft-margin SVM
      • Mathematical formulation of the soft-margin SVM
  • Solving the Soft-Margin SVM
      • Solving the soft-margin SVM with SMO
      • Choosing the two α's (steps only, reasons not explained)

The Problem with Hard-Margin SVM

The hard margin requires that no point lies inside the margin. This requirement is very strict and makes the hard-margin SVM extremely sensitive to outliers: due to noise, a point belonging to class A may fall among the points of class B (an outlier), in which case the hard-margin SVM cannot find any separating hyperplane at all. This motivates the soft-margin SVM.

Soft-Margin SVM

What is the soft-margin SVM

The soft-margin SVM allows some points to lie inside the margin, which resolves the problem above (the outliers are simply absorbed into the margin). Since, from the SVM's point of view, a point inside the margin is a mistake, we want as few such points as possible. The result is a trade-off: obtain a good separating hyperplane while keeping the number of violations small.

Mathematical formulation of the soft-margin SVM

Recall the hard-margin SVM formulation:

$$\begin{aligned} & \min_{w,b} \frac{\|w\|^2}{2} \\ & \text{s.t. } y_i(w^Tx_i+b)\geq 1,\quad i=1,2,\dots,m \end{aligned}$$

A point inside the margin satisfies $y_i(w^Tx_i+b)<1$. We introduce a slack variable $\varepsilon_i$ for each point $(x_i,y_i)$; a point inside the margin satisfies

$$\begin{aligned} & \varepsilon_i>0 \\ & y_i(w^Tx_i+b)+\varepsilon_i\geq 1 \end{aligned}$$

while points outside the margin and the support vectors only need $\varepsilon_i=0$. Altogether, the constraints of the soft-margin SVM become

$$\begin{aligned} & \varepsilon_i\geq 0 \\ & y_i(w^Tx_i+b)+\varepsilon_i\geq 1,\quad i=1,2,\dots,m \end{aligned}$$

Since we want as few points inside the margin as possible, we change the optimization objective to

$$\begin{aligned} & \min \frac{\|w\|^2}{2}+C\sum_{i=1}^m\varepsilon_i \\ & \text{s.t. } \varepsilon_i\geq 0 \\ & \quad\ \ \ y_i(w^Tx_i+b)+\varepsilon_i\geq 1,\quad i=1,2,\dots,m \end{aligned}\tag{1.0}$$

Here $C>0$ is a penalty parameter that we choose in advance: the larger $C$ is, the fewer points lie inside the margin. An intuitive explanation: the objective $\frac{\|w\|^2}{2}+C\sum_{i=1}^m\varepsilon_i$ has two parts, $\frac{\|w\|^2}{2}$ and $C\sum_{i=1}^m\varepsilon_i$. The larger $C$ is, the more the slack term dominates the objective's value, so minimizing the objective drives as many $\varepsilon_i$ as possible toward 0.
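To make Eq. 1.0 concrete, here is a small numeric sketch (my own illustration, not from the original). It uses the fact that at the optimum each slack takes its smallest feasible value, $\varepsilon_i=\max(0,\,1-y_i(w^Tx_i+b))$, so the primal objective can be evaluated directly; the toy data and the choice of $w,b$ are made up.

```python
import numpy as np

def soft_margin_objective(w, b, X, y, C):
    """Primal objective of Eq. 1.0: 0.5*||w||^2 + C * sum(eps_i),
    where each slack takes its minimal feasible value
    eps_i = max(0, 1 - y_i*(w^T x_i + b))."""
    margins = y * (X @ w + b)
    slack = np.maximum(0.0, 1.0 - margins)
    return 0.5 * np.dot(w, w) + C * slack.sum()

# Made-up toy data; the last point is an outlier of class +1
# sitting deep inside the -1 region.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0], [-1.5, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
w, b = np.array([1.0, 1.0]), 0.0

print(soft_margin_objective(w, b, X, y, 0.1))   # small C: the outlier costs little
print(soft_margin_objective(w, b, X, y, 10.0))  # large C: the same slack dominates
```

With these numbers the only nonzero slack belongs to the outlier ($\varepsilon=4$), so the objective grows from $1+0.1\cdot 4$ to $1+10\cdot 4$ as $C$ increases, which is exactly the trade-off described above.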

Solving the Soft-Margin SVM

Since the constraints of Eq. 1.0 are affine in $(w,b,\varepsilon_1,\dots,\varepsilon_m)$, we can again apply Lagrangian duality to Eq. 1.0, which gives the following derivation.

$$L(w,b,\alpha,\beta,\varepsilon)=\frac{\|w\|^2}{2}+C\sum_{i=1}^m\varepsilon_i+\sum_{i=1}^m\alpha_i\big(1-\varepsilon_i-y_i(w^Tx_i+b)\big)-\sum_{i=1}^m\beta_i\varepsilon_i \tag{1.1}$$

The Lagrangian dual problem is $\max\limits_{\alpha,\beta}\min\limits_{w,b,\varepsilon}L(w,b,\alpha,\beta,\varepsilon)$. With $\alpha,\beta$ fixed, Eq. 1.1 is a convex function of $(w,b,\varepsilon)$, so we set its derivatives with respect to $w,b,\varepsilon$ to zero:

$$\frac{\partial L(w,b,\alpha,\beta,\varepsilon)}{\partial w}=\begin{pmatrix} w_1-\sum_{i=1}^m\alpha_iy_ix_{i1} \\ w_2-\sum_{i=1}^m\alpha_iy_ix_{i2} \\ \vdots \\ w_n-\sum_{i=1}^m\alpha_iy_ix_{in} \end{pmatrix}=w-\sum_{i=1}^m\alpha_iy_ix_i=0$$

$$\frac{\partial L(w,b,\alpha,\beta,\varepsilon)}{\partial b}=-\sum_{i=1}^m\alpha_iy_i=0 \implies \sum_{i=1}^m\alpha_iy_i=0$$

$$\frac{\partial L(w,b,\alpha,\beta,\varepsilon)}{\partial \varepsilon_i}=C-\alpha_i-\beta_i=0 \implies \alpha_i+\beta_i=C,\quad i=1,2,\dots,m$$

Substituting these back into Eq. 1.1 gives

$$\begin{aligned} & \min \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\alpha_i\alpha_jy_iy_jx_i^Tx_j-\sum_{i=1}^m\alpha_i \\ & \text{s.t. } \alpha_i\geq 0,\quad \beta_i\geq 0,\quad \alpha_i+\beta_i=C,\quad i=1,2,\dots,m \\ & \quad\ \ \ \sum_{i=1}^m\alpha_iy_i=0 \end{aligned}$$

Since $\beta_i=C-\alpha_i$ and $\beta_i\geq 0$, we have $C-\alpha_i\geq 0$, i.e. $\alpha_i\leq C$, so the final optimization problem is

$$\begin{aligned} & \min \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\alpha_i\alpha_jy_iy_jx_i^Tx_j-\sum_{i=1}^m\alpha_i \\ & \text{s.t. } 0\leq\alpha_i\leq C,\quad i=1,2,\dots,m \\ & \quad\ \ \ \sum_{i=1}^m\alpha_iy_i=0 \end{aligned}\tag{1.2}$$
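Eq. 1.2 is easy to evaluate numerically. A minimal sketch (the function names and toy values are my own, not from the article):

```python
import numpy as np

def dual_objective(alpha, X, y):
    """Dual objective of Eq. 1.2 (to be minimized):
    0.5 * sum_ij alpha_i*alpha_j*y_i*y_j*x_i^T x_j - sum_i alpha_i."""
    K = X @ X.T                      # Gram matrix, k_ij = x_i^T x_j
    return 0.5 * alpha @ (np.outer(y, y) * K) @ alpha - alpha.sum()

def is_feasible(alpha, y, C, tol=1e-9):
    """Constraints of Eq. 1.2: 0 <= alpha_i <= C and sum_i alpha_i*y_i = 0."""
    return bool(np.all(alpha >= -tol) and np.all(alpha <= C + tol)
                and abs(alpha @ y) < tol)

alpha = np.array([0.5, 0.5])
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, -1.0])
print(is_feasible(alpha, y, C=1.0), dual_objective(alpha, X, y))
```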

Next we optimize this with the SMO algorithm; the specific details follow.

Solving the soft-margin SVM with SMO

We choose $\alpha_1,\alpha_2$ as the free variables and keep the remaining $\alpha_i\ (i=3,\dots,m)$ fixed. Expanding the objective of Eq. 1.2 gives Eq. 1.3:

$$\begin{aligned} &\frac{1}{2}\Big(\sum_{i=1}^m\alpha_iy_ix_i\Big)^T\Big(\sum_{j=1}^m\alpha_jy_jx_j\Big)-\sum_{i=1}^m\alpha_i \\ =&\frac{1}{2}(\alpha_1y_1x_1+\alpha_2y_2x_2+\dots+\alpha_my_mx_m)^T(\alpha_1y_1x_1+\alpha_2y_2x_2+\dots+\alpha_my_mx_m)-\sum_{i=1}^m\alpha_i \\ =&\frac{1}{2}\alpha_1^2y_1^2x_1^Tx_1+\frac{1}{2}\alpha_2^2y_2^2x_2^Tx_2+\alpha_1\alpha_2y_1y_2x_1^Tx_2+\alpha_1y_1x_1^T\sum_{i=3}^m\alpha_iy_ix_i \\ &+\alpha_2y_2x_2^T\sum_{i=3}^m\alpha_iy_ix_i-\alpha_1-\alpha_2+H+K \end{aligned}\tag{1.3}$$

where

$$\begin{aligned} &H=\frac{1}{2}\sum_{i=3}^m\sum_{j=3}^m\alpha_i\alpha_jy_iy_jx_i^Tx_j \\ &K=-\sum_{i=3}^m\alpha_i \end{aligned}$$

Let $k_{ij}=x_i^Tx_j$; since $y_i^2=1$, Eq. 1.3 becomes

$$\frac{1}{2}k_{11}\alpha_1^2+\frac{1}{2}k_{22}\alpha_2^2+\alpha_1\alpha_2y_1y_2k_{12}+\alpha_1y_1\sum_{i=3}^m\alpha_iy_ik_{i1}+\alpha_2y_2\sum_{i=3}^m\alpha_iy_ik_{i2}+H+K-\alpha_1-\alpha_2 \tag{1.4}$$

Since $\alpha_1y_1+\alpha_2y_2+\sum_{i=3}^m\alpha_iy_i=0$, we have $\alpha_1y_1+\alpha_2y_2=-\sum_{i=3}^m\alpha_iy_i$. Let $\xi=-\sum_{i=3}^m\alpha_iy_i$; then $\alpha_1=\xi y_1-\alpha_2y_1y_2$ (using $y_1^2=1$). Substituting into Eq. 1.4 and dropping the constants $H$ and $K$ gives

$$\frac{1}{2}k_{11}(\xi-\alpha_2y_2)^2+\frac{1}{2}k_{22}\alpha_2^2+\alpha_2y_2k_{12}(\xi-\alpha_2y_2)+\alpha_2y_2v_2+(\xi-\alpha_2y_2)v_1-(\alpha_2+\xi y_1-\alpha_2y_1y_2) \tag{1.5}$$

where
$$v_1=\sum_{i=3}^m\alpha_iy_ik_{i1},\qquad v_2=\sum_{i=3}^m\alpha_iy_ik_{i2}$$

Eq. 1.5 is a quadratic in $\alpha_2$ with leading coefficient $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}$. From here on this is essentially a high-school exercise. Restating the problem:

$$\begin{aligned} \min\ &\frac{1}{2}k_{11}(\xi-\alpha_2y_2)^2+\frac{1}{2}k_{22}\alpha_2^2+\alpha_2y_2k_{12}(\xi-\alpha_2y_2)+\alpha_2y_2v_2+(\xi-\alpha_2y_2)v_1-(\alpha_2+\xi y_1-\alpha_2y_1y_2) \\ \text{s.t. }&0\leq\alpha_1\leq C \Longrightarrow 0\leq\xi y_1-\alpha_2y_1y_2\leq C \\ &0\leq\alpha_2\leq C \end{aligned}$$

i.e. minimizing a quadratic with leading coefficient $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}$ over a bounded interval.

We now split into cases by the sign of the leading coefficient (most references draw pictures here; we simply solve it algebraically).

1. $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}>0$: Eq. 1.5 is an upward-opening parabola.

Setting the derivative of Eq. 1.5 to zero gives the (unclipped) new value of $\alpha_2$:

$$\alpha_2^{new}=\frac{y_2\big((k_{11}-k_{12})\xi+v_1-v_2+y_2-y_1\big)}{k_{11}+k_{22}-2k_{12}}$$

Since $v_1=\sum_{i=3}^m\alpha_iy_ik_{i1}$, $v_2=\sum_{i=3}^m\alpha_iy_ik_{i2}$ and $\xi=\alpha_1y_1+\alpha_2y_2$, define $g(x_i)=\sum_{j=1}^m\alpha_jy_jk_{ji}+b$. Then

$$\begin{aligned} &y_2\big((k_{11}-k_{12})\xi+v_1-v_2+y_2-y_1\big) \\ =&y_2\Big(y_2-y_1+g(x_1)-g(x_2)+\sum_{j=1}^2\alpha_jy_jk_{j2}-\sum_{j=1}^2\alpha_jy_jk_{j1}+(k_{11}-k_{12})(\alpha_1y_1+\alpha_2y_2)\Big) \\ =&y_2\big(y_2-y_1+g(x_1)-g(x_2)+\alpha_1y_1k_{12}+\alpha_2y_2k_{22}-\alpha_1y_1k_{11}-\alpha_2y_2k_{12}+\alpha_1y_1k_{11}+\alpha_2y_2k_{11}-\alpha_1y_1k_{12}-\alpha_2y_2k_{12}\big) \\ =&y_2\big(y_2-y_1+g(x_1)-g(x_2)+\alpha_2y_2k_{22}+\alpha_2y_2k_{11}-2\alpha_2y_2k_{12}\big) \end{aligned}$$

Hence

$$\begin{aligned} \alpha_2^{new}=&\frac{y_2\big(y_2-y_1+g(x_1)-g(x_2)+\alpha_2y_2k_{22}+\alpha_2y_2k_{11}-2\alpha_2y_2k_{12}\big)}{k_{11}+k_{22}-2k_{12}} \\ =&\frac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2 \end{aligned}$$

where $E_i=g(x_i)-y_i$ is the prediction error on $x_i$.
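The update above translates directly into code. A minimal sketch assuming a linear kernel $k_{ij}=x_i^Tx_j$, with `i1`, `i2` playing the roles of indices 1 and 2 (the names and toy values are my own):

```python
import numpy as np

def unclipped_alpha2(alpha, X, y, b, i1, i2):
    """Unclipped SMO update:
    alpha2_new = alpha2 + y2*(E1 - E2) / (k11 + k22 - 2*k12),
    where E_i = g(x_i) - y_i and g(x_i) = sum_j alpha_j*y_j*k_ji + b."""
    K = X @ X.T                      # linear kernel, k_ij = x_i^T x_j
    E = (alpha * y) @ K + b - y      # errors E_i = g(x_i) - y_i
    eta = K[i1, i1] + K[i2, i2] - 2.0 * K[i1, i2]
    return alpha[i2] + y[i2] * (E[i1] - E[i2]) / eta

alpha = np.array([0.0, 0.0])
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, -1.0])
print(unclipped_alpha2(alpha, X, y, b=0.0, i1=0, i2=1))
```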

This unclipped value must then be restricted to the feasible interval. When $y_1=1,y_2=1$, the constraints give

$$\begin{aligned} &\xi-C\leq\alpha_2\leq\xi \\ &0\leq\alpha_2\leq C \end{aligned}$$

so the feasible interval is $[\max(0,\xi-C),\ \min(C,\xi)]$. Therefore, writing $\alpha_2^{unc}=\frac{y_2(E_1-E_2)}{k_{11}+k_{22}-2k_{12}}+\alpha_2$ for the unclipped value, we have

$$\alpha_2^{new}=\begin{cases} \max(0,\xi-C) & \alpha_2^{unc}<\max(0,\xi-C) \\ \alpha_2^{unc} & \max(0,\xi-C)\leq\alpha_2^{unc}\leq\min(C,\xi) \\ \min(C,\xi) & \alpha_2^{unc}>\min(C,\xi) \end{cases}$$
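The clipping step fits in a small helper. For completeness this sketch (my own, not the article's code) also handles the $y_1\neq y_2$ case, whose interval follows from the same reasoning applied to $0\leq\alpha_1\leq C$ with $\alpha_2-\alpha_1$ held fixed:

```python
def clip_alpha2(a2_unclipped, a1, a2, y1, y2, C):
    """Clip the unclipped alpha_2 update to its feasible interval [L, H].
    y1 == y2: alpha1 + alpha2 is fixed -> L = max(0, a1+a2-C), H = min(C, a1+a2).
    y1 != y2: alpha2 - alpha1 is fixed -> L = max(0, a2-a1), H = min(C, C+a2-a1)."""
    if y1 == y2:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    else:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    return min(max(a2_unclipped, L), H)

print(clip_alpha2(1.5, 0.3, 0.4, 1, 1, C=1.0))    # clipped down to H
print(clip_alpha2(-0.2, 0.3, 0.4, 1, -1, C=1.0))  # clipped up to L
```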

When $y_1,y_2$ take other sign combinations the interval has a similar form, which we omit here.

2. $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}=0$: Eq. 1.5 is linear in $\alpha_2$, so take the interval endpoint dictated by the slope (the lower endpoint for positive slope, the upper for negative).

3. $\frac{1}{2}k_{11}+\frac{1}{2}k_{22}-k_{12}<0$: Eq. 1.5 is a downward-opening parabola, so its minimum over the interval is attained at an endpoint; evaluate the objective at both endpoints and keep the smaller.
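Cases 2 and 3 share the same resolution, since in both the minimum over the interval lies at an endpoint. A sketch (`phi` stands for the one-variable objective of Eq. 1.5; the helper is my own illustration):

```python
def endpoint_minimizer(phi, L, H):
    """When eta = k11 + k22 - 2*k12 <= 0, the objective in alpha_2 is
    concave or linear, so its minimum over [L, H] sits at an endpoint:
    evaluate phi at both ends and return the endpoint with smaller value."""
    return L if phi(L) <= phi(H) else H

# Example with a concave (downward-opening) quadratic on [0, 3]:
print(endpoint_minimizer(lambda a: -(a - 1.0) ** 2, 0.0, 3.0))
```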

Choosing the two α's (steps only, reasons not explained)

The following is adapted from 关于SVM数学细节逻辑的个人理解(三):SMO算法理解.

Choosing the first multiplier works as follows:

① Scan the entire training set; any example whose multiplier violates the KKT conditions is taken as the first multiplier to update.

② After the full pass over the data set, scan only the subset of examples whose multipliers satisfy $0<\alpha_i<C$ (the non-bound examples). Whenever one violates the KKT conditions, take it as the first multiplier, find a second multiplier, update both, and then restart the scan of this subset.

③ After finishing the subset, go back to ① and ②, stopping once a pass through ① and ② produces no updates.

Choosing the second multiplier works as follows:

① Heuristically: pick the $\alpha_2$ that maximizes $|E_1-E_2|$.
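This heuristic is a one-liner over the cached errors. A sketch assuming the error vector `E` (with $E_i=g(x_i)-y_i$) is already maintained; the values below are made up:

```python
import numpy as np

def pick_second_alpha(E, i1):
    """Pick the second multiplier: the index j != i1 maximizing |E[i1] - E[j]|,
    so the resulting step in alpha_2 is as large as possible."""
    gaps = np.abs(E[i1] - E)
    gaps[i1] = -np.inf               # never pick the first index itself
    return int(np.argmax(gaps))

E = np.array([0.5, -1.0, 2.5, 0.4])  # made-up error cache
print(pick_second_alpha(E, i1=0))
```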
