ℓ1-Constrained Least Squares Learning

In sparse learning, ℓ1-constrained least squares (LS), also known as lasso regression, is a common learning method:

$$\min_{\theta} J_{LS}(\theta) \quad \text{s.t.} \quad \|\theta\|_1 \le R$$

where $\|\theta\|_1 = \sum_{j=1}^{b} |\theta_j|$.

Generally speaking, the solution of an ℓ1-constrained LS problem lies on a coordinate axis, that is to say, several of the parameters $\theta_j$ are exactly zero (the solution is sparse).

Then how do we solve it? Since the absolute value is not differentiable at the origin, solving an ℓ1-constrained LS is not as easy as solving the ℓ2-constrained one. However, we can still apply the method of Lagrange multipliers:

$$\min_{\theta} J(\theta), \qquad J(\theta) = J_{LS}(\theta) + \lambda \|\theta\|_1$$

Note that

$$|\theta_j| \le \frac{\theta_j^2}{2 c_j} + \frac{c_j}{2}, \qquad c_j > 0$$
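This bound is just the inequality of arithmetic and geometric means applied to the two terms on the right-hand side:

$$\frac{\theta_j^2}{2 c_j} + \frac{c_j}{2} \;\ge\; 2\sqrt{\frac{\theta_j^2}{2 c_j} \cdot \frac{c_j}{2}} \;=\; |\theta_j|,$$

with equality if and only if $\theta_j^2 / (2 c_j) = c_j / 2$, i.e. $c_j = |\theta_j|$.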

That is, we can minimize this upper bound of $J(\theta)$ instead. At each iteration, we take the current solution $\tilde{\theta}_j \ne 0$ as $c_j$, which gives the upper bound:

$$|\theta_j| \le \frac{\theta_j^2}{2 |\tilde{\theta}_j|} + \frac{|\tilde{\theta}_j|}{2}$$

If $\tilde{\theta}_j = 0$, we should take $\theta_j = 0$. Using the generalized inverse (for a scalar, $|\tilde{\theta}_j|^{\dagger} = 1/|\tilde{\theta}_j|$ if $\tilde{\theta}_j \ne 0$ and $0$ otherwise), the inequality above can be written as:

$$|\theta_j| \le \frac{|\tilde{\theta}_j|^{\dagger}}{2}\,\theta_j^2 + \frac{|\tilde{\theta}_j|}{2}$$

Therefore, we obtain the following ℓ2-regularized LS problem:

$$\hat{\theta} = \arg\min_{\theta} \tilde{J}(\theta), \qquad \tilde{J}(\theta) = J_{LS}(\theta) + \frac{\lambda}{2}\,\theta^{\top} \tilde{\Theta}^{\dagger} \theta + C$$

where $\tilde{\Theta} = \operatorname{diag}\big(|\tilde{\theta}_1|, \ldots, |\tilde{\theta}_b|\big)$ and $C = \frac{\lambda}{2} \sum_{j=1}^{b} |\tilde{\theta}_j|$ are independent of $\theta$.

Take the linear-in-parameter model as an example:

$$f_{\theta}(x) = \theta^{\top} \phi(x)$$

Then, minimizing $\tilde{J}(\theta)$ by setting its gradient to zero, we get

$$\hat{\theta} = \big(\Phi^{\top}\Phi + \lambda \tilde{\Theta}^{\dagger}\big)^{-1} \Phi^{\top} y$$
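For concreteness, this is the stationarity condition of $\tilde{J}$ under the standard squared-error objective $J_{LS}(\theta) = \frac{1}{2}\|\Phi\theta - y\|^2$ with design matrix $\Phi_{ij} = \phi_j(x_i)$ (assumed here, since $J_{LS}$ is not spelled out above):

$$\nabla_{\theta} \tilde{J}(\theta) = \Phi^{\top}(\Phi\theta - y) + \lambda\,\tilde{\Theta}^{\dagger}\theta = 0 \;\Longrightarrow\; \big(\Phi^{\top}\Phi + \lambda \tilde{\Theta}^{\dagger}\big)\theta = \Phi^{\top} y.$$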

Update the estimate as $\tilde{\theta} = \hat{\theta}$ and recompute $\hat{\theta}$, repeating until $\hat{\theta}$ reaches the required precision.

In summary, the whole algorithm goes as follows:

1. Initialize $\theta_0$ and set $i = 1$.
2. Compute $\tilde{\Theta}_i$ using $\theta_{i-1}$.
3. Estimate $\theta_i$ using $\tilde{\Theta}_i$.
4. Set $i = i + 1$ and go back to step 2, until the estimate converges.

An example:
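A minimal Python sketch of the iterative scheme above, again assuming $J_{LS}(\theta) = \frac{1}{2}\|\Phi\theta - y\|^2$; the function name `lasso_irls`, the epsilon floor, and the synthetic data are illustrative assumptions, not from the original post:

```python
import numpy as np

def lasso_irls(Phi, y, lam, n_iter=100, tol=1e-6):
    """Iteratively reweighted l2 scheme for
    min_theta 0.5 * ||Phi @ theta - y||**2 + lam * ||theta||_1."""
    b = Phi.shape[1]
    # Step 1: initialize theta_0 (here: the ridge solution).
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(b), Phi.T @ y)
    for _ in range(n_iter):
        # Step 2: build Theta_tilde^dagger = diag(1 / |theta_j|). Flooring
        # |theta_j| at a tiny epsilon gives near-zero entries a huge penalty,
        # a numerical stand-in for the rule "if theta_j = 0, keep theta_j = 0".
        w = 1.0 / np.maximum(np.abs(theta), 1e-10)
        # Step 3: solve (Phi^T Phi + lam * Theta_tilde^dagger) theta = Phi^T y.
        theta_new = np.linalg.solve(Phi.T @ Phi + lam * np.diag(w), Phi.T @ y)
        # Step 4: repeat until the required precision is reached.
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

# Illustrative usage on synthetic data:
rng = np.random.default_rng(0)
n, b = 50, 10
Phi = rng.normal(size=(n, b))                    # Phi[i, j] = phi_j(x_i)
theta_true = np.zeros(b)
theta_true[[1, 4]] = [2.0, -3.0]                 # sparse ground truth
y = Phi @ theta_true + 0.1 * rng.normal(size=n)
print(np.round(lasso_irls(Phi, y, lam=1.0), 3))  # most coefficients come out ~0
```

The comments map the loop onto steps 1 to 4 of the algorithm above; starting from the ridge solution is one convenient choice of $\theta_0$, not the only one.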
