
Stanford ML - Lecture 1 - Linear regression with one variable

Model representation
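For reference, the hypothesis this lecture uses for linear regression with one variable (standard course notation):

h_\theta(x) = \theta_0 + \theta_1 x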

Cost function
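The squared-error cost over m training examples (x^{(i)}, y^{(i)}), which is the J(\theta_0, \theta_1) referenced below (standard course notation):

J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2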


Cost function intuition I


Cost function intuition II

Gradient descent

  • start with some \theta_0, \theta_1
  • keep changing \theta_0, \theta_1 to reduce J(\theta_0, \theta_1) until we hopefully end up at a minimum

Update all parameters in the cost function simultaneously, and note that the learning rate \alpha is positive.
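Written out, the update rule takes the standard form below; the temp variables make the simultaneity explicit (both derivatives are evaluated at the old parameters before either is overwritten):

temp_0 := \theta_0 - \alpha \frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1)
temp_1 := \theta_1 - \alpha \frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1)
\theta_0 := temp_0
\theta_1 := temp_1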

Gradient descent intuition


If \alpha is too small, gradient descent can be slow. If \alpha is too large, gradient descent can overshoot the minimum; it may fail to converge, or even diverge. As we approach a local minimum the gradient shrinks, so gradient descent automatically takes smaller steps and there is no need to decrease \alpha over time.
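A minimal sketch (not from the lecture) that makes the effect of \alpha concrete on the one-dimensional cost J(\theta) = \theta^2, whose derivative is 2\theta; the function name and the three \alpha values are made up for illustration:

```python
# Minimal sketch (not from the lecture): gradient descent on
# J(theta) = theta^2, whose derivative is 2*theta, starting at theta = 1.
def gradient_descent_1d(alpha, theta=1.0, steps=20):
    for _ in range(steps):
        theta = theta - alpha * 2 * theta  # theta := theta - alpha * dJ/dtheta
    return theta

print(gradient_descent_1d(alpha=0.01))  # too small: ~0.67, still far from the minimum at 0
print(gradient_descent_1d(alpha=0.4))   # reasonable: ~1e-14, effectively converged
print(gradient_descent_1d(alpha=1.1))   # too large: ~38, overshoots and diverges
```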

Gradient descent for linear regression

“Batch” gradient descent: each step of gradient descent uses all the training examples.
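A minimal sketch of batch gradient descent for linear regression with one variable, assuming the squared-error cost J defined above; the toy data and the hyperparameter values are made up for illustration:

```python
# Minimal sketch: batch gradient descent for univariate linear regression,
# minimizing J = (1/2m) * sum((theta0 + theta1*x - y)^2). Each step uses
# ALL m training examples, which is what makes it "batch" gradient descent.
def batch_gradient_descent(xs, ys, alpha=0.1, iterations=1000):
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        # Prediction errors h(x) - y over the whole training set.
        errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        # Partial derivatives of J, each summed over every example.
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        # Simultaneous update: both gradients were computed from the
        # old parameters before either parameter changed.
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Toy data lying exactly on y = 1 + 2x; the fit should recover those values.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
print(batch_gradient_descent(xs, ys))  # approximately (1.0, 2.0)
```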
