Classification: linear regression is not a good fit here.
Logistic Regression: $0 \le h_\theta(x) \le 1$
(a classification algorithm)
$h_\theta(x) = g(\theta^T x)$
$g(z) = \frac{1}{1 + e^{-z}}$
$h_\theta(x) = P(y = 1 \mid x; \theta)$, i.e. the probability that $y = 1$ given $x$, parameterized by $\theta$.
Suppose we predict “y = 1” if $h_\theta(x) \ge 0.5$,
and predict “y = 0” if $h_\theta(x) < 0.5$.
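A minimal sketch of the hypothesis and the 0.5 threshold in plain Python (the names `sigmoid`, `h`, and `predict` are mine, not from the course; `x` is assumed to include the bias feature $x_0 = 1$):

```python
import math

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z)); output is always in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def h(theta, x):
    """Hypothesis h_theta(x) = g(theta^T x): the estimated P(y = 1 | x; theta)."""
    return sigmoid(sum(t * xi for t, xi in zip(theta, x)))

def predict(theta, x):
    """Predict y = 1 when h_theta(x) >= 0.5, otherwise y = 0."""
    return 1 if h(theta, x) >= 0.5 else 0
```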
Decision Boundary
$h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$, with $\theta = \begin{bmatrix} -3 \\ 1 \\ 1 \end{bmatrix}$
Predict “y = 1” if $-3 + x_1 + x_2 \ge 0$
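To check that thresholding $h_\theta(x)$ at 0.5 is the same as asking $\theta^T x \ge 0$, here is a small sketch with that $\theta$ (the name `on_positive_side` is mine):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# theta = [-3, 1, 1]; with x0 = 1 the decision boundary is -3 + x1 + x2 = 0.
theta = [-3.0, 1.0, 1.0]

def on_positive_side(x1, x2):
    """h >= 0.5 exactly when theta^T x >= 0, i.e. when x1 + x2 >= 3."""
    z = theta[0] + theta[1] * x1 + theta[2] * x2
    return sigmoid(z) >= 0.5
```

Points with $x_1 + x_2 > 3$ fall on the “y = 1” side; the boundary itself ($x_1 + x_2 = 3$) gives exactly $h = 0.5$.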

Non-linear decision boundaries

Logistic regression cost function
$\mathrm{cost}(h_\theta(x), y) = \begin{cases} -\log(h_\theta(x)) & \text{if } y = 1 \\ -\log(1 - h_\theta(x)) & \text{if } y = 0 \end{cases}$
$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{cost}(h_\theta(x^{(i)}), y^{(i)})$
$\mathrm{cost}(h_\theta(x), y) = -y \log(h_\theta(x)) - (1 - y) \log(1 - h_\theta(x))$
$J(\theta) = -\frac{1}{m} \left[ \sum_{i=1}^{m} y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right]$
Want $\min_\theta J(\theta)$
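The cross-entropy cost above can be computed directly; a sketch in plain Python (the name `cost` is mine, and each row of `X` is assumed to start with the bias feature $x_0 = 1$):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost(theta, X, y):
    """J(theta) = -(1/m) * sum_i [ y_i*log(h_i) + (1 - y_i)*log(1 - h_i) ]."""
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        hi = sigmoid(sum(t * v for t, v in zip(theta, xi)))
        total += yi * math.log(hi) + (1 - yi) * math.log(1 - hi)
    return -total / m
```

With $\theta = 0$ every $h_\theta(x^{(i)}) = 0.5$, so the cost is $\log 2 \approx 0.693$ regardless of the labels, which is a handy sanity check.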
Repeat {
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$
} (simultaneously update all $\theta_j$)
$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$
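The update rule can be sketched as batch gradient descent on a tiny dataset (the function name, the learning rate 0.5, and the example data are mine, chosen only for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_descent(X, y, alpha=0.5, iters=500):
    """Batch gradient descent: each step subtracts
    alpha * (1/m) * sum_i (h_theta(x_i) - y_i) * x_ij from every theta_j."""
    m, n = len(y), len(X[0])
    theta = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for xi, yi in zip(X, y):
            err = sigmoid(sum(t * v for t, v in zip(theta, xi))) - yi
            for j in range(n):
                grad[j] += err * xi[j]
        # Simultaneous update of all theta_j, as the notes require.
        theta = [theta[j] - alpha * grad[j] / m for j in range(n)]
    return theta

# Tiny 1-D example (bias column x0 = 1): labels flip between x = 1 and x = 2.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [0, 0, 1, 1]
theta = gradient_descent(X, y)
```

After training, $h_\theta$ should output below 0.5 on the $y = 0$ side and above 0.5 on the $y = 1$ side.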
Advanced optimization algorithms (I didn’t study these in detail).
Multiclass classification

Train a logistic regression classifier $h_\theta^{(i)}(x)$ for each class $i$ to predict the probability that $y = i$.
On a new input $x$, to make a prediction, pick the class $i$ that maximizes $h_\theta^{(i)}(x)$.
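The one-vs-all prediction step can be sketched like this (the name `predict_class` is mine, and the lambdas stand in for already-trained per-class hypotheses $h_\theta^{(i)}$):

```python
def predict_class(classifiers, x):
    """One-vs-all prediction: classifiers[i] is the trained h_theta^(i),
    returning its estimate of P(y = i | x); pick the class with the
    highest probability."""
    scores = [h_i(x) for h_i in classifiers]
    return scores.index(max(scores))

# Hypothetical stand-ins for three trained classifiers h^(0), h^(1), h^(2).
classifiers = [lambda x: 0.1, lambda x: 0.7, lambda x: 0.3]
```

Each classifier only answers “is it my class or not?”; the argmax across them turns those binary answers into one multiclass prediction.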
One last thing: writing blog posts really does take a lot of time (just kidding).
Have a great May Day holiday!