Which loss function is correct for logistic regression?


31

I have read about two versions of the loss function for logistic regression. Which of them is correct, and why?

  1. From *Machine Learning* by Zhou Z.-H. (in Chinese), with $\beta = (w, b)$ and $\beta^T x = w^T x + b$:

     $$l(\beta) = \sum_{i=1}^{m} \left( -y_i \beta^T x_i + \ln\left(1 + e^{\beta^T x_i}\right) \right) \tag{1}$$

  2. From my college course, with $z_i = y_i f(x_i) = y_i (w^T x_i + b)$:

     $$L(z_i) = \log\left(1 + e^{-z_i}\right) \tag{2}$$


I know that the first one is a sum over all samples and the second is for a single sample, but I am more curious about the difference in the form of the two loss functions. Somehow I feel that they are equivalent.

Answers:


31

The relationship is as follows: $l(\beta) = \sum_i L(z_i)$.

Define a logistic function as $f(z) = \frac{e^{z}}{1+e^{z}} = \frac{1}{1+e^{-z}}$. It has the property that $f(-z) = 1 - f(z)$, or in other words:

$$\frac{1}{1+e^{z}} = \frac{e^{-z}}{1+e^{-z}}.$$

If you take the reciprocal of both sides and then take the log, you get:

$$\ln\left(1+e^{z}\right) = \ln\left(1+e^{-z}\right) + z.$$

Subtract $z$ from both sides; substituting $z = z_i = y_i \beta^T x_i$, you will see the following:

$$-y_i \beta^T x_i + \ln\left(1 + e^{y_i \beta^T x_i}\right) = L(z_i).$$
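As a quick sanity check (my own sketch, not part of the original answer), the identity $\ln(1+e^{z}) = \ln(1+e^{-z}) + z$ used above can be verified numerically in Python:

```python
import numpy as np

# Check ln(1 + e^z) = ln(1 + e^{-z}) + z for a handful of random z values.
rng = np.random.default_rng(0)
z = rng.normal(scale=3.0, size=10)

lhs = np.log1p(np.exp(z))        # ln(1 + e^z)
rhs = np.log1p(np.exp(-z)) + z   # ln(1 + e^{-z}) + z

print(np.allclose(lhs, rhs))     # True
```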

Edit:

Right now I am re-reading this answer and I am confused about how I got $-y_i \beta^T x_i + \ln\left(1 + e^{\beta^T x_i}\right)$ to be equal to $-y_i \beta^T x_i + \ln\left(1 + e^{y_i \beta^T x_i}\right)$. Perhaps there is a typo in the original question.

Edit 2:

In case there was not a typo in the original question, @ManelMorales appears to be correct in pointing out that, when $y \in \{-1, 1\}$, the probability mass function can be written as $P(Y_i = y_i) = f(y_i \beta^T x_i)$, due to the property that $f(-z) = 1 - f(z)$. I am re-writing it differently here, because he introduces a new equivocation on the notation $z_i$. The rest follows by taking the negative log-likelihood for each $y$ coding. See his answer below for more details.


42

The OP mistakenly believes that the relationship between these two functions is due to the number of samples (i.e. a single sample vs. all of them). However, the actual difference is simply how we select our training labels.

In the case of binary classification we may assign the labels $y = \pm 1$ or $y = 0, 1$.

As it has already been stated, the logistic function $\sigma(z)$ is a good choice since it has the form of a probability, i.e. $\sigma(-z) = 1 - \sigma(z)$ and $\sigma(z) \in (0,1)$ as $z \to \pm\infty$. If we pick the labels $y = 0, 1$ we may assign

$$P(y=1 \mid z) = \sigma(z) = \frac{1}{1+e^{-z}} \qquad P(y=0 \mid z) = 1 - \sigma(z) = \frac{1}{1+e^{z}}$$

which can be written more compactly as $P(y \mid z) = \sigma(z)^{y}\left(1-\sigma(z)\right)^{1-y}$.

It is easier to maximize the log-likelihood. Maximizing the log-likelihood is the same as minimizing the negative log-likelihood. For $m$ samples $\{x_i, y_i\}$, after taking the natural logarithm and some simplification, we will find out:

$$l(z) = -\log\left(\prod_i^m P(y_i \mid z_i)\right) = -\sum_i^m \log\left(P(y_i \mid z_i)\right) = \sum_i^m \left(-y_i z_i + \log\left(1 + e^{z_i}\right)\right)$$
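A small numerical illustration of this simplification (my own sketch with made-up data, not from the linked notebook): the Bernoulli form $-\sum_i \left[y_i \log \sigma(z_i) + (1-y_i)\log(1-\sigma(z_i))\right]$ and the simplified form above give the same value.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
z = rng.normal(size=8)              # illustrative values of z_i = w^T x_i
y = rng.integers(0, 2, size=8)      # labels coded as {0, 1}

# Negative log-likelihood in the Bernoulli form ...
nll_bernoulli = -np.sum(y * np.log(sigmoid(z)) + (1 - y) * np.log(1 - sigmoid(z)))
# ... and in the simplified form  sum_i ( -y_i z_i + log(1 + e^{z_i}) )
nll_simplified = np.sum(-y * z + np.log1p(np.exp(z)))

print(np.allclose(nll_bernoulli, nll_simplified))   # True
```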

Full derivation and additional information can be found on this jupyter notebook. On the other hand, we may have instead used the labels $y = \pm 1$. It is pretty obvious then that we can assign

$$P(y \mid z) = \sigma(yz).$$

It is also obvious that $P(y=0 \mid z) = P(y=-1 \mid z) = \sigma(-z)$. Following the same steps as before we minimize in this case the loss function

$$L(z) = -\log\left(\prod_j^m P(y_j \mid z_j)\right) = -\sum_j^m \log\left(P(y_j \mid z_j)\right) = \sum_j^m \log\left(1 + e^{-y_j z_j}\right)$$

Where the last step follows after we take the reciprocal which is induced by the negative sign. While we should not equate these two forms, given that in each form y takes different values, nevertheless these two are equivalent:

$$-y_i z_i + \log\left(1 + e^{z_i}\right) \equiv \log\left(1 + e^{-y_j z_j}\right)$$

The case $y_i = 1$ is trivial to show. If $y_i \neq 1$, then $y_i = 0$ on the left hand side and $y_j = -1$ on the right hand side.
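To make the per-sample equivalence concrete, here is a small check (my own sketch, not from the answer): re-coding the same labels from $\{0,1\}$ to $\{-1,+1\}$ leaves the loss of every sample unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=8)              # illustrative values of z_j = w^T x_j
y01 = rng.integers(0, 2, size=8)    # labels coded as {0, 1}
ypm = 2 * y01 - 1                   # the same labels re-coded as {-1, +1}

loss_01 = -y01 * z + np.log1p(np.exp(z))   # -y_i z_i + log(1 + e^{z_i})
loss_pm = np.log1p(np.exp(-ypm * z))       # log(1 + e^{-y_j z_j})

print(np.allclose(loss_01, loss_pm))       # True, sample by sample
```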

While there may be fundamental reasons as to why we have two different forms (see Why there are two different logistic loss formulation / notations?), one reason to choose the former is for practical considerations. In the former we can use the property $\partial\sigma(z)/\partial z = \sigma(z)\left(1-\sigma(z)\right)$ to trivially calculate $\nabla l(z)$ and $\nabla^2 l(z)$, both of which are needed for convergence analysis (i.e. to determine the convexity of the loss function by calculating the Hessian).
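For illustration, the gradient and Hessian mentioned here can be written down directly. The sketch below is my own (made-up data, hypothetical helper name) and simply uses $\partial\sigma(z)/\partial z = \sigma(z)(1-\sigma(z))$ for labels in $\{0,1\}$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad_hess(w, X, y):
    """Gradient and Hessian of l(w) = sum_i [ -y_i w^T x_i + log(1 + e^{w^T x_i}) ]
    for labels y in {0, 1}, using d(sigma)/dz = sigma(z) (1 - sigma(z))."""
    p = sigmoid(X @ w)                          # sigma(w^T x_i) for every sample
    grad = X.T @ (p - y)                        # sum_i (sigma_i - y_i) x_i
    hess = X.T @ (X * (p * (1 - p))[:, None])   # X^T diag(sigma (1 - sigma)) X
    return grad, hess

# Tiny usage example with random data
rng = np.random.default_rng(3)
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20)
grad, hess = logistic_grad_hess(np.zeros(3), X, y)
print(np.all(np.linalg.eigvalsh(hess) >= 0))    # Hessian is PSD: the loss is convex
```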


Is logistic loss function convex?
user85361

2
Log reg $l(z)$ IS convex, but not $\alpha$-convex. Thus we can't place a bound on how long gradient descent takes to converge. We can adjust the form of $l$ to make it strongly convex by adding a regularization term: with positive constant $\lambda$ define our new function to be $l'(z) = l(z) + \lambda z^2$ s.t. $l'(z)$ is $\lambda$-strongly convex, and we can now prove a convergence bound for $l'$. Unfortunately, we are now minimizing a different function! Luckily, we can show that the value of the optimum of the regularized function is close to the value of the optimum of the original. (A small code sketch of this idea follows after these comments.)
Manuel Morales

The notebook you referred to has gone; I found another proof: statlect.com/fundamentals-of-statistics/…
Domi.Zhang

2
I found this to be the most helpful answer.
mohit6up

@ManuelMorales Do you have a link to the regularized function's optimum value being close to the original?
Mark
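As a small illustration of the regularization idea in Manuel Morales's comment above (my own sketch with a made-up `logistic_loss` helper, not his code): the convex but not strongly convex logistic loss becomes strongly convex once an L2 term is added.

```python
import numpy as np

def logistic_loss(w, X, y):
    """l(w) = sum_i [ -y_i w^T x_i + log(1 + e^{w^T x_i}) ], labels y in {0, 1}."""
    z = X @ w
    return np.sum(-y * z + np.log1p(np.exp(z)))

def regularized_loss(w, X, y, lam):
    """The comment's l'(w) = l(w) + lambda * ||w||^2 (argued to be strongly convex)."""
    return logistic_loss(w, X, y) + lam * np.dot(w, w)
```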

19

I learned the loss function for logistic regression as follows.

Logistic regression performs binary classification, and so the label outputs are binary, 0 or 1. Let P(y=1|x) be the probability that the binary output y is 1 given the input feature vector x. The coefficients w are the weights that the algorithm is trying to learn.

$$P(y=1 \mid x) = \frac{1}{1 + e^{-w^{T}x}}$$

Because logistic regression is binary, the probability P(y=0|x) is simply 1 minus the term above.

$$P(y=0 \mid x) = 1 - \frac{1}{1 + e^{-w^{T}x}}$$

The loss function $J(w)$ is the sum of (A) the output $y^{(i)}$ multiplied by $\log P(y=1)$ and (B) $(1 - y^{(i)})$ multiplied by $\log P(y=0)$, for one training example, summed over $m$ training examples.

$$J(w) = \sum_{i=1}^{m} y^{(i)} \log P(y=1) + \left(1 - y^{(i)}\right) \log P(y=0)$$

where $y^{(i)}$ indicates the $i$th label in your training data. If a training instance has a label of $1$, then $y^{(i)} = 1$, leaving the left summand in place but making the right summand with $1 - y^{(i)}$ become $0$. On the other hand, if a training instance has $y = 0$, then the right summand with the term $1 - y^{(i)}$ remains in place, but the left summand becomes $0$. Log probability is used for ease of calculation.

If we then replace P(y=1) and P(y=0) with the earlier expressions, then we get:

$$J(w) = \sum_{i=1}^{m} y^{(i)} \log\left(\frac{1}{1 + e^{-w^{T}x}}\right) + \left(1 - y^{(i)}\right) \log\left(1 - \frac{1}{1 + e^{-w^{T}x}}\right)$$
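A quick way to evaluate this expression numerically (my own sketch; `J`, `X`, and `y` are illustrative names, not taken from the lecture notes):

```python
import numpy as np

def J(w, X, y):
    """The summation above: X has one row x per training example, labels y are in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-X @ w))          # P(y=1 | x) for every row of X
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Example: with w = 0 every P(y=1|x) is 0.5, so J = m * log(0.5)
rng = np.random.default_rng(4)
X = rng.normal(size=(10, 2))
y = rng.integers(0, 2, size=10)
print(J(np.zeros(2), X, y))                   # approx. 10 * log(0.5) = -6.93
```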

You can read more about this form in these Stanford lecture notes.


This answer also provides some relevant perspective here.
GeoMatt22

6
The expression you have is not a loss (to be minimized), but rather a log-likelihood (to be maximized).
xenocyon

2
@xenocyon true - this same formulation is typically written with a negative sign applied to the full summation.
Alex Klibisz

1

Instead of Mean Squared Error, we use a cost function called Cross-Entropy, also known as Log Loss. Cross-entropy loss can be divided into two separate cost functions: one for y=1 and one for y=0.

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{Cost}\left(h_\theta(x^{(i)}), y^{(i)}\right)$$
$$\mathrm{Cost}\left(h_\theta(x), y\right) = -\log\left(h_\theta(x)\right) \quad \text{if } y = 1$$
$$\mathrm{Cost}\left(h_\theta(x), y\right) = -\log\left(1 - h_\theta(x)\right) \quad \text{if } y = 0$$

When we put them together we have:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log\left(h_\theta(x^{(i)})\right) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$

Multiplying by $y$ and $(1-y)$ in the above equation is a sneaky trick that lets us use the same equation to solve for both the $y=1$ and $y=0$ cases. If $y=0$, the first term cancels out. If $y=1$, the second term cancels out. In both cases we only perform the operation we need to perform.

If you don't want to use a for loop, you can try a vectorized form of the equation above:

$$h = g(X\theta) \qquad J(\theta) = \frac{1}{m} \left( -y^{T} \log(h) - (1-y)^{T} \log(1-h) \right)$$
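For instance, a straightforward NumPy version of this vectorized form might look like the following (my own sketch; variable names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    """Vectorized cross-entropy cost from the equation above.
    X: (m, n) design matrix, y: (m,) labels in {0, 1}, theta: (n,) parameters."""
    m = len(y)
    h = sigmoid(X @ theta)                                    # h = g(X theta)
    return (-(y @ np.log(h)) - ((1 - y) @ np.log(1 - h))) / m
```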

The entire explanation can be viewed on the Machine Learning Cheatsheet.
