Explanation of the YOLO loss function



I am trying to understand the YOLO v2 loss function:

$$\lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] + \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right] + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} (C_i - \hat{C}_i)^2 + \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}} (C_i - \hat{C}_i)^2 + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\text{obj}} \sum_{c \in \text{classes}} (p_i(c) - \hat{p}_i(c))^2$$

Could someone explain the function in detail?


Nobody can help you without context... at least tell us what paper this is from.
bdeonovic

"I don't understand" and "detail the function" are overly broad. Please try to identify particular questions. Note that there are numerous questions relating to Yolo already, some of which may provide you with at least part of what you seek
Glen_b -Reinstate Monica

I would add my answer if you pointed to what's not clear from this excellent explanation: medium.com/@jonathan_hui/…
Aksakal

In this blog here there is a detailed graphic explanation of YOLO and YOLOv2. It does answer the question regarding the loss function. I find it very useful for beginners and more advanced users.
MBoaretto

Answers:



Explanation of the different terms:

  • The 3 λ constants are just weights used to balance the different aspects of the loss function. In the article, λcoord is the highest in order to give the most importance to the first terms.
  • The prediction of YOLO is an S∗S∗(B∗5+C) vector: B bbox predictions for each grid cell and C class predictions for each grid cell (where C is the number of classes). The 5 bbox outputs of box j of cell i are the coordinates of the center of the bbox (xij, yij), the height hij, the width wij, and a confidence score Cij.
  • I imagine that the values with a hat are the real ones read from the label and the ones without a hat are the predicted ones. So what is the real value from the label for the confidence score of each bbox, Ĉij? It is the intersection over union of the predicted bounding box with the one from the label.
  • 𝟙iobj is 1 when there is an object in cell i and 0 elsewhere.
  • 𝟙ijobj "denotes that the jth bounding box predictor in cell i is responsible for that prediction". In other words, it is equal to 1 if there is an object in cell i and the confidence of the jth predictor of this cell is the highest among all the predictors of this cell. 𝟙ijnoobj is almost the same, except it is 1 when there are NO objects in cell i.

Note that I used two indexes i and j for each bbox prediction; this is not the case in the article, because there is always a factor 𝟙ijobj or 𝟙ijnoobj, so there is no ambiguous interpretation: the j chosen is the one corresponding to the highest confidence score in that cell.
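As an illustration, the masks described above can be sketched in a few lines. This is a hypothetical NumPy sketch (names and shapes are my own, not the paper's code), using this answer's rule of picking the highest-confidence predictor in each object cell:

```python
import numpy as np

def build_masks(conf, obj_cells):
    """Build the indicator masks from predicted confidences.

    conf      : (S*S, B) predicted confidence of each box in each cell
    obj_cells : (S*S,)   1.0 if the label puts an object in cell i, else 0.0

    Returns resp  ~ 1_ij^obj  (1 for the responsible predictor of an object cell)
    and     noobj ~ 1_ij^noobj (1 everywhere else).
    """
    S2, B = conf.shape
    resp = np.zeros((S2, B))
    best_j = conf.argmax(axis=1)           # highest-confidence predictor per cell
    resp[np.arange(S2), best_j] = 1.0
    resp *= obj_cells[:, None]             # keep it only where an object exists
    noobj = 1.0 - resp                     # complement of the responsible mask
    return resp, noobj
```

With, say, two cells and two predictors, only the strongest predictor of the object cell ends up in the `resp` mask; every other slot falls into `noobj`.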

More general explanation of each term of the sum:

  1. This term penalizes bad localization of the center of cells.
  2. This term penalizes bounding boxes with inaccurate height and width. The square root is present so that errors in small bounding boxes are more penalizing than errors in big bounding boxes.
  3. This term tries to make the confidence score equal to the IOU between the object and the prediction when there is an object.
  4. This term tries to make the confidence score close to 0 when there is no object in the cell.
  5. This is a simple classification loss (not explained in the article).
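Putting the five terms together, here is a minimal NumPy sketch of the loss for a single image, assuming the targets and masks have already been prepared (array layouts and names are my own for illustration, not the reference implementation):

```python
import numpy as np

def yolo_v1_loss(pred, target, obj_mask, resp_mask, lam_coord=5.0, lam_noobj=0.5):
    """Sketch of the five YOLOv1 loss terms for one image.

    pred, target: dicts of arrays
      'xy'   : (S*S, B, 2) box centers
      'wh'   : (S*S, B, 2) box widths/heights
      'conf' : (S*S, B)    confidence scores (target = IOU with the label box)
      'prob' : (S*S, C)    class probabilities
    obj_mask : (S*S,)   1 if an object is in cell i          -> 1_i^obj
    resp_mask: (S*S, B) 1 for the responsible predictor j    -> 1_ij^obj
    """
    noobj_mask = 1.0 - resp_mask  # ~ 1_ij^noobj

    # 1) center localization of the responsible boxes
    loss_xy = lam_coord * np.sum(resp_mask[..., None] * (pred['xy'] - target['xy'])**2)
    # 2) width/height, square-rooted so small boxes are penalized harder
    loss_wh = lam_coord * np.sum(resp_mask[..., None] *
                                 (np.sqrt(pred['wh']) - np.sqrt(target['wh']))**2)
    # 3) confidence where an object is present
    loss_obj = np.sum(resp_mask * (pred['conf'] - target['conf'])**2)
    # 4) confidence pushed towards 0 where there is no object
    loss_noobj = lam_noobj * np.sum(noobj_mask * (pred['conf'] - target['conf'])**2)
    # 5) classification, only in cells that contain an object
    loss_cls = np.sum(obj_mask[:, None] * (pred['prob'] - target['prob'])**2)

    return loss_xy + loss_wh + loss_obj + loss_noobj + loss_cls
```

Each summand maps one-to-one to a term of the formula; the masks do the index selection that the 𝟙 factors express in the math.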

Is the second point supposed to be B∗(5+C)? At least that's the case for YOLO v3.
sachinruk

@sachinruk this reflects changes in the model between the original YOLO and its v2 and v3.
David Refaeli


$$\lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] + \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right] + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} (C_i - \hat{C}_i)^2 + \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}} (C_i - \hat{C}_i)^2 + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\text{obj}} \sum_{c \in \text{classes}} (p_i(c) - \hat{p}_i(c))^2$$

Doesn't the YOLOv2 loss function look scary? It actually isn't! It is one of the boldest, smartest loss functions around.

Let's first look at what the network actually predicts.

If we recap, YOLOv2 predicts detections on a 13x13 feature map, so in total, we have 169 maps/cells.

We have 5 anchor boxes. For each anchor box we need an objectness/confidence score (was any object found?), 4 coordinates (tx, ty, tw, and th) for the anchor box, and probabilities for the 20 classes. This can crudely be seen as 20 coordinates, 5 confidence scores, and 100 class probabilities for all 5 anchor box predictions put together.
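The bookkeeping above is easy to verify with a few lines of arithmetic (numbers taken from this answer: a 13x13 grid, 5 anchors, 20 classes):

```python
S, B, num_classes = 13, 5, 20        # grid size, anchor boxes, VOC classes

cells = S * S                        # 169 cells in the 13x13 feature map
per_anchor = 4 + 1 + num_classes     # 4 coords + 1 objectness + 20 classes = 25
per_cell = B * per_anchor            # 125 outputs per cell
coords = B * 4                       # 20 coordinates per cell
confs = B                            # 5 confidence scores per cell
class_probs = B * num_classes        # 100 class probabilities per cell

print(cells, per_cell, coords, confs, class_probs)  # 169 125 20 5 100
```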

We have a few things to worry about:

  • xi, yi, which is the location of the centroid of the anchor box
  • wi, hi, which are the width and height of the anchor box
  • Ci, which is the objectness, i.e. the confidence score of whether there is an object or not, and
  • pi(c), which is the classification loss.
  • We not only need to train the network to detect an object if there is an object in a cell, we also need to punish the network if it predicts an object in a cell when there wasn't any. How do we do this? We use a mask (𝟙iobj and 𝟙inoobj) for each cell. If originally there was an object, 𝟙iobj is 1 and it is 0 for all no-object cells. 𝟙inoobj is just the inverse of 𝟙iobj: it is 1 if there was no object in the cell and 0 if there was.
  • We need to do this for all 169 cells, and
  • We need to do this 5 times (for each anchor box).

All losses are mean-squared errors, except classification loss, which uses cross-entropy function.

Now, let's break the code in the image.

  • We need to compute losses for each anchor box (5 in total)

    • $\sum_{j=0}^{B}$ represents this part, where B = 4 (5 − 1, since the index starts from 0)
  • We need to do this for each of the 13x13 cells, where S = 12 (since we start the index from 0)

    • $\sum_{i=0}^{S^2}$ represents this part.
  • 𝟙ijobj is 1 when there is an object in the cell i, else 0.

  • 𝟙ijnoobj is 1 when there is no object in the cell i, else 0.
  • 𝟙iobj is 1 when a particular class is predicted, else 0.
  • λs are constants. λ is highest for coordinates in order to focus more on detection (remember, in YOLOv2 we first train for recognition and then for detection; penalizing heavily for recognition is a waste of time, so we rather focus on getting the best bounding boxes!)
  • We can also notice that wi, hi are under square roots. This is done to penalize errors on smaller bounding boxes more, as we need better predictions on smaller objects than on bigger objects (author's call). Check out the table below and observe how the smaller values are punished more if we follow the "square-root" method (look at the inflection point when we have 0.3 and 0.2 as the input values) (PS: I have kept the ratio of var1 and var2 the same, just for explanation):

        var1   | var2  | (var1 − var2)² | (√var1 − √var2)²
        0.0300 | 0.020 | 9.99e-05       | 0.0010
        0.0330 | 0.022 | 0.00012        | 0.0011
        0.0693 | 0.046 | 0.000533       | 0.00233
        0.2148 | 0.143 | 0.00512        | 0.00723
        0.3030 | 0.202 | 0.0100         | 0.0100
        0.8808 | 0.587 | 0.0862         | 0.0296
        4.4920 | 2.994 | 2.2421         | 0.1512
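Two rows of the table can be reproduced directly to see the effect (a small Python check; the exact decimals differ slightly from the rounded table values):

```python
import math

def sq_err(a, b):
    """Plain squared error, as used for the box centers."""
    return (a - b) ** 2

def sqrt_err(a, b):
    """Squared error of the square roots, as used for width/height."""
    return (math.sqrt(a) - math.sqrt(b)) ** 2

# small box: the square-root form punishes the error roughly 10x harder
print(sq_err(0.03, 0.02), sqrt_err(0.03, 0.02))      # ~9.99e-05 vs ~0.00101
# large box: the square-root form dampens the error instead
print(sq_err(4.492, 2.994), sqrt_err(4.492, 2.994))  # ~2.244 vs ~0.1514
```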

Not that scary, right!

Read HERE for further details.


Should i and j in the ∑ start from 1 instead of 0?
webbertiger

Yes, that's correct webertiger, I have updated the answer accordingly. Thanks!
RShravan

Isn't 𝟙ijobj 1 when there is an object in cell i for bounding box j, and not for all j? How do we choose which j to set to one and the rest to zero, i.e. what is the correct scale/anchor where it is turned on?
sachinruk

I believe S should still be 13, but if the summation starts at 0 it should end at S²−1.
Julian

@RShravan, you say: "All losses are mean-squared errors, except classification loss, which uses cross-entropy function". Could you explain? In this equation, it looks like MSE as well. Thanks in advance
Julian


Your loss function is for YOLO v1 and not YOLO v2. I was also confused with the difference in the two loss functions and seems like many people are: https://groups.google.com/forum/#!topic/darknet/TJ4dN9R4iJk

YOLOv2 paper explains the difference in architecture from YOLOv1 as follows:

We remove the fully connected layers from YOLO(v1) and use anchor boxes to predict bounding boxes... When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchorbox.

This means that the confidence probability pi(c) above should depend not only on i and c but also on an anchor box index, say j. Therefore, the loss needs to be different from the above. Unfortunately, the YOLOv2 paper does not explicitly state its loss function.

I try to make a guess on the loss function of YOLOv2 and discuss it here: https://fairyonice.github.io/Part_4_Object_Detection_with_Yolo_using_VOC_2012_data_loss.html



Here is my Study Note

  1. Loss function: sum-squared error

    a. Reason: easy to optimize.
    b. Problem: (1) It does not perfectly align with our goal of maximizing average precision. (2) In every image, many grid cells do not contain any object. This pushes the confidence scores of those cells towards 0, often overpowering the gradient from cells that do contain an object.
    c. Solution: increase the loss from bounding box coordinate predictions and decrease the loss from confidence predictions for boxes that don't contain objects. We use two parameters: λcoord = 5 and λnoobj = 0.5.
    d. Sum-squared error also equally weights errors in large boxes and small boxes.

  2. Only one bounding box should be responsible for each object. We assign one predictor to be responsible for predicting an object based on which prediction has the highest current IOU with the ground truth.

a. Loss from bounding box coordinates (x, y). Note that the loss comes from one bounding box of one grid cell, even if the object is not in the grid cell as ground truth.

$$\begin{cases} \lambda_{\text{coord}} \sum_{i=0}^{S^2} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] & \text{responsible bounding box} \\ 0 & \text{otherwise} \end{cases}$$

b. Loss from width w and height h. Note that the loss comes from one bounding box of one grid cell, even if the object is not in the grid cell as ground truth.

$$\begin{cases} \lambda_{\text{coord}} \sum_{i=0}^{S^2} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right] & \text{responsible bounding box} \\ 0 & \text{otherwise} \end{cases}$$

c. Loss from the confidence in each bounding box. Note that the loss comes from one bounding box of one grid cell, even if the object is not in the grid cell as ground truth.

$$\begin{cases} \sum_{i=0}^{S^2} (C_i - \hat{C}_i)^2 & \text{obj in grid cell and responsible bounding box} \\ \lambda_{\text{noobj}} \sum_{i=0}^{S^2} (C_i - \hat{C}_i)^2 & \text{obj not in grid cell and responsible bounding box} \\ 0 & \text{otherwise} \end{cases}$$
d. Loss from the class probability of the grid cell, only when an object is in the grid cell as ground truth.

$$\begin{cases} \sum_{i=0}^{S^2} \sum_{c \in \text{classes}} (p_i(c) - \hat{p}_i(c))^2 & \text{obj in grid cell} \\ 0 & \text{otherwise} \end{cases}$$

The loss function only penalizes classification if an object is present in the grid cell. It also only penalizes bounding box coordinates if that box is responsible for the ground truth box (highest IOU).
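The "highest IOU" rule that picks the responsible predictor can be sketched as follows (a standard corner-format IoU helper; the box layout and example values are assumptions for illustration, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2) corners."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# the predictor with the highest IOU against the ground truth is "responsible"
predictions = [(0.0, 0.0, 2.0, 2.0), (0.9, 0.9, 2.1, 2.1)]
ground_truth = (1.0, 1.0, 2.0, 2.0)
responsible = max(range(len(predictions)), key=lambda j: iou(predictions[j], ground_truth))
print(responsible)  # -> 1, the tighter box wins
```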


Question about 'C': in the paper, confidence is the object-or-no-object value output multiplied by the IOU; is that just for test time, or is it used in the training cost function as well? I thought we just subtract the C value from output and label (just like we did with grid values), but is that incorrect?
moondra


The loss formula you wrote is of the original YOLO paper loss, not the v2, or v3 loss.

There are some major differences between versions. I suggest reading the papers, or checking the code implementations. Papers: v2, v3.

Some major differences I noticed:

  • Class probability is calculated per bounding box (hence the output is now S∗S∗B∗(5+C) instead of S∗S∗(B∗5+C))

  • Bounding box coordinates now have a different representation

  • In v3 they use 3 boxes across 3 different "scales"

You can try getting into the nitty-gritty details of the loss, either by looking at the python/keras implementation v2, v3 (look for the function yolo_loss) or directly at the c implementation v3 (look for delta_yolo_box, and delta_yolo_class).

Licensed under cc by-sa 3.0 with attribution required.