I am trying to understand the YOLO v2 loss function:

$$\begin{array}{rl}& {\lambda}_{coord}\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{obj}[({x}_{i}-{\hat{x}}_{i}{)}^{2}+({y}_{i}-{\hat{y}}_{i}{)}^{2}]\\ & +{\lambda}_{coord}\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{obj}[(\sqrt{{w}_{i}}-\sqrt{{\hat{w}}_{i}}{)}^{2}+(\sqrt{{h}_{i}}-\sqrt{{\hat{h}}_{i}}{)}^{2}]\\ & +\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{obj}({C}_{i}-{\hat{C}}_{i}{)}^{2}+{\lambda}_{noobj}\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{noobj}({C}_{i}-{\hat{C}}_{i}{)}^{2}\\ & +\sum _{i=0}^{{S}^{2}}{\mathbb{1}}_{i}^{obj}\sum _{c\in classes}({p}_{i}(c)-{\hat{p}}_{i}(c){)}^{2}\end{array}$$

If someone could break the function down in detail.


Nobody can help you without context... at least tell us what paper this is from.

—
bdeonovic
"I don't understand" and "detail the function" are overly broad. Please try to identify particular questions. Note that there are numerous questions relating to Yolo already, some of which may provide you with at least part of what you seek

—
Glen_b -Reinstate Monica
I would add my answer if you pointed to what's not clear from this excellent explanation: medium.com/@jonathan_hui/…

—
Aksakal
Answers:

Explanation of the different terms:

- The 3 $\lambda $ constants are just weights to balance the different aspects of the loss function. In the article ${\lambda}_{coord}$ is the highest, so that the first term carries the most importance.
- The prediction of YOLO is a $S\ast S\ast (B\ast 5+C)$ vector : $B$ bbox predictions for each grid cell and $C$ class predictions for each grid cell (where $C$ is the number of classes). The 5 bbox outputs of box j of cell i are the coordinates of the center of the bbox ${x}_{ij}$, ${y}_{ij}$, height ${h}_{ij}$, width ${w}_{ij}$ and a confidence index ${C}_{ij}$.
- I imagine that the values with a hat are the real ones read from the label and the ones without a hat are the predicted ones. So what is the real value from the label for the confidence score of each bbox, ${\hat{C}}_{ij}$ ? It is the intersection over union of the predicted bounding box with the one from the label.
- ${\mathbb{1}}_{i}^{obj}$ is $1$ when there is an object in cell $i$ and $0$ elsewhere
- ${\mathbb{1}}_{ij}^{obj}$ "denotes that the $j$th bounding box predictor in cell $i$ is responsible for that prediction". In other words, it is equal to $1$ if there is an object in cell $i$ and the confidence of the $j$th predictor of this cell is the highest among all the predictors of this cell. ${\mathbb{1}}_{ij}^{noobj}$ is almost the same, except it is $1$ when there are NO objects in cell $i$

Note that I used two indexes $i$ and $j$ for each bbox prediction; this is not the case in the article because there is always a factor ${\mathbb{1}}_{ij}^{obj}$ or ${\mathbb{1}}_{ij}^{noobj}$, so there is no ambiguous interpretation: the $j$ chosen is the one corresponding to the highest confidence score in that cell.
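The confidence target described above (the IoU of the predicted bounding box with the labeled one) can be sketched as follows; this is a minimal illustration, not the paper's code, assuming boxes are given as (x_center, y_center, w, h):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x_center, y_center, w, h)."""
    # Convert center/size to corner coordinates
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Intersection rectangle (clamped to zero if the boxes are disjoint)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

For a perfect prediction the target confidence is 1; for a box that misses the object entirely it is 0.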

More general explanation of each term of the sum:

- this term penalizes bad localization of the center of cells
- this term penalizes bounding boxes with inaccurate height and width. The square root is present so that errors in small bounding boxes are more penalizing than errors in big bounding boxes.
- this term tries to make the confidence score equal to the IOU between the object and the prediction when there is an object
- this term tries to make the confidence score close to $0$ when there is no object in the cell
- this is a simple classification loss (not explained in the article)
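The five terms above can be sketched in NumPy. This is a rough illustration, not the authors' implementation; the tensor layout and the mask arguments are hypothetical names, assumed to be pre-built from the labels (including the responsibility assignment described above):

```python
import numpy as np

def yolo_v1_loss(pred, truth, obj_mask, resp_mask,
                 lambda_coord=5.0, lambda_noobj=0.5):
    """Sum of the five loss terms. Assumed (hypothetical) layout:
    pred/truth: dicts of arrays -- 'xy' (S*S, B, 2), 'wh' (S*S, B, 2),
                'conf' (S*S, B), 'prob' (S*S, C)
    obj_mask:  (S*S,)   -- 1 if an object's center falls in cell i
    resp_mask: (S*S, B) -- 1 for the responsible predictor j of cell i
    """
    noobj_mask = 1.0 - resp_mask
    # Terms 1 and 2: localization (center, then sqrt of width/height)
    xy_loss = lambda_coord * np.sum(
        resp_mask[..., None] * (pred['xy'] - truth['xy']) ** 2)
    wh_loss = lambda_coord * np.sum(
        resp_mask[..., None] * (np.sqrt(pred['wh']) - np.sqrt(truth['wh'])) ** 2)
    # Terms 3 and 4: confidence, with and without an object
    obj_conf_loss = np.sum(resp_mask * (pred['conf'] - truth['conf']) ** 2)
    noobj_conf_loss = lambda_noobj * np.sum(
        noobj_mask * (pred['conf'] - truth['conf']) ** 2)
    # Term 5: per-cell classification
    class_loss = np.sum(obj_mask[:, None] * (pred['prob'] - truth['prob']) ** 2)
    return xy_loss + wh_loss + obj_conf_loss + noobj_conf_loss + class_loss
```

A prediction identical to the ground truth gives a loss of exactly 0; any mismatch in a responsible box contributes through the corresponding term.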

Is the second point supposed to be

—
sachinruk
`B*(5+C)`

? At least that's the case for YOLO v3.
@sachinruk this reflects changes in the model between the original YOLO and its v2 and v3.

—
David Refaeli
$$\begin{array}{rl}& {\lambda}_{coord}\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{obj}[({x}_{i}-{\hat{x}}_{i}{)}^{2}+({y}_{i}-{\hat{y}}_{i}{)}^{2}]\\ & +{\lambda}_{coord}\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{obj}[(\sqrt{{w}_{i}}-\sqrt{{\hat{w}}_{i}}{)}^{2}+(\sqrt{{h}_{i}}-\sqrt{{\hat{h}}_{i}}{)}^{2}]\\ & +\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{obj}({C}_{i}-{\hat{C}}_{i}{)}^{2}+{\lambda}_{noobj}\sum _{i=0}^{{S}^{2}}\sum _{j=0}^{B}{\mathbb{1}}_{ij}^{noobj}({C}_{i}-{\hat{C}}_{i}{)}^{2}\\ & +\sum _{i=0}^{{S}^{2}}{\mathbb{1}}_{i}^{obj}\sum _{c\in classes}({p}_{i}(c)-{\hat{p}}_{i}(c){)}^{2}\end{array}$$

Doesn't the YOLOv2 loss function look scary? It's not, actually! It is one of the boldest, smartest loss functions around.

Let's first look at what the network actually predicts.

If we recap, YOLOv2 predicts detections on a 13x13 feature map, so in total, we have 169 maps/cells.

We have 5 anchor boxes. For each anchor box we need an Objectness/Confidence score (was any object found?), 4 coordinates (${t}_{x},{t}_{y},{t}_{w},$ and ${t}_{h}$) for the anchor box, and 20 class probabilities. Per cell, this can crudely be seen as 20 coordinates, 5 confidence scores, and 100 class probabilities for all 5 anchor box predictions put together.
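The layout just described can be made concrete with a reshape. This is a sketch assuming the v2-style per-anchor class layout S∗S∗B∗(5+C) discussed in the comments below, with made-up data standing in for the network output:

```python
import numpy as np

S, B, num_classes = 13, 5, 20  # YOLOv2 on VOC: 13x13 grid, 5 anchors, 20 classes

# Stand-in for the raw network output: one long vector per image.
raw = np.random.rand(S * S * B * (5 + num_classes))

# Reshape into (cell_row, cell_col, anchor,
#               [t_x, t_y, t_w, t_h, objectness, 20 class scores])
out = raw.reshape(S, S, B, 5 + num_classes)
print(out.shape)  # (13, 13, 5, 25)
```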

We have a few things to worry about:

- ${x}_{i},{y}_{i}$, which is the location of the centroid of the anchor box
- ${w}_{i},{h}_{i}$, which are the width and height of the anchor box
- ${C}_{i}$, which is the *Objectness*, i.e. the confidence score of whether there is an object or not
- ${p}_{i}(c)$, which is the classification loss
- We not only need to train the network to detect an object if there is an object in a cell, we also need to punish the network if it predicts an object in a cell when there wasn't any. How do we do this? We use a mask (${\mathbb{1}}_{i}^{obj}$ and ${\mathbb{1}}_{i}^{noobj}$) for each cell. If there was originally an object, ${\mathbb{1}}_{i}^{obj}$ is 1 and for the other *no-object* cells it is 0. ${\mathbb{1}}_{i}^{noobj}$ is just the inverse of ${\mathbb{1}}_{i}^{obj}$: it is 1 if there was **no** object in the cell and 0 if there was.
- We need to do this for all 169 cells, and
- We need to do this 5 times (for each anchor box).

All losses are *mean-squared* errors, except classification loss, which uses *cross-entropy* function.

Now, let's break down the equation term by term.

We need to compute losses for each Anchor Box (5 in total)

- $\sum _{j=0}^{B}$ represents this part, where B = 4 (5 - 1, since the index starts from 0)

We need to do this for each of the 13x13 cells where S = 12 (since we start index from 0)

- $\sum _{i=0}^{{S}^{2}}$ represents this part.

- ${\mathbb{1}}_{ij}^{obj}$ is 1 when there is an object in cell $i$, else 0.
- ${\mathbb{1}}_{ij}^{noobj}$ is 1 when there is no object in cell $i$, else 0.
- ${\mathbb{1}}_{i}^{obj}$ is 1 when a particular class is predicted, else 0.
- λs are constants. λ is highest for coordinates in order to focus more on detection (remember: in YOLOv2 we first train for recognition and then for detection, so penalizing recognition heavily is a waste of time; instead we focus on getting the best bounding boxes!)
- We can also notice that ${w}_{i},{h}_{i}$ are under a square root. This is done to penalize errors in smaller bounding boxes more, as we need better prediction on smaller objects than on bigger objects (the author's call). Check out the table below and observe how the smaller values are punished more if we follow the "square-root" method (look at the inflection point when we have 0.3 and 0.2 as the input values) (PS: I have kept the ratio of var1 and var2 the same just for explanation):

| var1 | var2 | (var1 - var2)^2 | (sqrt(var1) - sqrt(var2))^2 |
|--------|-------|-----------------|-----------------------------|
| 0.0300 | 0.020 | 9.99e-05        | 0.001                       |
| 0.0330 | 0.022 | 0.00012         | 0.0011                      |
| 0.0693 | 0.046 | 0.000533        | 0.00233                     |
| 0.2148 | 0.143 | 0.00512         | 0.00723                     |
| 0.3030 | 0.202 | 0.01            | 0.01                        |
| 0.8808 | 0.587 | 0.0862          | 0.0296                      |
| 4.4920 | 2.994 | 2.2421          | 0.1512                      |
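The effect shown in the table is easy to verify yourself. This short sketch recomputes a few of the rows and contrasts plain squared error with squared error on square roots:

```python
import math

# Compare plain squared error with squared error on square roots,
# for pairs with a fixed 3:2 ratio (as in the table above).
for var1, var2 in [(0.03, 0.02), (0.303, 0.202), (4.492, 2.994)]:
    plain = (var1 - var2) ** 2
    sqrt_err = (math.sqrt(var1) - math.sqrt(var2)) ** 2
    print(f"{var1:.4f} {var2:.4f}  plain={plain:.5f}  sqrt={sqrt_err:.5f}")
```

For small boxes the square-root error is larger than the plain error; for large boxes the ordering flips, which is exactly the relative-penalty behavior the loss wants.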

Not that scary, right!

Read HERE for further details.

Should i and j in \sum start from 1 instead of 0?

—
webbertiger
Yes, that's correct webbertiger, I have updated the answer accordingly. Thanks!

—
RShravan
Isn't ${\mathbb{1}}_{ij}^{obj}$ 1 when there is an object in cell i for bounding box j, and not for all j? How do we choose which j to set to one and the rest to zero, i.e. what is the correct scale/anchor where it is turned on?

—
sachinruk
I believe S should still be 13 but if the summation starts in 0 it should end in ${S}^{2}-1$

—
Julian
@RShravan, you say: "All losses are mean-squared errors, except classification loss, which uses cross-entropy function". Could you explain? In this equation, it looks like MSE as well. Thanks in advance

—
Julian
Your loss function is for YOLO v1 and not YOLO v2. I was also confused with the difference in the two loss functions and seems like many people are: https://groups.google.com/forum/#!topic/darknet/TJ4dN9R4iJk

YOLOv2 paper explains the difference in architecture from YOLOv1 as follows:

We remove the fully connected layers from YOLO(v1) and use anchor boxes to predict bounding boxes... When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchorbox.

This means that the confidence probability ${p}_{i}(c)$ above should depend not only on $i$ and $c$ but also an anchor box index, say $j$. Therefore, the loss needs to be different from above. Unfortunately, YOLOv2 paper does not explicitly state its loss function.

I try to make a guess on the loss function of YOLOv2 and discuss it here: https://fairyonice.github.io/Part_4_Object_Detection_with_Yolo_using_VOC_2012_data_loss.html

Here is my Study Note

Loss function: sum-squared error

a. Reason: easy to optimize.
b. Problem: (1) It does not perfectly align with our goal of maximizing average precision. (2) In every image, many grid cells do not contain any object. This pushes the confidence scores of those cells towards 0, often overpowering the gradient from cells that do contain an object.
c. Solution: increase the loss from bounding box coordinate predictions and decrease the loss from confidence predictions for boxes that don't contain objects. We use two parameters, ${\lambda}_{coord}=5$ and ${\lambda}_{noobj}=0.5$.
d. Sum-squared error also equally weights errors in large boxes and small boxes.

Only one bounding box should be responsible for each object. We assign one predictor to be responsible for predicting an object based on which prediction has the highest current IOU with the ground truth.
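The responsibility assignment (highest current IOU with the ground truth) can be sketched like this; a minimal illustration, with boxes assumed to be (x_center, y_center, w, h):

```python
def iou_centered(a, b):
    """IoU for boxes given as (x_center, y_center, w, h)."""
    ix = max(0.0, min(a[0] + a[2] / 2, b[0] + b[2] / 2)
                  - max(a[0] - a[2] / 2, b[0] - b[2] / 2))
    iy = max(0.0, min(a[1] + a[3] / 2, b[1] + b[3] / 2)
                  - max(a[1] - a[3] / 2, b[1] - b[3] / 2))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def responsible_box(pred_boxes, truth_box):
    """Index j of the predictor with the highest IoU against the ground truth."""
    return max(range(len(pred_boxes)),
               key=lambda j: iou_centered(pred_boxes[j], truth_box))
```

Only the box returned by `responsible_box` contributes to the coordinate and object-confidence terms for that cell.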

a. Loss from bounding box coordinates (x, y). Note that the loss comes from one bounding box per grid cell, even if the object is not in the grid cell as ground truth.

$$\{\begin{array}{ll}{\lambda}_{coord}\sum _{i=0}^{{S}^{2}}[({x}_{i}-{\hat{x}}_{i}{)}^{2}+({y}_{i}-\hat{{y}_{i}}{)}^{2}]& \text{responsible bounding box}\\ 0& \text{other}\end{array}$$

b. Loss from width w and height h. Note that the loss comes from one bounding box per grid cell, even if the object is not in the grid cell as ground truth.

$$\{\begin{array}{ll}{\lambda}_{coord}\sum _{i=0}^{{S}^{2}}[(\sqrt{{w}_{i}}-\sqrt{{\hat{w}}_{i}}{)}^{2}+(\sqrt{{h}_{i}}-\sqrt{{\hat{h}}_{i}}{)}^{2}]& \text{responsible bounding box}\\ 0& \text{other}\end{array}$$

c. Loss from the confidence in each bounding box. Note that the loss comes from one bounding box per grid cell, even if the object is not in the grid cell as ground truth.

$$\{\begin{array}{ll}\sum _{i=0}^{{S}^{2}}({C}_{i}-{\hat{C}}_{i}{)}^{2}& \text{obj in grid cell and responsible bounding box}\\ {\lambda}_{noobj}\sum _{i=0}^{{S}^{2}}({C}_{i}-{\hat{C}}_{i}{)}^{2}& \text{obj not in grid cell and responsible bounding box}\\ 0& \text{other}\end{array}$$

d. Loss from the class probability of the grid cell: $$\{\begin{array}{ll}\sum _{i=0}^{{S}^{2}}\sum _{c\in classes}({p}_{i}(c)-{\hat{p}}_{i}(c){)}^{2}& \text{obj in grid cell}\\ 0& \text{other}\end{array}$$

The loss function only penalizes classification if an object is present in the grid cell. It also only penalizes bounding box coordinates if that box is responsible for the ground truth box (highest IOU).

Question about 'C': in the paper, confidence is the object-or-no-object value output multiplied by the IOU; is that just for test time, or is that used in the training cost function as well? I thought we just subtract the C value between output and label (just like we did with grid values), but is that incorrect?

—
moondra
The loss formula you wrote is of the original YOLO paper loss, **not** the v2, or v3 loss.

There are some major differences between versions. I suggest reading the papers, or checking the code implementations. Papers: v2, v3.

Some major differences I noticed:

- Class probability is calculated per bounding box (hence the output is now S∗S∗B∗(5+C) instead of S∗S∗(B∗5 + C))
- Bounding box coordinates now have a different representation
- In v3 they use 3 boxes across 3 different "scales"

You can try getting into the nitty-gritty details of the loss, either by looking at the python/keras implementation v2, v3 (look for the function yolo_loss) or directly at the c implementation v3 (look for delta_yolo_box, and delta_yolo_class).


Licensed under cc by-sa 3.0
with attribution required.