In machine learning classification tasks, if you train directly on an imbalanced training set, the overall accuracy might be good, but the accuracy on some minority classes might be bad because they are overlooked during training. For example, in a binary classification task with 99 negative examples and 1 positive example in the training set, your model will likely learn to classify the 99 negative samples correctly with extremely high confidence, but the single positive example will receive little attention in training and is thus likely to be classified incorrectly, or, in a better scenario, correctly but with low confidence. What if people only care about the positive examples? Even with 99% overall accuracy and 100% accuracy on the negative examples, the failure on positive example classification could hardly be tolerated. An ordinary way to overcome this problem is sampling to balance the dataset: if some classes have fewer samples, you oversample (duplicate) them so that the classes become balanced.
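As a minimal sketch of this balancing-by-duplication idea (the helper name and the toy dataset are my own, not from any library):

```python
import random

def oversample(dataset):
    """Duplicate minority-class samples (with replacement) until every
    class has as many samples as the largest class.
    dataset: list of (features, label) pairs."""
    by_label = {}
    for features, label in dataset:
        by_label.setdefault(label, []).append((features, label))
    target = max(len(samples) for samples in by_label.values())
    balanced = []
    for samples in by_label.values():
        balanced.extend(samples)
        # Randomly duplicate minority samples to reach the target count.
        balanced.extend(random.choices(samples, k=target - len(samples)))
    return balanced

# 99 negative examples and 1 positive example, as in the text above.
data = [([0.0], 0)] * 99 + [([1.0], 1)]
balanced = oversample(data)
```

After balancing, both classes contribute 99 samples, so neither dominates the loss by sheer count.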
In object detection tasks, the imbalanced training set problem is even more significant. Given an image, object detection algorithms usually have to propose a large number of regions in which potential objects might sit. In the R-CNN and Fast R-CNN algorithms, the number of regions proposed is intentionally limited to several thousand. In Faster R-CNN and other models with CNN-based region proposal mechanisms, the number of regions proposed could be as high as several hundred thousand. Of course, most of the proposed regions are negative examples with no object inside, so this class imbalance problem definitely has to be addressed in object detection. In R-CNN and Fast R-CNN, because the model is not end-to-end and consists of several distinct models, the class imbalance problem could be solved by oversampling the minority class or removing majority class samples. However, in end-to-end models, sampling to balance the classes cannot be easily achieved, to the best of my knowledge.
Mathematically, sampling is equivalent to adding weights to samples. For example, in some televised art competitions, the professional judges and the entire audience vote to decide which contestant wins. Because the audience is far larger than the panel of professional judges, we should assign more "weight" to the votes from the professional judges to reflect their professional judgment of the art. It could be that one vote from a professional judge is equivalent to one thousand votes from the ordinary audience. Facebook AI Research tried to solve the class imbalance problem in a similar way.
Cross Entropy Loss
Normally, we use the sigmoid function for binary classification and the Softmax function for multi-class classification to calculate the probability of a sample belonging to a certain class. The loss function used, regardless of whether it is binary or multi-class classification, is usually the cross entropy loss.
The mathematical expression for the discrete version of cross entropy is:

$$H(p, q) = -\sum_{i=1}^{n} p_i \log(q_i)$$

where $n$ is the number of all possible discretized distribution bins, $p_i$ is the probability of being in bin $i$ under distribution $p$, and $q_i$ is the probability of being in bin $i$ under distribution $q$. It should also be noted that cross entropy is not symmetric, i.e., $H(p,q) \neq H(q,p)$.
When this cross entropy is used as a loss function in classification problems, $p_i$ has to be the ground truth probability of the sample being class $i$, and $q_i$ has to be the probability of the sample being class $i$ inferred by the neural network. This is because all of the $p_i$s except one are zero, while all of the $q_i$s are non-zero due to the nature of the sigmoid or Softmax function, and we cannot take $\log(0)$.
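As a small numerical illustration (the two distributions here are made up), the discrete cross entropy and its asymmetry can be checked directly:

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log(q_i), skipping bins where p_i == 0."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.8, 0.1, 0.1]  # hypothetical "ground truth" distribution
q = [0.6, 0.3, 0.1]  # hypothetical "inferred" distribution

h_pq = cross_entropy(p, q)
h_qp = cross_entropy(q, p)
# h_pq != h_qp: cross entropy is not symmetric.
```

With a one-hot $p$, as in classification, the sum collapses to $-\log(q_{\text{true}})$ for the true class.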
It is also not hard to find out that the log loss we normally use in binary classification is actually a special case of cross entropy loss where the number of classes $n=2$:

$$L = -y \log(p) - (1 - y) \log(1 - p)$$

where $y \in \{0, 1\}$ is the ground truth label and $p$ is the inferred probability of the sample being positive. It is also not hard to find out that the above expression is equivalent to the expression below:

$$L = \begin{cases} -\log(p) & \text{if } y = 1 \\ -\log(1-p) & \text{otherwise} \end{cases}$$

More concisely, $L = -\log(p_t)$, where

$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1-p & \text{otherwise} \end{cases}$$
Facebook AI Research added a weighting term in front of the cross entropy loss in the paper "Focal Loss for Dense Object Detection". They called this loss "focal loss".
Formally, the focal loss is expressed as follows:

$$\text{FL}(p_t) = -\alpha_t (1-p_t)^\gamma \log(p_t)$$

where $\gamma$ is a prefixed positive scalar value and

$$\alpha_t = \begin{cases} \alpha & \text{if } y = 1 \\ 1-\alpha & \text{otherwise} \end{cases}$$

where $\alpha$ is a prefixed value between 0 and 1 to balance the positively labeled and negatively labeled samples; such $\alpha$-weighting is one of the most common ways to balance the classes.
We see that the weighting term $\alpha_t(1-p_t)^\gamma$ added in front of the cross entropy loss depends on the value of $p_t$: when $p_t$ is larger, the weight is smaller; when $p_t$ is smaller, the weight is larger.
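A minimal per-sample sketch of this loss for binary classification (the function and argument names are my own; the natural logarithm is used here, as is standard in implementations):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one sample.
    p: predicted probability of the positive class; y: label in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The modulating weight shrinks as p_t (confidence in the truth) grows.
    weight = alpha_t * (1.0 - p_t) ** gamma
    return weight * -math.log(p_t)

# A confidently correct negative is heavily down-weighted ...
easy_negative = focal_loss(p=0.01, y=0)   # p_t = 0.99
# ... while a badly misclassified positive keeps most of its loss.
hard_positive = focal_loss(p=0.01, y=1)   # p_t = 0.01
```

Setting `gamma=0` and `alpha=0.5` recovers (half of) the ordinary cross entropy loss, which makes the role of each knob easy to check.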
This reminds me of the boosting algorithms, where the previously incorrectly classified examples receive more weight, although in a different context.
Focal Loss in Object Detection
In end-to-end object detection, the regions proposed contain way more negative samples than positive samples. With focal loss, when the optimizer starts to classify the negative samples correctly and the minority positive samples incorrectly, the loss from the positive samples will dominate the total loss; effective training will thus continue, and the optimizer will keep optimizing towards classifying the positive samples correctly. With ordinary cross entropy loss, although the loss from a single correctly classified negative sample is small, because of the population advantage the loss from the negative samples is still likely to dominate the total loss. The optimizer will then try to further optimize the classification of the negative samples, but "ignore" the positive samples.
Let us set $\alpha = 0.25$ and $\gamma = 2$. Thus $\alpha_t = 0.25$ for positive samples, and $\alpha_t = 0.75$ for negative samples. Note that it would seem to make more sense to use $\alpha = 0.75$, since the positive samples are usually the minority. However, we can see in the calculations below that the contribution of $\alpha$ sometimes does not affect the loss significantly. In practice, we may fine-tune $\alpha$ to get better model accuracy.
If we classify a negative sample to the ground truth target with probability $p_t = 0.99$ (doing an awesome job), because $\alpha_t = 1 - \alpha = 0.75$, the weight is $\alpha_t(1-p_t)^\gamma = 0.000075$. The cross entropy term $-\log(p_t) = 0.0043648054$ (using the base-10 logarithm) is already a small number, and further discounting it makes it even less important in training.
If we classify a positive sample to the ground truth target with probability $p_t = 0.01$ (doing an awful job), because $\alpha_t = \alpha = 0.25$, the weight is $\alpha_t(1-p_t)^\gamma = 0.245025$. Although the cross entropy term $-\log(p_t) = 2$ is still discounted, it is much less affected than for the correctly classified examples.
Let us see an extreme example. Suppose we have 1,000,000 examples with $p_t = 0.99$ and 10 examples with $p_t = 0.01$. The 1,000,000 examples with $p_t = 0.99$ happen to all be negative examples, and the 10 examples with $p_t = 0.01$ happen to all be positive examples.
In the scenario using the ordinary cross entropy loss, the loss from the negative examples is $1000000 \times 0.0043648054 = 4364.8$ and the loss from the positive examples is $10 \times 2 = 20$. The loss contribution from the positive examples is $20 / (4364.8 + 20) = 0.0046$. Almost negligible.
In the scenario using focal loss, the loss from the negative examples is $1000000 \times 0.0043648054 \times 0.000075 = 0.3274$ and the loss from the positive examples is $10 \times 2 \times 0.245025 = 4.901$. The loss contribution from the positive examples is $4.901 / (4.901 + 0.3274) = 0.9374$! It dominates the total loss now!
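The arithmetic of this extreme example can be reproduced in a few lines (the base-10 logarithm is used here to match the numbers above):

```python
import math

n_neg, n_pos = 1_000_000, 10
ce_neg = -math.log10(0.99)   # per-sample CE of a correct negative, ~0.00436
ce_pos = -math.log10(0.01)   # per-sample CE of a wrong positive, = 2

# Ordinary cross entropy: the positives are drowned out by sheer count.
ce_pos_fraction = n_pos * ce_pos / (n_neg * ce_neg + n_pos * ce_pos)

# Focal loss weights with alpha = 0.25 and gamma = 2.
w_neg = 0.75 * (1 - 0.99) ** 2   # 0.000075
w_pos = 0.25 * (1 - 0.01) ** 2   # 0.245025
fl_pos_fraction = (n_pos * ce_pos * w_pos) / (
    n_neg * ce_neg * w_neg + n_pos * ce_pos * w_pos)
```

Running this gives the two fractions quoted above: about 0.0046 with plain cross entropy versus about 0.9374 with focal loss.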
This extreme example demonstrates that the minority class samples are much less likely to be ignored during training.
Focal Loss Trick
In practice, focal loss does not work well if you do not apply a trick. In ordinary neural network optimization, the weights are initialized according to some distribution so that the output of each layer is normally distributed (Xavier initialization, for example). If so, the input to the final sigmoid will also be normally distributed around zero, and therefore the initial probabilities of all the proposed regions, regardless of whether they are positive or negative samples, will follow some distribution centered at 0.5. This makes the minority positive samples less important in the initial training stage. Therefore, initially, and maybe for a very long period of time, the training will not go well for the positive samples.
The trick that Facebook AI Research used is to initialize the bias term of the last layer to some non-zero value such that the initial $p_t$ of the positive samples is small and the initial $p_t$ of the negative samples is large. Concretely, they set the bias term to $b = -\log((1-\pi)/\pi)$, where $\pi$ is simply a variable rather than the mathematical constant $\pi$. In their case, they set $\pi = 0.01$, so $|b| \gg |wx|$ initially.
Therefore, according to the sigmoid function,

$$p = \frac{1}{1 + e^{-(wx + b)}} \approx \frac{1}{1 + e^{-b}} = \frac{1}{1 + (1-\pi)/\pi} = \pi = 0.01$$

$p = 0.01$ means that the positive samples initially all have extremely incorrect inferences and thus receive high weights, while the negative examples all have extremely correct inferences and thus receive low weights. Therefore, the positive examples will receive "attention" in the early training stage, and the whole training process is likely to go smoothly.
Focal loss is very useful for training on imbalanced datasets, especially in object detection tasks. However, I was surprised that such an intuitive loss function was proposed so late. I am also not sure whether my memory of seeing similar loss functions in older literature is correct. Anyway, I hope this makes focal loss, along with its motivation, intuition, and math, crystal clear.