Machine Learning: The Loss Function of Faster RCNN

The loss function of Faster RCNN takes the following form:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*) \qquad (1)

p(i): the predicted classification probability of Anchor[i];

p(i)*: equals 1 when Anchor[i] is a positive sample, and 0 when Anchor[i] is a negative sample;

What are positive and negative samples? An Anchor is a positive sample if it satisfies either of the following conditions: it has the largest IOU (Intersection-over-Union) overlap with a Ground Truth Box among all Anchors, or its IOU overlap with some Ground Truth Box is greater than 0.7. An Anchor is a negative sample if its IOU overlap with every Ground Truth Box is less than 0.3. Anchors that are neither positive nor negative do not take part in training; a minimal sketch of this labeling rule follows.
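
The NumPy sketch below illustrates the rule just described. It is only a simplified illustration, not the actual target-assignment code: the function name label_anchors and its overlaps argument (an (N, K) Anchor-versus-Ground-Truth IOU matrix, such as the one produced by bbox_overlaps later in this article) are assumptions for the example.

import numpy as np

def label_anchors(overlaps, pos_thresh=0.7, neg_thresh=0.3):
    """Illustrative sketch of anchor labeling. overlaps: (N, K) IOU matrix
    between N anchors and K ground-truth boxes.
    Returns labels: 1 = positive, 0 = negative, -1 = ignored."""
    labels = np.full(overlaps.shape[0], -1, dtype=np.int32)  # default: ignore
    max_iou = overlaps.max(axis=1)        # best IOU of each anchor over all GT boxes
    labels[max_iou < neg_thresh] = 0      # negative: IOU < 0.3 with every GT box
    labels[max_iou > pos_thresh] = 1      # positive: IOU > 0.7 with some GT box
    labels[overlaps.argmax(axis=0)] = 1   # positive: highest-IOU anchor of each GT box
    return labels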

t(i): the parameterized coordinates (parameterized coordinates) of the Bounding Box predicted for Anchor[i];

t(i)*: the parameterized coordinates of the Ground Truth Bounding Box associated with Anchor[i];

t_x = (x - x_a)/w_a, \quad t_y = (y - y_a)/h_a, \quad t_w = \log(w/w_a), \quad t_h = \log(h/h_a)
t_x^* = (x^* - x_a)/w_a, \quad t_y^* = (y^* - y_a)/h_a, \quad t_w^* = \log(w^*/w_a), \quad t_h^* = \log(h^*/h_a)

where x, y, w, h denote the box's center coordinates, width and height, and x, x_a, x^* refer to the predicted box, the Anchor box and the Ground Truth box respectively (likewise for y, w, h).
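
As a minimal NumPy sketch of this parameterization (the function name encode_boxes is hypothetical, and boxes are assumed to be given as (x1, y1, x2, y2) corner coordinates):

import numpy as np

def encode_boxes(boxes, anchors):
    """Hypothetical sketch: parameterize `boxes` relative to `anchors`;
    both are (N, 4) arrays of (x1, y1, x2, y2). Returns (t_x, t_y, t_w, t_h)."""
    # convert corners to center / width / height
    aw = anchors[:, 2] - anchors[:, 0]
    ah = anchors[:, 3] - anchors[:, 1]
    ax = anchors[:, 0] + 0.5 * aw
    ay = anchors[:, 1] + 0.5 * ah
    bw = boxes[:, 2] - boxes[:, 0]
    bh = boxes[:, 3] - boxes[:, 1]
    bx = boxes[:, 0] + 0.5 * bw
    by = boxes[:, 1] + 0.5 * bh
    # t_x = (x - x_a)/w_a, t_y = (y - y_a)/h_a, t_w = log(w/w_a), t_h = log(h/h_a)
    return np.stack([(bx - ax) / aw, (by - ay) / ah,
                     np.log(bw / aw), np.log(bh / ah)], axis=1)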

N(cls): mini-batch size;

N(reg): the number of Anchor locations;

L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)

where R is the Smooth L1 function;

The factor p_i^* in the term p_i^* L_{reg}(t_i, t_i^*) means the Bounding Box regression is activated only for positive samples; for negative samples p_i^* = 0 and the regression term vanishes.

Smooth L1 Loss

\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}

\frac{\mathrm{d}\,\mathrm{smooth}_{L1}(x)}{\mathrm{d}x} = \begin{cases} x & \text{if } |x| < 1 \\ \pm 1 & \text{otherwise} \end{cases}
Smooth L1 avoids the drawbacks of both the L1 and L2 losses: when x is small, the gradient with respect to x also becomes small, and when x is large, the absolute value of the gradient is capped at 1, so a large prediction error cannot produce a huge gradient that destabilizes training.
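
A small NumPy sketch (illustrative only) of the σ = 1 Smooth L1 and its gradient makes both properties easy to verify:

import numpy as np

def smooth_l1(x):
    # 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def smooth_l1_grad(x):
    # gradient: x inside the quadratic region, +/-1 outside
    return np.where(np.abs(x) < 1, x, np.sign(x))

x = np.array([0.01, 0.5, 2.0, 100.0])
print(smooth_l1_grad(x))  # -> 0.01, 0.5, 1.0, 1.0: small near 0, capped at 1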

L(cls): the log loss over the two classes (object vs. not object):

L_{cls}(p_i, p_i^*) = -\left[ p_i^* \log p_i + (1 - p_i^*) \log(1 - p_i) \right]

λ: the balancing weight between the two loss terms. In the paper the authors set λ = 10, but experiments show the results are insensitive to changes in λ: as the table below shows, varying λ from 1 to 100 affects the final result by less than 1%.

(Table: detection mAP under different values of λ, from the Faster R-CNN paper.)

Code Implementation

Smooth L1 Loss

def _smooth_l1_loss(self, bbox_pred, bbox_targets, bbox_inside_weights,
                    bbox_outside_weights, sigma=1.0, dim=[1]):
    sigma_2 = sigma ** 2
    box_diff = bbox_pred - bbox_targets
    # bbox_inside_weights zeroes out the diff for non-positive samples (p* in Eq. (1))
    in_box_diff = bbox_inside_weights * box_diff
    abs_in_box_diff = tf.abs(in_box_diff)
    # 1 inside the quadratic region |x| < 1 / sigma^2, 0 outside
    smoothL1_sign = tf.stop_gradient(tf.to_float(tf.less(abs_in_box_diff, 1. / sigma_2)))
    in_loss_box = tf.pow(in_box_diff, 2) * (sigma_2 / 2.) * smoothL1_sign \
                  + (abs_in_box_diff - (0.5 / sigma_2)) * (1. - smoothL1_sign)
    # bbox_outside_weights applies the 1/N normalization of Eq. (1)
    out_loss_box = bbox_outside_weights * in_loss_box
    loss_box = tf.reduce_mean(tf.reduce_sum(out_loss_box, axis=dim))
    return loss_box

The Smooth L1 Loss in the code is more general than the paper's: a parameter σ controls where the loss switches from quadratic to linear:

\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5(\sigma x)^2 & \text{if } |x| < 1/\sigma^2 \\ |x| - 0.5/\sigma^2 & \text{otherwise} \end{cases}
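
A NumPy transcription of this σ-generalized form (a sketch for sanity-checking the formula, not code from the repository):

import numpy as np

def smooth_l1_sigma(x, sigma=1.0):
    s2 = sigma ** 2
    # the quadratic region shrinks to |x| < 1 / sigma^2 as sigma grows
    quad = np.abs(x) < 1.0 / s2
    return np.where(quad, 0.5 * s2 * x ** 2, np.abs(x) - 0.5 / s2)

x = np.linspace(-2.0, 2.0, 9)
# sigma = 1 recovers the standard Smooth L1 of the paper
print(np.allclose(smooth_l1_sigma(x, 1.0),
                  np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)))  # True

With σ = 1 this is exactly the paper's Smooth L1; the RPN loss below passes sigma = 3, which shrinks the quadratic region to |x| < 1/9 and penalizes anchor regression errors more like plain L1.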

bbox_inside_weights corresponds to p* in Equation (1) (the Faster RCNN loss function): its value is 1 when the Anchor is a positive sample and 0 when it is a negative sample. bbox_outside_weights corresponds to the N(reg), λ and N(cls) settings in Equation (1). In the paper, N(reg) ≈ 2400, λ = 10 and N(cls) = 256, so λ/N(reg) = 10/2400 ≈ 1/240 ≈ 1/256 = 1/N(cls), and the classification and regression losses carry roughly equal weight.

In the code, N(reg) = N(cls) and λ = 1 are used instead, which likewise gives the two losses roughly equal weight.

Loss

def _add_losses(self, sigma_rpn=3.0):
    with tf.variable_scope('LOSS_' + self._tag) as scope:
        # RPN, class loss
        rpn_cls_score = tf.reshape(self._predictions['rpn_cls_score_reshape'], [-1, 2])
        rpn_label = tf.reshape(self._anchor_targets['rpn_labels'], [-1])
        rpn_select = tf.where(tf.not_equal(rpn_label, -1))
        rpn_cls_score = tf.reshape(tf.gather(rpn_cls_score, rpn_select), [-1, 2])
        rpn_label = tf.reshape(tf.gather(rpn_label, rpn_select), [-1])
        rpn_cross_entropy = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(logits=rpn_cls_score, labels=rpn_label))

        # RPN, bbox loss
        rpn_bbox_pred = self._predictions['rpn_bbox_pred']
        rpn_bbox_targets = self._anchor_targets['rpn_bbox_targets']
        rpn_bbox_inside_weights = self._anchor_targets['rpn_bbox_inside_weights']
        rpn_bbox_outside_weights = self._anchor_targets['rpn_bbox_outside_weights']
        rpn_loss_box = self._smooth_l1_loss(rpn_bbox_pred, rpn_bbox_targets, rpn_bbox_inside_weights,
                                            rpn_bbox_outside_weights, sigma=sigma_rpn, dim=[1, 2, 3])

        # RCNN, class loss
        cls_score = self._predictions["cls_score"]
        label = tf.reshape(self._proposal_targets["labels"], [-1])
        cross_entropy = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(logits=cls_score, labels=label))

        # RCNN, bbox loss
        bbox_pred = self._predictions['bbox_pred']
        bbox_targets = self._proposal_targets['bbox_targets']
        bbox_inside_weights = self._proposal_targets['bbox_inside_weights']
        bbox_outside_weights = self._proposal_targets['bbox_outside_weights']
        loss_box = self._smooth_l1_loss(bbox_pred, bbox_targets, bbox_inside_weights, bbox_outside_weights)

        self._losses['cross_entropy'] = cross_entropy
        self._losses['loss_box'] = loss_box
        self._losses['rpn_cross_entropy'] = rpn_cross_entropy
        self._losses['rpn_loss_box'] = rpn_loss_box

        loss = cross_entropy + loss_box + rpn_cross_entropy + rpn_loss_box
        regularization_loss = tf.add_n(tf.losses.get_regularization_losses(), 'regu')
        self._losses['total_loss'] = loss + regularization_loss

        self._event_summaries.update(self._losses)

    return loss

The total loss thus combines the RPN cross-entropy, the RPN box regression, the RCNN cross-entropy, the RCNN box regression, and the parameter regularization loss.

Computing IOU

def bbox_overlaps(
        np.ndarray[DTYPE_t, ndim=2] boxes,
        np.ndarray[DTYPE_t, ndim=2] query_boxes):
    """
    Parameters
    ----------
    boxes: (N, 4) ndarray of float
    query_boxes: (K, 4) ndarray of float
    Returns
    -------
    overlaps: (N, K) ndarray of overlap between boxes and query_boxes
    """
    cdef unsigned int N = boxes.shape[0]
    cdef unsigned int K = query_boxes.shape[0]
    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)
    cdef DTYPE_t iw, ih, box_area
    cdef DTYPE_t ua
    cdef unsigned int k, n
    for k in range(K):
        box_area = (
            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *
            (query_boxes[k, 3] - query_boxes[k, 1] + 1)
        )
        for n in range(N):
            # intersection width ("+ 1" because coordinates are inclusive pixel indices)
            iw = (
                min(boxes[n, 2], query_boxes[k, 2]) -
                max(boxes[n, 0], query_boxes[k, 0]) + 1
            )
            if iw > 0:
                # intersection height
                ih = (
                    min(boxes[n, 3], query_boxes[k, 3]) -
                    max(boxes[n, 1], query_boxes[k, 1]) + 1
                )
                if ih > 0:
                    # union area = area(box) + area(query_box) - intersection
                    ua = float(
                        (boxes[n, 2] - boxes[n, 0] + 1) *
                        (boxes[n, 3] - boxes[n, 1] + 1) +
                        box_area - iw * ih
                    )
                    overlaps[n, k] = iw * ih / ua
    return overlaps

The IOU (overlap ratio) is computed as IOU = C / (A + B - C), where A and B are the areas of the two boxes and C is the area of their intersection.
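
As a quick check of this formula and of the inclusive-coordinate (+1) convention used by bbox_overlaps above (a pure-Python sketch; the helper iou is hypothetical, not part of the code base):

def iou(a, b):
    """Hypothetical sketch: IOU of two boxes given as (x1, y1, x2, y2)
    inclusive pixel coordinates."""
    iw = min(a[2], b[2]) - max(a[0], b[0]) + 1  # intersection width
    ih = min(a[3], b[3]) - max(a[1], b[1]) + 1  # intersection height
    if iw <= 0 or ih <= 0:
        return 0.0
    area_a = (a[2] - a[0] + 1) * (a[3] - a[1] + 1)
    area_b = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
    return iw * ih / float(area_a + area_b - iw * ih)  # C / (A + B - C)

# two 10x10 boxes sharing a 5x10 strip: IOU = 50 / (100 + 100 - 50) = 1/3
print(iou((0, 0, 9, 9), (5, 0, 14, 9)))  # 0.333...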


