AI with Python, TensorFlow 2.0 Tutorial 5: One-Hot Encoding and Cross Entropy (continuously updated)

Cross entropy is a kind of loss function (also called a cost function); it measures how far a model's predicted values are from the true values. Another common loss function is mean squared error (MSE).

How cross entropy works

Cross entropy measures the distance between the actual output (a probability distribution) and the expected output (a probability distribution): the smaller the cross entropy, the closer the two distributions are. Let the distribution p be the expected output and the distribution q be the actual output, and let H(p, q) denote their cross entropy. Then:

H(p, q) = -∑x p(x) · log q(x)   (the sum runs over all classes x)

How does this formula express a distance? Take an example: suppose N = 3, the expected output is p = (1, 0, 0), and two actual outputs are q1 = (0.5, 0.2, 0.3) and q2 = (0.8, 0.1, 0.1). Then:

H(p, q1) = -(1·log 0.5 + 0·log 0.2 + 0·log 0.3) = -log 0.5 ≈ 0.693
H(p, q2) = -(1·log 0.8 + 0·log 0.1 + 0·log 0.1) = -log 0.8 ≈ 0.223
(using the natural logarithm). Since H(p, q2) < H(p, q1), q2 is closer to the expected output p.
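A quick way to check these numbers is to evaluate the formula directly. The sketch below is a minimal NumPy version, assuming the natural logarithm; p, q1 and q2 are just the values from the example above.

import numpy as np

def cross_entropy(p, q):
    # H(p, q) = -sum_x p(x) * log(q(x)), using the natural logarithm
    p, q = np.asarray(p, dtype=np.float64), np.asarray(q, dtype=np.float64)
    return -np.sum(p * np.log(q))

p  = [1.0, 0.0, 0.0]   # expected (true) distribution
q1 = [0.5, 0.2, 0.3]   # first actual output
q2 = [0.8, 0.1, 0.1]   # second actual output

print(cross_entropy(p, q1))  # ~0.693
print(cross_entropy(p, q2))  # ~0.223 -> smaller, so q2 is closer to p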

TensorFlow's cross entropy functions

For classification problems, TensorFlow implements four cross entropy functions:

1. tf.nn.sigmoid_cross_entropy_with_logits

Computes sigmoid cross entropy given the logits.

2. tf.nn.softmax_cross_entropy_with_logits

Computes softmax cross entropy between logits and labels.

3. tf.nn.sparse_softmax_cross_entropy_with_logits

Computes sparse softmax cross entropy between logits and labels.

4. tf.nn.weighted_cross_entropy_with_logits

Computes weighted cross entropy.
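As a quick illustration, the sketch below calls each of the four functions on tiny hand-made tensors (the shapes and values are made up for this example; TF 2.x is assumed):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])          # [batch_size=2, num_classes=3]
onehot_labels = tf.constant([[1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0]])   # one-hot float labels
int_labels = tf.constant([0, 1])                 # integer class indices

# 1. Per-element sigmoid cross entropy (multi-label style targets in [0, 1])
print(tf.nn.sigmoid_cross_entropy_with_logits(labels=onehot_labels, logits=logits))

# 2. Softmax cross entropy with one-hot (dense) labels -> one loss per example
print(tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits))

# 3. Sparse softmax cross entropy with integer labels -> one loss per example
print(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=int_labels, logits=logits))

# 4. Weighted sigmoid cross entropy: positive targets are scaled by pos_weight
print(tf.nn.weighted_cross_entropy_with_logits(labels=onehot_labels, logits=logits,
                                               pos_weight=2.0))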

The difference between softmax_cross_entropy_with_logits and sparse_softmax_cross_entropy_with_logits

Explanation:

Having two different functions is a convenience, as they produce the same result.

The difference is simple:

  • For sparse_softmax_cross_entropy_with_logits, labels must have the shape [batch_size] and the dtype int32 or int64. Each label is an int in range [0, num_classes-1].
  • For softmax_cross_entropy_with_logits, labels must have the shape [batch_size, num_classes] and dtype float32 or float64.

Labels used in softmax_cross_entropy_with_logits are the one-hot version of the labels used in sparse_softmax_cross_entropy_with_logits.

Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.
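To make this concrete, here is a minimal sketch (TF 2.x assumed) showing that the sparse variant with integer labels and the dense variant with the equivalent one-hot labels produce the same per-example loss:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])

sparse_labels = tf.constant([0, 1])                 # shape [batch_size], int dtype
dense_labels  = tf.one_hot(sparse_labels, depth=3)  # shape [batch_size, num_classes]

loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)
loss_dense = tf.nn.softmax_cross_entropy_with_logits(
    labels=dense_labels, logits=logits)

print(loss_sparse.numpy())   # per-example losses
print(loss_dense.numpy())    # identical values to the line above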


One-hot encoding: one_hot

import tensorflow as tf

# Integer class labels 0..4
v = tuple(range(0, 5))

# One-hot encode with tf.one_hot -> returns a tf.Tensor
v_onehot = tf.one_hot(v, len(v))

# One-hot encode with the Keras utility -> returns a NumPy array
v2 = tf.keras.utils.to_categorical(v, num_classes=len(v))

print("v_onehot:\n", v_onehot)

print("v2:\n", v2, type(v2))

v_onehot:
 tf.Tensor(
[[1. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0.]
 [0. 0. 1. 0. 0.]
 [0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 1.]], shape=(5, 5), dtype=float32)
v2:
 [[1. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0.]
 [0. 0. 1. 0. 0.]
 [0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 1.]] <class 'numpy.ndarray'>
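In Keras the same distinction shows up in the loss you pick: with one-hot labels (e.g. from to_categorical) you use categorical cross entropy, while plain integer labels go with the sparse variant. A minimal sketch, with made-up data and a made-up model purely for illustration:

import numpy as np
import tensorflow as tf

num_classes = 5
x = np.random.rand(32, 10).astype("float32")        # fake features
y_int = np.random.randint(0, num_classes, size=32)  # integer labels
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=num_classes)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(num_classes)               # raw logits, no softmax
])

# One-hot labels -> CategoricalCrossentropy (from_logits=True because the last
# layer outputs logits); integer labels would instead use
# SparseCategoricalCrossentropy with the same from_logits setting.
model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
model.fit(x, y_onehot, epochs=1, verbose=0)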

