L1 Norm, L2 Norm, MAE, MSE, RMSE: Definitions and How to Compute Them in PyTorch
Vector / Matrix Norm Definitions
L1 norm: the sum of the absolute values of the elements
\begin{Vmatrix} X \end{Vmatrix}_1 = \sum_{i=1}^n \lvert x_i \rvert
L2 norm: the square root of the sum of the squared elements
\begin{Vmatrix} X \end{Vmatrix}_2 = \sqrt {\sum_{i=1}^n x_i^2}
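To make the two definitions concrete, here is a small plain-Python sketch that computes both norms directly from the formulas above (the helper names `l1_norm` and `l2_norm` are just illustrative, not library functions):

```python
import math

def l1_norm(x):
    # L1 norm: sum of the absolute values of the elements
    return sum(abs(v) for v in x)

def l2_norm(x):
    # L2 norm: square root of the sum of the squared elements
    return math.sqrt(sum(v * v for v in x))

x = [1.0, 2.0, 3.0]
print(l1_norm(x))  # 6.0
print(l2_norm(x))  # sqrt(14) ≈ 3.7417
```

The results match the `torch.norm` outputs shown later for the same vector.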
MAE: Mean Absolute Error
MAE(x, y) = \frac1n \sum_{i=1}^n{\lvert y_i - x_i \rvert}
MSE: Mean Square Error
MSE(x, y) = \frac1n \sum_{i=1}^n{(y_i - x_i)^2}
RMSE: Root Mean Square Error
RMSE(x, y) = \sqrt{MSE(x, y)} = \sqrt{\frac1n \sum_{i=1}^n{(y_i - x_i)^2}}
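The three error formulas can be checked with a minimal plain-Python sketch implementing them directly from the definitions (the function names are illustrative, not part of any library):

```python
import math

def mae(x, y):
    # Mean Absolute Error: average of |y_i - x_i|
    return sum(abs(b - a) for a, b in zip(x, y)) / len(x)

def mse(x, y):
    # Mean Square Error: average of (y_i - x_i)^2
    return sum((b - a) ** 2 for a, b in zip(x, y)) / len(x)

def rmse(x, y):
    # Root Mean Square Error: square root of MSE
    return math.sqrt(mse(x, y))

x, y = [1.0, 2.0], [3.0, 4.0]
print(mae(x, y))   # 2.0
print(mse(x, y))   # 4.0
print(rmse(x, y))  # 2.0
```

These are the same values the PyTorch loss functions produce for the same tensors in the comparison section below.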
PyTorch Norm API
torch.norm(input, p='fro', dim=None, keepdim=False, out=None, dtype=None)
p: the norm type; defaults to the Frobenius norm ('fro'). (In newer PyTorch releases, torch.norm is deprecated in favor of torch.linalg.norm.)
Computing norms
>>> import torch
>>> x = torch.tensor([1, 2, 3], dtype=torch.float32)
>>> x
tensor([1., 2., 3.])
>>> torch.norm(x, 1) # L1 norm
tensor(6.)
>>> torch.norm(x, 2) # L2 norm
tensor(3.7417)
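The example above uses a vector. On a matrix, the default p='fro' returns the Frobenius norm, which equals the L2 norm of the flattened elements; a quick sketch (the 2×2 matrix is an arbitrary example):

```python
import torch

m = torch.tensor([[1., 2.], [3., 4.]])
# Frobenius norm: sqrt(1 + 4 + 9 + 16) = sqrt(30)
fro = torch.norm(m)                    # default p='fro'
flat_l2 = torch.norm(m.flatten(), 2)   # L2 norm of the flattened matrix
print(fro, flat_l2)                    # both ≈ 5.4772
```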
PyTorch Loss Functions
torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')
torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')
With the default reduction='mean', the result is divided by n (the number of elements). With reduction='sum', the result is a plain sum with no division by n: for L1Loss it equals the L1 norm of (y - x), and for MSELoss it equals the squared L2 norm of (y - x).
From the official documentation:
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
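A minimal sketch of the three reduction modes side by side, using L1Loss (the tensors are arbitrary example values):

```python
import torch

x = torch.tensor([1., 2.])
y = torch.tensor([3., 4.])

none_loss = torch.nn.L1Loss(reduction='none')(x, y)  # per-element |y - x|
mean_loss = torch.nn.L1Loss(reduction='mean')(x, y)  # sum / n (the default)
sum_loss  = torch.nn.L1Loss(reduction='sum')(x, y)   # plain sum, i.e. the L1 norm
print(none_loss)  # tensor([2., 2.])
print(mean_loss)  # tensor(2.)
print(sum_loss)   # tensor(4.)
```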
Comparing loss function and norm results
>>> import torch
>>> x = torch.tensor([1, 2], dtype=torch.float32)
>>> y = torch.tensor([3, 4], dtype=torch.float32)
>>> MAE = torch.nn.L1Loss() # MAE = (abs(3 - 1) + abs(4 - 2)) / 2
>>> MAE(x, y)
tensor(2.)
>>> l1_norm = torch.nn.L1Loss(reduction='sum') # L1 norm of (y - x)
>>> l1_norm(x, y)
tensor(4.)
>>> torch.norm((y - x), 1) # L1 norm of (y - x)
tensor(4.)
>>> MSE = torch.nn.MSELoss() # MSE = ((3 - 1) ** 2 + (4 - 2) ** 2) / 2, default reduction='mean'
>>> MSE(x, y)
tensor(4.)
>>> torch.sqrt(MSE(x, y)) # RMSE
tensor(2.)
>>> MSE_sum = torch.nn.MSELoss(reduction='sum') # MSE * n
>>> l2_norm_square = MSE_sum # squared L2 norm of (y - x)
>>> l2_norm_square(x, y)
tensor(8.)
>>> torch.sqrt(l2_norm_square(x, y)) # L2 norm of (y - x), i.e. sqrt(MSE * n)
tensor(2.8284)
>>> torch.norm((y - x), 2) # L2 norm of (y - x)
tensor(2.8284)