
So-called Normalization essentially takes a set of sample data, computes its mean and variance, and rescales the data to zero mean and unit variance.


The sections below introduce BN, LN, IN, and GN in turn.


Batch Normalization
Batch: a batch of data, usually a mini-batch
Normalization: zero mean, unit variance
Advantages:
1. A larger learning rate can be used, which speeds up convergence
2. No need for carefully designed weight initialization
3. Dropout can be removed or reduced
4. L2 regularization / weight decay can be removed or reduced
5. LRN (local response normalization) is no longer needed
Reference paper:
《Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift》
PS: LRN (local response normalization) was used in AlexNet; it serves much the same purpose as BN in that both regulate the scale of the data.
The basic algorithm is as follows:
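Written out, the algorithm from the paper, for a mini-batch B = {x_1, ..., x_m} with learnable parameters γ and β, is:

\mu_{\mathcal{B}} = \frac{1}{m} \sum_{i=1}^{m} x_i
\sigma_{\mathcal{B}}^2 = \frac{1}{m} \sum_{i=1}^{m} \left( x_i - \mu_{\mathcal{B}} \right)^2
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}
y_i = \gamma \hat{x}_i + \beta \equiv \mathrm{BN}_{\gamma,\beta}(x_i)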

A few notes:
The algorithm takes as input not only a mini-batch of data but also the parameters γ and β. These two parameters are learnable, analogous to the weights of a network layer, and are updated through backpropagation.
The first step computes the batch mean μ_B and variance σ_B², and then standardizes: x̂_i = (x_i − μ_B) / √(σ_B² + ε), where ε is a very small number that prevents the denominator from being zero. After this step, x̂_i has zero mean and unit variance.
The last step, y_i = γ·x̂_i + β, is very important; it is what gives Batch Normalization its power. The output y_i has mean β and variance γ², and since both γ and β are learnable, the model gains extra flexibility: it can decide on its own whether to change the distribution of the data, which increases its capacity. This step is called the affine transform (Affine Transform). As a special case, if the model learns γ = √(σ_B² + ε) and β = μ_B, then y_i = x_i, i.e. nothing is done to the data at all; this is called an identity mapping.

BN was originally proposed to address ICS (Internal Covariate Shift), i.e. to prevent gradients in deep networks from exploding or vanishing when the weights are too large or too small, but it also brings the advantages listed above (one "can use" and four "no longer need").
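A minimal sketch (not part of the original article) that checks the formula above against nn.BatchNorm1d in training mode:

import torch
import torch.nn as nn

x = torch.randn(16, 4)                         # a mini-batch: 16 samples, 4 features
bn = nn.BatchNorm1d(num_features=4, eps=1e-5)
bn.train()
y = bn(x)

mu = x.mean(dim=0)                             # per-feature batch mean
var = x.var(dim=0, unbiased=False)             # per-feature biased batch variance, as in the paper
x_hat = (x - mu) / torch.sqrt(var + 1e-5)      # standardize
y_manual = bn.weight * x_hat + bn.bias         # affine transform: gamma * x_hat + beta

print(torch.allclose(y, y_manual, atol=1e-6))  # expected: True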

Test code 1

The test code here is based on the code from an earlier article and is set up for comparison.
# -*- coding: utf-8 -*-
"""
# @file name : bn_and_initialize.py
# @author    : 475
# @date      : 2020-07-25
# @brief     : BN and weight initialization
"""
import torch
import numpy as np
import torch.nn as nn
from tools.common_tools import set_seed

set_seed(1)  # set random seed


class MLP(nn.Module):
    def __init__(self, neural_num, layers=100):
        super(MLP, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(neural_num, neural_num, bias=False) for i in range(layers)])
        self.bns = nn.ModuleList([nn.BatchNorm1d(neural_num) for i in range(layers)])
        self.neural_num = neural_num

    def forward(self, x):
        for (i, linear), bn in zip(enumerate(self.linears), self.bns):
            x = linear(x)
            # x = bn(x)  # note the position of the BN layer
            x = torch.relu(x)

            if torch.isnan(x.std()):
                print("output is nan in {} layers".format(i))
                break

            print("layers:{}, std:{}".format(i, x.std().item()))

        return x

    def initialize(self):
        for m in self.modules():
            if isinstance(m, nn.Linear):
                # method 1
                nn.init.normal_(m.weight.data, std=1)  # normal: mean=0, std=1

                # method 2 kaiming
                # nn.init.kaiming_normal_(m.weight.data)


neural_nums = 256
layer_nums = 100
batch_size = 16

net = MLP(neural_nums, layer_nums)
# net.initialize()

inputs = torch.randn((batch_size, neural_nums))  # normal: mean=0, std=1

output = net(inputs)
print(output)

If the code above is run without initialization, the gradients vanish, as shown below:


Now, if we uncomment the line
net.initialize()
the result is as follows:


This shows that improper weight initialization makes the data scale inside the model grow, which leads to exploding gradients. Change the normal-distribution initialization to Kaiming initialization, i.e. switch from method 1 to method 2:
nn.init.kaiming_normal_(m.weight.data)
The data fluctuates somewhat, but its scale stays within a normal range. Now switch to BN instead, by uncommenting the following line:
x = bn(x)  # note the position of the BN layer
With BN in place, the data scale is maintained even better. Now drop the Kaiming initialization as well, by commenting out the following line:
net.initialize()
The data scale is still maintained well, which shows that with BN there is no longer a need for careful weight initialization.
PS: the BN layer should be placed before the activation function.
Test code 2

This test is again based on the RMB binary-classification project (see that earlier article) and forms a controlled comparison with test code 1. Below is the supplementary code for the LeNet network with BN layers.
import torch.nn as nn
import torch.nn.functional as F


class LeNet_bn(nn.Module):
    def __init__(self, classes):
        super(LeNet_bn, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.bn1 = nn.BatchNorm2d(num_features=6)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.bn2 = nn.BatchNorm2d(num_features=16)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.bn3 = nn.BatchNorm1d(num_features=120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, classes)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)

        out = self.conv2(out)
        out = self.bn2(out)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)

        out = out.view(out.size(0), -1)

        out = self.fc1(out)
        out = self.bn3(out)
        out = F.relu(out)

        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out

    def initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_normal_(m.weight.data)
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight.data, 0, 1)
                m.bias.data.zero_()

First, test without the BN layers and without initialization:
# =================== step 2/5 model ======================
# net = LeNet_bn(classes=2)
net = LeNet(classes=2)
# net.initialize_weights()

Now turn the initialization on by uncommenting:
net.initialize_weights()
Next, train with the LeNet_bn network:
# =================== step 2/5 model ======================
net = LeNet_bn(classes=2)
# net = LeNet(classes=2)
# net.initialize_weights()

Note that the weights stay within a range of about 0.3; this shows that BN constrains the scale of the feature values and keeps the data within a good distribution range, which helps speed up model training.
Batch Normalization in PyTorch

All of PyTorch's Batch Normalization implementations are built on the base class _BatchNorm.
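For reference, nn.BatchNorm1d / nn.BatchNorm2d / nn.BatchNorm3d share the constructor parameters of _BatchNorm (defaults as in PyTorch):

bn = nn.BatchNorm2d(
    num_features=16,           # number of channels C in the input
    eps=1e-5,                  # added to the variance for numerical stability
    momentum=0.1,              # update rate of the running statistics
    affine=True,               # learn per-channel gamma (weight) and beta (bias)
    track_running_stats=True,  # keep running_mean / running_var for use in eval mode
)
# the four key attributes: bn.running_mean, bn.running_var, bn.weight, bn.bias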



For the 1d case, as in the figure above, the batch contains 3 samples, each with 5 features, and each feature has dimension 1, so the input size is 3 * 5 * 1 (the trailing 1 is often omitted, giving 3 * 5).
So how are the four attributes of BatchNorm1d (running_mean, running_var, weight, bias) computed?
Look at each feature across the batch: for the first feature, take the three 1s and compute their mean and variance, and learn a gamma and beta for that feature.
For the 2d case, as in the figure above, the batch contains 3 samples, each with 3 features, and each feature has shape 2 * 2, so the input size is 3 * 3 * 2 * 2.
For the 3d case, as in the figure above, the batch contains 3 samples, each with 4 features, and each feature has shape 2 * 2 * 3, so the input size is 3 * 4 * 2 * 2 * 3.
nn.BatchNorm1d
# -*- coding: utf-8 -*-
"""
# @file name : bn_and_initialize.py
# @author    : 475
# @date      : 2020-07-25
# @brief     : BN and weight initialization
"""
import torch
import numpy as np
import torch.nn as nn
from tools.common_tools import set_seed

set_seed(1)  # set random seed

# ======================================== nn.BatchNorm1d
# flag = 1
flag = 0
if flag:
    batch_size = 3
    num_features = 5    # 5 features
    momentum = 0.3

    features_shape = (1)    # 1d: each feature in the figure has dimension 1

    # a tensor filled with 1
    feature_map = torch.ones(features_shape)                                                    # 1D
    # expand along the feature axis (the y axis in the figure), shape 5 * 1
    feature_maps = torch.stack([feature_map * (i + 1) for i in range(num_features)], dim=0)     # 2D
    # expand along the batch axis (the x axis in the figure), shape 3 * 5 * 1
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)             # 3D

    print("input data:\n{} shape is {}".format(feature_maps_bs, feature_maps_bs.shape))

    bn = nn.BatchNorm1d(num_features=num_features, momentum=momentum)

    running_mean, running_var = 0, 1

    for i in range(2):
        outputs = bn(feature_maps_bs)

        print("\niteration:{}, running mean: {} ".format(i, bn.running_mean))
        print("iteration:{}, running var:{} ".format(i, bn.running_var))

        mean_t, var_t = 2, 0

        running_mean = (1 - momentum) * running_mean + momentum * mean_t
        running_var = (1 - momentum) * running_var + momentum * var_t

        print("iteration:{}, running mean of the 2nd feature: {} ".format(i, running_mean))
        print("iteration:{}, running var of the 2nd feature:{}".format(i, running_var))


According to the update formula running_mean = (1 − momentum) * running_mean + momentum * mean_t:
for the second feature (three 2s) the batch mean is mean_t = 2, so after the first iteration the running mean is (1 − 0.3) * 0 + 0.3 * 2 = 0.6; the other features follow the same pattern.
On the second iteration the same update is applied again, starting from 0.6.
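A minimal check of this update rule, using the values from the example above (this snippet is not part of the original script):

momentum = 0.3
running_mean = 0.0    # PyTorch initializes the running mean to 0
mean_t = 2.0          # batch mean of the second feature (three 2s)

running_mean = (1 - momentum) * running_mean + momentum * mean_t
print(running_mean)   # 0.6 after the first iteration

running_mean = (1 - momentum) * running_mean + momentum * mean_t
print(running_mean)   # 1.02 after the second iteration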
nn.BatchNorm2d
# ======================================== nn.BatchNorm2d
flag = 1
# flag = 0
if flag:
    batch_size = 3
    num_features = 3    # 3 features
    momentum = 0.3

    features_shape = (2, 2)    # 2d: each feature in the figure has shape 2 * 2

    # a 2 * 2 tensor filled with 1
    feature_map = torch.ones(features_shape)                                                    # 2D
    # expand along the feature axis (the y axis in the figure), shape 3 * 2 * 2
    feature_maps = torch.stack([feature_map * (i + 1) for i in range(num_features)], dim=0)     # 3D
    # expand along the batch axis (the x axis in the figure), shape 3 * 3 * 2 * 2
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)             # 4D

    print("input data:\n{} shape is {}".format(feature_maps_bs, feature_maps_bs.shape))

    bn = nn.BatchNorm2d(num_features=num_features, momentum=momentum)

    running_mean, running_var = 0, 1

    for i in range(2):
        outputs = bn(feature_maps_bs)

        print("\niter:{}, running_mean.shape: {}".format(i, bn.running_mean.shape))
        print("iter:{}, running_var.shape: {}".format(i, bn.running_var.shape))
        print("iter:{}, weight.shape: {}".format(i, bn.weight.shape))
        print("iter:{}, bias.shape: {}".format(i, bn.bias.shape))



Since there are 3 features, running_mean, running_var, weight, and bias all have shape [3], i.e. one value per feature (channel).
nn.BatchNorm3d
# ======================================== nn.BatchNorm3d
flag = 1
# flag = 0
if flag:
    batch_size = 3
    num_features = 4    # 4 features
    momentum = 0.3

    features_shape = (2, 2, 3)    # 3d: each feature in the figure has shape 2 * 2 * 3

    feature = torch.ones(features_shape)                                                        # 3D
    # expand along the feature axis (the y axis in the figure), shape 4 * 2 * 2 * 3
    feature_map = torch.stack([feature * (i + 1) for i in range(num_features)], dim=0)          # 4D
    # expand along the batch axis (the x axis in the figure), shape 3 * 4 * 2 * 2 * 3
    feature_maps = torch.stack([feature_map for i in range(batch_size)], dim=0)                 # 5D

    print("input data:\n{} shape is {}".format(feature_maps, feature_maps.shape))

    bn = nn.BatchNorm3d(num_features=num_features, momentum=momentum)

    running_mean, running_var = 0, 1

    for i in range(2):
        outputs = bn(feature_maps)

        print("\niter:{}, running_mean.shape: {}".format(i, bn.running_mean.shape))
        print("iter:{}, running_var.shape: {}".format(i, bn.running_var.shape))
        print("iter:{}, weight.shape: {}".format(i, bn.weight.shape))
        print("iter:{}, bias.shape: {}".format(i, bn.bias.shape))




Layer Normalization
Motivation: BN is not suitable for variable-length networks such as RNNs.
Idea: compute the mean and variance layer by layer, i.e. per sample.

Notes:
1. There are no longer running_mean and running_var.
2. gamma and beta are element-wise.
Reference: 《Layer Normalization》
nn.LayerNorm
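For reference, the main parameters of nn.LayerNorm (defaults as in PyTorch):

ln = nn.LayerNorm(
    normalized_shape=[6, 3, 4],   # shape to normalize over; must match the trailing input dims
    eps=1e-5,                     # numerical stability term
    elementwise_affine=True,      # learn element-wise gamma (weight) and beta (bias)
)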

Test code
# -*- coding: utf-8 -*-
"""
# @file name : bn_and_initialize.py
# @author    : 475
# @date      : 2020-07-25
# @brief     : common normalization layers in pytorch
"""
import torch
import numpy as np
import torch.nn as nn
from tools.common_tools import set_seed

set_seed(1)  # set random seed

# ======================================== nn.layer norm
flag = 1
# flag = 0
if flag:
    batch_size = 8
    num_features = 6

    features_shape = (3, 4)

    feature_map = torch.ones(features_shape)                                                    # 2D
    feature_maps = torch.stack([feature_map * (i + 1) for i in range(num_features)], dim=0)     # 3D
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)             # 4D

    # feature_maps_bs shape is [8, 6, 3, 4], B * C * H * W
    # tell PyTorch the shape of one "layer" (the circled region in the figure)
    ln = nn.LayerNorm(feature_maps_bs.size()[1:], elementwise_affine=True)
    # ln = nn.LayerNorm(feature_maps_bs.size()[1:], elementwise_affine=False)
    # you can also specify the shape yourself, but it must match the shape of
    # feature_maps_bs ([8, 6, 3, 4]) counted from the end
    # ln = nn.LayerNorm([6, 3, 4])
    # this one raises an error because [6, 3] does not match the input shape from the end
    # ln = nn.LayerNorm([6, 3])

    output = ln(feature_maps_bs)

    print("Layer Normalization")
    print(ln.weight.shape)
    print(feature_maps_bs[0, ...])   # take the C * H * W slice out of B * C * H * W
    print(output[0, ...])



If you set elementwise_affine=False, then printing ln.weight.shape raises:
AttributeError: 'NoneType' object has no attribute 'shape'
because without the affine transform ln.weight is None.
nn.LayerNorm() accepts a normalized shape that must match the input shape counted from the end. Since feature_maps_bs has shape [8, 6, 3, 4] (B * C * H * W), all of the following work:
nn.LayerNorm([4])
nn.LayerNorm([3, 4])
nn.LayerNorm([6, 3, 4])
but this one does not:
nn.LayerNorm([6, 3])
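As a quick sanity check (a sketch, not from the article), LN's output can be reproduced by normalizing each sample over the trailing dimensions given in normalized_shape:

x = torch.randn(8, 6, 3, 4)
ln = nn.LayerNorm([6, 3, 4], elementwise_affine=False)
y = ln(x)

mu = x.mean(dim=(1, 2, 3), keepdim=True)                  # per-sample mean over C, H, W
var = x.var(dim=(1, 2, 3), unbiased=False, keepdim=True)  # per-sample biased variance
y_manual = (x - mu) / torch.sqrt(var + 1e-5)

print(torch.allclose(y, y_manual, atol=1e-5))             # expected: True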

Instance Normalization
References:
《Instance Normalization: The Missing Ingredient for Fast Stylization》
《Image Style Transfer Using Convolutional Neural Networks》
Motivation: BN is not suitable for image generation (Image Generation).
Idea: compute the mean and variance per instance (per channel).
PS: In image-generation tasks (see the figure below), each sample in a batch has a different style, so averaging features across different samples is clearly a bad idea.


nn.InstanceNorm
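For reference, the instance-norm variants (nn.InstanceNorm1d / 2d / 3d) share the following parameters; note that, unlike BN, affine and track_running_stats default to False (defaults as in PyTorch):

instance_n = nn.InstanceNorm2d(
    num_features=3,              # number of channels C
    eps=1e-5,                    # numerical stability term
    momentum=0.1,                # only used when running statistics are tracked
    affine=False,                # no learnable gamma / beta by default
    track_running_stats=False,   # always normalize with per-instance statistics by default
)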

Test code
# ======================================== nn.instance norm 2d
flag = 1
# flag = 0
if flag:
    batch_size = 3
    num_features = 3
    momentum = 0.3

    features_shape = (2, 2)

    feature_map = torch.ones(features_shape)                                                    # 2D
    feature_maps = torch.stack([feature_map * (i + 1) for i in range(num_features)], dim=0)     # 3D
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)             # 4D

    print("Instance Normalization")
    print("input data:\n{} shape is {}".format(feature_maps_bs, feature_maps_bs.shape))

    instance_n = nn.InstanceNorm2d(num_features=num_features, momentum=momentum)

    for i in range(1):
        outputs = instance_n(feature_maps_bs)
        print(outputs)



From the results above, the values inside each feature map are identical, so after subtracting the per-channel mean every output is 0. This confirms that IN computes its mean per instance (per sample and per channel).
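Equivalently, IN's output can be reproduced by hand (a sketch with assumed random input): the mean and variance are taken over the spatial dimensions only, separately for each sample and each channel:

x = torch.randn(3, 3, 2, 2)
inorm = nn.InstanceNorm2d(num_features=3)
y = inorm(x)

mu = x.mean(dim=(2, 3), keepdim=True)                  # mean over H, W per (sample, channel)
var = x.var(dim=(2, 3), unbiased=False, keepdim=True)  # biased variance over H, W
y_manual = (x - mu) / torch.sqrt(var + 1e-5)

print(torch.allclose(y, y_manual, atol=1e-5))          # expected: True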


Group Normalization
Motivation: with small batches, the statistics BN estimates are inaccurate.
Idea: when there is not enough data, make up for it with channels, i.e. group channels together when computing the statistics.
Notes:
1. There are no longer running_mean and running_var.
2. gamma and beta are per channel (channel-wise).
Use case: large-model (small batch size) tasks.
Reference: 《Group Normalization》

PS: a batch size is usually 64, 128 or even larger, but for big tasks with complex models the GPU cannot hold that much data at once and may only fit 1 or 2 samples per batch. In that case the mean and variance computed by BN can be inaccurate.
Test code
# ======================================== nn.group norm
flag = 1
# flag = 0
if flag:
    batch_size = 2
    num_features = 4
    # num_features must be divisible by num_groups, otherwise an error is raised;
    # num_groups = 3 would fail here:
    # Expected number of channels in input to be divisible by num_groups
    num_groups = 2

    features_shape = (2, 2)

    # shape: 2 * 2
    feature_map = torch.ones(features_shape)                                                    # 2D
    # shape: 4 * 2 * 2
    feature_maps = torch.stack([feature_map * (i + 1) for i in range(num_features)], dim=0)     # 3D
    # shape: 2 * 4 * 2 * 2
    # feature_maps_bs = torch.stack([feature_maps * (i + 1) for i in range(batch_size)], dim=0) # 4D
    feature_maps_bs = torch.stack([feature_maps for i in range(batch_size)], dim=0)             # 4D

    print("Group Normalization")
    print("input data:\n{} shape is {}".format(feature_maps_bs, feature_maps_bs.shape))

    gn = nn.GroupNorm(num_groups, num_features)
    outputs = gn(feature_maps_bs)

    print("Group Normalization")
    print(gn.weight.shape)
    print(outputs[0])   # shape: 4 * 2 * 2



Note: when setting the number of groups, the number of features (channels) must be divisible by it, otherwise an error is raised.
If the number of channels (features) is 256 and the number of groups is 4, then each group contains 256 / 4 = 64 channels.
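A quick sketch of that grouping (shapes assumed for illustration):

x = torch.randn(2, 256, 8, 8)                      # N, C, H, W with C = 256 channels
gn = nn.GroupNorm(num_groups=4, num_channels=256)
y = gn(x)
print(gn.weight.shape)                             # torch.Size([256]): gamma is still per channel
# each of the 4 groups normalizes over 256 / 4 = 64 channels (together with H and W)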



