Improved Regularization of Convolutional Neural Networks with Cutout (arXiv 2017)
1 Background and Motivation
With the development of deep learning, CNNs have come to the fore in many computer vision tasks, but with increased representational power also comes an increased probability of overfitting, leading to poor generalization.
To improve generalization and to simulate object occlusion, the authors propose the Cutout data augmentation method: randomly masking out square regions of the input during training, which pushes the network to take more of the image context into consideration when making decisions.
This technique encourages the network to better utilize the full context of the image, rather than relying on the presence of a small set of specific visual features (which may not always be present).
2 Related Work
- Data Augmentation for Images
- Dropout in Convolutional Neural Networks
- Denoising Autoencoders & Context Encoders (self-supervised: part of the image is removed and the network fills it in, which strengthens feature learning)
3 Advantages / Contributions
Proposes the Cutout data augmentation method in the supervised setting (a form of dropout; similar techniques also appear in self-supervised learning).
4 Method
Initial version: remove maximally activated features.
Final version: pick a random center point and mask out a square region (the square may extend beyond the image; after clipping to the image boundary it is no longer square).
When applying Cutout, the data should first be centered (i.e., mean-subtracted):
the dataset should be normalized about zero so that modified images will not have a large effect on the expected batch statistics.
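As a concrete illustration, here is a minimal NumPy sketch of the final version (an illustrative re-implementation based on the description above, not the authors' reference code; the helper name `cutout` is hypothetical — see the official PyTorch repo linked in the conclusion):

```python
import numpy as np

def cutout(image: np.ndarray, length: int, rng: np.random.Generator) -> np.ndarray:
    """Apply Cutout: zero out one square patch of side `length`.

    `image` is an (H, W, C) float array assumed to be normalized to
    zero mean, so the zeroed patch equals the dataset mean. The patch
    center is sampled uniformly over the image, so the square may
    extend past the border and get clipped (no longer square).
    """
    h, w = image.shape[:2]
    cy = rng.integers(0, h)  # random center row
    cx = rng.integers(0, w)  # random center column
    y1, y2 = max(0, cy - length // 2), min(h, cy + length // 2)
    x1, x2 = max(0, cx - length // 2), min(w, cx + length // 2)
    out = image.copy()
    out[y1:y2, x1:x2, :] = 0.0  # mask with zeros (= dataset mean)
    return out

# Usage: mask a 16x16 patch of a zero-centered 32x32 CIFAR-sized image.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3)).astype(np.float32)
masked = cutout(img, length=16, rng=rng)
```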
5 Experiments
5.1 Datasets and Metrics
- CIFAR-10(32×32)
- CIFAR-100(32×32)
- SVHN(Street View House Numbers,32×32)
- STL-10(96×96)
The evaluation metric is top-1 error.
5.2 Experiments
1) CIFAR-10 and CIFAR-100
Each experiment was repeated 5 times, with results reported as mean ± standard deviation.
The paper also explores the effect of different patch lengths used in Cutout.
2) STL-10
3) Analysis of Cutout's Effect on Activations
With Cutout, activation magnitudes in the shallow layers increase across the board, while in the deep layers the increase is concentrated in the tail end of the distribution.
The latter observation illustrates that Cutout is indeed encouraging the network to take into account a wider variety of features when making predictions, rather than relying on the presence of a smaller number of features.
The paper then zooms in on the activations of individual samples.
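As a rough illustration of the mechanics (hypothetical PyTorch code, not the paper's analysis script), one could register forward hooks to record per-layer activation magnitudes and then compare a network trained with Cutout against a baseline on the same clean inputs:

```python
import torch
import torch.nn as nn

def mean_activations(model: nn.Module, x: torch.Tensor) -> dict:
    """Return the mean absolute activation of each ReLU layer on input x."""
    stats, hooks = {}, []
    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(
                lambda m, i, o, n=name: stats.__setitem__(n, o.abs().mean().item())))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()  # detach hooks so they don't leak into later runs
    return stats

# Stand-ins for a Cutout-trained model and a baseline; in practice these
# would be two separately trained copies of the same architecture.
make_net = lambda: nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
)
baseline, cutout_net = make_net(), make_net()
x = torch.randn(4, 3, 32, 32)  # same clean test batch for both models
print(mean_activations(baseline, x))
print(mean_activations(cutout_net, x))
```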
6 Conclusion (own) / Future work
- code: https://github.com/uoguelph-mlrg/Cutout
- Vocabulary: memory footprint (memory usage)
- When introducing dropout in the related work, the paper notes: "All activations are kept when evaluating the network, but the resulting output is scaled according to the dropout probability" (see the sketch after this list).
- Dropout works better on fully-connected layers than on convolutional layers; the authors' explanation: 1) convolutional layers already have much fewer parameters than fully-connected layers; 2) neighbouring pixels in images share much of the same information, so dropping some of it does little harm.
- Cutout is a dropout-style technique that masks a contiguous region and is applied only at the input layer.
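A minimal sketch of that evaluation-time scaling (classic non-inverted dropout as described in the quote, written for illustration and not taken from the paper):

```python
import numpy as np

def dropout_forward(x: np.ndarray, keep_prob: float, train: bool,
                    rng: np.random.Generator) -> np.ndarray:
    """Classic (non-inverted) dropout as described in the quote above.

    Training: each activation is kept with probability `keep_prob`.
    Evaluation: all activations are kept, and the output is scaled by
    `keep_prob` so its expected value matches the training regime.
    """
    if train:
        mask = rng.random(x.shape) < keep_prob  # Bernoulli keep-mask
        return x * mask
    return x * keep_prob  # keep everything, scale the output

rng = np.random.default_rng(0)
acts = rng.standard_normal(5).astype(np.float32)
print(dropout_forward(acts, keep_prob=0.5, train=True, rng=rng))
print(dropout_forward(acts, keep_prob=0.5, train=False, rng=rng))
```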