Changing the Mask R-CNN Backbone from ResNet-50 to ResNet-34

There is plenty of code showing how to use Mask R-CNN, but all of it is based on ResNet-50/101. Since the dataset I needed to train on is not complex, ResNet-50 is somewhat over-sized for it, so I changed the Mask R-CNN backbone from ResNet-50 to ResNet-34. ResNet-34 is built from two-layer basic blocks (3×3, 3×3) instead of ResNet-50's three-layer bottleneck blocks (1×1, 3×3, 1×1), so basic-block versions of conv_block and identity_block are needed. Find the model file and modify the ResNet-50 code as follows to get the ResNet-34 version.
Here is the relevant code:


## conv_block adapted as conv_block0 (two-layer basic block) and added to the model file
def conv_block0(input_tensor, kernel_size, filters, stage, block,
                strides, use_bias=True, train_bn=True):
    """ResNet-34 basic block with a strided projection shortcut."""
    nb_filter1, nb_filter2 = filters
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    # Main path: two 3x3 convolutions, no 1x1 bottleneck layers
    x = KL.Conv2D(nb_filter1, (kernel_size, kernel_size), padding='same', strides=strides,
                  name=conv_name_base + '2a', use_bias=use_bias)(input_tensor)
    x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn)
    x = KL.Activation('relu')(x)
    x = KL.Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same',
                  name=conv_name_base + '2b', use_bias=use_bias)(x)
    x = BatchNorm(name=bn_name_base + '2b')(x, training=train_bn)
    # Shortcut path: 1x1 convolution to match spatial size and channel count
    shortcut = KL.Conv2D(nb_filter2, (1, 1), strides=strides, padding='same',
                         name=conv_name_base + '1', use_bias=use_bias)(input_tensor)
    shortcut = BatchNorm(name=bn_name_base + '1')(shortcut, training=train_bn)
    x = KL.Add()([x, shortcut])
    x = KL.Activation('relu', name='res' + str(stage) + block + '_out')(x)
    return x
## identity_block adapted as identity_block0 and added
def identity_block0(input_tensor, kernel_size, filters, stage, block,
                    use_bias=True, train_bn=True):
    """ResNet-34 basic block with an identity shortcut."""
    nb_filter1, nb_filter2 = filters
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    x = KL.Conv2D(nb_filter1, (kernel_size, kernel_size), name=conv_name_base + '2a',
                  padding='same', use_bias=use_bias)(input_tensor)
    x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn)
    x = KL.Activation('relu')(x)
    x = KL.Conv2D(nb_filter2, (kernel_size, kernel_size), name=conv_name_base + '2b',
                  padding='same', use_bias=use_bias)(x)
    x = BatchNorm(name=bn_name_base + '2b')(x, training=train_bn)
    # The residual addition comes first; the final ReLU follows it,
    # matching the ordering used by the other blocks in model.py
    x = KL.Add()([x, input_tensor])
    x = KL.Activation('relu', name='res' + str(stage) + block + '_out')(x)
    return x
# Replace resnet_graph with:
def resnet_graph(input_image, architecture, stage5=False, train_bn=True):
    """Build a ResNet graph.
    architecture: Can be resnet34, resnet50, or resnet101
    stage5: Boolean. If False, stage5 of the network is not created
    train_bn: Boolean. Train or freeze Batch Norm layers
    """
    assert architecture in ["resnet34", "resnet50", "resnet101"]
    # 0 selects the basic blocks (conv_block0/identity_block0),
    # 1 selects the original bottleneck blocks
    block_identify = {"resnet34": 0, "resnet50": 1, "resnet101": 1}[architecture]
    # Stage 1
    x = KL.ZeroPadding2D((3, 3))(input_image)
    x = KL.Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=True)(x)
    x = BatchNorm(name='bn_conv1')(x, training=train_bn)
    x = KL.Activation('relu')(x)
    C1 = x = KL.MaxPooling2D((3, 3), strides=(2, 2), padding="same")(x)
    # Stage 2
    if block_identify == 0:
        x = conv_block0(x, 3, [64, 64], stage=2, block='a', strides=(1, 1), train_bn=train_bn)
        x = identity_block0(x, 3, [64, 64], stage=2, block='b', train_bn=train_bn)
        C2 = x = identity_block0(x, 3, [64, 64], stage=2, block='c', train_bn=train_bn)
    else:
        x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1), train_bn=train_bn)
        x = identity_block(x, 3, [64, 64, 256], stage=2, block='b', train_bn=train_bn)
        C2 = x = identity_block(x, 3, [64, 64, 256], stage=2, block='c', train_bn=train_bn)
    # Stage 3
    if block_identify == 0:
        x = conv_block0(x, 3, [128, 128], stage=3, block='a', strides=(2, 2), train_bn=train_bn)
        x = identity_block0(x, 3, [128, 128], stage=3, block='b', train_bn=train_bn)
        x = identity_block0(x, 3, [128, 128], stage=3, block='c', train_bn=train_bn)
        C3 = x = identity_block0(x, 3, [128, 128], stage=3, block='d', train_bn=train_bn)
    else:
        x = conv_block(x, 3, [128, 128, 512], stage=3, block='a', train_bn=train_bn)
        x = identity_block(x, 3, [128, 128, 512], stage=3, block='b', train_bn=train_bn)
        x = identity_block(x, 3, [128, 128, 512], stage=3, block='c', train_bn=train_bn)
        C3 = x = identity_block(x, 3, [128, 128, 512], stage=3, block='d', train_bn=train_bn)
    # Stage 4: 6 blocks for resnet34/50, 23 for resnet101
    block_count = {"resnet34": 5, "resnet50": 5, "resnet101": 22}[architecture]
    if block_identify == 0:
        x = conv_block0(x, 3, [256, 256], stage=4, block='a', strides=(2, 2), train_bn=train_bn)
        for i in range(block_count):
            x = identity_block0(x, 3, [256, 256], stage=4, block=chr(98 + i), train_bn=train_bn)
        C4 = x
    else:
        x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a', train_bn=train_bn)
        for i in range(block_count):
            x = identity_block(x, 3, [256, 256, 1024], stage=4, block=chr(98 + i), train_bn=train_bn)
        C4 = x
    # Stage 5
    if stage5:
        if block_identify == 0:
            x = conv_block0(x, 3, [512, 512], stage=5, block='a', strides=(2, 2), train_bn=train_bn)
            x = identity_block0(x, 3, [512, 512], stage=5, block='b', train_bn=train_bn)
            C5 = x = identity_block0(x, 3, [512, 512], stage=5, block='c', train_bn=train_bn)
        else:
            x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a', train_bn=train_bn)
            x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b', train_bn=train_bn)
            C5 = x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c', train_bn=train_bn)
    else:
        C5 = None
    return [C1, C2, C3, C4, C5]
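
A quick way to check that the graph wires up correctly is to build it on a dummy input and inspect the feature-map shapes. The following is a minimal sketch, assuming it runs inside the matterport model.py (where KL, KM, and BatchNorm are already defined); the dummy 256×256 input size is my own choice. C2 through C5 should come out at strides 4/8/16/32 with the ResNet-34 channel widths 64/128/256/512:

# Quick shape check (assumes this runs in matterport's model.py,
# where keras.layers is imported as KL and keras.models as KM)
image = KL.Input(shape=[256, 256, 3], name="input_image")
_, C2, C3, C4, C5 = resnet_graph(image, "resnet34", stage5=True, train_bn=False)
print(KM.Model(image, [C2, C3, C4, C5]).output_shape)
# Expected: [(None, 64, 64, 64), (None, 32, 32, 128),
#            (None, 16, 16, 256), (None, 8, 8, 512)]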

Notes:
1. For weight initialization I used https://github.com/qubvel/classification_models/releases/download/0.0.1/resnet34_imagenet_1000.h5 (a loading sketch follows below).
2. "resnet34" also has to be added to compute_backbone_shapes (see the sketch after the next one).
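
The loading step itself is not shown above; here is a minimal sketch, assuming the standard matterport Mask R-CNN API (modellib.MaskRCNN and its load_weights wrapper) and hypothetical path/config names. With by_name=True, only layers whose names match entries in the .h5 file receive weights, so any layers named differently from the checkpoint simply keep their random initialization:

import mrcnn.model as modellib

# Hypothetical config object and log dir; adjust to your project
model = modellib.MaskRCNN(mode="training", config=config, model_dir="./logs")
# by_name=True: only layers whose names match the .h5 entries are loaded
model.load_weights("resnet34_imagenet_1000.h5", by_name=True)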
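
For note 2, a minimal sketch of the change, assuming compute_backbone_shapes looks as it does in matterport's model.py. ResNet-34 has the same per-stage strides as ResNet-50 (4, 8, 16, 32, 64), so BACKBONE_STRIDES stays unchanged and only the assert needs extending (with BACKBONE = "resnet34" set in your config):

def compute_backbone_shapes(config, image_shape):
    """Computes the width and height of each stage of the backbone network."""
    if callable(config.BACKBONE):
        return config.COMPUTE_BACKBONE_SHAPE(image_shape)
    # Only change: accept "resnet34" in addition to the original two
    assert config.BACKBONE in ["resnet34", "resnet50", "resnet101"]
    return np.array(
        [[int(math.ceil(image_shape[0] / stride)),
          int(math.ceil(image_shape[1] / stride))]
         for stride in config.BACKBONE_STRIDES])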
