
mobilenet-yolov4-keras's Introduction

YOLOv4: You Only Look Once object detection model with MobileNet-series backbones, implemented in Keras


Contents

  1. Repository Updates (Top News)
  2. Related Repositories (Related code)
  3. Performance
  4. Environment
  5. Download
  6. Training Steps (How2train)
  7. Prediction Steps (How2predict)
  8. Evaluation Steps (How2eval)
  9. Reference

Top News

2022-04: Added multi-GPU training support, per-class object counting, and heatmap visualization.

2022-03: Major update: reworked the loss composition so the classification, objectness, and regression losses are properly balanced; added step and cosine learning-rate decay; added a choice of Adam or SGD optimizers; added learning-rate scaling based on batch_size; added image cropping.
The original repository from the BiliBili video is at: https://github.com/bubbliiiing/mobilenet-yolov4-keras/tree/bilibili

2021-10: Major update: added extensive comments and many adjustable parameters, reorganized the code into modules, and added FPS measurement, video prediction, and batch prediction.

Related Repositories

Model  Repository
YoloV3 https://github.com/bubbliiiing/yolo3-keras
Efficientnet-Yolo3 https://github.com/bubbliiiing/efficientnet-yolo3-keras
YoloV4 https://github.com/bubbliiiing/yolov4-keras
YoloV4-tiny https://github.com/bubbliiiing/yolov4-tiny-keras
Mobilenet-Yolov4 https://github.com/bubbliiiing/mobilenet-yolov4-keras
YoloV5-V5.0 https://github.com/bubbliiiing/yolov5-keras
YoloV5-V6.1 https://github.com/bubbliiiing/yolov5-v6.1-keras
YoloX https://github.com/bubbliiiing/yolox-keras
YoloV7 https://github.com/bubbliiiing/yolov7-keras
Yolov7-tiny https://github.com/bubbliiiing/yolov7-tiny-keras

Performance

Training set  Weight file  Test set  Input size  mAP 0.5:0.95  mAP 0.5
VOC07+12 yolov4_mobilenet_v1_025_voc.h5 VOC-Test07 416x416 - 66.29
VOC07+12 yolov4_mobilenet_v1_voc.h5 VOC-Test07 416x416 - 80.18
VOC07+12 yolov4_mobilenet_v2_voc.h5 VOC-Test07 416x416 - 79.72
VOC07+12 yolov4_mobilenet_v3_voc.h5 VOC-Test07 416x416 - 78.45
VOC07+12 yolov4_ghostnet_voc.h5 VOC-Test07 416x416 - 78.64
VOC07+12 yolov4_vgg_voc.h5 VOC-Test07 416x416 - 81.09
VOC07+12 yolov4_densenet121_voc.h5 VOC-Test07 416x416 - 82.90
VOC07+12 yolov4_resnet50_voc.h5 VOC-Test07 416x416 - 80.59

Environment

tensorflow-gpu==1.13.1
keras==2.1.5
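
To confirm the pinned versions are active, a quick check (a sketch; run it inside the training environment):

import tensorflow as tf
import keras

print(tf.__version__)     # expected: 1.13.1
print(keras.__version__)  # expected: 2.1.5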

Download

All weights needed for training, including the backbone weights, can be downloaded from Baidu Netdisk.
Link: https://pan.baidu.com/s/11a6jROKlctYgQBul1L7lUg
Extraction code: g916

The VOC dataset can be downloaded from the link below. It already contains the training, test, and validation sets (the validation set is identical to the test set), so no further splitting is needed:
Link: https://pan.baidu.com/s/19Mw2u_df_nBzsC2lg20fQA
Extraction code: j5ge

Training Steps

a. Training on the VOC07+12 dataset

  1. Dataset preparation
    This project trains on VOC-format data. Before training, download the VOC07+12 dataset and unzip it into the repository root.

  2. Dataset processing
    Set annotation_mode=2 in voc_annotation.py, then run voc_annotation.py to generate 2007_train.txt and 2007_val.txt in the root directory.

  3. Start training
    The default parameters of train.py are set up for the VOC dataset; simply run train.py to start training.

  4. Predicting with the training results
    Prediction uses two files, yolo.py and predict.py. First modify model_path and classes_path in yolo.py; these two parameters must be changed.
    model_path points to the trained weight file, in the logs folder.
    classes_path points to the txt file listing the detection classes.

    Once modified, run predict.py and enter an image path to run detection, as sketched below.
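
A minimal sketch of predict.py's single-image mode, assuming the repo's YOLO class from yolo.py (the exact prompt text may differ):

from PIL import Image
from yolo import YOLO

yolo = YOLO()  # picks up model_path / classes_path from yolo.py's _defaults

img_path = input('Input image filename: ')
image = Image.open(img_path)
r_image = yolo.detect_image(image)  # returns a PIL image with boxes drawn
r_image.show()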

b. Training on your own dataset

  1. Dataset preparation
    This project trains on VOC-format data, so prepare your own dataset before training.
    Before training, place the annotation files in the Annotation folder under VOCdevkit/VOC2007.
    Before training, place the image files in the JPEGImages folder under VOCdevkit/VOC2007.

  2. Dataset processing
    After arranging the dataset, use voc_annotation.py to generate the 2007_train.txt and 2007_val.txt used for training.
    Modify the parameters in voc_annotation.py. For a first training run you can change only classes_path, which points to the txt file listing the detection classes.
    When training on your own dataset, create a cls_classes.txt listing the classes you want to distinguish.
    The contents of model_data/cls_classes.txt are:

cat
dog
...

Set classes_path in voc_annotation.py to point to cls_classes.txt, then run voc_annotation.py; the relevant settings are sketched below.
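
A sketch of those settings (an assumption following the repo's conventions, where annotation_mode=0 regenerates both the ImageSets split and the training txts, while annotation_mode=2, used for VOC07+12 above, only writes the txts):

# In voc_annotation.py:
annotation_mode = 0
classes_path    = 'model_data/cls_classes.txt'  # your own class list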

  3. Start training
    There are many training parameters, all in train.py; read the comments carefully after downloading the repository. The most important one is again classes_path in train.py.
    classes_path points to the txt file listing the detection classes, the same txt used by voc_annotation.py. It must be changed when training on your own dataset!
    After modifying classes_path, run train.py to start training; after several epochs, the weights are written to the logs folder.

  4. Predicting with the training results
    Prediction uses two files, yolo.py and predict.py. Modify model_path and classes_path in yolo.py.
    model_path points to the trained weight file, in the logs folder.
    classes_path points to the txt file listing the detection classes.

    Once modified, run predict.py and enter an image path to run detection.

Prediction Steps

a. Using pretrained weights

  1. After downloading and unzipping the repository, download a weight file such as yolov4_mobilenet_v2_voc.h5 (the default in yolo.py) from the Baidu Netdisk link, place it in model_data, run predict.py, and enter
img/street.jpg
  2. Settings inside predict.py also enable FPS testing and video detection.

b. Using your own trained weights

  1. Train following the training steps above.
  2. In yolo.py, modify model_path and classes_path in the block below so they match your trained files; model_path points to the weight file under the logs folder, and classes_path points to the txt listing the classes that model_path was trained on.
_defaults = {
    #--------------------------------------------------------------------------#
    #   To predict with your own trained model, you must modify model_path
    #   and classes_path! model_path points to the weight file under the
    #   logs folder; classes_path points to the txt under model_data.
    #   If a shape mismatch occurs, also check that the model_path and
    #   classes_path parameters used during training match these settings.
    #--------------------------------------------------------------------------#
    "model_path"        : 'model_data/yolov4_mobilenet_v2_voc.h5',
    "classes_path"      : 'model_data/voc_classes.txt',
    #---------------------------------------------------------------------#
    #   anchors_path is the txt file holding the anchor boxes; it is
    #   generally not modified.
    #   anchors_mask helps the code find the matching anchors; it is
    #   generally not modified.
    #---------------------------------------------------------------------#
    "anchors_path"      : 'model_data/yolo_anchors.txt',
    "anchors_mask"      : [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
    #---------------------------------------------------------------------#
    #   Input image size; must be a multiple of 32.
    #---------------------------------------------------------------------#
    "input_shape"       : [416, 416],
    #---------------------------------------------------------------------#
    #   Backbone used by the detection network.
    #---------------------------------------------------------------------#
    "backbone"          : 'mobilenetv2',
    #---------------------------------------------------------------------#
    #   Width multiplier for the backbone channels.
    #---------------------------------------------------------------------#
    "alpha"             : 1,
    #---------------------------------------------------------------------#
    #   Only predicted boxes with a score above this confidence are kept.
    #---------------------------------------------------------------------#
    "confidence"        : 0.5,
    #---------------------------------------------------------------------#
    #   IoU threshold used for non-maximum suppression.
    #---------------------------------------------------------------------#
    "nms_iou"           : 0.3,
    "max_boxes"         : 100,
    #---------------------------------------------------------------------#
    #   Whether to use letterbox_image for a distortion-free resize of the
    #   input image. Repeated tests showed that a plain resize, with
    #   letterbox_image disabled, works better for this model.
    #---------------------------------------------------------------------#
    "letterbox_image"   : False,
}
  3. Run predict.py and enter
img/street.jpg
  4. Settings inside predict.py also enable FPS testing and video detection.
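
For reference, the distortion-free resize that letterbox_image = True performs can be sketched as follows (an illustration with PIL, assuming gray padding; the repo's own implementation lives in utils.utils.resize_image):

from PIL import Image

def letterbox_resize(image, size=(416, 416)):
    # Scale the image to fit inside `size` without changing its aspect
    # ratio, then paste it centered on a gray canvas (letterbox padding).
    iw, ih = image.size
    w, h = size
    scale = min(w / iw, h / ih)
    nw, nh = int(iw * scale), int(ih * scale)
    resized = image.resize((nw, nh), Image.BICUBIC)
    canvas = Image.new('RGB', size, (128, 128, 128))
    canvas.paste(resized, ((w - nw) // 2, (h - nh) // 2))
    return canvas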

Evaluation Steps

a. Evaluating the VOC07+12 test set

  1. Evaluation uses the VOC format. VOC07+12 already comes with a test-set split, so there is no need to run voc_annotation.py to generate the txts under the ImageSets folder.
  2. In yolo.py, modify model_path and classes_path. model_path points to the trained weight file, in the logs folder. classes_path points to the txt listing the detection classes.
  3. Run get_map.py to obtain the evaluation results, which are saved in the map_out folder.

b. Evaluating your own dataset

  1. Evaluation uses the VOC format.
  2. If you ran voc_annotation.py before training, the code has already split the data into training, validation, and test sets. To change the test-set fraction, modify trainval_percent in voc_annotation.py. trainval_percent sets the ratio of (training + validation) data to test data; by default, (training + validation) : test = 9 : 1. train_percent sets the ratio of training data to validation data within that portion; by default, training : validation = 9 : 1.
  3. After splitting the test set with voc_annotation.py, edit classes_path in get_map.py so it points to the txt listing the detection classes, the same txt used for training. It must be changed when evaluating your own dataset.
  4. In yolo.py, modify model_path and classes_path. model_path points to the trained weight file, in the logs folder. classes_path points to the txt listing the detection classes.
  5. Run get_map.py to obtain the evaluation results, which are saved in the map_out folder; a sketch of the flow follows.
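
A rough sketch of that flow, under stated assumptions: the YOLO class accepts keyword overrides of its _defaults, get_map_txt writes per-image detection results, and utils.utils_map.get_map computes the VOC mAP (get_map.py itself also writes the ground-truth txts from the VOC XMLs, which this sketch omits):

import os
from PIL import Image
from yolo import YOLO
from utils.utils import get_classes
from utils.utils_map import get_map

class_names, _ = get_classes('model_data/voc_classes.txt')
# Use a very low confidence so the mAP computation sees all candidate boxes.
yolo = YOLO(confidence=0.001, nms_iou=0.5)

os.makedirs('map_out/detection-results', exist_ok=True)
image_ids = open('VOCdevkit/VOC2007/ImageSets/Main/test.txt').read().strip().split()
for image_id in image_ids:
    image = Image.open(os.path.join('VOCdevkit/VOC2007/JPEGImages', image_id + '.jpg'))
    yolo.get_map_txt(image_id, image, class_names, 'map_out')

# Once the ground-truth txts are also in place, compute mAP at IoU 0.5:
get_map(0.5, True, path='map_out')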

Reference

https://github.com/qqwweee/keras-yolo3
https://github.com/eriklindernoren/PyTorch-YOLOv3
https://github.com/BobLiu20/YOLOv3_PyTorch

mobilenet-yolov4-keras's Issues

C++ model deployment

Hi, I may use this model later. I plan to investigate deploying it in C++ with OpenCV DNN and ONNX, just letting you know in advance.

Project dependencies may have API risk issues

Hi, in mobilenet-yolov4-keras, inappropriate dependency version constraints can cause risks.

Below are the dependencies and version constraints the project uses:

scipy==1.2.1
numpy==1.17.0
Keras==2.1.5
matplotlib==3.1.2
opencv_python==4.1.2.30
tensorflow_gpu==1.13.2
tqdm==4.60.0
Pillow==8.2.0
h5py==2.10.0

The == version constraint introduces a risk of dependency conflicts because the dependency scope is too strict.
Constraints with no upper bound, or *, introduce a risk of missing-API errors because the latest versions of the dependencies may remove some APIs.

After further analysis, in this project:
The version constraint of matplotlib can be changed to >=1.3.0,<=3.0.3.
The version constraint of tqdm can be changed to >=4.36.0,<=4.64.0.
The version constraint of Pillow can be changed to ==9.2.0.
The version constraint of Pillow can be changed to >=2.0.0,<=9.1.1.

The above suggestions reduce dependency conflicts as much as possible while adopting the latest versions that do not introduce call errors.

The current project invokes the following methods.

Methods called from matplotlib:
matplotlib.use
Methods called from tqdm:
tqdm.tqdm
Methods called from Pillow:
PIL.Image.fromarray
PIL.ImageFont.truetype
PIL.Image.open
PIL.Image.new
(The issue then enumerates every other method invocation in the project, several hundred auto-generated entries, not reproduced here.)

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.

AttributeError: 'str' object has no attribute 'decode'

Traceback (most recent call last):
  File "D:/AI/projects/mobilenet-yolov4-lite-keras-main/mobilenet-yolov4-lite-keras-main/train.py", line 257, in <module>
    model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
  File "D:\AI\anaconda3-5.3.1\envs\yolov4\lib\site-packages\keras\engine\topology.py", line 2653, in load_weights
    reshape=reshape)
  File "D:\AI\anaconda3-5.3.1\envs\yolov4\lib\site-packages\keras\engine\topology.py", line 3407, in load_weights_from_hdf5_group_by_name
    original_keras_version = f.attrs['keras_version'].decode('utf8')
AttributeError: 'str' object has no attribute 'decode'

This error is raised during training. How can I fix it?
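
This error typically means h5py 3.x is installed: its HDF5 attributes already come back as str, while this Keras version still calls .decode on them. Pinning h5py to the version listed in the requirements above (h5py==2.10.0) is a common workaround; a minimal check, as a sketch:

import h5py
# The legacy Keras HDF5 loader expects bytes attributes, i.e. h5py < 3.0.
print(h5py.__version__)  # this repo pins h5py==2.10.0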

Swapping the backbone into an instance-segmentation network

Hi, I want to use this MobileNet as the backbone of YOLACT. I changed the pretrained weights to the weights of this mobilenet-yolov4-keras network, but the trained results are very poor, with almost no predicted boxes. Your original YOLACT works well, though. What could be the reason? Is it related to the dataset format? The YOLACT one is based on COCO.

Question about weight file sizes

In theory MobileNetV1 should have fewer parameters than V3, so why is the final saved weight file 48 MB for V1 but only 44.7 MB for V3?
