SSD-TensorFlow: Single-Class Object Detection

I have mainly been working on object detection tasks recently. Before I picked up deep learning, traditional methods did not perform well on my targets, hampered by scale variation and weak, indistinct features.

I. Getting the SSD-TensorFlow demo running

This step is straightforward to reproduce; I mainly followed balancap's walkthrough on GitHub:

https://github.com/balancap/SSD-Tensorflow

A Chinese translation: https://blog.csdn.net/yexiaogu1104/article/details/77415990
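
For reference, the demo boils down to unzipping the pretrained checkpoint and opening the example notebook, roughly as the repository README describes:

cd checkpoints
unzip ssd_300_vgg.ckpt.zip
cd ..
jupyter notebook notebooks/ssd_notebook.ipynb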

II. Single-class object detection

With the previous step working, what comes next? The demo detects the 20 VOC classes, but for my needs I only want to train a pedestrian (person) detector.

1. Preparing the data

(1) Extract the XMLs and images containing persons from the original VOC dataset

The script below is adapted from one shared online; change the directories to match your own layout (mine are rather long, read carefully):

bash xxx.sh
           
#!/bin/sh
year="VOC2007"
# mkdir ... where to store the extracted files
#mkdir .././datasets/test2/test1/
mkdir .././datasets/VOCperson/${year}_Anno/
mkdir .././datasets/VOCperson/${year}_Image/

cd .././datasets/VOCtrainval_06-Nov-/VOCdevkit/VOC2007/Annotations/
grep -H -R "<name>person</name>" > /media/xd/E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson/temp.txt  # find the lines containing the keyword and save them to a temporary file
#grep -H -R "<name>person</name>" > temp.txt

cd /media/xd/E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson

cat temp.txt | sort | uniq > $year.txt    # sort by file name and drop duplicate lines

find -name $year.txt | xargs perl -pi -e 's|.xml:\t\t<name>person</name>||g'   # strip the suffix and the matched text, keeping only the bare file names

cat $year.txt | xargs -i cp /media/xd/E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCtrainval_06-Nov-/VOCdevkit/VOC2007/Annotations/{}.xml /media/xd/E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson/${year}_Anno/

cat $year.txt | xargs -i cp /media/xd/E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCtrainval_06-Nov-/VOCdevkit/VOC2007/JPEGImages/{}.jpg /media/xd/E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson/${year}_Image/

rm temp.txt
           

That gives us the images containing persons and the corresponding XML files, as shown below.

[Screenshot: the extracted person images and XML files]

(2) Clean up the XML files

The extracted XML files may still contain object entries for other classes, which need to be removed as well:

python xxx.py
           
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Tue Oct 31 10:03:03 2017

@author: hans

"""

import os
import xml.etree.ElementTree as ET

origin_ann_dir = 'Annotations_old/'
new_ann_dir = 'Annotations/'

for dirpaths, dirnames, filenames in os.walk(origin_ann_dir):
  for filename in filenames:
    if os.path.isfile(r'%s%s' % (origin_ann_dir, filename)):
      origin_ann_path = os.path.join(r'%s%s' % (origin_ann_dir, filename))
      new_ann_path = os.path.join(r'%s%s' % (new_ann_dir, filename))
      tree = ET.parse(origin_ann_path)

      root = tree.getroot()
      # findall() returns a list, so removing elements while looping is safe
      for obj in root.findall('object'):
        name = str(obj.find('name').text)
        if not (name == "person"):  # drop every object that is not a person
          root.remove(obj)

      tree.write(new_ann_path)
           
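
In my layout the extracted folders from step (1) do not match the relative names the script expects, so I rearrange them before running it (the directory names here are just my assumption about how you organize things):

cd .././datasets/VOCperson/
mv VOC2007_Anno Annotations_old
mkdir Annotations
python xxx.py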

(3) Split into training and test sets

Again, adjust the directories to your own; note that this one is run as a .py file:

import os
import random

xmlfilepath = r'/media/xd/000398040009E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson/VOC2007_Anno'
saveBasePath = r"/media/xd/000398040009E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson"

trainval_percent = 0.8  # fraction of all samples used for trainval (example value; tune to taste)
train_percent = 0.7     # fraction of trainval used for train (example value)
total_xml = os.listdir(xmlfilepath)
num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

print("train and val size", tv)
print("train size", tr)
# the ImageSets/Main directory must already exist under saveBasePath
ftrainval = open(os.path.join(saveBasePath, 'ImageSets/Main/trainval.txt'), 'w')
ftest = open(os.path.join(saveBasePath, 'ImageSets/Main/test.txt'), 'w')
ftrain = open(os.path.join(saveBasePath, 'ImageSets/Main/train.txt'), 'w')
fval = open(os.path.join(saveBasePath, 'ImageSets/Main/val.txt'), 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'  # strip the ".xml" suffix
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftrain.write(name)
        else:
            fval.write(name)
    else:
        ftest.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()
           

(4) Convert to tfrecords

This step needs hardly any modification and caused no real problems.
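
For reference, the conversion goes through the repository's tf_convert_data.py, roughly as below; the directories are my own guess at this person-only layout, and the converter expects the standard Annotations/ and JPEGImages/ folder names inside the dataset directory:

DATASET_DIR=.././datasets/VOCperson/
OUTPUT_DIR=.././datasets/VOCperson/tfrecord/
python3 ../tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_train \
    --output_dir=${OUTPUT_DIR}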

III. Training the network (fine-tuning)

(1) Modify the pascalvoc_common.py file

[Screenshot: the modification to pascalvoc_common.py]
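
Roughly, the VOC_LABELS dictionary in datasets/pascalvoc_common.py gets cut down so that only the background and person entries remain; a sketch of the idea, not a verbatim copy of my file:

# datasets/pascalvoc_common.py (excerpt): keep only background + person
VOC_LABELS = {
    'none': (0, 'Background'),
    'person': (1, 'Person'),
}

If the class count changes, the num_classes used elsewhere (dataset definition and the training flags) has to agree with it.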

(2) Note that this is fine-tuning. When I first started, I commented out CHECKPOINT_PATH, which directly produced a loss around 30–50 and a model that detected nothing (stuck on this for days, heh). Training from the plain VGG-16 weights gave the same result (wrong parameters?).

# Directory where the dataset files are stored.
DATASET_DIR=/media/xd/000398040009E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/datasets/VOCperson/tfrecord/
#../../../../common/dataset/VOC2007/VOCtrainval_06-Nov-2007/VOCdevkit/VOC2007_tfrecord/

# Directory where checkpoints and event logs are written to.
TRAIN_DIR=.././log_files/log_person/

# The path to a checkpoint from which to fine-tune.
CHECKPOINT_PATH=/media/xd/000398040009E3B2/txh_ubuntu/hands_on_ml/SSD-Tensorflow-master/checkpoints/VGG_VOC0712_SSD_300x300_iter_120000/VGG_VOC0712_SSD_300x300_iter_120000.ckpt

# hyperparameter values below follow the repository's fine-tuning example
python3 ../train_ssd_network.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 \
    --dataset_split_name=train \
    --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --save_summaries_secs=60 \
    --save_interval_secs=600 \
    --weight_decay=0.0005 \
    --optimizer=adam \
    --learning_rate=0.001 \
    --batch_size=32
           

Finally, a result image (somewhat disappointing). Time to tune parameters, and tune some more.

[Screenshot: person detection result]

IV. Remote-sensing image detection

See the next post, where I tackle the problem that high-resolution remote-sensing images are too large for targets to be detected; in other words, after zooming into the image and cropping it, the same targets become detectable again.
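
That zoom-and-crop trick amounts to tiling: split the large image into overlapping crops, run the detector on each crop, and shift the resulting boxes back into full-image coordinates. A minimal sketch of the idea (tile size, overlap, and the detect_fn callback are assumptions, not code from this project):

# Sliding-window tiling sketch; image is an HxWxC array, detect_fn is hypothetical.
def detect_on_tiles(image, detect_fn, tile=300, overlap=100):
    """Run detect_fn on overlapping crops and map boxes back to full-image coordinates."""
    h, w = image.shape[:2]
    stride = tile - overlap
    all_boxes = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            crop = image[y:y + tile, x:x + tile]
            for ymin, xmin, ymax, xmax, score in detect_fn(crop):
                # shift the crop-local box into full-image coordinates
                all_boxes.append((ymin + y, xmin + x, ymax + y, xmax + x, score))
    return all_boxes  # run NMS afterwards to merge duplicates from overlapping tiles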
