Liu Jun, Hou Shihao, Zhang Kai, Zhang Rui, Hu Chaochao. Real-time vehicle detection and tracking based on enhanced Tiny YOLOV3 algorithm[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(8): 118-125. DOI: 10.11975/j.issn.1002-6819.2019.08.014

    Real-time vehicle detection and tracking based on enhanced Tiny YOLOV3 algorithm

    • Abstract: To address the high missed-detection rate for small vehicle targets and the difficulty of achieving embedded real-time detection with deep learning methods in visual vehicle detection, this paper proposes an enhanced Tiny YOLOV3 model based on the Tiny YOLOV3 algorithm, and tracks target vehicles with Hungarian matching and Kalman filtering. Comparative experiments were conducted on an on-board Jetson TX2 embedded platform under both daytime and nighttime driving conditions. The results show that, compared with the Tiny YOLOV3 model, the enhanced Tiny YOLOV3 model improves the mean vehicle detection precision by 4.6%, reduces the mean false detection rate by 0.5%, reduces the mean missed detection rate by 7.4%, and increases the mean processing time by 43.8 ms/frame. After adding the tracking algorithm, the proposed model improves the mean detection precision by 10.6%, reduces the mean false detection rate by 1.2%, reduces the mean missed detection rate by 23.6%, and runs about 5 times faster on average, reaching 30 frames/s. These results indicate that the proposed algorithm can detect target vehicles accurately in real time and provides a reference for embedded engineering applications of convolutional neural network models.
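
To make the tracking step concrete, below is a minimal Python sketch of the kind of pipeline the abstract names: one constant-velocity Kalman filter per vehicle and Hungarian matching (here via scipy.optimize.linear_sum_assignment) on an IoU cost matrix between predicted and detected boxes. The state layout, noise covariances and IoU threshold are assumptions for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class BoxKalman:
    """Constant-velocity Kalman filter over a box state [cx, cy, w, h, vx, vy]."""
    def __init__(self, box, dt=1.0):
        self.x = np.array([*box, 0.0, 0.0], dtype=float)
        self.P = np.eye(6) * 10.0                     # initial uncertainty (assumed)
        self.F = np.eye(6); self.F[0, 4] = self.F[1, 5] = dt
        self.H = np.eye(4, 6)                         # we observe cx, cy, w, h
        self.Q, self.R = np.eye(6) * 1e-2, np.eye(4) * 1e-1   # noise levels (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, box):
        y = np.asarray(box, dtype=float) - self.H @ self.x    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

def iou(a, b):
    """IoU of two (cx, cy, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0] - a[2]/2, a[1] - a[3]/2, a[0] + a[2]/2, a[1] + a[3]/2
    bx1, by1, bx2, by2 = b[0] - b[2]/2, b[1] - b[3]/2, b[0] + b[2]/2, b[1] + b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def step(trackers, detections, min_iou=0.3):
    """One frame: predict all tracks, match to detections with the Hungarian
    algorithm, update matched tracks, start new tracks for unmatched detections."""
    preds = [t.predict() for t in trackers]
    if preds and detections:
        cost = np.array([[1.0 - iou(p, d) for d in detections] for p in preds])
        rows, cols = linear_sum_assignment(cost)
    else:
        rows, cols = np.array([], int), np.array([], int)
    matched = set()
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= min_iou:
            trackers[r].update(detections[c])
            matched.add(c)
    trackers += [BoxKalman(d) for i, d in enumerate(detections) if i not in matched]
    return trackers
```

In use, step() would be called once per frame with the detector's (cx, cy, w, h) boxes; a real implementation would also age out tracks that go unmatched for several frames.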

       

      Abstract: For intelligent vehicles and advanced driver assistance systems, real-time and accurate detection and tracking of vehicle objects through on-board visual sensors help discover potential dangers early, so that the active safety system can warn the driver in time or control the braking and steering systems to avoid traffic accidents. In recent years, vehicle detection based on deep learning has become a research hotspot. Although deep learning methods have brought significant gains in vehicle detection precision, they still suffer from a high missed detection rate for small vehicle targets and rely on expensive computing resources, which makes embedded real-time application difficult. Further analysis shows that the main reason for these problems is that deep convolutional neural networks do not prune network layer parameters reasonably and, in particular, do not make good use of shallow semantic information; in addition, the successive down-sampling layers lose vehicle information, especially for small vehicle objects. How to effectively extract and use the semantic information of small vehicle objects is therefore the problem addressed in this paper, together with the question of how to prune network layer parameters reasonably. For the detection algorithm, on the one hand, based on a visual analysis of the receptive fields of the shallow layers of the Tiny YOLOV3 network, the use of shallow semantic information was strengthened by constructing a shallow feature pyramid; on the other hand, a shallow down-sampling layer was replaced by a convolution layer to reduce the loss of shallow semantic information and extract more shallow features of vehicle objects. Combining these two aspects, the enhanced Tiny YOLOV3 network was proposed. For the tracking algorithm, because the frame rate of the vehicle-mounted camera is high, the vehicle objects in adjacent frames were assumed to move at constant velocity without considering the image information; a Kalman filter was used to track each vehicle target and optimally estimate its observed position. The proposed enhanced Tiny YOLOV3 network was trained on 50 000 images collected by the on-board camera during the day and at night. The training strategy included a pre-trained model, multi-scale training, batch normalization and data augmentation, the same as for the Tiny YOLOV3 network. On the on-board Jetson TX2 embedded platform, 8 groups of comparative experiments covering daytime and nighttime traffic scenes were carried out against the Tiny YOLOV3 model. The experimental results showed that, compared with the Tiny YOLOV3 model and without the tracking algorithm, the mean precision of the enhanced Tiny YOLOV3 model was improved by 4.6%, the mean false detection rate was reduced by 0.5%, the mean missed detection rate was reduced by 7.4%, and the mean time consumption was increased by 43.8 ms/frame. After adding the vehicle tracking algorithm, the mean precision was improved by 10.6%, the mean false detection rate was reduced by 1.2%, the mean missed detection rate was reduced by 23.6%, and the mean processing speed was about 5 times that of the Tiny YOLOV3 model, reaching 30 frames/s. The study provides guidance for applying embedded vehicle detection and tracking algorithms in intelligent vehicles and advanced driver assistance systems.
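
The abstract describes two structural changes to Tiny YOLOV3: a shallow feature pyramid that reuses shallow semantic information, and a shallow down-sampling layer replaced by a convolution layer. The sketch below illustrates both ideas in PyTorch; the layer positions, channel counts and single detection head are illustrative assumptions, not the authors' exact Darknet configuration.

```python
import torch
import torch.nn as nn

def conv_bn_leaky(c_in, c_out, k=3, s=1):
    """Conv + BatchNorm + LeakyReLU block in the usual YOLO style."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, stride=s, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

class EnhancedTinySketch(nn.Module):
    """Toy backbone showing the two enhancements, not the full network."""
    def __init__(self, num_anchors=3, num_classes=1):
        super().__init__()
        self.stem = conv_bn_leaky(3, 16)
        # (1) Stride-2 convolution instead of nn.MaxPool2d(2, 2): down-sampling is
        # learned, so less shallow semantic information is discarded.
        self.down1 = conv_bn_leaky(16, 32, s=2)
        self.block1 = conv_bn_leaky(32, 64)          # shallow features, kept for fusion
        self.down2 = conv_bn_leaky(64, 128, s=2)
        self.block2 = conv_bn_leaky(128, 256)        # deeper features
        # (2) Shallow feature pyramid: up-sample deep features, concatenate them with
        # the shallow map, and predict boxes for small vehicles on the fused map.
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse = conv_bn_leaky(256 + 64, 128)
        out_ch = num_anchors * (5 + num_classes)     # YOLO-style output per anchor
        self.head_small = nn.Conv2d(128, out_ch, 1)

    def forward(self, x):
        x = self.stem(x)
        x = self.down1(x)
        shallow = self.block1(x)                     # stride-2 feature map
        deep = self.block2(self.down2(shallow))      # stride-4 feature map
        fused = self.fuse(torch.cat([self.up(deep), shallow], dim=1))
        return self.head_small(fused)
```

The fused stride-2 map keeps finer spatial detail than the deep map alone, which is the mechanism the abstract credits for the lower missed-detection rate on small vehicles.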

       
