Comparison of deep learning-based object detection methods for crop damage assessment (CS)

Severe weather can cause large financial losses to farmers. Detailed information on the location and severity of damage helps farmers, insurance companies, and disaster response agencies make informed post-damage decisions. The goal of this study was a proof of concept for detecting damaged crop areas from aerial imagery using computer vision and deep learning. A specific objective was to compare existing object detection algorithms to determine which is best suited for crop damage detection. Two modes of crop damage common in maize (corn) production were simulated: stalk lodging at the lowest ear and stalk lodging at ground level. The simulated damage was used to create training and analysis datasets, and images were acquired with an unmanned aerial system (UAS) equipped with an RGB camera. Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged regions in the field, and average precision was used to compare them. YOLOv2 and RetinaNet were able to detect crop damage across multiple late-season growth stages, while Faster R-CNN was less successful than the other two detectors. Detecting crop damage at later growth stages was harder for all tested detectors, because weed pressure in the simulated damage plots and increased target density added extra complexity.

Original title: Comparison of object detection methods for crop damage assessment using deep learning

Original abstract: Severe weather events can cause large financial losses to farmers. Detailed information on the location and severity of damage will assist farmers, insurance companies, and disaster response agencies in making wise post-damage decisions. The goal of this study was a proof-of-concept to detect damaged crop areas from aerial imagery using computer vision and deep learning techniques. A specific objective was to compare existing object detection algorithms to determine which was best suited for crop damage detection. Two modes of crop damage common in maize (corn) production were simulated: stalk lodging at the lowest ear and stalk lodging at ground level. Simulated damage was used to create a training and analysis data set. An unmanned aerial system (UAS) equipped with an RGB camera was used for image acquisition. Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged regions in a field. Average precision was used to compare object detectors. YOLOv2 and RetinaNet were able to detect crop damage across multiple late-season growth stages. Faster R-CNN was not as successful as the other two advanced detectors. Detecting crop damage at later growth stages was more difficult for all tested object detectors. Weed pressure in simulated damage plots and increased target density added additional complexity.
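The abstract's evaluation metric, average precision (AP), scores a detector by greedily matching predicted boxes to ground-truth boxes at an IoU threshold and integrating the resulting precision-recall curve. The paper does not publish its evaluation code, so the following is only a minimal illustrative sketch of that standard computation (the function names, box format, and 0.5 IoU threshold are assumptions, not the authors' implementation):

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(predictions, ground_truth, iou_threshold=0.5):
    # predictions: list of (confidence, box); ground_truth: list of boxes.
    # Greedily match predictions to unmatched ground truth, highest score first.
    predictions = sorted(predictions, key=lambda p: -p[0])
    matched = set()
    tp = np.zeros(len(predictions))
    for i, (_, box) in enumerate(predictions):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            overlap = iou(box, gt)
            if overlap > best_iou:
                best_iou, best_j = overlap, j
        if best_iou >= iou_threshold:
            tp[i] = 1
            matched.add(best_j)
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(ground_truth), 1)
    precision = cum_tp / np.arange(1, len(predictions) + 1)
    # Area under the precision-recall curve (all-point interpolation).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

A detector whose highest-confidence prediction is a false positive is penalized more than one whose false positives rank below its true positives, which is why AP is a common single-number summary for comparing detectors such as Faster R-CNN, YOLOv2, and RetinaNet.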

Original authors: Ali HamidiSepehr, Seyed Vahid Mirnezami, James Ward

Original link: https://arxiv.org/abs/1912.13199