Comparison of object detection methods for crop damage assessment using deep learning (CS)

Severe weather events can cause large financial losses to farmers. Detailed information on the location and severity of damage will assist farmers, insurance companies, and disaster response agencies in making informed post-damage decisions. The goal of this study was a proof of concept for detecting damaged crop areas from aerial imagery using computer vision and deep learning techniques. A specific objective was to compare existing object detection algorithms to determine which is best suited for crop damage detection. Two modes of crop damage common in maize (corn) production were simulated: stalk lodging at the lowest ear and stalk lodging at ground level. The simulated damage was used to create training and analysis data sets, and an unmanned aerial system (UAS) equipped with an RGB camera was used for image acquisition. Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged regions in the field, with average precision used to compare them. YOLOv2 and RetinaNet were able to detect crop damage across multiple late-season growth stages, while Faster R-CNN was not as successful as the other two detectors. Detecting crop damage at later growth stages was more difficult for all tested detectors, because weed pressure in the simulated damage plots and increased target density added extra complexity.
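
The abstract states that average precision (AP) was used to compare the detectors. As a rough illustration only (not the authors' evaluation code), the sketch below computes single-class AP the way Pascal-VOC-style benchmarks typically do: detections pooled across images are sorted by confidence, greedily matched to ground-truth boxes at an IoU threshold (0.5 here, an assumed value), and AP is taken as the area under the resulting precision-recall curve. The function names and data layout are hypothetical.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truths, iou_thresh=0.5):
    """Single-class AP.

    detections: list of (image_id, confidence, box), pooled over all images.
    ground_truths: dict mapping image_id -> list of boxes.
    """
    n_gt = sum(len(boxes) for boxes in ground_truths.values())
    matched = {img: [False] * len(b) for img, b in ground_truths.items()}
    # Rank detections by descending confidence.
    detections = sorted(detections, key=lambda d: -d[1])
    tp = np.zeros(len(detections)); fp = np.zeros(len(detections))
    for i, (img, _, box) in enumerate(detections):
        gts = ground_truths.get(img, [])
        ious = [iou(box, g) for g in gts]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thresh and not matched[img][best]:
            tp[i] = 1; matched[img][best] = True  # first hit on this GT box
        else:
            fp[i] = 1  # duplicate detection or insufficient overlap
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp))
    # Area under the PR curve, all-point interpolation (VOC 2010+ style).
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]  # enforce non-increasing precision
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

Since crop damage detection here is effectively a single-class task, one AP value per detector is enough for the comparison; in a multi-class setting this would be computed per class and averaged into mAP.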

Original title: Comparison of object detection methods for crop damage assessment using deep learning

Original authors: Ali HamidiSepehr, Seyed Vahid Mirnezami, James Ward

Original link: https://arxiv.org/abs/1912.13199