VisionNet: A Drivable-space-based Interactive Motion Prediction Network for Autonomous Driving (cs.CV)

  • January 10, 2020
  • Notes

Understanding the surrounding traffic situation is essential to the driving safety of autonomous vehicles. In recent years, a great deal of research has pursued this goal, yet the problem remains hard to solve well because of the collective influence of agents in complex scenarios. Existing approaches model interactions through the spatial relations between the target obstacle and its neighbors. However, they oversimplify the challenge because the training stage of the interactions lacks effective supervision, so the resulting models are far from satisfactory. More intuitively, the authors reformulate the problem as computing interaction-aware drivable spaces and propose the CNN-based VisionNet for trajectory prediction. VisionNet takes a sequence of motion states, i.e., position, velocity, and acceleration, and estimates the future drivable space. The reified interactions significantly improve VisionNet's interpretability and refine its predictions. To further boost performance, the authors propose an interactive loss to guide the generation of the drivable spaces. Experiments on multiple public datasets demonstrate the effectiveness of the method.
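The abstract only specifies VisionNet's interface: a sequence of motion states (position, velocity, acceleration) goes in, and a spatial map of future drivable space comes out. The paper's actual architecture and loss are not given here, so the sketch below is purely illustrative: it rasterizes hypothetical motion states onto a grid, applies a single hand-rolled convolution, and normalizes the result into a probability map. All function names, shapes, and the grid encoding are assumptions, not the authors' design.

```python
import numpy as np

def rasterize_states(states, grid=16, extent=20.0):
    """Rasterize a (T, 6) sequence of motion states [x, y, vx, vy, ax, ay]
    onto a 3-channel grid: occupancy, speed, and acceleration magnitude.
    The encoding is a hypothetical stand-in for the paper's input."""
    chans = np.zeros((3, grid, grid))
    for x, y, vx, vy, ax, ay in states:
        i = int(np.clip((x + extent) / (2 * extent) * (grid - 1), 0, grid - 1))
        j = int(np.clip((y + extent) / (2 * extent) * (grid - 1), 0, grid - 1))
        chans[0, i, j] = 1.0                  # occupancy
        chans[1, i, j] = np.hypot(vx, vy)     # speed
        chans[2, i, j] = np.hypot(ax, ay)     # acceleration magnitude
    return chans

def conv2d(x, w):
    """Valid 2D convolution of a (C, H, W) input with a (C, kh, kw) kernel,
    summed over channels; a minimal stand-in for a CNN layer."""
    c, h, ww = x.shape
    _, kh, kw = w.shape
    out = np.zeros((h - kh + 1, ww - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w)
    return out

def drivable_space(states, w):
    """Map a motion-state sequence to a probability map over grid cells,
    interpreted as the estimated future drivable space."""
    feat = conv2d(rasterize_states(states), w)
    p = np.exp(feat - feat.max())             # softmax over spatial cells
    return p / p.sum()

# Toy usage with random states and a random kernel:
rng = np.random.default_rng(0)
states = rng.normal(size=(8, 6)) * 5.0        # 8 timesteps of motion states
kernel = rng.normal(size=(3, 3, 3)) * 0.1
prob_map = drivable_space(states, kernel)     # (14, 14) map summing to 1
```

The point of the sketch is only the data flow: per-timestep motion states become spatial channels, a convolutional operator aggregates them, and the output is a normalized map over space rather than a single trajectory point.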

Original title: VisionNet: A Drivable-space-based Interactive Motion Prediction Network for Autonomous Driving

Original abstract: The comprehension of environmental traffic situation largely ensures the driving safety of autonomous vehicles. Recently, the mission has been investigated by plenty of researches, while it is hard to be well addressed due to the limitation of collective influence in complex scenarios. These approaches model the interactions through the spatial relations between the target obstacle and its neighbors. However, they oversimplify the challenge since the training stage of the interactions lacks effective supervision. As a result, these models are far from promising. More intuitively, we transform the problem into calculating the interaction-aware drivable spaces and propose the CNN-based VisionNet for trajectory prediction. The VisionNet accepts a sequence of motion states, i.e., location, velocity, and acceleration, to estimate the future drivable spaces. The reified interactions significantly increase the interpretation ability of the VisionNet and refine the prediction. To further advance the performance, we propose an interactive loss to guide the generation of the drivable spaces. Experiments on multiple public datasets demonstrate the effectiveness of the proposed VisionNet.

Authors: Yanliang Zhu, Deheng Qian, Dongchun Ren, Huaxia Xia

Original link: https://arxiv.org/abs/2001.02354