VisionNet: A Drivable-space-based Interactive Motion Prediction Network for Autonomous Driving (cs.CV)

  • January 10, 2020
  • Notes

Understanding the surrounding traffic situation largely ensures the driving safety of autonomous vehicles. In recent years a great deal of research has pursued this goal, yet the problem remains hard to solve well because of the collective influence among agents in complex scenarios. Existing methods model interactions through the spatial relations between the target obstacle and its neighbors. However, they oversimplify the challenge, since the training stage of the interactions lacks effective supervision; as a result, these models are far from satisfactory. More intuitively, the authors transform the problem into computing interaction-aware drivable spaces and propose the CNN-based VisionNet for trajectory prediction. VisionNet accepts a sequence of motion states, i.e., position, velocity, and acceleration, and estimates the future drivable space. The reified interactions significantly improve VisionNet's interpretability and refine its predictions. To further improve performance, an interactive loss is proposed to guide the generation of the drivable spaces. Experiments on multiple public datasets demonstrate the effectiveness of the method.
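The abstract does not describe how VisionNet actually computes its drivable spaces, so the following is only a toy numpy sketch of the general idea of an "interaction-aware drivable space": a kinematic reachability prior around the ego vehicle's constant-acceleration rollout, suppressed near predicted neighbor positions. The function name, grid parameters, and the Gaussian formulation are all my own assumptions for illustration, not the paper's method.

```python
import numpy as np

def drivable_space_heatmap(ego_state, neighbors, grid_size=32,
                           extent=20.0, horizon=1.0, sigma=2.0):
    """Toy interaction-aware drivable-space map (NOT the paper's model).

    ego_state: (position, velocity, acceleration), each a 2-vector.
    neighbors: list of (position, velocity) pairs for nearby agents.
    Returns a (grid_size, grid_size) array of scores in [0, 1].
    """
    pos, vel, acc = (np.asarray(v, dtype=float) for v in ego_state)
    # Constant-acceleration rollout of the ego vehicle to the horizon.
    pred = pos + vel * horizon + 0.5 * acc * horizon ** 2

    xs = np.linspace(-extent, extent, grid_size)
    gx, gy = np.meshgrid(xs, xs)            # gx varies along columns, gy along rows
    cells = np.stack([gx, gy], axis=-1)     # (grid_size, grid_size, 2) cell centers

    # Reachability prior: Gaussian bump around the kinematic prediction.
    score = np.exp(-np.sum((cells - pred) ** 2, axis=-1) / (2 * sigma ** 2))

    # Interaction term: suppress cells near each neighbor's predicted position.
    for n_pos, n_vel in neighbors:
        n_pred = np.asarray(n_pos, dtype=float) + np.asarray(n_vel, dtype=float) * horizon
        score *= 1.0 - np.exp(-np.sum((cells - n_pred) ** 2, axis=-1) / (2 * sigma ** 2))

    return score / score.max()
```

With no neighbors the map peaks at the ego vehicle's rolled-out position; adding a neighbor whose predicted position coincides with that cell drives its score down, which is the qualitative behavior the abstract attributes to the reified interactions.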

Original title: VisionNet: A Drivable-space-based Interactive Motion Prediction Network for Autonomous Driving

Original abstract: The comprehension of environmental traffic situation largely ensures the driving safety of autonomous vehicles. Recently, the mission has been investigated by plenty of researches, while it is hard to be well addressed due to the limitation of collective influence in complex scenarios. These approaches model the interactions through the spatial relations between the target obstacle and its neighbors. However, they oversimplify the challenge since the training stage of the interactions lacks effective supervision. As a result, these models are far from promising. More intuitively, we transform the problem into calculating the interaction-aware drivable spaces and propose the CNN-based VisionNet for trajectory prediction. The VisionNet accepts a sequence of motion states, i.e., location, velocity, and acceleration, to estimate the future drivable spaces. The reified interactions significantly increase the interpretation ability of the VisionNet and refine the prediction. To further advance the performance, we propose an interactive loss to guide the generation of the drivable spaces. Experiments on multiple public datasets demonstrate the effectiveness of the proposed VisionNet.
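The abstract mentions an "interactive loss" that guides drivable-space generation but gives no formula. As a rough, hedged sketch, one plausible shape is a standard reconstruction term plus a penalty on probability mass assigned to cells occupied by neighbors; this decomposition is my assumption, not the paper's formulation:

```python
import numpy as np

def interactive_loss(pred_space, target_space, neighbor_mask, lam=1.0):
    """Illustrative loss (an assumption, not the paper's definition).

    pred_space:    predicted drivable-space probabilities in (0, 1).
    target_space:  ground-truth drivable-space map (0/1).
    neighbor_mask: 1 where a neighboring agent occupies the cell, else 0.
    """
    eps = 1e-7
    p = np.clip(pred_space, eps, 1.0 - eps)
    # Standard binary cross-entropy against the target drivable space.
    bce = -np.mean(target_space * np.log(p) + (1.0 - target_space) * np.log(1.0 - p))
    # Interaction penalty: discourage declaring neighbor-occupied cells drivable.
    penalty = np.mean(p * neighbor_mask)
    return bce + lam * penalty
```

The penalty term is what makes the loss "interactive" in this sketch: the more drivable probability the model places on cells that neighbors occupy, the larger the loss.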

Authors: Yanliang Zhu, Deheng Qian, Dongchun Ren, Huaxia Xia

Link: https://arxiv.org/abs/2001.02354