Self-supervised Representation Learning for Ultrasound Video (cs.CV)

  • March 27, 2020
  • Notes

Recent advances in deep learning have achieved promising performance in medical image analysis, but in most cases ground-truth annotations from human experts are required to train the deep models. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. There is therefore significant interest in learning representations from unlabelled raw data. In this paper, the authors propose a self-supervised learning approach that learns meaningful and transferable representations from medical imaging video without any kind of human annotation. They assume that, in order to learn such representations, the model should identify anatomical structures in the unlabelled data, and they therefore force the model to solve anatomy-aware tasks with supervision that comes for free from the data itself. Specifically, the model corrects the order of a reshuffled video clip and, at the same time, predicts the geometric transformation applied to the clip. Experiments on fetal ultrasound video show that the approach effectively learns meaningful and strong representations that transfer well to downstream tasks such as standard plane detection and saliency prediction.
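The two pretext tasks amount to generating supervision labels directly from the raw clip: a label for the frame-order permutation to be undone and a label for the geometric transformation to be recognized. Below is a minimal sketch of how such labels could be produced; the 4-frame clip length, the small permutation set, and the flip/rotation family are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

# Hypothetical set of allowed frame-order permutations for a 4-frame clip.
PERMUTATIONS = [
    (0, 1, 2, 3),
    (3, 2, 1, 0),
    (1, 0, 3, 2),
    (2, 3, 0, 1),
]


def apply_transform(clip, transform_id):
    """Apply a hypothetical geometric transform to a clip of shape (T, H, W)."""
    if transform_id == 0:
        return clip                               # identity
    if transform_id == 1:
        return clip[:, :, ::-1]                   # horizontal flip
    if transform_id == 2:
        return np.rot90(clip, k=1, axes=(1, 2))   # rotate 90 degrees
    return np.rot90(clip, k=2, axes=(1, 2))       # rotate 180 degrees


def make_pretext_sample(clip, rng):
    """Reshuffle the clip and apply a random geometric transform.

    Returns the corrupted clip together with the two self-supervision
    targets: the permutation index (order-correction task) and the
    transform index (transformation-prediction task).
    """
    order_label = rng.integers(len(PERMUTATIONS))
    transform_label = rng.integers(4)
    shuffled = clip[list(PERMUTATIONS[order_label])]
    corrupted = apply_transform(shuffled, transform_label)
    return corrupted, order_label, transform_label


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = rng.random((4, 64, 64))            # stand-in for 4 ultrasound frames
    x, y_order, y_transform = make_pretext_sample(clip, rng)
    print(x.shape, y_order, y_transform)      # corrupted clip plus the two labels
```

In this reading, a shared video encoder would be trained with two classification heads, one per pretext label, so that solving both tasks without any manual annotation pushes the encoder toward anatomy-aware features.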

Original title: Self-supervised Representation Learning for Ultrasound Video

Original abstract: Recent advances in deep learning have achieved promising performance for medical image analysis, while in most cases ground-truth annotations from human experts are necessary to train the deep model. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. Therefore, there is significant interest in learning representations from unlabelled raw data. In this paper, we propose a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any type of human annotation. We assume that in order to learn such a representation, the model should identify anatomical structures from the unlabelled data. Therefore we force the model to address anatomy-aware tasks with free supervision from the data itself. Specifically, the model is designed to correct the order of a reshuffled video clip and at the same time predict the geometric transformation applied to the video clip. Experiments on fetal ultrasound video show that the proposed approach can effectively learn meaningful and strong representations, which transfer well to downstream tasks like standard plane detection and saliency prediction.

Original authors: Jianbo Jiao, Richard Droste, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble

Original link: https://arxiv.org/abs/2003.00105