CNN Architectures, One Line Each: From LeNet to ShuffleNet

  • November 24, 2019
  • Notes

From simple to complex, then back to simple. I'm losing my mind…

  1. LeNet:Gradient based learning applied to document recognition
  2. AlexNet:ImageNet Classification with Deep Convolutional Neural Networks
  3. ZFNet:Visualizing and understanding convolutional networks
  4. VGGNet:Very deep convolutional networks for large-scale image recognition
  5. NiN:Network in network
  6. GoogLeNet:Going deeper with convolutions
  7. Inception-v3:Rethinking the inception architecture for computer vision
  8. ResNet:Deep residual learning for image recognition
  9. Stochastic_Depth:Deep networks with stochastic depth
  10. WResNet:Weighted residuals for very deep networks
  11. Inception-ResNet:Inception-v4,inception-resnet and the impact of residual connections on learning
  12. Fractalnet:Ultra-deep neural networks without residuals
  13. WRN:Wide residual networks
  14. ResNeXt:Aggregated Residual Transformations for Deep Neural Networks
  15. DenseNet:Densely connected convolutional networks
  16. PyramidNet:Deep Pyramidal Residual Networks
  17. DPN:Dual Path Networks
  18. SqueezeNet:AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
  19. MobileNets:Efficient Convolutional Neural Networks for Mobile Vision Applications
  20. ShuffleNet:An Extremely Efficient Convolutional Neural Network for Mobile Devices
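The list ends with ShuffleNet, whose central trick, the channel shuffle, is simple enough to sketch in a few lines. The snippet below is a minimal NumPy illustration of that operation, not the paper's implementation; the function name, array shapes, and group count are my own choices for the example.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Permute channels across groups, in the spirit of ShuffleNet.

    x: array of shape (N, C, H, W); C must be divisible by `groups`.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # Split channels into (groups, channels_per_group), swap those two
    # axes, then flatten back: channels from different groups interleave,
    # so the next group convolution sees a mix of every group's features.
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# Toy example: fill each of 6 channels with its own index, shuffle
# with 2 groups, and read the resulting channel order.
x = np.arange(6, dtype=np.float32).reshape(1, 6, 1, 1) * np.ones((1, 6, 2, 2), dtype=np.float32)
shuffled = channel_shuffle(x, groups=2)
print(shuffled[0, :, 0, 0])  # [0. 3. 1. 4. 2. 5.]
```

With 2 groups of 3 channels, channels `[0,1,2 | 3,4,5]` come out interleaved as `[0,3,1,4,2,5]`, which is exactly the cross-group information flow that lets ShuffleNet use cheap group convolutions without isolating the groups from each other.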

Original article; please credit when reposting: reposted from URl-team

Permalink: CNN Architectures, One Line Each: From LeNet to ShuffleNet