CNN Architecture Models in One Sentence Each: From LeNet to ShuffleNet
- November 24, 2019
- Notes
From simple to complex, then from complex back to simple. I'm going mad…
- LeNet: Gradient-Based Learning Applied to Document Recognition
- AlexNet: ImageNet Classification with Deep Convolutional Neural Networks
- ZFNet: Visualizing and Understanding Convolutional Networks
- VGGNet: Very Deep Convolutional Networks for Large-Scale Image Recognition
- NiN: Network in Network
- GoogLeNet: Going Deeper with Convolutions
- Inception-v3: Rethinking the Inception Architecture for Computer Vision
- ResNet: Deep Residual Learning for Image Recognition
- Stochastic Depth: Deep Networks with Stochastic Depth
- WResNet: Weighted Residuals for Very Deep Networks
- Inception-ResNet: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
- FractalNet: Ultra-Deep Neural Networks without Residuals
- WRN: Wide Residual Networks
- ResNeXt: Aggregated Residual Transformations for Deep Neural Networks
- DenseNet: Densely Connected Convolutional Networks
- PyramidNet: Deep Pyramidal Residual Networks
- DPN: Dual Path Networks
- SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
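The list above only gives titles, so as one tiny concrete illustration of the final entry: the core trick in ShuffleNet is a channel-shuffle operation that mixes channels across groups after a grouped convolution. A minimal NumPy sketch (the function name and tensor layout are my own choices for illustration, not taken from the paper's code):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels across groups, ShuffleNet-style.

    x: array of shape (N, C, H, W); C must be divisible by `groups`.
    The channels are viewed as (groups, C // groups), transposed, and
    flattened back, so channels from different groups get interleaved.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group axis and within-group axis
    return x.reshape(n, c, h, w)

# With 6 channels in 2 groups, channels [0,1,2,3,4,5] become [0,3,1,4,2,5]:
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, 2).flatten().tolist())  # → [0, 3, 1, 4, 2, 5]
```

The reshape/transpose/reshape is cheap (a memory permutation, no arithmetic), which is why it fits the paper's mobile-efficiency goal.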
Original article; when reposting, please credit: reposted from URl-team
Permalink: CNN Architecture Models in One Sentence Each: From LeNet to ShuffleNet