Paper: H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes
This is a paper that improves on UNet for medical image segmentation.

A conventional 2D UNet is trained on single slices, so it can only capture intra-slice features and misses inter-slice context. A 3D UNet takes the whole volume as input and captures both intra- and inter-slice features at once, but its memory cost is prohibitive.

This paper therefore proposes a hybrid method that fuses 2D (slice-wise) and 3D (volumetric) features.
1. Region-of-interest (ROI) extraction

A ResNet is first used to extract the region of interest, i.e., the slices containing the organ or tumor to be segmented; in this paper, 12 slices are obtained.
2. 2D DenseUNet for intra-slice feature extraction

The input is $I \in \mathbb{R}^{n \times 224 \times 224 \times 12 \times 1}$, where $n$ is the batch size and $224 \times 224 \times 12$ is the input volume. The ground truth is $Y \in \mathbb{R}^{n \times 224 \times 224 \times 12 \times 1}$, where $Y_{i,j,k} = c$ means voxel $(i,j,k)$ belongs to class $c$ (background, liver, or tumor).
Converting the input for the 2D network

The 2D network's input is $I_{2d} \in \mathbb{R}^{12n \times 224 \times 224 \times 3}$, obtained as $I_{2d} = F(I)$, where $F$ is a transform function that works as follows.
The original $I$ has 12 slices. One padded slice is added at the top and one at the bottom, giving 14 slices; every three consecutive slices then form one 3-channel block, splitting $I$ into 12 blocks, which are concatenated along the batch dimension to form $I_{2d}$.
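A minimal NumPy sketch of this transform (the function name is hypothetical, and the post does not specify the padding mode, so edge replication is assumed here):

```python
import numpy as np

def slices_to_2d_input(volume):
    """Hypothetical sketch of F: turn a (n, H, W, 12, 1) volume into
    12n three-channel 2D inputs by grouping adjacent slices."""
    n, h, w, d, _ = volume.shape          # d = 12 slices
    v = volume[..., 0]                    # (n, h, w, d)
    # pad one slice at the top and one at the bottom -> d + 2 = 14 slices
    # (edge replication is an assumption; zero padding would also be plausible)
    v = np.pad(v, ((0, 0), (0, 0), (0, 0), (1, 1)), mode="edge")
    # every three consecutive slices form one 3-channel 2D image
    blocks = [v[..., k:k + 3] for k in range(d)]   # d blocks of (n, h, w, 3)
    return np.concatenate(blocks, axis=0)          # (d*n, h, w, 3)
```

With $n = 2$ and $d = 12$, the output batch has $12 \times 2 = 24$ three-channel images, matching the stated shape $12n \times 224 \times 224 \times 3$.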
$X_{2d} = f_{2d}(I_{2d}; \theta_{2d})$, with $X_{2d} \in \mathbb{R}^{12n \times 224 \times 224 \times 64}$, is the output of upsampling layer 5 of the 2D DenseUNet.

$\hat{y}_{2d} = f_{2d}^{cls}(X_{2d}; \theta_{2d}^{cls})$, with $\hat{y}_{2d} \in \mathbb{R}^{12n \times 224 \times 224 \times 3}$, is the 2D DenseUNet's segmentation output.
3. 3D DenseUNet for inter-slice feature extraction

The input to the 3D DenseUNet is the original volume $I$ concatenated with the transformed 2D segmentation map $\hat{y}'_{2d}$, i.e., $\mathrm{concat}(I, \hat{y}'_{2d})$, where

$\hat{y}'_{2d} = F^{-1}(\hat{y}_{2d})$, with $\hat{y}'_{2d} \in \mathbb{R}^{n \times 224 \times 224 \times 12 \times 3}$.
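Under the same assumptions, $F^{-1}$ just regroups the $12n$ per-slice outputs back into a volume (a hypothetical sketch; the post only specifies the resulting shape, and it is assumed that block $k$ of the 2D batch holds the prediction for slice $k$ of every volume):

```python
import numpy as np

def restack_to_volume(y2d, n):
    """Hypothetical sketch of F^{-1}: regroup (12n, H, W, C) per-slice
    outputs into a (n, H, W, 12, C) volume."""
    d = y2d.shape[0] // n                                 # number of slices
    blocks = [y2d[k * n:(k + 1) * n] for k in range(d)]   # d arrays of (n, H, W, C)
    return np.stack(blocks, axis=3)                       # (n, H, W, d, C)
```

Applied to $\hat{y}_{2d} \in \mathbb{R}^{12n \times 224 \times 224 \times 3}$, this yields $\hat{y}'_{2d} \in \mathbb{R}^{n \times 224 \times 224 \times 12 \times 3}$ as stated above.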
4. Hybrid feature fusion (HFF) layer for fusing 2D and 3D features

The fusion layer's input is $Z = X_{3d} + X'_{2d}$, where

$X'_{2d} = F^{-1}(X_{2d})$, with $X'_{2d} \in \mathbb{R}^{n \times 224 \times 224 \times 12 \times 64}$

$X_{3d} = f_{3d}(I, \hat{y}'_{2d}; \theta_{3d})$

$X_{3d}$ is the feature map from upsampling layer 5 of the 3D DenseUNet.
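Putting the shapes together, the fusion input can be sketched as an element-wise sum after bringing the 2D features into volume form (function name hypothetical; the same batch-to-depth regrouping as $F^{-1}$ is assumed):

```python
import numpy as np

def hff_input(x3d, x2d):
    """Sketch of Z = X_3d + X'_2d: regroup the 2D feature maps
    (12n, H, W, 64) into volume form (n, H, W, 12, 64) via F^{-1},
    then add them element-wise to the 3D feature maps."""
    n = x3d.shape[0]
    d = x2d.shape[0] // n
    x2d_vol = np.stack([x2d[k * n:(k + 1) * n] for k in range(d)], axis=3)
    return x3d + x2d_vol   # (n, H, W, d, 64)
```

The element-wise sum requires the two feature maps to share the channel width (64 here), which is why both networks take their features from upsampling layer 5.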