Abstract: Because the three-dimensional (3D) U-Net extracts local image features insufficiently and pays little attention to fusing high- and low-level features, we propose 3DMAU-Net, a new model based on the 3D U-Net architecture for liver region segmentation. Our model replaces the last two layers of the 3D U-Net with a sliding window-based multilayer perceptron (SMLP), enabling better extraction of local image features. We also design a dilated convolution block that fuses high- and low-level features, emphasizing local features and better supplementing the context surrounding the target region. This block is embedded throughout the encoding path so that the network does not merely downsample: before each feature-extraction stage, the input features are first processed by the dilated convolution block. We evaluate our model on the Liver Tumor Segmentation Challenge 2017 (LiTS2017) dataset, where it achieves a Dice coefficient of 0.95, an improvement of 0.015 over the 3D U-Net model. Furthermore, our model consistently outperforms the other segmentation methods we compare against.
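
To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the module names `DilatedFusionBlock` and `SlidingWindowMLP`, the dilation rates (1, 2, 4), the hidden width, and the residual connections are all illustrative assumptions. It only shows (a) a dilated-convolution block that gathers multi-scale context before an encoding stage and (b) a sliding-window MLP expressed as stacked kernel-size-3 convolutions, since a fully connected layer shared across all 3×3×3 windows is mathematically equivalent to such a convolution.

```python
import torch
import torch.nn as nn


class DilatedFusionBlock(nn.Module):
    """Sketch of a high-/low-level feature fusion block built from parallel
    dilated 3D convolutions. Dilation rates (1, 2, 4) and the residual fusion
    are assumptions for illustration, not the paper's exact design."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv3d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel dilated branches gather context around the target region at
        # several receptive-field sizes; a 1x1x1 conv fuses them, and the
        # residual connection preserves the low-level detail of the input.
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(multi_scale)) + x


class SlidingWindowMLP(nn.Module):
    """Sketch of a sliding window-based MLP (SMLP) stage: a small MLP applied
    to every local 3x3x3 window, written as two kernel-size-3 convolutions."""

    def __init__(self, in_channels: int, hidden_channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv3d(in_channels, hidden_channels, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv3d(hidden_channels, in_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each output voxel depends only on its local neighborhood, which is
        # the local-feature emphasis described in the abstract.
        return x + self.mlp(x)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 16, 32, 32)       # (N, C, D, H, W) toy volume
    feats = DilatedFusionBlock(32)(feats)        # context before an encoder stage
    out = SlidingWindowMLP(32, 64)(feats)        # local mixing in the deep stages
    print(out.shape)                             # torch.Size([1, 32, 16, 32, 32])
```

In this reading, the fusion block runs before each encoder stage so that downsampling is always preceded by multi-scale context aggregation, while the SMLP replaces the deepest convolutional layers to strengthen local feature mixing.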