3DUNet implemented with PyTorch
Introduction
This repository is a 3D U-Net implemented with PyTorch, referring to this project.
I have redesigned the code structure and used the model to perform liver and tumor segmentation on the LiTS2017 dataset.
Paper: 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
Requirements:
pytorch >= 1.1.0
torchvision
SimpleITK
Tensorboard
Scipy
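The dependencies above can be installed with pip. A minimal sketch (package names are assumed from the list above; exact versions beyond the PyTorch minimum are not pinned by this repository):

```shell
pip install "torch>=1.1.0" torchvision SimpleITK tensorboard scipy
```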
Code Structure
├── dataset                # Training and testing dataset
│   ├── dataset_lits_train.py
│   ├── dataset_lits_val.py
│   ├── dataset_lits_test.py
│   └── transforms.py
├── models                 # Model design
│   ├── nn
│   │   └── module.py
│   ├── ResUNet.py         # 3D ResUNet model
│   ├── Unet.py            # 3D UNet model
│   ├── SegNet.py          # 3D SegNet model
│   └── KiUNet.py          # 3D KiUNet model
├── experiments            # Trained models
├── utils                  # Related tools
│   ├── common.py
│   ├── weights_init.py
│   ├── logger.py
│   ├── metrics.py
│   └── loss.py
├── preprocess_LiTS.py     # Preprocessing for raw data
├── test.py                # Test code
├── train.py               # Standard training code
└── config.py              # Configuration for training and testing
Quick Start
1) LiTS2017 dataset preprocessing:
- Download the dataset from Google Drive: Liver Tumor Segmentation Challenge. Or from my share: https://pan.baidu.com/s/1WgP2Ttxn_CV-yRT4UyqHWw (extraction code: hfl8). The dataset consists of two parts: batch1 and batch2.
- Then decompress and merge batch1 and batch2 into one folder. It is recommended to use 20 samples (27~46) of the LiTS dataset as the test set and 111 samples (0~26 and 47~131) as the training set. Put the volume data and segmentation labels of the training set and test set into separate local folders, such as:
raw_dataset:
├── test       # 20 samples (27~46)
│   ├── ct
│   │   ├── volume-27.nii
│   │   ├── volume-28.nii
│   │   └── ...
│   └── label
│       ├── segmentation-27.nii
│       ├── segmentation-28.nii
│       └── ...
└── train      # 111 samples (0~26 and 47~131)
    ├── ct
    │   ├── volume-0.nii
    │   ├── volume-1.nii
    │   └── ...
    └── label
        ├── segmentation-0.nii
        ├── segmentation-1.nii
        └── ...
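Sorting the files into the layout above can be scripted. A minimal sketch, assuming LiTS volumes are numbered consecutively and all files sit in one merged source folder; `split_lits` and `subset_of` are hypothetical helpers, not part of this repository:

```python
import os
import shutil

def subset_of(index):
    """Return which subset a given LiTS sample index belongs to."""
    return "test" if 27 <= index <= 46 else "train"

def split_lits(src_dir, dst_dir, num_samples=131):
    """Copy volumes and labels into test/train folders by sample index.

    Samples 27~46 go to the test set; the remaining samples go to the
    training set, matching the split recommended above.
    """
    for i in range(num_samples):
        subset = subset_of(i)
        for kind, name in (("ct", f"volume-{i}.nii"),
                           ("label", f"segmentation-{i}.nii")):
            src = os.path.join(src_dir, name)
            dst = os.path.join(dst_dir, subset, kind, name)
            if os.path.exists(src):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy(src, dst)
```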
- Finally, change the root paths of the volume data and segmentation labels in ./preprocess_LiTS.py, such as:
row_dataset_path = './raw_dataset/train/'  # path of original dataset
fixed_dataset_path = './fixed_data/'       # path of fixed (preprocessed) dataset
- Run python ./preprocess_LiTS.py
If nothing goes wrong, you will see the following files in the directory ./fixed_data:
├── train_path_list.txt
├── val_path_list.txt
├── ct
│   ├── volume-0.nii
│   ├── volume-1.nii
│   ├── volume-2.nii
│   └── ...
└── label
    ├── segmentation-0.nii
    ├── segmentation-1.nii
    ├── segmentation-2.nii
    └── ...
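The generated train_path_list.txt and val_path_list.txt pair each preprocessed volume with its label so the dataset classes can load them. A minimal sketch of parsing such a list; the exact file format is an assumption (one whitespace-separated "ct_path label_path" pair per line), so check the repository's dataset code for the real layout:

```python
def parse_path_list(text):
    """Parse a path-list file's contents into (ct_path, label_path) pairs.

    Assumes one whitespace-separated pair per line; blank lines are skipped.
    """
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        ct_path, label_path = line.split()
        pairs.append((ct_path, label_path))
    return pairs
```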
2) Training 3DUNet
- First, change some parameters in config.py; in particular, set --dataset_path to ./fixed_data. All parameters are commented in the file config.py.
- Second, run:
python train.py --save model_name
- Besides, you can observe the dice and loss during the training process in the browser through:
tensorboard --logdir ./output/model_name
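The dice value shown on TensorBoard measures the overlap between the predicted segmentation and the ground truth. A minimal pure-Python sketch of the metric for intuition; the repository's own implementation lives in utils/metrics.py and may differ in details:

```python
def dice_score(pred, target, smooth=1e-5):
    """Dice coefficient for two flat binary masks (sequences of 0/1).

    dice = 2 * |pred ∩ target| / (|pred| + |target|)
    The smooth term avoids division by zero on empty masks.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + smooth) / (total + smooth)
```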
3) Testing 3DUNet
- Run python test.py. Please pay attention to the path of the trained model in test.py.
(Since full-volume 3D convolution is too computationally expensive, I use a sliding window to split the input tensor into blocks before prediction, and then stitch the per-block results together to get the final result. The size of the sliding window can be set in config.py.)
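The sliding-window step above can be sketched as follows: compute overlapping window start offsets along each axis so the whole volume is covered, run the model on each patch, and average overlapping predictions when stitching. This index helper is an illustration of the idea, not the repository's code:

```python
def window_starts(length, window, stride):
    """Start offsets of sliding windows covering [0, length).

    The last window is shifted back so it ends exactly at `length`;
    this creates an overlap, which is why overlapping regions are
    typically averaged when the per-window predictions are stitched.
    """
    if length <= window:
        return [0]
    starts = list(range(0, length - window + 1, stride))
    if starts[-1] + window < length:
        starts.append(length - window)  # final window flush with the end
    return starts
```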
After the test, you can find the test results in the corresponding folder: ./experiments/model_name/result
You can also read my Chinese introduction to this 3DUNet project here. However, I no longer update the blog; I will try my best to keep the GitHub code updated.
If you have any suggestions or questions, welcome to open an issue to communicate with me.