# ULS4US: An End-to-End Universal Lesion Segmentation Framework for 2D Ultrasound Images
ULS4US is composed of (i) a multiple-in multiple-out (MIMO) UNet that integrates multiscale features from both the full image and a cropped partial image, (ii) a two-stage lesion-aware learning algorithm that recursively locates and segments lesions in a reinforced manner, and (iii) a lesion-adaptive loss function for the MIMO-UNet, consisting of two weighted loss components and one self-supervised loss component designed for the intra- and inter-branch network outputs, respectively.
The diagram and workflow of the ULS4US framework are shown below.
In summary, we break the task of ULS for US images into two stages: (Stage 1) detect the presence of a lesion and roughly delineate its outline in the original image (i.e., treat lesion segmentation as a small-scale object segmentation problem); then (Stage 2) crop the original image to contain only the lesion and focus on the cropped partial image to obtain an accurate lesion boundary (i.e., treat lesion segmentation as a normal- or even large-size object segmentation problem).
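The Stage 1 to Stage 2 hand-off can be sketched as follows. This is an illustrative helper, not the repository's code: given the coarse Stage-1 mask, it finds the lesion's bounding box and crops the original image around it (with a margin) so Stage 2 can segment at a larger relative scale. The function name and `margin` parameter are assumptions for illustration.

```python
import numpy as np

def crop_around_lesion(image, coarse_mask, margin=0.25):
    """Return the sub-image enclosing the predicted lesion plus a margin,
    together with the crop coordinates (y0, y1, x0, x1)."""
    ys, xs = np.nonzero(coarse_mask > 0.5)
    if ys.size == 0:
        # Stage 1 found no lesion; fall back to the full image
        return image, (0, image.shape[0], 0, image.shape[1])
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # enlarge the box by a fraction of its size, clipped to image bounds
    my = int((y1 - y0) * margin)
    mx = int((x1 - x0) * margin)
    y0, y1 = max(0, y0 - my), min(image.shape[0], y1 + my)
    x0, x1 = max(0, x0 - mx), min(image.shape[1], x1 + mx)
    return image[y0:y1, x0:x1], (y0, y1, x0, x1)
```

In practice the crop would be resized to the fixed input size expected by the Stage-2 branch before fine segmentation.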
We re-design the conventional UNet architecture to implement a new multiscale feature fusion network, MIMO-UNet, shown below.
The main modifications to UNet include:
- an additional input branch (IB), along with an additional output branch (OB), is added; the input image size of this additional IB is 1/4 of the original IB, i.e., the number of pixels in both the horizontal and vertical directions is 1/2 of the original image
- the two input images for the dual IBs are fed separately into two encoders, each of which extracts features from its input image and converts them into a 32x32 feature map with 512 dimensions
- besides the up-sampled feature maps, the decoder concatenates feature maps of the same size from both encoders via skip-connections
- a customized layer to compute the network loss is appended as the third output branch
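The layout above can be sketched with the Keras functional API. This is a minimal illustration, not the authors' implementation: layer widths, depths, and the skip-alignment scheme are assumptions, and the customized loss layer (third output branch) is omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions, the basic UNet building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_mimo_unet(full_size=512, filters=(64, 128, 256, 512)):
    half = full_size // 2  # additional IB: 1/2 pixels per axis (1/4 area)
    in_full = layers.Input((full_size, full_size, 1))
    in_crop = layers.Input((half, half, 1))

    # two independent encoders, one per input branch
    skips_full, skips_crop = [], []
    x, y = in_full, in_crop
    for f in filters:
        x = conv_block(x, f); skips_full.append(x); x = layers.MaxPool2D()(x)
        y = conv_block(y, f); skips_crop.append(y); y = layers.MaxPool2D()(y)
    x = conv_block(x, filters[-1])  # bottleneck of the full-image encoder

    # single decoder: concatenate same-sized skip maps from BOTH encoders
    for f, sf, sc in zip(filters[::-1], skips_full[::-1], skips_crop[::-1]):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        sc_up = layers.UpSampling2D()(sc)  # align crop-branch skip size
        x = layers.Concatenate()([x, sf, sc_up])
        x = conv_block(x, f)

    out_full = layers.Conv2D(1, 1, activation="sigmoid", name="out_full")(x)
    out_crop = layers.Conv2D(1, 1, activation="sigmoid", name="out_crop")(
        layers.AveragePooling2D()(x))
    return Model([in_full, in_crop], [out_full, out_crop])
```

With `full_size=512` and four pooling steps, the full-image encoder bottleneck is a 32x32 map with 512 channels, matching the description above.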
The performance of ULS4US is assessed on a unified dataset consisting of two public and three private US image datasets, which together involve over 2,300 images and three specific types of organs; comparative experiments on the individual and unified datasets suggest that ULS4US is likely scalable with more data. The trained network weights can be downloaded from [Baidu NetDisk with the access code 'ccj1'].
- fully tested with Ubuntu 18.04 LTS, Python 3.6.9, and Keras 2.4.0 with TensorFlow 2.4.1 as the backend, on a server equipped with Nvidia RTX 3090 GPUs

- clone the repo to a local directory

  ```shell
  git clone https://github.com/cakuba/ULS4US.git
  ```
- prepare the training and test dataset

  - go to the directory `data` and create a new sub-directory as you like, e.g., `breastUS`

  - enter the `breastUS` directory and arrange the structure for the training and test sets as

    ```
    breastUS
    ├── training
    │   ├── img
    │   │   ├── 000.png
    │   │   └── 001.png
    │   └── mask
    │       ├── 000.png
    │       └── 001.png
    └── test
        ├── img
        │   ├── 000.png
        │   └── 001.png
        └── mask
            ├── 000.png
            └── 001.png
    ```

  - go to the directory `conf` and add the `breastUS` data information into the file `dataset.conf` as

    ```
    [breastUS]
    training_data_dir = ./data/breastUS/training
    test_data_dir = ./data/breastUS/test
    data_name = breast
    ```

  - in the same directory, update the file `training.conf` by replacing the value of the key `dataset` with `breastUS` under the section `[ULS4US]` (NOTE: this corresponds to the section name in the file `dataset.conf`)
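A quick way to verify the two config files are consistent is to check that the `dataset` value in `training.conf` names a section that actually exists in `dataset.conf`. This helper is illustrative only (not part of the repo) and uses Python's standard `configparser`, which handles the INI-style format shown above:

```python
import configparser

def check_dataset_config(training_conf="conf/training.conf",
                         dataset_conf="conf/dataset.conf"):
    """Return (training_data_dir, test_data_dir) for the configured dataset,
    raising if training.conf points at a section missing from dataset.conf."""
    tc, dc = configparser.ConfigParser(), configparser.ConfigParser()
    tc.read(training_conf)
    dc.read(dataset_conf)
    name = tc["ULS4US"]["dataset"]          # e.g. "breastUS"
    if name not in dc:
        raise ValueError(f"section [{name}] missing from {dataset_conf}")
    return dc[name]["training_data_dir"], dc[name]["test_data_dir"]
```

Running it before training catches a typo in the section name early, instead of failing partway into data loading.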
- start training

  ```shell
  python ULS4US.py
  ```

  Most of the training hyperparameters can be defined in the file `training.conf` under the section `[ULS4US]`.
- performance evaluation of ULS4US

  ```shell
  python evaluate_performance_ULS4US.py
  ```

  You should observe some outputs as below.
- predictions of ULS4US

  ```shell
  python prediction_ULS4US.py
  ```

  You should find the predictions for the sample test data saved as PNG files in the sub-directory `pred`.
Congratulations, you have just used ULS4US on your own data! We have also prepared some sample test data in the directory `data/all_mixed` and provided the predictions of ULS4US for these test data in the directory `pred`. Please feel free to let us know if you have any questions.
ULS4US is proposed and maintained by researchers from WIT.
See the LICENSE file for the ULS4US license.