This repository accompanies the paper Joint Segmentation and Sub-Pixel Localization in Structured Light Laryngoscopy. This is a joint work of the Chair of Visual Computing of the Friedrich-Alexander University of Erlangen-Nuremberg and the Phoniatric Division of the University Hospital Erlangen.
(SSSL)² stands for Semantic Segmentation and Sub-Pixel Accurate Localization in Single-Shot Structured Light Laryngoscopy. Our approach estimates a semantic segmentation of the vocal folds and glottal gap while simultaneously predicting sub-pixel accurate laser point positions in an efficient manner. It is used on a per-frame basis (single-shot) in an active reconstruction setting (structured light) in laryngeal endoscopy (laryngoscopy).
The vocal fold segmentations were integrated into the original repository and can be found here on GitHub!
Make sure that you have a Python version >= 3.5 installed. A CUDA-capable GPU is recommended, but not necessary. Note, however, that without one inference times are most definitely higher, and training a network from scratch is not recommended.
We supply an environment.yml inside this folder.
You can use this to easily setup a conda environment using
conda env create -f environment.yml
The five trained U-Net models can be downloaded here.
If you want the other models as well, please contact me. Uploading every model would quickly exhaust the cloud storage I get from the university.
We supply a Viewer that you can use to visualize the predictions of the trained networks.
You can use it via inference.py, with the following controls:
- A: Show previous frame
- D: Show next frame
- W: Toggle predicted keypoints (green)
- S: Toggle ground-truth keypoints (blue)
- Scroll mouse wheel: Zoom in and out
- Click mouse wheel: Drag view
Evaluation can be done using evaluate.py.
We are currently in the process of heavily refactoring this code. The most recent version can be found in the refactor branch.
The Gaussian regression code can be found in models/LSQ.py.
| Method | Precision ⬆️ | F1-Score ⬆️ | IoU ⬆️ | DICE ⬆️ | Inf. Speed (ms) ⬇️ | FPS ⬆️ |
|---|---|---|---|---|---|---|
| Baseline | 0.64 | 0.69 | | | | |
| U-LSTM [8] | 0.70 ± 0.41 | 0.58 ± 0.32 | 0.52 ± 0.18 | 0.77 ± 0.08 | 65.57 ± 0.31 | 15 |
| U-Net [18] | 0.92 ± 0.08 | 0.88 ± 0.04 | 0.68 ± 0.08 | 0.88 ± 0.02 | 4.54 ± 0.03 | 220 |
| Sharan [21] | 0.17 ± 0.19 | 0.16 ± 0.17 | | | 5.97 ± 0.25 | 168 |
| 2.5D U-Net | 0.90 ± 0.08 | 0.81 ± 0.05 | 0.65 ± 0.06 | 0.87 ± 0.02 | 1.08 ± 0.01 | 926 |
Please cite this paper if this work helps you with your research:
@InProceedings{SegAndLocalize,
author="TBD",
title="TBD",
booktitle="TBD",
year="2023",
pages="?",
isbn="?"
}
A PDF of the paper will be included in the assets/ folder of this repository. However, you can also find it here (at a later point in time).