This repository is associated with the article by Alberto Sánchez-Bonaste, Luis F. S. Merchante, Carlos González-Bravo and Alberto Carnicero
Measuring the thickness of cortical bone tissue is crucial for diagnosing bone diseases and monitoring treatment progress. This measurement can be performed visually from CT images or with semi-automatic algorithms based on Hounsfield values. This article proposes a mechanism capable of measuring thickness over the entire bone surface, aligning and orienting all images in the same direction to reduce human intervention. The objective is to batch process large numbers of patients' CT images to obtain thickness profiles of their cortical tissue for various applications.
To achieve this, classical morphological and segmentation techniques are used to extract the area of interest, filter and interpolate the bones, detect their contours, and compute Signed Distance Functions to measure cortical thickness. The set of bones is aligned by detecting their longitudinal direction, and the orientation is determined by computing the principal component of the center-of-mass slice.
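As an illustration of the distance-function idea (not the repository's implementation), the following sketch builds a synthetic 2D cortical shell and estimates its wall thickness from the Euclidean distance transform: along the medial axis of the shell, the distance to the nearest background voxel is roughly half the local wall thickness.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Synthetic 2D slice: a circular cortical shell (annulus),
# outer radius 20 px, inner radius 14 px -> wall thickness ~6 px.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(xx - 32, yy - 32)
shell = (r >= 14) & (r <= 20)

# Distance from each shell pixel to the nearest background pixel.
# Along the medial axis of the wall this is ~half the local thickness.
dist = distance_transform_edt(shell)
thickness_estimate = 2.0 * dist.max()
print(thickness_estimate)  # close to the true 6 px wall thickness
```

The actual pipeline works on full 3D volumes and signed distances to the inner and outer contours, but the principle is the same.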
Measuring cortical thickness would enable accurate traumatological surgeries and the study of structural properties. Obtaining thickness profiles of a vast number of patients can open the door to various studies aimed at identifying patterns between bone thickness and patients' medical, social, or demographic variables.
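The alignment step described above can be sketched as follows; this is a hypothetical illustration on a synthetic point cloud, where the bone's longitudinal direction is taken as the first principal component of its voxel coordinates.

```python
import numpy as np

# Hypothetical sketch: a synthetic "bone" point cloud elongated along z.
rng = np.random.default_rng(1)
points = rng.normal(size=(500, 3)) * np.array([1.0, 1.0, 10.0])

# First principal component via SVD of the centered coordinates.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
longitudinal = vt[0]                  # direction of largest variance
print(np.abs(longitudinal).argmax())  # 2 -> dominated by the z axis
```

Once this direction is known, each bone can be rotated so that its longitudinal axis matches the reference (for instance, the Z axis).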
If you use this code for your research, please cite our work:
```bibtex
@article{sanchez2023systematic,
  title={Systematic measuring cortical thickness in tibiae for bio-mechanical analysis},
  author={S{\'a}nchez-Bonaste, Alberto and Merchante, Luis FS and G{\'o}nzalez-Bravo, Carlos and Carnicero, Alberto},
  journal={Computers in Biology and Medicine},
  doi={https://doi.org/10.1016/j.compbiomed.2023.107123},
  pages={107123},
  year={2023},
  publisher={Elsevier}
}
```
In the DATA folder, there are two sets of CT images used in the article to evaluate the performance of the code provided in this repository. The rest of the images cannot be shared due to confidentiality reasons. If any researcher is interested in replicating the article's results with the same dataset, they can proceed after signing the appropriate confidentiality agreements with the authors of the paper.
More information about the two sets of CT images can be found in this file.
All configuration is centralized in file.ini in the "config" folder. That file is organized into sections:
This section configures all the paths:
- data_path_dicom. Folder with the CT image sets
- output_path. Folder to store STL files
- resources_path. Temporary folder to assist Visual Logging
- spacing. Millimeters per slice and millimeters per pixel, used to resample all datasets at the same resolution. Default value: [0.5,0.25,0.25]
- threshold. Segmentation parameter. Set to 50 to extract the whole leg. To extract isolated bones, use higher values around 210, but this is less stable. It can be modified, but on a first run it is recommended to leave it at 50.
- extract. List of segmented element IDs to extract. If empty, the largest element is extracted; in our case, the tibiae.
- size. Main erosion and dilation kernel size. With spacing [0.5, 0.25, 0.25] it is set to 60. When the X and Y sampling rates differ, the kernel size is updated automatically. A value can be provided and will override the automatic kernel size, but on a first run it is recommended not to touch it.
- kernel_preerosion. Smoothing and hole-filling operator. It can be modified, but on a first run leave it at [1,1]
- kernel_firstdilation. Smoothing and hole-filling operator. It can be modified, but on a first run leave it at [7,7]
- kernel_firsterosion. Smoothing and hole-filling operator. It can be modified, but on a first run leave it at [6,6]
- threshold_between_min. Lower bound of the HU range to keep. It can be modified, but on a first run leave it at 250 for cortical detection
- threshold_between_max. Upper bound of the HU range to keep. It rarely needs to change from its default of 2000
- convert_stl. Boolean flag indicating whether STL files should be generated.
- num_views_thickness. If 1D thickness perimeter profiles are desired, this variable sets the number of those profiles. The main profile is captured at the center of mass, and the rest are extracted equidistantly from it.
- reference_bone. If the CT bones are to be aligned and oriented against a reference bone, this variable sets the PATH to its STL file
- orientation_vector. This parameter is filled in after running the script referenceBone.py. Only two coordinates (X and Y) are required. The rest of the bones are oriented against this reference. The user does not need to provide a value
- alignment_vector. This parameter is filled in after running the script referenceBone.py. Three spatial coordinates (X, Y, Z) are required. The rest of the bones are aligned against this reference; for instance, [0,0,1] aligns against the Z axis. The user does not need to provide a value
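The pre- and post-processing parameters above can be illustrated with a short sketch. This is not the repository's code: the native scanner spacing and the dummy volume are assumptions, and only the parameter values mirror the configuration described above.

```python
import numpy as np
from scipy.ndimage import zoom

# Values mirroring file.ini (spacing, threshold, HU band); the rest is assumed.
target_spacing = np.array([0.5, 0.25, 0.25])   # spacing (z, y, x), mm
threshold = 50                                  # leg segmentation threshold
hu_min, hu_max = 250, 2000                      # threshold_between_min/max

native_spacing = np.array([1.0, 0.5, 0.5])      # assumed scanner spacing
volume = np.full((8, 16, 16), -1000.0)          # dummy air-filled CT volume
volume[2:6, 4:12, 4:12] = 700.0                 # dummy "bone" block in HU

# Resample to the target spacing, then threshold and keep the HU band.
resampled = zoom(volume, native_spacing / target_spacing, order=1)
leg_mask = resampled > threshold                # coarse segmentation
cortical_mask = (resampled >= hu_min) & (resampled <= hu_max)
print(resampled.shape)          # (16, 32, 32)
print(cortical_mask.sum() > 0)  # True: the bone block falls in the band
```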
- legX. The X in this parameter is the ID assigned by the application (it can be retrieved from the log file) and identifies the bone to be re-executed with corrections. Its value is a tuple of two binary flags, each meaning no correction (0) or correction required (1). The first element changes the leg side (when the reference bone is from the left side and the bone to be computed is from the right side). The second fixes the PCA main variance direction (see the paper for details). For example, if bone 4 needs the side correction but not the PCA correction, the parameter looks like this:
leg4 = 1,0
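Since file.ini follows the standard INI format, a [retake] entry like the one above can be read with Python's configparser; this is a hypothetical illustration of how such a tuple could be parsed, not the repository's code.

```python
import configparser

# Hypothetical parsing of a [retake] entry "leg4 = 1,0":
# first flag = leg-side correction, second = PCA-direction correction.
cfg = configparser.ConfigParser()
cfg.read_string("[retake]\nleg4 = 1,0\n")
side_fix, pca_fix = (int(v) for v in cfg["retake"]["leg4"].split(","))
print(side_fix, pca_fix)  # 1 0
```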
A description of the implemented methods can be found in the README file in the "docs" folder.
A user guide can be found in the USERGuide file in the "docs" folder.
To replicate the results provided in the article, follow these steps:
- Clone the repository
- Use this configuration file, updating the paths according to where the repository was cloned:
```ini
[dicom]
data_path_dicom = C:/corticalMeasurement/data/
output_path = C:/corticalMeasurement/output/
resources_path = C:/corticalMeasurement/resources/

[pre-process]
spacing = [0.5,0.25,0.25]
threshold = 50
extract = []
size = 60
kernel_preerosion = [1,1]
kernel_firstdilation = [7,7]
kernel_firsterosion = [6,6]

[post-process]
threshold_between_min = 250
threshold_between_max = 2000
convert_stl = True

[thickness]
num_views_thickness = 9

[all dicom]
reference_bone = C:/corticalMeasurement/data/leg1.stl

[reference vectors]
orientation_vector =
alignment_vector =

[retake]
```
- Run the PYTHON code:
Make sure the folder structure matches the paths configured in file.ini. To run the PYTHON code without errors, first install the libraries listed in requirements.txt.
Run the first script --> generateSTLs.py
Before continuing, set in file.ini the path to the STL of the reference_bone.
Example: reference_bone = D:/corticalMeasurement/output/leg1.stl
Run the second script --> referenceBone.py
Run the third script --> correctionsThickness.py
After execution, check the log file correctionsThickness.html and, if needed, add the necessary values to the [retake] section of file.ini.
Example: leg2 = 1,0
Run the fourth script --> correctionsThickness.py (a second run of the same script, now applying the retake values)
- Review the LOG file
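Before launching the pipeline, it can be useful to verify that file.ini parses correctly. The following is a hypothetical sanity check (not part of the repository) that embeds a fragment of the example configuration from the steps above.

```python
import ast
import configparser

# Parse a fragment of the example file.ini shown in the steps above.
cfg = configparser.ConfigParser()
cfg.read_string("""
[dicom]
data_path_dicom = C:/corticalMeasurement/data/

[pre-process]
spacing = [0.5,0.25,0.25]
threshold = 50
""")

# List-valued options such as spacing can be decoded with ast.literal_eval.
spacing = ast.literal_eval(cfg["pre-process"]["spacing"])
threshold = cfg.getint("pre-process", "threshold")
print(spacing, threshold)  # [0.5, 0.25, 0.25] 50
```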
See requirements.txt for tested library versions
```shell
pip install visual-logging
pip install vg
```