I use the validation set for evaluation, but I get errors on certain cases. Could evaluation.py have a problem handling NIfTI files that contain no fractures?
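For reference, a minimal guard one could add before scoring a case would check whether the label volume contains any fracture voxels at all (the function name `has_fractures` is my own, not from evaluation.py):

```python
import numpy as np


def has_fractures(label_volume):
    """True if the label mask contains any fracture voxels (nonzero)."""
    return bool(np.any(np.asarray(label_volume) > 0))


empty = np.zeros((4, 4, 4), dtype=np.int16)
print(has_fractures(empty))           # → False
print(has_fractures(np.array([0, 2])))  # → True
```

An all-zero case could then be skipped or scored as having no ground-truth lesions instead of crashing.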
First, many thanks for publishing such an excellent dataset.
However, it seems that ‘environ.py’ in the ‘ribfrac’ package is empty.
I wonder whether you adjusted the label annotations when you reorganized the file structure.
I mean, the original label files appear to be marked with the serial number of each fracture. How will they be evaluated? Do the labels need to be converted into specific category annotations according to the CSV file?
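If a conversion is indeed required, I imagine it would look roughly like the sketch below, which replaces each per-fracture instance ID with its category code from the CSV. The mapping dictionary stands in for whatever columns the info CSV actually provides (e.g. an ID column and a code column); those column names are an assumption on my part:

```python
import numpy as np


def relabel_to_categories(label_volume, id_to_code):
    """Replace each fracture instance ID with its category code.

    id_to_code: dict mapping instance ID -> category code, e.g. built
    from the dataset's info CSV (column names assumed, not confirmed).
    """
    out = np.zeros_like(label_volume)
    for inst_id, code in id_to_code.items():
        out[label_volume == inst_id] = code
    return out


# toy volume with instance IDs 1 and 2
vol = np.array([[0, 1], [2, 2]])
mapping = {1: 3, 2: 1}  # instance 1 is category 3, instance 2 is category 1
print(relabel_to_categories(vol, mapping).tolist())  # → [[0, 3], [1, 1]]
```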
I have a question regarding the fracture detection task. In the training set, some fractures are labeled -1. Should these fractures be included in the detection task, or should they be ignored?
While verifying the performance of our method, we encountered some problems with the annotations in the training and validation sets.
There are two main issues:
Some fracture areas in the validation set appear to be unannotated. As shown in the figure, there are two abnormal cortical bone regions, but the annotated area does not include them.
Some fractures span multiple ribs but have only one label, e.g. RibFrac114 in the training set and RibFrac445 in the validation set. This may be for annotation convenience, but annotations with gaps in the middle may cause problems during evaluation (see line 246 in evaluation.py for details). When the centroid (x, y, z) falls in one of these gaps, is it possible for the original label to be classified as 'Background'?
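A toy 1-D example illustrates the concern (pure NumPy, no claims about evaluation.py internals): when one annotation ID covers two disconnected components, its centroid can land on a background voxel, so a centroid lookup reads label 0.

```python
import numpy as np

# one annotation ID (1) covering two separate ribs with a gap in between
mask = np.array([1, 1, 0, 0, 0, 1, 1])

coords = np.flatnonzero(mask == 1)
centroid = int(round(coords.mean()))  # mean of [0, 1, 5, 6] → index 3, in the gap

print(centroid, mask[centroid])  # → 3 0
```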