miracle-center / yolo_universal_anatomical_landmark_detection
[MICCAI 2021] You Only Learn Once: Universal Anatomical Landmark Detection https://arxiv.org/abs/2103.04657
License: MIT License
After testing, I got results including some .png and .npy files.
I want to know how to fuse them to show the points on the chest X-ray images.
I used the following code, but it didn't work:
import numpy as np
import cv2
import matplotlib.pyplot as plt
image = np.load(r'D:\code-zhangjian\code-space\src\chest\CHNCXR_0032_0_gt.npy')
src = plt.imread(r'D:\code-zhangjian\code-space\src\CHNCXR_0032_0.png')
for i in range(0, image.shape[0]):
    plt.imshow(image[i, :, :], cmap='gray')
    plt.imshow(src, cmap='gray')
    cv2.imwrite(str(i) + ".png", image[i, :, :])
plt.show()
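If the `.npy` files are per-landmark heatmaps (one channel per landmark, which is what heatmap-regression models like GU2Net typically output; an assumption here, check your array's shape), the point locations can be recovered with an argmax per channel and drawn on top of the X-ray. A minimal sketch with synthetic data standing in for the real files:

```python
import numpy as np
import matplotlib.pyplot as plt

def heatmaps_to_points(heatmaps, out_shape=None):
    """Take the argmax of each (H, W) channel as a landmark (x, y).
    If out_shape=(out_h, out_w) is given, rescale coordinates to the
    display image, in case heatmaps were predicted at lower resolution."""
    h, w = heatmaps.shape[1:]
    points = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        if out_shape is not None:
            y = y * out_shape[0] / h
            x = x * out_shape[1] / w
        points.append((x, y))
    return points

# Replace these with np.load(...) and plt.imread(...) of your own files.
heatmaps = np.zeros((2, 64, 64), dtype=np.float32)
heatmaps[0, 10, 20] = heatmaps[1, 40, 50] = 1.0
src = np.zeros((256, 256), dtype=np.float32)

xs, ys = zip(*heatmaps_to_points(heatmaps, out_shape=src.shape))
plt.imshow(src, cmap='gray')              # draw the X-ray first
plt.scatter(xs, ys, c='red', s=20)        # then the landmarks on top
plt.savefig('overlay.png')
```

Note the draw order: in the original snippet the X-ray is drawn over the heatmap, so only the X-ray is visible; the background image must be drawn first.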
Hello,
I have encountered a `cannot pickle 'WeakMethod' object` error with torch.save and opened a question on StackOverflow:
https://stackoverflow.com/questions/76738857/why-do-i-get-a-weakmethod-object-error-when-using-torch-save
Is there maybe a solution for this error?
Best regards,
Jan
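The `WeakMethod` error typically appears when `torch.save` is asked to pickle a whole live object graph (e.g. a model together with an optimizer or scheduler that holds weak references to bound methods). Saving only `state_dict`s sidesteps pickling those objects entirely; a minimal sketch, not the repo's actual checkpointing code:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)

# Save plain dicts of tensors/numbers, not the live objects that may
# hold unpicklable weak references.
torch.save({
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
}, 'checkpoint.pt')

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load('checkpoint.pt')['model'])
```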
Hello, first of all, thank you for sharing the code. After cloning, preparing the datasets, and downloading the checkpoint, I tried to run it, but every time I get this error: ValueError: num_samples should be a positive integer value, but got num_samples=0
The error seems to come from line 55 of the file model/runner.py: l = DataLoader(d, **loader_opts)
I hope you can help me. Thank you :)
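That `num_samples=0` comes from the `RandomSampler` that `DataLoader` builds when `shuffle=True` and the dataset reports length 0, which usually means the image glob matched no files (a wrong or missing data directory). A minimal reproduction with a stand-in dataset, not the repo's classes:

```python
from torch.utils.data import DataLoader, Dataset

class EmptyDataset(Dataset):
    # Simulates a dataset whose file search matched nothing,
    # e.g. because the data path in config.yaml is wrong.
    def __len__(self):
        return 0
    def __getitem__(self, idx):
        raise IndexError(idx)

try:
    DataLoader(EmptyDataset(), batch_size=4, shuffle=True)
except ValueError as err:
    print(err)  # num_samples should be a positive integer value, but got num_samples=0
```

So the first thing to check is that the dataset directories actually contain the expected files and that `len(dataset)` is nonzero before the loader is built.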
Hello, and thank you for your code. When testing with the best.pt file I hit a recursion error; how should I resolve it? The exact error is below:
$ python3 main.py -d ../runs -r GU2Net -p test -C config.yaml -m gln -l u2net -n cephalometric -c best.pt
Traceback (most recent call last):
  File "/home/jgzn/PycharmProjects/YOLO/YOLO_Universal_Anatomical_Landmark_Detection-main/universal_landmark_detection/model/utils/yamlConfig.py", line 105, in update_config
    update_config(val, args)
  File "/home/jgzn/PycharmProjects/YOLO/YOLO_Universal_Anatomical_Landmark_Detection-main/universal_landmark_detection/model/utils/yamlConfig.py", line 105, in update_config
    update_config(val, args)
  File "/home/jgzn/PycharmProjects/YOLO/YOLO_Universal_Anatomical_Landmark_Detection-main/universal_landmark_detection/model/utils/yamlConfig.py", line 105, in update_config
    update_config(val, args)
  [Previous line repeated 997 more times]
RecursionError: maximum recursion depth exceeded
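A `RecursionError` at the same line repeated ~1000 times usually means the recursive config walker hit a cycle: a nested dict that (directly or indirectly) contains a reference back to an ancestor. A hypothetical cycle-safe sketch (function and argument names assumed, not the repo's actual `yamlConfig.update_config`):

```python
def update_config(cfg, args, _seen=None):
    """Recursively overwrite cfg values from args, but track visited
    dict ids so a self-referential config terminates instead of
    recursing until RecursionError."""
    if _seen is None:
        _seen = set()
    if id(cfg) in _seen:
        return cfg
    _seen.add(id(cfg))
    for key, val in cfg.items():
        if isinstance(val, dict):
            update_config(val, args, _seen)
        elif key in args and args[key] is not None:
            cfg[key] = args[key]
    return cfg

cyclic = {'a': 1, 'nested': {}}
cyclic['nested']['self'] = cyclic   # deliberately cyclic
update_config(cyclic, {'a': 2})     # terminates; no RecursionError
```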
Hello,
I am trying to use the pre-trained model (best.pt) you provided to make predictions on new images that were not part of the dataset used during training.
Besides how to run inference on unseen images, I would like to know whether there are any size requirements or other considerations for the input. Should I resize or preprocess my images to a particular size before feeding them into the model, and if so, what are the recommended dimensions or other relevant guidelines?
I would greatly appreciate any guidance, documentation, code snippets, or instructions on the steps needed to run the pre-trained model (best.pt) on images outside the dataset.
Thank you very much for your assistance!
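As a starting point, a hedged sketch of the kind of preprocessing such a model usually needs: grayscale conversion, resizing to the training input size, and normalization. The target size and the normalization scheme here are placeholders; they must match whatever the repo's config.yaml and dataset classes actually used during training.

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(512, 512)):
    """Load a grayscale X-ray, resize to the network's input size,
    and normalize. `size` is a placeholder: match the height/width
    used in the training config."""
    img = Image.open(path).convert('L')            # force single channel
    orig_w, orig_h = img.size
    img = img.resize(size, Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    arr = (arr - arr.mean()) / (arr.std() + 1e-8)  # per-image normalization (assumption)
    # Return an NCHW array ready for torch.from_numpy(...), plus the
    # original size so predicted landmarks can be mapped back.
    return arr[None, None], (orig_w, orig_h)
```

Predicted landmark coordinates then need to be scaled back by (orig_w / size[1], orig_h / size[0]) to land on the original image.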
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexes/base.py", line 3361, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 2131, in pandas._libs.hashtable.Int64HashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 2140, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 4064

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main.py", line 32, in <module>
    Runner(args).run()
  File "/content/drive/MyDrive/YOLO_Universal_Anatomical_Landmark_Detection/universal_landmark_detection/model/runner.py", line 152, in run
    self.train()
  File "/content/drive/MyDrive/YOLO_Universal_Anatomical_Landmark_Detection/universal_landmark_detection/model/runner.py", line 257, in train
    self.update_params(epoch, pbar)
  File "/content/drive/MyDrive/YOLO_Universal_Anatomical_Landmark_Detection/universal_landmark_detection/model/runner.py", line 294, in update_params
    for i, data_dic in enumerate(loader):
  File "/content/drive/MyDrive/YOLO_Universal_Anatomical_Landmark_Detection/universal_landmark_detection/model/utils/mixIter.py", line 52, in __next__
    return next(self.cur_iter_list[idx]), idx  # todo
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 570, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/drive/MyDrive/YOLO_Universal_Anatomical_Landmark_Detection/universal_landmark_detection/model/datasets/hand.py", line 49, in __getitem__
    points = self.readLandmark(name, origin_size)
  File "/content/drive/MyDrive/YOLO_Universal_Anatomical_Landmark_Detection/universal_landmark_detection/model/datasets/hand.py", line 65, in readLandmark
    li = list(self.labels.loc[int(name), :])
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py", line 925, in __getitem__
    return self._getitem_tuple(key)
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py", line 1100, in _getitem_tuple
    return self._getitem_lowerdim(tup)
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py", line 838, in _getitem_lowerdim
    section = self._getitem_axis(key, axis=i)
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py", line 1164, in _getitem_axis
    return self._get_label(key, axis=axis)
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py", line 1113, in _get_label
    return self.obj.xs(label, axis=axis)
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/generic.py", line 3776, in xs
    loc = index.get_loc(key)
  File "/usr/local/lib/python3.7/dist-packages/pandas/core/indexes/base.py", line 3363, in get_loc
    raise KeyError(key) from err
KeyError: 4064
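The `KeyError: 4064` is raised in `hand.py`'s `readLandmark` when an image ID has no matching row in the label table; filtering the image list against the CSV index up front avoids the crash. A hypothetical sketch (the helper name and CSV layout are assumptions, not the repo's code):

```python
from pathlib import Path
import pandas as pd

def images_with_labels(image_dir, labels):
    """Keep only images whose numeric filename stem appears in the
    label DataFrame's index, so the loader never requests a missing row."""
    return sorted(p for p in Path(image_dir).glob('*.jpg')
                  if int(p.stem) in labels.index)

# Real usage (paths assumed):
# labels = pd.read_csv('all.csv', index_col=0)
# usable = images_with_labels('hand/jpg', labels)
```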
Hello Heqin, great work!
I want to download the hand landmark annotation data, but the download link from the original submission is no longer valid. Could you share the all.csv file? Thanks!
The shared head dataset link on the figshare website can no longer be downloaded; could you update the link?
Hello, after downloading the hand data I found that much of it does not match all.csv: the download contains 1028 jpg files, but only 600 of them have annotations in the csv, and I cannot find annotations for the rest. Do you have the processed data? Thanks!
The `chest_labels.zip` is not empty; its link is [YOLO_Universal_Anatomical_Landmark_Detection/data/chest_labels.zip](https://github.com/MIRACLE-Center/YOLO_Universal_Anatomical_Landmark_Detection/tree/main/data/chest_labels.zip). The GitHub web page won't display it since it's a zip file. You can `git clone` or download this repo and fetch it through the above path.
Originally posted by @heqin-zhu in #16 (comment)
Hello,
Thanks for sharing the code. After downloading the required datasets, the code performs well. However, when I try to modify the optimizer, for example replacing Adam with SGD, the training process fails to converge even with a smaller learning rate.
Could you give me some suggestions, or hints on what I should take care of? In any case, thanks a lot!
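Not the repo's code, but a generic sketch of the knobs that often matter when swapping Adam for SGD: momentum, a much smaller learning rate, and gradient clipping, demonstrated on a toy regression where they suffice for convergence:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(1, 1)   # stand-in for the real network
x = torch.randn(64, 1)
y = 3 * x + 1                   # toy regression target

# Momentum (optionally Nesterov) plus gradient clipping often recovers
# convergence where plain SGD diverges and Adam "just works".
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, nesterov=True)
losses = []
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # cap exploding steps
    opt.step()
    losses.append(loss.item())
```

A learning-rate warmup over the first few epochs is another common ingredient; heatmap-regression losses tend to produce large early gradients that SGD handles less gracefully than Adam.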
What is the difference between gln and gln2? It seems that gln2 is the model described in the paper. Can you explain the pros and cons of each?