atr-net's Issues

probable bug in src/utils/file_utils.py

I have been trying to run your code for some time, but I was getting an error while loading the dataset. I changed line 34 of src/utils/file_utils.py as follows:

34: orig_img_names = set(os.listdir(ORIG_IMAGES_PATH[dataset]))

Previously it was ORIG_ANNOS_PATHS, and the annotations variable was ending up null.

With this change, training now runs. You might want to look into this issue.
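
For context, a minimal sketch of the change described above. Only the corrected line 34 comes from this report; the constants and their values are hypothetical stand-ins for the project's real config:

    import os

    # Hypothetical stand-ins for the project's path constants.
    ORIG_IMAGES_PATH = {'VG200': 'VG/images/'}
    ORIG_ANNOS_PATHS = {'VG200': 'VG/annotations/'}

    dataset = 'VG200'
    # Before (buggy): listing the annotations directory left orig_img_names
    # out of sync with the images on disk, so the annotations came up null.
    # orig_img_names = set(os.listdir(ORIG_ANNOS_PATHS[dataset]))
    # After (this report's fix): list the original-images directory instead.
    orig_img_names = set(os.listdir(ORIG_IMAGES_PATH[dataset]))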

Getting an error while trying to set up the codebase

Hello, I have installed and set up the codebase on my system. However, I have run into an issue that I am having difficulty resolving. I believe it has something to do with a version mismatch between libraries, but I couldn't pinpoint the problem. Here is the error message:

File "/content/drive/My Drive/DirResearch/atr-net/faster_rcnn/model/faster_rcnn/faster_rcnn.py", line 10, in <module>
    from model.rpn.rpn import _RPN
  File "/content/drive/My Drive/DirResearch/atr-net/faster_rcnn/model/rpn/rpn.py", line 8, in <module>
    from .proposal_layer import _ProposalLayer
  File "/content/drive/My Drive/DirResearch/atr-net/faster_rcnn/model/rpn/proposal_layer.py", line 20, in <module>
    from model.nms.nms_wrapper import nms
  File "/content/drive/My Drive/DirResearch/atr-net/faster_rcnn/model/nms/nms_wrapper.py", line 10, in <module>
    from model.nms.nms_gpu import nms_gpu
  File "/content/drive/My Drive/DirResearch/atr-net/faster_rcnn/model/nms/nms_gpu.py", line 4, in <module>
    from ._ext import nms
  File "/content/drive/My Drive/DirResearch/atr-net/faster_rcnn/model/nms/_ext/nms/__init__.py", line 2, in <module>
    from torch.utils.ffi import _wrap_function
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/ffi/__init__.py", line 1, in <module>
    raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.

Please help me resolve this error.

Thank you
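
For what it's worth, torch.utils.ffi was removed in PyTorch 1.0, so the legacy FFI-built NMS extension that faster_rcnn imports cannot load on any 1.x install. A minimal sketch for checking the active version (treating PyTorch 0.4.x as the required range is an assumption based on this error, not something the repo states):

    import torch

    # torch.utils.ffi exists only in PyTorch < 1.0; faster_rcnn's NMS wrapper
    # imports it, so any 1.x install raises the ImportError quoted above.
    print(torch.__version__)
    assert torch.__version__.startswith('0.4'), (
        'FFI-based extensions need PyTorch 0.4.x (assumption); '
        'otherwise rebuild the NMS/ROI layers as cpp extensions.')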

About RuntimeError: CUDA error: out of memory

Dear scholar,
I would like to ask how much CUDA memory is needed to run this command:

python3 main.py --dataset=VG200 --task=sggen --model=atr_net

I want to evaluate the model. My GPU has 24 GB of memory, but running the command above throws "RuntimeError: CUDA error: out of memory":

/home/byd/atr-net/faster_rcnn/model/utils/config.py:375: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  yaml_cfg = edict(yaml.load(f))
load checkpoint faster_rcnn/faster_rcnn_1_10_14657.pth
Traceback (most recent call last):
  File "main.py", line 8, in <module>
    from src.models import (
  File "/home/byd/atr-net/src/models/atr_net.py", line 21, in <module>
    from src.utils.train_test_utils import SGGTrainTester
  File "/home/byd/atr-net/src/utils/train_test_utils.py", line 13, in <module>
    from src.utils.data_loading_utils import (
  File "/home/byd/atr-net/src/utils/data_loading_utils.py", line 14, in <module>
    FEATURE_EXTRACTOR = BaseFeatureExtractor()
  File "/home/byd/atr-net/src/tools/feature_extractors.py", line 80, in __init__
    checkpoint = _load()
  File "/home/byd/atr-net/src/tools/feature_extractors.py", line 67, in _load
    checkpoint = torch.load(load_name)
  File "/home/byd/.conda/envs/vilbert-mt/lib/python3.6/site-packages/torch/serialization.py", line 593, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/byd/.conda/envs/vilbert-mt/lib/python3.6/site-packages/torch/serialization.py", line 772, in _legacy_load
    result = unpickler.load()
  File "/home/byd/.conda/envs/vilbert-mt/lib/python3.6/site-packages/torch/serialization.py", line 728, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/byd/.conda/envs/vilbert-mt/lib/python3.6/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/byd/.conda/envs/vilbert-mt/lib/python3.6/site-packages/torch/serialization.py", line 155, in _cuda_deserialize
    return storage_type(obj.size())
  File "/home/byd/.conda/envs/vilbert-mt/lib/python3.6/site-packages/torch/cuda/__init__.py", line 484, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
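
One likely culprit here: torch.load is deserializing the Faster-RCNN checkpoint directly onto the GPU it was saved from, before the model itself is even built. A minimal sketch of the usual workaround (the checkpoint name comes from the log above; the surrounding code is assumed):

    import torch

    # Deserialize onto CPU first, so torch.load does not allocate CUDA storage
    # for every tensor during unpickling; move the model to the GPU afterwards.
    checkpoint = torch.load('faster_rcnn/faster_rcnn_1_10_14657.pth',
                            map_location=torch.device('cpu'))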

Training for Image -> Scene graph

I would like to use your model to process a set of raw images into scene graphs. It is unclear to me which tasks I have to pipeline in order to process a raw image all the way to a scene graph.

What are the steps to pretrain on VG200 and then run scene graph generation on a new set of images?

Unable to read the dataset

I have tried checking the config and data_loading_utils files, but I am still unable to fix this error.
From the error, it is not reading the dataset: the __len__ method in data_loading_utils is returning zero.
@nickgkan Could you please help to fix this error? Thanks!

/home/kgl/atr-net/faster_rcnn/model/utils/config.py:374: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  yaml_cfg = edict(yaml.load(f))
load checkpoint faster_rcnn/faster_rcnn_1_10_14657.pth
2020-01-14 11:58:45,650 - DEBUG - Tackling predcls for 1 classes
{'attention': 'multi_head', 'use_language': True, 'use_spatial': True}
load checkpoint faster_rcnn/faster_rcnn_1_10_14657.pth
2020-01-14 11:58:46,947 - INFO - Performing training for atr_net_predcls_VG80K
2020-01-14 11:58:47,904 - DEBUG - Set up dataset of 0 files
2020-01-14 11:58:47,904 - DEBUG - Set up dataset of 0 files
2020-01-14 11:58:47,904 - DEBUG - Set up dataset of 0 files
Traceback (most recent call last):
  File "main.py", line 113, in <module>
    main()
  File "main.py", line 110, in main
    model.train_test(cfg)
  File "/home/kgl/atr-net/src/models/atr_net.py", line 487, in train_test
    epochs=30 if config.use_early_stopping else 7)
  File "/home/kgl/atr-net/src/utils/train_test_utils.py", line 81, in train
    self._set_data_loaders()
  File "/home/kgl/atr-net/src/utils/train_test_utils.py", line 358, in _set_data_loaders
    for split in mode_ids
  File "/home/kgl/atr-net/src/utils/train_test_utils.py", line 358, in <dictcomp>
    for split in mode_ids
  File "/home/kgl/atr-net/src/utils/data_loading_utils.py", line 320, in __init__
    collate_fn=collate_fn)
  File "/home/kgl/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 802, in __init__
    sampler = RandomSampler(dataset)
  File "/home/kgl/.local/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 64, in __init__
    "value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integeral value, but got num_samples=0
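
A quick pre-flight check along these lines may help narrow it down. This is a hypothetical sketch: the json_annos/ layout and the VG80K_predcls.json name are inferred from the log above and the path pattern in the next issue's traceback:

    import json
    import os

    # The data loaders report "dataset of 0 files" when the annotation JSON
    # is missing or empty, so verify it before launching training.
    anno_file = os.path.join('json_annos', 'VG80K_predcls.json')
    assert os.path.isfile(anno_file), f'annotation file not found: {anno_file}'
    with open(anno_file) as fid:
        annos = json.load(fid)
    assert annos, f'annotation file is empty: {anno_file}'
    print(f'{len(annos)} annotated entries loaded from {anno_file}')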

FileNotFoundError: [Errno 2] No such file or directory: 'json_annos/VG200_sggen.json'

First, thanks for your code. I want to train a model using the following command:

python3 main.py --dataset=VG200 --task=sggen --model=atr_net

But I got this error:

load checkpoint faster_rcnn/faster_rcnn_1_10_14657.pth
Tackling sggen for 51 classes
{'attention': 'multi_head', 'use_language': True, 'use_spatial': True}
2021-07-08 21:47:58,190 - DEBUG - Tackling sggen for 51 classes
load checkpoint faster_rcnn/faster_rcnn_1_10_14657.pth
2021-07-08 21:48:00,201 - INFO - Performing training for atr_net_predcls_VG200
Performing training for atr_net_predcls_VG200
Traceback (most recent call last):
  File "/home/nimo/code_sgg/atr-net/main.py", line 113, in <module>
    main()
  File "/home/nimo/code_sgg/atr-net/main.py", line 110, in main
    model.train_test(cfg)
  File "/mnt/e/wsl_code/atr-net/src/models/atr_net.py", line 488, in train_test
    epochs=30 if config.use_early_stopping else 7)
  File "/mnt/e/wsl_code/atr-net/src/utils/train_test_utils.py", line 82, in train
    self._set_data_loaders()
  File "/mnt/e/wsl_code/atr-net/src/utils/train_test_utils.py", line 334, in _set_data_loaders
    self._filter_duplicate_rels, self.filter_multiple_preds)[0])
  File "/mnt/e/wsl_code/atr-net/src/utils/file_utils.py", line 31, in load_annotations
    with open(PATHS['json_path'] + dataset + '_' + _mode + '.json') as fid:
FileNotFoundError: [Errno 2] No such file or directory: 'json_annos/VG200_sggen.json'

Process finished with exit code 1

I checked the json_annos dir, but I didn't find any *_sggen.json. I also checked the code of prepare_data.py and the *_transformer_class.py files; however, there is no mention of a *_sggen.json anywhere.

Can you help me?

Thank you very much!
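
In case it helps to see what the data-preparation step actually produced, here is a minimal listing sketch; the json_annos/ directory and the VG200_*.json naming follow the path pattern in the traceback above, and nothing else is assumed:

    import glob

    # List every annotation JSON generated for VG200. If no VG200_sggen.json
    # shows up, the sggen task simply has no prepared annotation file to load.
    for path in sorted(glob.glob('json_annos/VG200_*.json')):
        print(path)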
