
Comments (9)

tintranrynan commented on June 1, 2024

Thank you for helping me, I really appreciate it.
I ran your code, but I have three problems:

  1. I use an NVIDIA 2080 Ti (11 GB) for inference, but the program raises a CUDA out-of-memory error. Can I limit the memory usage? I don't need inference to be fast.
  2. I have two NVIDIA 2080 Ti (11 GB) cards. Can the program run inference across multiple GPUs?
  3. What do I need to edit to run inference on the CPU? (See the device-selection sketch after this list.)

Please help me, thank you very much.
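For question 3, here is a minimal sketch (my own illustration, not code from this repo) of how device selection works with the standard mmdet 2.x API; running with device='cpu' also sidesteps CUDA out-of-memory at the cost of speed. As far as I know, the demo scripts run each image on a single GPU, and inference_detector does not split one forward pass across two GPUs out of the box. The file paths below are just the ones used later in this thread.

```python
# Minimal sketch (not from this repo): picking the inference device with the
# standard mmdet 2.x API. device='cpu' avoids CUDA OOM but is much slower;
# 'cuda:0' / 'cuda:1' select a single GPU.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py'
checkpoint_file = 'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth'

model = init_detector(config_file, checkpoint_file, device='cpu')
result = inference_detector(model, 'demo.jpg')
```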


czczup commented on June 1, 2024

I have just downloaded the model htc++_beit_adapter_large_fpn_3x_coco.pth and its config from this GitHub repo, but I cannot load the model with this code:

from mmdet.apis import init_detector, inference_detector

config_file = 'configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py'
checkpoint_file = 'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')

img = 'demo.jpg'
result = inference_detector(model, img)

Please help me.

Hello, I have just updated the image demo and video demo; you can use them by following the instructions below.

Prepare trained models

Before running inference with a trained model, you should first download the pre-trained backbone, for example BEiT-L. Alternatively, you can edit the config file and set pretrained=None so that you don't have to download the pre-trained backbone (a short sketch of this follows).
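For reference, here is a sketch of overriding that setting at load time instead of editing the file on disk. It assumes the config keeps the backbone weights path under a pretrained key reachable as cfg.model.pretrained; in this repo the exact key may differ (for example, it could sit under model.backbone instead).

```python
# Sketch: set pretrained=None at load time instead of editing the config file.
# Assumes the weights path lives at cfg.model.pretrained; in this repo it may
# instead be at cfg.model.backbone.pretrained.
from mmcv import Config
from mmdet.apis import init_detector

cfg = Config.fromfile('configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py')
cfg.model.pretrained = None  # skip downloading the BEiT-L backbone weights

# init_detector accepts a Config object as well as a config file path.
model = init_detector(cfg, 'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth',
                      device='cuda:0')
```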

After that, you should download the trained checkpoint, for example, ViT-Adapter-L-HTC++. Here, I put this file in a folder named checkpoint/.

Image Demo

You can run image_demo.py like this:

CUDA_VISIBLE_DEVICES=0 python image_demo.py data/coco/val2017/000000226984.jpg configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar

The result will be saved in demo/ (result image: 000000226984).
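If you prefer the Python API over the demo script, a rough equivalent using standard mmdet 2.x calls looks like this (my sketch, not the actual contents of image_demo.py):

```python
# Rough Python equivalent of the image demo above (a sketch, not image_demo.py).
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py'
checkpoint_file = 'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar'
img = 'data/coco/val2017/000000226984.jpg'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, img)

# Draw the detections and save the visualization; score_thr drops
# low-confidence boxes.
model.show_result(img, result, score_thr=0.3, out_file='demo/000000226984.jpg')
```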

Video Demo

You can run video_demo.py like this:

CUDA_VISIBLE_DEVICES=0 python video_demo.py ./demo.mp4 configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar  --out demo/demo.mp4

Here we use the demo.mp4 provided by mmdetection as an example.

The result will be saved in demo/: link
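Conceptually, the video demo boils down to per-frame inference; a generic outline with mmcv might look like the following (this is my sketch, not the actual video_demo.py code):

```python
# Generic per-frame inference outline (not the actual video_demo.py code).
import mmcv
from mmdet.apis import init_detector, inference_detector

model = init_detector('configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py',
                      'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar',
                      device='cuda:0')

video = mmcv.VideoReader('./demo.mp4')
for i, frame in enumerate(video):
    result = inference_detector(model, frame)
    # Overlay detections and save each frame; the frames can later be
    # re-assembled into a video, e.g. with mmcv.frames2video.
    model.show_result(frame, result, score_thr=0.3,
                      out_file=f'demo/frames/{i:06d}.jpg')
```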


IamShubhamGupto commented on June 1, 2024

UPDATE:
The notebook runs as expected.
(screenshot of the notebook output, 2023-03-19)

Let me know if I can help in any other way


IamShubhamGupto commented on June 1, 2024

Hello! I have run this detection notebook, but I got this error when downloading the pretrained model:

CalledProcessError: Command '
cd /content/ViT-Adapter/detection
mkdir pretrained
cd pretrained
wget https://conversationhub.blob.core.windows.net/beit-share-public/beit/beit_large_patch16_224_pt22k_ft22k.pth
' returned non-zero exit status 8.

It seems that I cannot reach this link. Could you help solve this, please?

Maybe the authors can help you with this; the link was working at the time the notebook was created. Perhaps the weights were moved or the link needs to be refreshed.


IamShubhamGupto commented on June 1, 2024

Is it possible to have a Colaboratory notebook for this as well, similar to this one?


IamShubhamGupto commented on June 1, 2024

Hey, I just made one, similar to the previous notebook.

TODO

  • change the dataset downloaded from ADE20K to COCO. If someone could help me identify the correct link to download the images from, that would be great (a possible download sketch follows this list).
  • general testing and documentation
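On the first TODO item: to the best of my knowledge, the COCO 2017 validation images and annotations are served from the official cocodataset.org URLs used below (please double-check them before relying on this).

```python
# Sketch for the first TODO item: fetch COCO val2017 images and annotations
# from the official cocodataset.org URLs (verify the links first).
import urllib.request

urls = [
    'http://images.cocodataset.org/zips/val2017.zip',
    'http://images.cocodataset.org/annotations/annotations_trainval2017.zip',
]
for url in urls:
    filename = url.rsplit('/', 1)[-1]
    print(f'downloading {url} -> {filename}')
    urllib.request.urlretrieve(url, filename)
```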

Notebook


jiangzeyu0120 commented on June 1, 2024

Hello! I have run this detection notebook, but I got this error when downloading the pretrained model:
CalledProcessError: Command 'cd /content/ViT-Adapter/detection
mkdir pretrained
cd pretrained
wget https://conversationhub.blob.core.windows.net/beit-share-public/beit/beit_large_patch16_224_pt22k_ft22k.pth
' returned non-zero exit status 8.

It seems that I cannot reach this link. Could you help solve this, please?


jiangzeyu0120 commented on June 1, 2024

Hi, I tried to download the pre-trained backbone you mentioned here, BEiT-L, but it seems the link is invalid now. Could you please provide a new link? Thanks a lot!


yuecao0119 commented on June 1, 2024

Hi, I tried to download the pre-trained backbone you mentioned here, BEiT-L, but it seems the link is invalid now. Could you please provide a new link? Thanks a lot!

You can search for the download link in https://github.com/microsoft/unilm/tree/master/beit. Note, however, that the link provided there cannot be fetched directly with wget; open the link in a browser to download the file (see also the sketch below).
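In case it helps, here is an untested sketch of downloading with a browser-like User-Agent via Python, which sometimes works when a plain wget request is refused; the URL is only a placeholder for whatever link you find in the unilm repo.

```python
# Untested workaround sketch: download with a browser-like User-Agent when a
# plain wget request is refused. Replace the placeholder URL with the actual
# link from the microsoft/unilm BEiT page.
import requests

url = 'https://example.com/beit_large_patch16_224_pt22k_ft22k.pth'  # placeholder
headers = {'User-Agent': 'Mozilla/5.0'}

with requests.get(url, headers=headers, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open('pretrained/beit_large_patch16_224_pt22k_ft22k.pth', 'wb') as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```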

