Comments (9)
Thank you for helping me.
I ran your code, but I have three problems:
- I use an NVIDIA 2080Ti 11GB for inference, but the program raises CUDA out of memory. Can I limit memory usage? I don't need inference to be fast.
- I have two NVIDIA 2080Ti 11GB graphics cards; can the program run inference on multiple GPUs?
- What do I need to edit to run inference on the CPU?
Please help me, thank you very much.
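For the CPU question, a minimal sketch of a device fallback, assuming mmdetection's standard `init_detector`/`inference_detector` API (the `pick_device` helper below is hypothetical, not part of mmdet):

```python
def pick_device(prefer_gpu: bool = True) -> str:
    """Return a device string for init_detector, falling back to CPU
    when CUDA (or torch itself) is unavailable."""
    try:
        import torch
        if prefer_gpu and torch.cuda.is_available():
            return 'cuda:0'
    except ImportError:
        pass
    return 'cpu'

# Sketch of CPU inference (requires mmdet; paths are examples):
# from mmdet.apis import init_detector, inference_detector
# model = init_detector(config_file, checkpoint_file,
#                       device=pick_device(prefer_gpu=False))
# result = inference_detector(model, 'demo.jpg')
```

CPU inference will be much slower, but it sidesteps the 11GB memory limit entirely.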
from vit-adapter.
I have just downloaded the model htc++_beit_adapter_large_fpn_3x_coco.pth and its config from this GitHub repo, but I cannot load the model with this code:
from mmdet.apis import init_detector, inference_detector
config_file = 'configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py'
checkpoint_file = 'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')
img = 'demo.jpg'
result = inference_detector(model, img)
Please help me.
Hello, I just updated the image demo and video demo; you can use them according to the following instructions.
Prepare trained models
Before running inference with a trained model, you should first download the pre-trained backbone, for example, BEiT-L. Alternatively, you can edit the config file and set pretrained=None so that you don't have to download the pre-trained backbone.
After that, download the trained checkpoint, for example, ViT-Adapter-L-HTC++. Here, I put this file in a folder named checkpoint/.
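For reference, the edit is just changing the `pretrained` field in the model config. A sketch (the field name follows mmdetection config conventions; the actual file will contain many more entries):

```python
# In configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py (sketch):
model = dict(
    pretrained=None,  # skip downloading the BEiT-L backbone weights
    # ... the rest of the model config stays unchanged ...
)
```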
Image Demo
You can run image_demo.py like this:
CUDA_VISIBLE_DEVICES=0 python image_demo.py data/coco/val2017/000000226984.jpg configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar
The result will be saved in demo/.
Video Demo
You can run video_demo.py like this:
CUDA_VISIBLE_DEVICES=0 python video_demo.py ./demo.mp4 configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar --out demo/demo.mp4
Here we take the demo.mp4 provided by mmdetection as an example.
The result will be saved in demo/: link
UPDATE: The notebook runs as expected. Let me know if I can help in any other way.
Hello! I have run this detection notebook, but I got this error when downloading the pretrained model:
CalledProcessError: Command 'cd /content/ViT-Adapter/detection
mkdir pretrained
cd pretrained
wget https://conversationhub.blob.core.windows.net/beit-share-public/beit/beit_large_patch16_224_pt22k_ft22k.pth
' returned non-zero exit status 8.
It seems that I cannot reach this link. Could you help solve this, please?
Maybe the authors can help you with this; the link was working at the time of notebook creation. The weights may have been moved, or the link may need to be refreshed.
Is it possible to have a Colaboratory notebook for this as well, similar to this?
Hey, I just made one similar to the previous notebook.
TODO
- Change the dataset downloaded from ADE20K to COCO. If someone could help me identify the correct link to download the images from, that would be great.
- General testing and documentation
Hi, I tried to download the pre-trained backbone you mentioned here (BEiT-L), but it seems the link is invalid now. Could you please provide a new one? Thanks a lot!
You can try searching for the download link at https://github.com/microsoft/unilm/tree/master/beit.
Note, however, that the link provided there cannot be fetched with wget; open it in a browser to download the file.
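If you prefer to script the download anyway, one workaround sketch is to send a browser-like User-Agent header; whether the server accepts this is an assumption, and the URL below is the one from the error above:

```python
import shutil
import urllib.request

url = ('https://conversationhub.blob.core.windows.net/'
       'beit-share-public/beit/beit_large_patch16_224_pt22k_ft22k.pth')
# Some servers reject clients without a browser-like User-Agent,
# which can make plain wget fail (assumption, not verified).
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})

# To actually download (commented out, since the link may be dead):
# with urllib.request.urlopen(req) as resp, open('beit_large.pth', 'wb') as f:
#     shutil.copyfileobj(resp, f)
```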