Comments (6)
Hi, you can set `with_cp=True` (in the backbone and head) to save GPU memory. For example, with a 512x512 input image, BEiT-Adapter-L-Mask2Former requires about 15 GB of memory with 1 image per card (see my logs in issue #38). I have provided some configs for ViT-T and ViT-S; you can try these configs and set `with_cp=True`. Note that the GPU memory shown in the released logs was inaccurate due to some bugs in segmentation (getting accurate memory usage requires setting `--deterministic`).
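As a sketch, enabling checkpointing in an mmseg-style config might look like the fragment below. The field names follow the ViT-Adapter configs, but the surrounding structure is abbreviated and the values are placeholders, not the released settings:

```python
# Hypothetical fragment of an mmseg-style ViT-Adapter config.
# with_cp=True enables gradient checkpointing: activations are recomputed
# in the backward pass instead of stored, trading compute for GPU memory.
model = dict(
    backbone=dict(
        type='ViTAdapter',
        with_cp=True,   # checkpoint the backbone blocks
        # ... other backbone fields omitted ...
    ),
    decode_head=dict(
        type='Mask2FormerHead',
        with_cp=True,   # the head may accept with_cp as well
        # ... other head fields omitted ...
    ),
)
```

Under the hood this corresponds to wrapping each block's forward with `torch.utils.checkpoint.checkpoint`, so training becomes somewhat slower in exchange for the memory savings.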
Also, you can modify `cffn_ratio`, `deform_num_heads`, and `interaction_indexes` to reduce GPU memory. `cffn_ratio` is set to 0.25 now; you can try 0.125. You can also try halving `deform_num_heads` and reducing the number of `interaction_indexes` groups. Of course, these modifications will hurt performance to some extent.
For a 24-layer ViT model, we set N=4, that is to say, `interaction_indexes=[[0, 5], [6, 11], [12, 17], [18, 23]]`. If you want to set N=2, you should use `interaction_indexes=[[0, 11], [12, 23]]`.
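The pattern behind these lists is simply splitting the transformer blocks into N contiguous, inclusive [start, end] spans. A small helper (my own sketch, not part of the repo) makes the rule explicit:

```python
def make_interaction_indexes(num_layers: int, n_groups: int):
    """Split num_layers transformer blocks into n_groups contiguous
    [start, end] (inclusive) spans, matching the interaction_indexes
    convention used in the ViT-Adapter configs."""
    assert num_layers % n_groups == 0, "layers must divide evenly into groups"
    step = num_layers // n_groups
    return [[i * step, (i + 1) * step - 1] for i in range(n_groups)]

print(make_interaction_indexes(24, 4))  # [[0, 5], [6, 11], [12, 17], [18, 23]]
print(make_interaction_indexes(24, 2))  # [[0, 11], [12, 23]]
```

For a 12-layer ViT-T/ViT-S backbone the same rule gives `[[0, 2], [3, 5], [6, 8], [9, 11]]` for N=4.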
from vit-adapter.
@czczup Hi, I would like to save GPU memory, so I changed the crop size from 512 to 256 when training on the VOC2007 dataset. This causes another issue: an AssertionError at the evaluation iteration (160000/20000). Testing on the VOC dataset raises the same error. When I change the crop size back from 256 to 512, testing succeeds, but the out-of-memory issue returns. Could the crop-size change be the cause of the AssertionError? I look forward to your answer, thanks!
Hi, in addition to the crop size, you also need to set `img_size` in the backbone to the same value as the crop size.
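In config terms, that means keeping the backbone's `img_size` in sync with the data pipeline's crop size. A hypothetical excerpt (field names follow mmseg conventions; values are placeholders for the 256 case discussed above):

```python
# Hypothetical config excerpt: the backbone's img_size must match the
# crop size used by the data pipeline, otherwise the positional-embedding
# and feature-map shapes disagree and can trigger assertion errors.
crop_size = (256, 256)

model = dict(
    backbone=dict(
        type='ViTAdapter',
        img_size=256,  # keep equal to crop_size
    ),
)

# The crop step of the training pipeline would then use the same value:
crop_step = dict(type='RandomCrop', crop_size=crop_size)
```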
@czczup Thanks for your reply. I already changed the crop size in the config file and the dataset settings, but I cannot find `img_size` in the backbone section of vit_adapter.py. Do I need to change `img_size` in segmentation/mmseg_custom/models/backbones/base/vit.py from 224 to 256, or could you explain in more detail? Thanks for your generous help!
Could you please provide the config you used?
Regarding your comments on changing layers and interaction indexes, I'm trying to modify these parameters in upernet_deit_adapter_tiny_512_160k_ade20k.py.
Would the red highlight be the number of blocks? (I know N=4 was used in most of the models, but I'm experimenting with changing this.) When I make these changes, I get the following error:
What other changes might I need to make in order to get this to run?
Related Issues (20)
- After Freezing the Vit backbone how many iterations are supposed to be trained?
- How can I use other pretrained ViTs or CNNs?
- number of iteration in the case of a batchsize equals to 1
- May I ask how to export onnx? There has been an error HOT 1
- ViT Adapter Not Working With Patch Size Different From 16 HOT 1
- Relative log amplitudes of Fourier transformed feature maps.
- Recommended config for the case of binary segmentation
- KeyError: 'ViTAdapter is not in the models registry' HOT 2
- Poor performance on custom dataset.
- Dimension mismatching for beit_dapter with changing img_size and crop_size from (512, 512) to (384, 384)
- LayerScale and pre-trained models HOT 5
- Can ViT-Adapter be exported as an ONNX model? What are the respective versions of PyTorch and the ONNX subset for this?
- Request the configuration file of objects365 pre-training
- Problems reproducing results after migrating from mmseg 0.x to mmseg 1.x
- Question about the slightly different BEiT implementations for segmentation and detection HOT 2
- Missing pretrained weights HOT 1
- Inference in batches and multiple GPU HOT 1
- Gradient accumulation implemention HOT 2
- ModuleNotFoundError: No module named 'MultiScaleDeformableAttention' HOT 5
- Question about the drop path rate HOT 1