
sam-adapter-pytorch's Introduction

SAM-Adapter: Adapting SAM in Underperformed Scenes (!!Now supports SAM2 in the "SAM2-Adapter" branch!!)

Tianrun Chen, Lanyun Zhu, Chaotao Ding, Runlong Cao, Yan Wang, Shangzhan Zhang, Zejian Li, Lingyun Sun, Papa Mao, Ying Zang

KOKONI, Moxin Technology (Huzhou) Co., Ltd., Zhejiang University, Singapore University of Technology and Design, Huzhou University, Beihang University.

In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3367-3375).

Update on 8 Aug 2024: We added support for adapting SAM2 (Segment Anything 2), a more powerful backbone! Please refer to our new technical report, and see the code in the "SAM2-Adapter" branch!

Update on 24 July 2024: The link to the pre-trained model has been updated.

Update on 30 August 2023: This paper will be presented at ICCV 2023.

Update on 28 April 2023: We tested the performance of polyp segmentation to show that our approach also works on medical datasets.

Update on 22 April 2023: We report our SOTA result based on the ViT-H version of SAM (use demo.yaml). We have also uploaded the yaml configs for the ViT-L and ViT-B versions of SAM, suitable for GPUs with smaller memory (e.g. NVIDIA Tesla V100), although they may compromise accuracy.

Environment

This code was implemented with Python 3.8 and PyTorch 1.13.0. You can install all the requirements via:

pip install -r requirements.txt

Quick Start

  1. Download the dataset and put it in ./load.
  2. Download the pre-trained SAM (Segment Anything) weights and put them in ./pretrained.
  3. Training:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 loadddptrain.py --config configs/demo.yaml

Please note that the SAM model consumes a lot of memory. We use 4 x A100 GPUs for training. If you encounter memory issues, please try graphics cards with larger memory!

  4. Evaluation:
python test.py --config [CONFIG_PATH] --model [MODEL_PATH]

Train

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 train.py --config [CONFIG_PATH]

Update on 30 July: As mentioned by @YunyaGaoTree in issue #39, you can also try the command below for (probably) faster training.

torchrun train.py --config configs/demo.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 loadddptrain.py --config configs/demo.yaml
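
Note that torchrun without extra flags launches a single process; a hedged multi-GPU equivalent of the distributed.launch command above (assuming 4 GPUs and the demo config) would be:

torchrun --nnodes 1 --nproc_per_node 4 train.py --config configs/demo.yaml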

Test

python test.py --config [CONFIG_PATH] --model [MODEL_PATH]
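
For example, testing the demo config with a trained checkpoint might look like the following (the checkpoint path is hypothetical; substitute wherever your training run saved model_epoch_last.pth):

python test.py --config configs/demo.yaml --model ./save/model_epoch_last.pth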

Pre-trained Models

https://drive.google.com/file/d/13JilJT7dhxwMIgcdtnvdzr08vcbREFlR/view?usp=sharing

Dataset

Camouflaged Object Detection

Shadow Detection

Polyp Segmentation - Medical Applications

Citation

If you find our work useful in your research, please consider citing:


@inproceedings{chen2023sam,
  title={Sam-adapter: Adapting segment anything in underperformed scenes},
  author={Chen, Tianrun and Zhu, Lanyun and Deng, Chaotao and Cao, Runlong and Wang, Yan and Zhang, Shangzhan and Li, Zejian and Sun, Lingyun and Zang, Ying and Mao, Papa},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={3367--3375},
  year={2023}
}

@misc{chen2024sam2adapterevaluatingadapting,
      title={SAM2-Adapter: Evaluating & Adapting Segment Anything 2 in Downstream Tasks: Camouflage, Shadow, Medical Image Segmentation, and More}, 
      author={Tianrun Chen and Ankang Lu and Lanyun Zhu and Chaotao Ding and Chunan Yu and Deyi Ji and Zejian Li and Lingyun Sun and Papa Mao and Ying Zang},
      year={2024},
      eprint={2408.04579},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.04579}, 
}


@misc{chen2023samfails,
      title={SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, and More}, 
      author={Tianrun Chen and Lanyun Zhu and Chaotao Ding and Runlong Cao and Shangzhan Zhang and Yan Wang and Zejian Li and Lingyun Sun and Papa Mao and Ying Zang},
      year={2023},
      eprint={2304.09148},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}


Acknowledgements

Part of the code is derived from Explicit Visual Prompt by Weihuang Liu, Xi Shen, Chi-Man Pun, and Xiaodong Cun of the University of Macau and Tencent AI Lab.

sam-adapter-pytorch's People

Contributors

tianrun-chen


sam-adapter-pytorch's Issues

!torchrun train.py --config configs/demo.yaml may perform faster

Hello authors,

Thanks so much for sharing this code.
The code is very useful for fine-tuning SAM on downstream tasks :)

I reduced the dataset size, adapted the code, and ran it in Google Colab with one A100 GPU.
I found that using torchrun is faster and more convenient than the original command, as shown below.
https://pytorch.org/docs/stable/elastic/run.html

!torchrun train.py --config configs/demo.yaml

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 loadddptrain.py --config configs/demo.yaml
Besides, the loadddptrain.py referenced in the README seems to be wrong (the file is not in the repo).

Thanks so much again.
Kind regards.

A question about the training procedure

Hello, thanks for open-sourcing this! I trained directly on the CAMO data and obtained the metrics below, which are quite far from those in the paper. What could be the reason?

metric1: 0.4876
metric2: 0.6093
metric3: 0.2453
metric4: 0.2108

My training script is as follows.

python3 -m torch.distributed.launch \
--master_port=12000 --nnodes 1 --nproc_per_node 4 \
train.py \
--config configs/demo.yaml

Mismatching errors

There are mismatch errors when loading the pretrained weights sam_vit_l_0b3195.pth, so the hyper-parameters in demo.yaml seem to be wrong.

I found one error: embed_dim should be 1024 instead of 1280. After correcting it, I still got a few mismatch errors, as follows:

RuntimeError: Error(s) in loading state_dict for SAM:
size mismatch for image_encoder.blocks.5.attn.rel_pos_h: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([27, 64]).
size mismatch for image_encoder.blocks.5.attn.rel_pos_w: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([27, 64]).
size mismatch for image_encoder.blocks.7.attn.rel_pos_h: copying a param with shape torch.Size([27, 64]) from checkpoint, the shape in current model is torch.Size([127, 64]).
size mismatch for image_encoder.blocks.7.attn.rel_pos_w: copying a param with shape torch.Size([27, 64]) from checkpoint, the shape in current model is torch.Size([127, 64]).
size mismatch for image_encoder.blocks.11.attn.rel_pos_h: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([27, 64]).
size mismatch for image_encoder.blocks.11.attn.rel_pos_w: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([27, 64]).
size mismatch for image_encoder.blocks.15.attn.rel_pos_h: copying a param with shape torch.Size([27, 64]) from checkpoint, the shape in current model is torch.Size([127, 64]).
size mismatch for image_encoder.blocks.15.attn.rel_pos_w: copying a param with shape torch.Size([27, 64]) from checkpoint, the shape in current model is torch.Size([127, 64]).
size mismatch for image_encoder.blocks.17.attn.rel_pos_h: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([27, 64]).
size mismatch for image_encoder.blocks.17.attn.rel_pos_w: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([27, 64]).

So, could you provide a correct yaml file? Thanks a lot!
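
As a diagnostic, a minimal sketch (assuming the official SAM checkpoint layout) for checking which backbone a checkpoint actually is before editing the yaml:

import torch

# Load only the state dict on CPU and inspect the positional-embedding width.
ckpt = torch.load("pretrained/sam_vit_l_0b3195.pth", map_location="cpu")
print(ckpt["image_encoder.pos_embed"].shape)
# The last dimension must match embed_dim in the yaml:
# ViT-B -> 768, ViT-L -> 1024, ViT-H -> 1280.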

F input for adapter

Can you show me the code where the F input is created for input to the adapter?

Freeze encoder

Hi, thanks for your wonderful work! The paper says that SAM-Adapter freezes the image encoder, but I see that although the freezing code is provided, it is never run. Is it necessary to freeze the image encoder? Thanks!
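
For reference, a minimal sketch of enforcing the freeze yourself (assuming the unwrapped model exposes the encoder as model.image_encoder, as in models/sam.py; the "adapter"/"prompt" name filters are assumptions about how the tuned layers are named):

# Freeze every image-encoder parameter that does not belong to the tuned layers.
for name, param in model.image_encoder.named_parameters():
    if "adapter" not in name and "prompt" not in name:  # assumed naming
        param.requires_grad = False

# Build the optimizer only over parameters that still require gradients.
trainable_params = [p for p in model.parameters() if p.requires_grad]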

FFT for High frequency component

@tianrun-chen The paper mentions that you use the HFC (high-frequency component) of the image as part of the input embedding, but I can't find the corresponding code in your GitHub repo. I was able to find it in the Explicit-Visual-Prompt repo but not in yours, and it looks like you have made some changes to their code to implement SAM. Did I miss something, or have you not implemented the FFT?
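
For reference, a minimal sketch of extracting a high-frequency component with an FFT, in the spirit of the EVP code the issue refers to (the mask ratio and exact masking scheme are assumptions, not the repository's implementation):

import torch

def high_frequency_component(img: torch.Tensor, mask_ratio: float = 0.25) -> torch.Tensor:
    # img: (B, C, H, W). Zero out a centered low-frequency square and invert.
    freq = torch.fft.fftshift(torch.fft.fft2(img, norm="ortho"), dim=(-2, -1))
    _, _, h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * mask_ratio / 2), int(w * mask_ratio / 2)
    freq[..., cy - ry:cy + ry, cx - rx:cx + rx] = 0  # remove low frequencies
    hfc = torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1)), norm="ortho").real
    return hfc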

How to install and use it?

This code was implemented with Python 3.8 and PyTorch 1.13.0. You can install all the requirements via:

pip install -r requirements.txt

I can't find the requirements.txt file, and I see mmseg inside the model folder. Do I need to install mmseg?

python environment!

ERROR: Invalid requirement: 'torch~=1.13.0+cu116' (from line 10 of requirements.txt)
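
The "+cu116" local version tag is not valid inside a "~=" specifier. A common workaround (assuming you want the CUDA 11.6 build) is to delete that line from requirements.txt and install torch from the PyTorch wheel index instead:

pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 --extra-index-url https://download.pytorch.org/whl/cu116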

How to run on one GPU?

Hi, thanks for your excellent work. I am new to this area and am wondering if I can train this model with only one GPU. If possible, how should I change the settings? Thanks.
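
For what it's worth, single-GPU training should amount to launching one process; hedged examples based on the commands in this README:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 1 train.py --config configs/demo.yaml
torchrun --nproc_per_node 1 train.py --config configs/demo.yaml

If memory is tight, the ViT-B or ViT-L configs mentioned above are the intended fallback.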

Questions about the MLPi tune

(screenshots of the paper and the code omitted)

About the MLP tune: it is a shared MLP. In the paper you say it has 32 layers, but in the code it seems to have only 1 layer.

    self.shared_mlp = nn.Linear(self.embed_dim//self.scale_factor, self.embed_dim)
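
For contrast, a tiny sketch of what stacking more than one shared layer would look like (an illustration of the question, not the repository's code; the example dimensions are assumptions):

import torch.nn as nn

embed_dim, scale_factor = 1024, 32  # example values only
# Hypothetical multi-layer shared MLP; the code above uses a single nn.Linear.
shared_mlp = nn.Sequential(
    nn.Linear(embed_dim // scale_factor, embed_dim // scale_factor),
    nn.GELU(),
    nn.Linear(embed_dim // scale_factor, embed_dim),
)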

Training does not succeed

size mismatch for image_encoder.blocks.23.mlp.lin1.weight: copying a param with shape torch.Size([4096, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1280]).
size mismatch for image_encoder.blocks.23.mlp.lin1.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for image_encoder.blocks.23.mlp.lin2.weight: copying a param with shape torch.Size([1024, 4096]) from checkpoint, the shape in current model is torch.Size([1280, 5120]).
size mismatch for image_encoder.blocks.23.mlp.lin2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.neck.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 1280, 1, 1]).
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3207998) of binary: /home/syy/anaconda3/envs/SAM_Adapter/bin/python
Traceback (most recent call last):
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/site-packages/torch/distributed/launch.py", line 195, in
main()
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/site-packages/torch/distributed/launch.py", line 191, in main
launch(args)
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/site-packages/torch/distributed/launch.py", line 176, in launch
run(args)
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/syy/anaconda3/envs/SAM_Adapter/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

============================================================
train.py FAILED

Failures:
[1]:
time : 2023-04-24_19:02:47
host : vip
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 3208003)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2023-04-24_19:02:47
host : vip
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 3208005)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2023-04-24_19:02:47
host : vip
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 3208011)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2023-04-24_19:02:47
host : vip
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 3207998)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

(SAM_Adapter) syy@vip:~/code/data_auto/SAM-Adapter-PyTorch$ python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 train.py --config configs/demo.yaml

1. The environment versions were configured as required. The loadddptrain.py mentioned in the README does not exist, so I used train.py.
2. The downloaded data is CAMO. Are there any other data-processing requirements?
The training data I used is only the camouflaged object detection data below, about 1500 images:
CAMO-COCO-V.1.0-CVIU2019\Camouflage\Images GT

model.load_state_dict(sam_checkpoint, strict=False)

RuntimeError: Error(s) in loading state_dict for SAM:
size mismatch for image_encoder.pos_embed: copying a param with shape torch.Size([1, 64, 64, 1024]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 1280]).

Why do I have this problem when loading the weights? Maybe the code has some problems?

Where's ddp train?

You provide a training script called 'loadddptrain', but where is the file?

GPU memory is not released after training.

I set the batch size to 1, and GPU usage during training is around 28192 MiB. However, when training finishes and evaluation starts, the memory usage doubles. Is there any way to fix this?
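
A hedged workaround: release cached blocks once training ends and run evaluation without autograd, so activations are not kept for backward (names such as images are placeholders):

import gc
import torch

gc.collect()
torch.cuda.empty_cache()  # return cached blocks before evaluation starts

model.eval()
with torch.no_grad():          # no autograd graph during evaluation
    pred = model.infer(images.cuda())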

Resuming training from a checkpoint

Thanks for your excellent work! I noticed that the optimizer parameters are not saved when the model is saved. How do you set up resuming training from a checkpoint?
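
One common pattern (an assumption about how one might extend the saving code, not what the repository currently does) is to save the optimizer and scheduler alongside the model so training can resume:

import torch

# Save a resumable checkpoint (path is hypothetical).
torch.save({
    'epoch': epoch,
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'lr_scheduler': lr_scheduler.state_dict(),
}, 'save/checkpoint_last.pth')

# Resume.
ckpt = torch.load('save/checkpoint_last.pth', map_location='cpu')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])
lr_scheduler.load_state_dict(ckpt['lr_scheduler'])
epoch_start = ckpt['epoch'] + 1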

The code for adapter

Hi, thanks for your excellent work! I read your code and found that mask_decoder is the same as in the original SAM model. Could you please tell me the filename of the adapter code? Thanks a lot!

visualize the predicted mask

Hello, is there any code or script to visualize the predicted masks and save the mask images? Thanks a lot!
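
A minimal sketch for saving one predicted mask as an image (assuming model.infer returns logits, as in the snippet further down this page; images is a placeholder batch tensor):

import torch
from torchvision import transforms

with torch.no_grad():
    logits = model.infer(images)            # (B, 1, H, W) mask logits
probs = torch.sigmoid(logits)               # map logits to [0, 1]
mask = (probs[0, 0] > 0.5).float().cpu()    # binarize the first mask
transforms.ToPILImage()(mask).save('mask.png')  # save as a grayscale image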

instance segmentation

Can this model handle instance segmentation? For example, one image with multiple masks.

Suggestion - Integrate MobileSAM into the pipeline for lightweight and faster inference

Reference: https://github.com/ChaoningZhang/MobileSAM

Our project performs on par with the original SAM and keeps exactly the same pipeline except for a change to the image encoder, so it is easy to integrate into any project.

MobileSAM is around 60 times smaller and around 50 times faster than the original SAM, and it is around 7 times smaller and around 5 times faster than the concurrent FastSAM. The comparison of the whole pipeline is summarized as follows:

(comparison tables omitted)

Best Wishes,

Qiao

train_setr_evp_cod.yaml

Hello, first of all, thank you very much for your work. I would like to ask about train_setr_evp_cod.yaml: where is this file? I can't find it.


OutOfMemoryError

What hardware did you train on? Even with 4 x 24 GB graphics cards I get torch.cuda.OutOfMemoryError.

Why are most generated mask images blurry?

Hi, after fine-tuning on the Kvasir dataset, many of the masks generated on Kvasir images are very blurry.

After fine-tuning on our custom dataset, the correct mask was also not generated.


Parameter mismatch

Thank you for your work.

When I tried to use the script you provided to run inference on images, I encountered the following error. Can you provide a correct parameter configuration file?

error info:
size mismatch for image_encoder.blocks.22.mlp.lin1.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for image_encoder.blocks.22.mlp.lin2.weight: copying a param with shape torch.Size([1024, 4096]) from checkpoint, the shape in current model is torch.Size([1280, 5120]).
size mismatch for image_encoder.blocks.22.mlp.lin2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.blocks.23.norm1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.blocks.23.norm1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.blocks.23.attn.rel_pos_h: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([127, 80]).
size mismatch for image_encoder.blocks.23.attn.rel_pos_w: copying a param with shape torch.Size([127, 64]) from checkpoint, the shape in current model is torch.Size([127, 80]).
size mismatch for image_encoder.blocks.23.attn.qkv.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3840, 1280]).
size mismatch for image_encoder.blocks.23.attn.qkv.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([3840]).
size mismatch for image_encoder.blocks.23.attn.proj.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for image_encoder.blocks.23.attn.proj.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.blocks.23.norm2.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.blocks.23.norm2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.blocks.23.mlp.lin1.weight: copying a param with shape torch.Size([4096, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1280]).
size mismatch for image_encoder.blocks.23.mlp.lin1.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for image_encoder.blocks.23.mlp.lin2.weight: copying a param with shape torch.Size([1024, 4096]) from checkpoint, the shape in current model is torch.Size([1280, 5120]).
size mismatch for image_encoder.blocks.23.mlp.lin2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for image_encoder.neck.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 1280, 1, 1]).

kvasir dataset results in the paper

Hello, as mentioned in the paper, this dataset follows the evaluation protocol of MediaEval 2020: Automatic Polyp Segmentation. However, their test set doesn't contain ground truth, and the dataset we can download from their website contains only a training set and a val set. Are the results reported in the paper generated on the val set?

About the segmentation results of the original SAM

Hello,

Thanks for sharing this work. May I ask how the segmentation results of the compared SAM baseline were obtained? In another issue you seem to mention using a bbox covering the whole image as the prompt; besides this bbox, did you also use the everything mode or a point prompt?

Looking forward to your reply.

How to use after training

Hello, how do I use the model after training? My code is as follows; the mask obtained after running is the same no matter what the input picture is.
code:

import torch
import yaml
from PIL import Image
from torchvision import transforms

import models  # the repo's model factory

modelWeight = "/Users/xxx/Desktop/sement/segment-adapter/SAM-Adapter-PyTorch/pretrained/model_epoch_last.pth"
imageTransform = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
configPath = "/Users/xxx/Desktop/sement/segment-adapter/SAM-Adapter-PyTorch/configs/cod-sam-vit-b.yaml"
file1 = "/Users/xxx/Desktop/sement/segment-adapter/SAM-Adapter-PyTorch/camourflage_00103.jpg"
file2 = "/Users/xxx/Desktop/sement/segment-adapter/SAM-Adapter-PyTorch/camourflage_00001.jpg"
img1 = imageTransform(Image.open(file1).convert('RGB'))  # resize handled by imageTransform
img2 = imageTransform(Image.open(file2).convert('RGB'))
bs = torch.stack([img1, img2], dim=0)

with open(configPath, 'r') as f:
    config = yaml.load(f, Loader=yaml.FullLoader)
model = models.make(config['model'])
sam_checkpoint = torch.load(modelWeight, map_location=torch.device('cpu'))
model.load_state_dict(sam_checkpoint, strict=True)
model.eval()

with torch.no_grad():
    masks = model.infer(bs)
sig = torch.sigmoid(masks)
single = torch.squeeze(sig, dim=0)   # use the sigmoid output, not the raw logits
to_pil = transforms.ToPILImage()
mask_img = to_pil(single[1])         # mask for the second image in the batch
mask_img.show()

support for prompts

Hi,
Great work. Does your code support prompts, like the point prompts or box prompts in the original SAM?

Has anyone obtained the same results as in the paper? From the code it looks like the Adapter is not added to the training?

Is the Adapter not added to the training, or am I misunderstanding the code?

The model is initialized in:

def prepare_training():
    if config.get('resume') is not None:
        model = models.make(config['model']).cuda()
        optimizer = utils.make_optimizer(
            model.parameters(), config['optimizer'])
        epoch_start = config.get('resume') + 1
    else:
        model = models.make(config['model']).cuda()
        optimizer = utils.make_optimizer(
            model.parameters(), config['optimizer'])
        epoch_start = 1
    max_epoch = config.get('epoch_max')
    lr_scheduler = CosineAnnealingLR(optimizer, max_epoch, eta_min=config.get('lr_min'))
    if local_rank == 0:
        log('model: #params={}'.format(utils.compute_num_params(model, text=True)))
    return model, optimizer, epoch_start, lr_scheduler

In the config file, the code chooses 'sam', and the 'sam' model is defined in 'models/sam.py'.

In the forward of sam:

def forward(self):
    bs = 1
    # Embed prompts
    sparse_embeddings = torch.empty((bs, 0, self.prompt_embed_dim), device=self.input.device)
    dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand(
        bs, -1, self.image_embedding_size, self.image_embedding_size
    )
    self.features = self.image_encoder(self.input)
    # Predict masks
    low_res_masks, iou_predictions = self.mask_decoder(
        image_embeddings=self.features,
        image_pe=self.get_dense_pe(),
        sparse_prompt_embeddings=sparse_embeddings,
        dense_prompt_embeddings=dense_embeddings,
        multimask_output=False,
    )
    # Upscale the masks to the original image resolution
    masks = self.postprocess_masks(low_res_masks, self.inp_size, self.inp_size)
    self.pred_mask = masks

Use "self.image_encoder" to get image embedding, this sub-module is defined in 'ImageEncoderViT', it is implemented in ‘models/mmseg/models/sam/image_encoder.py’. In this file, I just can't find any implemented about adapter, only see one variable that is not being used ‘self.adapter’.
