
acmix's People

Contributors

leaplabthu · panxuran


acmix's Issues

Pre-Trained model

Hello, and thank you for your excellent work. I would like to use your ACmix model to train on my own data, but training from scratch requires a lot of compute, and I cannot find a corresponding model for MindSpore. Could you provide a model pre-trained on ImageNet? Thank you!

I am using configs/acmix_swin_tiny_patch4_window7_224.yaml

pre-train

When will you release the pre-trained ResNet model?

RuntimeError: Input type (float) and bias type (c10::Half) should be the same

When combining ACmix with YOLOv7, the following error occurs:

```
Traceback (most recent call last):
  File "/home/liu/桌面/zwx/YOLOv7-main/train.py", line 613, in <module>
    train(hyp, opt, device, tb_writer)
  File "/home/liu/桌面/zwx/YOLOv7-main/train.py", line 415, in train
    results, maps, times = test.test(data_dict,
  File "/home/liu/桌面/zwx/YOLOv7-main/test.py", line 110, in test
    out, train_out = model(img, augment=augment)  # inference and training outputs
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liu/桌面/zwx/YOLOv7-main/models/yolo.py", line 320, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/liu/桌面/zwx/YOLOv7-main/models/yolo.py", line 346, in forward_once
    x = m(x)  # run
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liu/桌面/zwx/YOLOv7-main/models/common.py", line 530, in forward
    pe = self.conv_p(position(h, w, x.is_cuda))
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
```
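
A likely cause, based only on my reading of the traceback and not confirmed by the authors: YOLOv7's test pass runs the model in half precision, while the coordinate grid returned by `position(h, w, x.is_cuda)` is created in float32, so `self.conv_p` (whose weights and bias have been cast to half) receives a float input. A minimal sketch of the mismatch and a dtype cast that avoids it; the channel count 2 and spatial size 20×20 are placeholder values, and a CUDA device is assumed:

```python
import torch
import torch.nn as nn

# Reproduce the mismatch: the conv was cast to half, the positional grid stays float32.
conv_p = nn.Conv2d(2, 16, kernel_size=1).cuda().half()   # weights and bias are c10::Half
pos = torch.rand(1, 2, 20, 20, device="cuda")            # float32, like position(h, w, True)

# conv_p(pos)  # -> RuntimeError: Input type (float) and bias type (c10::Half) should be the same

# Fix: cast the positional encoding to the module's dtype before the 1x1 convolution.
pe = conv_p(pos.to(next(conv_p.parameters()).dtype))
print(pe.dtype)  # torch.float16
```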

Object detection

Could the authors release the code for the object detection implementation?

About your paper

Hi!

Could you please share which tool was used to create Figure 1 in your paper?

Thank you,

Is this a typo?

ResNet/test_bottleneck.py
line 101
original:
f_conv = f_all.permute(0, 2, 1, 3).reshape(x.shape[0], -1, x.shape[-1], x.shape[-1])
but I think it should be:
f_conv = f_all.permute(0, 2, 1, 3).reshape(x.shape[0], -1, x.shape[-2], x.shape[-1])
to preserve the input height and width.
I am not sure whether this is correct; looking forward to your reply. Thanks.
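
For what it is worth, a small standalone check (not from the repository; the tensor shapes below are stand-ins) showing that the two variants only differ for non-square inputs: with H ≠ W, reshaping to `(B, -1, W, W)` no longer matches the element count, while `(B, -1, H, W)` preserves the spatial layout.

```python
import torch

B, heads, C, H, W = 2, 4, 8, 14, 10          # non-square input, H != W
x = torch.randn(B, C, H, W)                   # stand-in for the block input
f_all = torch.randn(B, heads, C, H * W)       # stand-in for the flattened per-head features

# Proposed fix: recover (B, heads*C, H, W) using both spatial dims of x.
f_conv = f_all.permute(0, 2, 1, 3).reshape(x.shape[0], -1, x.shape[-2], x.shape[-1])
print(f_conv.shape)                           # torch.Size([2, 32, 14, 10])

# Original line: reshape(..., x.shape[-1], x.shape[-1]) assumes H == W and
# raises a RuntimeError here, since heads*C*H*W is not divisible by W*W when H != W.
```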

Question

Why does the projection part use three repeated 1×1 operations? I don't quite follow the convolution step; could you explain it? Thank you.
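
For anyone with the same question, here is a rough sketch of the shared-projection idea as I understand it from the paper; it is my own paraphrase, not the official implementation, and the module and layer names below are made up. The three 1×1 convolutions produce the query, key, and value maps for the self-attention branch, and the same projected features are reused by the convolution branch through a light fully connected (1×1) layer followed by shift-and-sum aggregation, which is how the two branches share most of the projection cost.

```python
import torch
import torch.nn as nn

class SharedProjectionSketch(nn.Module):
    """Hypothetical sketch of ACmix's shared 1x1 projections (paraphrased from the paper)."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Three 1x1 convs: q/k/v for the attention branch.
        self.q = nn.Conv2d(in_ch, out_ch, 1)
        self.k = nn.Conv2d(in_ch, out_ch, 1)
        self.v = nn.Conv2d(in_ch, out_ch, 1)
        # Convolution branch: a light fully connected (1x1) layer maps the same
        # projected features to k*k groups that are later shifted and summed.
        self.fc = nn.Conv2d(3 * out_ch, kernel_size * kernel_size * out_ch, 1)

    def forward(self, x):
        q, k, v = self.q(x), self.k(x), self.v(x)
        conv_feats = self.fc(torch.cat([q, k, v], dim=1))   # projections reused, not recomputed
        return (q, k, v), conv_feats                         # attention inputs + conv-branch features

feats = torch.randn(1, 16, 32, 32)
(q, k, v), conv_feats = SharedProjectionSketch(16, 16)(feats)
print(q.shape, conv_feats.shape)  # torch.Size([1, 16, 32, 32]) torch.Size([1, 144, 32, 32])
```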

Based on ResNet

Hello, I would like to ask whether the ResNet-based code will be made public.

Why has the parameter count increased several-fold instead of decreasing?

I tested nn.Conv2d(16, 64, 1) with an input of size (1, 16, 224, 224): it has only 1088 parameters, but the ACmix version has 8604, almost 8× as many. Yet the paper says it has "minimal computational overhead compared with pure convolution or self-attention", which does not seem to hold here. What is going on?
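
For reference, a quick way to count parameters in PyTorch; the 1088 for the plain 1×1 convolution is simply 16·64 weights plus 64 biases, while the 8604 figure for ACmix is taken from this issue and not reproduced here, since it depends on that module's configuration:

```python
import torch.nn as nn

def count_params(module: nn.Module) -> int:
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in module.parameters())

conv = nn.Conv2d(16, 64, kernel_size=1)
print(count_params(conv))  # 1088 = 16*64 weights + 64 biases

# The same helper can be applied to the ACmix block, e.g.
#   from test_bottleneck import ACmix      # hypothetical import path
#   print(count_params(ACmix(16, 64)))     # reported as 8604 in this issue
# The extra parameters would come from the additional q/k/v projections,
# the positional-encoding conv, and the conv-branch fully connected layer.
```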
