MAN's Issues

Mult-Adds

Hi, where can I find the code you used to measure the Multi-Adds metric in the paper? I could not find it; could you provide it? When I tested the x2 model with an input of size 1x3x640x320, I ran out of GPU memory. My test code is given below.
```python
import time
import argparse
import warnings

import torch
from archs.MAN_arch import MAN
from torchprofile import profile_macs
from thop import profile

parser = argparse.ArgumentParser()
parser.add_argument('--model', default='MAN', type=str, help='model name')
parser.add_argument('--device', default='cuda', type=str, help='test device')
parser.add_argument('--profiler', action='store_true', default=False, help='use profiler')
args = parser.parse_args()

warnings.filterwarnings('ignore')

if __name__ == '__main__':
    test_w = [256]
    test_h = [256]
    test_iter = [1]
    test_epoch = 1

    network = eval(args.model)()
    network.to(args.device)
    network.eval()

    # MACs (Multi-Adds) and Params measured on a 1x3x128x128 input
    macs = profile_macs(network, torch.rand([1, 3, 128, 128]).to(args.device))
    macs_G = macs / (1024 ** 3)

    _, params = profile(network, inputs=(torch.rand([1, 3, 128, 128]).to(args.device),))
    params_M = params / (1024 ** 2)

    with torch.no_grad():
        for (w, h, it) in zip(test_w, test_h, test_iter):
            rand_img = torch.rand([1, 3, h, w]).to(args.device)
            trace_network = torch.jit.trace(network, [rand_img])

            if args.profiler:  # torch.profiler slows down the model
                with torch.profiler.profile(
                    activities=[
                        torch.profiler.ProfilerActivity.CPU,
                        torch.profiler.ProfilerActivity.CUDA,
                    ]
                ) as p:
                    for _ in range(it):
                        output = trace_network(rand_img)

                print(p.key_averages().table(
                    sort_by="self_cuda_time_total", row_limit=-1))

            fps_list = []
            for i in range(test_epoch):
                torch.cuda.synchronize()
                t1 = time.time()

                for _ in range(it):
                    output = trace_network(rand_img)

                torch.cuda.synchronize()
                t2 = time.time()

                fps = it / (t2 - t1)
                fps_list.append(fps)

            # median runtime over test_epoch repetitions
            fps_list = sorted(fps_list)
            avg_fps = fps_list[test_epoch // 2]

            print('Input Shape: {0:s}\nParams (M): {1:.3f}\nMACs (G): {2:.3f}\nRuntime (ms): {3:.2f}'
                  .format(str((1, 3, h, w)), params_M, macs_G, 1e3 / avg_fps))
```
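
A note on the out-of-memory problem above: the profiling call keeps autograd activations unless it is wrapped in torch.no_grad(). A minimal sketch (not the authors' script) that profiles on a smaller probe input and scales the count linearly to the target resolution, which is valid for fully convolutional models; the 640x320 target and 160x80 probe sizes are taken from the question and are only illustrative:

```python
import torch
from torchprofile import profile_macs

def estimate_macs(network, target_hw=(640, 320), probe_hw=(160, 80), device='cuda'):
    """Profile MACs on a small probe input to avoid running out of memory,
    then scale the count by the ratio of spatial sizes. The linear scaling
    assumes a fully convolutional network (cost proportional to H*W)."""
    network.eval()
    probe = torch.rand(1, 3, *probe_hw, device=device)
    with torch.no_grad():                    # no autograd graph -> much lower memory use
        macs = profile_macs(network, probe)
    scale = (target_hw[0] * target_hw[1]) / (probe_hw[0] * probe_hw[1])
    return macs * scale
```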

About the FLOPs calculation

Hello~
I would like to ask how the FLOPs reported in the paper are calculated.
Also, how should the x2/x3 pretraining notation for SwinIR be understood?

About the Mult-Adds and Params calculation

Hi, could you provide the code you used to compute the network's Mult-Adds and Params? I tested with code I found online: the Params match the paper, but the Mult-Adds do not.
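
Not the authors' script, but a rough Multi-Adds/Params counter built on forward hooks can help narrow down where a mismatch comes from; it counts only Conv2d and Linear layers, which dominate CNN-based SR models, and ignores everything else. The 1x3x640x360 input below is an assumption (many SR papers report Multi-Adds for a 1280x720 output, which is a 640x360 input at x2), so check the resolution used in the paper:

```python
import torch
import torch.nn as nn

def count_macs_and_params(model, input_size=(1, 3, 640, 360)):
    """Rough Multi-Adds/Params counter (hypothetical helper, not the authors'
    script). Only Conv2d and Linear layers are counted; other ops are ignored."""
    macs = []

    def hook(module, inputs, output):
        if isinstance(module, nn.Conv2d):
            # k*k * (Cin/groups) * Cout * Hout * Wout multiply-adds per conv
            kh, kw = module.kernel_size
            cin = module.in_channels // module.groups
            cout, hout, wout = output.shape[1:]
            macs.append(kh * kw * cin * cout * hout * wout)
        elif isinstance(module, nn.Linear):
            # in_features multiply-adds per output element
            macs.append(module.in_features * output.numel())

    handles = [m.register_forward_hook(hook) for m in model.modules()]
    device = next(model.parameters()).device
    with torch.no_grad():
        model(torch.rand(*input_size, device=device))
    for h in handles:
        h.remove()

    params = sum(p.numel() for p in model.parameters())
    return sum(macs), params
```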

Why are the PSNR and SSIM lower than those reported in the paper? I used the pretrained model and the BasicSR metrics (PSNR, SSIM) for testing.

MAN x4 Set5/Set14 PSNR/SSIM results are below:

2022-11-30 17:30:34,111 INFO: Model [SRModel] is created.
2022-11-30 17:30:34,111 INFO: Testing Set5...
2022-11-30 17:30:37,142 INFO: Validation Set5
# psnr: 30.8720 Best: 30.8720 @ MAN_SR iter
# ssim: 0.8750 Best: 0.8750 @ MAN_SR iter

2022-11-30 17:30:37,142 INFO: Testing Set14...
2022-11-30 17:30:43,182 INFO: Validation Set14
# psnr: 27.2578 Best: 27.2578 @ MAN_SR iter
# ssim: 0.7576 Best: 0.7576 @ MAN_SR iter

Set5:  PSNR 30.87 < 32.81 (paper), SSIM 87.50 < 90.24 (paper, ×100)
Set14: PSNR 27.25 < 29.07 (paper), SSIM 75.76 < 78.34 (paper, ×100)
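
One common cause of a gap of this size (an assumption here, not a confirmed diagnosis): paper-style SR numbers are usually computed on the Y channel of YCbCr with a `scale`-pixel border cropped, while an RGB evaluation without that setting gives noticeably lower values. A sketch using BasicSR's metric functions, with hypothetical file paths:

```python
import cv2
from basicsr.metrics import calculate_psnr, calculate_ssim

scale = 4
sr = cv2.imread('results/MAN/Set5/baby_MAN.png')      # hypothetical SR output path
gt = cv2.imread('datasets/Set5/GTmod12/baby.png')     # hypothetical ground-truth path

# Paper-style evaluation: crop a `scale`-pixel border and evaluate on the Y channel.
psnr = calculate_psnr(sr, gt, crop_border=scale, test_y_channel=True)
ssim = calculate_ssim(sr, gt, crop_border=scale, test_y_channel=True)
print(f'PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}')
```

If the repository follows the standard BasicSR config layout, the same effect is obtained by setting crop_border and test_y_channel: true under the metrics entries of the test yml.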

Question about sub-pixel convolution upsampling

Hello author!
I would like to ask about the upsampling in the MAN code: the feature dimension is first reduced from 180 to 3*scale^2 channels before the sub-pixel convolution, as shown in the image below. Could this reduction hurt the quality of the sub-pixel upsampling?
[image: upsampling code]
Looking forward to your reply!
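
For reference, a minimal sketch of the pixel-shuffle tail the question describes (the 180-channel width comes from the question; this is not the repository code verbatim). Mapping the features directly to 3*scale^2 channels and then applying PixelShuffle is the standard ESPCN/EDSR-style tail, so the reduction itself is the usual design; the reconstruction is learned by that final convolution.

```python
import torch
import torch.nn as nn

scale = 4
upsampler = nn.Sequential(
    nn.Conv2d(180, 3 * scale ** 2, kernel_size=3, padding=1),  # 180 -> 3*scale^2 channels
    nn.PixelShuffle(scale),                                    # rearrange to 3 x (H*scale) x (W*scale)
)

x = torch.rand(1, 180, 64, 64)
print(upsampler(x).shape)   # torch.Size([1, 3, 256, 256])
```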

Visualize activation maps

Thank you for your amazing project.
Can you share the code you used to draw Fig. 4 (the visual activation maps)?

Thank you in advance.
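
The authors' Fig. 4 script is not linked in this thread; a common way to produce such maps is a forward hook that stores a block's output and plots its channel-mean as a heatmap. In the sketch below, `network`, `lr_img`, and the module name 'body.0' are placeholders, not names from the repository:

```python
import torch
import matplotlib.pyplot as plt

feats = {}

def save_activation(name):
    def hook(module, inputs, output):
        feats[name] = output.detach()
    return hook

# Attach the hook to whichever block should be visualized (module path is an assumption).
target = dict(network.named_modules())['body.0']
target.register_forward_hook(save_activation('body.0'))

with torch.no_grad():
    network(lr_img)          # lr_img: a 1x3xHxW input tensor

# Average the absolute activations over channels and save the map as a heatmap.
amap = feats['body.0'].abs().mean(dim=1)[0].cpu().numpy()
plt.imshow(amap, cmap='jet')
plt.axis('off')
plt.savefig('activation_map.png', bbox_inches='tight')
```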

Training file setup issues

Hi, I'd like to start by asking some questions about the training file settings.
First, I would like to ask whether the number of iterations in the scheduler settings is correct: it is 1600K rather than 160K.
Also, I found that the scheduler setting uses MultiStepLR instead of CosineAnnealingRestartLR as the paper states; which setting should I use?
Looking forward to your reply!!!
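
For what it is worth, the two schedules being compared look like this in plain PyTorch terms (the milestone and period values are placeholders, not the repository's settings; BasicSR's CosineAnnealingRestartLR is essentially CosineAnnealingLR with optional restarts and per-period weights):

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR, CosineAnnealingLR

params = [torch.zeros(1, requires_grad=True)]

# Step decay: halve the learning rate at fixed iteration milestones (placeholder milestones).
opt_step = Adam(params, lr=5e-4)
step_sched = MultiStepLR(opt_step, milestones=[800_000, 1_200_000, 1_400_000], gamma=0.5)

# Cosine annealing: decay smoothly from 5e-4 to eta_min over one period (placeholder period).
opt_cos = Adam(params, lr=5e-4)
cos_sched = CosineAnnealingLR(opt_cos, T_max=1_600_000, eta_min=1e-7)
```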


Multi-GPU training question

Hello, I have another question, this time about multi-GPU training. I previously trained MAN-light on a single RTX 3090 and noticed it did not use much GPU memory, so I have now switched to two RTX 3080s to speed up training. For multi-GPU training I only changed num_gpu in train_MAN.yml. Compared with single-GPU training, do I also need to change the total number of training iterations? For example, if the total was 1600K iterations on one GPU, should it be halved now that I am using two GPUs? The terminal output during training is shown below.
[image: terminal output during training]

MAB question

Hello, I have a few questions: 1. What does the RCAN-style block in your paper look like? 2. Why is it necessary to pass through LayerNorm in the MAB block, and what is its main function? And why LayerNorm in particular?
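
Regarding the LayerNorm question, the usual reasoning (a general observation, not the authors' statement) is the pre-norm residual pattern of transformer-style blocks: each sub-block sees a normalized input, which keeps feature statistics stable as blocks are stacked, while the residual path preserves the original signal. A generic sketch of that pattern for NCHW feature maps, not the exact MAB code:

```python
import torch
import torch.nn as nn

class PreNormResidual(nn.Module):
    """Generic pre-norm residual wrapper (illustrative only)."""
    def __init__(self, dim, block):
        super().__init__()
        self.norm = nn.GroupNorm(1, dim)   # acts as a channel-wise LayerNorm for NCHW maps
        self.block = block

    def forward(self, x):
        return x + self.block(self.norm(x))

block = PreNormResidual(64, nn.Conv2d(64, 64, 3, padding=1))
print(block(torch.rand(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```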

About reproducing MAN-light results below those in the paper

Hi, I recently trained MAN-light with the settings from the paper, but I did not reach the reported metrics. The metrics on each test set after my training are shown below:
[image: test-set metrics]
Later, I re-tested with the test sets you provide and the results improved slightly, but they still did not reach the paper's numbers. Could the problem be my training set? It was not downloaded directly from the official site. I also want to follow up on a question I asked before: the training images are first cropped into small patches before being fed to the network. What 'crop_size' and 'step' did you use when cropping the LR images, and what 'crop_size' and 'step' for the HR images? How many patches are extracted in total from the 3450 DF2K images?
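
On the patch question, here is a minimal sketch of the sliding-window sub-image extraction that BasicSR-style preprocessing scripts perform. The crop_size/step values in the example are placeholders, not the authors' settings; the usual convention is that the HR values are scale times the LR ones.

```python
import numpy as np

def extract_subimages(img, crop_size, step):
    """Slide a crop_size window over the image with the given step and return the
    patches; the bottom/right borders get one extra (shifted) window so the whole
    image is covered. Assumes the image is at least crop_size in each dimension."""
    h, w = img.shape[:2]
    ys = list(range(0, h - crop_size + 1, step))
    xs = list(range(0, w - crop_size + 1, step))
    if ys[-1] != h - crop_size:
        ys.append(h - crop_size)
    if xs[-1] != w - crop_size:
        xs.append(w - crop_size)
    return [img[y:y + crop_size, x:x + crop_size] for y in ys for x in xs]

# Placeholder values: for an x2 model the HR crop_size/step are typically
# twice the LR ones, e.g. LR (240, 120) and HR (480, 240).
print(len(extract_subimages(np.zeros((480, 480, 3), np.uint8), 480, 240)))   # 1
print(len(extract_subimages(np.zeros((480, 480, 3), np.uint8), 240, 120)))   # 9
```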

Question about the learning-rate schedule in the fine-tuning stage

Hi, I see that the paper uses a cosine annealing learning-rate schedule with an initial learning rate of 5e-4, but for the fine-tuning stage it only gives an initial learning rate of 1e-4 and does not specify the schedule. Is cosine annealing still used there? In train_MAN.yml, however, the scheduler type is MultiStepLR. Figure 1 is a screenshot from the paper and Figure 2 shows the corresponding part of train_MAN.yml.

[Figure 1: screenshot from the paper]

[Figure 2: the corresponding part of train_MAN.yml]

Test results deviate slightly from the paper's metrics

When testing MAN-light with the provided pretrained model, I get 38.16 on Set5 2×, while the paper reports 38.18. Is this a test-set issue? Locally I use GTmod12, while you appear to use GTmod2. What is the difference between them, or is there some other cause of the inconsistency?

The provided test sets lack x3 data for B100 and Urban100

Hello, the test sets you provide do not include x3 data for B100 and Urban100. Where can I find them? I actually have x3 data for these two datasets myself, but I am worried that using my own data would give results that differ from those in your paper, so could you share the x3 data for these two test sets?
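
If the official x3 LR images cannot be obtained, the conventional fallback is to regenerate them from the HR images by mod-cropping to a multiple of 3 and bicubic downsampling. A sketch; note that OpenCV's bicubic kernel is not identical to MATLAB's imresize, which most SR benchmarks use, so small metric differences are possible with data generated this way:

```python
import cv2

def make_lr(hr_path, lr_path, scale=3):
    """Generate a bicubic LR image from an HR image (illustrative sketch)."""
    hr = cv2.imread(hr_path)
    h, w = hr.shape[:2]
    hr = hr[:h - h % scale, :w - w % scale]      # mod-crop to a multiple of scale
    lr = cv2.resize(hr, (hr.shape[1] // scale, hr.shape[0] // scale),
                    interpolation=cv2.INTER_CUBIC)
    cv2.imwrite(lr_path, lr)
```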

PSNR and SSIM Problem

I have a question about calculating PSNR and SSIM.
As you know, the sizes of some images in the Set14, B100, and Manga109 datasets are not exact multiples of 2, 3, 4, and 8. For this reason, when we downscale them to create low-resolution images and then super-resolve them, the image dimensions change.
For example, suppose the HR image is 100x100 and the LR (÷3) image is 33x33; the SR image is then 99x99, so we would have to compute PSNR and SSIM between an HR image of 100x100 and an SR image of 99x99, which have different sizes, and the metrics cannot be computed between them.
Could you provide some guidance on this?
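
The standard convention for exactly this situation is to mod-crop the HR image to a multiple of the scale before generating the LR image and before computing metrics (this is what GT folder names such as GTmod12 refer to). A minimal sketch:

```python
def mod_crop(img, scale):
    """Crop an HWC image so its height and width are exact multiples of `scale`.
    E.g. a 100x100 HR image becomes 99x99 for scale 3, matching the 33x33 LR
    input and the 99x99 SR output, so PSNR/SSIM can be computed directly."""
    h, w = img.shape[:2]
    return img[:h - h % scale, :w - w % scale, ...]
```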

Self-ensemble question

Hello, I would like to ask about the self-ensemble strategy: results can be further improved with self-ensemble (e.g., MAN vs. MAN+ with self-ensemble applied). How should the code be configured to train or test a model with this self-ensemble strategy?
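
For reference, the '+' suffix in SR papers usually denotes test-time geometric self-ensemble (introduced in EDSR): the network is applied to the eight flip/rotation variants of the input and the de-transformed outputs are averaged, while training is unchanged. A minimal sketch, not the repository's implementation:

```python
import torch

def self_ensemble(model, lr):
    """x8 geometric self-ensemble: average the model outputs over the 8
    flip/rotation variants of the input (test time only)."""
    outputs = []
    for rot in range(4):                      # 0/90/180/270 degree rotations
        for flip in (False, True):            # with and without horizontal flip
            x = torch.rot90(lr, rot, dims=(-2, -1))
            if flip:
                x = torch.flip(x, dims=(-1,))
            with torch.no_grad():
                y = model(x)
            if flip:                          # undo the transforms on the output
                y = torch.flip(y, dims=(-1,))
            y = torch.rot90(y, -rot, dims=(-2, -1))
            outputs.append(y)
    return torch.stack(outputs).mean(dim=0)
```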

Questions about PWConv

First of all, thank you for your outstanding work! I have some questions about the modules:
I find that in GroupGLKA and SGAB in the code, the PWConvs of the two paths are not two separate convolutions; instead, a single convolution with 2x the channels is used and its output is split with torch.chunk. Is this inconsistent with the module schematic given in the paper?
Looking forward to your reply!
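
On the torch.chunk question, a small check (illustrative, not the repository code) shows why a single point-wise convolution with twice the output channels followed by torch.chunk is functionally the same as two separate PWConvs applied to the same input: each output channel owns its own weights, so chunking merely partitions them.

```python
import torch
import torch.nn as nn

C = 64
x = torch.rand(1, C, 32, 32)

fused = nn.Conv2d(C, 2 * C, kernel_size=1)
a, b = torch.chunk(fused(x), 2, dim=1)

conv_a = nn.Conv2d(C, C, kernel_size=1)
conv_b = nn.Conv2d(C, C, kernel_size=1)
# Copy the fused weights into two separate convs to demonstrate the equivalence.
conv_a.weight.data, conv_a.bias.data = fused.weight.data[:C], fused.bias.data[:C]
conv_b.weight.data, conv_b.bias.data = fused.weight.data[C:], fused.bias.data[C:]

print(torch.allclose(a, conv_a(x)), torch.allclose(b, conv_b(x)))   # True True
```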

about training

Hi! I'm interested in your work, and I would like to know how much GPU memory is required to train the three different versions of the model.
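
One way to measure this yourself is to record the peak allocation of a single training step; a sketch under assumed settings (the batch size 32, 64x64 LR patch, and L1 loss below are placeholders, not the paper's configuration, and the model is assumed to upscale by `scale`):

```python
import torch
import torch.nn.functional as F

def peak_training_memory(model, batch=32, patch=64, scale=4, device='cuda'):
    """Run one forward/backward/optimizer step on random data and report the
    peak GPU memory allocated, in GiB (rough estimate only)."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    lr = torch.rand(batch, 3, patch, patch, device=device)
    hr = torch.rand(batch, 3, patch * scale, patch * scale, device=device)

    torch.cuda.reset_peak_memory_stats(device)
    loss = F.l1_loss(model(lr), hr)
    loss.backward()
    optimizer.step()
    return torch.cuda.max_memory_allocated(device) / 1024 ** 3
```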

Post-training models

First of all, thank you for your outstanding work! I would like to ask whether the trained (post-training) model files for MAN are available.
Looking forward to your reply!!!

Train and Test

Can we train and test other SR models, such as SAN and RCAN, with your code?
If possible, please provide details.
Thanks for your response.
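
In standard BasicSR-based repositories (which this one appears to follow, given the train_MAN.yml files discussed above), an architecture becomes trainable and testable once it is registered: place it in a `*_arch.py` file under the archs folder so it is auto-imported, decorate it, and select it from the yml via `network_g: type: <ClassName>`. A hypothetical sketch, not an official recipe for SAN or RCAN:

```python
import torch.nn as nn
from basicsr.utils.registry import ARCH_REGISTRY

@ARCH_REGISTRY.register()
class MySRNet(nn.Module):
    """Placeholder architecture: any network registered this way can be chosen
    from the training/test yml via `network_g: type: MySRNet`."""
    def __init__(self, num_feat=64, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, num_feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(num_feat, 3 * scale ** 2, 3, padding=1), nn.PixelShuffle(scale))

    def forward(self, x):
        return self.body(x)
```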
